Dataset fields (for string fields, the two numbers give the minimum and maximum string length observed in the split):

    gem_id              string (37 to 41 chars)
    paper_id            string (3 to 4 chars)
    paper_title         string (19 to 183 chars)
    paper_abstract      string (168 to 1.38k chars)
    paper_content       dict
    paper_headers       dict
    slide_id            string (37 to 41 chars)
    slide_title         string (2 to 85 chars)
    slide_content_text  string (11 to 2.55k chars)
    target              string (11 to 2.55k chars)
    references          list
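To make the schema concrete, here is a minimal sketch of loading and inspecting records with these fields. The Hub path "GEM/SciDuet" and the split name are assumptions, not confirmed by this page; substitute whatever path the dataset is actually published under.

```python
# Minimal sketch, assuming the dataset is hosted on the Hugging Face Hub
# under "GEM/SciDuet" (hypothetical path -- adjust to the real one).
from datasets import load_dataset

ds = load_dataset("GEM/SciDuet", split="train")

for record in ds.select(range(3)):
    # Each record pairs a paper (title, abstract, full content) with one slide.
    print(record["gem_id"])
    print("  slide title:", record["slide_title"])
    # "target" is the slide text a generation model should produce.
    print("  target:", record["target"][:80], "...")
```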
gem_id: GEM-SciDuet-train-67#paper-1143#slide-7
paper_id: 1143
paper_title: Neural Reranking Improves Subjective Quality of Machine Translation: NAIST at WAT2015
paper_abstract: This year, the Nara Institute of Science and Technology (NAIST)'s submission to the 2015 Workshop on Asian Translation was based on syntax-based statistical machine translation, with the addition of a reranking component using neural attentional machine translation models. Experiments re-confirmed results from previous work stating that neural MT reranking provides a large gain in objective evaluation measures such as BLEU, and also confirmed for the first time that these results also carry over to manual evaluation. We further perform a detailed analysis of reasons for this increase, finding that the main contributions of the neural models lie in improvement of the grammatical correctness of the output, as opposed to improvements in lexical choice of content words.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87 ], "paper_content_text": [ "Introduction Neural network models for machine translation (MT) (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2015) , while still in a nascent stage, have shown impressive results in a number of translation tasks.", "Specifically, a number of works have demonstrated gains in BLEU score (Papineni et al., 2002) over state-of-the-art non-neural systems, both when using the neural MT model standalone (Luong et al., 2015a; Jean et al., 2015; Luong et al., 2015b) , or to rerank the output of more traditional systems phrase-based MT systems (Sutskever et al., 2014) .", "However, despite these impressive results with regards to automatic measures of translation quality, there has been little examination of the effect that these gains have on the subjective impressions of human users.", "Because BLEU generally has some correlation with translation quality, 1 it is fair to hypothesize that these gains will carry over to gains in human evaluation, but empirical evidence for this hypothesis is still scarce.", "In this paper, we attempt to close this gap by examining the gains provided by using neural MT models to rerank the hypotheses a state-of-the-art non-neural MT system, both from the objective and subjective perspectives.", "Specifically, as part of the Nara Institute of Science and Technology (NAIST) submission to the Workshop on Asian Translation (WAT) 2015 (Nakazawa et al., 2015) , we generate reranked and non-reranked translation results in four language pairs (Section 2).", "Based on these translation results, we calculate scores according to automatic evaluation measures BLEU and RIBES (Isozaki et al., 2010) , and a manual evaluation that involves comparing hypotheses to a baseline system (Section 3).", "Next, we perform a detailed analysis of the cases in which subjective impressions improved or degraded due to neural MT reranking, and identify major areas in which neural reranking improves results, and areas in which reranking is less helpful (Section 4).", "Finally, as an auxiliary result, we also examine the effect that the size of the n-best list used in reranking has on the improvement of translation results (Section 5).", "Generation of Translation Results Baseline System All experiments are performed on WAT2015 translation task from Japanese (ja) to/from English (en) and Chinese (zh).", "As a baseline, we used the NAIST system for WAT 2014 (Neubig, 2014) , a state-of-the-art system that achieved the highest accuracy on all four tracks in the last year's eval-uation.", "2 The details of construction are described in Neubig (2014) , but we briefly outline it here for completeness.", "The system is based on the Travatar toolkit (Neubig, 2013) , using tree-to-string statistical MT (Graehl and Knight, 2004; Liu et al., 2006) , in which the source is first syntactically parsed, then subtrees of the input parse are converted into strings on the target side.", "This translation paradigm has proven effective for translation between syntactically distant language pairs such as those handled by the WAT tasks.", "In addition, following our findings in Neubig and Duh (2014) , to 
improve the accuracy of translation we use forestbased encoding of many parse candidates (Mi et al., 2008) , and a supervised alignment technique for ja-en and en-ja (Riesa and Marcu, 2010) .", "To train the systems, we used the ASPEC corpus provided by WAT.", "For the zh-ja and ja-zh systems, we used all of the data, amounting to 672k sentences.", "For the en-ja and ja-en systems, we used all 3M sentences for training the language models, and the first 2M sentences of the training data for training the translation models.", "For English, Japanese, and Chinese, tokenization was performed using the Stanford Parser (Klein and Manning, 2003) , the KyTea toolkit (Neubig et al., 2011) , and the Stanford Segmenter (Tseng et al., 2005) respectively.", "For parsing, we use the Egret parser, 3 which implements the latent variable parsing model of (Petrov et al., 2006) .", "4 For all systems, we trained a 6-gram language model smoothed with modified Kneser-Ney smoothing (Chen and Goodman, 1996) using KenLM (Heafield et al., 2013) .", "To optimize the parameters of the log-linear model, we use standard minimum error rate training (MERT; Och (2003) ) with BLEU as an objective.", "Neural MT Models As our neural MT model, we use the attentional model of Bahdanau et al.", "(2015) .", "The model first encodes the source sentence f using bidirectional long short-term memory (LSTM; Hochreiter and Schmidhuber (1997) ) recurrent networks.", "This results in an encoding vector h j for each word f j in f .", "The model then proceeds to generate the target translationê one word at a time, at each time step calculating soft alignments a i that are used to generate a context vector g i , which is referenced when generating the target word g i = |f | ∑ j=1 a i,j h j .", "(1) Attentional models have a number of appealing properties, such as being theoretically able to encode variable length sequences without worrying about memory constraints imposed by the fixed-size vectors used in encoder-decoder models.", "These advantages are confirmed in empirical results, with attentional models performing markedly better on longer sequences (Bahdanau et al., 2015) .", "To train the neural MT models, we used the implementation provided by the lamtram toolkit.", "5 The forward and reverse LSTM models each had 256 nodes, and word embeddings were also set to size 256.", "For ja-en and en-ja models we chose the first 500k sentences in the training corpus, and for ja-zh and zh-ja models we used all 672k sentences.", "Training was performed using stochastic gradient descent (SGD) with an initial learning rate of 0.1, which was halved every epoch in which the development likelihood decreased.", "For each language pair, we trained two models and ensembled the probabilities by linearly interpolating between the two probability distributions.", "6 These probabilities were used to rerank unique 1,000-best lists from the baseline model.", "To perform reranking, the log likelihood of the neural MT model was added as an additional feature to the standard baseline model features, and the weight of this feature was decided by running MERT on the dev set.", "Experimental Results First, we calculate overall numerical results for our systems with and without the neural MT reranking model.", "As automatic evaluation we use the standard BLEU (Papineni et al., 2002) and reorderingoriented RIBES (Isozaki et al., 2010) Table 1 : Overall BLEU, RIBES, and HUMAN scores for our baseline system and system with neural MT reranking.", "Bold indicates a 
significant improvement according to bootstrap resampling at p < 0.05 (Koehn, 2004) .", "manual evaluation, we use the WAT \"HUMAN\" evaluation score (Nakazawa et al., 2015) , which is essentially related to the number of wins over a baseline phrase-based system.", "In the case that the system beats the baseline on all sentences, the HUMAN score will be 100, and if it loses on all sentences the score will be -100.", "From the results in Table 1 , we can first see that adding the neural MT reranking resulted in a significant increase in the evaluation scores for all language pairs under consideration, except for the manual evaluation in ja-zh translation.", "7 It should be noted that these gains are achieved even though the original baseline was already quite strong (outperforming most other WAT2015 systems without a neural component).", "While neural MT reranking has been noted to improve traditional systems with respect to BLEU score in previous work (Sutskever et al., 2014) , to our knowledge this is the first work that notes that these gains also carry over convincingly to human evaluation scores.", "In the following section, we will examine the results in more detail and attempt to explain exactly what is causing this increase in translation quality.", "Analysis To perform a deeper analysis, we manually examined the first 200 sentences of the ja-en part of the official WAT2015 human evaluation set.", "Specifically, we (1) compared the baseline and reranked outputs, and decided whether one was better or if they were of the same quality and (2) in the case that one of the two was better, classified the example by the type of error that was fixed or caused by the reranking leading to this change in subjective impression.", "Specifically, when annotating the type of error, we used a simplified version of 7 The overall scores for ja-zh are lower than others, perhaps a result of word-order between Japanese and Chinese being more similar than Japanese and English, the parser for Japanese being weaker than that of the other languages, and less consistent evaluation scores for the Chinese output (Nakazawa et al., 2014 the error typology of Vilar et al.", "(2006) consisting of insertion, deletion, word conjugation, word substitution, and reordering, as well as subcategories of each of these categories (the number of sub-categories totalled approximately 40).", "If there was more than one change in the sentence, only the change that we subjectively felt had the largest effect on the translation quality was annotated.", "The number of improvements and degradations afforded by neural MT reranking is shown in Table 2.", "From this figure, we can see that overall, neural reranking caused an improvement in 117 sentences, and a degradation in 33 sentences, corroborating the fact that the reranking process is giving consistent improvements in accuracy.", "Further breaking down the changes, we can see that improvements in word reordering are by far the most prominent, slightly less than three times the number of improvements in the next most common category.", "This demonstrates that the neural MT model is successfully capturing the overall structure of the sentence, and effectively disambiguating reorderings that could not be appropriately scored in the baseline model.", "Next in Table 3 we show examples of the four most common sub-categories of errors that were fixed by the neural MT reranker, and note the total number of improvements and degradations of each.", "The first subcategory is related to the 
general reordering of phrases in the sentence.", "As there Table 3 : An example of more common varieties of improvements caused by the neural MT reranking.", "is a large amount of reordering involved in translating from Japanese to English, mistaken longdistance reordering is one of the more common causes for errors, and the neural MT model was effective at fixing these problems, resulting in 26 improvements and only 4 degradations.", "In the sentence shown in the example, the baseline system swaps the verb phrase and subject positions, making it difficult to tell that the list of conditions are what \"occurred,\" while the reranked system appropriately puts this list as the subject of \"occurred.\"", "The second subcategory includes insertions or deletions of auxiliary verbs, for which there were 15 improvements and not a single degradation.", "The reason why these errors occurred in the first place is that when a transitive verb, for example \"obtained,\" occurs on its own, it is often translated as \"X was obtained by Y,\" 8 but when it occurs as a relative clause decorating the noun X it will be translated as \"X obtained by Y,\" as shown in the example.", "The baseline system does not include any explicit features to make this distinction between whether a verb is part of a relative clause or not, and thus made a number of mistakes of this variety.", "However, it is evident that the neural MT model has learned to make this distinction, greatly reducing the number of these errors.", "The third subcategory is similar to the first, but explicitly involves the correct interpretation of co-ordinate structures.", "It is well known that syntactic parsers often make mistakes in their interpretation of coordinate structures (Kummerfeld et al., 2012) .", "Of course, the parser used in our syntaxbased MT system is no exception to this rule, and parse errors often cause coordinate phrases to be broken apart on the target side, as is the case in the example's \"local heating and ablation.\"", "The fact that the neural MT models were able to correct a large number of errors related to these structures suggests that they are able to successfully determine whether two phrases are coordinated or not, and keep them together on the target side.", "The final sub-category of the top four is related to verb conjugation agreement.", "Many of the examples related to verb conjugation, including the one shown in Table 3 , were related to when two singular nouns were connected by a conjunction.", "In this case, the local context provided by a standard n-gram language model is not enough to resolve the ambiguity, but the longer context handled by the neural MT model is able to resolve this easily.", "What is notable about these four categories is that they all are related to improving the correctness of the output from a grammatical point of view, as opposed to fixing mistakes in lexical choice or terminology.", "In fact, neural MT reranking had an overall negative effect on choice of terminology with only 2 improvements at the cost of 4 degradations.", "This was due to the fact that the neural MT model tended to prefer more com- mon words, mistaking \"radiant heat\" as \"radiation heat\" or \"slipring\" as \"ring.\"", "While these tendencies will be affected by many factors such as the size of the vocabulary or the number and size of hidden layers of the net, we feel it is safe to say that neural MT reranking can be expected to have a large positive effect on syntactic correctness of output, while results for 
lexical choice are less conclusive.", "Effect of n-best Size on Reranking In the previous sections, we confirmed the effectiveness of n-best list reranking using neural MT models.", "However, reranking using n-best lists (like other search methods for MT) is an approximate search method, and its effectiveness is limited by the size of the n-best list used.", "In order to quantify the effect of this inexact search, we performed experiments to examine the post-reranking automatic evaluation scores of the MT results for all n-best list sizes from 1 to 1000.", "Figure 1 shows the results of this examination, with the x-axis referring to the log-scaled number of hypotheses in the n-best list, and the y-axis referring to the quality of the translation, either with regards to model score (for the model including the neural MT likelihood as a feature) or BLEU score.", "9 From these results we can note several interest- 9 The BLEU scores differ slightly from Table 1 due to differences in tokenization standards between these experiments and the official evaluation server.", "ing points.", "First, we can see that the improvement in scores is very slightly sub-linear in the log number of hypotheses in the n-best list.", "In other words, every time we double the n-best list size we will see an improvement in accuracy that is slightly smaller than the last time we doubled the size.", "Second, we can note that in most cases this trend continues all the way up to our limit of 1000best lists, indicating that gains are not saturating, and we can likely expect even more improvements from using larger lists, or perhaps directly performing decoding using neural models (Alkhouli et al., 2015) .", "The en-ja results, however, are an exception to this rule, with BLEU gains more or less saturating around the 50-best list point.", "Conclusion In this paper we described results applying neural MT reranking to a baseline syntax-based machine translation system in 4 languages.", "In particular, we performed an in-depth analysis of what kinds of translation errors were fixed by neural MT reranking.", "Based on this analysis, we found that the majority of the gains were related to improvements in the accuracy of transfer of correct grammatical structure to the target sentence, with the most prominent gains being related to errors regarding reordering of phrases, insertion/deletion of copulas, coordinate structures, and verb agreement.", "We also found that, within the neural MT reranking framework, accuracy gains scaled ap-proximately log-linearly with the size of the n-best list, and in most cases were not saturated even after examining 1000 unique hypotheses." ] }
{ "paper_header_number": [ "1", "2.1", "2.2", "3", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Baseline System", "Neural MT Models", "Experimental Results", "Analysis", "Effect of n-best Size on Reranking", "Conclusion" ] }
slide_id: GEM-SciDuet-train-67#paper-1143#slide-7
slide_title: What Do We Know Now
slide_content_text: Neural reranking improves subjective quality of machine translation output. Main gains are from grammatical factors, and not lexical selection.
target: identical to slide_content_text above
references: []
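The paper_content above describes the reranking step only in prose: the neural model's log-likelihood is added as one extra feature to the baseline system's features, with its weight tuned by MERT on the dev set, and the unique 1,000-best list is rescored. As a rough illustration of that scoring step (not the authors' code; the interface below is invented), the rescoring reduces to a weighted sum over the n-best list:

```python
def rerank_nbest(nbest, neural_logprob, weight):
    """Pick the best hypothesis after adding a neural log-likelihood feature.

    nbest          -- list of (hypothesis, baseline_score) pairs, e.g. the
                      unique 1,000-best output of the tree-to-string system
    neural_logprob -- callable returning the (ensembled) neural MT
                      log-likelihood of a hypothesis
    weight         -- feature weight, tuned on a dev set (MERT in the paper)
    """
    return max(nbest, key=lambda h: h[1] + weight * neural_logprob(h[0]))[0]
```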
gem_id: GEM-SciDuet-train-67#paper-1143#slide-8
paper_id: 1143
paper_title: Neural Reranking Improves Subjective Quality of Machine Translation: NAIST at WAT2015
paper_abstract / paper_content / paper_headers: identical to the previous record (both slides come from the same paper, 1143)
slide_id: GEM-SciDuet-train-67#paper-1143#slide-8
slide_title: What Do We Still Not Know Yet
slide_content_text: Neural Reranking Improves Subjective Quality of Machine Translation How do neural translation models compare with neural language models? How does reranking compare with pure neural MT?
target: identical to slide_content_text above
references: []
gem_id: GEM-SciDuet-train-68#paper-1147#slide-0
paper_id: 1147
paper_title: Predicting accuracy on large datasets from smaller pilot data
paper_abstract: Because obtaining training data is often the most difficult part of an NLP or ML project, we develop methods for predicting how much data is required to achieve a desired test accuracy by extrapolating results from systems trained on a small pilot training dataset. We model how accuracy varies as a function of training size on subsets of the pilot data, and use that model to predict how much training data would be required to achieve the desired accuracy. We introduce a new performance extrapolation task to evaluate how well different extrapolations predict system accuracy on larger training sets. We show that details of hyperparameter optimisation and the extrapolation models can have dramatic effects in a document classification task. We believe this is an important first step in developing methods for estimating the resources required to meet specific engineering performance targets.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78 ], "paper_content_text": [ "Introduction An engineering discipline should be able to predict the cost of a project before the project is started.", "Because training data is often the most expensive part of an NLP or ML project, it is important to estimate how much training data required for a system to achieve a target accuracy.", "Unfortunately our field only offers fairly impractical advice, e.g., that more data increases accuracy (Banko and Brill, 2001) ; we currently have no practical methods for estimating how much data or what quality of data is required to achieve a target accuracy goal.", "Imagine if bridge construction was planned the way we build our systems!", "Our long-term goal is to develop practical methods for designing systems that achieve target performance specifications, including identifying the amount of training data that the system will require.", "This paper starts to address this goal by introducing an extrapolation methodology that predicts a system's accuracy on a larger dataset from its performance on subsets of much smaller pilot data.", "These extrapolations allow us to estimate how much training data a system will require to achieve a target accuracy.", "We focus on a specific task (document classification) using a specific system (the fastText classifier of Joulin et al.", "(2016) ), and leave to future work to determine if our approach and results generalise to other tasks and systems.", "We introduce an accuracy extrapolation task that can be used to evaluate different extrapolation models.", "We describe three well-known extrapolation models and evaluate them on a document classification dataset.", "On our development data the biased power-law method with binomial item weighting performs best, so we propose it should be a baseline for future research.", "We demonstrate the importance of hyperparameter optimisation on each different-sized data subset (rather than just optimising on the largest data subset) and item weighting, and show that these can have a dramatic impact on extrapolation, especially from small pilot data sets.", "The data and code for all experiments in this paper, including the R code for the graphics, is available from http://web.", "science.mq.edu.au/˜mjohnson.", "Related work Power analysis (Cohen, 1992) is widely-used statistical technique (e.g., in biomedical trials) for predicting the number of measurements required in an experimental design; we aim to develop sim-ilar techniques for NLP and ML systems.", "There is a large body of research on the relationship between training data size and system performance.", "Geman et al.", "(1992) decompose the squared error of a model into a bias term (due to model errors) and a variance term (due to statistical noise).", "Bias does not vary with training data size n, but the error due to variance should decrease as O( 1 / √ n) if the training observations are independent (Domingos, 2000a,b) .", "The power-law models used in this paper have been investigated many times in prior literature (Haussler et al., 1996; Mukherjee et al., 2003; Figueroa et al., 2012; Beleites et al., 2013; Hajian-Tilaki, 2014; Cho et al., 2015) .", "Sun et al.", "(2017) , Barone et al.", 
"(2017) and the concurrent unpublished work by Hestness et al.", "(2017) point out that these power-law models describe modern ML and NLP systems quite well, including complex deep-learning systems, so we expect our results to generalise to these systems.", "This paper differs from prior work in that we explicitly focus on the task of extrapolating system performance from small pilot data.", "We introduce a new evaluation task to compare the effectiveness of different models for this extrapolation, and demonstrate the importance of per-subset hyperparameter optimisation and item weighting, which prior work did not investigate.", "Models for extrapolating pilot data We are given a system whose accuracy on a large dataset we wish to predict, but only a smaller pilot dataset is available.", "We train the system on different-sized subsets of the pilot dataset, and use the results of those training runs to estimate how the system's accuracy varies as a function of training data size.", "We focus on predicting the minimum error rate e(n) that the system can achieve on a dataset of size n after hyperparameter optimisation (where the error rate is 1−accuracy for a classifier) given a pilot dataset of size m n (in the task below, m = n /2 or m = n /10).", "We investigate three different extrapolation models of e(n) in this paper: • Power law:ê(n) = bn c • Inverse square-root:ê(n) = a + bn − 1 /2 • Biased power law:ê(n) = a + bn c Hereê(n) is the estimate of e(n), and a, b and c are adjustable parameters that are estimated based on the system's performance on the pilot dataset.", "Figure 1 : An extrapolation run from pilot data consisting of either 0.1 or 0.5 of the ag news corpus.", "The x-axis is the size of the subset of pilot data, while the y-axis is the classification error rate.", "The shapes/colors show the maximum fraction of the corpus used in the pilot data, and whether hyperparameters were optimised only once on all of the pilot data (e.g., = 0.1 and = 0.5) or at each smaller subset of the pilot data (e.g., ≤ 0.1 and ≤ 0.5).", "The lines are least-squares fits of biased power-law models (ê(n) = a + bn c ) to the corresponding pilot data.", "The red star shows minimum error rate when all the training data is used to train the classifier (this is the value we are trying to predict).", "The inverse square-root curve is what one would expect if the error is distributed according to a Bias-Variance decomposition (Geman et al., 1992) with a constant bias term a and a variance term that asymptotically follows the Central Limit Theorem.", "We fit these models using weighted least squares regression.", "Each data point or item in the regression is the result of a run of the system on a subset of the pilot dataset.", "Assuming that the underlying system has adjustable hyperparameters, the question arises: how should the hyperparameters be set?", "The computationally least demanding approach is to optimise the system's hyperparameters on the full pilot dataset, and use these hyperparameters for all the runs on subsets of the pilot dataset.", "An alternative, computationally more demanding approach is to optimise the system's hyperparameters separately on each of the subsets of the pilot dataset.", "Figure 1 shows an example where optimising the hyperparameters just on the full pilot dataset is clearly in-ferior to optimising the hyperparameters on each subset of the pilot dataset.", "We show below that the more demanding approach of optimising on each subset is superior, especially when 
extrapolating from small pilot datasets.", "We also investigate how details of the regression fit affect the regression accuracyê(n).", "We experimented with several link functions (we used the default Gaussian link here), but found that these had less impact than adjusting the item weights in the regression.", "Runs with smaller training sets presumably have higher variance, and since our goal is to extrapolate to larger datasets, it is reasonable to place more weight on items corresponding to larger datasets.", "We investigated three item weighting functions in regression: • constant weights (1), • linear weights (n), and • binomial weights ( n /e(1 − e)) Linear weights are motivated by the assumption that the item variance follows the Central Limit Theorem, while the binomial weights are motivated by the assumption that item variance follows a binomial distribution (see the Supplemental Materials for further discussion).", "As Figure 2 makes clear, linear weights and binomial weights generally produce more accurate extrapolations than constant weights, so we use binomial weights in our evaluation in Table 2 .", "A performance extrapolation task We used the fastText document classifier and the document classification corpora distributed with it; see Joulin et al.", "(2016) for full details.", "Fast-Text's speed and evaluation scripts make it easy to do the experiments described below.", "We fitted our extrapolation models to the fastText document classifier results on the 8 corpora distributed with the fastText classifier.", "These corpora contain labelled documents for a document classification task, and come randomised and divided into training and test sections.", "All our results are on these test sections.", "The corpora were divided into development and evaluation corpora (each with train and test splits) as shown in table 1.", "We use the amazon review polarity, sogou news, yahoo answers and yelp review full corpora as our test set (so these are only used in the final evaluation), while the ag news, dbpedia, amazon review full and yelp review polarity were used as development corpora.", "The development and evaluation sets contain document collections of roughly similar sizes and complexities, but no attempt was made to accurately \"balance\" the development and evaluation corpora.", "We trained the fastText classifier on 13 differently-sized prefixes of each training set that are approximately logarithmically spaced over two orders of magnitude (i.e., varying from 1 ⁄100 to all of the training corpus).", "To explore the effect of hyperparameter tuning on extrapolation, for each prefix of each training set we trained a classifier on each of 1,079 different hyperparameter settings, varying the n-gram length, learning rate, dimensionality of the hidden units and the loss function (the fastText classifier crashed on 17 hyperparameter combinations; we did not investigate why).", "We re-ran the entire process 8 times on randomlyshuffled versions of each training corpus.", "As expected, the minimum error configuration invariably requires the full training data.", "When extrapolating from subsets of a smaller pilot set (we explored pilot sets consisting of 0.1 and 0.5 of the full training data) there are two plausible ways of performing hyperparameter optimisation.", "Ideally, one would optimise the hyperparameters for each subset of the pilot data considered (we selected the best-performing hyperparameters using grid search).", "However, if one is not working with computationally 
efficient algorithms like fastText, one might be tempted to only optimise the hyperparameters once on all the pilot data, and use the hyperparameters optimised on all the pilot data when calculating the error rate on subsets of that pilot data.", "As figure 2 and table 2 make clear, selecting the optimal hyperparameters for each subset of the pilot data generally produces better extrapolation results.", "Figure 1 shows how different ways of choosing hyperparameters can affect extrapolation.", "As that figure shows, hyperparameters optimised on 50% of the training data perform very badly on 1% of the training data.", "As figure 2 shows, this can lead simpler extrapolation models such as the power-law to dramatically underestimate the error on the full dataset.", "Interestingly, more complex extrapolation models, such as the extended power-law model, often do much better.", "Based on the development corpora results presented in Figures 1 and 2 , we choose the biased power law model (ê(n) = a + bn c ) with binomial Table 2 shows that extrapolation is more accurate from larger pilot datasets; increasing the size of the pilot dataset 5 times re-duces the RMS relative residuals by a factor of 10.", "It also clearly shows that it valuable to perform hyperparameter optimisation on all subsets of the pilot dataset, not just on the whole pilot data.", "Interestingly, Table 2 shows that the RMS difference between the two approaches to hyperparameter setting is greater when the pilot data is larger.", "This makes sense; the hyperparameters that are optimal on a large pilot dataset may be far from optimal on a very small subset (this is clearly visible in Figure 1 , where the items deviating most are those for the = 0.5 pilot data and hyperparameter choice).", "Conclusions and Future Work This paper introduced an extrapolation methodology for predicting accuracy on large dataset from a small pilot dataset, applied it to a document classification system, and identified the biased powerlaw model with binomial weights as a good baseline extrapolation model.", "This only scratches the surface of performance extrapolation tasks.", "We hope that teams with greater computational resources will study the extrapolation task for computationally more-demanding systems, including popular deep learning models.", "The power-law models should be considered baselines for more sophisticated extrapolation models, which might exploit more information than just accuracy on subsets of the pilot data.", "We hope this work will spur the development of better methods for estimating the resources needed to build an NLP or ML system to meet a specification, as we believe this is essential for any mature engineering field." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "5" ], "paper_header_content": [ "Introduction", "Related work", "Models for extrapolating pilot data", "A performance extrapolation task", "Conclusions and Future Work" ] }
slide_id: GEM-SciDuet-train-68#paper-1147#slide-0
slide_title: ML as an engineering discipline
slide_content_text: A mature engineering discipline should be able to predict the cost of a project before it starts Collecting/producing training data is typically the most expensive part of an ML or NLP project We usually have only the vaguest idea of how accuracy is related to training data size and quality I More data produces better accuracy I Higher quality data (closer domain, less noise) produces I But we usually have no idea how much data or what quality of data is required to achieve a given performance goal Imagine if engineers designed bridges the way we build systems! See statistical power analysis for experimental design, e.g., Cohen (1992)
target: identical to slide_content_text above
references: []
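The paper_content above settles on the biased power-law model ê(n) = a + b·n^c with binomial item weights n / (e(1 − e)) as the best-performing extrapolation baseline. A minimal sketch of that weighted least-squares fit with SciPy follows; the pilot-data numbers are invented for illustration and are not taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def biased_power_law(n, a, b, c):
    # e_hat(n) = a + b * n**c, the paper's biased power-law model
    return a + b * np.power(n, c)

# Invented pilot-data points: (training-set size, observed error rate).
# In practice these would come from runs on subsets of the pilot data.
sizes = np.array([1e3, 2e3, 5e3, 1e4, 2e4, 5e4])
errors = np.array([0.30, 0.26, 0.21, 0.19, 0.17, 0.15])

# Binomial item weights n / (e * (1 - e)); curve_fit takes per-item
# standard deviations, so pass sigma = 1 / sqrt(weight).
weights = sizes / (errors * (1.0 - errors))
sigma = 1.0 / np.sqrt(weights)

params, _ = curve_fit(biased_power_law, sizes, errors,
                      p0=(0.05, 1.0, -0.5), sigma=sigma, maxfev=10000)
a, b, c = params
# Extrapolate to a dataset 20x larger than the biggest pilot subset.
print("predicted error at n = 1e6:", biased_power_law(1e6, a, b, c))
```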
gem_id: GEM-SciDuet-train-68#paper-1147#slide-1
paper_id: 1147
paper_title: Predicting accuracy on large datasets from smaller pilot data
Because obtaining training data is often the most difficult part of an NLP or ML project, we develop methods for predicting how much data is required to achieve a desired test accuracy by extrapolating results from systems trained on a small pilot training dataset. We model how accuracy varies as a function of training size on subsets of the pilot data, and use that model to predict how much training data would be required to achieve the desired accuracy. We introduce a new performance extrapolation task to evaluate how well different extrapolations predict system accuracy on larger training sets. We show that details of hyperparameter optimisation and the extrapolation models can have dramatic effects in a document classification task. We believe this is an important first step in developing methods for estimating the resources required to meet specific engineering performance targets.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78 ], "paper_content_text": [ "Introduction An engineering discipline should be able to predict the cost of a project before the project is started.", "Because training data is often the most expensive part of an NLP or ML project, it is important to estimate how much training data required for a system to achieve a target accuracy.", "Unfortunately our field only offers fairly impractical advice, e.g., that more data increases accuracy (Banko and Brill, 2001) ; we currently have no practical methods for estimating how much data or what quality of data is required to achieve a target accuracy goal.", "Imagine if bridge construction was planned the way we build our systems!", "Our long-term goal is to develop practical methods for designing systems that achieve target performance specifications, including identifying the amount of training data that the system will require.", "This paper starts to address this goal by introducing an extrapolation methodology that predicts a system's accuracy on a larger dataset from its performance on subsets of much smaller pilot data.", "These extrapolations allow us to estimate how much training data a system will require to achieve a target accuracy.", "We focus on a specific task (document classification) using a specific system (the fastText classifier of Joulin et al.", "(2016) ), and leave to future work to determine if our approach and results generalise to other tasks and systems.", "We introduce an accuracy extrapolation task that can be used to evaluate different extrapolation models.", "We describe three well-known extrapolation models and evaluate them on a document classification dataset.", "On our development data the biased power-law method with binomial item weighting performs best, so we propose it should be a baseline for future research.", "We demonstrate the importance of hyperparameter optimisation on each different-sized data subset (rather than just optimising on the largest data subset) and item weighting, and show that these can have a dramatic impact on extrapolation, especially from small pilot data sets.", "The data and code for all experiments in this paper, including the R code for the graphics, is available from http://web.", "science.mq.edu.au/˜mjohnson.", "Related work Power analysis (Cohen, 1992) is widely-used statistical technique (e.g., in biomedical trials) for predicting the number of measurements required in an experimental design; we aim to develop sim-ilar techniques for NLP and ML systems.", "There is a large body of research on the relationship between training data size and system performance.", "Geman et al.", "(1992) decompose the squared error of a model into a bias term (due to model errors) and a variance term (due to statistical noise).", "Bias does not vary with training data size n, but the error due to variance should decrease as O( 1 / √ n) if the training observations are independent (Domingos, 2000a,b) .", "The power-law models used in this paper have been investigated many times in prior literature (Haussler et al., 1996; Mukherjee et al., 2003; Figueroa et al., 2012; Beleites et al., 2013; Hajian-Tilaki, 2014; Cho et al., 2015) .", "Sun et al.", "(2017) , Barone et al.", 
"(2017) and the concurrent unpublished work by Hestness et al.", "(2017) point out that these power-law models describe modern ML and NLP systems quite well, including complex deep-learning systems, so we expect our results to generalise to these systems.", "This paper differs from prior work in that we explicitly focus on the task of extrapolating system performance from small pilot data.", "We introduce a new evaluation task to compare the effectiveness of different models for this extrapolation, and demonstrate the importance of per-subset hyperparameter optimisation and item weighting, which prior work did not investigate.", "Models for extrapolating pilot data We are given a system whose accuracy on a large dataset we wish to predict, but only a smaller pilot dataset is available.", "We train the system on different-sized subsets of the pilot dataset, and use the results of those training runs to estimate how the system's accuracy varies as a function of training data size.", "We focus on predicting the minimum error rate e(n) that the system can achieve on a dataset of size n after hyperparameter optimisation (where the error rate is 1−accuracy for a classifier) given a pilot dataset of size m n (in the task below, m = n /2 or m = n /10).", "We investigate three different extrapolation models of e(n) in this paper: • Power law:ê(n) = bn c • Inverse square-root:ê(n) = a + bn − 1 /2 • Biased power law:ê(n) = a + bn c Hereê(n) is the estimate of e(n), and a, b and c are adjustable parameters that are estimated based on the system's performance on the pilot dataset.", "Figure 1 : An extrapolation run from pilot data consisting of either 0.1 or 0.5 of the ag news corpus.", "The x-axis is the size of the subset of pilot data, while the y-axis is the classification error rate.", "The shapes/colors show the maximum fraction of the corpus used in the pilot data, and whether hyperparameters were optimised only once on all of the pilot data (e.g., = 0.1 and = 0.5) or at each smaller subset of the pilot data (e.g., ≤ 0.1 and ≤ 0.5).", "The lines are least-squares fits of biased power-law models (ê(n) = a + bn c ) to the corresponding pilot data.", "The red star shows minimum error rate when all the training data is used to train the classifier (this is the value we are trying to predict).", "The inverse square-root curve is what one would expect if the error is distributed according to a Bias-Variance decomposition (Geman et al., 1992) with a constant bias term a and a variance term that asymptotically follows the Central Limit Theorem.", "We fit these models using weighted least squares regression.", "Each data point or item in the regression is the result of a run of the system on a subset of the pilot dataset.", "Assuming that the underlying system has adjustable hyperparameters, the question arises: how should the hyperparameters be set?", "The computationally least demanding approach is to optimise the system's hyperparameters on the full pilot dataset, and use these hyperparameters for all the runs on subsets of the pilot dataset.", "An alternative, computationally more demanding approach is to optimise the system's hyperparameters separately on each of the subsets of the pilot dataset.", "Figure 1 shows an example where optimising the hyperparameters just on the full pilot dataset is clearly in-ferior to optimising the hyperparameters on each subset of the pilot dataset.", "We show below that the more demanding approach of optimising on each subset is superior, especially when 
extrapolating from small pilot datasets.", "We also investigate how details of the regression fit affect the regression accuracyê(n).", "We experimented with several link functions (we used the default Gaussian link here), but found that these had less impact than adjusting the item weights in the regression.", "Runs with smaller training sets presumably have higher variance, and since our goal is to extrapolate to larger datasets, it is reasonable to place more weight on items corresponding to larger datasets.", "We investigated three item weighting functions in regression: • constant weights (1), • linear weights (n), and • binomial weights ( n /e(1 − e)) Linear weights are motivated by the assumption that the item variance follows the Central Limit Theorem, while the binomial weights are motivated by the assumption that item variance follows a binomial distribution (see the Supplemental Materials for further discussion).", "As Figure 2 makes clear, linear weights and binomial weights generally produce more accurate extrapolations than constant weights, so we use binomial weights in our evaluation in Table 2 .", "A performance extrapolation task We used the fastText document classifier and the document classification corpora distributed with it; see Joulin et al.", "(2016) for full details.", "Fast-Text's speed and evaluation scripts make it easy to do the experiments described below.", "We fitted our extrapolation models to the fastText document classifier results on the 8 corpora distributed with the fastText classifier.", "These corpora contain labelled documents for a document classification task, and come randomised and divided into training and test sections.", "All our results are on these test sections.", "The corpora were divided into development and evaluation corpora (each with train and test splits) as shown in table 1.", "We use the amazon review polarity, sogou news, yahoo answers and yelp review full corpora as our test set (so these are only used in the final evaluation), while the ag news, dbpedia, amazon review full and yelp review polarity were used as development corpora.", "The development and evaluation sets contain document collections of roughly similar sizes and complexities, but no attempt was made to accurately \"balance\" the development and evaluation corpora.", "We trained the fastText classifier on 13 differently-sized prefixes of each training set that are approximately logarithmically spaced over two orders of magnitude (i.e., varying from 1 ⁄100 to all of the training corpus).", "To explore the effect of hyperparameter tuning on extrapolation, for each prefix of each training set we trained a classifier on each of 1,079 different hyperparameter settings, varying the n-gram length, learning rate, dimensionality of the hidden units and the loss function (the fastText classifier crashed on 17 hyperparameter combinations; we did not investigate why).", "We re-ran the entire process 8 times on randomlyshuffled versions of each training corpus.", "As expected, the minimum error configuration invariably requires the full training data.", "When extrapolating from subsets of a smaller pilot set (we explored pilot sets consisting of 0.1 and 0.5 of the full training data) there are two plausible ways of performing hyperparameter optimisation.", "Ideally, one would optimise the hyperparameters for each subset of the pilot data considered (we selected the best-performing hyperparameters using grid search).", "However, if one is not working with computationally 
efficient algorithms like fastText, one might be tempted to only optimise the hyperparameters once on all the pilot data, and use the hyperparameters optimised on all the pilot data when calculating the error rate on subsets of that pilot data.", "As figure 2 and table 2 make clear, selecting the optimal hyperparameters for each subset of the pilot data generally produces better extrapolation results.", "Figure 1 shows how different ways of choosing hyperparameters can affect extrapolation.", "As that figure shows, hyperparameters optimised on 50% of the training data perform very badly on 1% of the training data.", "As figure 2 shows, this can lead simpler extrapolation models such as the power-law to dramatically underestimate the error on the full dataset.", "Interestingly, more complex extrapolation models, such as the extended power-law model, often do much better.", "Based on the development corpora results presented in Figures 1 and 2 , we choose the biased power law model (ê(n) = a + bn c ) with binomial Table 2 shows that extrapolation is more accurate from larger pilot datasets; increasing the size of the pilot dataset 5 times re-duces the RMS relative residuals by a factor of 10.", "It also clearly shows that it valuable to perform hyperparameter optimisation on all subsets of the pilot dataset, not just on the whole pilot data.", "Interestingly, Table 2 shows that the RMS difference between the two approaches to hyperparameter setting is greater when the pilot data is larger.", "This makes sense; the hyperparameters that are optimal on a large pilot dataset may be far from optimal on a very small subset (this is clearly visible in Figure 1 , where the items deviating most are those for the = 0.5 pilot data and hyperparameter choice).", "Conclusions and Future Work This paper introduced an extrapolation methodology for predicting accuracy on large dataset from a small pilot dataset, applied it to a document classification system, and identified the biased powerlaw model with binomial weights as a good baseline extrapolation model.", "This only scratches the surface of performance extrapolation tasks.", "We hope that teams with greater computational resources will study the extrapolation task for computationally more-demanding systems, including popular deep learning models.", "The power-law models should be considered baselines for more sophisticated extrapolation models, which might exploit more information than just accuracy on subsets of the pilot data.", "We hope this work will spur the development of better methods for estimating the resources needed to build an NLP or ML system to meet a specification, as we believe this is essential for any mature engineering field." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "5" ], "paper_header_content": [ "Introduction", "Related work", "Models for extrapolating pilot data", "A performance extrapolation task", "Conclusions and Future Work" ] }
GEM-SciDuet-train-68#paper-1147#slide-1
Goals of this research project
Given desiderata (accuracy, speed, computational and data resource pricing, etc.) for an ML/NLP system, produce a design for a system that meets them. Example: design a semantic parser for a target application domain that achieves 95% accuracy across a given range of queries. - What hardware/software should I use? - How many labelled training examples do I need? Idea: Extrapolate performance from small pilot data to predict performance on much larger data
Given desiderata (accuracy, speed, computational and data resource pricing, etc.) for an ML/NLP system, produce a design for a system that meets them. Example: design a semantic parser for a target application domain that achieves 95% accuracy across a given range of queries. - What hardware/software should I use? - How many labelled training examples do I need? Idea: Extrapolate performance from small pilot data to predict performance on much larger data
[]
GEM-SciDuet-train-68#paper-1147#slide-2
1147
Predicting accuracy on large datasets from smaller pilot data
GEM-SciDuet-train-68#paper-1147#slide-2
What this paper contributes
Studies different methods for predicting accuracy on a full dataset from results on a small pilot dataset. We propose a new accuracy extrapolation task and provide results for the 9 extrapolation methods (3 models × 3 item weightings) on 8 text corpora - Uses the fastText document classifier and corpora (Joulin et al., 2016). Investigates three extrapolation models and three item weighting functions for predicting accuracy as a function of training data size - Easily inverted to estimate the training size required to achieve a target accuracy. Highlights the importance of hyperparameter tuning and item weighting in extrapolation
Studies different methods for predicting accuracy on a full dataset from results on a small pilot dataset. We propose a new accuracy extrapolation task and provide results for the 9 extrapolation methods (3 models × 3 item weightings) on 8 text corpora - Uses the fastText document classifier and corpora (Joulin et al., 2016). Investigates three extrapolation models and three item weighting functions for predicting accuracy as a function of training data size - Easily inverted to estimate the training size required to achieve a target accuracy. Highlights the importance of hyperparameter tuning and item weighting in extrapolation
[]
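The "easily inverted" point in the slide above can be made explicit: solving e(n) = a + b·n^c for n gives the training size needed to reach a target error rate. A small sketch, with illustrative parameter values (not fitted to any real corpus):

```python
# Invert the biased power law e(n) = a + b * n**c to estimate the training
# size needed for a target error rate; parameter values below are illustrative.
def required_training_size(target_error, a, b, c):
    if target_error <= a:
        raise ValueError("target error is at or below the asymptotic floor a")
    return ((target_error - a) / b) ** (1.0 / c)

# e.g. with a fitted model a=0.05, b=2.0, c=-0.4, reaching 7% error needs:
print(round(required_training_size(0.07, a=0.05, b=2.0, c=-0.4)))  # ~100000 items
```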
GEM-SciDuet-train-68#paper-1147#slide-3
1147
Predicting accuracy on large datasets from smaller pilot data
GEM-SciDuet-train-68#paper-1147#slide-3
Accuracy extrapolation task
FastText document classifier corpora (table columns: Corpus, Labels, Train (K), Test (K)). Development: ag_news, dbpedia, amazon_review_full, yelp_review_polarity. Evaluation: amazon_review_polarity, sogou_news, yahoo_answers, yelp_review_full. - 4 development corpora - 4 evaluation corpora. Goal: use pilot data to predict test accuracy when trained on the full training data
FastText document classifier corpora (table columns: Corpus, Labels, Train (K), Test (K)). Development: ag_news, dbpedia, amazon_review_full, yelp_review_polarity. Evaluation: amazon_review_polarity, sogou_news, yahoo_answers, yelp_review_full. - 4 development corpora - 4 evaluation corpora. Goal: use pilot data to predict test accuracy when trained on the full training data
[]
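The 13 log-spaced training prefixes used in the task above are easy to reproduce; a sketch follows (the corpus size is illustrative, and the exact spacing in the original experiments may differ):

```python
# Sketch of generating ~13 prefix sizes, approximately logarithmically spaced
# over two orders of magnitude (1/100 of the training data up to all of it).
import numpy as np

def prefix_sizes(n_train, k=13):
    fractions = np.logspace(-2, 0, num=k)  # 0.01 ... 1.0
    return sorted({int(round(f * n_train)) for f in fractions})

print(prefix_sizes(120000))  # e.g. a 120K-document training set
```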
GEM-SciDuet-train-68#paper-1147#slide-4
1147
Predicting accuracy on large datasets from smaller pilot data
GEM-SciDuet-train-68#paper-1147#slide-4
Extrapolation on the ag_news corpus
Extrapolation with the biased power-law model on pilot data, with binomial weights (n/(e(1−e))). Extrapolation from 50% of the training data is generally good; extrapolation from 10% of the training data is poor unless hyperparameters are optimised at each subset of the pilot data
Extrapolation with biased power-law model. Pilot data: binomial weights (n/(e(1 − e))). Extrapolation from training data is generally good. Extrapolation from training data is poor unless hyperparameters are optimised at each subset of pilot data.
[]
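The two hyperparameter strategies contrasted on this slide can be sketched as follows. This is an editor's illustration of the protocol only: a synthetic dataset and scikit-learn's logistic regression stand in for the document-classification corpora and the fastText classifier, the grid is a placeholder, and the gap the authors observed for fastText need not reproduce with this stand-in.

```python
# Sketch: tune hyperparameters once on all pilot data and reuse them,
# versus re-tuning on every subset of the pilot data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=20000, n_features=50, random_state=0)
X_pilot, X_test, y_pilot, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)
grid = {"C": [0.01, 0.1, 1.0, 10.0]}   # placeholder hyperparameter grid

# Strategy 1: hyperparameters chosen once, on all of the pilot data.
once = GridSearchCV(LogisticRegression(max_iter=1000), grid, cv=3)
once.fit(X_pilot, y_pilot)

for frac in (0.01, 0.05, 0.1, 0.5, 1.0):
    m = max(60, int(frac * len(X_pilot)))
    Xs, ys = X_pilot[:m], y_pilot[:m]
    fixed = LogisticRegression(max_iter=1000, **once.best_params_).fit(Xs, ys)
    # Strategy 2: hyperparameters re-tuned on this subset only.
    tuned = GridSearchCV(LogisticRegression(max_iter=1000), grid, cv=3)
    tuned.fit(Xs, ys)
    print("m=%5d  fixed err=%.3f  per-subset err=%.3f" % (
        m, 1 - fixed.score(X_test, y_test),
        1 - tuned.best_estimator_.score(X_test, y_test)))
```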
GEM-SciDuet-train-68#paper-1147#slide-5
1147
Predicting accuracy on large datasets from smaller pilot data
Because obtaining training data is often the most difficult part of an NLP or ML project, we develop methods for predicting how much data is required to achieve a desired test accuracy by extrapolating results from systems trained on a small pilot training dataset. We model how accuracy varies as a function of training size on subsets of the pilot data, and use that model to predict how much training data would be required to achieve the desired accuracy. We introduce a new performance extrapolation task to evaluate how well different extrapolations predict system accuracy on larger training sets. We show that details of hyperparameter optimisation and the extrapolation models can have dramatic effects in a document classification task. We believe this is an important first step in developing methods for estimating the resources required to meet specific engineering performance targets.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78 ], "paper_content_text": [ "Introduction An engineering discipline should be able to predict the cost of a project before the project is started.", "Because training data is often the most expensive part of an NLP or ML project, it is important to estimate how much training data required for a system to achieve a target accuracy.", "Unfortunately our field only offers fairly impractical advice, e.g., that more data increases accuracy (Banko and Brill, 2001) ; we currently have no practical methods for estimating how much data or what quality of data is required to achieve a target accuracy goal.", "Imagine if bridge construction was planned the way we build our systems!", "Our long-term goal is to develop practical methods for designing systems that achieve target performance specifications, including identifying the amount of training data that the system will require.", "This paper starts to address this goal by introducing an extrapolation methodology that predicts a system's accuracy on a larger dataset from its performance on subsets of much smaller pilot data.", "These extrapolations allow us to estimate how much training data a system will require to achieve a target accuracy.", "We focus on a specific task (document classification) using a specific system (the fastText classifier of Joulin et al.", "(2016) ), and leave to future work to determine if our approach and results generalise to other tasks and systems.", "We introduce an accuracy extrapolation task that can be used to evaluate different extrapolation models.", "We describe three well-known extrapolation models and evaluate them on a document classification dataset.", "On our development data the biased power-law method with binomial item weighting performs best, so we propose it should be a baseline for future research.", "We demonstrate the importance of hyperparameter optimisation on each different-sized data subset (rather than just optimising on the largest data subset) and item weighting, and show that these can have a dramatic impact on extrapolation, especially from small pilot data sets.", "The data and code for all experiments in this paper, including the R code for the graphics, is available from http://web.", "science.mq.edu.au/˜mjohnson.", "Related work Power analysis (Cohen, 1992) is widely-used statistical technique (e.g., in biomedical trials) for predicting the number of measurements required in an experimental design; we aim to develop sim-ilar techniques for NLP and ML systems.", "There is a large body of research on the relationship between training data size and system performance.", "Geman et al.", "(1992) decompose the squared error of a model into a bias term (due to model errors) and a variance term (due to statistical noise).", "Bias does not vary with training data size n, but the error due to variance should decrease as O( 1 / √ n) if the training observations are independent (Domingos, 2000a,b) .", "The power-law models used in this paper have been investigated many times in prior literature (Haussler et al., 1996; Mukherjee et al., 2003; Figueroa et al., 2012; Beleites et al., 2013; Hajian-Tilaki, 2014; Cho et al., 2015) .", "Sun et al.", "(2017) , Barone et al.", 
"(2017) and the concurrent unpublished work by Hestness et al.", "(2017) point out that these power-law models describe modern ML and NLP systems quite well, including complex deep-learning systems, so we expect our results to generalise to these systems.", "This paper differs from prior work in that we explicitly focus on the task of extrapolating system performance from small pilot data.", "We introduce a new evaluation task to compare the effectiveness of different models for this extrapolation, and demonstrate the importance of per-subset hyperparameter optimisation and item weighting, which prior work did not investigate.", "Models for extrapolating pilot data We are given a system whose accuracy on a large dataset we wish to predict, but only a smaller pilot dataset is available.", "We train the system on different-sized subsets of the pilot dataset, and use the results of those training runs to estimate how the system's accuracy varies as a function of training data size.", "We focus on predicting the minimum error rate e(n) that the system can achieve on a dataset of size n after hyperparameter optimisation (where the error rate is 1−accuracy for a classifier) given a pilot dataset of size m n (in the task below, m = n /2 or m = n /10).", "We investigate three different extrapolation models of e(n) in this paper: • Power law:ê(n) = bn c • Inverse square-root:ê(n) = a + bn − 1 /2 • Biased power law:ê(n) = a + bn c Hereê(n) is the estimate of e(n), and a, b and c are adjustable parameters that are estimated based on the system's performance on the pilot dataset.", "Figure 1 : An extrapolation run from pilot data consisting of either 0.1 or 0.5 of the ag news corpus.", "The x-axis is the size of the subset of pilot data, while the y-axis is the classification error rate.", "The shapes/colors show the maximum fraction of the corpus used in the pilot data, and whether hyperparameters were optimised only once on all of the pilot data (e.g., = 0.1 and = 0.5) or at each smaller subset of the pilot data (e.g., ≤ 0.1 and ≤ 0.5).", "The lines are least-squares fits of biased power-law models (ê(n) = a + bn c ) to the corresponding pilot data.", "The red star shows minimum error rate when all the training data is used to train the classifier (this is the value we are trying to predict).", "The inverse square-root curve is what one would expect if the error is distributed according to a Bias-Variance decomposition (Geman et al., 1992) with a constant bias term a and a variance term that asymptotically follows the Central Limit Theorem.", "We fit these models using weighted least squares regression.", "Each data point or item in the regression is the result of a run of the system on a subset of the pilot dataset.", "Assuming that the underlying system has adjustable hyperparameters, the question arises: how should the hyperparameters be set?", "The computationally least demanding approach is to optimise the system's hyperparameters on the full pilot dataset, and use these hyperparameters for all the runs on subsets of the pilot dataset.", "An alternative, computationally more demanding approach is to optimise the system's hyperparameters separately on each of the subsets of the pilot dataset.", "Figure 1 shows an example where optimising the hyperparameters just on the full pilot dataset is clearly in-ferior to optimising the hyperparameters on each subset of the pilot dataset.", "We show below that the more demanding approach of optimising on each subset is superior, especially when 
extrapolating from small pilot datasets.", "We also investigate how details of the regression fit affect the regression accuracyê(n).", "We experimented with several link functions (we used the default Gaussian link here), but found that these had less impact than adjusting the item weights in the regression.", "Runs with smaller training sets presumably have higher variance, and since our goal is to extrapolate to larger datasets, it is reasonable to place more weight on items corresponding to larger datasets.", "We investigated three item weighting functions in regression: • constant weights (1), • linear weights (n), and • binomial weights ( n /e(1 − e)) Linear weights are motivated by the assumption that the item variance follows the Central Limit Theorem, while the binomial weights are motivated by the assumption that item variance follows a binomial distribution (see the Supplemental Materials for further discussion).", "As Figure 2 makes clear, linear weights and binomial weights generally produce more accurate extrapolations than constant weights, so we use binomial weights in our evaluation in Table 2 .", "A performance extrapolation task We used the fastText document classifier and the document classification corpora distributed with it; see Joulin et al.", "(2016) for full details.", "Fast-Text's speed and evaluation scripts make it easy to do the experiments described below.", "We fitted our extrapolation models to the fastText document classifier results on the 8 corpora distributed with the fastText classifier.", "These corpora contain labelled documents for a document classification task, and come randomised and divided into training and test sections.", "All our results are on these test sections.", "The corpora were divided into development and evaluation corpora (each with train and test splits) as shown in table 1.", "We use the amazon review polarity, sogou news, yahoo answers and yelp review full corpora as our test set (so these are only used in the final evaluation), while the ag news, dbpedia, amazon review full and yelp review polarity were used as development corpora.", "The development and evaluation sets contain document collections of roughly similar sizes and complexities, but no attempt was made to accurately \"balance\" the development and evaluation corpora.", "We trained the fastText classifier on 13 differently-sized prefixes of each training set that are approximately logarithmically spaced over two orders of magnitude (i.e., varying from 1 ⁄100 to all of the training corpus).", "To explore the effect of hyperparameter tuning on extrapolation, for each prefix of each training set we trained a classifier on each of 1,079 different hyperparameter settings, varying the n-gram length, learning rate, dimensionality of the hidden units and the loss function (the fastText classifier crashed on 17 hyperparameter combinations; we did not investigate why).", "We re-ran the entire process 8 times on randomlyshuffled versions of each training corpus.", "As expected, the minimum error configuration invariably requires the full training data.", "When extrapolating from subsets of a smaller pilot set (we explored pilot sets consisting of 0.1 and 0.5 of the full training data) there are two plausible ways of performing hyperparameter optimisation.", "Ideally, one would optimise the hyperparameters for each subset of the pilot data considered (we selected the best-performing hyperparameters using grid search).", "However, if one is not working with computationally 
efficient algorithms like fastText, one might be tempted to only optimise the hyperparameters once on all the pilot data, and use the hyperparameters optimised on all the pilot data when calculating the error rate on subsets of that pilot data.", "As figure 2 and table 2 make clear, selecting the optimal hyperparameters for each subset of the pilot data generally produces better extrapolation results.", "Figure 1 shows how different ways of choosing hyperparameters can affect extrapolation.", "As that figure shows, hyperparameters optimised on 50% of the training data perform very badly on 1% of the training data.", "As figure 2 shows, this can lead simpler extrapolation models such as the power-law to dramatically underestimate the error on the full dataset.", "Interestingly, more complex extrapolation models, such as the extended power-law model, often do much better.", "Based on the development corpora results presented in Figures 1 and 2 , we choose the biased power law model (ê(n) = a + bn c ) with binomial Table 2 shows that extrapolation is more accurate from larger pilot datasets; increasing the size of the pilot dataset 5 times re-duces the RMS relative residuals by a factor of 10.", "It also clearly shows that it valuable to perform hyperparameter optimisation on all subsets of the pilot dataset, not just on the whole pilot data.", "Interestingly, Table 2 shows that the RMS difference between the two approaches to hyperparameter setting is greater when the pilot data is larger.", "This makes sense; the hyperparameters that are optimal on a large pilot dataset may be far from optimal on a very small subset (this is clearly visible in Figure 1 , where the items deviating most are those for the = 0.5 pilot data and hyperparameter choice).", "Conclusions and Future Work This paper introduced an extrapolation methodology for predicting accuracy on large dataset from a small pilot dataset, applied it to a document classification system, and identified the biased powerlaw model with binomial weights as a good baseline extrapolation model.", "This only scratches the surface of performance extrapolation tasks.", "We hope that teams with greater computational resources will study the extrapolation task for computationally more-demanding systems, including popular deep learning models.", "The power-law models should be considered baselines for more sophisticated extrapolation models, which might exploit more information than just accuracy on subsets of the pilot data.", "We hope this work will spur the development of better methods for estimating the resources needed to build an NLP or ML system to meet a specification, as we believe this is essential for any mature engineering field." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "5" ], "paper_header_content": [ "Introduction", "Related work", "Models for extrapolating pilot data", "A performance extrapolation task", "Conclusions and Future Work" ] }
GEM-SciDuet-train-68#paper-1147#slide-5
Relative residuals on dev corpora
ag_news amazon_review_full dbpedia yelp_review_polarity
ag_news amazon_review_full dbpedia yelp_review_polarity
[]
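For reference, the metric behind these plots reduces to a few lines, assuming the usual definition of a relative residual, (ê − e)/e; the error rates below are invented placeholders.

```python
# Editor's sketch of the evaluation metric: relative residuals
# (e_hat - e) / e and their RMS. Error rates are invented placeholders.
import numpy as np

e_true = np.array([0.075, 0.028, 0.416, 0.044])  # observed full-data errors
e_hat = np.array([0.081, 0.025, 0.430, 0.047])   # extrapolated predictions

rel = (e_hat - e_true) / e_true
print("relative residuals:", np.round(rel, 3))
print("RMS:", round(float(np.sqrt(np.mean(rel ** 2))), 4))
```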
GEM-SciDuet-train-68#paper-1147#slide-6
1147
Predicting accuracy on large datasets from smaller pilot data
GEM-SciDuet-train-68#paper-1147#slide-6
RMS relative residuals on test corpora
Pilot data: amazon review polarity, sogou news, yahoo answers, yelp review full. Based on dev corpora results, use: biased power law model (ê(n) = a + bn^c); binomial item weights (n/(e(1 − e))). Evaluate extrapolations with RMS of relative residuals. Larger pilot data → smaller extrapolation error. Optimise hyperparameters at each pilot subset → smaller extrapolation error.
Pilot data: amazon review polarity, sogou news, yahoo answers, yelp review full. Based on dev corpora results, use: biased power law model (ê(n) = a + bn^c); binomial item weights (n/(e(1 − e))). Evaluate extrapolations with RMS of relative residuals. Larger pilot data → smaller extrapolation error. Optimise hyperparameters at each pilot subset → smaller extrapolation error.
[]
GEM-SciDuet-train-68#paper-1147#slide-7
1147
Predicting accuracy on large datasets from smaller pilot data
GEM-SciDuet-train-68#paper-1147#slide-7
Conclusions and future work
The field needs methods for predicting how much training data a system needs to achieve a target performance. We introduced an extrapolation task for predicting a classifier's accuracy on a large dataset from a small pilot dataset. Highlight the importance of hyperparameter tuning and item weighting. Future work: extrapolation methods that don't require expensive hyperparameter optimisation.
The field needs methods for predicting how much training data a system needs to achieve a target performance. We introduced an extrapolation task for predicting a classifier's accuracy on a large dataset from a small pilot dataset. Highlight the importance of hyperparameter tuning and item weighting. Future work: extrapolation methods that don't require expensive hyperparameter optimisation.
[]
GEM-SciDuet-train-69#paper-1148#slide-0
1148
Leveraging distributed representations and lexico-syntactic fixedness for token-level prediction of the idiomaticity of English verb-noun combinations
Verb-noun combinations (VNCs) - e.g., blow the whistle, hit the roof, and see stars - are a common type of English idiom that are ambiguous with literal usages. In this paper we propose and evaluate models for classifying VNC usages as idiomatic or literal, based on a variety of approaches to forming distributed representations. Our results show that a model based on averaging word embeddings performs on par with, or better than, a previously-proposed approach based on skip-thoughts. Idiomatic usages of VNCs are known to exhibit lexico-syntactic fixedness. We further incorporate this information into our models, demonstrating that this rich linguistic knowledge is complementary to the information carried by distributed representations.
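The pipeline this abstract describes (unit-normalised word embeddings averaged into a sentence vector, optionally concatenated with a one-dimensional canonical-form indicator, then classified with a linear SVM) can be sketched as follows. This is an editor's illustration, not the authors' code: the embeddings, pattern counts, sentences and labels are invented placeholders, and the canonical-form computation follows the z-score formulation of Fazly et al. (2009) given as equations (1) and (2) later in the paper.

```python
# Sketch: idiomatic-vs-literal VNC classification from averaged, unit-
# normalised word embeddings plus a canonical-form (CForm) feature.
# Embeddings, pattern counts, sentences and labels are placeholders.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
words = "hereford united were seeing stars look into the night sky to see"
vocab = {w: rng.normal(size=300) for w in words.split()}

def sentence_vector(tokens):
    vs = [vocab[t] / np.linalg.norm(vocab[t]) for t in tokens if t in vocab]
    return np.mean(vs, axis=0)

def canonical_forms(pattern_freqs, t_z=1.0):
    """Fazly et al. (2009): patterns whose z-scored frequency exceeds T_z."""
    f = np.asarray(pattern_freqs, dtype=float)   # one count per pattern
    z = (f - f.mean()) / f.std()
    return set(np.nonzero(z > t_z)[0])

# Invented corpus counts for see_stars over 11 lexico-syntactic patterns:
cforms = canonical_forms([3, 2, 1, 2, 4, 1, 0, 2, 1, 3, 40])

def features(tokens, pattern_id):
    cf = 1.0 if pattern_id in cforms else 0.0    # the +CF setup's extra dim
    return np.concatenate([sentence_vector(tokens), [cf]])

X = np.stack([features("hereford united were seeing stars".split(), 10),
              features("look into the night sky to see the stars".split(), 3)])
y = np.array([1, 0])                 # 1 = idiomatic, 0 = literal
clf = LinearSVC(C=1.0).fit(X, y)
print(clf.predict(X))
```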
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118 ], "paper_content_text": [ "Introduction Multiword expressions (MWEs) are combinations of multiple words that exhibit some degree of idiomaticity (Baldwin and Kim, 2010) .", "Verb-noun combinations (VNCs), consisting of a verb with a noun in its direct object position, are a common type of semantically-idiomatic MWE in English and cross-lingually (Fazly et al., 2009) .", "Many VNCs are ambiguous between MWEs and literal combinations, as in the following examples of see stars, in which 1 is an idiomatic usage (i.e., an MWE), while 2 is a literal combination.", "1 1.", "Hereford United were seeing stars at Gillingham after letting in 2 early goals 2.", "Look into the night sky to see the stars MWE identification is the task of automatically determining which word combinations at the token-level form MWEs (Baldwin and Kim, 2010) , and must be able to make such distinctions.", "This is particularly important for applications such as machine translation (Sag et al., 2002) , where the appropriate meaning of word combinations in context must be preserved for accurate translation.", "In this paper, following prior work (e.g., Salton et al., 2016) , we frame token-level identification of VNCs as a supervised binary classification problem, i.e., idiomatic vs. literal.", "We consider a range of approaches to forming distributed representations of the context in which a VNC occurs, including word embeddings (Mikolov et al., 2013) , word embeddings tailored to representing sentences (Kenter et al., 2016) , and skip-thoughts sentence embeddings (Kiros et al., 2015) .", "We then train a support vector machine (SVM) on these representations to classify unseen VNC instances.", "Surprisingly, we find that an approach based on representing sentences as the average of their word embeddings performs comparably to, or better than, the skip-thoughts based approach previously proposed by Salton et al.", "(2016) .", "VNCs exhibit lexico-syntactic fixedness.", "For example, the idiomatic interpretation in example 1 above is typically only accessible when the verb see has active voice, the determiner is null, and the noun star is in plural form, as in see stars or seeing stars.", "Usages with a determiner (as in example 2), a singular noun (e.g., see a star), or passive voice (e.g., stars were seen) typically only have the literal interpretation.", "In this paper we further incorporate knowledge of the lexico-syntactic fixedness of VNCs -automatically acquired from corpora using the method of Fazly et al.", "(2009) -into our various embedding-based approaches.", "Our experimental results show that this leads to substantial improve-ments, indicating that this rich linguistic knowledge is complementary to that available in distributed representations.", "Related work Much research on MWE identification has focused on specific kinds of MWEs (e.g., Patrick and Fletcher, 2005; Uchiyama et al., 2005) , including English VNCs (e.g., Fazly et al., 2009; Salton et al., 2016) , although some recent work has considered the 
identification of a broad range of kinds of MWEs (e.g., Schneider et al., 2014; Brooke et al., 2014; Savary et al., 2017).", "Work on MWE identification has leveraged rich linguistic knowledge of the constructions under consideration (e.g., Fazly et al., 2009; Fothergill and Baldwin, 2012), treated literal and idiomatic as two senses of an expression and applied approaches similar to word-sense disambiguation (e.g., Birke and Sarkar, 2006; Hashimoto and Kawahara, 2008), incorporated topic models (e.g., Li et al., 2010), and made use of distributed representations of words (Gharbieh et al., 2016).", "In the most closely related work to ours, Salton et al.", "(2016) represent token instances of VNCs by embedding the sentence that they occur in using skip-thoughts (Kiros et al., 2015) - an encoder-decoder model that can be viewed as a sentence-level counterpart to the word2vec (Mikolov et al., 2013) skip-gram model.", "During training the target sentence is encoded using a recurrent neural network, and is used to predict the previous and next sentences.", "Salton et al.", "then use these sentence embeddings, representing VNC token instances, as features in a supervised classifier.", "We treat this skip-thoughts based approach as a strong baseline to compare against.", "Fazly et al.", "(2009) formed a set of eleven lexico-syntactic patterns for VNC instances capturing the voice of the verb (active or passive), determiner (e.g., a, the), and number of the noun (singular or plural).", "They then determine the canonical form, C(v, n), for a given VNC as follows: C(v, n) = {pt_k ∈ P | z(v, n, pt_k) > T_z} (1), where P is the set of patterns, T_z is a predetermined threshold, which is set to 1, and z(v, n, pt_k) is calculated as follows: z(v, n, pt_k) = (f(v, n, pt_k) − f̄) / s (2), where f(·) is the frequency of a VNC occurring in a given pattern in a corpus, and f̄ and s are the mean and standard deviation of the frequencies over all patterns for the given VNC, respectively.", "Fazly et al.", "(2009) showed that idiomatic usages of a VNC tend to occur in that expression's canonical form, while literal usages do not.", "This approach provides a strong, linguistically-informed, unsupervised baseline, referred to as CForm, for predicting whether VNC instances are idiomatic or literal.", "In this paper we incorporate knowledge of canonical forms into embedding-based approaches to VNC token classification, and show that this linguistic knowledge can be leveraged to improve such approaches.", "Models We describe the models used to represent VNC token instances below.", "For each model, a linear SVM classifier is trained on these representations.", "Word2vec We trained word2vec's skip-gram model (Mikolov et al., 2013) on a snapshot of Wikipedia from September 2015, which consists of approximately 2.6 billion tokens.", "We used a window size of ±8 and 300 dimensions.", "We ignored all words that occur fewer than fifteen times in the training corpus, and did not set a maximum vocabulary size.", "We performed negative sampling and set the number of training epochs to five.", "We used batch processing with approximately 10k words in each batch.", "To embed a given sentence containing a VNC token instance, we average the word embeddings for each word in the sentence, including stopwords.", "Prior to averaging, we normalize each embedding to have unit length.", "Siamese CBOW The Siamese CBOW model (Kenter et al., 2016) learns word embeddings that are better able to represent a sentence through averaging than conventional
word embeddings such as skip-gram or CBOW.", "We use a Siamese CBOW model that was pretrained on a snapshot of Wikipedia from November 2012 using randomly initialized word embeddings.", "Similarly to the word2vec model, to embed a given sentence containing a VNC instance, we average the word embeddings for each word in the sentence.", "Skip-thoughts We use a publicly-available skip-thoughts model that was pre-trained on a corpus of books.", "We represent a given sentence containing a VNC instance using the skip-thoughts encoder.", "Note that this approach is our re-implementation of the skip-thoughts based method of Salton et al.", "(2016), and we use it as a strong baseline for comparison.", "Data and evaluation In this section, we discuss the dataset used in our experiments, and the evaluation of our models.", "Dataset We use the VNC-Tokens dataset (Cook et al., 2008) - the same dataset used by Fazly et al.", "(2009) and Salton et al.", "(2016) - to train and evaluate our models.", "This dataset consists of sentences containing VNC usages drawn from the British National Corpus (Burnard, 2000), along with a label indicating whether the VNC is an idiomatic or literal usage (or whether this cannot be determined, in which case it is labelled \"unknown\").", "VNC-Tokens is divided into DEV and TEST sets that each include fourteen VNC types and a total of roughly six hundred instances of these types annotated as literal or idiomatic.", "Following Salton et al.", "(2016), we use DEV and TEST, and ignore all token instances annotated as \"unknown\".", "Fazly et al.", "(2009) We then divide each of these into training and testing sets, using the same ratios of idiomatic to literal usages for each expression as Salton et al.", "(2016).", "This allows us to develop and tune a model on DEV, and then determine whether, when retrained on instances of unseen VNCs in (the training portion of) TEST, that model is able to generalize to new VNCs without further tuning to the specific expressions in TEST.", "Evaluation The proportion of idiomatic usages in the testing portions of both DEV and TEST is 63%.", "We therefore use accuracy to evaluate our models following Fazly et al.", "(2009) because the classes are roughly balanced.", "We randomly divide both DEV and TEST into training and testing portions ten times, following Salton et al.", "(2016).", "For each of the ten runs, we compute the accuracy for each expression, and then compute the average accuracy over the expressions.", "We then report the average accuracy over the ten runs.", "Experimental results In this section we first consider the effect of tuning the cost parameter of the SVM for each model on DEV, and then report results on DEV and TEST using the tuned models.", "Parameter tuning We tune the SVM for each model on DEV by carrying out a linear search for the penalty cost from 0.01-100, increasing by a factor of ten each time.", "Results for this parameter tuning are shown in Table 1.", "These results highlight the importance of choosing an appropriate setting for the penalty cost.", "For example, the accuracy of the word2vec model ranges from 0.619-0.830 depending on the cost setting.", "In subsequent experiments, for each model, we use the penalty cost that achieves the highest accuracy in Table 1.", "DEV and TEST results In Table 2 we report results on DEV and TEST for each model, as well as the unsupervised CForm model of Fazly et al.", "(2009), which simply labels a VNC as idiomatic if it occurs in its canonical form, and as
literal otherwise.", "We further consider each model (other than CForm) in two setups.", "−CF corresponds to the models as described in Section 3.", "+CF further incorporates lexico-syntactic knowledge of canonical forms into each model by concatenating the embedding representing each VNC token instance with a one-dimensional vector which is one if the VNC occurs in its canonical form, and zero otherwise.", "We first consider results for the −CF setup.", "On both DEV and TEST, the accuracy achieved by each supervised model is higher than that of the unsupervised CForm approach, except for Siamese CBOW on TEST.", "The word2vec model achieves the highest accuracy on DEV and TEST of 0.830 and 0.804, respectively.", "The difference between the word2vec model and the next-best model, skip-thoughts, is significant using a bootstrap test (Berg-Kirkpatrick et al., 2012) with 10k repetitions for DEV (p = 0.006), but not for TEST (p = 0.051).", "Nevertheless, it is remarkable that the relatively simple approach to averaging word embeddings used by word2vec performs as well as, or better than, the much more complex skip-thoughts model used by Salton et al.", "(2016) .", "8 The word2vec and skip-thoughts models were trained on different corpora, which could contribute to the differences in results for these models.", "We therefore carried out an additional experiment in which we trained word2vec on Book-Corpus, the corpus on which skip-thoughts was trained.", "This new word2vec model achieved accuracies of 0.825 and 0.809, on DEV and TEST, respectively, which are also higher accuracies than those obtained by the skip-thoughts model.", "Turning to the +CF setup, we observe that, for both DEV and TEST, each model achieves higher accuracy than in the −CF setup.", "9 All of these differences are significant using a bootstrap test (p < 0.002 in each case).", "In addition, each method outperforms the unsupervised CForm approach on both DEV and TEST.", "These findings demonstrate that the linguistically-motivated, lexico-syntactic knowledge encoded by the canonical form feature is complementary to the information from a wide range of types of distributed representations.", "In the +CF setup, the word2vec model again achieves the highest accuracy on both DEV and TEST of 0.854 and 0.852, respectively.", "10 The difference between the word2vec model and the next-best model, again skip-thoughts, is significant for both DEV and TEST using a bootstrap test (p < 0.05 in each case).", "To better understand the impact of the canonical form feature when combined with the word2vec model, we compute the average precision, recall, and F1 score for each MWE for both the positive (idiomatic) and negative (literal) classes, for each run on TEST.", "11 For a given run, we then compute the average precision, recall, and F1 score across all MWEs, and then the average over all ten runs.", "We do this using CForm, and the word2vec model with and without the canonical form feature.", "Results are shown in Table 3 .", "In line with the findings of Fazly et al.", "(2009) , CForm achieves higher precision and recall on idiomatic usages than literal ones.", "In particular, the relatively low recall for the literal class indicates that many literal usages occur in a canonical form.", "Comparing the word2vec model with and without the canonical form feature, we see that, when this feature is used, there is a relatively larger increase in precision and recall (and F1 score) for the literal class, than for the idiomatic class.", "This indicates that, although the", 
"9 In order to determine that this improvement is due to the information about canonical forms carried by the additional feature in the +CF setup, and not due to the increase in number of dimensions, we performed additional experiments in which we concatenated the embedding representations with a random binary feature, and with a randomly chosen value between 0 and 1.", "For each model, neither of these approaches outperformed that model using the +CF setup.", "10 In the +CF setup, the word2vec model using embeddings that were trained on the same corpus as skip-thoughts achieved accuracies of 0.846 and 0.851, on DEV and TEST, respectively.", "These are again higher accuracies than the corresponding setup for the skip-thoughts model.", "11 We carried out the same analysis on DEV.", "The findings were similar.", "Conclusions Determining whether a usage of a VNC is idiomatic or literal is important for applications such as machine translation, where it is vital to preserve the meanings of word combinations.", "In this paper we proposed two approaches to the task of classifying VNC token instances as idiomatic or literal based on word2vec embeddings and Siamese CBOW.", "We compared these approaches against a linguistically-informed unsupervised baseline, and a model based on skip-thoughts previously applied to this task (Salton et al., 2016) .", "Our experimental results show that a comparatively simple approach based on averaging word embeddings performs at least as well as, or better than, the approach based on skip-thoughts.", "We further proposed methods to combine linguistic knowledge of the lexico-syntactic fixedness of VNCs -so-called \"canonical forms\", which can be automatically acquired from corpora via statistical methods -with the embedding-based approaches.", "Our findings indicate that this rich linguistic knowledge is complementary to that available in distributed representations.", "Alternative approaches to embedding sentences containing VNC instances could also be considered, for example, FastSent (Hill et al., 2016) .", "However, all of the models we used represent the context of a VNC by the sentence in which it occurs.", "In future work we therefore also intend to consider approaches such as context2vec (Melamud et al., 2016) which explicitly encode the context in which a token occurs.", "Finally, one known challenge of VNC token classification is to develop models that are able to generalize to VNC types that were not seen during training (Gharbieh et al., 2016) .", "In future work we plan to explore this experimental setup." ] }
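The word2vec-based classifier described in the content above can be summarized in code: sentences containing a VNC token are embedded by averaging unit-normalized word vectors (stopwords included), the +CF setup concatenates a one-dimensional 0/1 canonical-form indicator, and a linear SVM's penalty cost is tuned by a linear search from 0.01 to 100, increasing by a factor of ten. The following is a minimal sketch, not the authors' code; the use of gensim and scikit-learn, the hypothetical model path, and all function names are assumptions.

```python
# Sketch of the -CF / +CF word2vec pipeline, assuming gensim + scikit-learn.
import numpy as np
from gensim.models import KeyedVectors
from sklearn.metrics import accuracy_score
from sklearn.svm import LinearSVC

# wv = KeyedVectors.load_word2vec_format("w2v.bin", binary=True)  # hypothetical path

def embed_sentence(tokens, wv):
    """Average unit-length word vectors over all in-vocabulary tokens."""
    vecs = [wv[t] / np.linalg.norm(wv[t]) for t in tokens if t in wv]
    if not vecs:  # fall back to zeros for a fully out-of-vocabulary sentence
        return np.zeros(wv.vector_size)
    return np.mean(vecs, axis=0)

def featurize(sentences, wv, cform_flags=None):
    """-CF setup: stacked sentence embeddings. +CF setup: the same
    embeddings with a one-dimensional canonical-form indicator appended."""
    X = np.stack([embed_sentence(s, wv) for s in sentences])
    if cform_flags is not None:
        X = np.hstack([X, np.asarray(cform_flags, dtype=float)[:, None]])
    return X

def tune_and_train(X_train, y_train, X_dev, y_dev):
    """Linear search for the SVM penalty cost on DEV (Section 5.1)."""
    best_acc, best_clf = -1.0, None
    for c in (0.01, 0.1, 1.0, 10.0, 100.0):
        clf = LinearSVC(C=c).fit(X_train, y_train)
        acc = accuracy_score(y_dev, clf.predict(X_dev))
        if acc > best_acc:
            best_acc, best_clf = acc, clf
    return best_clf
```

In the paper's terms, calling featurize with cform_flags=None corresponds to the −CF setup, while passing the binary canonical-form flags corresponds to +CF.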
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "4", "4.1", "4.2", "5", "5.1", "5.2", "6" ], "paper_header_content": [ "Introduction", "Related work", "Models", "Word2vec", "Siamese CBOW", "Skip-thoughts", "Data and evaluation", "Dataset", "Evaluation", "Experimental results", "Parameter tuning", "DEV and TEST results", "Conclusions" ] }
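The evaluation protocol of Section 4.2 (ten random train/test splits; accuracy computed per expression, averaged over expressions, then averaged over runs) and the bootstrap significance testing referred to in the results can be sketched as follows. This is an illustrative reconstruction under stated assumptions: the split helper and train_fn are hypothetical, and the bootstrap function implements one common formulation of the paired bootstrap of Berg-Kirkpatrick et al. (2012) with 10k repetitions, not code from the paper.

```python
# Sketch of the evaluation protocol and paired bootstrap test (assumptions noted above).
import numpy as np

def split_by_ratio(instances, rng, train_frac=0.8):
    """Hypothetical helper: per-expression split preserving each
    expression's idiomatic:literal ratio (after Salton et al., 2016)."""
    train, test = [], []
    for key in {(e, l) for e, _, l in instances}:
        group = [t for t in instances if (t[0], t[2]) == key]
        rng.shuffle(group)
        cut = max(1, int(train_frac * len(group)))
        train, test = train + group[:cut], test + group[cut:]
    return train, test

def evaluate(instances, train_fn, n_runs=10, seed=0):
    """instances: (expression, feature_vector, label) triples;
    train_fn(X, y) is any callable returning a fitted classifier."""
    rng = np.random.default_rng(seed)
    run_accs = []
    for _ in range(n_runs):
        train, test = split_by_ratio(instances, rng)
        clf = train_fn(np.stack([x for _, x, _ in train]),
                       np.array([l for _, _, l in train]))
        per_expr = []
        for expr in sorted({e for e, _, _ in test}):
            X = np.stack([x for e, x, _ in test if e == expr])
            y = np.array([l for e, _, l in test if e == expr])
            per_expr.append(float(np.mean(clf.predict(X) == y)))
        run_accs.append(np.mean(per_expr))  # average over expressions
    return float(np.mean(run_accs))         # average over the ten runs

def bootstrap_test(correct_a, correct_b, reps=10_000, seed=0):
    """Paired bootstrap over test instances: fraction of resamples where
    system A's accuracy gain over B exceeds twice the observed gain."""
    rng = np.random.default_rng(seed)
    a, b = np.asarray(correct_a, float), np.asarray(correct_b, float)
    observed = a.mean() - b.mean()
    idx = rng.integers(0, len(a), size=(reps, len(a)))
    gains = a[idx].mean(axis=1) - b[idx].mean(axis=1)
    return float(np.mean(gains > 2 * observed))
```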
GEM-SciDuet-train-69#paper-1148#slide-0
Multiword Expressions
Expressions of multiple words that can exhibit an idiomatic meaning
Expressions of multiple words that can exhibit an idiomatic meaning
[]
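The canonical-form computation of Fazly et al. (2009), summarized in the paper content above, z-scores a VNC's corpus frequency in each of eleven lexico-syntactic patterns against the mean and standard deviation over all patterns, and keeps patterns whose score exceeds the threshold T_z = 1. The following is a minimal sketch of that calculation; the pattern names and the pattern_counts input format are illustrative assumptions, not drawn from the paper.

```python
# Sketch of canonical-form determination via pattern-frequency z-scores.
import numpy as np

def canonical_forms(pattern_counts, t_z=1.0):
    """pattern_counts: dict mapping each lexico-syntactic pattern
    (e.g. 'active/null/plural') to the corpus frequency f(v, n, pt_k)
    of the VNC in that pattern. Returns the set C(v, n) of patterns
    with z(v, n, pt_k) = (f - mean) / std > t_z."""
    freqs = np.array(list(pattern_counts.values()), dtype=float)
    mean, std = freqs.mean(), freqs.std()
    if std == 0:  # degenerate case: uniform counts, no canonical form
        return set()
    return {pt for pt, f in pattern_counts.items()
            if (f - mean) / std > t_z}

# Usage sketch with made-up counts: 'see stars' occurs overwhelmingly as
# active voice / null determiner / plural noun, so that pattern is selected.
counts = {"active/null/plural": 90, "active/a/singular": 5,
          "active/the/plural": 3, "passive/null/plural": 2}
print(canonical_forms(counts))  # -> {'active/null/plural'}
```

The unsupervised CForm baseline discussed in the results then labels a VNC token instance as idiomatic exactly when it occurs in one of these canonical-form patterns.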
GEM-SciDuet-train-69#paper-1148#slide-1
GEM-SciDuet-train-69#paper-1148#slide-1
Idiomatic vs Literal
(I) They pulled the plug on the Department of (L) Unfortunately someone pulled the sink plug (I) It caught him on the head and he went down seeing little sparkling stars (L) It's still dark enough to see the brightest stars
(I) They pulled the plug on the Department of (L) Unfortunately someone pulled the sink plug (I) It caught him on the head and he went down seeing little sparkling stars (L) It's still dark enough to see the brightest stars
[]
GEM-SciDuet-train-69#paper-1148#slide-2
GEM-SciDuet-train-69#paper-1148#slide-2
Idiom Token Classification
Determine if an MWE instance is idiomatic They pulled the plug on the project [Idiomatic/Literal] Kick the bucket [mourir/frapper avec le pied] Keegan is ready to pull the plug on [a deal/the tv]
Determine if an MWE instance is idiomatic They pulled the plug on the project [Idiomatic/Literal] Kick the bucket [mourir/frapper avec le pied] Keegan is ready to pull the plug on [a deal/the tv]
[]
GEM-SciDuet-train-69#paper-1148#slide-3
1148
Leveraging distributed representations and lexico-syntactic fixedness for token-level prediction of the idiomaticity of English verb-noun combinations
Verb-noun combinations (VNCs) -e.g., blow the whistle, hit the roof, and see stars -are a common type of English idiom that are ambiguous with literal usages. In this paper we propose and evaluate models for classifying VNC usages as idiomatic or literal, based on a variety of approaches to forming distributed representations. Our results show that a model based on averaging word embeddings performs on par with, or better than, a previously-proposed approach based on skip-thoughts. Idiomatic usages of VNCs are known to exhibit lexico-syntactic fixedness. We further incorporate this information into our models, demonstrating that this rich linguistic knowledge is complementary to the information carried by distributed representations.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118 ], "paper_content_text": [ "Introduction Multiword expressions (MWEs) are combinations of multiple words that exhibit some degree of idiomaticity (Baldwin and Kim, 2010) .", "Verb-noun combinations (VNCs), consisting of a verb with a noun in its direct object position, are a common type of semantically-idiomatic MWE in English and cross-lingually (Fazly et al., 2009) .", "Many VNCs are ambiguous between MWEs and literal combinations, as in the following examples of see stars, in which 1 is an idiomatic usage (i.e., an MWE), while 2 is a literal combination.", "1 1.", "Hereford United were seeing stars at Gillingham after letting in 2 early goals 2.", "Look into the night sky to see the stars MWE identification is the task of automatically determining which word combinations at the token-level form MWEs (Baldwin and Kim, 2010) , and must be able to make such distinctions.", "This is particularly important for applications such as machine translation (Sag et al., 2002) , where the appropriate meaning of word combinations in context must be preserved for accurate translation.", "In this paper, following prior work (e.g., Salton et al., 2016) , we frame token-level identification of VNCs as a supervised binary classification problem, i.e., idiomatic vs. literal.", "We consider a range of approaches to forming distributed representations of the context in which a VNC occurs, including word embeddings (Mikolov et al., 2013) , word embeddings tailored to representing sentences (Kenter et al., 2016) , and skip-thoughts sentence embeddings (Kiros et al., 2015) .", "We then train a support vector machine (SVM) on these representations to classify unseen VNC instances.", "Surprisingly, we find that an approach based on representing sentences as the average of their word embeddings performs comparably to, or better than, the skip-thoughts based approach previously proposed by Salton et al.", "(2016) .", "VNCs exhibit lexico-syntactic fixedness.", "For example, the idiomatic interpretation in example 1 above is typically only accessible when the verb see has active voice, the determiner is null, and the noun star is in plural form, as in see stars or seeing stars.", "Usages with a determiner (as in example 2), a singular noun (e.g., see a star), or passive voice (e.g., stars were seen) typically only have the literal interpretation.", "In this paper we further incorporate knowledge of the lexico-syntactic fixedness of VNCs -automatically acquired from corpora using the method of Fazly et al.", "(2009) -into our various embedding-based approaches.", "Our experimental results show that this leads to substantial improve-ments, indicating that this rich linguistic knowledge is complementary to that available in distributed representations.", "Related work Much research on MWE identification has focused on specific kinds of MWEs (e.g., Patrick and Fletcher, 2005; Uchiyama et al., 2005) , including English VNCs (e.g., Fazly et al., 2009; Salton et al., 2016) , although some recent work has considered the 
identification of a broad range of kinds of MWEs (e.g., Schneider et al., 2014; Brooke et al., 2014; Savary et al., 2017) .", "Work on MWE identification has leveraged rich linguistic knowledge of the constructions under consideration (e.g., Fazly et al., 2009; Fothergill and Baldwin, 2012) , treated literal and idiomatic as two senses of an expression and applied approaches similar to word-sense disambiguation (e.g., Birke and Sarkar, 2006; Hashimoto and Kawahara, 2008) , incorporated topic models (e.g., Li et al., 2010) , and made use of distributed representations of words (Gharbieh et al., 2016) .", "In the most closely related work to ours, Salton et al.", "(2016) represent token instances of VNCs by embedding the sentence that they occur in using skip-thoughts (Kiros et al., 2015) -an encoderdecoder model that can be viewed as a sentencelevel counterpart to the word2vec (Mikolov et al., 2013 ) skip-gram model.", "During training the target sentence is encoded using a recurrent neural network, and is used to predict the previous and next sentences.", "Salton et al.", "then use these sentence embeddings, representing VNC token instances, as features in a supervised classifier.", "We treat this skip-thoughts based approach as a strong baseline to compare against.", "Fazly et al.", "(2009) formed a set of eleven lexicosyntactic patterns for VNC instances capturing the voice of the verb (active or passive), determiner (e.g., a, the), and number of the noun (singular or plural).", "They then determine the canonical form, C(v, n), for a given VNC as follows: 2 C(v, n) = {pt k ∈ P |z(v, n, pt k ) > T z } (1) where P is the set of patterns, T z is a predetermined threshold, which is set to 1, and z(v, n, pt k ) is calculated as follows: z(v, n, pt k ) = f (v, n, pt k ) − f s (2) where f (·) is the frequency of a VNC occurring in a given pattern in a corpus, 3 and f and s are the mean and standard deviations for all patterns for the given VNC, respectively.", "Fazly et al.", "(2009) showed that idiomatic usages of a VNC tend to occur in that expression's canonical form, while literal usages do not.", "This approach provides a strong, linguistically-informed, unsupervised baseline, referred to as CForm, for predicting whether VNC instances are idiomatic or literal.", "In this paper we incorporate knowledge of canonical forms into embedding-based approaches to VNC token classification, and show that this linguistic knowledge can be leveraged to improve such approaches.", "Models We describe the models used to represent VNC token instances below.", "For each model, a linear SVM classifier is trained on these representations.", "Word2vec We trained word2vec's skip-gram model (Mikolov et al., 2013 ) on a snapshot of Wikipedia from September 2015, which consists of approximately 2.6 billion tokens.", "We used a window size of ±8 and 300 dimensions.", "We ignore all words that occur less than fifteen times in the training corpus, and did not set a maximum vocabulary size.", "We perform negative sampling and set the number of training epochs to five.", "We used batch processing with approximately 10k words in each batch.", "To embed a given a sentence containing a VNC token instance, we average the word embeddings for each word in the sentence, including stopwords.", "4 Prior to averaging, we normalize each embedding to have unit length.", "Siamese CBOW The Siamese CBOW model (Kenter et al., 2016) learns word embeddings that are better able to represent a sentence through averaging than conventional 
word embeddings such as skip-gram or CBOW.", "We use a Siamese CBOW model that was pretrained on a snapshot of Wikipedia from November 2012 using randomly initialized word embeddings.", "5 Similarly to the word2vec model, to embed a given sentence containing a VNC instance, we average the word embeddings for each word in the sentence.", "Skip-thoughts We use a publicly-available skip-thoughts model, that was pre-trained on a corpus of books.", "6 We represent a given sentence containing a VNC instance using the skip-thoughts encoder.", "Note that this approach is our re-implementation of the skipthoughts based method of Salton et al.", "(2016) , and we use it as a strong baseline for comparison.", "Data and evaluation In this section, we discuss the dataset used in our experiments, and the evaluation of our models.", "Dataset We use the VNC-Tokens dataset (Cook et al., 2008) -the same dataset used by Fazly et al.", "(2009) and Salton et al.", "(2016) -to train and evaluate our models.", "This dataset consists of sentences containing VNC usages drawn from the British National Corpus (Burnard, 2000) , 7 along with a label indicating whether the VNC is an idiomatic or literal usage (or whether this cannot be determined, in which case it is labelled \"unknown\").", "VNC-Tokens is divided into DEV and TEST sets that each include fourteen VNC types and a total of roughly six hundred instances of these types annotated as literal or idiomatic.", "Following Salton et al.", "(2016) , we use DEV and TEST, and ignore all token instances annotated as \"unknown\".", "Fazly et al.", "(2009) We then divide each of these into training and testing sets, using the same ratios of idiomatic to literal usages for each expression as Salton et al.", "(2016) .", "This allows us to develop and tune a model on DEV, and then determine whether, when retrained on instances of unseen VNCs in (the training portion of) TEST, that model is able to generalize to new VNCs without further tuning to the specific expressions in TEST.", "Evaluation The proportion of idiomatic usages in the testing portions of both DEV and TEST is 63%.", "We therefore use accuracy to evaluate our models following Fazly et al.", "(2009) because the classes are roughly balanced.", "We randomly divide both DEV and TEST into training and testing portions ten times, following Salton et al.", "(2016) .", "For each of the ten runs, we compute the accuracy for each expression, and then compute the average accuracy over the expressions.", "We then report the average accuracy over the ten runs.", "Experimental results In this section we first consider the effect of tuning the cost parameter of the SVM for each model on DEV, and then report results on DEV and TEST using the tuned models.", "Parameter tuning We tune the SVM for each model on DEV by carrying out a linear search for the penalty cost from 0.01-100, increasing by a factor of ten each time.", "Results for this parameter tuning are shown in Table 1 .", "These results highlight the importance of choosing an appropriate setting for the penalty cost.", "For example, the accuracy of the word2vec model ranges from 0.619-0.830 depending on the cost setting.", "In subsequent experiments, for each model, we use the penalty cost that achieves the highest accuracy in Table 1 .", "DEV and TEST results In Table 2 we report results on DEV and TEST for each model, as well as the unsupervised CForm model of Fazly et al.", "(2009) , which simply labels a VNC as idiomatic if it occurs in its canonical form, and as 
literal otherwise.", "We further consider each model (other than CForm) in two setups.", "−CF corresponds to the models as described in Section 3.", "+CF further incorporates lexico-syntactic knowledge of canonical forms into each model by concatenating the embedding representing each VNC token instance with a one-dimensional vector which is one if the VNC occurs in its canonical form, and zero otherwise.", "We first consider results for the −CF setup.", "On both DEV and TEST, the accuracy achieved by each supervised model is higher than that of the unsupervised CForm approach, except for Siamese CBOW on TEST.", "The word2vec model achieves the highest accuracy on DEV and TEST of 0.830 and 0.804, respectively.", "The difference between the word2vec model and the next-best model, skip-thoughts, is significant using a bootstrap test (Berg-Kirkpatrick et al., 2012) with 10k repetitions for DEV (p = 0.006), but not for TEST (p = 0.051).", "Nevertheless, it is remarkable that the relatively simple approach to averaging word embeddings used by word2vec performs as well as, or better than, the much more complex skipthoughts model used by Salton et al.", "(2016) .", "8 8 The word2vec and skip-thoughts models were trained on different corpora, which could contribute to the differences in results for these models.", "We therefore carried out an additional experiment in which we trained word2vec on Book-Corpus, the corpus on which skip-thoughts was trained.", "This new word2vec model achieved accuracies of 0.825 and 0.809, on DEV and TEST, respectively, which are also higher accu-Turning to the +CF setup, we observe that, for both DEV and TEST, each model achieves higher accuracy than in the −CF setup.", "9 All of these differences are significant using a bootstrap test (p < 0.002 in each case).", "In addition, each method outperforms the unsupervised CForm approach on both DEV and TEST.", "These findings demonstrate that the linguistically-motivated, lexico-syntactic knowledge encoded by the canonical form feature is complementary to the information from a wide range of types of distributed representations.", "In the +CF setup, the word2vec model again achieves the highest accuracy on both DEV and TEST of 0.854 and 0.852, respectively.", "10 The difference between the word2vec model and the next-best model, again skip-thoughts, is significant for both DEV and TEST using a bootstrap test (p < 0.05 in each case).", "To better understand the impact of the canonical form feature when combined with the word2vec model, we compute the average precision, recall, and F1 score for each MWE for both the positive (idiomatic) and negative (literal) classes, for each run on TEST.", "11 For a given run, we then compute the average precision, recall, and F1 score across all MWEs, and then the average over all ten runs.", "We do this using CForm, and the word2vec model with and without the canonical form feature.", "Results are shown in Table 3 .", "In line with the findings of Fazly et al.", "(2009) , CForm achieves higher precision and recall on idiomatic usages than literal ones.", "In particular, the relatively low recall for the literal class indicates that many literal usages occur in a canonical form.", "Comparing the word2vec model with and without the canonical form feature, we see that, when this feature is used, there is a relatively larger increase in precision and recall (and F1 score) for the literal class, than for the idiomatic class.", "This indicates that, although the racies than those obtained by 
the skip-thoughts model.", "9 In order to determine that this improvement is due to the information about canonical forms carried by the additional feature in the +CF setup, and not due to the increase in number of dimensions, we performed additional experiments in which we concatenated the embedding representations with a random binary feature, and with a randomly chosen value between 0 and 1.", "For each model, neither of these approaches outperformed that model using the +CF setup.", "10 In the +CF setup, the word2vec model using embeddings that were trained on the same corpus as skip-thoughts achieved accuracies of 0.846 and 0.851, on DEV and TEST, respectively.", "These are again higher accuracies than the corresponding setup for the skip-thoughts model.", "11 We carried out the same analysis on DEV.", "The findings were similar.", "Conclusions Determining whether a usage of a VNC is idiomatic or literal is important for applications such as machine translation, where it is vital to preserve the meanings of word combinations.", "In this paper we proposed two approaches to the task of classifying VNC token instances as idiomatic or literal based on word2vec embeddings and Siamese CBOW.", "We compared these approaches against a linguistically-informed unsupervised baseline, and a model based on skip-thoughts previously applied to this task (Salton et al., 2016) .", "Our experimental results show that a comparatively simple approach based on averaging word embeddings performs at least as well as, or better than, the approach based on skip-thoughts.", "We further proposed methods to combine linguistic knowledge of the lexico-syntactic fixedness of VNCs -socalled \"canonical forms\", which can be automatically acquired from corpora via statistical methods -with the embedding based approaches.", "Our findings indicate that this rich linguistic knowledge is complementary to that available in distributed representations.", "Alternative approaches to embedding sentences containing VNC instances could also be considered, for example, FastSent (Hill et al., 2016) .", "However, all of the models we used represent the context of a VNC by the sentence in which it occurs.", "In future work we therefore also intend to consider approaches such as context2vec (Melamud et al., 2016) which explicitly encode the context in which a token occurs.", "Finally, one known challenge of VNC token classification is to develop models that are able to generalize to VNC types that were not seen during training (Gharbieh et al., 2016) .", "In future work we plan to explore this experimental setup." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "4", "4.1", "4.2", "5", "5.1", "5.2", "6" ], "paper_header_content": [ "Introduction", "Related work", "Models", "Word2vec", "Siamese CBOW", "Skip-thoughts", "Data and evaluation", "Dataset", "Evaluation", "Experimental results", "Parameter tuning", "DEV and TEST results", "Conclusions" ] }
GEM-SciDuet-train-69#paper-1148#slide-3
Overview of Approach
VNC token instances are represented using an embedding model
VNC token instances are represented using an embedding model
[]
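The slide above summarizes the paper's pipeline: embed the sentence containing the VNC token, then classify with a linear SVM. A minimal sketch follows, assuming gensim and scikit-learn; the embedding file path, the toy training pairs, and the helper name `embed_sentence` are hypothetical stand-ins. The `cf_flag` argument implements the one-dimensional canonical-form feature of the +CF setup described in the results above.

```python
import numpy as np
from gensim.models import KeyedVectors
from sklearn.svm import LinearSVC

# Hypothetical path to pretrained 300-d skip-gram vectors (as in the paper).
kv = KeyedVectors.load_word2vec_format("word2vec.bin", binary=True)

def embed_sentence(kv, sentence, cf_flag=None):
    """Average the unit-normalized embeddings of all in-vocabulary words;
    optionally append the +CF canonical-form flag."""
    vecs = [kv[w] / np.linalg.norm(kv[w]) for w in sentence.lower().split() if w in kv]
    avg = np.mean(vecs, axis=0)
    if cf_flag is not None:
        avg = np.concatenate([avg, [float(cf_flag)]])  # 1 if in canonical form, else 0
    return avg

# Hypothetical stand-ins for VNC-Tokens instances: (sentence, in-canonical-form, idiomatic?)
train = [("hereford were seeing stars after letting in two early goals", 1, 1),
         ("look into the night sky to see the stars", 0, 0)]
X = np.vstack([embed_sentence(kv, s, cf) for s, cf, _ in train])
y = [lab for _, _, lab in train]
clf = LinearSVC(C=1.0).fit(X, y)  # the paper tunes C by linear search over 0.01-100 on DEV
```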
GEM-SciDuet-train-69#paper-1148#slide-4
GEM-SciDuet-train-69#paper-1148#slide-4
Lexico-Syntactic Fixedness
The idiomatic meaning of an expression is typically restricted to a small number of lexico-syntactic patterns: Active voice, no determiner, plural noun; Active voice, determiner, singular noun; Passive voice, plural noun
The idiomatic meaning of an expression is typically restricted to a small number of lexico-syntactic patterns: Active voice, no determiner, plural noun; Active voice, determiner, singular noun; Passive voice, plural noun
[]
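The fixedness patterns on this slide vary the verb's voice, the determiner, and the noun's number. The sketch below maps such a triple to a pattern label; the reduced pattern space and the function name `pattern_of` are illustrative assumptions, not the exact eleven-pattern inventory of Fazly et al. (2009).

```python
def pattern_of(voice, determiner, noun_number):
    """voice: 'active' or 'passive'; determiner: None or a determiner string;
    noun_number: 'sg' or 'pl'."""
    det = "nodet" if determiner is None else "det"
    return f"{voice}_{det}_{noun_number}"

pattern_of("active", None, "pl")   # 'active_nodet_pl', e.g. "seeing stars" (idiomatic reading)
pattern_of("active", "the", "pl")  # 'active_det_pl',   e.g. "see the stars" (literal reading)
```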
GEM-SciDuet-train-69#paper-1148#slide-5
GEM-SciDuet-train-69#paper-1148#slide-5
Patterns
Afsaneh Fazly et al. 2009
Afsaneh Fazly et al. 2009
[]
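Fazly et al. (2009), cited on this slide, define a VNC's canonical forms as the patterns whose frequency z-score exceeds a threshold T_z = 1, with z(v, n, pt_k) = (f(v, n, pt_k) - mean) / std computed over all patterns for that VNC (Eqs. 1-2 of the paper). A short sketch under that definition; the frequency table is hypothetical.

```python
import statistics

def canonical_forms(freq_by_pattern, t_z=1.0):
    """Patterns whose corpus frequency z-score exceeds t_z, per Eqs. (1)-(2)."""
    freqs = list(freq_by_pattern.values())
    mean, std = statistics.mean(freqs), statistics.pstdev(freqs)
    if std == 0:
        return set()
    return {pt for pt, f in freq_by_pattern.items() if (f - mean) / std > t_z}

# Hypothetical pattern frequencies for "see stars":
freqs = {"active_nodet_pl": 240, "active_det_pl": 35, "passive_pl": 12}
canonical_forms(freqs)  # {'active_nodet_pl'}
```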
GEM-SciDuet-train-69#paper-1148#slide-6
GEM-SciDuet-train-69#paper-1148#slide-6
Canonical Form
Lexico-syntactic patterns that idiomatic usages tend to occur in (Afsaneh Fazly et al. 2009)
Lexico-syntactic patterns that idiomatic usages tend to occur in (Afsaneh Fazly et al. 2009)
[]
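The +CF setup described in the results text above is compact enough to sketch end to end: unit-normalize and average the word vectors of the sentence, append the binary canonical-form flag, and fit a linear SVM whose penalty cost is tuned on DEV. The following is a minimal illustration, not the authors' code; word_vectors (a token-to-vector mapping), in_cform (the precomputed canonical-form flag per instance), and the training arrays are assumed inputs.

# Minimal sketch of the -CF / +CF classification setup.
import numpy as np
from sklearn.svm import LinearSVC

def embed_sentence(tokens, word_vectors, dim=300):
    """Average the unit-normalized embeddings of all tokens, stopwords included."""
    vecs = [np.asarray(word_vectors[t]) / np.linalg.norm(word_vectors[t])
            for t in tokens if t in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def featurize(tokens, in_cform, word_vectors, use_cf=True):
    """-CF: sentence embedding only; +CF: append the 1-dim canonical-form flag."""
    emb = embed_sentence(tokens, word_vectors)
    return np.append(emb, float(in_cform)) if use_cf else emb

# X = np.stack([featurize(s, cf, word_vectors) for s, cf in instances])
# clf = LinearSVC(C=10.0)  # penalty cost tuned on DEV over 0.01-100 in the paper
# clf.fit(X, y)            # y: 1 = idiomatic, 0 = literal

Swapping embed_sentence for a Siamese CBOW or skip-thoughts encoder gives the other rows of Table 2 in outline.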
GEM-SciDuet-train-69#paper-1148#slide-7
1148
Leveraging distributed representations and lexico-syntactic fixedness for token-level prediction of the idiomaticity of English verb-noun combinations
Verb-noun combinations (VNCs) -e.g., blow the whistle, hit the roof, and see stars -are a common type of English idiom that are ambiguous with literal usages. In this paper we propose and evaluate models for classifying VNC usages as idiomatic or literal, based on a variety of approaches to forming distributed representations. Our results show that a model based on averaging word embeddings performs on par with, or better than, a previously-proposed approach based on skip-thoughts. Idiomatic usages of VNCs are known to exhibit lexico-syntactic fixedness. We further incorporate this information into our models, demonstrating that this rich linguistic knowledge is complementary to the information carried by distributed representations.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118 ], "paper_content_text": [ "Introduction Multiword expressions (MWEs) are combinations of multiple words that exhibit some degree of idiomaticity (Baldwin and Kim, 2010) .", "Verb-noun combinations (VNCs), consisting of a verb with a noun in its direct object position, are a common type of semantically-idiomatic MWE in English and cross-lingually (Fazly et al., 2009) .", "Many VNCs are ambiguous between MWEs and literal combinations, as in the following examples of see stars, in which 1 is an idiomatic usage (i.e., an MWE), while 2 is a literal combination.", "1 1.", "Hereford United were seeing stars at Gillingham after letting in 2 early goals 2.", "Look into the night sky to see the stars MWE identification is the task of automatically determining which word combinations at the token-level form MWEs (Baldwin and Kim, 2010) , and must be able to make such distinctions.", "This is particularly important for applications such as machine translation (Sag et al., 2002) , where the appropriate meaning of word combinations in context must be preserved for accurate translation.", "In this paper, following prior work (e.g., Salton et al., 2016) , we frame token-level identification of VNCs as a supervised binary classification problem, i.e., idiomatic vs. literal.", "We consider a range of approaches to forming distributed representations of the context in which a VNC occurs, including word embeddings (Mikolov et al., 2013) , word embeddings tailored to representing sentences (Kenter et al., 2016) , and skip-thoughts sentence embeddings (Kiros et al., 2015) .", "We then train a support vector machine (SVM) on these representations to classify unseen VNC instances.", "Surprisingly, we find that an approach based on representing sentences as the average of their word embeddings performs comparably to, or better than, the skip-thoughts based approach previously proposed by Salton et al.", "(2016) .", "VNCs exhibit lexico-syntactic fixedness.", "For example, the idiomatic interpretation in example 1 above is typically only accessible when the verb see has active voice, the determiner is null, and the noun star is in plural form, as in see stars or seeing stars.", "Usages with a determiner (as in example 2), a singular noun (e.g., see a star), or passive voice (e.g., stars were seen) typically only have the literal interpretation.", "In this paper we further incorporate knowledge of the lexico-syntactic fixedness of VNCs -automatically acquired from corpora using the method of Fazly et al.", "(2009) -into our various embedding-based approaches.", "Our experimental results show that this leads to substantial improve-ments, indicating that this rich linguistic knowledge is complementary to that available in distributed representations.", "Related work Much research on MWE identification has focused on specific kinds of MWEs (e.g., Patrick and Fletcher, 2005; Uchiyama et al., 2005) , including English VNCs (e.g., Fazly et al., 2009; Salton et al., 2016) , although some recent work has considered the 
identification of a broad range of kinds of MWEs (e.g., Schneider et al., 2014; Brooke et al., 2014; Savary et al., 2017) .", "Work on MWE identification has leveraged rich linguistic knowledge of the constructions under consideration (e.g., Fazly et al., 2009; Fothergill and Baldwin, 2012) , treated literal and idiomatic as two senses of an expression and applied approaches similar to word-sense disambiguation (e.g., Birke and Sarkar, 2006; Hashimoto and Kawahara, 2008) , incorporated topic models (e.g., Li et al., 2010) , and made use of distributed representations of words (Gharbieh et al., 2016) .", "In the most closely related work to ours, Salton et al.", "(2016) represent token instances of VNCs by embedding the sentence that they occur in using skip-thoughts (Kiros et al., 2015) -an encoderdecoder model that can be viewed as a sentencelevel counterpart to the word2vec (Mikolov et al., 2013 ) skip-gram model.", "During training the target sentence is encoded using a recurrent neural network, and is used to predict the previous and next sentences.", "Salton et al.", "then use these sentence embeddings, representing VNC token instances, as features in a supervised classifier.", "We treat this skip-thoughts based approach as a strong baseline to compare against.", "Fazly et al.", "(2009) formed a set of eleven lexicosyntactic patterns for VNC instances capturing the voice of the verb (active or passive), determiner (e.g., a, the), and number of the noun (singular or plural).", "They then determine the canonical form, C(v, n), for a given VNC as follows: 2 C(v, n) = {pt k ∈ P |z(v, n, pt k ) > T z } (1) where P is the set of patterns, T z is a predetermined threshold, which is set to 1, and z(v, n, pt k ) is calculated as follows: z(v, n, pt k ) = f (v, n, pt k ) − f s (2) where f (·) is the frequency of a VNC occurring in a given pattern in a corpus, 3 and f and s are the mean and standard deviations for all patterns for the given VNC, respectively.", "Fazly et al.", "(2009) showed that idiomatic usages of a VNC tend to occur in that expression's canonical form, while literal usages do not.", "This approach provides a strong, linguistically-informed, unsupervised baseline, referred to as CForm, for predicting whether VNC instances are idiomatic or literal.", "In this paper we incorporate knowledge of canonical forms into embedding-based approaches to VNC token classification, and show that this linguistic knowledge can be leveraged to improve such approaches.", "Models We describe the models used to represent VNC token instances below.", "For each model, a linear SVM classifier is trained on these representations.", "Word2vec We trained word2vec's skip-gram model (Mikolov et al., 2013 ) on a snapshot of Wikipedia from September 2015, which consists of approximately 2.6 billion tokens.", "We used a window size of ±8 and 300 dimensions.", "We ignore all words that occur less than fifteen times in the training corpus, and did not set a maximum vocabulary size.", "We perform negative sampling and set the number of training epochs to five.", "We used batch processing with approximately 10k words in each batch.", "To embed a given a sentence containing a VNC token instance, we average the word embeddings for each word in the sentence, including stopwords.", "4 Prior to averaging, we normalize each embedding to have unit length.", "Siamese CBOW The Siamese CBOW model (Kenter et al., 2016) learns word embeddings that are better able to represent a sentence through averaging than conventional 
word embeddings such as skip-gram or CBOW.", "We use a Siamese CBOW model that was pretrained on a snapshot of Wikipedia from November 2012 using randomly initialized word embeddings.", "5 Similarly to the word2vec model, to embed a given sentence containing a VNC instance, we average the word embeddings for each word in the sentence.", "Skip-thoughts We use a publicly-available skip-thoughts model, that was pre-trained on a corpus of books.", "6 We represent a given sentence containing a VNC instance using the skip-thoughts encoder.", "Note that this approach is our re-implementation of the skipthoughts based method of Salton et al.", "(2016) , and we use it as a strong baseline for comparison.", "Data and evaluation In this section, we discuss the dataset used in our experiments, and the evaluation of our models.", "Dataset We use the VNC-Tokens dataset (Cook et al., 2008) -the same dataset used by Fazly et al.", "(2009) and Salton et al.", "(2016) -to train and evaluate our models.", "This dataset consists of sentences containing VNC usages drawn from the British National Corpus (Burnard, 2000) , 7 along with a label indicating whether the VNC is an idiomatic or literal usage (or whether this cannot be determined, in which case it is labelled \"unknown\").", "VNC-Tokens is divided into DEV and TEST sets that each include fourteen VNC types and a total of roughly six hundred instances of these types annotated as literal or idiomatic.", "Following Salton et al.", "(2016) , we use DEV and TEST, and ignore all token instances annotated as \"unknown\".", "Fazly et al.", "(2009) We then divide each of these into training and testing sets, using the same ratios of idiomatic to literal usages for each expression as Salton et al.", "(2016) .", "This allows us to develop and tune a model on DEV, and then determine whether, when retrained on instances of unseen VNCs in (the training portion of) TEST, that model is able to generalize to new VNCs without further tuning to the specific expressions in TEST.", "Evaluation The proportion of idiomatic usages in the testing portions of both DEV and TEST is 63%.", "We therefore use accuracy to evaluate our models following Fazly et al.", "(2009) because the classes are roughly balanced.", "We randomly divide both DEV and TEST into training and testing portions ten times, following Salton et al.", "(2016) .", "For each of the ten runs, we compute the accuracy for each expression, and then compute the average accuracy over the expressions.", "We then report the average accuracy over the ten runs.", "Experimental results In this section we first consider the effect of tuning the cost parameter of the SVM for each model on DEV, and then report results on DEV and TEST using the tuned models.", "Parameter tuning We tune the SVM for each model on DEV by carrying out a linear search for the penalty cost from 0.01-100, increasing by a factor of ten each time.", "Results for this parameter tuning are shown in Table 1 .", "These results highlight the importance of choosing an appropriate setting for the penalty cost.", "For example, the accuracy of the word2vec model ranges from 0.619-0.830 depending on the cost setting.", "In subsequent experiments, for each model, we use the penalty cost that achieves the highest accuracy in Table 1 .", "DEV and TEST results In Table 2 we report results on DEV and TEST for each model, as well as the unsupervised CForm model of Fazly et al.", "(2009) , which simply labels a VNC as idiomatic if it occurs in its canonical form, and as 
literal otherwise.", "We further consider each model (other than CForm) in two setups.", "−CF corresponds to the models as described in Section 3.", "+CF further incorporates lexico-syntactic knowledge of canonical forms into each model by concatenating the embedding representing each VNC token instance with a one-dimensional vector which is one if the VNC occurs in its canonical form, and zero otherwise.", "We first consider results for the −CF setup.", "On both DEV and TEST, the accuracy achieved by each supervised model is higher than that of the unsupervised CForm approach, except for Siamese CBOW on TEST.", "The word2vec model achieves the highest accuracy on DEV and TEST of 0.830 and 0.804, respectively.", "The difference between the word2vec model and the next-best model, skip-thoughts, is significant using a bootstrap test (Berg-Kirkpatrick et al., 2012) with 10k repetitions for DEV (p = 0.006), but not for TEST (p = 0.051).", "Nevertheless, it is remarkable that the relatively simple approach to averaging word embeddings used by word2vec performs as well as, or better than, the much more complex skipthoughts model used by Salton et al.", "(2016) .", "8 8 The word2vec and skip-thoughts models were trained on different corpora, which could contribute to the differences in results for these models.", "We therefore carried out an additional experiment in which we trained word2vec on Book-Corpus, the corpus on which skip-thoughts was trained.", "This new word2vec model achieved accuracies of 0.825 and 0.809, on DEV and TEST, respectively, which are also higher accu-Turning to the +CF setup, we observe that, for both DEV and TEST, each model achieves higher accuracy than in the −CF setup.", "9 All of these differences are significant using a bootstrap test (p < 0.002 in each case).", "In addition, each method outperforms the unsupervised CForm approach on both DEV and TEST.", "These findings demonstrate that the linguistically-motivated, lexico-syntactic knowledge encoded by the canonical form feature is complementary to the information from a wide range of types of distributed representations.", "In the +CF setup, the word2vec model again achieves the highest accuracy on both DEV and TEST of 0.854 and 0.852, respectively.", "10 The difference between the word2vec model and the next-best model, again skip-thoughts, is significant for both DEV and TEST using a bootstrap test (p < 0.05 in each case).", "To better understand the impact of the canonical form feature when combined with the word2vec model, we compute the average precision, recall, and F1 score for each MWE for both the positive (idiomatic) and negative (literal) classes, for each run on TEST.", "11 For a given run, we then compute the average precision, recall, and F1 score across all MWEs, and then the average over all ten runs.", "We do this using CForm, and the word2vec model with and without the canonical form feature.", "Results are shown in Table 3 .", "In line with the findings of Fazly et al.", "(2009) , CForm achieves higher precision and recall on idiomatic usages than literal ones.", "In particular, the relatively low recall for the literal class indicates that many literal usages occur in a canonical form.", "Comparing the word2vec model with and without the canonical form feature, we see that, when this feature is used, there is a relatively larger increase in precision and recall (and F1 score) for the literal class, than for the idiomatic class.", "This indicates that, although the racies than those obtained by 
the skip-thoughts model.", "9 In order to determine that this improvement is due to the information about canonical forms carried by the additional feature in the +CF setup, and not due to the increase in number of dimensions, we performed additional experiments in which we concatenated the embedding representations with a random binary feature, and with a randomly chosen value between 0 and 1.", "For each model, neither of these approaches outperformed that model using the +CF setup.", "10 In the +CF setup, the word2vec model using embeddings that were trained on the same corpus as skip-thoughts achieved accuracies of 0.846 and 0.851, on DEV and TEST, respectively.", "These are again higher accuracies than the corresponding setup for the skip-thoughts model.", "11 We carried out the same analysis on DEV.", "The findings were similar.", "Conclusions Determining whether a usage of a VNC is idiomatic or literal is important for applications such as machine translation, where it is vital to preserve the meanings of word combinations.", "In this paper we proposed two approaches to the task of classifying VNC token instances as idiomatic or literal based on word2vec embeddings and Siamese CBOW.", "We compared these approaches against a linguistically-informed unsupervised baseline, and a model based on skip-thoughts previously applied to this task (Salton et al., 2016) .", "Our experimental results show that a comparatively simple approach based on averaging word embeddings performs at least as well as, or better than, the approach based on skip-thoughts.", "We further proposed methods to combine linguistic knowledge of the lexico-syntactic fixedness of VNCs -socalled \"canonical forms\", which can be automatically acquired from corpora via statistical methods -with the embedding based approaches.", "Our findings indicate that this rich linguistic knowledge is complementary to that available in distributed representations.", "Alternative approaches to embedding sentences containing VNC instances could also be considered, for example, FastSent (Hill et al., 2016) .", "However, all of the models we used represent the context of a VNC by the sentence in which it occurs.", "In future work we therefore also intend to consider approaches such as context2vec (Melamud et al., 2016) which explicitly encode the context in which a token occurs.", "Finally, one known challenge of VNC token classification is to develop models that are able to generalize to VNC types that were not seen during training (Gharbieh et al., 2016) .", "In future work we plan to explore this experimental setup." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "4", "4.1", "4.2", "5", "5.1", "5.2", "6" ], "paper_header_content": [ "Introduction", "Related work", "Models", "Word2vec", "Siamese CBOW", "Skip-thoughts", "Data and evaluation", "Dataset", "Evaluation", "Experimental results", "Parameter tuning", "DEV and TEST results", "Conclusions" ] }
GEM-SciDuet-train-69#paper-1148#slide-7
Integrating Canonical Forms
Unsupervised method used in Fazly et al. to identify canonical forms One-dimensional binary vector representing if the expression is in the canonical form
Unsupervised method used in Fazly et al. to identify canonical forms One-dimensional binary vector representing if the expression is in the canonical form
[]
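The canonical-form knowledge integrated here comes from the unsupervised method of Fazly et al. (2009), i.e., equations (1) and (2) of the paper: a lexico-syntactic pattern counts as canonical when its frequency z-score across the eleven patterns exceeds T_z = 1. A hedged sketch follows; pattern_freqs is an assumed input, and the choice of population standard deviation is my reading, not stated in the paper.

# Sketch of canonical-form detection (Fazly et al., 2009): for one VNC type,
# keep the patterns whose corpus frequency is more than t_z standard
# deviations above the mean over all eleven lexico-syntactic patterns.
import statistics

def canonical_forms(pattern_freqs, t_z=1.0):
    freqs = list(pattern_freqs.values())
    mean, sd = statistics.mean(freqs), statistics.pstdev(freqs)
    if sd == 0:
        return set()
    return {p for p, f in pattern_freqs.items() if (f - mean) / sd > t_z}

# Hypothetical counts for "see stars": the active-voice, no-determiner,
# plural-noun pattern dominates and comes out as the canonical form.
example = {"v_act det:NULL n_pl": 120, "v_act det:a n_sg": 30, "v_pass n_pl": 12}
print(canonical_forms(example))  # -> {'v_act det:NULL n_pl'}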
GEM-SciDuet-train-69#paper-1148#slide-11
1148
Leveraging distributed representations and lexico-syntactic fixedness for token-level prediction of the idiomaticity of English verb-noun combinations
Verb-noun combinations (VNCs) -e.g., blow the whistle, hit the roof, and see stars -are a common type of English idiom that are ambiguous with literal usages. In this paper we propose and evaluate models for classifying VNC usages as idiomatic or literal, based on a variety of approaches to forming distributed representations. Our results show that a model based on averaging word embeddings performs on par with, or better than, a previously-proposed approach based on skip-thoughts. Idiomatic usages of VNCs are known to exhibit lexico-syntactic fixedness. We further incorporate this information into our models, demonstrating that this rich linguistic knowledge is complementary to the information carried by distributed representations.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118 ], "paper_content_text": [ "Introduction Multiword expressions (MWEs) are combinations of multiple words that exhibit some degree of idiomaticity (Baldwin and Kim, 2010) .", "Verb-noun combinations (VNCs), consisting of a verb with a noun in its direct object position, are a common type of semantically-idiomatic MWE in English and cross-lingually (Fazly et al., 2009) .", "Many VNCs are ambiguous between MWEs and literal combinations, as in the following examples of see stars, in which 1 is an idiomatic usage (i.e., an MWE), while 2 is a literal combination.", "1 1.", "Hereford United were seeing stars at Gillingham after letting in 2 early goals 2.", "Look into the night sky to see the stars MWE identification is the task of automatically determining which word combinations at the token-level form MWEs (Baldwin and Kim, 2010) , and must be able to make such distinctions.", "This is particularly important for applications such as machine translation (Sag et al., 2002) , where the appropriate meaning of word combinations in context must be preserved for accurate translation.", "In this paper, following prior work (e.g., Salton et al., 2016) , we frame token-level identification of VNCs as a supervised binary classification problem, i.e., idiomatic vs. literal.", "We consider a range of approaches to forming distributed representations of the context in which a VNC occurs, including word embeddings (Mikolov et al., 2013) , word embeddings tailored to representing sentences (Kenter et al., 2016) , and skip-thoughts sentence embeddings (Kiros et al., 2015) .", "We then train a support vector machine (SVM) on these representations to classify unseen VNC instances.", "Surprisingly, we find that an approach based on representing sentences as the average of their word embeddings performs comparably to, or better than, the skip-thoughts based approach previously proposed by Salton et al.", "(2016) .", "VNCs exhibit lexico-syntactic fixedness.", "For example, the idiomatic interpretation in example 1 above is typically only accessible when the verb see has active voice, the determiner is null, and the noun star is in plural form, as in see stars or seeing stars.", "Usages with a determiner (as in example 2), a singular noun (e.g., see a star), or passive voice (e.g., stars were seen) typically only have the literal interpretation.", "In this paper we further incorporate knowledge of the lexico-syntactic fixedness of VNCs -automatically acquired from corpora using the method of Fazly et al.", "(2009) -into our various embedding-based approaches.", "Our experimental results show that this leads to substantial improve-ments, indicating that this rich linguistic knowledge is complementary to that available in distributed representations.", "Related work Much research on MWE identification has focused on specific kinds of MWEs (e.g., Patrick and Fletcher, 2005; Uchiyama et al., 2005) , including English VNCs (e.g., Fazly et al., 2009; Salton et al., 2016) , although some recent work has considered the 
identification of a broad range of kinds of MWEs (e.g., Schneider et al., 2014; Brooke et al., 2014; Savary et al., 2017) .", "Work on MWE identification has leveraged rich linguistic knowledge of the constructions under consideration (e.g., Fazly et al., 2009; Fothergill and Baldwin, 2012) , treated literal and idiomatic as two senses of an expression and applied approaches similar to word-sense disambiguation (e.g., Birke and Sarkar, 2006; Hashimoto and Kawahara, 2008) , incorporated topic models (e.g., Li et al., 2010) , and made use of distributed representations of words (Gharbieh et al., 2016) .", "In the most closely related work to ours, Salton et al.", "(2016) represent token instances of VNCs by embedding the sentence that they occur in using skip-thoughts (Kiros et al., 2015) -an encoderdecoder model that can be viewed as a sentencelevel counterpart to the word2vec (Mikolov et al., 2013 ) skip-gram model.", "During training the target sentence is encoded using a recurrent neural network, and is used to predict the previous and next sentences.", "Salton et al.", "then use these sentence embeddings, representing VNC token instances, as features in a supervised classifier.", "We treat this skip-thoughts based approach as a strong baseline to compare against.", "Fazly et al.", "(2009) formed a set of eleven lexicosyntactic patterns for VNC instances capturing the voice of the verb (active or passive), determiner (e.g., a, the), and number of the noun (singular or plural).", "They then determine the canonical form, C(v, n), for a given VNC as follows: 2 C(v, n) = {pt k ∈ P |z(v, n, pt k ) > T z } (1) where P is the set of patterns, T z is a predetermined threshold, which is set to 1, and z(v, n, pt k ) is calculated as follows: z(v, n, pt k ) = f (v, n, pt k ) − f s (2) where f (·) is the frequency of a VNC occurring in a given pattern in a corpus, 3 and f and s are the mean and standard deviations for all patterns for the given VNC, respectively.", "Fazly et al.", "(2009) showed that idiomatic usages of a VNC tend to occur in that expression's canonical form, while literal usages do not.", "This approach provides a strong, linguistically-informed, unsupervised baseline, referred to as CForm, for predicting whether VNC instances are idiomatic or literal.", "In this paper we incorporate knowledge of canonical forms into embedding-based approaches to VNC token classification, and show that this linguistic knowledge can be leveraged to improve such approaches.", "Models We describe the models used to represent VNC token instances below.", "For each model, a linear SVM classifier is trained on these representations.", "Word2vec We trained word2vec's skip-gram model (Mikolov et al., 2013 ) on a snapshot of Wikipedia from September 2015, which consists of approximately 2.6 billion tokens.", "We used a window size of ±8 and 300 dimensions.", "We ignore all words that occur less than fifteen times in the training corpus, and did not set a maximum vocabulary size.", "We perform negative sampling and set the number of training epochs to five.", "We used batch processing with approximately 10k words in each batch.", "To embed a given a sentence containing a VNC token instance, we average the word embeddings for each word in the sentence, including stopwords.", "4 Prior to averaging, we normalize each embedding to have unit length.", "Siamese CBOW The Siamese CBOW model (Kenter et al., 2016) learns word embeddings that are better able to represent a sentence through averaging than conventional 
word embeddings such as skip-gram or CBOW.", "We use a Siamese CBOW model that was pretrained on a snapshot of Wikipedia from November 2012 using randomly initialized word embeddings.", "5 Similarly to the word2vec model, to embed a given sentence containing a VNC instance, we average the word embeddings for each word in the sentence.", "Skip-thoughts We use a publicly-available skip-thoughts model, that was pre-trained on a corpus of books.", "6 We represent a given sentence containing a VNC instance using the skip-thoughts encoder.", "Note that this approach is our re-implementation of the skipthoughts based method of Salton et al.", "(2016) , and we use it as a strong baseline for comparison.", "Data and evaluation In this section, we discuss the dataset used in our experiments, and the evaluation of our models.", "Dataset We use the VNC-Tokens dataset (Cook et al., 2008) -the same dataset used by Fazly et al.", "(2009) and Salton et al.", "(2016) -to train and evaluate our models.", "This dataset consists of sentences containing VNC usages drawn from the British National Corpus (Burnard, 2000) , 7 along with a label indicating whether the VNC is an idiomatic or literal usage (or whether this cannot be determined, in which case it is labelled \"unknown\").", "VNC-Tokens is divided into DEV and TEST sets that each include fourteen VNC types and a total of roughly six hundred instances of these types annotated as literal or idiomatic.", "Following Salton et al.", "(2016) , we use DEV and TEST, and ignore all token instances annotated as \"unknown\".", "Fazly et al.", "(2009) We then divide each of these into training and testing sets, using the same ratios of idiomatic to literal usages for each expression as Salton et al.", "(2016) .", "This allows us to develop and tune a model on DEV, and then determine whether, when retrained on instances of unseen VNCs in (the training portion of) TEST, that model is able to generalize to new VNCs without further tuning to the specific expressions in TEST.", "Evaluation The proportion of idiomatic usages in the testing portions of both DEV and TEST is 63%.", "We therefore use accuracy to evaluate our models following Fazly et al.", "(2009) because the classes are roughly balanced.", "We randomly divide both DEV and TEST into training and testing portions ten times, following Salton et al.", "(2016) .", "For each of the ten runs, we compute the accuracy for each expression, and then compute the average accuracy over the expressions.", "We then report the average accuracy over the ten runs.", "Experimental results In this section we first consider the effect of tuning the cost parameter of the SVM for each model on DEV, and then report results on DEV and TEST using the tuned models.", "Parameter tuning We tune the SVM for each model on DEV by carrying out a linear search for the penalty cost from 0.01-100, increasing by a factor of ten each time.", "Results for this parameter tuning are shown in Table 1 .", "These results highlight the importance of choosing an appropriate setting for the penalty cost.", "For example, the accuracy of the word2vec model ranges from 0.619-0.830 depending on the cost setting.", "In subsequent experiments, for each model, we use the penalty cost that achieves the highest accuracy in Table 1 .", "DEV and TEST results In Table 2 we report results on DEV and TEST for each model, as well as the unsupervised CForm model of Fazly et al.", "(2009) , which simply labels a VNC as idiomatic if it occurs in its canonical form, and as 
literal otherwise.", "We further consider each model (other than CForm) in two setups.", "−CF corresponds to the models as described in Section 3.", "+CF further incorporates lexico-syntactic knowledge of canonical forms into each model by concatenating the embedding representing each VNC token instance with a one-dimensional vector which is one if the VNC occurs in its canonical form, and zero otherwise.", "We first consider results for the −CF setup.", "On both DEV and TEST, the accuracy achieved by each supervised model is higher than that of the unsupervised CForm approach, except for Siamese CBOW on TEST.", "The word2vec model achieves the highest accuracy on DEV and TEST of 0.830 and 0.804, respectively.", "The difference between the word2vec model and the next-best model, skip-thoughts, is significant using a bootstrap test (Berg-Kirkpatrick et al., 2012) with 10k repetitions for DEV (p = 0.006), but not for TEST (p = 0.051).", "Nevertheless, it is remarkable that the relatively simple approach to averaging word embeddings used by word2vec performs as well as, or better than, the much more complex skipthoughts model used by Salton et al.", "(2016) .", "8 8 The word2vec and skip-thoughts models were trained on different corpora, which could contribute to the differences in results for these models.", "We therefore carried out an additional experiment in which we trained word2vec on Book-Corpus, the corpus on which skip-thoughts was trained.", "This new word2vec model achieved accuracies of 0.825 and 0.809, on DEV and TEST, respectively, which are also higher accu-Turning to the +CF setup, we observe that, for both DEV and TEST, each model achieves higher accuracy than in the −CF setup.", "9 All of these differences are significant using a bootstrap test (p < 0.002 in each case).", "In addition, each method outperforms the unsupervised CForm approach on both DEV and TEST.", "These findings demonstrate that the linguistically-motivated, lexico-syntactic knowledge encoded by the canonical form feature is complementary to the information from a wide range of types of distributed representations.", "In the +CF setup, the word2vec model again achieves the highest accuracy on both DEV and TEST of 0.854 and 0.852, respectively.", "10 The difference between the word2vec model and the next-best model, again skip-thoughts, is significant for both DEV and TEST using a bootstrap test (p < 0.05 in each case).", "To better understand the impact of the canonical form feature when combined with the word2vec model, we compute the average precision, recall, and F1 score for each MWE for both the positive (idiomatic) and negative (literal) classes, for each run on TEST.", "11 For a given run, we then compute the average precision, recall, and F1 score across all MWEs, and then the average over all ten runs.", "We do this using CForm, and the word2vec model with and without the canonical form feature.", "Results are shown in Table 3 .", "In line with the findings of Fazly et al.", "(2009) , CForm achieves higher precision and recall on idiomatic usages than literal ones.", "In particular, the relatively low recall for the literal class indicates that many literal usages occur in a canonical form.", "Comparing the word2vec model with and without the canonical form feature, we see that, when this feature is used, there is a relatively larger increase in precision and recall (and F1 score) for the literal class, than for the idiomatic class.", "This indicates that, although the racies than those obtained by 
the skip-thoughts model.", "9 In order to determine that this improvement is due to the information about canonical forms carried by the additional feature in the +CF setup, and not due to the increase in number of dimensions, we performed additional experiments in which we concatenated the embedding representations with a random binary feature, and with a randomly chosen value between 0 and 1.", "For each model, neither of these approaches outperformed that model using the +CF setup.", "10 In the +CF setup, the word2vec model using embeddings that were trained on the same corpus as skip-thoughts achieved accuracies of 0.846 and 0.851, on DEV and TEST, respectively.", "These are again higher accuracies than the corresponding setup for the skip-thoughts model.", "11 We carried out the same analysis on DEV.", "The findings were similar.", "Conclusions Determining whether a usage of a VNC is idiomatic or literal is important for applications such as machine translation, where it is vital to preserve the meanings of word combinations.", "In this paper we proposed two approaches to the task of classifying VNC token instances as idiomatic or literal based on word2vec embeddings and Siamese CBOW.", "We compared these approaches against a linguistically-informed unsupervised baseline, and a model based on skip-thoughts previously applied to this task (Salton et al., 2016) .", "Our experimental results show that a comparatively simple approach based on averaging word embeddings performs at least as well as, or better than, the approach based on skip-thoughts.", "We further proposed methods to combine linguistic knowledge of the lexico-syntactic fixedness of VNCs -socalled \"canonical forms\", which can be automatically acquired from corpora via statistical methods -with the embedding based approaches.", "Our findings indicate that this rich linguistic knowledge is complementary to that available in distributed representations.", "Alternative approaches to embedding sentences containing VNC instances could also be considered, for example, FastSent (Hill et al., 2016) .", "However, all of the models we used represent the context of a VNC by the sentence in which it occurs.", "In future work we therefore also intend to consider approaches such as context2vec (Melamud et al., 2016) which explicitly encode the context in which a token occurs.", "Finally, one known challenge of VNC token classification is to develop models that are able to generalize to VNC types that were not seen during training (Gharbieh et al., 2016) .", "In future work we plan to explore this experimental setup." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "4", "4.1", "4.2", "5", "5.1", "5.2", "6" ], "paper_header_content": [ "Introduction", "Related work", "Models", "Word2vec", "Siamese CBOW", "Skip-thoughts", "Data and evaluation", "Dataset", "Evaluation", "Experimental results", "Parameter tuning", "DEV and TEST results", "Conclusions" ] }
GEM-SciDuet-train-69#paper-1148#slide-11
Conclusion
Averaging word2vec embeddings outperforms all other models used Canonical form feature improves results
Averaging word2vec embeddings outperforms all other models used Canonical form feature improves results
[]
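The significance figures quoted in the results discussion above rest on the paired bootstrap test of Berg-Kirkpatrick et al. (2012) with 10k repetitions. Below is a minimal sketch of that test; the resampling unit (instances) and the exact acceptance criterion are my assumptions about the standard formulation, not lifted from the authors' code.

# Paired bootstrap significance test for an accuracy difference between
# system A (apparently better) and system B.
import random

def paired_bootstrap_p(correct_a, correct_b, reps=10000, seed=0):
    """correct_a / correct_b: parallel lists of per-instance 0/1 correctness."""
    rng = random.Random(seed)
    n = len(correct_a)
    observed = (sum(correct_a) - sum(correct_b)) / n
    hits = 0
    for _ in range(reps):
        idx = [rng.randrange(n) for _ in range(n)]  # resample with replacement
        delta = sum(correct_a[i] - correct_b[i] for i in idx) / n
        if delta > 2 * observed:  # Berg-Kirkpatrick et al.'s shifted-null criterion
            hits += 1
    return hits / reps  # small p-value => the observed gap is unlikely by chance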
GEM-SciDuet-train-70#paper-1153#slide-0
1153
Disambiguating False-Alarm Hashtag Usages in Tweets for Irony Detection
The reliability of self-labeled data is an important issue when the data are regarded as ground-truth for training and testing learning-based models. This paper addresses the issue of false-alarm hashtags in the self-labeled data for irony detection. We analyze the ambiguity of hashtag usages and propose a novel neural networkbased model, which incorporates linguistic information from different aspects, to disambiguate the usage of three hashtags that are widely used to collect the training data for irony detection. Furthermore, we apply our model to prune the self-labeled training data. Experimental results show that the irony detection model trained on the less but cleaner training instances outperforms the models trained on all data.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143 ], "paper_content_text": [ "Introduction Self-labeled data available on the Internet are popular research materials in many NLP areas.", "Metadata such as tags and emoticons given by users are considered as labels for training and testing learning-based models, which usually benefit from large amount of data.", "One of the sources of self-labeled data widely used in the research community is Twitter, where the short-text messages tweets written by the crowd are publicly shared.", "In a tweet, the author can tag the short text with some hashtags such as #excited, #happy, #UnbornLivesMatter, and #Hillary4President to express their emotion or opinion.", "The tweets with a certain types of hashtags are collected as self-label data in a variety of research works including sentiment analysis (Qadir and Riloff, 2014) , stance detection (Mohammad et al., 2016; Sobhani et al., 2017) , fi-nancial opinion mining (Cortis et al., 2017) , and irony detection (Ghosh et al., 2015; Peled and Reichart, 2017; Hee et al., 2018) .", "In the case of irony detection, it is impractical to manually annotate the ironic sentences from randomly sampled data due to the relatively low occurrences of irony (Davidov et al., 2010) .", "Collecting the tweets with the hashtags like #sarcasm, #irony, and #not becomes the mainstream approach to dataset construction (Sulis et al., 2016) .", "As shown in (S1), the tweet with the hashtag #not is treated as a positive (ironic) instance by removing #not from the text.", "(S1) @Anonymous doing a great job... 
#not What do I pay my extortionate council taxes for?", "#Disgrace #OngoingProblem http://t.co/FQZUUwKSoN However, the reliability of the self-labeled data is an important issue.", "As pointed out in the pioneering work, not all tweet writers know the definition of irony (Van Hee et al., 2016b) .", "For instance, (S2) is tagged with #irony by the writer, but it is just witty and amusing.", "(S2) BestProAdvice @Anonymous More clean OR cleaner, never more cleaner.", "#irony When the false-alarm instances like (S2) are collected and mixed in the training and test data, the models that learn from the unreliable data may be misled, and the evaluation is also suspicious.", "The other kind of unreliable data comes from the hashtags not only functioning as metadata.", "That is, a hashtag in a tweet may also function as a content word in its word form.", "For example, the hashtag #irony in (S3) is a part of the sentence \"the irony of taking a break...\", in contrast to the hashtag #not in (S1), which can be removed without a change of meaning.", "(S3) The #irony of taking a break from reading about #socialmedia to check my social media.", "When the hashtag acts as a content word in a tweet, the tweet is not a good candidate for self-labeled ironic instances because the sentence will be incomplete once the hashtag is removed.", "In this work, both kinds of unreliable data, the tweets with a misused hashtag and the tweets in which the hashtag serves as a content word, are our targets to remove from the training data.", "Manual data cleaning is labor-intensive and inefficient (Van Hee et al., 2016a) .", "Compared to general training data cleaning approaches (Malik and Bhardwaj, 2011; Esuli and Sebastiani, 2013; Fukumoto and Suzuki, 2004) such as boosting-based learning, this work leverages the characteristics of hashtag usages in tweets.", "With a small amount of golden labeled data, we propose a neural network classifier for pruning the self-labeled tweets, and train an irony detector on the less but cleaner instances.", "This approach is easy to apply to other NLP tasks that rely on self-labeled data.", "The contributions of this work are three-fold: (1) We make an empirical study of an issue that is potentially inherited in a number of research topics based on self-labeled data.", "(2) We propose a model for hashtag disambiguation.", "For this task, the human-verified ground-truth is quite limited.", "To address the issue of sparsity, a novel neural network model for hashtag disambiguation is proposed.", "(3) The data pruning method, in which our model is applied to select reliable self-labeled data, is capable of improving the performance of irony detection.", "The rest of this paper is organized as follows.", "Section 2 describes how we construct a dataset for disambiguating false-alarm hashtag usages based on tweets.", "In Section 3, our model for hashtag disambiguation is proposed.", "Experimental results of hashtag disambiguation are shown in Section 4.", "In addition, we apply our method to prune training data for irony detection.", "The results are shown in Section 5.", "Section 6 concludes this paper.", "Dataset The tweets with indication hashtags such as #irony are usually collected as a dataset in previous works on irony detection.", "As pointed out in Section 1, the hashtags are treated as ground-truth for training and testing.", "To investigate the issue of false-alarm self-labeled tweets, the tweets with human verification are indispensable.", "In this study, we build the 
ground-truth based on the dataset released for SemEval 2018 Task 3, 1 which is targeted for fine-grained irony detection (Hee et al., 2018) .", "In the SemEval dataset, the tweets with one of the three indication hashtags #not, #sarcasm, and #irony, are collected and human-annotated as one of four types: verbal irony by means of a polarity contrast, other verbal irony, situational irony, and non-ironic.", "In other words, the false-alarm tweets, i.e., the non-ironic tweets with indication hashtags, are distinguished from the real ironic tweets in this dataset.", "However, the hashtag itself has been removed in the SemEval dataset.", "For example, the original tweet (S1) has been modified to (S4), where the hashtag #not disappears.", "As a result, the hashtag information, the position and the word form of the hashtag (i.e., not, irony, or sarcasm), is missing from the SemEval dataset.", "(S4) @Anonymous doing a great job... What do I pay my extortionate council taxes for?", "#Disgrace #OngoingProblem http://t.co/FQZUUwKSoN For hashtag disambiguation, the information of the hashtag in each tweet is mandatory.", "Thus, we recover the original tweets by using Twitter search.", "As shown in Table 1 , a total of 1,359 tweets with hashtag information are adopted as the ground-truth.", "Note that more than 20% of self-labeled data are false-alarm, and this can be an issue when they are adopted as training or test data.", "For performing the experiment of irony detection in Section 5, we reserve the other 1,072 tweets in the SemEval dataset that are annotated as real ironic as the test data.", "In addition to the issue of hashtag disambiguation, the irony tweets without an indication hashtag, which are regarded as non-irony instances in previous work, are another kind of misleading data for irony detection.", "Fortunately, the occurrence of such \"false-negative\" instances is insignificant due to the relatively low occurrence of irony (Davidov et al., 2010) .", "Disambiguation of Hashtags Figure 1 shows our model for distinguishing the real ironic tweets from the false-alarm ones.", "Given an instance with the hashtag #irony, the preceding and the following word sequences of the hashtag are encoded by separate sub-networks, and both embeddings are concatenated with the handcrafted features and the probabilities of three kinds of part-of-speech (POS) tag sequences.", "Finally, the sigmoid activation function decides whether the instance is real ironic or false-alarm.", "The details of each component will be presented in the rest of this section.", "Word Sequences: The word sequences of the context preceding and following the targeting hashtag are separately encoded by neural network sentence encoders.", "The Penn Treebank Tokenizer provided by NLTK (Bird et al., 2009 ) is used for tokenization.", "As a result, each of the left and the right word sequences is encoded as an embedding with a length of 50.", "We experiment with a convolutional neural network (CNN) (Kim, 2014) , a gated recurrent unit (GRU) (Cho et al., 2014) , and an attentive GRU for sentence encoding.", "CNNs for sentence classification have been shown effective in NLP applications such as sentiment analysis (Kim, 2014) .", "Classifiers based on recurrent neural networks (RNNs) have also been applied to NLP, especially for sequential modeling.", "For irony detection, one of the state-of-the-art models is based on the attentive RNN (Huang et al., 2017) .", "The first layer of the CNN, the GRU, and the attentive-GRU model is the 
300-dimensional word embedding that is initialized by using the vectors pre-trained on the Google News dataset.", "Handcrafted Features: We add the handcrafted features of the tweet in the one-hot representation.", "The features taken into account are listed as follows.", "(1) Lengths of the tweet in words and in characters.", "(2) Type of the target hashtag (i.e.", "#not, #sarcasm, or #irony).", "(3) Number of all hashtags in the tweet.", "(4) Whether the targeting hashtag is the first token in the tweet.", "(5) Whether the targeting hashtag is the last token in the tweet.", "(6) Whether the targeting hashtag is the first hashtag in the tweet, since a tweet may contain more than one hashtag.", "(7) Whether the targeting hashtag is the last hashtag in the tweet.", "(8) Position of the targeting hashtag in terms of tokens.", "If the targeting hashtag is the ith token of a tweet with |w| tokens, this feature is i/|w|.", "(9) Position of the targeting hashtag among all hashtags in the tweet.", "It is computed as j/|h|, where the targeting hashtag is the jth hashtag in a tweet that contains |h| hashtags.", "Language Modeling of POS Sequences: As mentioned in Section 1, one kind of false-alarm hashtag usage is the case in which the hashtag also functions as a content word.", "In this paper, we attempt to measure the grammatical completeness of the tweet with and without the hashtag.", "Therefore, a language model on the level of POS tagging is used.", "As shown in Figure 1 , POS tagging is performed on three versions of the tweet, and based on that, three probabilities are measured and taken into account: 1) ph: the tweet with the whole hashtag removed.", "2) ps: the tweet with the hash symbol # removed only.", "3) pt: the original tweet.", "Our idea is that a tweet will be more grammatically complete with only the hash symbol removed if the hashtag is also a content word.", "On the other hand, the tweet will be more grammatically complete with the whole hashtag removed if the hashtag is metadata.", "To measure the probability of the POS tag sequence, we integrate a neural network-based language model of POS sequences into our model.", "RNN-based language models are reportedly capable of modeling the longer dependencies among sequential tokens (Mikolov et al., 2011) .", "Two million English tweets that are entirely different from those in the training and test data described in Section 2 are collected and tagged with POS tags.", "We train a GRU language model on the level of POS tags.", "In this work, all the POS tagging is performed with the Stanford CoreNLP toolkit (Manning et al., 2014) .", "Experiments We compare our model with popular neural network-based sentence classifiers including the CNN, the GRU, and the attentive GRU.", "We also train a logistic regression (LR) classifier with the handcrafted features introduced in Section 3.", "For the imbalanced data, we assign class weights inversely proportional to class frequencies.", "Five-fold cross-validation is performed.", "Early stopping is employed with a patience of 5 epochs.", "In each fold, we further keep 10% of the training data for tuning the model.", "The hidden dimension is 50, the batch size is 32, and the Adam optimizer is employed (Kingma and Ba, 2014) .", "Table 2 shows the experimental results reported in Precision (P), Recall (R), and F-score (F).", "Our goal is to select the real ironic tweets for training the irony detection model.", "Thus, the real ironic tweets are regarded as positive, and the false-alarm ones are negative.", "We 
apply a t-test for significance testing.", "The vanilla GRU and the attentive GRU are slightly superior to the logistic regression model.", "The CNN model performs the worst in this task because it suffers from an over-fitting problem.", "We explored a number of layouts and hyperparameters for the CNN model, and consistent results are observed.", "Our method is evaluated with either the CNN, the GRU, or the attentive GRU for encoding the context preceding and following the targeting hashtag.", "By integrating various kinds of information, our method outperforms all baseline models no matter which encoder is used.", "The best model is the one integrating the attentive GRU encoder, which is significantly superior to all baseline models (p < 0.05) and achieves an F-score of 88.49%.", "To confirm the effectiveness of the language modeling of POS sequences, we also try to exclude the GRU language model from our best model.", "Experimental results show that the addition of the language model significantly improves the performance (p < 0.05).", "As shown in the last row of Table 2 , the F-score drops to 84.17%.", "From the data, we observe that the instances with a high ps/ph usually contain an indication hashtag functioning as a content word, and vice versa.", "For instance, (S5) and (S6) show the instances with the highest and the lowest ps/ph, respectively.", "(S5) when your #sarcasm is so advanced people actually think you are #stupid .. (S6) #mtvstars justin bieber #net #not #fast 5 Irony Detection We employ our model to prune self-labeled data for irony detection.", "As prior work did, we collect a set of tweets that contain indication hashtags as (pseudo) positive instances and also collect a set of tweets that do not contain indication hashtags as negative instances.", "For each positive instance, our model is applied to predict whether it is a real ironic tweet or a false-alarm one, and the false-alarm ones are discarded.", "After pruning, a set of 14,055 tweets containing indication hashtags has been reduced to 4,617 reliable positive instances according to our model.", "We add an equal amount of negative instances randomly selected from the collection of the tweets that do not contain indication hashtags.", "As a result, the prior- and the post-pruning training data, in the sizes of 28,110 and 9,234, respectively, are prepared for experiments.", "The dataflow of the training data pruning is shown in Figure 2 .", "For evaluating the effectiveness of our pruning method, we implement a state-of-the-art irony detector (Huang et al., 2017) , which is based on an attentive-RNN classifier, and train it on the prior- and the post-pruned training data.", "The test data is made by the following procedure.", "The positive instances in the test data are taken from the 1,072 human-verified ironic tweets that are reserved for irony detection as mentioned in Section 2.", "The negative instances in the test data are obtained from the tweets that do not contain indication hashtags.", "Note that the negative instances in the test data are isolated from those in the training data.", "Experimental results confirm the benefit of pruning.", "As shown in Table 3 , the irony detection model trained on the less, but cleaner, data significantly outperforms the model that is trained on all data (p < 0.05).", "We compare our pruning method with an alternative approach that trains the irony detector on the human-verified data directly.", "Under these circumstances, the 1,083 ironic instances for training our hashtag disambiguation model are currently 
mixed with an equal amount of randomly sampled negative instances, and employed to train the irony detector.", "As shown in the last row of Table 3 , the irony detector trained on the small data does not compete with the models that are trained on a larger amount of self-labeled data.", "In other words, our data pruning strategy forms a semi-supervised learning approach that benefits from both self-labeled data and human annotation.", "Note that this task and the dataset are different from those of the official evaluation of SemEval 2018 Task 3, so the experimental results cannot be directly compared.", "The calibrated confidence output by the sigmoid layer of our hashtag disambiguation model can be regarded as a measurement of the reliability of an instance (Niculescu-Mizil and Caruana, 2005; Guo et al., 2017) .", "Thus, we can sort all self-labeled data by their calibrated confidence and control the size of the training set by adjusting the threshold.", "The higher the threshold value is set, the fewer training instances remain.", "Figure 3 shows the performances of the irony detector trained on the data filtered with different threshold values.", "For each threshold value, the bullet symbol (•) indicates the size of the training data, and the bar indicates the F-score achieved by the irony detector trained on those data.", "The best result is achieved by the irony detector trained on the 9,234 instances filtered by our model with the default threshold value (0.5).", "This confirms that our model is able to select useful training instances in a strict manner.", "Conclusion Self-labeled data is an accessible and economical resource for a variety of learning-based applications.", "However, directly using the labels made by the crowd as ground-truth for training and testing may lead to inaccurate performance due to the reliability issue.", "This paper addresses this issue in the case of irony detection by proposing a model to remove two kinds of false-alarm tweets from the training data.", "Experimental results confirm that the irony detection model benefits from the less, but cleaner, training data.", "Our approach can be applied to other topics that rely on self-labeled data." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "6" ], "paper_header_content": [ "Introduction", "Dataset", "Disambiguation of Hashtags", "Experiments", "Conclusion" ] }
GEM-SciDuet-train-70#paper-1153#slide-0
Agenda
Issue of false-alarm self-labeled data Disambiguation of hashtag usages
Issue of false-alarm self-labeled data Disambiguation of hashtag usages
[]
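The record above ends with the data-pruning pipeline: each hashtag-labeled tweet is scored by the trained disambiguation model, low-confidence instances are discarded, and an equal number of negative tweets is sampled to balance the set. A minimal sketch of that control flow follows; it is an illustration, not the authors' code, and `disambiguator`, `predict_proba`, and `featurize` are hypothetical stand-ins for their trained model and preprocessing.

```python
# Hedged sketch of the pruning step: keep only the self-labeled positive
# tweets that the disambiguation model scores above a threshold, then
# balance with randomly sampled negatives (default threshold 0.5, as in
# the record above). All object interfaces here are assumptions.
import random

def prune_positives(tweets, disambiguator, featurize, threshold=0.5):
    """Keep tweets the model judges to be real ironic instances."""
    kept = []
    for tweet in tweets:
        confidence = disambiguator.predict_proba(featurize(tweet))
        if confidence >= threshold:
            kept.append(tweet)
    return kept

def build_training_set(positives, negative_pool, disambiguator, featurize,
                       threshold=0.5, seed=0):
    """Return pruned positives plus an equal number of sampled negatives."""
    reliable = prune_positives(positives, disambiguator, featurize, threshold)
    negatives = random.Random(seed).sample(negative_pool, len(reliable))
    return reliable, negatives
```

Raising the threshold keeps fewer but more reliable positives, which is exactly the trade-off swept in the paper's Figure 3.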
GEM-SciDuet-train-70#paper-1153#slide-1
1153
Disambiguating False-Alarm Hashtag Usages in Tweets for Irony Detection
The reliability of self-labeled data is an important issue when the data are regarded as ground-truth for training and testing learning-based models. This paper addresses the issue of false-alarm hashtags in the self-labeled data for irony detection. We analyze the ambiguity of hashtag usages and propose a novel neural network-based model, which incorporates linguistic information from different aspects, to disambiguate the usage of three hashtags that are widely used to collect the training data for irony detection. Furthermore, we apply our model to prune the self-labeled training data. Experimental results show that the irony detection model trained on the less but cleaner training instances outperforms the models trained on all data.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143 ], "paper_content_text": [ "Introduction Self-labeled data available on the Internet are popular research materials in many NLP areas.", "Metadata such as tags and emoticons given by users are considered as labels for training and testing learning-based models, which usually benefit from large amount of data.", "One of the sources of self-labeled data widely used in the research community is Twitter, where the short-text messages tweets written by the crowd are publicly shared.", "In a tweet, the author can tag the short text with some hashtags such as #excited, #happy, #UnbornLivesMatter, and #Hillary4President to express their emotion or opinion.", "The tweets with a certain types of hashtags are collected as self-label data in a variety of research works including sentiment analysis (Qadir and Riloff, 2014) , stance detection (Mohammad et al., 2016; Sobhani et al., 2017) , fi-nancial opinion mining (Cortis et al., 2017) , and irony detection (Ghosh et al., 2015; Peled and Reichart, 2017; Hee et al., 2018) .", "In the case of irony detection, it is impractical to manually annotate the ironic sentences from randomly sampled data due to the relatively low occurrences of irony (Davidov et al., 2010) .", "Collecting the tweets with the hashtags like #sarcasm, #irony, and #not becomes the mainstream approach to dataset construction (Sulis et al., 2016) .", "As shown in (S1), the tweet with the hashtag #not is treated as a positive (ironic) instance by removing #not from the text.", "(S1) @Anonymous doing a great job... 
#not What do I pay my extortionate council taxes for?", "#Disgrace #OngoingProblem http://t.co/FQZUUwKSoN However, the reliability of the self-labeled data is an important issue.", "As pointed out in the pioneering work, not all tweet writers know the definition of irony (Van Hee et al., 2016b) .", "For instance, (S2) is tagged with #irony by the writer, but it is just witty and amusing.", "(S2) BestProAdvice @Anonymous More clean OR cleaner, never more cleaner.", "#irony When the false-alarm instances like (S2) are collected and mixed into the training and test data, the models that learn from the unreliable data may be misled, and the evaluation also becomes suspect.", "The other kind of unreliable data comes from hashtags that do not function only as metadata.", "That is, a hashtag in a tweet may also function as a content word in its word form.", "For example, the hashtag #irony in (S3) is a part of the sentence \"the irony of taking a break...\", in contrast to the hashtag #not in (S1), which can be removed without a change of meaning.", "(S3) The #irony of taking a break from reading about #socialmedia to check my social media.", "When the hashtag functions as a content word in a tweet, the tweet is not a good candidate for self-labeled ironic instances because the sentence will be incomplete once the hashtag is removed.", "In this work, both kinds of unreliable data, the tweets with a misused hashtag and the tweets in which the hashtag serves as a content word, are our targets to remove from the training data.", "Manual data cleaning is labor-intensive and inefficient (Van Hee et al., 2016a) .", "Compared to general training data cleaning approaches (Malik and Bhardwaj, 2011; Esuli and Sebastiani, 2013; Fukumoto and Suzuki, 2004) such as boosting-based learning, this work leverages the characteristics of hashtag usages in tweets.", "With a small amount of gold-labeled data, we propose a neural network classifier for pruning the self-labeled tweets, and train an irony detector on the fewer but cleaner instances.", "This approach is easy to apply to other NLP tasks that rely on self-labeled data.", "The contributions of this work are three-fold: (1) We make an empirical study of an issue that is potentially inherent in a number of research topics based on self-labeled data.", "(2) We propose a model for hashtag disambiguation.", "For this task, the human-verified ground-truth is quite limited.", "To address the issue of sparsity, a novel neural network model for hashtag disambiguation is proposed.", "(3) The data pruning method, in which our model is applied to select reliable self-labeled data, is capable of improving the performance of irony detection.", "The rest of this paper is organized as follows.", "Section 2 describes how we construct a dataset for disambiguating false-alarm hashtag usages based on tweets.", "In Section 3, our model for hashtag disambiguation is proposed.", "Experimental results of hashtag disambiguation are shown in Section 4.", "In addition, we apply our method to prune training data for irony detection.", "The results are shown in Section 5.", "Section 6 concludes this paper.", "Dataset The tweets with indication hashtags such as #irony are usually collected as a dataset in previous works on irony detection.", "As pointed out in Section 1, the hashtags are treated as ground-truth for training and testing.", "To investigate the issue of false-alarm self-labeled tweets, the tweets with human verification are indispensable.", "In this study, we build the 
ground-truth based on the dataset released for SemEval 2018 Task 3, which is targeted for fine-grained irony detection (Hee et al., 2018) .", "In the SemEval dataset, the tweets with one of the three indication hashtags #not, #sarcasm, and #irony are collected and human-annotated as one of four types: verbal irony by means of a polarity contrast, other verbal irony, situational irony, and non-ironic.", "In other words, the false-alarm tweets, i.e., the non-ironic tweets with indication hashtags, are distinguished from the real ironic tweets in this dataset.", "However, the hashtag itself has been removed in the SemEval dataset.", "For example, the original tweet (S1) has been modified to (S4), where the hashtag #not disappears.", "As a result, the hashtag information, i.e., the position and the word form of the hashtag (not, irony, or sarcasm), is missing from the SemEval dataset.", "(S4) @Anonymous doing a great job... What do I pay my extortionate council taxes for?", "#Disgrace #OngoingProblem http://t.co/FQZUUwKSoN For hashtag disambiguation, the information of the hashtag in each tweet is mandatory.", "Thus, we recover the original tweets by using Twitter search.", "As shown in Table 1 , a total of 1,359 tweets with hashtag information are adopted as the ground-truth.", "Note that more than 20% of the self-labeled data are false alarms, and this can be an issue when they are adopted as training or test data.", "For the irony detection experiment in Section 5, we reserve the other 1,072 tweets in the SemEval dataset that are annotated as real ironic as the test data.", "In addition to the issue of hashtag disambiguation, the ironic tweets without an indication hashtag, which are regarded as non-irony instances in previous work, are another kind of misleading data for irony detection.", "Fortunately, the occurrence of such \"false-negative\" instances is insignificant due to the relatively low occurrence of irony (Davidov et al., 2010) .", "Disambiguation of Hashtags Figure 1 shows our model for distinguishing the real ironic tweets from the false-alarm ones.", "Given an instance with the hashtag #irony, the preceding and the following word sequences of the hashtag are encoded by separate sub-networks, and both embeddings are concatenated with the handcrafted features and the probabilities of three kinds of part-of-speech (POS) tag sequences.", "Finally, the sigmoid activation function decides whether the instance is real ironic or false-alarm.", "The details of each component are presented in the rest of this section.", "Word Sequences: The word sequences of the context preceding and following the targeting hashtag are separately encoded by neural network sentence encoders.", "The Penn Treebank Tokenizer provided by NLTK (Bird et al., 2009 ) is used for tokenization.", "As a result, each of the left and the right word sequences is encoded as an embedding with a length of 50.", "We experiment with a convolutional neural network (CNN) (Kim, 2014) , a gated recurrent unit (GRU) (Cho et al., 2014) , and an attentive GRU for sentence encoding.", "The CNN for sentence classification has been shown to be effective in NLP applications such as sentiment analysis (Kim, 2014) .", "Classifiers based on the recurrent neural network (RNN) have also been applied to NLP, especially for sequential modeling.", "For irony detection, one of the state-of-the-art models is based on the attentive RNN (Huang et al., 2017) .", "The first layer of the CNN, the GRU, and the attentive-GRU model is the 
300-dimensional word embedding that is initialized using the vectors pre-trained on the Google News dataset.", "Handcrafted Features: We add the handcrafted features of the tweet in the one-hot representation.", "The features taken into account are listed as follows.", "(1) Lengths of the tweet in words and in characters.", "(2) Type of the target hashtag (i.e.", "#not, #sarcasm, or #irony).", "(3) Number of all hashtags in the tweet.", "(4) Whether the targeting hashtag is the first token in the tweet.", "(5) Whether the targeting hashtag is the last token in the tweet.", "(6) Whether the targeting hashtag is the first hashtag in the tweet, since a tweet may contain more than one hashtag.", "(7) Whether the targeting hashtag is the last hashtag in the tweet.", "(8) Position of the targeting hashtag in terms of tokens.", "If the targeting hashtag is the ith token of a tweet with |w| tokens, this feature is i/|w|.", "(9) Position of the targeting hashtag among all hashtags in the tweet.", "It is computed as j/|h|, where the targeting hashtag is the jth hashtag in a tweet that contains |h| hashtags.", "Language Modeling of POS Sequences: As mentioned in Section 1, one kind of false-alarm hashtag usage is the case in which the hashtag also functions as a content word.", "In this paper, we attempt to measure the grammatical completeness of the tweet with and without the hashtag.", "Therefore, a language model at the level of POS tags is used.", "As shown in Figure 1 , POS tagging is performed on three versions of the tweet, and based on these, three probabilities are measured and taken into account: 1) ph: the tweet with the whole hashtag removed.", "2) ps: the tweet with only the hash symbol # removed.", "3) pt: the original tweet.", "Our idea is that a tweet will be more grammatically complete with only the hash symbol removed if the hashtag is also a content word.", "On the other hand, the tweet will be more grammatically complete with the whole hashtag removed if the hashtag is pure metadata.", "To measure the probability of a POS tag sequence, we integrate a neural network-based language model of POS sequences into our model.", "RNN-based language models are reportedly capable of modeling longer dependencies among sequential tokens (Mikolov et al., 2011) .", "Two million English tweets that are entirely different from those in the training and test data described in Section 2 are collected and tagged with POS tags.", "We train a GRU language model at the level of POS tags.", "In this work, all the POS tagging is performed with the Stanford CoreNLP toolkit (Manning et al., 2014) .", "Experiments We compare our model with popular neural network-based sentence classifiers including the CNN, the GRU, and the attentive GRU.", "We also train a logistic regression (LR) classifier with the handcrafted features introduced in Section 3.", "For the imbalanced data, we assign class weights inversely proportional to the class frequencies.", "Five-fold cross-validation is performed.", "Early stopping is employed with a patience of 5 epochs.", "In each fold, we further keep 10% of the training data for tuning the model.", "The hidden dimension is 50, the batch size is 32, and the Adam optimizer is employed (Kingma and Ba, 2014) .", "Table 2 shows the experimental results reported in Precision (P), Recall (R), and F-score (F).", "Our goal is to select the real ironic tweets for training the irony detection model.", "Thus, the real ironic tweets are regarded as positive, and the false-alarm ones are negative.", "We 
apply the t-test for significance testing.", "The vanilla GRU and attentive GRU are slightly superior to the logistic regression model.", "The CNN model performs the worst in this task because it suffers from an over-fitting problem.", "We explored a number of layouts and hyperparameters for the CNN model and observed consistent results.", "Our method is evaluated with either a CNN, a GRU, or an attentive GRU for encoding the context preceding and following the targeting hashtag.", "By integrating various kinds of information, our method outperforms all baseline models no matter which encoder is used.", "The best model is the one integrating the attentive GRU encoder, which is significantly superior to all baseline models (p < 0.05) and achieves an F-score of 88.49%. To confirm the effectiveness of the language modeling of POS sequences, we also try excluding the GRU language model from our best model.", "Experimental results show that the addition of the language model significantly improves the performance (p < 0.05).", "As shown in the last row of Table 2 , the F-score drops to 84.17% when the language model is excluded.", "From the data, we observe that the instances whose ps is higher than ph usually contain an indication hashtag that functions as a content word, and vice versa.", "For instance, (S5) and (S6) show the instances with the highest and the lowest ps relative to ph, respectively.", "(S5) when your #sarcasm is so advanced people actually think you are #stupid .. (S6) #mtvstars justin bieber #net #not #fast Irony Detection We employ our model to prune self-labeled data for irony detection.", "As prior work did, we collect a set of tweets that contain indication hashtags as (pseudo) positive instances and also collect a set of tweets that do not contain indication hashtags as negative instances.", "For each positive instance, our model predicts whether it is a real ironic tweet or a false-alarm one, and the false-alarm ones are discarded.", "After pruning, a set of 14,055 tweets containing indication hashtags has been reduced to 4,617 reliable positive instances according to our model.", "We add an equal number of negative instances randomly selected from the collection of the tweets that do not contain indication hashtags.", "As a result, the pre- and post-pruning training data, with sizes of 28,110 and 9,234, respectively, are prepared for experiments.", "The dataflow of the training data pruning is shown in Figure 2 .", "For evaluating the effectiveness of our pruning method, we implement a state-of-the-art irony detector (Huang et al., 2017) , which is based on an attentive-RNN classifier, and train it on the pre- and post-pruned training data.", "The test data are constructed as follows.", "The positive instances in the test data are taken from the 1,072 human-verified ironic tweets that are reserved for irony detection as mentioned in Section 2.", "The negative instances in the test data are obtained from the tweets that do not contain indication hashtags.", "Note that the negative instances in the test data are isolated from those in the training data.", "Experimental results confirm the benefit of pruning.", "As shown in Table 3 , the irony detection model trained on the less but cleaner data significantly outperforms the model that is trained on all data (p < 0.05).", "We compare our pruning method with an alternative approach that trains the irony detector on the human-verified data directly.", "Under these circumstances, the 1,083 ironic instances used for training our hashtag disambiguation model are 
mixed with an equal number of randomly sampled negative instances and employed to train the irony detector.", "As shown in the last row of Table 3 , the irony detector trained on the small dataset cannot compete with the models that are trained on a larger amount of self-labeled data.", "In other words, our data pruning strategy forms a semi-supervised learning scheme that benefits from both self-labeled data and human annotation.", "Note that this task and the dataset are different from those of the official evaluation of SemEval 2018 Task 3, so the experimental results cannot be directly compared.", "The calibrated confidence output by the sigmoid layer of our hashtag disambiguation model can be regarded as a measurement of the reliability of an instance (Niculescu-Mizil and Caruana, 2005; Guo et al., 2017) .", "Thus, we can sort all self-labeled data by their calibrated confidence and control the size of the training set by adjusting the threshold.", "The higher the threshold value is set, the fewer training instances remain.", "Figure 3 shows the performance of the irony detector trained on the data filtered with different threshold values.", "For each threshold value, the bullet symbol (•) indicates the size of the training data, and the bar indicates the F-score achieved by the irony detector trained on those data.", "The best result is achieved by the irony detector trained on the 9,234 instances filtered by our model with the default threshold value (0.5).", "This confirms that our model is able to select useful training instances in a strict manner.", "Conclusion Self-labeled data is an accessible and economical resource for a variety of learning-based applications.", "However, directly using the labels made by the crowd as ground-truth for training and testing may lead to inaccurate performance estimates due to the reliability issue.", "This paper addresses this issue in the case of irony detection by proposing a model to remove two kinds of false-alarm tweets from the training data.", "Experimental results confirm that the irony detection model benefits from the less but cleaner training data.", "Our approach can be applied to other topics that rely on self-labeled data." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "6" ], "paper_header_content": [ "Introduction", "Dataset", "Disambiguation of Hashtags", "Experiments", "Conclusion" ] }
GEM-SciDuet-train-70#paper-1153#slide-1
Self-Labeled Data
Large amounts of self-labeled data available on the Internet are popular research materials in many NLP areas. Metadata such as tags and emoticons given by users are considered as labels for training and testing learning-based models. Tweets with certain types of hashtags are collected as self-labeled data in a variety of research works.
Large amounts of self-labeled data available on the Internet are popular research materials in many NLP areas. Metadata such as tags and emoticons given by users are considered as labels for training and testing learning-based models. Tweets with certain types of hashtags are collected as self-labeled data in a variety of research works.
[]
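The record above enumerates nine handcrafted features for the target hashtag. They translate directly into code; the sketch below is illustrative only, assuming whitespace tokenization instead of the NLTK Penn Treebank tokenizer used in the paper, and 1-indexed positions for the i/|w| and j/|h| ratios, which the paper leaves implicit.

```python
# Hedged sketch of the nine handcrafted features. `tokens` is a
# tokenized tweet and `target_index` is the position of the target
# hashtag; the numbering in the comments matches the list in the paper.
HASHTAG_TYPES = ["#not", "#sarcasm", "#irony"]

def handcrafted_features(tokens, target_index):
    hashtags = [i for i, t in enumerate(tokens) if t.startswith("#")]
    j = hashtags.index(target_index)      # rank of the target among hashtags
    return {
        "len_words": len(tokens),                            # (1)
        "len_chars": sum(len(t) for t in tokens),            # (1)
        "hashtag_type": HASHTAG_TYPES.index(tokens[target_index].lower()),  # (2)
        "num_hashtags": len(hashtags),                       # (3)
        "is_first_token": target_index == 0,                 # (4)
        "is_last_token": target_index == len(tokens) - 1,    # (5)
        "is_first_hashtag": j == 0,                          # (6)
        "is_last_hashtag": j == len(hashtags) - 1,           # (7)
        "token_position": (target_index + 1) / len(tokens),  # (8) i/|w|
        "hashtag_position": (j + 1) / len(hashtags),         # (9) j/|h|
    }

# Example: #irony is the 2nd of 6 tokens and the 1st of 2 hashtags.
feats = handcrafted_features(
    ["the", "#irony", "of", "checking", "#socialmedia", "again"], 1)
```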
GEM-SciDuet-train-70#paper-1153#slide-2
1153
Disambiguating False-Alarm Hashtag Usages in Tweets for Irony Detection
The reliability of self-labeled data is an important issue when the data are regarded as ground-truth for training and testing learning-based models. This paper addresses the issue of false-alarm hashtags in the self-labeled data for irony detection. We analyze the ambiguity of hashtag usages and propose a novel neural network-based model, which incorporates linguistic information from different aspects, to disambiguate the usage of three hashtags that are widely used to collect the training data for irony detection. Furthermore, we apply our model to prune the self-labeled training data. Experimental results show that the irony detection model trained on the less but cleaner training instances outperforms the models trained on all data.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143 ], "paper_content_text": [ "Introduction Self-labeled data available on the Internet are popular research materials in many NLP areas.", "Metadata such as tags and emoticons given by users are considered as labels for training and testing learning-based models, which usually benefit from large amount of data.", "One of the sources of self-labeled data widely used in the research community is Twitter, where the short-text messages tweets written by the crowd are publicly shared.", "In a tweet, the author can tag the short text with some hashtags such as #excited, #happy, #UnbornLivesMatter, and #Hillary4President to express their emotion or opinion.", "The tweets with a certain types of hashtags are collected as self-label data in a variety of research works including sentiment analysis (Qadir and Riloff, 2014) , stance detection (Mohammad et al., 2016; Sobhani et al., 2017) , fi-nancial opinion mining (Cortis et al., 2017) , and irony detection (Ghosh et al., 2015; Peled and Reichart, 2017; Hee et al., 2018) .", "In the case of irony detection, it is impractical to manually annotate the ironic sentences from randomly sampled data due to the relatively low occurrences of irony (Davidov et al., 2010) .", "Collecting the tweets with the hashtags like #sarcasm, #irony, and #not becomes the mainstream approach to dataset construction (Sulis et al., 2016) .", "As shown in (S1), the tweet with the hashtag #not is treated as a positive (ironic) instance by removing #not from the text.", "(S1) @Anonymous doing a great job... 
#not What do I pay my extortionate council taxes for?", "#Disgrace #OngoingProblem http://t.co/FQZUUwKSoN However, the reliability of the self-labeled data is an important issue.", "As pointed out in the pioneering work, not all tweet writers know the definition of irony (Van Hee et al., 2016b) .", "For instance, (S2) is tagged with #irony by the writer, but it is just witty and amusing.", "(S2) BestProAdvice @Anonymous More clean OR cleaner, never more cleaner.", "#irony When the false-alarm instances like (S2) are collected and mixed into the training and test data, the models that learn from the unreliable data may be misled, and the evaluation also becomes suspect.", "The other kind of unreliable data comes from hashtags that do not function only as metadata.", "That is, a hashtag in a tweet may also function as a content word in its word form.", "For example, the hashtag #irony in (S3) is a part of the sentence \"the irony of taking a break...\", in contrast to the hashtag #not in (S1), which can be removed without a change of meaning.", "(S3) The #irony of taking a break from reading about #socialmedia to check my social media.", "When the hashtag functions as a content word in a tweet, the tweet is not a good candidate for self-labeled ironic instances because the sentence will be incomplete once the hashtag is removed.", "In this work, both kinds of unreliable data, the tweets with a misused hashtag and the tweets in which the hashtag serves as a content word, are our targets to remove from the training data.", "Manual data cleaning is labor-intensive and inefficient (Van Hee et al., 2016a) .", "Compared to general training data cleaning approaches (Malik and Bhardwaj, 2011; Esuli and Sebastiani, 2013; Fukumoto and Suzuki, 2004) such as boosting-based learning, this work leverages the characteristics of hashtag usages in tweets.", "With a small amount of gold-labeled data, we propose a neural network classifier for pruning the self-labeled tweets, and train an irony detector on the fewer but cleaner instances.", "This approach is easy to apply to other NLP tasks that rely on self-labeled data.", "The contributions of this work are three-fold: (1) We make an empirical study of an issue that is potentially inherent in a number of research topics based on self-labeled data.", "(2) We propose a model for hashtag disambiguation.", "For this task, the human-verified ground-truth is quite limited.", "To address the issue of sparsity, a novel neural network model for hashtag disambiguation is proposed.", "(3) The data pruning method, in which our model is applied to select reliable self-labeled data, is capable of improving the performance of irony detection.", "The rest of this paper is organized as follows.", "Section 2 describes how we construct a dataset for disambiguating false-alarm hashtag usages based on tweets.", "In Section 3, our model for hashtag disambiguation is proposed.", "Experimental results of hashtag disambiguation are shown in Section 4.", "In addition, we apply our method to prune training data for irony detection.", "The results are shown in Section 5.", "Section 6 concludes this paper.", "Dataset The tweets with indication hashtags such as #irony are usually collected as a dataset in previous works on irony detection.", "As pointed out in Section 1, the hashtags are treated as ground-truth for training and testing.", "To investigate the issue of false-alarm self-labeled tweets, the tweets with human verification are indispensable.", "In this study, we build the 
ground-truth based on the dataset released for SemEval 2018 Task 3, which is targeted for fine-grained irony detection (Hee et al., 2018) .", "In the SemEval dataset, the tweets with one of the three indication hashtags #not, #sarcasm, and #irony are collected and human-annotated as one of four types: verbal irony by means of a polarity contrast, other verbal irony, situational irony, and non-ironic.", "In other words, the false-alarm tweets, i.e., the non-ironic tweets with indication hashtags, are distinguished from the real ironic tweets in this dataset.", "However, the hashtag itself has been removed in the SemEval dataset.", "For example, the original tweet (S1) has been modified to (S4), where the hashtag #not disappears.", "As a result, the hashtag information, i.e., the position and the word form of the hashtag (not, irony, or sarcasm), is missing from the SemEval dataset.", "(S4) @Anonymous doing a great job... What do I pay my extortionate council taxes for?", "#Disgrace #OngoingProblem http://t.co/FQZUUwKSoN For hashtag disambiguation, the information of the hashtag in each tweet is mandatory.", "Thus, we recover the original tweets by using Twitter search.", "As shown in Table 1 , a total of 1,359 tweets with hashtag information are adopted as the ground-truth.", "Note that more than 20% of the self-labeled data are false alarms, and this can be an issue when they are adopted as training or test data.", "For the irony detection experiment in Section 5, we reserve the other 1,072 tweets in the SemEval dataset that are annotated as real ironic as the test data.", "In addition to the issue of hashtag disambiguation, the ironic tweets without an indication hashtag, which are regarded as non-irony instances in previous work, are another kind of misleading data for irony detection.", "Fortunately, the occurrence of such \"false-negative\" instances is insignificant due to the relatively low occurrence of irony (Davidov et al., 2010) .", "Disambiguation of Hashtags Figure 1 shows our model for distinguishing the real ironic tweets from the false-alarm ones.", "Given an instance with the hashtag #irony, the preceding and the following word sequences of the hashtag are encoded by separate sub-networks, and both embeddings are concatenated with the handcrafted features and the probabilities of three kinds of part-of-speech (POS) tag sequences.", "Finally, the sigmoid activation function decides whether the instance is real ironic or false-alarm.", "The details of each component are presented in the rest of this section.", "Word Sequences: The word sequences of the context preceding and following the targeting hashtag are separately encoded by neural network sentence encoders.", "The Penn Treebank Tokenizer provided by NLTK (Bird et al., 2009 ) is used for tokenization.", "As a result, each of the left and the right word sequences is encoded as an embedding with a length of 50.", "We experiment with a convolutional neural network (CNN) (Kim, 2014) , a gated recurrent unit (GRU) (Cho et al., 2014) , and an attentive GRU for sentence encoding.", "The CNN for sentence classification has been shown to be effective in NLP applications such as sentiment analysis (Kim, 2014) .", "Classifiers based on the recurrent neural network (RNN) have also been applied to NLP, especially for sequential modeling.", "For irony detection, one of the state-of-the-art models is based on the attentive RNN (Huang et al., 2017) .", "The first layer of the CNN, the GRU, and the attentive-GRU model is the 
300-dimensional word embedding that is initialized using the vectors pre-trained on the Google News dataset.", "Handcrafted Features: We add the handcrafted features of the tweet in the one-hot representation.", "The features taken into account are listed as follows.", "(1) Lengths of the tweet in words and in characters.", "(2) Type of the target hashtag (i.e.", "#not, #sarcasm, or #irony).", "(3) Number of all hashtags in the tweet.", "(4) Whether the targeting hashtag is the first token in the tweet.", "(5) Whether the targeting hashtag is the last token in the tweet.", "(6) Whether the targeting hashtag is the first hashtag in the tweet, since a tweet may contain more than one hashtag.", "(7) Whether the targeting hashtag is the last hashtag in the tweet.", "(8) Position of the targeting hashtag in terms of tokens.", "If the targeting hashtag is the ith token of a tweet with |w| tokens, this feature is i/|w|.", "(9) Position of the targeting hashtag among all hashtags in the tweet.", "It is computed as j/|h|, where the targeting hashtag is the jth hashtag in a tweet that contains |h| hashtags.", "Language Modeling of POS Sequences: As mentioned in Section 1, one kind of false-alarm hashtag usage is the case in which the hashtag also functions as a content word.", "In this paper, we attempt to measure the grammatical completeness of the tweet with and without the hashtag.", "Therefore, a language model at the level of POS tags is used.", "As shown in Figure 1 , POS tagging is performed on three versions of the tweet, and based on these, three probabilities are measured and taken into account: 1) ph: the tweet with the whole hashtag removed.", "2) ps: the tweet with only the hash symbol # removed.", "3) pt: the original tweet.", "Our idea is that a tweet will be more grammatically complete with only the hash symbol removed if the hashtag is also a content word.", "On the other hand, the tweet will be more grammatically complete with the whole hashtag removed if the hashtag is pure metadata.", "To measure the probability of a POS tag sequence, we integrate a neural network-based language model of POS sequences into our model.", "RNN-based language models are reportedly capable of modeling longer dependencies among sequential tokens (Mikolov et al., 2011) .", "Two million English tweets that are entirely different from those in the training and test data described in Section 2 are collected and tagged with POS tags.", "We train a GRU language model at the level of POS tags.", "In this work, all the POS tagging is performed with the Stanford CoreNLP toolkit (Manning et al., 2014) .", "Experiments We compare our model with popular neural network-based sentence classifiers including the CNN, the GRU, and the attentive GRU.", "We also train a logistic regression (LR) classifier with the handcrafted features introduced in Section 3.", "For the imbalanced data, we assign class weights inversely proportional to the class frequencies.", "Five-fold cross-validation is performed.", "Early stopping is employed with a patience of 5 epochs.", "In each fold, we further keep 10% of the training data for tuning the model.", "The hidden dimension is 50, the batch size is 32, and the Adam optimizer is employed (Kingma and Ba, 2014) .", "Table 2 shows the experimental results reported in Precision (P), Recall (R), and F-score (F).", "Our goal is to select the real ironic tweets for training the irony detection model.", "Thus, the real ironic tweets are regarded as positive, and the false-alarm ones are negative.", "We 
apply the t-test for significance testing.", "The vanilla GRU and attentive GRU are slightly superior to the logistic regression model.", "The CNN model performs the worst in this task because it suffers from an over-fitting problem.", "We explored a number of layouts and hyperparameters for the CNN model and observed consistent results.", "Our method is evaluated with either a CNN, a GRU, or an attentive GRU for encoding the context preceding and following the targeting hashtag.", "By integrating various kinds of information, our method outperforms all baseline models no matter which encoder is used.", "The best model is the one integrating the attentive GRU encoder, which is significantly superior to all baseline models (p < 0.05) and achieves an F-score of 88.49%. To confirm the effectiveness of the language modeling of POS sequences, we also try excluding the GRU language model from our best model.", "Experimental results show that the addition of the language model significantly improves the performance (p < 0.05).", "As shown in the last row of Table 2 , the F-score drops to 84.17% when the language model is excluded.", "From the data, we observe that the instances whose ps is higher than ph usually contain an indication hashtag that functions as a content word, and vice versa.", "For instance, (S5) and (S6) show the instances with the highest and the lowest ps relative to ph, respectively.", "(S5) when your #sarcasm is so advanced people actually think you are #stupid .. (S6) #mtvstars justin bieber #net #not #fast Irony Detection We employ our model to prune self-labeled data for irony detection.", "As prior work did, we collect a set of tweets that contain indication hashtags as (pseudo) positive instances and also collect a set of tweets that do not contain indication hashtags as negative instances.", "For each positive instance, our model predicts whether it is a real ironic tweet or a false-alarm one, and the false-alarm ones are discarded.", "After pruning, a set of 14,055 tweets containing indication hashtags has been reduced to 4,617 reliable positive instances according to our model.", "We add an equal number of negative instances randomly selected from the collection of the tweets that do not contain indication hashtags.", "As a result, the pre- and post-pruning training data, with sizes of 28,110 and 9,234, respectively, are prepared for experiments.", "The dataflow of the training data pruning is shown in Figure 2 .", "For evaluating the effectiveness of our pruning method, we implement a state-of-the-art irony detector (Huang et al., 2017) , which is based on an attentive-RNN classifier, and train it on the pre- and post-pruned training data.", "The test data are constructed as follows.", "The positive instances in the test data are taken from the 1,072 human-verified ironic tweets that are reserved for irony detection as mentioned in Section 2.", "The negative instances in the test data are obtained from the tweets that do not contain indication hashtags.", "Note that the negative instances in the test data are isolated from those in the training data.", "Experimental results confirm the benefit of pruning.", "As shown in Table 3 , the irony detection model trained on the less but cleaner data significantly outperforms the model that is trained on all data (p < 0.05).", "We compare our pruning method with an alternative approach that trains the irony detector on the human-verified data directly.", "Under these circumstances, the 1,083 ironic instances used for training our hashtag disambiguation model are 
mixed with an equal number of randomly sampled negative instances and employed to train the irony detector.", "As shown in the last row of Table 3 , the irony detector trained on the small dataset cannot compete with the models that are trained on a larger amount of self-labeled data.", "In other words, our data pruning strategy forms a semi-supervised learning scheme that benefits from both self-labeled data and human annotation.", "Note that this task and the dataset are different from those of the official evaluation of SemEval 2018 Task 3, so the experimental results cannot be directly compared.", "The calibrated confidence output by the sigmoid layer of our hashtag disambiguation model can be regarded as a measurement of the reliability of an instance (Niculescu-Mizil and Caruana, 2005; Guo et al., 2017) .", "Thus, we can sort all self-labeled data by their calibrated confidence and control the size of the training set by adjusting the threshold.", "The higher the threshold value is set, the fewer training instances remain.", "Figure 3 shows the performance of the irony detector trained on the data filtered with different threshold values.", "For each threshold value, the bullet symbol (•) indicates the size of the training data, and the bar indicates the F-score achieved by the irony detector trained on those data.", "The best result is achieved by the irony detector trained on the 9,234 instances filtered by our model with the default threshold value (0.5).", "This confirms that our model is able to select useful training instances in a strict manner.", "Conclusion Self-labeled data is an accessible and economical resource for a variety of learning-based applications.", "However, directly using the labels made by the crowd as ground-truth for training and testing may lead to inaccurate performance estimates due to the reliability issue.", "This paper addresses this issue in the case of irony detection by proposing a model to remove two kinds of false-alarm tweets from the training data.", "Experimental results confirm that the irony detection model benefits from the less but cleaner training data.", "Our approach can be applied to other topics that rely on self-labeled data." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "6" ], "paper_header_content": [ "Introduction", "Dataset", "Disambiguation of Hashtags", "Experiments", "Conclusion" ] }
GEM-SciDuet-train-70#paper-1153#slide-2
Irony Detection with Hashtag Information
It is impractical to manually annotate ironic sentences from randomly sampled data due to the relatively low occurrence of irony. Alternatively, collecting tweets with hashtags like #sarcasm, #irony, and #not has become the mainstream approach. @Anonymous doing a great job... #not What do I pay my extortionate council taxes for? #Disgrace #OngoingProblem http://t.co/FQZUUwKSoN
It is impractical to manually annotate ironic sentences from randomly sampled data due to the relatively low occurrence of irony. Alternatively, collecting tweets with hashtags like #sarcasm, #irony, and #not has become the mainstream approach. @Anonymous doing a great job... #not What do I pay my extortionate council taxes for? #Disgrace #OngoingProblem http://t.co/FQZUUwKSoN
[]
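The record above describes scoring three variants of each tweet with a POS-level language model to obtain ph, ps, and pt. A hedged sketch of that scoring follows; `pos_tag` and `pos_lm` are hypothetical placeholders for the Stanford CoreNLP tagger and the GRU POS language model, and the string surgery is simplified (it assumes the target hashtag occurs exactly once in the tweet).

```python
# Hedged sketch of the POS-sequence probabilities: p_h scores the tweet
# with the whole hashtag removed, p_s with only the '#' symbol removed,
# and p_t scores the original tweet. Interfaces are assumptions.
def lm_probabilities(tweet, target_hashtag, pos_tag, pos_lm):
    no_hashtag = tweet.replace(target_hashtag, "").strip()        # for p_h
    no_hash_symbol = tweet.replace(target_hashtag,
                                   target_hashtag.lstrip("#"))    # for p_s
    p_h = pos_lm.score(pos_tag(no_hashtag))
    p_s = pos_lm.score(pos_tag(no_hash_symbol))
    p_t = pos_lm.score(pos_tag(tweet))
    return p_h, p_s, p_t
```

The intuition, per the record above: if the hashtag doubles as a content word, dropping only the '#' keeps the sentence grammatical, so ps tends to exceed ph; if the hashtag is pure metadata, the opposite holds.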
GEM-SciDuet-train-70#paper-1153#slide-3
1153
Disambiguating False-Alarm Hashtag Usages in Tweets for Irony Detection
The reliability of self-labeled data is an important issue when the data are regarded as ground-truth for training and testing learning-based models. This paper addresses the issue of false-alarm hashtags in the self-labeled data for irony detection. We analyze the ambiguity of hashtag usages and propose a novel neural network-based model, which incorporates linguistic information from different aspects, to disambiguate the usage of three hashtags that are widely used to collect the training data for irony detection. Furthermore, we apply our model to prune the self-labeled training data. Experimental results show that the irony detection model trained on the less but cleaner training instances outperforms the models trained on all data.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143 ], "paper_content_text": [ "Introduction Self-labeled data available on the Internet are popular research materials in many NLP areas.", "Metadata such as tags and emoticons given by users are considered as labels for training and testing learning-based models, which usually benefit from large amount of data.", "One of the sources of self-labeled data widely used in the research community is Twitter, where the short-text messages tweets written by the crowd are publicly shared.", "In a tweet, the author can tag the short text with some hashtags such as #excited, #happy, #UnbornLivesMatter, and #Hillary4President to express their emotion or opinion.", "The tweets with a certain types of hashtags are collected as self-label data in a variety of research works including sentiment analysis (Qadir and Riloff, 2014) , stance detection (Mohammad et al., 2016; Sobhani et al., 2017) , fi-nancial opinion mining (Cortis et al., 2017) , and irony detection (Ghosh et al., 2015; Peled and Reichart, 2017; Hee et al., 2018) .", "In the case of irony detection, it is impractical to manually annotate the ironic sentences from randomly sampled data due to the relatively low occurrences of irony (Davidov et al., 2010) .", "Collecting the tweets with the hashtags like #sarcasm, #irony, and #not becomes the mainstream approach to dataset construction (Sulis et al., 2016) .", "As shown in (S1), the tweet with the hashtag #not is treated as a positive (ironic) instance by removing #not from the text.", "(S1) @Anonymous doing a great job... 
#not What do I pay my extortionate council taxes for?", "#Disgrace #OngoingProblem http://t.co/FQZUUwKSoN However, the reliability of the self-labeled data is an important issue.", "As pointed out in the pioneering work, not all tweet writers know the definition of irony (Van Hee et al., 2016b) .", "For instance, (S2) is tagged with #irony by the writer, but it is just witty and amusing.", "(S2) BestProAdvice @Anonymous More clean OR cleaner, never more cleaner.", "#irony When the false-alarm instances like (S2) are collected and mixed into the training and test data, the models that learn from the unreliable data may be misled, and the evaluation also becomes suspect.", "The other kind of unreliable data comes from hashtags that do not function only as metadata.", "That is, a hashtag in a tweet may also function as a content word in its word form.", "For example, the hashtag #irony in (S3) is a part of the sentence \"the irony of taking a break...\", in contrast to the hashtag #not in (S1), which can be removed without a change of meaning.", "(S3) The #irony of taking a break from reading about #socialmedia to check my social media.", "When the hashtag functions as a content word in a tweet, the tweet is not a good candidate for self-labeled ironic instances because the sentence will be incomplete once the hashtag is removed.", "In this work, both kinds of unreliable data, the tweets with a misused hashtag and the tweets in which the hashtag serves as a content word, are our targets to remove from the training data.", "Manual data cleaning is labor-intensive and inefficient (Van Hee et al., 2016a) .", "Compared to general training data cleaning approaches (Malik and Bhardwaj, 2011; Esuli and Sebastiani, 2013; Fukumoto and Suzuki, 2004) such as boosting-based learning, this work leverages the characteristics of hashtag usages in tweets.", "With a small amount of gold-labeled data, we propose a neural network classifier for pruning the self-labeled tweets, and train an irony detector on the fewer but cleaner instances.", "This approach is easy to apply to other NLP tasks that rely on self-labeled data.", "The contributions of this work are three-fold: (1) We make an empirical study of an issue that is potentially inherent in a number of research topics based on self-labeled data.", "(2) We propose a model for hashtag disambiguation.", "For this task, the human-verified ground-truth is quite limited.", "To address the issue of sparsity, a novel neural network model for hashtag disambiguation is proposed.", "(3) The data pruning method, in which our model is applied to select reliable self-labeled data, is capable of improving the performance of irony detection.", "The rest of this paper is organized as follows.", "Section 2 describes how we construct a dataset for disambiguating false-alarm hashtag usages based on tweets.", "In Section 3, our model for hashtag disambiguation is proposed.", "Experimental results of hashtag disambiguation are shown in Section 4.", "In addition, we apply our method to prune training data for irony detection.", "The results are shown in Section 5.", "Section 6 concludes this paper.", "Dataset The tweets with indication hashtags such as #irony are usually collected as a dataset in previous works on irony detection.", "As pointed out in Section 1, the hashtags are treated as ground-truth for training and testing.", "To investigate the issue of false-alarm self-labeled tweets, the tweets with human verification are indispensable.", "In this study, we build the 
ground-truth based on the dataset released for SemEval 2018 Task 3, which is targeted for fine-grained irony detection (Hee et al., 2018) .", "In the SemEval dataset, the tweets with one of the three indication hashtags #not, #sarcasm, and #irony are collected and human-annotated as one of four types: verbal irony by means of a polarity contrast, other verbal irony, situational irony, and non-ironic.", "In other words, the false-alarm tweets, i.e., the non-ironic tweets with indication hashtags, are distinguished from the real ironic tweets in this dataset.", "However, the hashtag itself has been removed in the SemEval dataset.", "For example, the original tweet (S1) has been modified to (S4), where the hashtag #not disappears.", "As a result, the hashtag information, i.e., the position and the word form of the hashtag (not, irony, or sarcasm), is missing from the SemEval dataset.", "(S4) @Anonymous doing a great job... What do I pay my extortionate council taxes for?", "#Disgrace #OngoingProblem http://t.co/FQZUUwKSoN For hashtag disambiguation, the information of the hashtag in each tweet is mandatory.", "Thus, we recover the original tweets by using Twitter search.", "As shown in Table 1 , a total of 1,359 tweets with hashtag information are adopted as the ground-truth.", "Note that more than 20% of the self-labeled data are false alarms, and this can be an issue when they are adopted as training or test data.", "For the irony detection experiment in Section 5, we reserve the other 1,072 tweets in the SemEval dataset that are annotated as real ironic as the test data.", "In addition to the issue of hashtag disambiguation, the ironic tweets without an indication hashtag, which are regarded as non-irony instances in previous work, are another kind of misleading data for irony detection.", "Fortunately, the occurrence of such \"false-negative\" instances is insignificant due to the relatively low occurrence of irony (Davidov et al., 2010) .", "Disambiguation of Hashtags Figure 1 shows our model for distinguishing the real ironic tweets from the false-alarm ones.", "Given an instance with the hashtag #irony, the preceding and the following word sequences of the hashtag are encoded by separate sub-networks, and both embeddings are concatenated with the handcrafted features and the probabilities of three kinds of part-of-speech (POS) tag sequences.", "Finally, the sigmoid activation function decides whether the instance is real ironic or false-alarm.", "The details of each component are presented in the rest of this section.", "Word Sequences: The word sequences of the context preceding and following the targeting hashtag are separately encoded by neural network sentence encoders.", "The Penn Treebank Tokenizer provided by NLTK (Bird et al., 2009 ) is used for tokenization.", "As a result, each of the left and the right word sequences is encoded as an embedding with a length of 50.", "We experiment with a convolutional neural network (CNN) (Kim, 2014) , a gated recurrent unit (GRU) (Cho et al., 2014) , and an attentive GRU for sentence encoding.", "The CNN for sentence classification has been shown to be effective in NLP applications such as sentiment analysis (Kim, 2014) .", "Classifiers based on the recurrent neural network (RNN) have also been applied to NLP, especially for sequential modeling.", "For irony detection, one of the state-of-the-art models is based on the attentive RNN (Huang et al., 2017) .", "The first layer of the CNN, the GRU, and the attentive-GRU model is the 
300-dimensional word embedding that is initialized using the vectors pre-trained on the Google News dataset.", "Handcrafted Features: We add the handcrafted features of the tweet in the one-hot representation.", "The features taken into account are listed as follows.", "(1) Lengths of the tweet in words and in characters.", "(2) Type of the target hashtag (i.e.", "#not, #sarcasm, or #irony).", "(3) Number of all hashtags in the tweet.", "(4) Whether the targeting hashtag is the first token in the tweet.", "(5) Whether the targeting hashtag is the last token in the tweet.", "(6) Whether the targeting hashtag is the first hashtag in the tweet, since a tweet may contain more than one hashtag.", "(7) Whether the targeting hashtag is the last hashtag in the tweet.", "(8) Position of the targeting hashtag in terms of tokens.", "If the targeting hashtag is the ith token of a tweet with |w| tokens, this feature is i/|w|.", "(9) Position of the targeting hashtag among all hashtags in the tweet.", "It is computed as j/|h|, where the targeting hashtag is the jth hashtag in a tweet that contains |h| hashtags.", "Language Modeling of POS Sequences: As mentioned in Section 1, one kind of false-alarm hashtag usage is the case in which the hashtag also functions as a content word.", "In this paper, we attempt to measure the grammatical completeness of the tweet with and without the hashtag.", "Therefore, a language model at the level of POS tags is used.", "As shown in Figure 1 , POS tagging is performed on three versions of the tweet, and based on these, three probabilities are measured and taken into account: 1) ph: the tweet with the whole hashtag removed.", "2) ps: the tweet with only the hash symbol # removed.", "3) pt: the original tweet.", "Our idea is that a tweet will be more grammatically complete with only the hash symbol removed if the hashtag is also a content word.", "On the other hand, the tweet will be more grammatically complete with the whole hashtag removed if the hashtag is pure metadata.", "To measure the probability of a POS tag sequence, we integrate a neural network-based language model of POS sequences into our model.", "RNN-based language models are reportedly capable of modeling longer dependencies among sequential tokens (Mikolov et al., 2011) .", "Two million English tweets that are entirely different from those in the training and test data described in Section 2 are collected and tagged with POS tags.", "We train a GRU language model at the level of POS tags.", "In this work, all the POS tagging is performed with the Stanford CoreNLP toolkit (Manning et al., 2014) .", "Experiments We compare our model with popular neural network-based sentence classifiers including the CNN, the GRU, and the attentive GRU.", "We also train a logistic regression (LR) classifier with the handcrafted features introduced in Section 3.", "For the imbalanced data, we assign class weights inversely proportional to the class frequencies.", "Five-fold cross-validation is performed.", "Early stopping is employed with a patience of 5 epochs.", "In each fold, we further keep 10% of the training data for tuning the model.", "The hidden dimension is 50, the batch size is 32, and the Adam optimizer is employed (Kingma and Ba, 2014) .", "Table 2 shows the experimental results reported in Precision (P), Recall (R), and F-score (F).", "Our goal is to select the real ironic tweets for training the irony detection model.", "Thus, the real ironic tweets are regarded as positive, and the false-alarm ones are negative.", "We 
apply the t-test for significance testing.", "The vanilla GRU and attentive GRU are slightly superior to the logistic regression model.", "The CNN model performs the worst in this task because it suffers from an over-fitting problem.", "We explored a number of layouts and hyperparameters for the CNN model and observed consistent results.", "Our method is evaluated with either a CNN, a GRU, or an attentive GRU for encoding the context preceding and following the targeting hashtag.", "By integrating various kinds of information, our method outperforms all baseline models no matter which encoder is used.", "The best model is the one integrating the attentive GRU encoder, which is significantly superior to all baseline models (p < 0.05) and achieves an F-score of 88.49%. To confirm the effectiveness of the language modeling of POS sequences, we also try excluding the GRU language model from our best model.", "Experimental results show that the addition of the language model significantly improves the performance (p < 0.05).", "As shown in the last row of Table 2 , the F-score drops to 84.17% when the language model is excluded.", "From the data, we observe that the instances whose ps is higher than ph usually contain an indication hashtag that functions as a content word, and vice versa.", "For instance, (S5) and (S6) show the instances with the highest and the lowest ps relative to ph, respectively.", "(S5) when your #sarcasm is so advanced people actually think you are #stupid .. (S6) #mtvstars justin bieber #net #not #fast Irony Detection We employ our model to prune self-labeled data for irony detection.", "As prior work did, we collect a set of tweets that contain indication hashtags as (pseudo) positive instances and also collect a set of tweets that do not contain indication hashtags as negative instances.", "For each positive instance, our model predicts whether it is a real ironic tweet or a false-alarm one, and the false-alarm ones are discarded.", "After pruning, a set of 14,055 tweets containing indication hashtags has been reduced to 4,617 reliable positive instances according to our model.", "We add an equal number of negative instances randomly selected from the collection of the tweets that do not contain indication hashtags.", "As a result, the pre- and post-pruning training data, with sizes of 28,110 and 9,234, respectively, are prepared for experiments.", "The dataflow of the training data pruning is shown in Figure 2 .", "For evaluating the effectiveness of our pruning method, we implement a state-of-the-art irony detector (Huang et al., 2017) , which is based on an attentive-RNN classifier, and train it on the pre- and post-pruned training data.", "The test data are constructed as follows.", "The positive instances in the test data are taken from the 1,072 human-verified ironic tweets that are reserved for irony detection as mentioned in Section 2.", "The negative instances in the test data are obtained from the tweets that do not contain indication hashtags.", "Note that the negative instances in the test data are isolated from those in the training data.", "Experimental results confirm the benefit of pruning.", "As shown in Table 3 , the irony detection model trained on the less but cleaner data significantly outperforms the model that is trained on all data (p < 0.05).", "We compare our pruning method with an alternative approach that trains the irony detector on the human-verified data directly.", "Under these circumstances, the 1,083 ironic instances used for training our hashtag disambiguation model are 
mixed with an equal number of randomly sampled negative instances and employed to train the irony detector.", "As shown in the last row of Table 3, the irony detector trained on this small dataset does not compete with the models trained on the larger amount of self-labeled data.", "In other words, our data pruning strategy forms a semi-supervised learning scheme that benefits from both self-labeled data and human annotation.", "Note that this task and the dataset are different from those of the official evaluation of SemEval 2018 Task 3, so the experimental results cannot be directly compared.", "The calibrated confidence output by the sigmoid layer of our hashtag disambiguation model can be regarded as a measurement of the reliability of an instance (Niculescu-Mizil and Caruana, 2005; Guo et al., 2017).", "Thus, we can sort all self-labeled data by their calibrated confidence and control the size of the training set by adjusting the threshold.", "The higher the threshold value is set, the fewer training instances remain.", "Figure 3 shows the performance of the irony detector trained on the data filtered with different threshold values.", "For each threshold value, the bullet symbol (•) indicates the size of the training data, and the bar indicates the F-score achieved by the irony detector trained on those data.", "The best result is achieved by the irony detector trained on the 9,234 instances filtered by our model with the default threshold value (0.5).", "This confirms that our model is able to select useful training instances in a strict manner.", "Conclusion Self-labeled data is an accessible and economical resource for a variety of learning-based applications.", "However, directly using the labels made by the crowd as ground truth for training and testing may lead to inaccurate performance estimates due to the reliability issue.", "This paper addresses this issue in the case of irony detection by proposing a model to remove two kinds of false-alarm tweets from the training data.", "Experimental results confirm that the irony detection model benefits from the less, but cleaner, training data.", "Our approach can be applied to other topics that rely on self-labeled data." ] }
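To make the handcrafted features above concrete, here is a minimal Python sketch of features (3) through (9); the function name, the example call, and the whitespace tokenization are illustrative assumptions, not the paper's released code.

```python
def extract_position_features(tokens, target):
    """Positional features of the targeting hashtag.

    tokens: the tokenized tweet; target: the targeting hashtag, e.g. '#irony'.
    """
    hashtags = [t for t in tokens if t.startswith("#")]
    i = tokens.index(target) + 1    # 1-based position among all |w| tokens
    j = hashtags.index(target) + 1  # 1-based position among all |h| hashtags
    return {
        "num_hashtags": len(hashtags),              # feature (3)
        "is_first_token": tokens[0] == target,      # feature (4)
        "is_last_token": tokens[-1] == target,      # feature (5)
        "is_first_hashtag": hashtags[0] == target,  # feature (6)
        "is_last_hashtag": hashtags[-1] == target,  # feature (7)
        "token_position": i / len(tokens),          # feature (8): i/|w|
        "hashtag_position": j / len(hashtags),      # feature (9): j/|h|
    }

# Hypothetical example:
# extract_position_features("the #irony of checking my #socialmedia feed".split(), "#irony")
```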
{ "paper_header_number": [ "1", "2", "3", "4", "6" ], "paper_header_content": [ "Introduction", "Dataset", "Disambiguation of Hashtags", "Experiments", "Conclusion" ] }
GEM-SciDuet-train-70#paper-1153#slide-3
False-Alarm Issue
The reliability of the self-labeled data is an important issue. Not all tweet writers know the definition of irony. BestProAdvice @Anonymous More clean OR cleaner, never more cleaner. #irony
[]
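The pruning dataflow described in the Irony Detection part above, keeping only the pseudo-positives that the disambiguation model accepts as real irony and balancing with sampled negatives, can be sketched in a few lines; `model.predict_proba` is a stand-in for the sigmoid output of the disambiguation model.

```python
import random

def prune_training_data(pseudo_positives, negative_pool, model, threshold=0.5):
    """Keep pseudo-positives the model accepts as real irony; balance with negatives."""
    kept = [t for t in pseudo_positives if model.predict_proba(t) >= threshold]
    negatives = random.sample(negative_pool, len(kept))  # equal number of negatives
    return kept, negatives

# With the default threshold of 0.5 the paper reports 14,055 pseudo-positives
# shrinking to 4,617 reliable instances, i.e. 9,234 training tweets after balancing.
```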
GEM-SciDuet-train-70#paper-1153#slide-4
1153
Disambiguating False-Alarm Hashtag Usages in Tweets for Irony Detection
The reliability of self-labeled data is an important issue when the data are regarded as ground-truth for training and testing learning-based models. This paper addresses the issue of false-alarm hashtags in the self-labeled data for irony detection. We analyze the ambiguity of hashtag usages and propose a novel neural network-based model, which incorporates linguistic information from different aspects, to disambiguate the usage of three hashtags that are widely used to collect the training data for irony detection. Furthermore, we apply our model to prune the self-labeled training data. Experimental results show that the irony detection model trained on the fewer but cleaner training instances outperforms the models trained on all data.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143 ], "paper_content_text": [ "Introduction Self-labeled data available on the Internet are popular research materials in many NLP areas.", "Metadata such as tags and emoticons given by users are considered as labels for training and testing learning-based models, which usually benefit from large amount of data.", "One of the sources of self-labeled data widely used in the research community is Twitter, where the short-text messages tweets written by the crowd are publicly shared.", "In a tweet, the author can tag the short text with some hashtags such as #excited, #happy, #UnbornLivesMatter, and #Hillary4President to express their emotion or opinion.", "The tweets with a certain types of hashtags are collected as self-label data in a variety of research works including sentiment analysis (Qadir and Riloff, 2014) , stance detection (Mohammad et al., 2016; Sobhani et al., 2017) , fi-nancial opinion mining (Cortis et al., 2017) , and irony detection (Ghosh et al., 2015; Peled and Reichart, 2017; Hee et al., 2018) .", "In the case of irony detection, it is impractical to manually annotate the ironic sentences from randomly sampled data due to the relatively low occurrences of irony (Davidov et al., 2010) .", "Collecting the tweets with the hashtags like #sarcasm, #irony, and #not becomes the mainstream approach to dataset construction (Sulis et al., 2016) .", "As shown in (S1), the tweet with the hashtag #not is treated as a positive (ironic) instance by removing #not from the text.", "(S1) @Anonymous doing a great job... 
#not What do I pay my extortionate council taxes for?", "#Disgrace #OngoingProblem http://t.co/FQZUUwKSoN However, the reliability of the self-labeled data is an important issue.", "As pointed out in the pioneering work, not all tweet writers know the definition of irony (Van Hee et al., 2016b).", "For instance, (S2) is tagged with #irony by the writer, but it is just witty and amusing.", "(S2) BestProAdvice @Anonymous More clean OR cleaner, never more cleaner.", "#irony When false-alarm instances like (S2) are collected and mixed into the training and test data, the models that learn from the unreliable data may be misled, and the evaluation also becomes suspect.", "The other kind of unreliable data comes from hashtags that do not function only as metadata.", "That is, a hashtag in a tweet may also function as a content word in its word form.", "For example, the hashtag #irony in (S3) is a part of the sentence \"the irony of taking a break...\", in contrast to the hashtag #not in (S1), which can be removed without a change of meaning.", "(S3) The #irony of taking a break from reading about #socialmedia to check my social media.", "When the hashtag acts as a content word in a tweet, the tweet is not a good candidate for a self-labeled ironic instance because the sentence will be incomplete once the hashtag is removed.", "In this work, both kinds of unreliable data, the tweets with a misused hashtag and the tweets in which the hashtag serves as a content word, are our targets to remove from the training data.", "Manual data cleaning is labor-intensive and inefficient (Van Hee et al., 2016a).", "Compared to general training data cleaning approaches (Malik and Bhardwaj, 2011; Esuli and Sebastiani, 2013; Fukumoto and Suzuki, 2004) such as boosting-based learning, this work leverages the characteristics of hashtag usages in tweets.", "With a small amount of gold-labeled data, we propose a neural network classifier for pruning the self-labeled tweets, and train an irony detector on the fewer but cleaner instances.", "This approach is easy to apply to other NLP tasks that rely on self-labeled data.", "The contributions of this work are three-fold: (1) We make an empirical study of an issue that is potentially inherent in a number of research topics based on self-labeled data.", "(2) We propose a model for hashtag disambiguation.", "For this task, the human-verified ground truth is quite limited.", "To address the issue of sparsity, a novel neural network model for hashtag disambiguation is proposed.", "(3) The data pruning method, in which our model is applied to select reliable self-labeled data, is capable of improving the performance of irony detection.", "The rest of this paper is organized as follows.", "Section 2 describes how we construct a dataset for disambiguating false-alarm hashtag usages based on tweets.", "In Section 3, our model for hashtag disambiguation is proposed.", "Experimental results of hashtag disambiguation are shown in Section 4.", "In addition, we apply our method to prune training data for irony detection.", "The results are shown in Section 5.", "Section 6 concludes this paper.", "Dataset The tweets with indication hashtags such as #irony are usually collected as a dataset in previous works on irony detection.", "As pointed out in Section 1, the hashtags are treated as ground truth for training and testing.", "To investigate the issue of false-alarm self-labeled tweets, tweets with human verification are indispensable.", "In this study, we build the ground truth based on the dataset released for SemEval 2018 Task 3, which targets fine-grained irony detection (Hee et al., 2018).", "In the SemEval dataset, the tweets with one of the three indication hashtags #not, #sarcasm, and #irony are collected and human-annotated as one of four types: verbal irony by means of a polarity contrast, other verbal irony, situational irony, and non-ironic.", "In other words, the false-alarm tweets, i.e., the non-ironic tweets with indication hashtags, are distinguished from the real ironic tweets in this dataset.", "However, the hashtag itself has been removed in the SemEval dataset.", "For example, the original tweet (S1) has been modified to (S4), where the hashtag #not disappears.", "As a result, the hashtag information, i.e., the position and the word form of the hashtag (not, irony, or sarcasm), is missing from the SemEval dataset.", "(S4) @Anonymous doing a great job... What do I pay my extortionate council taxes for?", "#Disgrace #OngoingProblem http://t.co/FQZUUwKSoN For hashtag disambiguation, the information about the hashtag in each tweet is mandatory.", "Thus, we recover the original tweets by using Twitter search.", "As shown in Table 1, a total of 1,359 tweets with hashtag information are adopted as the ground truth.", "Note that more than 20% of the self-labeled data are false alarms, and this can be an issue when they are adopted as training or test data.", "For the irony detection experiment in Section 5, we reserve the other 1,072 tweets in the SemEval dataset that are annotated as real ironic as the test data.", "In addition to the issue of hashtag disambiguation, the ironic tweets without an indication hashtag, which are regarded as non-irony instances in previous work, are another kind of misleading data for irony detection.", "Fortunately, the occurrence of such \"false-negative\" instances is insignificant due to the relatively low occurrence of irony (Davidov et al., 2010).", "Disambiguation of Hashtags Figure 1 shows our model for distinguishing the real ironic tweets from the false-alarm ones.", "Given an instance with the hashtag #irony, the preceding and following word sequences of the hashtag are encoded by separate sub-networks, and both embeddings are concatenated with the handcrafted features and the probabilities of three kinds of part-of-speech (POS) tag sequences.", "Finally, the sigmoid activation function decides whether the instance is real ironic or false-alarm.", "The details of each component are presented in the rest of this section.", "Word Sequences: The word sequences of the context preceding and following the targeting hashtag are separately encoded by neural network sentence encoders.", "The Penn Treebank tokenizer provided by NLTK (Bird et al., 2009) is used for tokenization.", "As a result, each of the left and right word sequences is encoded as an embedding of length 50.", "We experiment with a convolutional neural network (CNN) (Kim, 2014), a gated recurrent unit (GRU) (Cho et al., 2014), and an attentive GRU for sentence encoding.", "CNNs for sentence classification have been shown effective in NLP applications such as sentiment analysis (Kim, 2014).", "Classifiers based on recurrent neural networks (RNNs) have also been applied to NLP, especially for sequential modeling.", "For irony detection, one of the state-of-the-art models is based on the attentive RNN (Huang et al., 2017).", "The first layer of the CNN, the GRU, and the attentive-GRU models is the 300-dimensional word embedding, initialized with the vectors pre-trained on the Google News dataset." ] }
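A compact PyTorch sketch of the architecture just described: two GRU context encoders whose final states are concatenated with the handcrafted features and the three POS-LM probabilities, followed by a sigmoid output. The 300-dimensional embeddings and the hidden size of 50 follow the text; the vocabulary size and the handcrafted feature count are placeholder assumptions, and this is an illustrative re-implementation, not the authors' code.

```python
import torch
import torch.nn as nn

class HashtagDisambiguator(nn.Module):
    def __init__(self, vocab_size, n_handcrafted, emb_dim=300, hidden=50):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)  # init from word2vec in practice
        self.left_enc = nn.GRU(emb_dim, hidden, batch_first=True)
        self.right_enc = nn.GRU(emb_dim, hidden, batch_first=True)
        # two context embeddings + handcrafted features + (p_h, p_s, p_t)
        self.out = nn.Linear(2 * hidden + n_handcrafted + 3, 1)

    def forward(self, left_ids, right_ids, handcrafted, pos_probs):
        _, h_left = self.left_enc(self.emb(left_ids))    # h_left: (1, B, hidden)
        _, h_right = self.right_enc(self.emb(right_ids))
        x = torch.cat([h_left[-1], h_right[-1], handcrafted, pos_probs], dim=-1)
        return torch.sigmoid(self.out(x)).squeeze(-1)    # real irony vs. false alarm

# Training roughly as described in the Experiments part: Adam optimizer,
# batch size 32, class-weighted binary cross-entropy for the imbalanced data.
model = HashtagDisambiguator(vocab_size=50000, n_handcrafted=12)
optimizer = torch.optim.Adam(model.parameters())
```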
GEM-SciDuet-train-70#paper-1153#slide-4
Hashtags Functioning as Content Words
A hashtag in a tweet may also function as a content word in its word form. The removal of the hashtag can change the meaning of the tweet, or even make the tweet grammatically incomplete. The #irony of taking a break from reading about #socialmedia to check my social media.
[]
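The threshold sweep behind Figure 3, which treats the calibrated sigmoid confidence as a reliability score and cuts the self-labeled data at different values, could look like the sketch below; `train_and_eval_irony_detector` is a hypothetical helper that trains the detector and returns its test F-score.

```python
def sweep_thresholds(pseudo_positives, negative_pool, model,
                     train_and_eval_irony_detector,
                     thresholds=(0.3, 0.4, 0.5, 0.6, 0.7)):
    """Report training-set size and F-score for each confidence cut-off."""
    scored = [(model.predict_proba(t), t) for t in pseudo_positives]
    for th in thresholds:
        kept = [t for score, t in scored if score >= th]  # stricter cut, fewer instances
        f1 = train_and_eval_irony_detector(kept, negative_pool)
        print(f"threshold={th:.1f}  positives kept={len(kept)}  F={f1:.4f}")
```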
GEM-SciDuet-train-70#paper-1153#slide-5
1153
GEM-SciDuet-train-70#paper-1153#slide-5
Research Goal
Two kinds of unreliable data are our targets to remove from the training data for irony detection: the tweets with a misused hashtag, and the tweets in which the hashtag serves as a content word. Compared to general training data cleaning approaches, our work leverages the characteristics of hashtag usages in tweets. With a small amount of gold-labeled data, we propose a neural network classifier for pruning the self-labeled tweets, and train an irony detector on the fewer but cleaner instances.
[]
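The class weighting mentioned in the Experiments part (weights inversely proportional to class frequencies) amounts to a few lines; the example counts follow from the numbers reported above (1,359 verified tweets, of which 1,083 are ironic, leaving 276 false alarms).

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Class weights inversely proportional to class frequencies."""
    counts = Counter(labels)
    total = len(labels)
    return {cls: total / (len(counts) * n) for cls, n in counts.items()}

# Example with the ground-truth counts:
# inverse_frequency_weights([1] * 1083 + [0] * 276)
# -> {1: ~0.63, 0: ~2.46}, upweighting the rarer false-alarm class
```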
GEM-SciDuet-train-70#paper-1153#slide-6
1153
Disambiguating False-Alarm Hashtag Usages in Tweets for Irony Detection
The reliability of self-labeled data is an important issue when the data are regarded as ground-truth for training and testing learning-based models. This paper addresses the issue of false-alarm hashtags in the self-labeled data for irony detection. We analyze the ambiguity of hashtag usages and propose a novel neural networkbased model, which incorporates linguistic information from different aspects, to disambiguate the usage of three hashtags that are widely used to collect the training data for irony detection. Furthermore, we apply our model to prune the self-labeled training data. Experimental results show that the irony detection model trained on the less but cleaner training instances outperforms the models trained on all data.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143 ], "paper_content_text": [ "Introduction Self-labeled data available on the Internet are popular research materials in many NLP areas.", "Metadata such as tags and emoticons given by users are considered as labels for training and testing learning-based models, which usually benefit from large amount of data.", "One of the sources of self-labeled data widely used in the research community is Twitter, where the short-text messages tweets written by the crowd are publicly shared.", "In a tweet, the author can tag the short text with some hashtags such as #excited, #happy, #UnbornLivesMatter, and #Hillary4President to express their emotion or opinion.", "The tweets with a certain types of hashtags are collected as self-label data in a variety of research works including sentiment analysis (Qadir and Riloff, 2014) , stance detection (Mohammad et al., 2016; Sobhani et al., 2017) , fi-nancial opinion mining (Cortis et al., 2017) , and irony detection (Ghosh et al., 2015; Peled and Reichart, 2017; Hee et al., 2018) .", "In the case of irony detection, it is impractical to manually annotate the ironic sentences from randomly sampled data due to the relatively low occurrences of irony (Davidov et al., 2010) .", "Collecting the tweets with the hashtags like #sarcasm, #irony, and #not becomes the mainstream approach to dataset construction (Sulis et al., 2016) .", "As shown in (S1), the tweet with the hashtag #not is treated as a positive (ironic) instance by removing #not from the text.", "(S1) @Anonymous doing a great job... 
#not What do I pay my extortionate council taxes for?", "#Disgrace #Ongo-ingProblem http://t.co/FQZUUwKSoN However, the reliability of the self-labeled data is an important issue.", "As pointed out in the pioneering work, not all tweet writers know the definition of irony (Van Hee et al., 2016b) .", "For instance, (S2) is tagged with #irony by the writer, but it is just witty and amusing.", "(S2) BestProAdvice @Anonymous More clean OR cleaner, never more cleaner.", "#irony When the false-alarm instances like (S2) are collected and mixed in the training and test data, the models that learn from the unreliable data may be misled, and the evaluation is also suspicious.", "The other kind of unreliable data comes from the hashtags not only functioning as metadata.", "That is, a hashtag in a tweet may also function as a content word in its word form.", "For example, the hashtag #irony in (S3) is a part of the sentence \"the irony of taking a break...\", in contrast to the hashtag #not in (S1), which can be removed without a change of meaning.", "(S3) The #irony of taking a break from reading about #socialmedia to check my social media.", "When the hashtag plays as a content word in a tweet, the tweet is not a good candidate of selflabeled ironic instances because the sentence will be incomplete once the hashtag is removed.", "In this work, both kinds of unreliable data, the tweets with a misused hashtag and the tweets in which the hashtag serves as a content word, are our targets to remove from the training data.", "Manual data cleaning is labor-intensive and inefficient (Van Hee et al., 2016a) .", "Compared to general training data cleaning approaches (Malik and Bhardwaj, 2011; Esuli and Sebastiani, 2013; Fukumoto and Suzuki, 2004) such as boostingbased learning, this work leverages the characteristics of hashtag usages in tweets.", "With small amount of golden labeled data, we propose a neural network classifier for pruning the self-labeled tweets, and train an ironic detector on the less but cleaner instances.", "This approach is easily to apply to other NLP tasks that rely on self-labeled data.", "The contributions of this work are three-fold: (1) We make an empirically study on an issue that is potentially inherited in a number of research topics based on self-labeled data.", "(2) We propose a model for hashtag disambiguation.", "For this task, the human-verified ground-truth is quite limited.", "To address the issue of sparsity, a novel neural network model for hashtag disambiguation is proposed.", "(3) The data pruning method, in which our model is applied to select reliable self-labeled data, is capable of improving the performance of irony detection.", "The rest of this paper is organized as follows.", "Section 2 describes how we construct a dataset for disambiguating false-alarm hashtag usages based on Tweets.", "In Section 3, our model for hashtag disambiguation is proposed.", "Experimental results of hashtag disambiguation are shown in Section 4.", "In addition, we apply our method to prune training data for irony detection.", "The results are shown in Section 5.", "Section 6 concludes this paper.", "Dataset The tweets with indication hashtags such as #irony are usually collected as a dataset in previous works on irony detection.", "As pointed out in Section 1, the hashtags are treated as ground-truth for training and testing.", "To investigate the issue of false-alarm self-labeled tweets, the tweets with human verification are indispensable.", "In this study, we build the 
ground-truth based on the dataset released for SemEval 2018 Task 3, 1 which is targeted for finegrained irony detection (Hee et al., 2018) .", "In the SemEval dataset, the tweets with one of the three indication hashtags #not, #sarcasm, and #irony, are collected and human-annotated as one of four types: verbal irony by means of a polarity contrast, other verbal irony, situational irony, and non-ironic.", "In other words, the false-alarm tweets, i.e., the non-ironic tweets with indication hashtags, are distinguished from the real ironic tweets in this dataset.", "However, the hashtag itself has been removed in the SemEval dataset.", "For example, the original tweet (S1) has been modified to (S4), where the hashtag #not disappears.", "As a result, the hashtag information, the position and the word form of the hashtag (i.e., not, irony, or sarcasm), is missing from the SemEval dataset.", "(S4) @Anonymous doing a great job... What do I pay my extortionate council taxes for?", "#Disgrace #OngoingProblem http://t.co/FQZUUwKSoN For hashtag disambiguation, the information of the hashtag in each tweet is mandatory.", "Thus, we recover the original tweets by using Twitter search.", "As shown in Table 1 , a total of 1,359 tweets with hashtags information are adopted as the ground-truth.", "Note that more than 20% of selflabeled data are false-alarm, and this can be an issue when they are adopted as training or test data.", "For performing the experiment of irony detection in Section 5, we reserve the other 1,072 tweets in the SemEval dataset that are annotated as real ironic as the test data.", "In addition to the issue of hashtag disambiguation, the irony tweets without an indication hashtag, which are regarded as non-irony instances in previous work, are another kind of misleading data for irony detection.", "Fortunately, the occurrence of such \"false-negative\" instances is insignificant due to the relatively low occurrence of irony (Davidov et al., 2010) .", "Disambiguation of Hashtags Figure 1 shows our model for distinguishing the real ironic tweets from the false-alarm ones.", "Given an instance with the hashtag #irony is given, the preceding and the following word sequences of the hashtag are encoded by separate sub-networks, and both embeddings are concatenated with the handcrafted features and the probabilities of three kinds of part-of-speech (POS) tag sequences.", "Finally, the sigmoid activation function decides whether the instance is real ironic or false-alarm.", "The details of each component will be presented in the rest of this section.", "Word Sequences: The word sequences of the context preceding and following the targeting hashtag are separately encoded by neural network sentence encoders.", "The Penn Treebank Tokenizer provided by NLTK (Bird et al., 2009 ) is used for tokenization.", "As a result, each of the left and the right word sequences is encoded as a embedding with a length of 50.", "We experiments with convolution neural network (CNN) (Kim, 2014) , gated recurrent unit (GRU) (Cho et al., 2014) , and attentive-GRU for sentence encoding.", "CNN for sentence classification has been shown effective in NLP applications such as sentiment analysis (Kim, 2014) .", "Classifiers based on recurrent neural network (RNN) have also been applied to NLP, especially for sequential modeling.", "For irony detection, one of the state-of-the-art models is based on the attentive RNN (Huang et al., 2017) .", "The first layer of the CNN, the GRU, and the attenive-GRU model is the 
300-dimensional word embedding that is initialized by using the vectors pre-trained on Google News dataset.", "2 Handcrafted Features: We add the handcrafted features of the tweet in the one-hot representation.", "The features taken into account are listed as follows.", "(1) Lengths of the tweet in words and in characters.", "(2) Type of the target hashtag (i.e.", "#not, #sarcasm, or #irony).", "(3) Number of all hashtags in the tweet.", "(4) Whether the targeting hashtag is the first token in the tweet.", "(5) Whether the targeting hashtag is the last token in the tweet.", "(6) Whether the targeting hashtag is the first hashtag in the tweet since a tweet may contain more than one hashtag.", "(7) Whether the targeting hashtag is the last hashtag in the tweet.", "(8) Position of the targeting hashtag in terms of tokens.", "If the targeting hashtag is the ith token of the tweet with |w| tokens, and this feature is i |w| .", "(9) Position of the targeting hashtag in all hashtags in the tweet.", "It is computed as j |h| where the targeting hashtag is the jth hashtag in the tweet that contains |h| hashtags.", "Language Modeling of POS Sequences: As mentioned in Section 1, a kind of false-alarm hashtag usages is the case that the hashtag also functions as a content word.", "In this paper, we attempt to measure the grammatical completeness of the tweet with and without the hashtag.", "Therefore, language model on the level of POS tagging is used.", "As shown in Figure 1 , POS tagging is performed on three versions of the tweet, and based on that three probabilities are measured and taken into account: 1) ph: the tweet with the whole hashtag removed.", "2) ps: the tweet with the hash symbol # removed only.", "3) p t : the original tweet.", "Our idea is that a tweet will be more grammatical complete with only the hash symbol removed if the hashtag is also a content word.", "On the other hand, the tweet will be more grammatical complete with the whole hashtag removed since the hashtag is a metadata.", "To measure the probability of the POS tag sequence, we integrate a neural network-based language model of POS sequence into our model.", "RNN-based language models are reportedly capa-ble of modeling the longer dependencies among the sequential tokens (Mikolov et al., 2011) .", "Two millions of English tweets that are entirely different from those in the training and test data described in Section 2 are collected and tagged with POS tags.", "We train a GRU language model on the level of POS tags.", "In this work, all the POS tagging is performed with the Stanford CoreNLP toolkit (Manning et al., 2014) .", "Experiments We compare our model with popular neural network-based sentence classifiers including CNN, GRU, and attentive GRU.", "We also train a logistic regression (LR) classifier with the handcrafted features introduced in Section 3.", "For the imbalance data, we assign class-weights inversely proportional to class frequencies.", "Five-fold crossvalidation is performed.", "Early-stop is employed with a patience of 5 epoches.", "In each fold, we further keep 10% of training data for tuning the model.", "The hidden dimension is 50, the batch size is 32, and the Adam optimizer is employed (Kingma and Ba, 2014) .", "Table 2 shows the experimental results reported in Precision (P), Recall (R), and F-score (F).", "Our goal is to select the real ironic tweets for training the irony detection model.", "Thus, the real ironic tweets are regarded as positive, and the falsealarm ones are negative.", "We 
apply the t-test for significance testing.", "The vanilla GRU and the attentive GRU are slightly superior to the logistic regression model.", "The CNN model performs the worst in this task because it suffers from over-fitting.", "We explored a number of layouts and hyperparameters for the CNN model, and consistent results were observed.", "Our method is evaluated with either the CNN, the GRU, or the attentive GRU for encoding the context preceding and following the target hashtag.", "By integrating various kinds of information, our method outperforms all baseline models no matter which encoder is used.", "The best model is the one integrating the attentive GRU encoder, which is significantly superior to all baseline models (p < 0.05) and achieves an F-score of 88.49%.", "To confirm the effectiveness of the language modeling of POS sequences, we also try to exclude the GRU language model from our best model.", "Experimental results show that the addition of the language model significantly improves the performance (p < 0.05).", "As shown in the last row of Table 2, the F-score drops to 84.17%.", "From the data, we observe that the instances with high p_s/p_h usually contain an indication hashtag that functions as a content word, and vice versa.", "For instance, (S5) and (S6) show the instances with the highest and the lowest p_s/p_h, respectively.", "(S5) when your #sarcasm is so advanced people actually think you are #stupid .. (S6) #mtvstars justin bieber #net #not #fast Irony Detection We employ our model to prune self-labeled data for irony detection.", "As prior work did, we collect a set of tweets that contain indication hashtags as (pseudo-)positive instances and also collect a set of tweets that do not contain indication hashtags as negative instances.", "For each positive instance, our model predicts whether it is a real ironic tweet or a false-alarm one, and the false-alarm ones are discarded.", "After pruning, a set of 14,055 tweets containing indication hashtags has been reduced to 4,617 reliable positive instances according to our model.", "We add an equal amount of negative instances randomly selected from the collection of the tweets that do not contain indication hashtags.", "As a result, the prior- and the post-pruning training data, with sizes of 28,110 and 9,234, respectively, are prepared for the experiments.", "The dataflow of the training data pruning is shown in Figure 2.", "For evaluating the effectiveness of our pruning method, we implement a state-of-the-art irony detector (Huang et al., 2017), which is based on an attentive-RNN classifier, and train it on the prior- and the post-pruning training data.", "The test data is constructed by the following procedure.", "The positive instances in the test data are taken from the 1,072 human-verified ironic tweets that are reserved for irony detection as mentioned in Section 2.", "The negative instances in the test data are obtained from the tweets that do not contain indication hashtags.", "Note that the negative instances in the test data are isolated from those in the training data.", "Experimental results confirm the benefit of pruning.", "As shown in Table 3, the irony detection model trained on the less, but cleaner, data significantly outperforms the model that is trained on all data (p < 0.05).", "We compare our pruning method with an alternative approach that trains the irony detector on the human-verified data directly.", "Under these circumstances, the 1,083 ironic instances for training our hashtag disambiguation model are 
mixed with an equal amount of randomly sampled negative instances and employed to train the irony detector.", "As shown in the last row of Table 3, the irony detector trained on the small data does not compete with the models that are trained on a larger amount of self-labeled data.", "In other words, our data pruning strategy forms a semi-supervised learning scheme that benefits from both self-labeled data and human annotation.", "Note that this task and the dataset are different from those of the official evaluation of SemEval 2018 Task 3, so the experimental results cannot be directly compared.", "The calibrated confidence output by the sigmoid layer of our hashtag disambiguation model can be regarded as a measurement of the reliability of an instance (Niculescu-Mizil and Caruana, 2005; Guo et al., 2017).", "Thus, we can sort all self-labeled data by their calibrated confidence and control the size of the training set by adjusting the threshold.", "The higher the threshold value is set, the fewer the training instances that remain.", "Figure 3 shows the performance of the irony detector trained on the data filtered with different threshold values.", "For each threshold value, the bullet symbol (•) indicates the size of the training data, and the bar indicates the F-score achieved by the irony detector trained on those data.", "The best result is achieved by the irony detector trained on the 9,234 instances filtered by our model with the default threshold value (0.5).", "This confirms that our model is able to select useful training instances in a strict manner.", "Conclusion Self-labeled data is an accessible and economical resource for a variety of learning-based applications.", "However, directly using the labels made by the crowd as ground-truth for training and testing may lead to inaccurate performance estimates due to the reliability issue.", "This paper addresses this issue in the case of irony detection by proposing a model to remove two kinds of false-alarm tweets from the training data.", "Experimental results confirm that the irony detection model benefits from the less, but cleaner, training data.", "Our approach can be applied to other topics that rely on self-labeled data." ] }
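The architecture described in the paper content above (two context encoders over the text before and after the target hashtag, concatenated with handcrafted features and the three POS-LM probabilities, feeding a sigmoid output) could be sketched roughly as follows. This is a minimal sketch assuming TensorFlow/Keras; the hidden size (50) and embedding size (300) follow the paper, while VOCAB, MAXLEN, and N_FEATS are illustrative placeholders, not values from the source.

```python
# Minimal sketch of the hashtag disambiguation model, assuming TensorFlow/Keras.
# VOCAB, MAXLEN, and N_FEATS are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, Model

VOCAB, MAXLEN, EMB_DIM, HIDDEN, N_FEATS = 20000, 30, 300, 50, 12

def context_encoder(name):
    # Encode one side of the hashtag's context into a 50-dimensional vector.
    inp = layers.Input(shape=(MAXLEN,), name=name)
    x = layers.Embedding(VOCAB, EMB_DIM, mask_zero=True)(inp)
    x = layers.GRU(HIDDEN)(x)  # the paper also tries CNN and attentive-GRU encoders
    return inp, x

left_in, left_vec = context_encoder("left_context")
right_in, right_vec = context_encoder("right_context")
feats_in = layers.Input(shape=(N_FEATS,), name="handcrafted_features")
pos_in = layers.Input(shape=(3,), name="pos_lm_probs")  # p_h, p_s, p_t

merged = layers.Concatenate()([left_vec, right_vec, feats_in, pos_in])
out = layers.Dense(1, activation="sigmoid")(merged)  # real irony vs. false alarm

model = Model([left_in, right_in, feats_in, pos_in], out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```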
{ "paper_header_number": [ "1", "2", "3", "4", "6" ], "paper_header_content": [ "Introduction", "Dataset", "Disambiguation of Hashtags", "Experiments", "Conclusion" ] }
GEM-SciDuet-train-70#paper-1153#slide-6
Dataset
The ground-truth is based on the dataset released for SemEval 2018 Task 3. The hashtag itself has been removed in the SemEval dataset. The hashtag information, i.e., the position and the word form of the hashtag (not, irony, or sarcasm), is missing. We recover the original tweets by using Twitter search. Hashtag False-Alarm Irony Total
The ground-truth is based on the dataset released for SemEval 2018 Task 3. The hashtag itself has been removed in the SemEval dataset. The hashtag information, i.e., the position and the word form of the hashtag (not, irony, or sarcasm), is missing. We recover the original tweets by using Twitter search. Hashtag False-Alarm Irony Total
[]
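The three POS-sequence probabilities (p_h, p_s, p_t) described in the paper content can be illustrated as follows. In this sketch, pos_tag() and pos_lm_logprob() are hypothetical stand-ins for Stanford CoreNLP tagging and the GRU language model trained over POS tags; the stub bodies exist only so the snippet runs.

```python
# Sketch of the three POS-LM probabilities from Section 3, under stated assumptions.
def pos_tag(text):
    return text.split()  # placeholder: a real tagger returns one POS tag per token

def pos_lm_logprob(tags):
    return -float(len(tags))  # placeholder: a real LM sums per-tag log-probabilities

def lm_features(tweet, hashtag):
    variants = {
        "p_h": tweet.replace(hashtag, "").strip(),   # whole hashtag removed
        "p_s": tweet.replace(hashtag, hashtag[1:]),  # only the '#' symbol removed
        "p_t": tweet,                                # original tweet
    }
    scores = {k: pos_lm_logprob(pos_tag(v)) for k, v in variants.items()}
    # A high p_s relative to p_h suggests the hashtag also works as a content
    # word: removing only '#' leaves the more grammatical POS sequence.
    return scores

print(lm_features("The #irony of taking a break from reading about #socialmedia", "#irony"))
```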
GEM-SciDuet-train-70#paper-1153#slide-7
1153
Disambiguating False-Alarm Hashtag Usages in Tweets for Irony Detection
The reliability of self-labeled data is an important issue when the data are regarded as ground-truth for training and testing learning-based models. This paper addresses the issue of false-alarm hashtags in the self-labeled data for irony detection. We analyze the ambiguity of hashtag usages and propose a novel neural network-based model, which incorporates linguistic information from different aspects, to disambiguate the usage of three hashtags that are widely used to collect the training data for irony detection. Furthermore, we apply our model to prune the self-labeled training data. Experimental results show that the irony detection model trained on the less but cleaner training instances outperforms the models trained on all data.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143 ], "paper_content_text": [ "Introduction Self-labeled data available on the Internet are popular research materials in many NLP areas.", "Metadata such as tags and emoticons given by users are considered as labels for training and testing learning-based models, which usually benefit from large amount of data.", "One of the sources of self-labeled data widely used in the research community is Twitter, where the short-text messages tweets written by the crowd are publicly shared.", "In a tweet, the author can tag the short text with some hashtags such as #excited, #happy, #UnbornLivesMatter, and #Hillary4President to express their emotion or opinion.", "The tweets with a certain types of hashtags are collected as self-label data in a variety of research works including sentiment analysis (Qadir and Riloff, 2014) , stance detection (Mohammad et al., 2016; Sobhani et al., 2017) , fi-nancial opinion mining (Cortis et al., 2017) , and irony detection (Ghosh et al., 2015; Peled and Reichart, 2017; Hee et al., 2018) .", "In the case of irony detection, it is impractical to manually annotate the ironic sentences from randomly sampled data due to the relatively low occurrences of irony (Davidov et al., 2010) .", "Collecting the tweets with the hashtags like #sarcasm, #irony, and #not becomes the mainstream approach to dataset construction (Sulis et al., 2016) .", "As shown in (S1), the tweet with the hashtag #not is treated as a positive (ironic) instance by removing #not from the text.", "(S1) @Anonymous doing a great job... 
#not What do I pay my extortionate council taxes for?", "#Disgrace #Ongo-ingProblem http://t.co/FQZUUwKSoN However, the reliability of the self-labeled data is an important issue.", "As pointed out in the pioneering work, not all tweet writers know the definition of irony (Van Hee et al., 2016b) .", "For instance, (S2) is tagged with #irony by the writer, but it is just witty and amusing.", "(S2) BestProAdvice @Anonymous More clean OR cleaner, never more cleaner.", "#irony When the false-alarm instances like (S2) are collected and mixed in the training and test data, the models that learn from the unreliable data may be misled, and the evaluation is also suspicious.", "The other kind of unreliable data comes from the hashtags not only functioning as metadata.", "That is, a hashtag in a tweet may also function as a content word in its word form.", "For example, the hashtag #irony in (S3) is a part of the sentence \"the irony of taking a break...\", in contrast to the hashtag #not in (S1), which can be removed without a change of meaning.", "(S3) The #irony of taking a break from reading about #socialmedia to check my social media.", "When the hashtag plays as a content word in a tweet, the tweet is not a good candidate of selflabeled ironic instances because the sentence will be incomplete once the hashtag is removed.", "In this work, both kinds of unreliable data, the tweets with a misused hashtag and the tweets in which the hashtag serves as a content word, are our targets to remove from the training data.", "Manual data cleaning is labor-intensive and inefficient (Van Hee et al., 2016a) .", "Compared to general training data cleaning approaches (Malik and Bhardwaj, 2011; Esuli and Sebastiani, 2013; Fukumoto and Suzuki, 2004) such as boostingbased learning, this work leverages the characteristics of hashtag usages in tweets.", "With small amount of golden labeled data, we propose a neural network classifier for pruning the self-labeled tweets, and train an ironic detector on the less but cleaner instances.", "This approach is easily to apply to other NLP tasks that rely on self-labeled data.", "The contributions of this work are three-fold: (1) We make an empirically study on an issue that is potentially inherited in a number of research topics based on self-labeled data.", "(2) We propose a model for hashtag disambiguation.", "For this task, the human-verified ground-truth is quite limited.", "To address the issue of sparsity, a novel neural network model for hashtag disambiguation is proposed.", "(3) The data pruning method, in which our model is applied to select reliable self-labeled data, is capable of improving the performance of irony detection.", "The rest of this paper is organized as follows.", "Section 2 describes how we construct a dataset for disambiguating false-alarm hashtag usages based on Tweets.", "In Section 3, our model for hashtag disambiguation is proposed.", "Experimental results of hashtag disambiguation are shown in Section 4.", "In addition, we apply our method to prune training data for irony detection.", "The results are shown in Section 5.", "Section 6 concludes this paper.", "Dataset The tweets with indication hashtags such as #irony are usually collected as a dataset in previous works on irony detection.", "As pointed out in Section 1, the hashtags are treated as ground-truth for training and testing.", "To investigate the issue of false-alarm self-labeled tweets, the tweets with human verification are indispensable.", "In this study, we build the 
ground-truth based on the dataset released for SemEval 2018 Task 3, 1 which is targeted for finegrained irony detection (Hee et al., 2018) .", "In the SemEval dataset, the tweets with one of the three indication hashtags #not, #sarcasm, and #irony, are collected and human-annotated as one of four types: verbal irony by means of a polarity contrast, other verbal irony, situational irony, and non-ironic.", "In other words, the false-alarm tweets, i.e., the non-ironic tweets with indication hashtags, are distinguished from the real ironic tweets in this dataset.", "However, the hashtag itself has been removed in the SemEval dataset.", "For example, the original tweet (S1) has been modified to (S4), where the hashtag #not disappears.", "As a result, the hashtag information, the position and the word form of the hashtag (i.e., not, irony, or sarcasm), is missing from the SemEval dataset.", "(S4) @Anonymous doing a great job... What do I pay my extortionate council taxes for?", "#Disgrace #OngoingProblem http://t.co/FQZUUwKSoN For hashtag disambiguation, the information of the hashtag in each tweet is mandatory.", "Thus, we recover the original tweets by using Twitter search.", "As shown in Table 1 , a total of 1,359 tweets with hashtags information are adopted as the ground-truth.", "Note that more than 20% of selflabeled data are false-alarm, and this can be an issue when they are adopted as training or test data.", "For performing the experiment of irony detection in Section 5, we reserve the other 1,072 tweets in the SemEval dataset that are annotated as real ironic as the test data.", "In addition to the issue of hashtag disambiguation, the irony tweets without an indication hashtag, which are regarded as non-irony instances in previous work, are another kind of misleading data for irony detection.", "Fortunately, the occurrence of such \"false-negative\" instances is insignificant due to the relatively low occurrence of irony (Davidov et al., 2010) .", "Disambiguation of Hashtags Figure 1 shows our model for distinguishing the real ironic tweets from the false-alarm ones.", "Given an instance with the hashtag #irony is given, the preceding and the following word sequences of the hashtag are encoded by separate sub-networks, and both embeddings are concatenated with the handcrafted features and the probabilities of three kinds of part-of-speech (POS) tag sequences.", "Finally, the sigmoid activation function decides whether the instance is real ironic or false-alarm.", "The details of each component will be presented in the rest of this section.", "Word Sequences: The word sequences of the context preceding and following the targeting hashtag are separately encoded by neural network sentence encoders.", "The Penn Treebank Tokenizer provided by NLTK (Bird et al., 2009 ) is used for tokenization.", "As a result, each of the left and the right word sequences is encoded as a embedding with a length of 50.", "We experiments with convolution neural network (CNN) (Kim, 2014) , gated recurrent unit (GRU) (Cho et al., 2014) , and attentive-GRU for sentence encoding.", "CNN for sentence classification has been shown effective in NLP applications such as sentiment analysis (Kim, 2014) .", "Classifiers based on recurrent neural network (RNN) have also been applied to NLP, especially for sequential modeling.", "For irony detection, one of the state-of-the-art models is based on the attentive RNN (Huang et al., 2017) .", "The first layer of the CNN, the GRU, and the attenive-GRU model is the 
300-dimensional word embedding that is initialized by using the vectors pre-trained on Google News dataset.", "2 Handcrafted Features: We add the handcrafted features of the tweet in the one-hot representation.", "The features taken into account are listed as follows.", "(1) Lengths of the tweet in words and in characters.", "(2) Type of the target hashtag (i.e.", "#not, #sarcasm, or #irony).", "(3) Number of all hashtags in the tweet.", "(4) Whether the targeting hashtag is the first token in the tweet.", "(5) Whether the targeting hashtag is the last token in the tweet.", "(6) Whether the targeting hashtag is the first hashtag in the tweet since a tweet may contain more than one hashtag.", "(7) Whether the targeting hashtag is the last hashtag in the tweet.", "(8) Position of the targeting hashtag in terms of tokens.", "If the targeting hashtag is the ith token of the tweet with |w| tokens, and this feature is i |w| .", "(9) Position of the targeting hashtag in all hashtags in the tweet.", "It is computed as j |h| where the targeting hashtag is the jth hashtag in the tweet that contains |h| hashtags.", "Language Modeling of POS Sequences: As mentioned in Section 1, a kind of false-alarm hashtag usages is the case that the hashtag also functions as a content word.", "In this paper, we attempt to measure the grammatical completeness of the tweet with and without the hashtag.", "Therefore, language model on the level of POS tagging is used.", "As shown in Figure 1 , POS tagging is performed on three versions of the tweet, and based on that three probabilities are measured and taken into account: 1) ph: the tweet with the whole hashtag removed.", "2) ps: the tweet with the hash symbol # removed only.", "3) p t : the original tweet.", "Our idea is that a tweet will be more grammatical complete with only the hash symbol removed if the hashtag is also a content word.", "On the other hand, the tweet will be more grammatical complete with the whole hashtag removed since the hashtag is a metadata.", "To measure the probability of the POS tag sequence, we integrate a neural network-based language model of POS sequence into our model.", "RNN-based language models are reportedly capa-ble of modeling the longer dependencies among the sequential tokens (Mikolov et al., 2011) .", "Two millions of English tweets that are entirely different from those in the training and test data described in Section 2 are collected and tagged with POS tags.", "We train a GRU language model on the level of POS tags.", "In this work, all the POS tagging is performed with the Stanford CoreNLP toolkit (Manning et al., 2014) .", "Experiments We compare our model with popular neural network-based sentence classifiers including CNN, GRU, and attentive GRU.", "We also train a logistic regression (LR) classifier with the handcrafted features introduced in Section 3.", "For the imbalance data, we assign class-weights inversely proportional to class frequencies.", "Five-fold crossvalidation is performed.", "Early-stop is employed with a patience of 5 epoches.", "In each fold, we further keep 10% of training data for tuning the model.", "The hidden dimension is 50, the batch size is 32, and the Adam optimizer is employed (Kingma and Ba, 2014) .", "Table 2 shows the experimental results reported in Precision (P), Recall (R), and F-score (F).", "Our goal is to select the real ironic tweets for training the irony detection model.", "Thus, the real ironic tweets are regarded as positive, and the falsealarm ones are negative.", "We 
apply t-test for significance testing.", "The vanilla GRU and attentive GRU are slightly superior to the logistic regression model.", "The CNN model performs the worst in this task because it suffers from over-fitting problem.", "We explored a number of layouts and hyperparameters for the CNN model, and consistent results are observed.", "Our method is evaluated with either CNN, GRU, or attentive GRU for encoding the context preceding and following the targeting hashtag.", "By integrating various kinds of information, our method outperforms all baseline models no matter which encoder is used.", "The best model is the one integrating the attentive GRU encoder, which is significantly superior to all baseline models (p < 0.05), achieves an F-score of 88.49%, To confirm the effectiveness of the language modeling of POS sequence, we also try to exclude the GRU language model from our best model.", "Experimental results show that the addition of language model significantly improves the perfor- mance (p < 0.05).", "As shown in the last row of Table 2 , the F-score is dropped to 84.17%.", "From the data, we observe that the instances whose ps ph usually contain a indication hashtag function as a content word, and vice versa.", "For instances, (S5) and (S6) show the instances with the highest and the lowest ps ph , respectively.", "(S5) when your #sarcasm is so advanced people actually think you are #stupid .. (S6) #mtvstars justin bieber #net #not #fast 5 Irony Detection We employ our model to prune self-labeled data for irony detection.", "As prior work did, we collect a set of tweets that contain indication hashtags as (pseudo) positive instances and also collect a set of tweets that do not contain indication hashtags as negative instances.", "For each positive instance, our model is performed to predict whether it is a real ironic tweet or false-alarm ones, and the falsealarm ones are discarded.", "After pruning, a set of 14,055 tweets containing indication hashtags have been reduced to 4,617 reliable positive instances according to our model.", "We add an equal amount of negative instances randomly selected from the collection of the tweets that do not contain indication hashtags.", "As a result, the prior-and the post-pruning training data, in the sizes of 28,110 and 9,234, respectively, are prepared for experiments.", "The dataflow of the training data pruning is shown in Figure 2 .", "For evaluating the effectiveness of our pruning method, we implement a state-of-the-art irony detector (Huang et al., 2017) , which is based on attentive-RNN classifier, and train it on the priorand the post-pruned training data.", "The test data is made by the procedure as follows.", "The positive instances in the test data are taken from the 1,072 human-verified ironic tweets that are reserved for irony detection as mentioned in Section 2.", "The negative instances in the test data are obtained from the tweets that do not contain indication hashtags.", "Note that the negative instances in the test data are isolated from those in the training data.", "Experimental results confirm the benefit of pruning.", "As shown in Table 3 , the irony detection model trained on the less, but cleaner data significantly outperforms the model that is trained on all data (p < 0.05).", "We compare our pruning method with an alternative approach that trains the irony detector on the human-verified data directly.", "Under this circumstances, the 1,083 ironic instances for training our hashtag disambiguation model are currently 
mixed with an equal amount of randomly sampled negative instances, and employed to train the irony detector.", "As shown in the last row of Table 3 , the irony detector trained on the small data does not compete with the models that are trained on larger amount of self-labeled data.", "In other words, our data pruning strategy forms a semi-supervised learning that benefits from both self-labeled data and human annotation.", "Note that this task and the dataset are different from those of the official evaluation of SemEval 2018 Task 3, so the experimental results cannot be directly compared.", "The calibrated confidence output by the sigmoid layer of our hashtag disambiguation model can be regarded as a measurement of the reliability of an instance (Niculescu-Mizil and Caruana, 2005; Guo et al., 2017) .", "Thus, we can sort all self-labeled data by their calibrated confidence and control the size of training set by adjusting the threshold.", "The higher the threshold value is set, the less the training instances remain.", "Figure 3 shows the performances of the irony detector trained on the data filtered with different threshold values.", "For each threshold value, the bullet symbol (•) indicates the size of training data, and the bar indicates the F-score achieved by the irony detector trained on those data.", "The best result achieved by the irony detector trained on the 9,234 data filtered by our model with the default threshold value (0.5).", "This confirms that our model is able to select useful training instances in a strict manner.", "Conclusion Self-labeled data is an accessible and economical resource for a variety of learning-based applications.", "However, directly using the labels made by the crowd as ground-truth for training and testing may lead to inaccurate performance due to the reliability issue.", "This paper addresses this issue in the case of irony detection by proposing a model to remove two kinds of false-alarm tweets from the training data.", "Experimental results confirm that the irony detection model benefits from the less, but cleaner training data.", "Our approach can be applied to other topics that rely on self-labeled data." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "6" ], "paper_header_content": [ "Introduction", "Dataset", "Disambiguation of Hashtags", "Experiments", "Conclusion" ] }
GEM-SciDuet-train-70#paper-1153#slide-7
Disambiguation of Hashtags
Word sequences of the context preceding and following the target hashtag are separately encoded by neural network sentence encoders. Lengths of the tweet in words and in characters. Type of the target hashtag. Number of all hashtags in the tweet. Whether the target hashtag is the first/last token in the tweet. Whether the target hashtag is the first/last hashtag in the tweet. Position of the target hashtag. A tweet will be more grammatically complete with only the hash symbol removed if the hashtag is also a content word. On the other hand, the tweet will be more grammatically complete with the whole hashtag removed if the hashtag is pure metadata. A GRU-based language model on the level of POS tags is used to measure the grammatical completeness of the tweet with and without the hashtag. Remove the whole hashtag. Remove the hash symbol # only.
Word sequences of the context preceding and following the target hashtag are separately encoded by neural network sentence encoders. Lengths of the tweet in words and in characters. Type of the target hashtag. Number of all hashtags in the tweet. Whether the target hashtag is the first/last token in the tweet. Whether the target hashtag is the first/last hashtag in the tweet. Position of the target hashtag. A tweet will be more grammatically complete with only the hash symbol removed if the hashtag is also a content word. On the other hand, the tweet will be more grammatically complete with the whole hashtag removed if the hashtag is pure metadata. A GRU-based language model on the level of POS tags is used to measure the grammatical completeness of the tweet with and without the hashtag. Remove the whole hashtag. Remove the hash symbol # only.
[]
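The training setup reported in the paper content (class weights inversely proportional to class frequencies, five-fold cross-validation with 10% of each training fold held out for tuning, early stopping with patience 5, batch size 32, Adam) could look roughly like this. build_model() is assumed to return a compiled Keras model such as the one sketched earlier, and X is assumed to be a single numpy feature matrix for simplicity.

```python
# Training-setup sketch matching Section 4 of the paper, under stated assumptions.
import numpy as np
from sklearn.model_selection import StratifiedKFold, train_test_split
from tensorflow.keras.callbacks import EarlyStopping

def class_weights(y):
    # Class weights inversely proportional to class frequencies.
    counts = np.bincount(y)
    return {c: len(y) / (len(counts) * n) for c, n in enumerate(counts)}

def cross_validate(X, y, build_model):
    for tr, te in StratifiedKFold(n_splits=5, shuffle=True).split(X, y):
        # Keep 10% of the training fold for tuning / early stopping.
        X_tr, X_dev, y_tr, y_dev = train_test_split(
            X[tr], y[tr], test_size=0.1, stratify=y[tr])
        model = build_model()
        model.fit(X_tr, y_tr, validation_data=(X_dev, y_dev),
                  epochs=100, batch_size=32,
                  class_weight=class_weights(y_tr),
                  callbacks=[EarlyStopping(patience=5)], verbose=0)
        yield model.evaluate(X[te], y[te], verbose=0)
```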
GEM-SciDuet-train-70#paper-1153#slide-8
1153
GEM-SciDuet-train-70#paper-1153#slide-8
Results of Hashtag Disambiguation
By integrating various kinds of information, our method outperforms all baseline models no matter which encoder is used. The best model is the one integrating the attentive GRU encoder, which is significantly superior to all baseline models. The addition of the language model significantly improves the performance. Model Encoder Precision Recall F-score: Our Model GRU, Our Model Att.GRU, Without LM Att.GRU
By integrating various kinds of information, our method outperforms all baseline models no matter which encoder is used. The best model is the one integrating the attentive GRU encoder, which is significantly superior to all baseline models. The addition of the language model significantly improves the performance. Model Encoder Precision Recall F-score: Our Model GRU, Our Model Att.GRU, Without LM Att.GRU
[]
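The Section 5 data-pruning step (keep only the self-labeled positives whose calibrated sigmoid confidence clears the threshold, 0.5 by default, which reduced 14,055 tweets to 4,617 in the paper, then balance with an equal number of randomly sampled negatives) can be sketched as below; the function and variable names are illustrative assumptions.

```python
# Sketch of the training-data pruning from Section 5 of the paper.
import random

def prune_training_data(positives, inputs, disambiguator, negatives, threshold=0.5):
    # Calibrated sigmoid confidence serves as the reliability of each positive.
    conf = disambiguator.predict(inputs).ravel()
    kept = [p for p, c in zip(positives, conf) if c >= threshold]
    neg = random.sample(negatives, len(kept))  # equal-sized negative sample
    X = kept + neg
    y = [1] * len(kept) + [0] * len(neg)
    return X, y  # e.g., 9,234 post-pruning instances vs. 28,110 before
```

Raising the threshold shrinks the training set further; per Figure 3, the default value 0.5 gave the best irony detection F-score in the paper.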
GEM-SciDuet-train-70#paper-1153#slide-9
1153
Disambiguating False-Alarm Hashtag Usages in Tweets for Irony Detection
The reliability of self-labeled data is an important issue when the data are regarded as ground-truth for training and testing learning-based models. This paper addresses the issue of false-alarm hashtags in the self-labeled data for irony detection. We analyze the ambiguity of hashtag usages and propose a novel neural networkbased model, which incorporates linguistic information from different aspects, to disambiguate the usage of three hashtags that are widely used to collect the training data for irony detection. Furthermore, we apply our model to prune the self-labeled training data. Experimental results show that the irony detection model trained on the less but cleaner training instances outperforms the models trained on all data.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143 ], "paper_content_text": [ "Introduction Self-labeled data available on the Internet are popular research materials in many NLP areas.", "Metadata such as tags and emoticons given by users are considered as labels for training and testing learning-based models, which usually benefit from large amount of data.", "One of the sources of self-labeled data widely used in the research community is Twitter, where the short-text messages tweets written by the crowd are publicly shared.", "In a tweet, the author can tag the short text with some hashtags such as #excited, #happy, #UnbornLivesMatter, and #Hillary4President to express their emotion or opinion.", "The tweets with a certain types of hashtags are collected as self-label data in a variety of research works including sentiment analysis (Qadir and Riloff, 2014) , stance detection (Mohammad et al., 2016; Sobhani et al., 2017) , fi-nancial opinion mining (Cortis et al., 2017) , and irony detection (Ghosh et al., 2015; Peled and Reichart, 2017; Hee et al., 2018) .", "In the case of irony detection, it is impractical to manually annotate the ironic sentences from randomly sampled data due to the relatively low occurrences of irony (Davidov et al., 2010) .", "Collecting the tweets with the hashtags like #sarcasm, #irony, and #not becomes the mainstream approach to dataset construction (Sulis et al., 2016) .", "As shown in (S1), the tweet with the hashtag #not is treated as a positive (ironic) instance by removing #not from the text.", "(S1) @Anonymous doing a great job... 
#not What do I pay my extortionate council taxes for?", "#Disgrace #Ongo-ingProblem http://t.co/FQZUUwKSoN However, the reliability of the self-labeled data is an important issue.", "As pointed out in the pioneering work, not all tweet writers know the definition of irony (Van Hee et al., 2016b) .", "For instance, (S2) is tagged with #irony by the writer, but it is just witty and amusing.", "(S2) BestProAdvice @Anonymous More clean OR cleaner, never more cleaner.", "#irony When the false-alarm instances like (S2) are collected and mixed in the training and test data, the models that learn from the unreliable data may be misled, and the evaluation is also suspicious.", "The other kind of unreliable data comes from the hashtags not only functioning as metadata.", "That is, a hashtag in a tweet may also function as a content word in its word form.", "For example, the hashtag #irony in (S3) is a part of the sentence \"the irony of taking a break...\", in contrast to the hashtag #not in (S1), which can be removed without a change of meaning.", "(S3) The #irony of taking a break from reading about #socialmedia to check my social media.", "When the hashtag plays as a content word in a tweet, the tweet is not a good candidate of selflabeled ironic instances because the sentence will be incomplete once the hashtag is removed.", "In this work, both kinds of unreliable data, the tweets with a misused hashtag and the tweets in which the hashtag serves as a content word, are our targets to remove from the training data.", "Manual data cleaning is labor-intensive and inefficient (Van Hee et al., 2016a) .", "Compared to general training data cleaning approaches (Malik and Bhardwaj, 2011; Esuli and Sebastiani, 2013; Fukumoto and Suzuki, 2004) such as boostingbased learning, this work leverages the characteristics of hashtag usages in tweets.", "With small amount of golden labeled data, we propose a neural network classifier for pruning the self-labeled tweets, and train an ironic detector on the less but cleaner instances.", "This approach is easily to apply to other NLP tasks that rely on self-labeled data.", "The contributions of this work are three-fold: (1) We make an empirically study on an issue that is potentially inherited in a number of research topics based on self-labeled data.", "(2) We propose a model for hashtag disambiguation.", "For this task, the human-verified ground-truth is quite limited.", "To address the issue of sparsity, a novel neural network model for hashtag disambiguation is proposed.", "(3) The data pruning method, in which our model is applied to select reliable self-labeled data, is capable of improving the performance of irony detection.", "The rest of this paper is organized as follows.", "Section 2 describes how we construct a dataset for disambiguating false-alarm hashtag usages based on Tweets.", "In Section 3, our model for hashtag disambiguation is proposed.", "Experimental results of hashtag disambiguation are shown in Section 4.", "In addition, we apply our method to prune training data for irony detection.", "The results are shown in Section 5.", "Section 6 concludes this paper.", "Dataset The tweets with indication hashtags such as #irony are usually collected as a dataset in previous works on irony detection.", "As pointed out in Section 1, the hashtags are treated as ground-truth for training and testing.", "To investigate the issue of false-alarm self-labeled tweets, the tweets with human verification are indispensable.", "In this study, we build the 
ground-truth based on the dataset released for SemEval 2018 Task 3, 1 which is targeted for finegrained irony detection (Hee et al., 2018) .", "In the SemEval dataset, the tweets with one of the three indication hashtags #not, #sarcasm, and #irony, are collected and human-annotated as one of four types: verbal irony by means of a polarity contrast, other verbal irony, situational irony, and non-ironic.", "In other words, the false-alarm tweets, i.e., the non-ironic tweets with indication hashtags, are distinguished from the real ironic tweets in this dataset.", "However, the hashtag itself has been removed in the SemEval dataset.", "For example, the original tweet (S1) has been modified to (S4), where the hashtag #not disappears.", "As a result, the hashtag information, the position and the word form of the hashtag (i.e., not, irony, or sarcasm), is missing from the SemEval dataset.", "(S4) @Anonymous doing a great job... What do I pay my extortionate council taxes for?", "#Disgrace #OngoingProblem http://t.co/FQZUUwKSoN For hashtag disambiguation, the information of the hashtag in each tweet is mandatory.", "Thus, we recover the original tweets by using Twitter search.", "As shown in Table 1 , a total of 1,359 tweets with hashtags information are adopted as the ground-truth.", "Note that more than 20% of selflabeled data are false-alarm, and this can be an issue when they are adopted as training or test data.", "For performing the experiment of irony detection in Section 5, we reserve the other 1,072 tweets in the SemEval dataset that are annotated as real ironic as the test data.", "In addition to the issue of hashtag disambiguation, the irony tweets without an indication hashtag, which are regarded as non-irony instances in previous work, are another kind of misleading data for irony detection.", "Fortunately, the occurrence of such \"false-negative\" instances is insignificant due to the relatively low occurrence of irony (Davidov et al., 2010) .", "Disambiguation of Hashtags Figure 1 shows our model for distinguishing the real ironic tweets from the false-alarm ones.", "Given an instance with the hashtag #irony is given, the preceding and the following word sequences of the hashtag are encoded by separate sub-networks, and both embeddings are concatenated with the handcrafted features and the probabilities of three kinds of part-of-speech (POS) tag sequences.", "Finally, the sigmoid activation function decides whether the instance is real ironic or false-alarm.", "The details of each component will be presented in the rest of this section.", "Word Sequences: The word sequences of the context preceding and following the targeting hashtag are separately encoded by neural network sentence encoders.", "The Penn Treebank Tokenizer provided by NLTK (Bird et al., 2009 ) is used for tokenization.", "As a result, each of the left and the right word sequences is encoded as a embedding with a length of 50.", "We experiments with convolution neural network (CNN) (Kim, 2014) , gated recurrent unit (GRU) (Cho et al., 2014) , and attentive-GRU for sentence encoding.", "CNN for sentence classification has been shown effective in NLP applications such as sentiment analysis (Kim, 2014) .", "Classifiers based on recurrent neural network (RNN) have also been applied to NLP, especially for sequential modeling.", "For irony detection, one of the state-of-the-art models is based on the attentive RNN (Huang et al., 2017) .", "The first layer of the CNN, the GRU, and the attenive-GRU model is the 
300-dimensional word embedding that is initialized with the vectors pre-trained on the Google News dataset.", "Handcrafted Features: We add the handcrafted features of the tweet in a one-hot representation.", "The features taken into account are listed as follows.", "(1) Lengths of the tweet in words and in characters.", "(2) Type of the target hashtag (i.e.", "#not, #sarcasm, or #irony).", "(3) Number of all hashtags in the tweet.", "(4) Whether the target hashtag is the first token in the tweet.", "(5) Whether the target hashtag is the last token in the tweet.", "(6) Whether the target hashtag is the first hashtag in the tweet, since a tweet may contain more than one hashtag.", "(7) Whether the target hashtag is the last hashtag in the tweet.", "(8) Position of the target hashtag in terms of tokens: if the target hashtag is the ith token of a tweet with |w| tokens, this feature is i/|w|.", "(9) Position of the target hashtag among all hashtags in the tweet: it is computed as j/|h|, where the target hashtag is the jth of the |h| hashtags in the tweet.", "Language Modeling of POS Sequences: As mentioned in Section 1, one kind of false-alarm hashtag usage is the case in which the hashtag also functions as a content word.", "In this paper, we attempt to measure the grammatical completeness of the tweet with and without the hashtag.", "Therefore, a language model at the level of POS tags is used.", "As shown in Figure 1, POS tagging is performed on three versions of the tweet, and based on these, three probabilities are measured and taken into account: 1) p_h: the tweet with the whole hashtag removed.", "2) p_s: the tweet with only the hash symbol # removed.", "3) p_t: the original tweet.", "Our idea is that a tweet will be more grammatically complete with only the hash symbol removed if the hashtag is also a content word.", "On the other hand, the tweet will be more grammatically complete with the whole hashtag removed if the hashtag is pure metadata.", "To measure the probability of a POS tag sequence, we integrate a neural network-based language model of POS sequences into our model.", "RNN-based language models are reportedly capable of modeling longer dependencies among sequential tokens (Mikolov et al., 2011).", "Two million English tweets that are entirely different from those in the training and test data described in Section 2 are collected and tagged with POS tags.", "We train a GRU language model on the level of POS tags.", "In this work, all POS tagging is performed with the Stanford CoreNLP toolkit (Manning et al., 2014).", "Experiments We compare our model with popular neural network-based sentence classifiers including the CNN, the GRU, and the attentive GRU.", "We also train a logistic regression (LR) classifier with the handcrafted features introduced in Section 3.", "For the imbalanced data, we assign class weights inversely proportional to the class frequencies.", "Five-fold cross-validation is performed.", "Early stopping is employed with a patience of 5 epochs.", "In each fold, we further keep 10% of the training data for tuning the model.", "The hidden dimension is 50, the batch size is 32, and the Adam optimizer is employed (Kingma and Ba, 2014).", "Table 2 shows the experimental results reported in Precision (P), Recall (R), and F-score (F).", "Our goal is to select the real ironic tweets for training the irony detection model.", "Thus, the real ironic tweets are regarded as positive, and the false-alarm ones as negative.", "We
apply the t-test for significance testing.", "The vanilla GRU and the attentive GRU are slightly superior to the logistic regression model.", "The CNN model performs the worst in this task because it suffers from over-fitting.", "We explored a number of layouts and hyperparameters for the CNN model, and consistent results were observed.", "Our method is evaluated with either the CNN, the GRU, or the attentive GRU for encoding the context preceding and following the target hashtag.", "By integrating various kinds of information, our method outperforms all baseline models no matter which encoder is used.", "The best model is the one integrating the attentive GRU encoder, which is significantly superior to all baseline models (p < 0.05) and achieves an F-score of 88.49%.", "To confirm the effectiveness of the language modeling of POS sequences, we also try excluding the GRU language model from our best model.", "Experimental results show that the addition of the language model significantly improves the performance (p < 0.05).", "As shown in the last row of Table 2, the F-score drops to 84.17% without it.", "From the data, we observe that the instances in which p_s exceeds p_h usually contain an indication hashtag functioning as a content word, and vice versa.", "For instance, (S5) and (S6) show the instances with the highest and the lowest p_s relative to p_h, respectively.", "(S5) when your #sarcasm is so advanced people actually think you are #stupid .. (S6) #mtvstars justin bieber #net #not #fast Irony Detection We employ our model to prune self-labeled data for irony detection.", "As prior work did, we collect a set of tweets that contain indication hashtags as (pseudo) positive instances and also collect a set of tweets that do not contain indication hashtags as negative instances.", "For each positive instance, our model is applied to predict whether it is a real ironic tweet or a false alarm, and the false-alarm ones are discarded.", "After pruning, a set of 14,055 tweets containing indication hashtags has been reduced to 4,617 reliable positive instances according to our model.", "We add an equal amount of negative instances randomly selected from the collection of tweets that do not contain indication hashtags.", "As a result, the prior- and the post-pruning training data, with sizes of 28,110 and 9,234 respectively, are prepared for the experiments.", "The dataflow of the training data pruning is shown in Figure 2.", "To evaluate the effectiveness of our pruning method, we implement a state-of-the-art irony detector (Huang et al., 2017), which is based on an attentive-RNN classifier, and train it on the prior- and the post-pruned training data.", "The test data are constructed as follows.", "The positive instances in the test data are taken from the 1,072 human-verified ironic tweets that are reserved for irony detection as mentioned in Section 2.", "The negative instances in the test data are obtained from the tweets that do not contain indication hashtags.", "Note that the negative instances in the test data are isolated from those in the training data.", "Experimental results confirm the benefit of pruning.", "As shown in Table 3, the irony detection model trained on the less but cleaner data significantly outperforms the model that is trained on all data (p < 0.05).", "We compare our pruning method with an alternative approach that trains the irony detector on the human-verified data directly.", "Under these circumstances, the 1,083 ironic instances used for training our hashtag disambiguation model are
mixed with an equal amount of randomly sampled negative instances and employed to train the irony detector.", "As shown in the last row of Table 3, the irony detector trained on this small dataset does not compete with the models that are trained on a larger amount of self-labeled data.", "In other words, our data pruning strategy forms a semi-supervised learning scheme that benefits from both self-labeled data and human annotation.", "Note that this task and the dataset are different from those of the official evaluation of SemEval 2018 Task 3, so the experimental results cannot be directly compared.", "The calibrated confidence output by the sigmoid layer of our hashtag disambiguation model can be regarded as a measurement of the reliability of an instance (Niculescu-Mizil and Caruana, 2005; Guo et al., 2017).", "Thus, we can sort all self-labeled data by their calibrated confidence and control the size of the training set by adjusting the threshold.", "The higher the threshold value is set, the fewer training instances remain.", "Figure 3 shows the performance of the irony detector trained on the data filtered with different threshold values.", "For each threshold value, the bullet symbol (•) indicates the size of the training data, and the bar indicates the F-score achieved by the irony detector trained on those data.", "The best result is achieved by the irony detector trained on the 9,234 instances filtered by our model with the default threshold value (0.5).", "This confirms that our model is able to select useful training instances in a strict manner.", "Conclusion Self-labeled data is an accessible and economical resource for a variety of learning-based applications.", "However, directly using the labels made by the crowd as ground-truth for training and testing may lead to inaccurate performance estimates due to the reliability issue.", "This paper addresses this issue in the case of irony detection by proposing a model to remove two kinds of false-alarm tweets from the training data.", "Experimental results confirm that the irony detection model benefits from the less but cleaner training data.", "Our approach can be applied to other topics that rely on self-labeled data." ] }
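The architecture described above (two context encoders whose 50-d outputs are concatenated with the handcrafted features and the three POS language-model probabilities, followed by a sigmoid) can be sketched compactly. Below is a minimal, hypothetical tf.keras version; the vocabulary size, maximum context length, and feature count are illustrative assumptions, and in practice the embedding layer would be initialized from the pre-trained Google News vectors rather than at random.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

VOCAB = 20000    # assumed vocabulary size
MAXLEN = 30      # assumed max tokens on each side of the hashtag
N_FEATS = 12     # handcrafted features (lengths, hashtag type, positions, ...)

def build_disambiguator():
    left = layers.Input(shape=(MAXLEN,), name="left_context")
    right = layers.Input(shape=(MAXLEN,), name="right_context")
    feats = layers.Input(shape=(N_FEATS,), name="handcrafted")
    pos_lm = layers.Input(shape=(3,), name="pos_lm_probs")  # p_h, p_s, p_t

    # 300-d word embeddings; the paper initializes these from vectors
    # pre-trained on the Google News dataset.
    emb = layers.Embedding(VOCAB, 300, mask_zero=True)
    # Separate encoders for the context before and after the hashtag,
    # each producing a 50-d representation as in the paper.
    enc_left = layers.GRU(50)(emb(left))
    enc_right = layers.GRU(50)(emb(right))

    h = layers.concatenate([enc_left, enc_right, feats, pos_lm])
    out = layers.Dense(1, activation="sigmoid", name="real_vs_false_alarm")(h)

    model = Model([left, right, feats, pos_lm], out)
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

model = build_disambiguator()
model.summary()
```

Sharing a single embedding layer between the left and right encoders keeps the parameter count down; the paper does not state whether the embeddings are shared, so this is a design assumption rather than a documented detail.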
{ "paper_header_number": [ "1", "2", "3", "4", "6" ], "paper_header_content": [ "Introduction", "Dataset", "Disambiguation of Hashtags", "Experiments", "Conclusion" ] }
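The nine handcrafted features enumerated in Section 3 are straightforward to compute from a tokenized tweet. The sketch below is one possible realization; the feature ordering and the use of raw values (the paper mentions a one-hot representation) are simplifying assumptions, and `target_idx` is a hypothetical name for the index of the indication hashtag.

```python
def handcrafted_features(tokens, target_idx):
    """Nine feature groups from Section 3 for the hashtag at tokens[target_idx]."""
    hashtags = [i for i, t in enumerate(tokens) if t.startswith("#")]
    j = hashtags.index(target_idx)            # rank of the target among hashtags
    tag = tokens[target_idx].lower()
    return [
        len(tokens),                          # (1) length in words
        sum(len(t) for t in tokens),          # (1) length in characters
        float(tag == "#not"),                 # (2) hashtag type
        float(tag == "#sarcasm"),
        float(tag == "#irony"),
        len(hashtags),                        # (3) number of hashtags
        float(target_idx == 0),               # (4) first token?
        float(target_idx == len(tokens) - 1), # (5) last token?
        float(j == 0),                        # (6) first hashtag?
        float(j == len(hashtags) - 1),        # (7) last hashtag?
        (target_idx + 1) / len(tokens),       # (8) token position i/|w|
        (j + 1) / len(hashtags),              # (9) hashtag position j/|h|
    ]

print(handcrafted_features("doing a great job ... #not".split(), 5))
```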
GEM-SciDuet-train-70#paper-1153#slide-9
Training Data Pruning for Irony Detection
We employ our model to prune self-labeled data for irony detection. We collect a set of tweets that contain indication hashtags as (pseudo) positive instances and a set of tweets that do not contain indication hashtags as negative instances. Our model is applied to predict whether each positive instance is a real ironic tweet or a false alarm, and the false-alarm ones are discarded.
We employ our model to prune self-labeled data for irony detection. We collect a set of tweets that contain indication hashtags as (pseudo) positive instances and a set of tweets that do not contain indication hashtags as negative instances. Our model is applied to predict whether each positive instance is a real ironic tweet or a false alarm, and the false-alarm ones are discarded.
[]
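The pruning dataflow summarized on the slide above reduces to a filter-then-balance step. The sketch below assumes a trained disambiguation model exposed as a `score` callable returning the sigmoid confidence; all names here are illustrative, not the paper's actual code.

```python
import random

def prune_training_data(pseudo_positives, negative_pool, score, threshold=0.5):
    # Keep only the tweets the disambiguation model judges genuinely ironic.
    positives = [t for t in pseudo_positives if score(t) >= threshold]
    # Balance with an equal number of randomly sampled negatives
    # (assumes the negative pool is at least as large as the kept positives).
    negatives = random.sample(negative_pool, len(positives))
    data = [(t, 1) for t in positives] + [(t, 0) for t in negatives]
    random.shuffle(data)
    return data
```

With the paper's numbers, this step would shrink 14,055 pseudo-positive tweets to 4,617 kept positives, yielding a balanced post-pruning set of 9,234 instances.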
GEM-SciDuet-train-70#paper-1153#slide-10
1153
Disambiguating False-Alarm Hashtag Usages in Tweets for Irony Detection
The reliability of self-labeled data is an important issue when the data are regarded as ground-truth for training and testing learning-based models. This paper addresses the issue of false-alarm hashtags in the self-labeled data for irony detection. We analyze the ambiguity of hashtag usages and propose a novel neural network-based model, which incorporates linguistic information from different aspects, to disambiguate the usage of three hashtags that are widely used to collect the training data for irony detection. Furthermore, we apply our model to prune the self-labeled training data. Experimental results show that the irony detection model trained on the less but cleaner training instances outperforms the models trained on all data.
GEM-SciDuet-train-70#paper-1153#slide-10
Results on Irony Detection
We implement a state-of-the-art irony detector, which is based on an attentive-RNN classifier, and train it on the prior- and the post-pruned training data. The irony detection model trained on the less but cleaner instances significantly outperforms the model that is trained on all data (p < 0.05). The irony detector trained on the small amount of human-verified data does not compete with the models that are trained on a larger amount of self-labeled data. (Table: Data Size, Precision, Recall, and F-score for the Prior-Pruning, Post-Pruning, and Human Verified settings.)
We implement a state-of-the-art irony detector, which is based on an attentive-RNN classifier, and train it on the prior- and the post-pruned training data. The irony detection model trained on the less but cleaner instances significantly outperforms the model that is trained on all data (p < 0.05). The irony detector trained on the small amount of human-verified data does not compete with the models that are trained on a larger amount of self-labeled data. (Table: Data Size, Precision, Recall, and F-score for the Prior-Pruning, Post-Pruning, and Human Verified settings.)
[]
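The significance claims on this slide come from the t-test mentioned in the Experiments section. The paper does not specify the exact variant; one plausible reading is a paired test over per-fold scores, sketched below with placeholder numbers rather than the paper's actual fold results.

```python
from scipy import stats

post_pruning = [0.79, 0.81, 0.80, 0.78, 0.82]   # hypothetical per-fold F-scores
prior_pruning = [0.75, 0.77, 0.74, 0.76, 0.75]
t, p = stats.ttest_rel(post_pruning, prior_pruning)
print(f"t = {t:.3f}, p = {p:.4f}")               # significant if p < 0.05
```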
GEM-SciDuet-train-70#paper-1153#slide-11
1153
GEM-SciDuet-train-70#paper-1153#slide-11
Different Threshold Values for Data Pruning
We can sort all self-labeled data by their calibrated confidence and control the size of the training set by adjusting the threshold. The higher the threshold value is set, the fewer training instances remain. The best result is achieved by the irony detector trained on the 9,234 instances filtered by our model with the default threshold value (0.5). This confirms that our model is able to select useful training instances in a strict manner. The bullet symbol (•) indicates the size of the training data, and the bar indicates the F-score achieved by the irony detector trained on those data.
We can sort all self-labeled data by their calibrated confidence and control the size of the training set by adjusting the threshold. The higher the threshold value is set, the fewer training instances remain. The best result is achieved by the irony detector trained on the 9,234 instances filtered by our model with the default threshold value (0.5). This confirms that our model is able to select useful training instances in a strict manner. The bullet symbol (•) indicates the size of the training data, and the bar indicates the F-score achieved by the irony detector trained on those data.
[]
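The threshold sweep on the slide above can be expressed as a small loop: filter the self-labeled pool at each confidence threshold, balance with negatives, and record the resulting training size and F-score. The components passed in (`confidence`, `train_and_eval`) are assumed stand-ins for the disambiguation model's sigmoid output and the detector's training/evaluation routine.

```python
def sweep_thresholds(pool, negative_pool, confidence, train_and_eval,
                     thresholds=(0.1, 0.3, 0.5, 0.7, 0.9)):
    """For each threshold, keep instances whose calibrated confidence clears it,
    balance with negatives, and record (threshold, training size, F-score)."""
    results = []
    for th in thresholds:
        kept = [t for t in pool if confidence(t) >= th]   # stricter -> smaller
        f = train_and_eval(kept, negative_pool[: len(kept)])
        results.append((th, 2 * len(kept), f))
    return results

# Toy demo with stand-in components; real usage would plug in the trained
# disambiguation model and the attentive-RNN detector described earlier.
demo = sweep_thresholds(list(range(100)), list(range(100)),
                        confidence=lambda t: (t % 10) / 10,
                        train_and_eval=lambda pos, neg: round(0.7 + len(pos) / 1000, 3))
print(demo)
```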
GEM-SciDuet-train-70#paper-1153#slide-12
1153
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143 ], "paper_content_text": [ "Introduction Self-labeled data available on the Internet are popular research materials in many NLP areas.", "Metadata such as tags and emoticons given by users are considered as labels for training and testing learning-based models, which usually benefit from large amount of data.", "One of the sources of self-labeled data widely used in the research community is Twitter, where the short-text messages tweets written by the crowd are publicly shared.", "In a tweet, the author can tag the short text with some hashtags such as #excited, #happy, #UnbornLivesMatter, and #Hillary4President to express their emotion or opinion.", "The tweets with a certain types of hashtags are collected as self-label data in a variety of research works including sentiment analysis (Qadir and Riloff, 2014) , stance detection (Mohammad et al., 2016; Sobhani et al., 2017) , fi-nancial opinion mining (Cortis et al., 2017) , and irony detection (Ghosh et al., 2015; Peled and Reichart, 2017; Hee et al., 2018) .", "In the case of irony detection, it is impractical to manually annotate the ironic sentences from randomly sampled data due to the relatively low occurrences of irony (Davidov et al., 2010) .", "Collecting the tweets with the hashtags like #sarcasm, #irony, and #not becomes the mainstream approach to dataset construction (Sulis et al., 2016) .", "As shown in (S1), the tweet with the hashtag #not is treated as a positive (ironic) instance by removing #not from the text.", "(S1) @Anonymous doing a great job... 
#not What do I pay my extortionate council taxes for?", "#Disgrace #Ongo-ingProblem http://t.co/FQZUUwKSoN However, the reliability of the self-labeled data is an important issue.", "As pointed out in the pioneering work, not all tweet writers know the definition of irony (Van Hee et al., 2016b) .", "For instance, (S2) is tagged with #irony by the writer, but it is just witty and amusing.", "(S2) BestProAdvice @Anonymous More clean OR cleaner, never more cleaner.", "#irony When the false-alarm instances like (S2) are collected and mixed in the training and test data, the models that learn from the unreliable data may be misled, and the evaluation is also suspicious.", "The other kind of unreliable data comes from the hashtags not only functioning as metadata.", "That is, a hashtag in a tweet may also function as a content word in its word form.", "For example, the hashtag #irony in (S3) is a part of the sentence \"the irony of taking a break...\", in contrast to the hashtag #not in (S1), which can be removed without a change of meaning.", "(S3) The #irony of taking a break from reading about #socialmedia to check my social media.", "When the hashtag plays as a content word in a tweet, the tweet is not a good candidate of selflabeled ironic instances because the sentence will be incomplete once the hashtag is removed.", "In this work, both kinds of unreliable data, the tweets with a misused hashtag and the tweets in which the hashtag serves as a content word, are our targets to remove from the training data.", "Manual data cleaning is labor-intensive and inefficient (Van Hee et al., 2016a) .", "Compared to general training data cleaning approaches (Malik and Bhardwaj, 2011; Esuli and Sebastiani, 2013; Fukumoto and Suzuki, 2004) such as boostingbased learning, this work leverages the characteristics of hashtag usages in tweets.", "With small amount of golden labeled data, we propose a neural network classifier for pruning the self-labeled tweets, and train an ironic detector on the less but cleaner instances.", "This approach is easily to apply to other NLP tasks that rely on self-labeled data.", "The contributions of this work are three-fold: (1) We make an empirically study on an issue that is potentially inherited in a number of research topics based on self-labeled data.", "(2) We propose a model for hashtag disambiguation.", "For this task, the human-verified ground-truth is quite limited.", "To address the issue of sparsity, a novel neural network model for hashtag disambiguation is proposed.", "(3) The data pruning method, in which our model is applied to select reliable self-labeled data, is capable of improving the performance of irony detection.", "The rest of this paper is organized as follows.", "Section 2 describes how we construct a dataset for disambiguating false-alarm hashtag usages based on Tweets.", "In Section 3, our model for hashtag disambiguation is proposed.", "Experimental results of hashtag disambiguation are shown in Section 4.", "In addition, we apply our method to prune training data for irony detection.", "The results are shown in Section 5.", "Section 6 concludes this paper.", "Dataset The tweets with indication hashtags such as #irony are usually collected as a dataset in previous works on irony detection.", "As pointed out in Section 1, the hashtags are treated as ground-truth for training and testing.", "To investigate the issue of false-alarm self-labeled tweets, the tweets with human verification are indispensable.", "In this study, we build the 
ground-truth based on the dataset released for SemEval 2018 Task 3, which targets fine-grained irony detection (Hee et al., 2018) .", "In the SemEval dataset, the tweets with one of the three indication hashtags #not, #sarcasm, and #irony are collected and human-annotated as one of four types: verbal irony by means of a polarity contrast, other verbal irony, situational irony, and non-ironic.", "In other words, the false-alarm tweets, i.e., the non-ironic tweets with indication hashtags, are distinguished from the real ironic tweets in this dataset.", "However, the hashtag itself has been removed in the SemEval dataset.", "For example, the original tweet (S1) has been modified to (S4), where the hashtag #not disappears.", "As a result, the hashtag information, i.e., the position and the word form of the hashtag (not, irony, or sarcasm), is missing from the SemEval dataset.", "(S4) @Anonymous doing a great job... What do I pay my extortionate council taxes for?", "#Disgrace #OngoingProblem http://t.co/FQZUUwKSoN For hashtag disambiguation, the information of the hashtag in each tweet is mandatory.", "Thus, we recover the original tweets by using Twitter search.", "As shown in Table 1 , a total of 1,359 tweets with hashtag information are adopted as the ground-truth.", "Note that more than 20% of the self-labeled data are false alarms, and this can be an issue when they are adopted as training or test data.", "For the irony detection experiment in Section 5, we reserve the other 1,072 tweets in the SemEval dataset that are annotated as real ironic, and use them as the test data.", "In addition to the issue of hashtag disambiguation, the ironic tweets without an indication hashtag, which are regarded as non-ironic instances in previous work, are another kind of misleading data for irony detection.", "Fortunately, the occurrence of such \"false-negative\" instances is insignificant due to the relatively low occurrence of irony (Davidov et al., 2010) .", "Disambiguation of Hashtags Figure 1 shows our model for distinguishing the real ironic tweets from the false-alarm ones.", "Given an instance with the hashtag #irony, the preceding and the following word sequences of the hashtag are encoded by separate sub-networks, and both embeddings are concatenated with the handcrafted features and the probabilities of three kinds of part-of-speech (POS) tag sequences.", "Finally, the sigmoid activation function decides whether the instance is real ironic or a false alarm.", "The details of each component are presented in the rest of this section.", "Word Sequences: The word sequences of the context preceding and following the targeting hashtag are separately encoded by neural network sentence encoders.", "The Penn Treebank Tokenizer provided by NLTK (Bird et al., 2009 ) is used for tokenization.", "As a result, each of the left and the right word sequences is encoded as an embedding of length 50.", "We experiment with a convolutional neural network (CNN) (Kim, 2014) , a gated recurrent unit (GRU) (Cho et al., 2014) , and an attentive-GRU for sentence encoding.", "CNNs for sentence classification have been shown effective in NLP applications such as sentiment analysis (Kim, 2014) .", "Classifiers based on recurrent neural networks (RNNs) have also been applied to NLP, especially for sequential modeling.", "For irony detection, one of the state-of-the-art models is based on the attentive RNN (Huang et al., 2017) .", "The first layer of the CNN, the GRU, and the attentive-GRU model is the 
300-dimensional word embedding that is initialized with the vectors pre-trained on the Google News dataset.", "Handcrafted Features: We add handcrafted features of the tweet in one-hot representation.", "The features taken into account are listed as follows.", "(1) Lengths of the tweet in words and in characters.", "(2) Type of the target hashtag (i.e.", "#not, #sarcasm, or #irony).", "(3) Number of all hashtags in the tweet.", "(4) Whether the targeting hashtag is the first token in the tweet.", "(5) Whether the targeting hashtag is the last token in the tweet.", "(6) Whether the targeting hashtag is the first hashtag in the tweet, since a tweet may contain more than one hashtag.", "(7) Whether the targeting hashtag is the last hashtag in the tweet.", "(8) Position of the targeting hashtag in terms of tokens.", "If the targeting hashtag is the i-th token of a tweet with |w| tokens, this feature is i/|w|.", "(9) Position of the targeting hashtag among all hashtags in the tweet.", "It is computed as j/|h|, where the targeting hashtag is the j-th hashtag in a tweet that contains |h| hashtags.", "Language Modeling of POS Sequences: As mentioned in Section 1, one kind of false-alarm hashtag usage is the case where the hashtag also functions as a content word.", "In this paper, we attempt to measure the grammatical completeness of the tweet with and without the hashtag.", "Therefore, a language model on the level of POS tags is used.", "As shown in Figure 1 , POS tagging is performed on three versions of the tweet, and based on that, three probabilities are measured and taken into account: 1) p_h: the tweet with the whole hashtag removed.", "2) p_s: the tweet with only the hash symbol # removed.", "3) p_t: the original tweet.", "Our idea is that a tweet will be more grammatically complete with only the hash symbol removed if the hashtag is also a content word.", "On the other hand, the tweet will be more grammatically complete with the whole hashtag removed if the hashtag is metadata.", "To measure the probability of a POS tag sequence, we integrate a neural network-based language model of POS sequences into our model.", "RNN-based language models are reportedly capable of modeling longer dependencies among sequential tokens (Mikolov et al., 2011) .", "Two million English tweets that are entirely different from those in the training and test data described in Section 2 are collected and tagged with POS tags.", "We train a GRU language model on the level of POS tags.", "In this work, all the POS tagging is performed with the Stanford CoreNLP toolkit (Manning et al., 2014) .", "Experiments We compare our model with popular neural network-based sentence classifiers including the CNN, the GRU, and the attentive GRU.", "We also train a logistic regression (LR) classifier with the handcrafted features introduced in Section 3.", "For the imbalanced data, we assign class weights inversely proportional to class frequencies.", "Five-fold cross-validation is performed.", "Early stopping is employed with a patience of 5 epochs.", "In each fold, we further keep 10% of the training data for tuning the model.", "The hidden dimension is 50, the batch size is 32, and the Adam optimizer is employed (Kingma and Ba, 2014) .", "Table 2 shows the experimental results reported in Precision (P), Recall (R), and F-score (F).", "Our goal is to select the real ironic tweets for training the irony detection model.", "Thus, the real ironic tweets are regarded as positive, and the false-alarm ones are negative.", "We 
apply a t-test for significance testing.", "The vanilla GRU and the attentive GRU are slightly superior to the logistic regression model.", "The CNN model performs the worst in this task because it suffers from an over-fitting problem.", "We explored a number of layouts and hyperparameters for the CNN model, and consistent results were observed.", "Our method is evaluated with either the CNN, the GRU, or the attentive GRU for encoding the context preceding and following the targeting hashtag.", "By integrating various kinds of information, our method outperforms all baseline models no matter which encoder is used.", "The best model is the one integrating the attentive GRU encoder, which is significantly superior to all baseline models (p < 0.05) and achieves an F-score of 88.49%. To confirm the effectiveness of the language modeling of POS sequences, we also try to exclude the GRU language model from our best model.", "Experimental results show that the addition of the language model significantly improves the performance (p < 0.05).", "As shown in the last row of Table 2 , the F-score drops to 84.17%.", "From the data, we observe that the instances whose p_s is much higher than p_h usually contain an indication hashtag functioning as a content word, and vice versa.", "For instance, (S5) and (S6) show the instances with the highest and the lowest p_s relative to p_h, respectively.", "(S5) when your #sarcasm is so advanced people actually think you are #stupid .. (S6) #mtvstars justin bieber #net #not #fast Irony Detection We employ our model to prune self-labeled data for irony detection.", "As prior work did, we collect a set of tweets that contain indication hashtags as (pseudo) positive instances and also collect a set of tweets that do not contain indication hashtags as negative instances.", "For each positive instance, our model is applied to predict whether it is a real ironic tweet or a false alarm, and the false-alarm ones are discarded.", "After pruning, a set of 14,055 tweets containing indication hashtags has been reduced to 4,617 reliable positive instances according to our model.", "We add an equal amount of negative instances randomly selected from the collection of tweets that do not contain indication hashtags.", "As a result, the prior- and the post-pruning training data, with sizes of 28,110 and 9,234, respectively, are prepared for the experiments.", "The dataflow of the training data pruning is shown in Figure 2 .", "For evaluating the effectiveness of our pruning method, we implement a state-of-the-art irony detector (Huang et al., 2017) , which is based on an attentive-RNN classifier, and train it on the prior- and the post-pruned training data.", "The test data is constructed as follows.", "The positive instances in the test data are taken from the 1,072 human-verified ironic tweets that are reserved for irony detection as mentioned in Section 2.", "The negative instances in the test data are obtained from the tweets that do not contain indication hashtags.", "Note that the negative instances in the test data are isolated from those in the training data.", "Experimental results confirm the benefit of pruning.", "As shown in Table 3 , the irony detection model trained on the smaller but cleaner data significantly outperforms the model that is trained on all the data (p < 0.05).", "We compare our pruning method with an alternative approach that trains the irony detector on the human-verified data directly.", "Under these circumstances, the 1,083 ironic instances for training our hashtag disambiguation model are currently 
mixed with an equal amount of randomly sampled negative instances and employed to train the irony detector.", "As shown in the last row of Table 3 , the irony detector trained on the small data is not competitive with the models that are trained on larger amounts of self-labeled data.", "In other words, our data pruning strategy forms a semi-supervised learning approach that benefits from both self-labeled data and human annotation.", "Note that this task and the dataset are different from those of the official evaluation of SemEval 2018 Task 3, so the experimental results cannot be directly compared.", "The calibrated confidence output by the sigmoid layer of our hashtag disambiguation model can be regarded as a measurement of the reliability of an instance (Niculescu-Mizil and Caruana, 2005; Guo et al., 2017) .", "Thus, we can sort all self-labeled data by their calibrated confidence and control the size of the training set by adjusting the threshold.", "The higher the threshold is set, the fewer training instances remain.", "Figure 3 shows the performance of the irony detector trained on the data filtered with different threshold values.", "For each threshold value, the bullet symbol (•) indicates the size of the training data, and the bar indicates the F-score achieved by the irony detector trained on those data.", "The best result is achieved by the irony detector trained on the 9,234 instances filtered by our model with the default threshold value (0.5).", "This confirms that our model is able to select useful training instances in a strict manner.", "Conclusion Self-labeled data is an accessible and economical resource for a variety of learning-based applications.", "However, directly using the labels made by the crowd as ground-truth for training and testing may lead to inaccurate performance due to reliability issues.", "This paper addresses this issue in the case of irony detection by proposing a model to remove two kinds of false-alarm tweets from the training data.", "Experimental results confirm that the irony detection model benefits from the smaller but cleaner training data.", "Our approach can be applied to other topics that rely on self-labeled data." ] }
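To make the nine handcrafted features listed above concrete, the following is a minimal Python sketch of the feature extraction. It assumes the tweet is already tokenized (e.g., with NLTK's Penn Treebank tokenizer) and that the target hashtag is identified by its token index; the function name, the dictionary layout, and the use of raw values rather than the paper's one-hot encoding are illustrative choices, not details taken from the paper.

def handcrafted_features(tokens, raw_text, target_idx):
    # Indices of all hashtags in the tweet; a tweet may contain several.
    hashtags = [i for i, tok in enumerate(tokens) if tok.startswith("#")]
    j = hashtags.index(target_idx)      # rank of the target among the hashtags
    kind = tokens[target_idx].lower()   # "#not", "#sarcasm", or "#irony"
    return {
        "len_words": len(tokens),                          # (1) length in words
        "len_chars": len(raw_text),                        # (1) length in characters
        "type_not": kind == "#not",                        # (2) type of the hashtag
        "type_sarcasm": kind == "#sarcasm",                # (2)
        "type_irony": kind == "#irony",                    # (2)
        "num_hashtags": len(hashtags),                     # (3)
        "is_first_token": target_idx == 0,                 # (4)
        "is_last_token": target_idx == len(tokens) - 1,    # (5)
        "is_first_hashtag": j == 0,                        # (6)
        "is_last_hashtag": j == len(hashtags) - 1,         # (7)
        "token_position": (target_idx + 1) / len(tokens),  # (8) i/|w| with 1-based i
        "hashtag_position": (j + 1) / len(hashtags),       # (9) j/|h| with 1-based j
    }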
{ "paper_header_number": [ "1", "2", "3", "4", "6" ], "paper_header_content": [ "Introduction", "Dataset", "Disambiguation of Hashtags", "Experiments", "Conclusion" ] }
GEM-SciDuet-train-70#paper-1153#slide-12
Conclusions
We conduct an empirical study of an issue that is potentially inherent in a number of research topics based on self-labeled data. We propose a model for hashtag disambiguation. For this task, the human-verified ground-truth is quite limited. To address the issue of sparsity, a novel neural network model for hashtag disambiguation is proposed. The data pruning method is capable of improving the performance of irony detection, and can be applied to other work that relies on self-labeled data.
We conduct an empirical study of an issue that is potentially inherent in a number of research topics based on self-labeled data. We propose a model for hashtag disambiguation. For this task, the human-verified ground-truth is quite limited. To address the issue of sparsity, a novel neural network model for hashtag disambiguation is proposed. The data pruning method is capable of improving the performance of irony detection, and can be applied to other work that relies on self-labeled data.
[]
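The data pruning summarized in the slide above amounts to thresholding the calibrated confidence of the hashtag disambiguation model. In the Python sketch below, disambiguator is a stand-in for the trained model, and its predict_proba interface, returning the sigmoid confidence, is an assumption made for illustration; the default threshold of 0.5 matches the paper's best-performing setting.

def prune_self_labeled(tweets, disambiguator, threshold=0.5):
    kept = []
    for tweet in tweets:
        confidence = disambiguator.predict_proba(tweet)  # calibrated sigmoid output in [0, 1]
        if confidence > threshold:
            kept.append(tweet)  # kept as a reliable (real ironic) instance
    return kept

Raising the threshold keeps fewer but more reliable instances; in the paper, 14,055 self-labeled tweets are reduced to 4,617 positive instances at the default threshold.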
GEM-SciDuet-train-71#paper-1160#slide-4
1160
Parallelizable Stack Long Short-Term Memory
Stack Long Short-Term Memory (StackLSTM) is useful for various applications such as parsing and string-to-tree neural machine translation, but it is also known to be notoriously difficult to parallelize for GPU training due to the fact that the computations are dependent on discrete operations. In this paper, we tackle this problem by utilizing state access patterns of StackLSTM to homogenize computations with regard to different discrete operations. Our parsing experiments show that the method scales up almost linearly with increasing batch size, and our parallelized PyTorch implementation trains significantly faster compared to the DyNet C++ implementation.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91 ], "paper_content_text": [ "Introduction Tree-structured representation of language has been successfully applied to various applications including dependency parsing (Dyer et al., 2015) , sentiment analysis (Socher et al., 2011) and neural machine translation (Eriguchi et al., 2017) .", "However, most of the neural network architectures used to build tree-structured representations are not able to exploit full parallelism of GPUs by minibatched training, as the computation that happens for each instance is conditioned on the input/output structures, and hence cannot be naïvely grouped together as a batch.", "This lack of parallelism is one of the major hurdles that prevent these representations from wider adoption practically (e.g., neural machine translation), as many natural language processing tasks currently require the ability to scale up to very large training corpora in order to reach state-of-theart performance.", "We seek to advance the state-of-the-art of this problem by proposing a parallelization scheme for one such network architecture, the Stack Long Short-Term Memory (StackLSTM) proposed in Dyer et al.", "(2015) .", "This architecture has been successfully applied to dependency parsing (Dyer et al., 2015 (Dyer et al., , 2016 Ballesteros et al., 2017) and syntax-aware neural machine translation (Eriguchi et al., 2017) in the previous research literature, but none of these research results were produced with minibatched training.", "We show that our parallelization scheme is feasible in practice by showing that it scales up near-linearly with increasing batch size, while reproducing a set of results reported in (Ballesteros et al., 2017) .", "StackLSTM StackLSTM (Dyer et al., 2015) is an LSTM architecture (Hochreiter and Schmidhuber, 1997) augmented with a stack H that stores some of the hidden states built in the past.", "Unlike traditional LSTMs that always build state h t from h t−1 , the states of StackLSTM are built from the head of the state stack H, maintained by a stack top pointer p(H).", "At each time step, StackLSTM takes a realvalued input vector together with an additional discrete operation on the stack, which determines what computation needs to be conducted and how the stack top pointer should be updated.", "Throughout this section, we index the input vector (e.g.", "word embeddings) x t using the time step t it is fed into the network, and hidden states in the stack h j using their position j in the stack H, j being defined as the 0-base index starting from the stack bottom.", "The set of input discrete actions typically contains at least Push and Pop operations.", "When these operations are taken as input, the corresponding computations on the StackLSTM are listed below: 1 (Dyer et al., 2015) .", "S0 and B0 refers to the token-level representation corresponding to the top element of the stack and buffer, while S1 and B1 refers to those that are second to the top.", "We use a different notation here to avoid confusion with the states in StackLSTM, which represent non-local information beyond token-level.", "Reflecting on the aforementioned discussion on parallelism, one should notice 
that StackLSTM falls into the category of neural network architectures for which minibatched training is difficult.", "This is caused by the fact that the computation performed by StackLSTM at each time step is dependent on the discrete input actions.", "The following section proposes a solution to this problem.", "Parallelizable StackLSTM Continuing the formulation in the previous section, we will start by discussing our proposed solution for the case where the set of discrete actions contains only Push and Pop operations; we then move on to discuss the applicability of our proposed solution to the transition systems that are used for building representations of dependency trees.", "The first modification we perform to the Push and Pop operations above is to unify the pointer update of these operations as p(H) ← p(H) + op, where op is the input discrete operation that takes the value +1 for the Push operation and -1 for the Pop operation.", "After this modification, we came to the following observations. Now, what remains to homogenize the Push and Pop operations is to conduct the extra computations needed for the Push operation when Pop is fed in as well, while guaranteeing the correctness of the resulting hidden state both in the current time step and in the future.", "The next observation points out a way for this guarantee: Observation 2 In a StackLSTM, given the current stack top pointer position p(H), any hidden state h_i where i > p(H) will not be read until it is overwritten by a Push operation.", "What follows from this observation is the guarantee that we can always safely overwrite hidden states h_i that are indexed higher than the current stack top pointer, because it is known that any read operation on these states will happen after another overwrite.", "This allows us to do the extra computation anyway when the Pop operation is fed in, because the extra computation, especially updating h_{p(H)+1} , will not harm the validity of the hidden states at any time step.", "Algorithm 1 gives the final forward computation for the Parallelizable StackLSTM: h_prev ← h_{p(H)}; h ← LSTM(x_t, h_prev); h_{p(H)+1} ← h; p(H) ← p(H) + op; return h_{p(H)}.", "Note that this algorithm does not contain any if-statements that depend on the stack operations, and hence it is homogeneous when grouped into batches that consist of multiple operation trajectories.", "In transition systems (Nivre, 2008; Kuhlmann et al., 2011) used in real tasks (e.g., transition-based parsing) as shown in Table 1 , it should be noted that more than Push and Pop operations are needed for the StackLSTM.", "Fortunately, for the Arc-Eager and Arc-Hybrid transition systems, we can simply add a hold operation, which is denoted by the value 0 for the discrete operation input.", "For that reason, we will focus on the parallelization of these two transition systems in this paper.", "It should be noted that both observations discussed above are still valid after adding the hold operation.", "Experiments Setup We implemented the architecture described above in PyTorch (Paszke et al., 2017) .", "We implemented the batched stack as a float tensor wrapped in a non-leaf variable, thus enabling in-place operations on that variable.", "At each time step, the batched stack is queried/updated with a batch of stack head positions represented by an integer vector, an operation made possible by the gather operation and advanced indexing.", "Due to this implementation choice, the stack size has to be determined at initialization time and cannot be dynamically 
grown.", "Nonetheless, a fixed stack size of 150 works for all the experiments we conducted.", "We use the dependency parsing task to evaluate the correctness and the scalability of our method.", "a test set accuracy of 97.47%.", "We use exactly the same pre-trained English word embedding as Dyer et al.", "(2015) .", "We use Adam (Kingma and Ba, 2014) as the optimization algorithm.", "Following Goyal et al.", "(2017) , we apply linear warmup to the learning rate with an initial value of τ = 5 × 10 −4 and total epoch number of 5.", "The target learning rate is set by τ multiplied by batch size, but capped at 0.02 because we find Adam to be unstable beyond that learning rate.", "After warmup, we reduce the learning rate by half every time there is no improvement for loss value on the development set (ReduceLROnPlateau).", "We clip all the gradient norms to 5.0 and apply a L 2 -regularization with weight 1 × 10 −6 .", "We started with the hyper-parameter choices in Dyer et al.", "(2015) but made some modifications based on the performance on the development set: we use hidden dimension 200 for all the LSTM units, 200 for the parser state representation before the final softmax layer, and embedding dimension 48 for the action embedding.", "We use Tesla K80 for all the experiments, in order to compare with Neubig et al.", "(2017b); Dyer et al.", "(2015) .", "We also use the same hyper-parameter setting as Dyer et al.", "(2015) for speed comparison experiments.", "All the speeds are measured by running through one training epoch and averaging.", "close to linear, which means there is very little overhead associated with our batching scheme.", "Quantitatively, according to Amdahl's Law (Amdahl, 1967) , the proportion of parallelized computations is 99.92% at batch size 64.", "We also compared our implementation with the implementation that comes with Dyer et al.", "(2015) , which is implemented in C++ with DyNet (Neubig et al., 2017a) .", "DyNet is known to be very optimized for CPU computations and hence their implementation is reasonably fast even without batching and GPU acceleration, as shown in Figure 1 .", "4 But we would like to point out that we focus on the speed-up we are able to obtain rather than the absolute speed, and that our batching scheme is framework-universal and superior speed might be obtained by combining our scheme with alternative frameworks or languages (for example, the torch C++ interface).", "The dependency parsing results are shown in Table 2.", "Our implementation is able to yield better test set performance than that reported in Ballesteros et al.", "(2017) for all batch size configurations except 256, where we observe a modest performance loss.", "Like Goyal et al.", "(2017) ; Keskar et al.", "(2016) ; Masters and Luschi (2018) , we initially observed more significant test-time performance deterioration (around 1% absolute difference) for models trained without learning rate warmup, and concurring with the findings in Goyal et al.", "(2017) , we find warmup very helpful for stabilizing largebatch training.", "We did not run experiments with batch size below 8 as they are too slow due to 4 Measured on one core of an Intel Xeon E7-4830 CPU.", "Results Python's inherent performance issue.", "Related Work DyNet has support for automatic minibatching (Neubig et al., 2017b) , which figures out what computation is able to be batched by traversing the computation graph to find homogeneous computations.", "While we cannot directly compare with that framework's automatic 
batching solution for StackLSTM, we can draw a loose comparison to the results reported in that paper for BiLSTM transition-based parsing (Kiperwasser and Goldberg, 2016) .", "Comparing batch size 64 to batch size 1, they obtained a 3.64x speed-up on CPU and a 2.73x speed-up on a Tesla K80 GPU, while our architecture-specific manual batching scheme obtained a 60.8x speed-up.", "The main reason for this difference is that their graph-traversing automatic batching scheme carries a much larger overhead compared to our manual batching approach.", "Another toolkit that supports automatic minibatching is Matchbox, which operates by analyzing the single-instance model definition and deterministically converting the operations into their minibatched counterparts.", "While such a mechanism eliminates the need to traverse the whole computation graph, it cannot homogenize the operations in each branch of an if-statement.", "Instead, it needs to perform each operation separately and apply masking on the result, while our method does not require any masking.", "Unfortunately, we are also not able to compare with the toolkit at the time of this work as it lacks support for several operations we need.", "Similar in spirit to our work, Bowman et al.", "(2016) attempted to parallelize StackLSTM by using Thin-stack, a data structure that reduces the space complexity by storing all the intermediate stack top elements in a tensor and using a queue to control element access.", "However, thanks to PyTorch, our implementation is not directly dependent on the notion of Thin-stack.", "Instead, when an element is popped from the stack, we simply shift the stack top pointer and potentially re-write the corresponding sub-tensor later.", "In other words, there is no need for us to directly maintain all the intermediate stack top elements, because in PyTorch, when an element in the stack is re-written, its underlying sub-tensor will not be destructed as there are still nodes in the computation graph that point to it.", "Hence, when performing back-propagation, the gradient is still able to flow back to the elements that were previously popped from the stack and their respective precedents.", "Hence, we are also effectively storing all the intermediate stack top elements only once.", "Besides, Bowman et al.", "(2016) did not attempt to eliminate the conditional branches in the StackLSTM algorithm, which is the main algorithmic contribution of this work.", "Conclusion We propose a parallelizable version of StackLSTM that is able to fully exploit GPU parallelism by performing minibatched training.", "Empirical results show that our parallelization scheme yields comparable performance to previous work, and our method scales up nearly linearly with increasing batch size.", "Because our parallelization scheme is based on the observation made in Section 1, we cannot yet efficiently incorporate batching for either the Arc-Standard transition system or the token-level composition function proposed in Dyer et al.", "(2015) .", "We leave the parallelization of these architectures to future work.", "Our parallelization scheme makes it feasible to run large-data experiments for various tasks that require large training data to perform well, such as RNNG-based syntax-aware neural machine translation (Eriguchi et al., 2017) ." ] }
{ "paper_header_number": [ "1", "2", "3", "4.1", "4.2", "5", "6" ], "paper_header_content": [ "Introduction", "StackLSTM", "Parallelizable StackLSTM", "Setup", "Results", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-71#paper-1160#slide-4
StackLSTM
An LSTM whose states are stored in a stack. Computation is conditioned on the stack operation.
An LSTM whose states are stored in a stack. Computation is conditioned on the stack operation.
[]
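For contrast with the batched formulation sketched earlier, a naive single-example StackLSTM step looks like the snippet below; the branch on the discrete operation is exactly the conditioning the slide refers to, and it is what prevents straightforward minibatching. cell and zero_state are assumed placeholders.

def stacklstm_step(cell, x_t, stack, op, zero_state):
    if op == "push":
        h_prev = stack[-1] if stack else zero_state
        stack.append(cell(x_t, h_prev))  # compute a new state from the stack top
    elif op == "pop":
        stack.pop()                      # only the stack top moves back
    return stack[-1] if stack else zero_state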
GEM-SciDuet-train-71#paper-1160#slide-7
GEM-SciDuet-train-71#paper-1160#slide-7
Benchmark
Transition-based dependency parsing on Stanford Dependency Treebank. PyTorch, single K80 GPU.
Transition-based dependency parsing on Stanford Dependency Treebank. PyTorch, single K80 GPU.
[]
GEM-SciDuet-train-71#paper-1160#slide-8
GEM-SciDuet-train-71#paper-1160#slide-8
Hyperparameters
Largely following Dyer et al. (2015); Ballesteros et al. (2017). Adam w/ ReduceLROnPlateau and warmup. Arc-Hybrid w/o composition function. Modified dimensions (e.g. action embedding) perform better.
Largely following Dyer et al. (2015); Ballesteros et al. (2017). Adam w/ ReduceLROnPlateau and warmup. Arc-Hybrid w/o composition function. Modified dimensions (e.g. action embedding) perform better.
[]
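As a quick sanity check on the two speed figures quoted in the record above (a 99.92% parallelized proportion at batch size 64, and the 60.8x measured speed-up), Amdahl's Law predicts a speed-up consistent with the measurement:

```python
# Amdahl's Law: with parallel fraction p, the predicted speed-up at
# batch size n is S(n) = 1 / ((1 - p) + p / n).
p, n = 0.9992, 64
print(f"{1.0 / ((1.0 - p) + p / n):.1f}x")  # ~60.9x, close to the measured 60.8x
```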
GEM-SciDuet-train-71#paper-1160#slide-11
1160
Parallelizable Stack Long Short-Term Memory
Stack Long Short-Term Memory (StackLSTM) is useful for various applications such as parsing and string-to-tree neural machine translation, but it is also known to be notoriously difficult to parallelize for GPU training due to the fact that the computations are dependent on discrete operations. In this paper, we tackle this problem by utilizing state access patterns of StackLSTM to homogenize computations with regard to different discrete operations. Our parsing experiments show that the method scales up almost linearly with increasing batch size, and our parallelized PyTorch implementation trains significantly faster compared to the DyNet C++ implementation.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91 ], "paper_content_text": [ "Introduction Tree-structured representation of language has been successfully applied to various applications including dependency parsing (Dyer et al., 2015) , sentiment analysis (Socher et al., 2011) and neural machine translation (Eriguchi et al., 2017) .", "However, most of the neural network architectures used to build tree-structured representations are not able to exploit full parallelism of GPUs by minibatched training, as the computation that happens for each instance is conditioned on the input/output structures, and hence cannot be naïvely grouped together as a batch.", "This lack of parallelism is one of the major hurdles that prevent these representations from wider adoption practically (e.g., neural machine translation), as many natural language processing tasks currently require the ability to scale up to very large training corpora in order to reach state-of-theart performance.", "We seek to advance the state-of-the-art of this problem by proposing a parallelization scheme for one such network architecture, the Stack Long Short-Term Memory (StackLSTM) proposed in Dyer et al.", "(2015) .", "This architecture has been successfully applied to dependency parsing (Dyer et al., 2015 (Dyer et al., , 2016 Ballesteros et al., 2017) and syntax-aware neural machine translation (Eriguchi et al., 2017) in the previous research literature, but none of these research results were produced with minibatched training.", "We show that our parallelization scheme is feasible in practice by showing that it scales up near-linearly with increasing batch size, while reproducing a set of results reported in (Ballesteros et al., 2017) .", "StackLSTM StackLSTM (Dyer et al., 2015) is an LSTM architecture (Hochreiter and Schmidhuber, 1997) augmented with a stack H that stores some of the hidden states built in the past.", "Unlike traditional LSTMs that always build state h t from h t−1 , the states of StackLSTM are built from the head of the state stack H, maintained by a stack top pointer p(H).", "At each time step, StackLSTM takes a realvalued input vector together with an additional discrete operation on the stack, which determines what computation needs to be conducted and how the stack top pointer should be updated.", "Throughout this section, we index the input vector (e.g.", "word embeddings) x t using the time step t it is fed into the network, and hidden states in the stack h j using their position j in the stack H, j being defined as the 0-base index starting from the stack bottom.", "The set of input discrete actions typically contains at least Push and Pop operations.", "When these operations are taken as input, the corresponding computations on the StackLSTM are listed below: 1 (Dyer et al., 2015) .", "S0 and B0 refers to the token-level representation corresponding to the top element of the stack and buffer, while S1 and B1 refers to those that are second to the top.", "We use a different notation here to avoid confusion with the states in StackLSTM, which represent non-local information beyond token-level.", "Reflecting on the aforementioned discussion on parallelism, one should notice 
that StackLSTM falls into the category of neural network architectures that is difficult to perform minibatched training.", "This is caused by the fact that the computation performed by StackLSTM at each time step is dependent on the discrete input actions.", "The following section proposes a solution to this problem.", "Parallelizable StackLSTM Continuing the formulation in the previous section, we will start by discussing our proposed solution under the case where the set of discrete actions contains only Push and Pop operations; we then move on to discussion of the applicability of our proposed solution to the transition systems that are used for building representations for dependency trees.", "The first modification we perform to the Push and Pop operations above is to unify the pointer update of these operations as p(H) ← p(H) + op, where op is the input discrete operation that could either take the value +1 or -1 for Push and Pop operation.", "After this modification, we came to the following observations: Now, what remains to homogenize Push and Pop operations is conducting the extra computations needed for Push operation when Pop is fed in as well, while guaranteeing the correctness of the resulting hidden state both in the current time step and in the future.", "The next observation points out a way for this guarantee: Observation 2 In a StackLSTM, given the current stack top pointer position p(H), any hidden state h i where i > p(H) will not be read until it is overwritten by a Push operation.", "What follows from this observation is the guarantee that we can always safely overwrite hidden states h i that are indexed higher than the current stack top pointer, because it is known that any read operation on these states will happen after another overwrite.", "This allows us to do the extra computation anyway when Pop operation is fed, because the extra computation, especially updating h p(H)+1 , will not harm the validity of the hidden states at any time step.", "Algorithm 1 gives the final forward computation for the Parallelizable StackLSTM.", "Note that this algorithm does not contain any if-statements that depends on stack operations and hence is homogeneous when grouped into batches that are consisted of multiple operations trajectories.", "In transition systems (Nivre, 2008; Kuhlmann et al., 2011) used in real tasks (e.g., transition-based parsing) as shown in Table 1 , it should be noted that more than push and pop operations are needed for the StackLSTM.", "Fortunately, for Arc-Eager and h_prev ← h p(H) ; h ← LSTM(x t , h_prev); h p(H)+1 ← h; p(H) ← p(H) + op; return h p(H) ; Arc-Hybrid transition systems, we can simply add a hold operation, which is denoted by value 0 for the discrete operation input.", "For that reason, we will focus on parallelization of these two transition systems for this paper.", "It should be noted that both observations discussed above are still valid after adding the hold operation.", "Experiments Setup We implemented 2 the architecture described above in PyTorch (Paszke et al., 2017) .", "We implemented the batched stack as a float tensor wrapped in a non-leaf variable, thus enabling in-place operations on that variable.", "At each time step, the batched stack is queried/updated with a batch of stack head positions represented by an integer vector, an operation made possible by gather operation and advanced indexing.", "Due to this implementation choice, the stack size has to be determined at initialization time and cannot be dynamically 
grown.", "Nonetheless, a fixed stack size of 150 works for all the experiments we conducted.", "We use the dependency parsing task to evaluate the correctness and the scalability of our method.", "a test set accuracy of 97.47%.", "We use exactly the same pre-trained English word embedding as Dyer et al.", "(2015) .", "We use Adam (Kingma and Ba, 2014) as the optimization algorithm.", "Following Goyal et al.", "(2017) , we apply linear warmup to the learning rate with an initial value of τ = 5 × 10 −4 and total epoch number of 5.", "The target learning rate is set by τ multiplied by batch size, but capped at 0.02 because we find Adam to be unstable beyond that learning rate.", "After warmup, we reduce the learning rate by half every time there is no improvement for loss value on the development set (ReduceLROnPlateau).", "We clip all the gradient norms to 5.0 and apply a L 2 -regularization with weight 1 × 10 −6 .", "We started with the hyper-parameter choices in Dyer et al.", "(2015) but made some modifications based on the performance on the development set: we use hidden dimension 200 for all the LSTM units, 200 for the parser state representation before the final softmax layer, and embedding dimension 48 for the action embedding.", "We use Tesla K80 for all the experiments, in order to compare with Neubig et al.", "(2017b); Dyer et al.", "(2015) .", "We also use the same hyper-parameter setting as Dyer et al.", "(2015) for speed comparison experiments.", "All the speeds are measured by running through one training epoch and averaging.", "close to linear, which means there is very little overhead associated with our batching scheme.", "Quantitatively, according to Amdahl's Law (Amdahl, 1967) , the proportion of parallelized computations is 99.92% at batch size 64.", "We also compared our implementation with the implementation that comes with Dyer et al.", "(2015) , which is implemented in C++ with DyNet (Neubig et al., 2017a) .", "DyNet is known to be very optimized for CPU computations and hence their implementation is reasonably fast even without batching and GPU acceleration, as shown in Figure 1 .", "4 But we would like to point out that we focus on the speed-up we are able to obtain rather than the absolute speed, and that our batching scheme is framework-universal and superior speed might be obtained by combining our scheme with alternative frameworks or languages (for example, the torch C++ interface).", "The dependency parsing results are shown in Table 2.", "Our implementation is able to yield better test set performance than that reported in Ballesteros et al.", "(2017) for all batch size configurations except 256, where we observe a modest performance loss.", "Like Goyal et al.", "(2017) ; Keskar et al.", "(2016) ; Masters and Luschi (2018) , we initially observed more significant test-time performance deterioration (around 1% absolute difference) for models trained without learning rate warmup, and concurring with the findings in Goyal et al.", "(2017) , we find warmup very helpful for stabilizing largebatch training.", "We did not run experiments with batch size below 8 as they are too slow due to 4 Measured on one core of an Intel Xeon E7-4830 CPU.", "Results Python's inherent performance issue.", "Related Work DyNet has support for automatic minibatching (Neubig et al., 2017b) , which figures out what computation is able to be batched by traversing the computation graph to find homogeneous computations.", "While we cannot directly compare with that framework's automatic 
batching solution for StackLSTM 5 , we can draw a loose comparison to the results reported in that paper for BiLSTM transition-based parsing (Kiperwasser and Goldberg, 2016) .", "Comparing batch size of 64 to batch size of 1, they obtained a 3.64x speed-up on CPU and 2.73x speed-up on Tesla K80 GPU, while our architecture-specific manual batching scheme obtained 60.8x speed-up.", "The main reason for this difference is that their graph-traversing automatic batching scheme carries a much larger overhead compared to our manual batching approach.", "Another toolkit that supports automatic minibatching is Matchbox 6 , which operates by analyzing the single-instance model definition and deterministically convert the operations into their minibatched counterparts.", "While such mechanism eliminated the need to traverse the whole computation graph, it cannot homogenize the operations in each branch of if.", "Instead, it needs to perform each operation separately and apply masking on the result, while our method does not require any masking.", "Unfortunately we are also not able to compare with the toolkit at the time of this work as it lacks support for several operations we need.", "Similar to the spirit of our work, Bowman et al.", "(2016) attempted to parallelize StackLSTM by using Thin-stack, a data structure that reduces the space complexity by storing all the intermediate stack top elements in a tensor and use a queue to control element access.", "However, thanks to Py-Torch, our implementation is not directly dependent on the notion of Thin-stack.", "Instead, when an element is popped from the stack, we simply shift the stack top pointer and potentially re-write the corresponding sub-tensor later.", "In other words, there is no need for us to directly maintain all the intermediate stack top elements, because in PyTorch, when the element in the stack is re-written, its underlying sub-tensor will not be destructed as there are still nodes in the computation graph that point to it.", "Hence, when performing back-propagation, the gradient is still able to flow back to the elements that are previously popped from the stack and their respective precedents.", "Hence, we are also effectively storing all the intermediate stack top elements only once.", "Besides, Bowman et al.", "(2016) didn't attempt to eliminate the conditional branches in the StackLSTM algorithm, which is the main algorithmic contribution of this work.", "Conclusion We propose a parallelizable version of StackLSTM that is able to fully exploit the GPU parallelism by performing minibatched training.", "Empirical results show that our parallelization scheme yields comparable performance to previous work, and our method scales up very linearly with the increasing batch size.", "Because our parallelization scheme is based on the observation made in section 1, we cannot incorporate batching for neither Arc-Standard transition system nor the token-level composition function proposed in Dyer et al.", "(2015) efficiently yet.", "We leave the parallelization of these architectures to future work.", "Our parallelization scheme makes it feasible to run large-data experiments for various tasks that requires large training data to perform well, such as RNNG-based syntax-aware neural machine translation (Eriguchi et al., 2017) ." ] }
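The branch-free forward step of Algorithm 1, combined with the batched stack described in the record above (a fixed-size float tensor indexed by a per-example stack-top pointer via gather/advanced indexing), can be sketched roughly as follows. This is an illustration under simplifying assumptions: LSTM cell states are omitted, shapes are arbitrary, and it is not the authors' released implementation.

```python
# Sketch of a minibatched, branch-free StackLSTM step: op is -1/0/+1 for
# pop/hold/push; the same computation runs for every element of the batch.
import torch

B, D, STACK_SIZE = 8, 200, 150            # batch size, hidden dim, fixed stack
cell = torch.nn.LSTMCell(D, D)            # cell state omitted for brevity
stack = torch.zeros(B, STACK_SIZE, D)     # batched stack of hidden states
ptr = torch.zeros(B, dtype=torch.long)    # per-example stack-top pointer p(H)
rows = torch.arange(B)

def step(x: torch.Tensor, op: torch.Tensor) -> torch.Tensor:
    """x: (B, D) input vectors; op: (B,) stack operations in {-1, 0, +1}."""
    global ptr
    h_prev = stack[rows, ptr]                          # h_prev <- h_{p(H)}
    h, _ = cell(x, (h_prev, torch.zeros(B, D)))        # h <- LSTM(x_t, h_prev)
    stack[rows, ptr + 1] = h                           # h_{p(H)+1} <- h, always
    ptr = ptr + op                                     # safe by Observation 2
    return stack[rows, ptr]                            # return h_{p(H)}

# One step over a batch mixing push, hold, and pop (pop only after a push):
_ = step(torch.randn(B, D), torch.ones(B, dtype=torch.long))   # all push
out = step(torch.randn(B, D), torch.tensor([1, 0, -1, 1, 0, -1, 1, 0]))
```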
{ "paper_header_number": [ "1", "2", "3", "4.1", "4.2", "5", "6" ], "paper_header_content": [ "Introduction", "StackLSTM", "Parallelizable StackLSTM", "Setup", "Results", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-71#paper-1160#slide-11
Conclusion
We propose a parallelization scheme for StackLSTM architecture. Together with a different optimizer, we are able to train parsers of comparable performance within 1 hour.
We propose a parallelization scheme for StackLSTM architecture. Together with a different optimizer, we are able to train parsers of comparable performance within 1 hour.
[]
GEM-SciDuet-train-72#paper-1163#slide-0
1163
Probabilistic FastText for Multi-Sense Word Embeddings
We introduce Probabilistic FastText, a new model for word embeddings that can capture multiple word senses, sub-word structure, and uncertainty information. In particular, we represent each word with a Gaussian mixture density, where the mean of a mixture component is given by the sum of n-grams. This representation allows the model to share statistical strength across sub-word structures (e.g. Latin roots), producing accurate representations of rare, misspelt, or even unseen words. Moreover, each component of the mixture can capture a different word sense. Probabilistic FastText outperforms both FASTTEXT, which has no probabilistic model, and dictionary-level probabilistic embeddings, which do not incorporate subword structures, on several word-similarity benchmarks, including English RareWord and foreign language datasets. We also achieve state-of-the-art performance on benchmarks that measure the ability to discern different meanings. Thus, the proposed model is the first to achieve multi-sense representations while having enriched semantics on rare words.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191 ], "paper_content_text": [ "Introduction Word embeddings are foundational to natural language processing.", "In order to model language, we need word representations to contain as much semantic information as possible.", "Most research has focused on vector word embeddings, such as WORD2VEC (Mikolov et al., 2013a) , where words with similar meanings are mapped to nearby points in a vector space.", "Following the * Work done partly during internship at Amazon.", "seminal work of Mikolov et al.", "(2013a) , there have been numerous works looking to learn efficient word embeddings.", "One shortcoming with the above approaches to word embedding that are based on a predefined dictionary (termed as dictionary-based embeddings) is their inability to learn representations of rare words.", "To overcome this limitation, character-level word embeddings have been proposed.", "FASTTEXT (Bojanowski et al., 2016) is the state-of-the-art character-level approach to embeddings.", "In FASTTEXT, each word is modeled by a sum of vectors, with each vector representing an n-gram.", "The benefit of this approach is that the training process can then share strength across words composed of common roots.", "For example, with individual representations for \"circum\" and \"navigation\", we can construct an informative representation for \"circumnavigation\", which would otherwise appear too infrequently to learn a dictionary-level embedding.", "In addition to effectively modelling rare words, character-level embeddings can also represent slang or misspelled words, such as \"dogz\", and can share strength across different languages that share roots, e.g.", "Romance languages share latent roots.", "A different promising direction involves representing words with probability distributions, instead of point vectors.", "For example, Vilnis and McCallum (2014) represents words with Gaussian distributions, which can capture uncertainty information.", "Athiwaratkun and Wilson (2017) generalizes this approach to multimodal probability distributions, which can naturally represent words with different meanings.", "For example, the distribution for \"rock\" could have mass near the word \"jazz\" and \"pop\", but also \"stone\" and \"basalt\".", "Athiwaratkun and Wilson (2018) further developed this approach to learn hierarchical word representations: for example, the word \"music\" can be learned to have a broad distribution, which encapsulates the distributions for \"jazz\" and \"rock\".", "In this paper, we propose Probabilistic Fast-Text (PFT), which provides probabilistic characterlevel representations of words.", "The resulting word embeddings are highly expressive, 
yet straightforward and interpretable, with simple, efficient, and intuitive training procedures.", "PFT can model rare words, uncertainty information, hierarchical representations, and multiple word senses.", "In particular, we represent each word with a Gaussian or a Gaussian mixture density, which we name PFT-G and PFT-GM respectively.", "Each component of the mixture can represent different word senses, and the mean vectors of each component decompose into vectors of n-grams, to capture character-level information.", "We also derive an efficient energybased max-margin training procedure for PFT.", "We perform comparison with FASTTEXT as well as existing density word embeddings W2G (Gaussian) and W2GM (Gaussian mixture).", "Our models extract high-quality semantics based on multiple word-similarity benchmarks, including the rare word dataset.", "We obtain an average weighted improvement of 3.7% over FASTTEXT (Bojanowski et al., 2016) and 3.1% over the dictionary-level density-based models.", "We also observe meaningful nearest neighbors, particularly in the multimodal density case, where each mode captures a distinct meaning.", "Our models are also directly portable to foreign languages without any hyperparameter modification, where we observe strong performance, outperforming FAST-TEXT on many foreign word similarity datasets.", "Our multimodal word representation can also disentangle meanings, and is able to separate different senses in foreign polysemies.", "In particular, our models attain state-of-the-art performance on SCWS, a benchmark to measure the ability to separate different word meanings, achieving 1.0% improvement over a recent density embedding model W2GM (Athiwaratkun and Wilson, 2017) .", "To the best of our knowledge, we are the first to develop multi-sense embeddings with high semantic quality for rare words.", "Our code and embeddings are publicly available.", "1 Related Work Early word embeddings which capture semantic information include Bengio et al.", "(2003) , Col-1 https://github.com/benathi/multisense-prob-fasttext lobert and Weston (2008 ), and Mikolov et al.", "(2011 ).", "Later, Mikolov et al.", "(2013a developed the popular WORD2VEC method, which proposes a log-linear model and negative sampling approach that efficiently extracts rich semantics from text.", "Another popular approach GLOVE learns word embeddings by factorizing co-occurrence matrices (Pennington et al., 2014) .", "Recently there has been a surge of interest in making dictionary-based word embeddings more flexible.", "This flexibility has valuable applications in many end-tasks such as language modeling (Kim et al., 2016) , named entity recognition (Kuru et al., 2016) , and machine translation (Zhao and Zhang, 2016; Lee et al., 2017) , where unseen words are frequent and proper handling of these words can greatly improve the performance.", "These works focus on modeling subword information in neural networks for tasks such as language modeling.", "Besides vector embeddings, there is recent work on multi-prototype embeddings where each word is represented by multiple vectors.", "The learning approach involves using a cluster centroid of context vectors (Huang et al., 2012) , or adapting the skip-gram model to learn multiple latent representations (Tian et al., 2014) .", "Neelakantan et al.", "(2014) furthers adapts skip-gram with a non-parametric approach to learn the embeddings with an arbitrary number of senses per word.", "incorporates an external dataset WORDNET to learn sense vectors.", 
"We compare these models with our multimodal embeddings in Section 4.", "Probabilistic FastText We introduce Probabilistic FastText, which combines a probabilistic word representation with the ability to capture subword structure.", "We describe the probabilistic subword representation in Section 3.1.", "We then describe the similarity measure and the loss function used to train the embeddings in Sections 3.2 and 3.3.", "We conclude by briefly presenting a simplified version of the energy function for isotropic Gaussian representations (Section 3.4), and the negative sampling scheme we use in training (Section 3.5).", "Probabilistic Subword Representation We represent each word with a Gaussian mixture with K Gaussian components.", "That is, a word w is associated with a density function f ( x) = K i=1 p w,i N (x; µ w,i , Σ w,i ) where {µ w,i } K k=1 are the mean vectors and {Σ w,i } are the covariance matrices, and {p w,i } K k=1 are the component probabilities which sum to 1.", "The mean vectors of Gaussian components hold much of the semantic information in density embeddings.", "While these models are successful based on word similarity and entailment benchmarks (Vilnis and McCallum, 2014; Athiwaratkun and Wilson, 2017) , the mean vectors are often dictionary-level, which can lead to poor semantic estimates for rare words, or the inability to handle words outside the training corpus.", "We propose using subword structures to estimate the mean vectors.", "We outline the formulation below.", "For word w, we estimate the mean vector µ w with the average over n-gram vectors and its dictionary-level vector.", "That is, µ w = 1 |N G w | + 1   v w + g∈N Gw z g   (1) where z g is a vector associated with an n-gram g, v w is the dictionary representation of word w, and N G w is a set of n-grams of word w. 
Examples of 3,4-grams for a word \"beautiful\", including the beginning-of-word character ' ' and end-of-word character ' ', are: • 3-grams: be, bea, eau, aut, uti, tif, ful, ul • 4-grams: bea, beau .., iful ,ful This structure is similar to that of FASTTEXT (Bojanowski et al., 2016) ; however, we note that FASTTEXT uses single-prototype deterministic embeddings as well as a training approach that maximizes the negative log-likelihood, whereas we use a multi-prototype probabilistic embedding and for training we maximize the similarity between the words' probability densities, as described in Sections 3.2 and 3.3 Figure 1a depicts the subword structure for the mean vector.", "Figure 1b and 1c depict our models, Gaussian probabilistic FASTTEXT (PFT-G) and Gaussian mixture probabilistic FASTTEXT (PFT-GM).", "In the Gaussian case, we represent each mean vector with a subword estimation.", "For the Gaussian mixture case, we represent one Gaussian component's mean vector with the subword structure whereas other components' mean vectors are dictionary-based.", "This model choice to use dictionary-based mean vectors for other components is to reduce to constraint imposed by the subword structure and promote independence for meaning discovery.", "Similarity Measure between Words Traditionally, if words are represented by vectors, a common similarity metric is a dot product.", "In the case where words are represented by distribution functions, we use the generalized dot product in Hilbert space ·, · L 2 , which is called the expected likelihood kernel (Jebara et al., 2004) .", "We define the energy E(f, g) between two words f and g to be E(f, g) = log f, g L 2 = log f (x)g(x) dx.", "With Gaussian mixtures f (x) = K i=1 p i N (x; µ f,i , Σ f,i ) and g(x) = K i=1 q i N (x; µ g,i , Σ g,i ), K i=1 p i = 1, and K i=1 q i = 1 , the energy has a closed form: E(f, g) = log K j=1 K i=1 p i q j e ξ i,j (2) where ξ j,j is the partial energy which corresponds to the similarity between component i of the first word f and component j of the second word g. 2 ξ i,j ≡ log N (0; µ f,i − µ g,j , Σ f,i + Σ g,j ) = − 1 2 log det(Σ f,i + Σ g,j ) − D 2 log(2π) − 1 2 ( µ f,i − µ g,j ) (Σ f,i + Σ g,j ) −1 ( µ f,i − µ g,j ) (3) Figure 2 demonstrates the partial energies among the Gaussian components of two words.", "Interaction between GM components rock:0 pop:0 pop:1 rock:1 ⇠ 0,1 ⇠ 0,0 ⇠ 1,1 ⇠ 1, Loss Function The model parameters that we seek to learn are v w for each word w and z g for each n-gram g. We train the model by pushing the energy of a true context pair w and c to be higher than the negative context pair w and n by a margin m. 
We use Adagrad (Duchi et al., 2011) to minimize the following loss to achieve this outcome: L(f, g) = max [0, m − E(f, g) + E(f, n)] .", "(4) We describe how to sample words as well as its positive and negative contexts in Section 3.5.", "This loss function together with the Gaussian mixture model with K > 1 has the ability to extract multiple senses of words.", "That is, for a word with multiple meanings, we can observe each mode to represent a distinct meaning.", "For instance, one density mode of \"star\" is close to the densities of \"celebrity\" and \"hollywood\" whereas another mode of \"star\" is near the densities of \"constellation\" and \"galaxy\".", "Energy Simplification In theory, it can be beneficial to have covariance matrices as learnable parameters.", "In practice, Athiwaratkun and Wilson (2017) observe that spherical covariances often perform on par with diagonal covariances with much less computational resources.", "Using spherical covariances for each component, we can further simplify the energy function as follows: ξ i,j = − α 2 · ||µ f,i − µ g,j || 2 , (5) where the hyperparameter α is the scale of the inverse covariance term in Equation 3.", "We note that Equation 5 is equivalent to Equation 3 up to an additive constant given that the covariance matrices are spherical and the same for all components.", "Word Sampling To generate a context word c of a given word w, we pick a nearby word within a context window of a fixed length .", "We also use a word sampling technique similar to Mikolov et al.", "(2013b) .", "This subsampling procedure selects words for training with lower probabilities if they appear frequently.", "This technique has an effect of reducing the importance of words such as 'the', 'a', 'to' which can be predominant in a text corpus but are not as meaningful as other less frequent words such as 'city', 'capital', 'animal', etc.", "In particular, word w has probability P (w) = 1 − t/f (w) where f (w) is the frequency of word w in the corpus and t is the frequency threshold.", "A negative context word is selected using a distribution P n (w) ∝ U (w) 3/4 where U (w) is a unigram probability of word w. 
The exponent 3/4 also diminishes the importance of frequent words and shifts the training focus to other less frequent words.", "Experiments We have proposed a probabilistic FASTTEXT model which combines the flexibility of subword structure with the density embedding approach.", "In this section, we show that our probabilistic representation with subword mean vectors and the simplified energy function outperforms many word similarity baselines and provides disentangled meanings for polysemies.", "First, we describe the training details in Section 4.1.", "We provide a qualitative evaluation in Section 4.2, showing meaningful nearest neighbors for the Gaussian embeddings, as well as the ability to capture multiple meanings by Gaussian mixtures.", "Our quantitative evaluation in Section 4.3 demonstrates strong performance against the baseline models FASTTEXT (Bojanowski et al., 2016) and the dictionary-level Gaussian (W2G) (Vilnis and McCallum, 2014) and Gaussian mixture (W2GM) (Athiwaratkun and Wilson, 2017) embeddings.", "We train our models on foreign language corpora and show competitive results on foreign word similarity benchmarks in Section 4.4.", "Finally, we explain the importance of the n-gram structures for semantic sharing in Section 4.5.", "Training Details We train our models on both English and foreign language datasets.", "For English, we use the concatenation of UKWAC and WACKYPEDIA (Baroni et al., 2009) , which consists of 3.376 billion words.", "We filter out word types that occur fewer than 5 times, which results in a vocabulary size of 2,677,466.", "For foreign languages, we demonstrate the training of our model on French, German, and Italian text corpora.", "We note that our model should be applicable to other languages as well.", "We use the FRWAC (French), DEWAC (German), and ITWAC (Italian) datasets (Baroni et al., 2009 ) as text corpora, consisting of 1.634, 1.716 and 1.955 billion words respectively.", "We use the same threshold, filtering out words that occur less than 5 times in each corpus.", "We have dictionary sizes of 1.3, 2.7, and 1.4 million words for FRWAC, DEWAC, and ITWAC.", "We adjust the hyperparameters on the English corpus and use them for the foreign languages.", "Note that the adjustable parameters for our models are the loss margin m in Equation 4 and the scale α in Equation 5.", "We search for the optimal hyperparameters in a grid m ∈ {0.01, 0.1, 1, 10, 100} and α ∈ {1/(5×10^{-3}), 1/(10^{-3}), 1/(2×10^{-4}), 1/(1×10^{-4})} on our English corpus.", "The hyperparameter α affects the scale of the loss function; therefore, we adjust the learning rate appropriately for each α.", "In particular, the learning rates used are γ = {10^{-4}, 10^{-5}, 10^{-6}} for the respective α values.", "Other fixed hyperparameters include the number of Gaussian components K = 2, the context window length ℓ = 10 and the subsampling threshold t = 10^{-5}.", "Similar to the setup in FASTTEXT, we use n-grams with n = 3, 4, 5, 6 to estimate the mean vectors.", "Qualitative Evaluation - Nearest neighbors We show that our embeddings learn the word semantics well by demonstrating meaningful nearest neighbors.", "Table 1 shows the nearest neighbors of polysemous words such as rock, star, and cell.", "We note that subword embeddings prefer words with overlapping characters as nearest neighbors.", "For instance, \"rock-y\", \"rockn\", and \"rock-\" are all close to the word \"rock\".", "For the purpose of demonstration, we only show words with meaningful 
variations, and omit words with small character-based variations as previously mentioned.", "However, all words shown are in the top-100 nearest words.", "We observe the separation in meanings for the multi-component case; for instance, one component of the word \"bank\" corresponds to a financial bank whereas the other component corresponds to a river bank.", "The single-component case also has interesting behavior.", "We observe that the subword embeddings of polysemous words can represent both meanings.", "For instance, both \"lava-rock\" and \"rock-pop\" are among the closest words to \"rock\".", "[Table 1. Nearest neighbors of polysemous words. Top sub-table (multimodal, word:component): rock:0 → rocks:0, rocky:0, mudrock:0, rockscape:0, boulders:0, outcrops:0; rock:1 → punk:0, punk-rock:0, indie:0, pop-rock:0, pop-punk:0, indie-rock:0, band:1; bank:0 → banks:0, banker:0, bankers:0, bankcard:0, Citibank:0, debits:0; bank:1 → banks:1, river:0, riverbank:0, embanking:0, confluence:1; star:0 → stars:0, stellar:0, nebula:0, starspot:0, stellas:0, constellation:1; star:1 → stars:1, star-star:0, 5-stars:0, movie-star:0, mega-star:0, super-star:0; cell:0 → cellular:0, acellular:0, lymphocytes:0, T-cells:0, cytes:0, leukocytes:0; cell:1 → cells:1, cellular:0, cellular-phone:0, cellphone:0, transcellular:0; left:0 → right:1, left-hand:0, right-left:0, left-right-left:0, right-hand:0, leftwards:0; left:1 → leaving:0, leavings:0, remained:0, leave:1, leaving-age:0, sadly-departed:0. Bottom sub-table (single prototype; row boundaries partially garbled in extraction): rock → rock-y, rockn, rock-, rock-funk, lava-rock, nu-rock, rock-pop, rock/ice, coral-rock; bank → bank-account, banky, bank-to-bank, banking, bank/cash; star → movie-stars, star-planet, starsailor, starsign; cell → cell/tumour; left → left/joined, leaving, right, leftsided, lefted, leftside.]", "Word Similarity Evaluation We evaluate our embeddings on several standard word similarity datasets, namely, SL-999 (Hill et al., 2014) , WS-353 (Finkelstein et al., 2002) , MEN-3k (Bruni et al., 2014) , MC-30 (Miller and Charles, 1991) , RG-65 (Rubenstein and Goodenough, 1965) , YP-130 (Yang and Powers, 2006) , MTurk(-287,-771) (Radinsky et al., 2011; Halawi et al., 2012) , and RW-2k (Luong et al., 2013) .", "Each dataset contains a list of word pairs with a human score of how related or similar the two words are.", "We use the notation DATASET-NUM to denote the number of word pairs NUM in each evaluation set.", "We note that the dataset RW focuses more on infrequent words and SimLex-999 focuses on the similarity of words rather than relatedness.", "We also compare PFT-GM with other multi-prototype embeddings in the literature using SCWS (Huang et al., 2012) , a word similarity dataset that aims to measure the ability of embeddings to discern multiple meanings.", "We calculate the Spearman correlation (Spearman, 1904) between the labels and our scores generated by the embeddings.", "The Spearman correlation is a rank-based correlation measure that assesses how well the scores describe the true labels.", "The scores we use are cosine-similarity scores between the mean vectors.", "In the case of Gaussian mixtures, we use the pairwise maximum score: s(f, g) = max_{i ∈ 1..K} max_{j ∈ 1..K} (µ_{f,i} · µ_{g,j}) / (||µ_{f,i}|| · ||µ_{g,j}||)  (6).", "The pair (i, j) that achieves the maximum cosine similarity corresponds to the Gaussian component pair that is the closest in meaning.", "Therefore, this similarity score yields the most related senses 
of a given word pair.", "This score reduces to a cosine similarity in the Gaussian case (K = 1).", "Comparison Against Dictionary-Level Density Embeddings and FASTTEXT We compare our models against the dictionary-level Gaussian and Gaussian mixture embeddings in Table 2 , with 50-dimensional and 300-dimensional mean vectors.", "The 50-dimensional results for W2G and W2GM are obtained directly from Athiwaratkun and Wilson (2017) .", "For comparison, we use the public code 3 to train the 300-dimensional W2G and W2GM models and the publicly available FASTTEXT model 4 .", "We calculate Spearman's correlations for each of the word similarity datasets.", "These datasets vary greatly in the number of word pairs; therefore, we mark each dataset with its size for visibility.", "For a fair and objective comparison, we calculate a weighted average of the correlation scores for each model.", "Our PFT-GM achieves the highest average score among all competing models, outperforming both FASTTEXT and the dictionary-level embeddings W2G and W2GM.", "Our unimodal model PFT-G also outperforms the dictionary-level counterpart W2G and FASTTEXT.", "We note that the model W2GM appears quite strong according to Table 2 , beating PFT-GM on many word similarity datasets.", "However, the datasets on which W2GM performs better than PFT-GM often have small sizes, such as MC-30 or RG-65, where the Spearman's correlations are more subject to noise.", "Overall, PFT-GM outperforms W2GM by 3.1% and 8.7% in the 300- and 50-dimensional models.", "In addition, PFT-G and PFT-GM also outperform FASTTEXT by 1.2% and 3.7% respectively.", "Comparison Against Multi-Prototype Models In Table 3 , we compare 50- and 300-dimensional PFT-GM models against the multi-prototype embeddings described in Section 2 and the existing multimodal density embeddings W2GM.", "We use the word similarity dataset SCWS (Huang et al., 2012) , which contains words with potentially many meanings, and is a benchmark for distinguishing senses.", "We use the maximum similarity score (Equation 6), denoted as MAXSIM.", "AVESIM denotes the average of the similarity scores, rather than the maximum.", "We outperform the dictionary-based density embeddings W2GM in both 50 and 300 dimensions, demonstrating the benefits of subword information.", "Our model achieves state-of-the-art results, similar to those of Neelakantan et al.", "(2014) .", "Evaluation on Foreign Language Embeddings We evaluate the foreign-language embeddings on word similarity datasets in the respective languages.", "We use Italian WORDSIM353 and Italian SIMLEX-999 (Leviant and Reichart, 2015) for Italian models, GUR350 and GUR65 (Gurevych, 2005) for German models, and French WORDSIM353 (Finkelstein et al., 2002) for French models.", "For the datasets GUR350 and GUR65, we use the results reported in the FASTTEXT publication (Bojanowski et al., 2016) .", "For the other datasets, we train FASTTEXT models for comparison using the public code 5 on our text corpora.", "We also train the dictionary-level models W2G and W2GM for comparison.", "Table 4 shows the Spearman's correlation results of our models.", "We outperform FASTTEXT on many word similarity benchmarks.", "Our results are also significantly better than those of the dictionary-based models, W2G and W2GM.", "We hypothesize that W2G and W2GM could perform better than the currently reported results given proper pre-processing of words with special characters such as accents.", "We investigate the nearest neighbors of polysemies in foreign languages and also observe clear sense 
separation.", "For example, piano in Italian can mean \"floor\" or \"slow\".", "These two meanings are reflected in the nearest neighbors, where one component is close to piano-piano and pianod, which mean \"slowly\", whereas the other component is close to piani (floors), ristrutturazione (renovation) or infrastrutture (infrastructure).", "Table 5 shows additional results, demonstrating that the disentangled semantics can be observed in multiple languages.", "Qualitative Evaluation - Subword Decomposition One of the motivations for using subword information is the ability to handle out-of-vocabulary words.", "Another benefit is the ability to help improve the semantics of rare words via subword sharing.", "Due to the observation that text corpora follow Zipf's power law (Zipf, 1949) , words at the tail of the occurrence distribution appear much less frequently.", "Training these words to have a good semantic representation is challenging if done at the word level alone.", "However, an n-gram such as 'abnorm' is trained during both occurrences of \"abnormal\" and \"abnormality\" in the corpus, which further augments both words' semantics.", "Figure 3 shows the contribution of n-grams to the final representation.", "We filter to show only the n-grams with the top-5 and bottom-5 similarity scores.", "We observe that the final representations of both words align with the n-grams \"abno\", \"bnor\", \"abnorm\", \"anbnor\", \"<abn\".", "In fact, both \"abnormal\" and \"abnormality\" share the same top-5 n-grams.", "Due to the fact that many rare words such as \"autobiographer\", \"circumnavigations\", or \"hypersensitivity\" are composed from many common sub-words, the n-gram structure can help improve the representation quality.", "Numbers of Components It is possible to train our approach with K > 2 mixture components; however, Athiwaratkun and Wilson (2017) observe that dictionary-level Gaussian mixtures with K = 3 do not overall improve word similarity results, even though these mixtures can discover 3 distinct senses for certain words.", "Indeed, while K > 2 in principle allows for greater flexibility than K = 2, most words can be very flexibly modelled with a mixture of two Gaussians, making K = 2 a good balance between flexibility and Occam's razor.", "Even for words with single meanings, our PFT model with K = 2 often learns richer representations than a K = 1 model.", "For example, the two mixture components can learn to cluster together to form a more heavy-tailed unimodal distribution, which captures a word with one dominant meaning but with close relationships to a wide range of other words.", "In addition, we observe that our model with K components can capture more than K meanings.", "For instance, in the K = 1 model, the word pairs (\"cell\", \"jail\"), (\"cell\", \"biology\"), and (\"cell\", \"phone\") will all have positive similarity scores.", "In general, if a word has multiple meanings, these meanings are usually compressed into the linear substructure of the embeddings (Arora et al., 2016) .", "However, the pairs of non-dominant words often have lower similarity scores, which might not accurately reflect their true similarities.", "Conclusion and Future Work We have proposed models for probabilistic word representations equipped with flexible sub-word structures, suitable for rare and out-of-vocabulary words.", "The proposed probabilistic formulation incorporates uncertainty information and naturally allows one to uncover multiple meanings with 
multimodal density representations.", "Our models offer better semantic quality, outperforming competing models on word similarity benchmarks.", "Moreover, our multimodal density models can provide interpretable and disentangled representations, and are the first multi-prototype embeddings that can handle rare words.", "Future work includes an investigation into the trade-off between learning full covariance matrices for each word distribution, computational complexity, and performance.", "This direction can potentially have a great impact on tasks where the variance information is crucial, such as hierarchical modeling with probability distributions (Athiwaratkun and Wilson, 2018) .", "Other future work involves co-training PFT on many languages.", "Currently, existing work on multi-lingual embeddings aligns the word semantics on pre-trained vectors (Smith et al., 2017) , which can be suboptimal due to polysemies.", "We envision that the multi-prototype nature can help disambiguate words with multiple meanings and facilitate semantic alignment." ] }
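The subword representation of Equation (1) in the record above is easy to sketch: a word's mean vector averages its dictionary vector with the vectors of its character n-grams (n = 3..6, with '<' and '>' as boundary characters). Hash-bucketing the n-grams, and all sizes below, are illustrative assumptions in the style of FASTTEXT rather than details given in the text.

```python
# Minimal sketch of Equation (1): mu_w = (v_w + sum_g z_g) / (|NG_w| + 1).
import torch

D, BUCKETS, VOCAB = 300, 100_000, 50_000       # illustrative sizes
ngram_table = torch.nn.Embedding(BUCKETS, D)   # n-gram vectors z_g
word_table = torch.nn.Embedding(VOCAB, D)      # dictionary vectors v_w

def ngrams(word: str, n_min: int = 3, n_max: int = 6) -> list:
    w = f"<{word}>"                            # word-boundary characters
    return [w[i:i + n] for n in range(n_min, n_max + 1)
            for i in range(len(w) - n + 1)]

def mean_vector(word: str, word_id: int) -> torch.Tensor:
    grams = ngrams(word)
    ids = torch.tensor([hash(g) % BUCKETS for g in grams])  # hashed buckets
    z_sum = ngram_table(ids).sum(dim=0)        # sum of n-gram vectors z_g
    v_w = word_table(torch.tensor(word_id))    # dictionary vector v_w
    return (v_w + z_sum) / (len(grams) + 1)    # Equation (1)

print(ngrams("beautiful")[:5])  # ['<be', 'bea', 'eau', 'aut', 'uti']
```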
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "3.5", "4", "4.1", "4.2", "4.3", "4.3.1", "4.3.2", "4.4", "4.5", "5", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Probabilistic FastText", "Probabilistic Subword Representation", "Similarity Measure between Words", "Loss Function", "Energy Simplification", "Word Sampling", "Experiments", "Training Details", "Qualitative Evaluation -Nearest neighbors", "Word Similarity Evaluation", "Comparison Against Dictionary-Level Density Embeddings and FASTTEXT", "Comparison Against Multi-Prototype Models", "Evaluation on Foreign Language Embeddings", "Qualitative Evaluation -Subword Decomposition", "Numbers of Components", "Conclusion and Future Work" ] }
GEM-SciDuet-train-72#paper-1163#slide-0
2 min summary
Probabilistic FastText = FastText + Gaussian Mixture Embeddings
Probabilistic FastText = FastText + Gaussian Mixture Embeddings
[]
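At evaluation time, the records above score a word pair by the maximum pairwise cosine similarity between component means (Equation 6, the MAXSIM score); a minimal sketch:

```python
# Sketch of the MAXSIM score of Equation (6); reduces to plain cosine
# similarity when K = 1.
import torch

def maxsim(mu_f: torch.Tensor, mu_g: torch.Tensor) -> torch.Tensor:
    """mu_f, mu_g: (K, D) component mean vectors of the two words."""
    f = torch.nn.functional.normalize(mu_f, dim=-1)
    g = torch.nn.functional.normalize(mu_g, dim=-1)
    return (f @ g.T).max()          # best-matching sense pair (i, j)

print(maxsim(torch.randn(2, 300), torch.randn(2, 300)))
```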
GEM-SciDuet-train-72#paper-1163#slide-1
1163
Probabilistic FastText for Multi-Sense Word Embeddings
We introduce Probabilistic FastText, a new model for word embeddings that can capture multiple word senses, sub-word structure, and uncertainty information. In particular, we represent each word with a Gaussian mixture density, where the mean of a mixture component is given by the sum of n-grams. This representation allows the model to share statistical strength across sub-word structures (e.g. Latin roots), producing accurate representations of rare, misspelt, or even unseen words. Moreover, each component of the mixture can capture a different word sense. Probabilistic FastText outperforms both FASTTEXT, which has no probabilistic model, and dictionary-level probabilistic embeddings, which do not incorporate subword structures, on several word-similarity benchmarks, including English RareWord and foreign language datasets. We also achieve state-of-the-art performance on benchmarks that measure the ability to discern different meanings. Thus, the proposed model is the first to achieve multi-sense representations while having enriched semantics on rare words.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191 ], "paper_content_text": [ "Introduction Word embeddings are foundational to natural language processing.", "In order to model language, we need word representations to contain as much semantic information as possible.", "Most research has focused on vector word embeddings, such as WORD2VEC (Mikolov et al., 2013a) , where words with similar meanings are mapped to nearby points in a vector space.", "Following the * Work done partly during internship at Amazon.", "seminal work of Mikolov et al.", "(2013a) , there have been numerous works looking to learn efficient word embeddings.", "One shortcoming with the above approaches to word embedding that are based on a predefined dictionary (termed as dictionary-based embeddings) is their inability to learn representations of rare words.", "To overcome this limitation, character-level word embeddings have been proposed.", "FASTTEXT (Bojanowski et al., 2016) is the state-of-the-art character-level approach to embeddings.", "In FASTTEXT, each word is modeled by a sum of vectors, with each vector representing an n-gram.", "The benefit of this approach is that the training process can then share strength across words composed of common roots.", "For example, with individual representations for \"circum\" and \"navigation\", we can construct an informative representation for \"circumnavigation\", which would otherwise appear too infrequently to learn a dictionary-level embedding.", "In addition to effectively modelling rare words, character-level embeddings can also represent slang or misspelled words, such as \"dogz\", and can share strength across different languages that share roots, e.g.", "Romance languages share latent roots.", "A different promising direction involves representing words with probability distributions, instead of point vectors.", "For example, Vilnis and McCallum (2014) represents words with Gaussian distributions, which can capture uncertainty information.", "Athiwaratkun and Wilson (2017) generalizes this approach to multimodal probability distributions, which can naturally represent words with different meanings.", "For example, the distribution for \"rock\" could have mass near the word \"jazz\" and \"pop\", but also \"stone\" and \"basalt\".", "Athiwaratkun and Wilson (2018) further developed this approach to learn hierarchical word representations: for example, the word \"music\" can be learned to have a broad distribution, which encapsulates the distributions for \"jazz\" and \"rock\".", "In this paper, we propose Probabilistic Fast-Text (PFT), which provides probabilistic characterlevel representations of words.", "The resulting word embeddings are highly expressive, 
yet straightforward and interpretable, with simple, efficient, and intuitive training procedures.", "PFT can model rare words, uncertainty information, hierarchical representations, and multiple word senses.", "In particular, we represent each word with a Gaussian or a Gaussian mixture density, which we name PFT-G and PFT-GM respectively.", "Each component of the mixture can represent different word senses, and the mean vectors of each component decompose into vectors of n-grams, to capture character-level information.", "We also derive an efficient energybased max-margin training procedure for PFT.", "We perform comparison with FASTTEXT as well as existing density word embeddings W2G (Gaussian) and W2GM (Gaussian mixture).", "Our models extract high-quality semantics based on multiple word-similarity benchmarks, including the rare word dataset.", "We obtain an average weighted improvement of 3.7% over FASTTEXT (Bojanowski et al., 2016) and 3.1% over the dictionary-level density-based models.", "We also observe meaningful nearest neighbors, particularly in the multimodal density case, where each mode captures a distinct meaning.", "Our models are also directly portable to foreign languages without any hyperparameter modification, where we observe strong performance, outperforming FAST-TEXT on many foreign word similarity datasets.", "Our multimodal word representation can also disentangle meanings, and is able to separate different senses in foreign polysemies.", "In particular, our models attain state-of-the-art performance on SCWS, a benchmark to measure the ability to separate different word meanings, achieving 1.0% improvement over a recent density embedding model W2GM (Athiwaratkun and Wilson, 2017) .", "To the best of our knowledge, we are the first to develop multi-sense embeddings with high semantic quality for rare words.", "Our code and embeddings are publicly available.", "1 Related Work Early word embeddings which capture semantic information include Bengio et al.", "(2003) , Col-1 https://github.com/benathi/multisense-prob-fasttext lobert and Weston (2008 ), and Mikolov et al.", "(2011 ).", "Later, Mikolov et al.", "(2013a developed the popular WORD2VEC method, which proposes a log-linear model and negative sampling approach that efficiently extracts rich semantics from text.", "Another popular approach GLOVE learns word embeddings by factorizing co-occurrence matrices (Pennington et al., 2014) .", "Recently there has been a surge of interest in making dictionary-based word embeddings more flexible.", "This flexibility has valuable applications in many end-tasks such as language modeling (Kim et al., 2016) , named entity recognition (Kuru et al., 2016) , and machine translation (Zhao and Zhang, 2016; Lee et al., 2017) , where unseen words are frequent and proper handling of these words can greatly improve the performance.", "These works focus on modeling subword information in neural networks for tasks such as language modeling.", "Besides vector embeddings, there is recent work on multi-prototype embeddings where each word is represented by multiple vectors.", "The learning approach involves using a cluster centroid of context vectors (Huang et al., 2012) , or adapting the skip-gram model to learn multiple latent representations (Tian et al., 2014) .", "Neelakantan et al.", "(2014) furthers adapts skip-gram with a non-parametric approach to learn the embeddings with an arbitrary number of senses per word.", "incorporates an external dataset WORDNET to learn sense vectors.", 
"We compare these models with our multimodal embeddings in Section 4.", "Probabilistic FastText We introduce Probabilistic FastText, which combines a probabilistic word representation with the ability to capture subword structure.", "We describe the probabilistic subword representation in Section 3.1.", "We then describe the similarity measure and the loss function used to train the embeddings in Sections 3.2 and 3.3.", "We conclude by briefly presenting a simplified version of the energy function for isotropic Gaussian representations (Section 3.4), and the negative sampling scheme we use in training (Section 3.5).", "Probabilistic Subword Representation We represent each word with a Gaussian mixture with K Gaussian components.", "That is, a word w is associated with a density function f ( x) = K i=1 p w,i N (x; µ w,i , Σ w,i ) where {µ w,i } K k=1 are the mean vectors and {Σ w,i } are the covariance matrices, and {p w,i } K k=1 are the component probabilities which sum to 1.", "The mean vectors of Gaussian components hold much of the semantic information in density embeddings.", "While these models are successful based on word similarity and entailment benchmarks (Vilnis and McCallum, 2014; Athiwaratkun and Wilson, 2017) , the mean vectors are often dictionary-level, which can lead to poor semantic estimates for rare words, or the inability to handle words outside the training corpus.", "We propose using subword structures to estimate the mean vectors.", "We outline the formulation below.", "For word w, we estimate the mean vector µ w with the average over n-gram vectors and its dictionary-level vector.", "That is, µ w = 1 |N G w | + 1   v w + g∈N Gw z g   (1) where z g is a vector associated with an n-gram g, v w is the dictionary representation of word w, and N G w is a set of n-grams of word w. 
"Examples of 3- and 4-grams for the word \"beautiful\", including the beginning-of-word character '⟨' and end-of-word character '⟩', are: 3-grams: ⟨be, bea, eau, aut, uti, tif, ful, ul⟩; 4-grams: ⟨bea, beau, ..., iful, ful⟩.", "This structure is similar to that of FASTTEXT (Bojanowski et al., 2016); however, we note that FASTTEXT uses single-prototype deterministic embeddings as well as a training approach that maximizes the negative log-likelihood, whereas we use a multi-prototype probabilistic embedding and, for training, maximize the similarity between the words' probability densities, as described in Sections 3.2 and 3.3.", "Figure 1a depicts the subword structure for the mean vector.", "Figures 1b and 1c depict our models, Gaussian probabilistic FASTTEXT (PFT-G) and Gaussian mixture probabilistic FASTTEXT (PFT-GM).", "In the Gaussian case, we represent each mean vector with a subword estimation.", "For the Gaussian mixture case, we represent one Gaussian component's mean vector with the subword structure, whereas the other components' mean vectors are dictionary-based.", "This choice of dictionary-based mean vectors for the other components reduces the constraint imposed by the subword structure and promotes independence for meaning discovery.", "Similarity Measure between Words Traditionally, if words are represented by vectors, a common similarity metric is the dot product.", "In the case where words are represented by distribution functions, we use the generalized dot product in Hilbert space, $\langle \cdot, \cdot \rangle_{L^2}$, which is called the expected likelihood kernel (Jebara et al., 2004).", "We define the energy E(f, g) between two words f and g to be $E(f, g) = \log \langle f, g \rangle_{L^2} = \log \int f(x)\, g(x)\, dx$.", "With Gaussian mixtures $f(x) = \sum_{i=1}^{K} p_i\, \mathcal{N}(x; \mu_{f,i}, \Sigma_{f,i})$ and $g(x) = \sum_{i=1}^{K} q_i\, \mathcal{N}(x; \mu_{g,i}, \Sigma_{g,i})$, where $\sum_{i=1}^{K} p_i = 1$ and $\sum_{i=1}^{K} q_i = 1$, the energy has a closed form: $E(f, g) = \log \sum_{j=1}^{K} \sum_{i=1}^{K} p_i q_j\, e^{\xi_{i,j}}$ (2), where $\xi_{i,j}$ is the partial energy corresponding to the similarity between component i of the first word f and component j of the second word g: $\xi_{i,j} \equiv \log \mathcal{N}(0; \mu_{f,i} - \mu_{g,j}, \Sigma_{f,i} + \Sigma_{g,j}) = -\frac{1}{2} \log \det(\Sigma_{f,i} + \Sigma_{g,j}) - \frac{D}{2} \log(2\pi) - \frac{1}{2} (\mu_{f,i} - \mu_{g,j})^\top (\Sigma_{f,i} + \Sigma_{g,j})^{-1} (\mu_{f,i} - \mu_{g,j})$ (3).", "Figure 2 demonstrates the partial energies among the Gaussian components of two words.", "[Figure 2: Interaction between Gaussian mixture components of two words, showing the partial energies $\xi_{0,0}, \xi_{0,1}, \xi_{1,0}, \xi_{1,1}$ between the components of \"rock\" and \"pop\".]", "Loss Function The model parameters that we seek to learn are $v_w$ for each word w and $z_g$ for each n-gram g. We train the model by pushing the energy of a true context pair w and c to be higher than that of a negative context pair w and n by a margin m.", 
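Before turning to the loss itself, the closed form of Equations 2 and 3 can be made concrete with a short numpy sketch, specialized to spherical covariances Sigma = var * I (the function names, the log-sum-exp arrangement, and the test values are our own assumptions; this illustrates the formulas rather than reproducing the authors' code):

    import numpy as np

    def partial_energy(mu_fi, mu_gj, var_f, var_g):
        # Equation 3 with spherical covariances Sigma = var * I:
        # xi_ij = -0.5*log det(S) - (D/2)*log(2*pi) - 0.5*d^T S^{-1} d,
        # where S = (var_f + var_g) * I and d = mu_fi - mu_gj.
        d = mu_fi - mu_gj
        D = d.shape[0]
        s = var_f + var_g
        return (-0.5 * D * np.log(s)
                - 0.5 * D * np.log(2.0 * np.pi)
                - 0.5 * (d @ d) / s)

    def energy(p, mus_f, q, mus_g, var=1.0):
        # Equation 2: E(f, g) = log sum_{i,j} p_i q_j exp(xi_ij),
        # evaluated stably via log-sum-exp over the K x K grid.
        K = len(p)
        xi = np.array([[partial_energy(mus_f[i], mus_g[j], var, var)
                        for j in range(K)] for i in range(K)])
        logits = xi + np.log(np.outer(p, q))
        m = logits.max()
        return m + np.log(np.exp(logits - m).sum())

    # Toy check: two K=2 mixtures with 5-dimensional means.
    rng = np.random.default_rng(0)
    f_mus = rng.normal(size=(2, 5)); g_mus = rng.normal(size=(2, 5))
    print(energy([0.5, 0.5], f_mus, [0.5, 0.5], g_mus))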
"We use Adagrad (Duchi et al., 2011) to minimize the following loss to achieve this outcome: $L(f, g) = \max\,[0,\; m - E(f, g) + E(f, n)]$ (4).", "We describe how to sample words as well as their positive and negative contexts in Section 3.5.", "This loss function, together with the Gaussian mixture model with K > 1, has the ability to extract multiple senses of words.", "That is, for a word with multiple meanings, we can observe each mode to represent a distinct meaning.", "For instance, one density mode of \"star\" is close to the densities of \"celebrity\" and \"hollywood\", whereas another mode of \"star\" is near the densities of \"constellation\" and \"galaxy\".", "Energy Simplification In theory, it can be beneficial to have covariance matrices as learnable parameters.", "In practice, Athiwaratkun and Wilson (2017) observe that spherical covariances often perform on par with diagonal covariances while requiring far fewer computational resources.", "Using spherical covariances for each component, we can further simplify the energy function as follows: $\xi_{i,j} = -\frac{\alpha}{2} \cdot \|\mu_{f,i} - \mu_{g,j}\|^2$ (5), where the hyperparameter $\alpha$ is the scale of the inverse covariance term in Equation 3.", "We note that Equation 5 is equivalent to Equation 3 up to an additive constant, given that the covariance matrices are spherical and the same for all components.", "Word Sampling To generate a context word c of a given word w, we pick a nearby word within a context window of a fixed length $\ell$.", "We also use a word sampling technique similar to that of Mikolov et al. (2013b).", "This subsampling procedure selects words for training with lower probabilities if they appear frequently.", "This technique has the effect of reducing the importance of words such as 'the', 'a', and 'to', which can be predominant in a text corpus but are not as meaningful as less frequent words such as 'city', 'capital', and 'animal'.", "In particular, word w is discarded with probability $P(w) = 1 - \sqrt{t / f(w)}$, where $f(w)$ is the frequency of word w in the corpus and t is the frequency threshold.", "A negative context word is selected using a distribution $P_n(w) \propto U(w)^{3/4}$, where $U(w)$ is the unigram probability of word w. The exponent 3/4 also diminishes the importance of frequent words and shifts the training focus to other less frequent words.", 
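The pieces of this training scheme — the max-margin objective of Equation 4 and the two sampling distributions — can be sketched in a few lines of Python (a minimal illustration with our own function names and toy counts, not the authors' implementation):

    import numpy as np

    def max_margin_loss(E_pos, E_neg, m=1.0):
        # Equation 4: L = max(0, m - E(f, g) + E(f, n)), pushing the energy
        # of the true context pair above the negative pair by margin m.
        return max(0.0, m - E_pos + E_neg)

    def discard_prob(count, total, t=1e-5):
        # Subsampling: word w is discarded with P(w) = 1 - sqrt(t / f(w)),
        # so very frequent words like 'the' are dropped most often.
        f = count / total
        return max(0.0, 1.0 - np.sqrt(t / f))

    def make_negative_sampler(unigram_counts, seed=0):
        # Negatives drawn with P_n(w) proportional to U(w)^(3/4).
        words = list(unigram_counts)
        w = np.array([unigram_counts[x] for x in words], dtype=float) ** 0.75
        w /= w.sum()
        rng = np.random.default_rng(seed)
        return lambda: words[rng.choice(len(words), p=w)]

    sample_negative = make_negative_sampler({"the": 1000, "city": 40, "basalt": 2})
    print(sample_negative(), max_margin_loss(E_pos=-3.2, E_neg=-7.8))

In a full training loop, E_pos and E_neg would be the energies from Equations 2 and 5 between the sampled word, its context word, and the drawn negative, with gradients flowing into the v_w and z_g parameters.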
"Experiments We have proposed a probabilistic FASTTEXT model which combines the flexibility of subword structure with the density embedding approach.", "In this section, we show that our probabilistic representation, with subword mean vectors and the simplified energy function, outperforms many word similarity baselines and provides disentangled meanings for polysemies.", "First, we describe the training details in Section 4.1.", "We provide a qualitative evaluation in Section 4.2, showing meaningful nearest neighbors for the Gaussian embeddings, as well as the ability of Gaussian mixtures to capture multiple meanings.", "Our quantitative evaluation in Section 4.3 demonstrates strong performance against the baseline models FASTTEXT (Bojanowski et al., 2016), the dictionary-level Gaussian embeddings W2G (Vilnis and McCallum, 2014), and the Gaussian mixture embeddings W2GM (Athiwaratkun and Wilson, 2017).", "We train our models on foreign language corpora and show competitive results on foreign word similarity benchmarks in Section 4.4.", "Finally, we explain the importance of the n-gram structures for semantic sharing in Section 4.5.", "Training Details We train our models on both English and foreign language datasets.", "For English, we use the concatenation of UKWAC and WACKYPEDIA (Baroni et al., 2009), which consists of 3.376 billion words.", "We filter out word types that occur fewer than 5 times, which results in a vocabulary size of 2,677,466.", "For foreign languages, we demonstrate the training of our model on French, German, and Italian text corpora.", "We note that our model should be applicable to other languages as well.", "We use the FRWAC (French), DEWAC (German), and ITWAC (Italian) datasets (Baroni et al., 2009) as text corpora, consisting of 1.634, 1.716, and 1.955 billion words respectively.", "We use the same threshold, filtering out words that occur fewer than 5 times in each corpus.", "We have dictionary sizes of 1.3, 2.7, and 1.4 million words for FRWAC, DEWAC, and ITWAC.", "We tune the hyperparameters on the English corpus and use them for the foreign languages.", "Note that the adjustable parameters for our models are the loss margin m in Equation 4 and the scale $\alpha$ in Equation 5.", "We search for the optimal hyperparameters in a grid $m \in \{0.01, 0.1, 1, 10, 100\}$ and $\alpha \in \{\frac{1}{5 \times 10^{-3}}, \frac{1}{10^{-3}}, \frac{1}{2 \times 10^{-4}}, \frac{1}{1 \times 10^{-4}}\}$ on our English corpus.", "The hyperparameter $\alpha$ affects the scale of the loss function; therefore, we adjust the learning rate appropriately for each $\alpha$.", "In particular, the learning rates used are $\gamma \in \{10^{-4}, 10^{-5}, 10^{-6}\}$ for the respective $\alpha$ values.", "Other fixed hyperparameters include the number of Gaussian components K = 2, the context window length $\ell = 10$, and the subsampling threshold $t = 10^{-5}$.", "Similar to the setup in FASTTEXT, we use n-grams with n = 3, 4, 5, 6 to estimate the mean vectors.", "Qualitative Evaluation - Nearest Neighbors We show that our embeddings learn word semantics well by demonstrating meaningful nearest neighbors.", "Table 1 shows the nearest neighbors of polysemous words such as rock, star, and cell.", "We note that subword embeddings prefer words with overlapping characters as nearest neighbors.", "For instance, \"rock-y\", \"rockn\", and \"rock-\" are all close to the word \"rock\".", "For the purpose of demonstration, we only show words with meaningful variations and omit words with small character-based variations previously mentioned.", "However, all words shown are in the top-100 nearest words.", 
"We observe the separation in meanings for the multi-component case; for instance, one component of the word \"bank\" corresponds to a financial bank, whereas the other component corresponds to a river bank.", "The single-component case also has interesting behavior.", "We observe that the subword embeddings of polysemous words can represent both meanings.", "For instance, both \"lava-rock\" and \"rock-pop\" are among the closest words to \"rock\".", "Word Similarity Evaluation We evaluate our embeddings on several standard word similarity datasets, namely SL-999 (Hill et al., 2014), WS-353 (Finkelstein et al., 2002), MEN-3k (Bruni et al., 2014), MC-30 (Miller and Charles, 1991), RG-65 (Rubenstein and Goodenough, 1965), YP-130 (Yang and Powers, 2006), MTurk(-287, -771) (Radinsky et al., 2011; Halawi et al., 2012), and RW-2k (Luong et al., 2013).", "Each dataset contains a list of word pairs with a human score of how related or similar the two words are.", "We use the notation DATASET-NUM to denote the number of word pairs NUM in each evaluation set.", "We note that the dataset RW focuses more on infrequent words and SimLex-999 focuses on the similarity of words rather than relatedness.", "We also compare PFT-GM with other multi-prototype embeddings in the literature using SCWS (Huang et al., 2012), a word similarity dataset that aims to measure the ability of embeddings to discern multiple meanings.", "We calculate the Spearman correlation (Spearman, 1904) between the labels and the scores generated by the embeddings.", "Table 1: Nearest neighbors of polysemous words. PFT-GM (word:component): rock:0 — rock:0, rocks:0, rocky:0, mudrock:0, rockscape:0, boulders:0, outcrops:0; rock:1 — rock:1, punk:0, punk-rock:0, indie:0, pop-rock:0, pop-punk:0, indie-rock:0, band:1; bank:0 — bank:0, banks:0, banker:0, bankers:0, bankcard:0, Citibank:0, debits:0; bank:1 — bank:1, banks:1, river:0, riverbank:0, embanking:0, banks:0, confluence:1; star:0 — stars:0, stellar:0, nebula:0, starspot:0, stars.:0, stellas:0, constellation:1; star:1 — star:1, stars:1, star-star:0, 5-stars:0, movie-star:0, mega-star:0, super-star:0; cell:0 — cell:0, cellular:0, acellular:0, lymphocytes:0, T-cells:0, cytes:0, leukocytes:0; cell:1 — cell:1, cells:1, cellular:0, cellular-phone:0, cellphone:0, transcellular:0; left:0 — left:0, right:1, left-hand:0, right-left:0, left-right-left:0, right-hand:0, leftwards:0; left:1 — left:1, leaving:0, leavings:0, remained:0, leave:1, enmained:0, leaving-age:0, sadly-departed:0. PFT-G (single component): rock — rock, rock-y, rockn, rock-, rock-funk, rock/, lava-rock, nu-rock, rock-pop, rock/ice, coral-rock; bank — bank-, bank/, bank-account, bank., banky, bank-to-bank, banking, Bank, bank/cash, banks.; star — movie-stars, star-planet, starsailor, Star, starsign; cell — cell/tumour; left — left/joined, leaving, right, leftsided, lefted, leftside.", "The Spearman correlation is a rank-based correlation measure that assesses how well the scores describe the true labels.", "The scores we use are cosine-similarity scores between the mean vectors.", "In the case of Gaussian mixtures, we use the pairwise maximum score: $s(f, g) = \max_{i \in 1, \ldots, K}\, \max_{j \in 1, \ldots, K}\, \frac{\mu_{f,i} \cdot \mu_{g,j}}{\|\mu_{f,i}\| \cdot \|\mu_{g,j}\|}$ (6).", "The pair (i, j) that achieves the maximum cosine similarity corresponds to the Gaussian component pair that is the closest in meaning.", "Therefore, this similarity score yields the most related senses of a given word pair.", "This score reduces to cosine similarity in the Gaussian case (K = 1).", 
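For completeness, Equation 6 and the Spearman evaluation loop just described can be sketched as follows (a minimal illustration assuming numpy and scipy; the variable names and the toy word pairs are ours, not part of the benchmark suites):

    import numpy as np
    from scipy.stats import spearmanr

    def maxsim(mus_f, mus_g):
        # Equation 6: maximum cosine similarity over all component
        # pairs (i, j) of two mixtures, given mean matrices of shape (K, D).
        a = mus_f / np.linalg.norm(mus_f, axis=1, keepdims=True)
        b = mus_g / np.linalg.norm(mus_g, axis=1, keepdims=True)
        return float((a @ b.T).max())

    def evaluate(embeddings, pairs, human_scores):
        # Spearman's rank correlation between model scores and the
        # human relatedness labels of a word similarity dataset.
        model_scores = [maxsim(embeddings[w1], embeddings[w2])
                        for w1, w2 in pairs]
        return spearmanr(model_scores, human_scores)[0]

    # Toy usage: K=2 components with 5-dimensional means, two word pairs.
    rng = np.random.default_rng(0)
    emb = {w: rng.normal(size=(2, 5)) for w in ["rock", "stone", "jazz"]}
    print(evaluate(emb, [("rock", "stone"), ("rock", "jazz")], [9.0, 4.0]))

With K = 1 the (K, D) mean matrix has a single row, and maxsim degenerates to ordinary cosine similarity, matching the reduction noted above.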
"Comparison Against Dictionary-Level Density Embeddings and FASTTEXT We compare our models against the dictionary-level Gaussian and Gaussian mixture embeddings in Table 2, with 50-dimensional and 300-dimensional mean vectors.", "The 50-dimensional results for W2G and W2GM are obtained directly from Athiwaratkun and Wilson (2017).", "For comparison, we use the public code to train the 300-dimensional W2G and W2GM models, and the publicly available FASTTEXT model.", "We calculate Spearman's correlations for each of the word similarity datasets.", "These datasets vary greatly in the number of word pairs; therefore, we mark each dataset with its size for visibility.", "For a fair and objective comparison, we calculate a weighted average of the correlation scores for each model.", "Our PFT-GM achieves the highest average score among all competing models, outperforming both FASTTEXT and the dictionary-level embeddings W2G and W2GM.", "Our unimodal model PFT-G also outperforms the dictionary-level counterpart W2G and FASTTEXT.", "We note that the model W2GM appears quite strong according to Table 2, beating PFT-GM on many word similarity datasets.", "However, the datasets on which W2GM performs better than PFT-GM often have small sizes, such as MC-30 or RG-65, where the Spearman's correlations are more subject to noise.", "Overall, PFT-GM outperforms W2GM by 3.1% and 8.7% in the 300- and 50-dimensional models.", "In addition, PFT-G and PFT-GM also outperform FASTTEXT by 1.2% and 3.7% respectively.", "Comparison Against Multi-Prototype Models In Table 3, we compare 50- and 300-dimensional PFT-GM models against the multi-prototype embeddings described in Section 2 and the existing multimodal density embeddings W2GM.", "We use the word similarity dataset SCWS (Huang et al., 2012), which contains words with potentially many meanings, and is a benchmark for distinguishing senses.", "We use the maximum similarity score (Equation 6), denoted as MAXSIM.", "AVESIM denotes the average of the similarity scores, rather than the maximum.", "We outperform the dictionary-based density embeddings W2GM in both 50 and 300 dimensions, demonstrating the benefits of subword information.", "Our model achieves state-of-the-art results, similar to those of Neelakantan et al. (2014).", "Evaluation on Foreign Language Embeddings We evaluate the foreign-language embeddings on word similarity datasets in the respective languages.", "We use Italian WORDSIM353 and Italian SIMLEX-999 (Leviant and Reichart, 2015) for the Italian models, GUR350 and GUR65 (Gurevych, 2005) for the German models, and French WORDSIM353 (Finkelstein et al., 2002) for the French models.", "For the datasets GUR350 and GUR65, we use the results reported in the FASTTEXT publication (Bojanowski et al., 2016).", "For the other datasets, we train FASTTEXT models for comparison using the public code on our text corpora.", "We also train the dictionary-level models W2G and W2GM for comparison.", "Table 4 shows the Spearman's correlation results of our models.", "We outperform FASTTEXT on many word similarity benchmarks.", "Our results are also significantly better than those of the dictionary-based models, W2G and W2GM.", "We hypothesize that W2G and W2GM could perform better than the currently reported results given proper pre-processing of words with special characters such as accents.", "We investigate the nearest neighbors of polysemies in foreign languages and also observe clear sense separation.", 
"For example, piano in Italian can mean \"floor\" or \"slow\".", "These two meanings are reflected in the nearest neighbors, where one component is close to piano-piano and pianod, which mean \"slowly\", whereas the other component is close to piani (floors), ristrutturazione (renovation), and infrastrutture (infrastructure).", "Table 5 shows additional results, demonstrating that the disentangled semantics can be observed in multiple languages.", "Qualitative Evaluation - Subword Decomposition One of the motivations for using subword information is the ability to handle out-of-vocabulary words.", "Another benefit is the ability to improve the semantics of rare words via subword sharing.", "Since text corpora follow Zipf's power law (Zipf, 1949), words at the tail of the occurrence distribution appear much less frequently.", "Training these words to have a good semantic representation is challenging if done at the word level alone.", "However, an n-gram such as 'abnorm' is trained during occurrences of both \"abnormal\" and \"abnormality\" in the corpus, which further augments both words' semantics.", "Figure 3 shows the contribution of n-grams to the final representation.", "We show only the n-grams with the top-5 and bottom-5 similarity scores.", "We observe that the final representations of both words align with the n-grams \"abno\", \"bnor\", \"abnorm\", \"anbnor\", \"<abn\".", "In fact, both \"abnormal\" and \"abnormality\" share the same top-5 n-grams.", "Because many rare words such as \"autobiographer\", \"circumnavigations\", or \"hypersensitivity\" are composed of many common subwords, the n-gram structure can help improve representation quality.", "Numbers of Components It is possible to train our approach with K > 2 mixture components; however, Athiwaratkun and Wilson (2017) observe that dictionary-level Gaussian mixtures with K = 3 do not improve word similarity results overall, even though these mixtures can discover 3 distinct senses for certain words.", "Indeed, while K > 2 in principle allows for greater flexibility than K = 2, most words can be modelled very flexibly with a mixture of two Gaussians, making K = 2 a good balance between flexibility and Occam's razor.", "Even for words with single meanings, our PFT model with K = 2 often learns richer representations than a K = 1 model.", "For example, the two mixture components can learn to cluster together to form a more heavy-tailed unimodal distribution, which captures a word with one dominant meaning but with close relationships to a wide range of other words.", "In addition, we observe that our model with K components can capture more than K meanings.", "For instance, in a K = 1 model, the word pairs (\"cell\", \"jail\"), (\"cell\", \"biology\"), and (\"cell\", \"phone\") will all have positive similarity scores.", "In general, if a word has multiple meanings, these meanings are usually compressed into the linear substructure of the embeddings (Arora et al., 2016).", "However, the pairs involving non-dominant senses often have lower similarity scores, which might not accurately reflect their true similarities.", "Conclusion and Future Work We have proposed models for probabilistic word representations equipped with flexible subword structures, suitable for rare and out-of-vocabulary words.", "The proposed probabilistic formulation incorporates uncertainty information and naturally allows one to uncover multiple meanings with multimodal density representations.", 
"Our models offer better semantic quality, outperforming competing models on word similarity benchmarks.", "Moreover, our multimodal density models can provide interpretable and disentangled representations, and are the first multi-prototype embeddings that can handle rare words.", "Future work includes an investigation into the trade-off between learning full covariance matrices for each word distribution, computational complexity, and performance.", "This direction can potentially have a great impact on tasks where the variance information is crucial, such as hierarchical modeling with probability distributions (Athiwaratkun and Wilson, 2018).", "Other future work involves co-training PFT on many languages.", "Currently, existing work on multi-lingual embeddings aligns the word semantics of pre-trained vectors (Smith et al., 2017), which can be suboptimal due to polysemies.", "We envision that the multi-prototype nature can help disambiguate words with multiple meanings and facilitate semantic alignment." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "3.5", "4", "4.1", "4.2", "4.3", "4.3.1", "4.3.2", "4.4", "4.5", "5", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Probabilistic FastText", "Probabilistic Subword Representation", "Similarity Measure between Words", "Loss Function", "Energy Simplification", "Word Sampling", "Experiments", "Training Details", "Qualitative Evaluation -Nearest neighbors", "Word Similarity Evaluation", "Comparison Against Dictionary-Level Density Embeddings and FASTTEXT", "Comparison Against Multi-Prototype Models", "Evaluation on Foreign Language Embeddings", "Qualitative Evaluation -Subword Decomposition", "Numbers of Components", "Conclusion and Future Work" ] }
GEM-SciDuet-train-72#paper-1163#slide-1
Probabilistic fasttext
L[cool] = COOL f(cool) COOL ~rock,0 music L[coolzz] = f(coolzz) COOLZZ dictionary-based embeddings character-based probabilistic embeddings w2gm FastText PFT ~rock,1 Spearman Correlation Word Component Nearest neighbors (cosine similarity) rock rocks:0, rocky:0, mudrock:0, rockscape:0 rock punk:0, punk-rock:0, indie:0, pop-rock:0 Word Component Nearest neighbors (cosine similarity) Word Component / Meaning Nearest neighbors (English Translation) rock rocks:0, rocky:0, mudrock:0, rockscape:0 secondo 0 / 2nd Secondo (2nd), terzo (3rd) , quinto (5th), primo (first) rock punk:0, punk-rock:0, indie:0, pop-rock:0 secondo 1 / according to conformit (compliance), attenendosi (following), cui (which)
L[cool] = COOL f(cool) COOL ~rock,0 music L[coolzz] = f(coolzz) COOLZZ dictionary-based embeddings character-based probabilistic embeddings w2gm FastText PFT ~rock,1 Spearman Correlation Word Component Nearest neighbors (cosine similarity) rock rocks:0, rocky:0, mudrock:0, rockscape:0 rock punk:0, punk-rock:0, indie:0, pop-rock:0 Word Component Nearest neighbors (cosine similarity) Word Component / Meaning Nearest neighbors (English Translation) rock rocks:0, rocky:0, mudrock:0, rockscape:0 secondo 0 / 2nd Secondo (2nd), terzo (3rd) , quinto (5th), primo (first) rock punk:0, punk-rock:0, indie:0, pop-rock:0 secondo 1 / according to conformit (compliance), attenendosi (following), cui (which)
[]
GEM-SciDuet-train-72#paper-1163#slide-3
1163
Probabilistic FastText for Multi-Sense Word Embeddings
We introduce Probabilistic FastText, a new model for word embeddings that can capture multiple word senses, sub-word structure, and uncertainty information. In particular, we represent each word with a Gaussian mixture density, where the mean of a mixture component is given by the sum of n-grams. This representation allows the model to share statistical strength across sub-word structures (e.g. Latin roots), producing accurate representations of rare, misspelt, or even unseen words. Moreover, each component of the mixture can capture a different word sense. Probabilistic FastText outperforms both FASTTEXT, which has no probabilistic model, and dictionary-level probabilistic embeddings, which do not incorporate subword structures, on several word-similarity benchmarks, including English RareWord and foreign language datasets. We also achieve state-of-the-art performance on benchmarks that measure the ability to discern different meanings. Thus, the proposed model is the first to achieve multi-sense representations while having enriched semantics on rare words.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191 ], "paper_content_text": [ "Introduction Word embeddings are foundational to natural language processing.", "In order to model language, we need word representations to contain as much semantic information as possible.", "Most research has focused on vector word embeddings, such as WORD2VEC (Mikolov et al., 2013a) , where words with similar meanings are mapped to nearby points in a vector space.", "Following the * Work done partly during internship at Amazon.", "seminal work of Mikolov et al.", "(2013a) , there have been numerous works looking to learn efficient word embeddings.", "One shortcoming with the above approaches to word embedding that are based on a predefined dictionary (termed as dictionary-based embeddings) is their inability to learn representations of rare words.", "To overcome this limitation, character-level word embeddings have been proposed.", "FASTTEXT (Bojanowski et al., 2016) is the state-of-the-art character-level approach to embeddings.", "In FASTTEXT, each word is modeled by a sum of vectors, with each vector representing an n-gram.", "The benefit of this approach is that the training process can then share strength across words composed of common roots.", "For example, with individual representations for \"circum\" and \"navigation\", we can construct an informative representation for \"circumnavigation\", which would otherwise appear too infrequently to learn a dictionary-level embedding.", "In addition to effectively modelling rare words, character-level embeddings can also represent slang or misspelled words, such as \"dogz\", and can share strength across different languages that share roots, e.g.", "Romance languages share latent roots.", "A different promising direction involves representing words with probability distributions, instead of point vectors.", "For example, Vilnis and McCallum (2014) represents words with Gaussian distributions, which can capture uncertainty information.", "Athiwaratkun and Wilson (2017) generalizes this approach to multimodal probability distributions, which can naturally represent words with different meanings.", "For example, the distribution for \"rock\" could have mass near the word \"jazz\" and \"pop\", but also \"stone\" and \"basalt\".", "Athiwaratkun and Wilson (2018) further developed this approach to learn hierarchical word representations: for example, the word \"music\" can be learned to have a broad distribution, which encapsulates the distributions for \"jazz\" and \"rock\".", "In this paper, we propose Probabilistic Fast-Text (PFT), which provides probabilistic characterlevel representations of words.", "The resulting word embeddings are highly expressive, 
GEM-SciDuet-train-72#paper-1163#slide-3
Word embeddings
one-hot vector dense representation size of vocabulary dimension
one-hot vector dense representation size of vocabulary dimension
[]
GEM-SciDuet-train-72#paper-1163#slide-4
1163
Probabilistic FastText for Multi-Sense Word Embeddings
We introduce Probabilistic FastText, a new model for word embeddings that can capture multiple word senses, sub-word structure, and uncertainty information. In particular, we represent each word with a Gaussian mixture density, where the mean of a mixture component is given by the sum of n-grams. This representation allows the model to share statistical strength across sub-word structures (e.g. Latin roots), producing accurate representations of rare, misspelt, or even unseen words. Moreover, each component of the mixture can capture a different word sense. Probabilistic FastText outperforms both FASTTEXT, which has no probabilistic model, and dictionary-level probabilistic embeddings, which do not incorporate subword structures, on several word-similarity benchmarks, including English RareWord and foreign language datasets. We also achieve state-ofart performance on benchmarks that measure ability to discern different meanings. Thus, the proposed model is the first to achieve multi-sense representations while having enriched semantics on rare words.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191 ], "paper_content_text": [ "Introduction Word embeddings are foundational to natural language processing.", "In order to model language, we need word representations to contain as much semantic information as possible.", "Most research has focused on vector word embeddings, such as WORD2VEC (Mikolov et al., 2013a) , where words with similar meanings are mapped to nearby points in a vector space.", "Following the * Work done partly during internship at Amazon.", "seminal work of Mikolov et al.", "(2013a) , there have been numerous works looking to learn efficient word embeddings.", "One shortcoming with the above approaches to word embedding that are based on a predefined dictionary (termed as dictionary-based embeddings) is their inability to learn representations of rare words.", "To overcome this limitation, character-level word embeddings have been proposed.", "FASTTEXT (Bojanowski et al., 2016) is the state-of-the-art character-level approach to embeddings.", "In FASTTEXT, each word is modeled by a sum of vectors, with each vector representing an n-gram.", "The benefit of this approach is that the training process can then share strength across words composed of common roots.", "For example, with individual representations for \"circum\" and \"navigation\", we can construct an informative representation for \"circumnavigation\", which would otherwise appear too infrequently to learn a dictionary-level embedding.", "In addition to effectively modelling rare words, character-level embeddings can also represent slang or misspelled words, such as \"dogz\", and can share strength across different languages that share roots, e.g.", "Romance languages share latent roots.", "A different promising direction involves representing words with probability distributions, instead of point vectors.", "For example, Vilnis and McCallum (2014) represents words with Gaussian distributions, which can capture uncertainty information.", "Athiwaratkun and Wilson (2017) generalizes this approach to multimodal probability distributions, which can naturally represent words with different meanings.", "For example, the distribution for \"rock\" could have mass near the word \"jazz\" and \"pop\", but also \"stone\" and \"basalt\".", "Athiwaratkun and Wilson (2018) further developed this approach to learn hierarchical word representations: for example, the word \"music\" can be learned to have a broad distribution, which encapsulates the distributions for \"jazz\" and \"rock\".", "In this paper, we propose Probabilistic Fast-Text (PFT), which provides probabilistic characterlevel representations of words.", "The resulting word embeddings are highly expressive, 
yet straightforward and interpretable, with simple, efficient, and intuitive training procedures.", "PFT can model rare words, uncertainty information, hierarchical representations, and multiple word senses.", "In particular, we represent each word with a Gaussian or a Gaussian mixture density, which we name PFT-G and PFT-GM respectively.", "Each component of the mixture can represent different word senses, and the mean vectors of each component decompose into vectors of n-grams, to capture character-level information.", "We also derive an efficient energy-based max-margin training procedure for PFT.", "We compare against FASTTEXT as well as the existing density word embeddings W2G (Gaussian) and W2GM (Gaussian mixture).", "Our models extract high-quality semantics based on multiple word-similarity benchmarks, including the rare word dataset.", "We obtain an average weighted improvement of 3.7% over FASTTEXT (Bojanowski et al., 2016) and 3.1% over the dictionary-level density-based models.", "We also observe meaningful nearest neighbors, particularly in the multimodal density case, where each mode captures a distinct meaning.", "Our models are also directly portable to foreign languages without any hyperparameter modification, where we observe strong performance, outperforming FASTTEXT on many foreign word similarity datasets.", "Our multimodal word representation can also disentangle meanings, and is able to separate different senses in foreign polysemies.", "In particular, our models attain state-of-the-art performance on SCWS, a benchmark measuring the ability to separate different word meanings, achieving a 1.0% improvement over the recent density embedding model W2GM (Athiwaratkun and Wilson, 2017).", "To the best of our knowledge, we are the first to develop multi-sense embeddings with high semantic quality for rare words.", "Our code and embeddings are publicly available at https://github.com/benathi/multisense-prob-fasttext.", "Related Work Early word embeddings which capture semantic information include Bengio et al. (2003), Collobert and Weston (2008), and Mikolov et al. (2011).", "Later, Mikolov et al. (2013a) developed the popular WORD2VEC method, which proposes a log-linear model and a negative sampling approach that efficiently extracts rich semantics from text.", "Another popular approach, GLOVE, learns word embeddings by factorizing co-occurrence matrices (Pennington et al., 2014).", "Recently there has been a surge of interest in making dictionary-based word embeddings more flexible.", "This flexibility has valuable applications in many end-tasks such as language modeling (Kim et al., 2016), named entity recognition (Kuru et al., 2016), and machine translation (Zhao and Zhang, 2016; Lee et al., 2017), where unseen words are frequent and proper handling of these words can greatly improve performance.", "These works focus on modeling subword information in neural networks for tasks such as language modeling.", "Besides vector embeddings, there is recent work on multi-prototype embeddings where each word is represented by multiple vectors.", "The learning approach involves using a cluster centroid of context vectors (Huang et al., 2012), or adapting the skip-gram model to learn multiple latent representations (Tian et al., 2014).", "Neelakantan et al. (2014) further adapts skip-gram with a non-parametric approach to learn embeddings with an arbitrary number of senses per word.", "Other work incorporates the external dataset WORDNET to learn sense vectors.", 
"We compare these models with our multimodal embeddings in Section 4.", "Probabilistic FastText We introduce Probabilistic FastText, which combines a probabilistic word representation with the ability to capture subword structure.", "We describe the probabilistic subword representation in Section 3.1.", "We then describe the similarity measure and the loss function used to train the embeddings in Sections 3.2 and 3.3.", "We conclude by briefly presenting a simplified version of the energy function for isotropic Gaussian representations (Section 3.4), and the negative sampling scheme we use in training (Section 3.5).", "Probabilistic Subword Representation We represent each word with a Gaussian mixture with K Gaussian components.", "That is, a word w is associated with a density function f ( x) = K i=1 p w,i N (x; µ w,i , Σ w,i ) where {µ w,i } K k=1 are the mean vectors and {Σ w,i } are the covariance matrices, and {p w,i } K k=1 are the component probabilities which sum to 1.", "The mean vectors of Gaussian components hold much of the semantic information in density embeddings.", "While these models are successful based on word similarity and entailment benchmarks (Vilnis and McCallum, 2014; Athiwaratkun and Wilson, 2017) , the mean vectors are often dictionary-level, which can lead to poor semantic estimates for rare words, or the inability to handle words outside the training corpus.", "We propose using subword structures to estimate the mean vectors.", "We outline the formulation below.", "For word w, we estimate the mean vector µ w with the average over n-gram vectors and its dictionary-level vector.", "That is, µ w = 1 |N G w | + 1   v w + g∈N Gw z g   (1) where z g is a vector associated with an n-gram g, v w is the dictionary representation of word w, and N G w is a set of n-grams of word w. 
"Examples of 3- and 4-grams for the word \"beautiful\", including the beginning-of-word character '<' and the end-of-word character '>', are: 3-grams: <be, bea, eau, aut, uti, tif, ful, ul>; 4-grams: <bea, beau, ..., iful, ful>.", "This structure is similar to that of FASTTEXT (Bojanowski et al., 2016); however, we note that FASTTEXT uses single-prototype deterministic embeddings as well as a training approach that maximizes the negative log-likelihood, whereas we use a multi-prototype probabilistic embedding, and for training we maximize the similarity between the words' probability densities, as described in Sections 3.2 and 3.3.", "Figure 1a depicts the subword structure for the mean vector.", "Figures 1b and 1c depict our models, Gaussian probabilistic FASTTEXT (PFT-G) and Gaussian mixture probabilistic FASTTEXT (PFT-GM).", "In the Gaussian case, we represent each mean vector with a subword estimation.", "For the Gaussian mixture case, we represent one Gaussian component's mean vector with the subword structure, whereas the other components' mean vectors are dictionary-based.", "This choice of dictionary-based mean vectors for the other components reduces the constraint imposed by the subword structure and promotes independence for meaning discovery.", "Similarity Measure between Words Traditionally, if words are represented by vectors, a common similarity metric is the dot product.", "In the case where words are represented by distribution functions, we use the generalized dot product in Hilbert space, $\langle \cdot, \cdot \rangle_{L_2}$, which is called the expected likelihood kernel (Jebara et al., 2004).", "We define the energy $E(f, g)$ between two words $f$ and $g$ to be $E(f, g) = \log \langle f, g \rangle_{L_2} = \log \int f(x) g(x)\, dx$.", "With Gaussian mixtures $f(x) = \sum_{i=1}^{K} p_i\, \mathcal{N}(x; \mu_{f,i}, \Sigma_{f,i})$ and $g(x) = \sum_{i=1}^{K} q_i\, \mathcal{N}(x; \mu_{g,i}, \Sigma_{g,i})$, where $\sum_{i=1}^{K} p_i = 1$ and $\sum_{i=1}^{K} q_i = 1$, the energy has a closed form: $E(f, g) = \log \sum_{j=1}^{K} \sum_{i=1}^{K} p_i q_j e^{\xi_{i,j}}$ (2), where $\xi_{i,j}$ is the partial energy corresponding to the similarity between component $i$ of the first word $f$ and component $j$ of the second word $g$: $\xi_{i,j} \equiv \log \mathcal{N}(0; \mu_{f,i} - \mu_{g,j}, \Sigma_{f,i} + \Sigma_{g,j}) = -\frac{1}{2} \log \det(\Sigma_{f,i} + \Sigma_{g,j}) - \frac{D}{2} \log(2\pi) - \frac{1}{2} (\mu_{f,i} - \mu_{g,j})^{\top} (\Sigma_{f,i} + \Sigma_{g,j})^{-1} (\mu_{f,i} - \mu_{g,j})$ (3).", "Figure 2 demonstrates the partial energies $\xi_{i,j}$ among the Gaussian mixture components of the two words \"rock\" (components rock:0, rock:1) and \"pop\" (components pop:0, pop:1).", "Loss Function The model parameters that we seek to learn are $v_w$ for each word $w$ and $z_g$ for each n-gram $g$.", "We train the model by pushing the energy of a true context pair $w$ and $c$ to be higher than that of the negative context pair $w$ and $n$ by a margin $m$.", 
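The closed-form energy in Equations (2) and (3) is straightforward to compute with a numerically stable log-sum-exp. The sketch below assumes spherical covariances, Sigma = var * I (the setting the paper adopts in Section 3.4), so the log-determinant and quadratic terms simplify; it is an illustration under that assumption, not the reference implementation.

```python
import numpy as np
from scipy.special import logsumexp

def partial_energy(mu_f, mu_g, var_f, var_g):
    """xi_{i,j} of Eq. (3) for spherical covariances Sigma = var * I."""
    d = mu_f - mu_g
    var = var_f + var_g                     # variance of the difference
    D = d.shape[0]
    return (-0.5 * D * np.log(var)          # -1/2 log det(Sigma_f + Sigma_g)
            - 0.5 * D * np.log(2.0 * np.pi)
            - 0.5 * (d @ d) / var)

def energy(mus_f, mus_g, p, q, var_f=1.0, var_g=1.0):
    """E(f, g) of Eq. (2): log expected likelihood kernel of two mixtures.
    mus_f, mus_g are (K, D) component means; p, q are (K,) mixture weights."""
    K = len(p)
    xi = np.array([[partial_energy(mus_f[i], mus_g[j], var_f, var_g)
                    for j in range(K)] for i in range(K)])
    return logsumexp(np.log(np.outer(p, q)) + xi)   # log sum_ij p_i q_j e^xi_ij
```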
"We use Adagrad (Duchi et al., 2011) to minimize the following loss to achieve this outcome: $L(f, g) = \max\left[0, m - E(f, g) + E(f, n)\right]$ (4).", "We describe how we sample words as well as their positive and negative contexts in Section 3.5.", "This loss function, together with the Gaussian mixture model with K > 1, has the ability to extract multiple senses of words.", "That is, for a word with multiple meanings, we can observe each mode to represent a distinct meaning.", "For instance, one density mode of \"star\" is close to the densities of \"celebrity\" and \"hollywood\", whereas another mode of \"star\" is near the densities of \"constellation\" and \"galaxy\".", "Energy Simplification In theory, it can be beneficial to have covariance matrices as learnable parameters.", "In practice, Athiwaratkun and Wilson (2017) observe that spherical covariances often perform on par with diagonal covariances while requiring much less computation.", "Using spherical covariances for each component, we can further simplify the energy function as follows: $\xi_{i,j} = -\frac{\alpha}{2} \cdot \|\mu_{f,i} - \mu_{g,j}\|^2$ (5), where the hyperparameter $\alpha$ is the scale of the inverse covariance term in Equation 3.", "We note that Equation 5 is equivalent to Equation 3 up to an additive constant, given that the covariance matrices are spherical and the same for all components.", "Word Sampling To generate a context word c of a given word w, we pick a nearby word within a context window of a fixed length $\ell$.", "We also use a word sampling technique similar to Mikolov et al. (2013b).", "This subsampling procedure selects words for training with lower probabilities if they appear frequently.", "This technique has the effect of reducing the importance of words such as 'the', 'a', 'to', which can be predominant in a text corpus but are not as meaningful as less frequent words such as 'city', 'capital', 'animal', etc.", "In particular, word w is discarded with probability $P(w) = 1 - \sqrt{t / f(w)}$, where $f(w)$ is the frequency of word $w$ in the corpus and $t$ is the frequency threshold.", "A negative context word is selected using the distribution $P_n(w) \propto U(w)^{3/4}$, where $U(w)$ is the unigram probability of word $w$.", 
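A hedged sketch of the training objective and sampling distributions follows, combining the margin loss of Equation (4) with the simplified energy of Equation (5) and the word-sampling probabilities above. The function names and the (means, weights) tuple convention are illustrative choices rather than the paper's code, and the discard probability assumes f(w) is a relative frequency.

```python
import numpy as np
from scipy.special import logsumexp

def simplified_energy(mus_f, mus_g, p, q, alpha):
    """E(f, g) under Eq. (5): xi_ij = -(alpha/2) ||mu_f,i - mu_g,j||^2."""
    diff = mus_f[:, None, :] - mus_g[None, :, :]        # shape (K, K, D)
    xi = -0.5 * alpha * (diff ** 2).sum(axis=-1)        # shape (K, K)
    return logsumexp(np.log(np.outer(p, q)) + xi)

def margin_loss(word, pos, neg, alpha, m=1.0):
    """Eq. (4): hinge on the energy gap between true and negative contexts.
    Each argument is a (component_means, mixture_weights) pair."""
    e_pos = simplified_energy(word[0], pos[0], word[1], pos[1], alpha)
    e_neg = simplified_energy(word[0], neg[0], word[1], neg[1], alpha)
    return max(0.0, m - e_pos + e_neg)

def discard_prob(rel_freq, t=1e-5):
    """Subsampling: chance of dropping a word with relative frequency f(w)."""
    return max(0.0, 1.0 - np.sqrt(t / rel_freq))

def negative_dist(unigram_probs):
    """Negative sampling distribution P_n(w) proportional to U(w)^(3/4)."""
    u = np.asarray(unigram_probs, dtype=float) ** 0.75
    return u / u.sum()
```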
"The exponent 3/4 also diminishes the importance of frequent words and shifts the training focus to less frequent words.", "Experiments We have proposed a probabilistic FASTTEXT model which combines the flexibility of subword structure with the density embedding approach.", "In this section, we show that our probabilistic representation, with subword mean vectors and the simplified energy function, outperforms many word similarity baselines and provides disentangled meanings for polysemies.", "First, we describe the training details in Section 4.1.", "We provide qualitative evaluation in Section 4.2, showing meaningful nearest neighbors for the Gaussian embeddings, as well as the ability to capture multiple meanings by Gaussian mixtures.", "Our quantitative evaluation in Section 4.3 demonstrates strong performance against the baseline models FASTTEXT (Bojanowski et al., 2016), the dictionary-level Gaussian embeddings W2G (Vilnis and McCallum, 2014), and the Gaussian mixture embeddings W2GM (Athiwaratkun and Wilson, 2017).", "We train our models on foreign language corpora and show competitive results on foreign word similarity benchmarks in Section 4.4.", "Finally, we explain the importance of the n-gram structures for semantic sharing in Section 4.5.", "Training Details We train our models on both English and foreign language datasets.", "For English, we use the concatenation of UKWAC and WACKYPEDIA (Baroni et al., 2009), which consists of 3.376 billion words.", "We filter out word types that occur fewer than 5 times, which results in a vocabulary size of 2,677,466.", "For foreign languages, we demonstrate the training of our model on French, German, and Italian text corpora.", "We note that our model should be applicable to other languages as well.", "We use the FRWAC (French), DEWAC (German), and ITWAC (Italian) datasets (Baroni et al., 2009) as text corpora, consisting of 1.634, 1.716 and 1.955 billion words respectively.", "We use the same threshold, filtering out words that occur fewer than 5 times in each corpus.", "We have dictionary sizes of 1.3, 2.7, and 1.4 million words for FRWAC, DEWAC, and ITWAC.", "We adjust the hyperparameters on the English corpus and use them for the foreign languages.", "Note that the adjustable parameters for our models are the loss margin $m$ in Equation 4 and the scale $\alpha$ in Equation 5.", "We search for the optimal hyperparameters in a grid $m \in \{0.01, 0.1, 1, 10, 100\}$ and $\alpha \in \{\frac{1}{5 \times 10^{-3}}, \frac{1}{10^{-3}}, \frac{1}{2 \times 10^{-4}}, \frac{1}{1 \times 10^{-4}}\}$ on our English corpus.", "The hyperparameter $\alpha$ affects the scale of the loss function; therefore, we adjust the learning rate appropriately for each $\alpha$.", "In particular, the learning rates used are $\gamma \in \{10^{-4}, 10^{-5}, 10^{-6}\}$ for the respective $\alpha$ values.", "Other fixed hyperparameters include the number of Gaussian components $K = 2$, the context window length $\ell = 10$, and the subsampling threshold $t = 10^{-5}$.", "Similar to the setup in FASTTEXT, we use n-grams with $n = 3, 4, 5, 6$ to estimate the mean vectors.", "Qualitative Evaluation - Nearest neighbors We show that our embeddings learn the word semantics well by demonstrating meaningful nearest neighbors.", "Table 1 shows the nearest neighbors of polysemous words such as rock, star, and cell.", "We note that subword embeddings prefer words with overlapping characters as nearest neighbors.", "For instance, \"rock-y\", \"rockn\", and \"rock-\" are all close to the word \"rock\".", "For the purpose of demonstration, we only show words with meaningful 
variations and omit words with small character-based variations previously mentioned.", "However, all words shown are in the top-100 nearest words.", "We observe the separation in meanings for the multi-component case; for instance, one component of the word \"bank\" corresponds to a financial bank, whereas the other component corresponds to a river bank.", "The single-component case also has interesting behavior.", "We observe that the subword embeddings of polysemous words can represent both meanings.", "For instance, both \"lava-rock\" and \"rock-pop\" are among the closest words to \"rock\".", "Word Similarity Evaluation We evaluate our embeddings on several standard word similarity datasets, namely SL-999 (Hill et al., 2014), WS-353 (Finkelstein et al., 2002), MEN-3k (Bruni et al., 2014), MC-30 (Miller and Charles, 1991), RG-65 (Rubenstein and Goodenough, 1965), YP-130 (Yang and Powers, 2006), MTurk(-287, -771) (Radinsky et al., 2011; Halawi et al., 2012), and RW-2k (Luong et al., 2013).", "Each dataset contains a list of word pairs with a human score of how related or similar the two words are.", "We use the notation DATASET-NUM to denote the number of word pairs NUM in each evaluation set.", "We note that the dataset RW focuses more on infrequent words, and SimLex-999 focuses on the similarity of words rather than relatedness.", "We also compare PFT-GM with other multi-prototype embeddings in the literature using SCWS (Huang et al., 2012), a word similarity dataset that aims to measure the ability of embeddings to discern multiple meanings.", "We calculate the Spearman correlation (Spearman, 1904) between the labels and the scores generated by the embeddings.", "[Table 1: Nearest neighbors of polysemous words. Top panel (PFT-GM; word:component notation): rock:0 -> rocks:0, rocky:0, mudrock:0, rockscape:0, boulders:0, coutcrops:0; rock:1 -> punk:0, punk-rock:0, indie:0, pop-rock:0, pop-punk:0, indie-rock:0, band:1; bank:0 -> banks:0, banker:0, bankers:0, bankcard:0, Citibank:0, debits:0; bank:1 -> banks:1, river:0, riverbank:0, embanking:0, confluence:1; star:0 -> stars:0, stellar:0, nebula:0, starspot:0, stellas:0, constellation:1; star:1 -> stars:1, star-star:0, 5-stars:0, movie-star:0, mega-star:0, super-star:0; cell:0 -> cellular:0, acellular:0, lymphocytes:0, T-cells:0, cytes:0, leukocytes:0; cell:1 -> cells:1, cellular:0, cellular-phone:0, cellphone:0, transcellular:0; left:0 -> right:1, left-hand:0, right-left:0, left-right-left:0, right-hand:0, leftwards:0; left:1 -> leaving:0, leavings:0, remained:0, leave:1, enmained:0, leaving-age:0, sadly-departed:0. Bottom panel (FASTTEXT): rock -> rock-y, rockn, rock-, rock-funk, rock/, lava-rock, nu-rock, rock-pop, rock/ice, coral-rock; bank -> bank-, bank/, bank-account, bank., banky, bank-to-bank, banking, Bank, bank/cash, banks.; star -> movie-stars, star-planet, starsailor, Star, starsign; cell -> cell/tumour; left -> left/joined, leaving, left, right, leftsided, lefted, leftside.]", "The Spearman correlation is a rank-based correlation measure that assesses how well the scores describe the true labels.", "The scores we use are cosine-similarity scores between the mean vectors.", "In the case of Gaussian mixtures, we use the pairwise maximum score: $s(f, g) = \max_{i \in 1, \ldots, K} \max_{j \in 1, \ldots, K} \frac{\mu_{f,i} \cdot \mu_{g,j}}{\|\mu_{f,i}\| \cdot \|\mu_{g,j}\|}$ (6).", "The pair (i, j) that achieves the maximum cosine similarity corresponds to the Gaussian component pair that is the closest in meanings.", "Therefore, this similarity score yields the most related senses of a given word pair.", 
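The MAXSIM score of Equation (6) and the Spearman evaluation loop can be sketched as follows. The `means` lookup from a word to its (K, D) component mean vectors is a hypothetical interface; `scipy.stats.spearmanr` computes the rank correlation against the human scores.

```python
import numpy as np
from scipy.stats import spearmanr

def maxsim(mus_f, mus_g):
    """Eq. (6): maximum cosine similarity over all component pairs.
    mus_f, mus_g are (K, D) arrays of component mean vectors."""
    f = mus_f / np.linalg.norm(mus_f, axis=1, keepdims=True)
    g = mus_g / np.linalg.norm(mus_g, axis=1, keepdims=True)
    return float((f @ g.T).max())

def evaluate(pairs, human_scores, means):
    """Spearman correlation between MAXSIM scores and human ratings.
    `means` maps a word to its (K, D) component mean vectors."""
    model_scores = [maxsim(means[w1], means[w2]) for w1, w2 in pairs]
    return spearmanr(model_scores, human_scores).correlation
```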
of a given word pair.", "This score reduces to a cosine similarity in the Gaussian case (K = 1).", "Comparison Against Dictionary-Level Density Embeddings and FASTTEXT We compare our models against the dictionarylevel Gaussian and Gaussian mixture embeddings in Table 2 , with 50-dimensional and 300dimensional mean vectors.", "The 50-dimensional results for W2G and W2GM are obtained directly from Athiwaratkun and Wilson (2017) .", "For comparison, we use the public code 3 to train the 300dimensional W2G and W2GM models and the publicly available FASTTEXT model 4 .", "We calculate Spearman's correlations for each of the word similarity datasets.", "These datasets vary greatly in the number of word pairs; therefore, we mark each dataset with its size for visibil-ity.", "For a fair and objective comparison, we calculate a weighted average of the correlation scores for each model.", "Our PFT-GM achieves the highest average score among all competing models, outperforming both FASTTEXT and the dictionary-level embeddings W2G and W2GM.", "Our unimodal model PFT-G also outperforms the dictionary-level counterpart W2G and FASTTEXT.", "We note that the model W2GM appears quite strong according to Table 2 , beating PFT-GM on many word similarity datasets.", "However, the datasets that W2GM performs better than PFT-GM often have small sizes such as MC-30 or RG-65, where the Spearman's correlations are more subject to noise.", "Overall, PFT-GM outperforms W2GM by 3.1% and 8.7% in 300 and 50 dimensional models.", "In addition, PFT-G and PFT-GM also outperform FASTTEXT by 1.2% and 3.7% respectively.", "Comparison Against Multi-Prototype Models In Table 3 , we compare 50 and 300 dimensional PFT-GM models against the multi-prototype embeddings described in Section 2 and the existing multimodal density embeddings W2GM.", "We use the word similarity dataset SCWS (Huang et al., 2012) which contains words with potentially many meanings, and is a benchmark for distinguishing senses.", "We use the maximum similarity score (Equation 6), denoted as MAXSIM.", "AVESIM denotes the average of the similarity scores, rather than the maximum.", "We outperform the dictionary-based density embeddings W2GM in both 50 and 300 dimensions, demonstrating the benefits of subword information.", "Our model achieves state-of-the-art results, similar to that of Neelakantan et al.", "(2014) .", "Evaluation on Foreign Language Embeddings We evaluate the foreign-language embeddings on word similarity datasets in respective languages.", "We use Italian WORDSIM353 and Italian SIMLEX-999 (Leviant and Reichart, 2015) for Italian models, GUR350 and GUR65 (Gurevych, 2005) for German models, and French WORD-SIM353 (Finkelstein et al., 2002) for French models.", "For datasets GUR350 and GUR65, we use the results reported in the FASTTEXT publication (Bojanowski et al., 2016) .", "For other datasets, we train FASTTEXT models for comparison using the public code 5 on our text corpuses.", "We also train dictionary-level models W2G, and W2GM for comparison.", "Table 4 shows the Spearman's correlation results of our models.", "We outperform FASTTEXT on many word similarity benchmarks.", "Our results are also significantly better than the dictionary-based models, W2G and W2GM.", "We hypothesize that W2G and W2GM can perform better than the current reported results given proper pre-processing of words due to special characters such as accents.", "We investigate the nearest neighbors of polysemies in foreign languages and also observe clear sense 
separation.", "For example, piano in Italian can mean \"floor\" or \"slow\".", "These two meanings are reflected in the nearest neighbors where one component is close to piano-piano, pianod which mean \"slowly\" whereas the other component is close to piani (floors), istrutturazione (renovation) or infrastruttre (infrastructure).", "Table 5 shows additional results, demonstrating that the disentangled semantics can be observed in multiple languages.", "Qualitative Evaluation -Subword Decomposition One of the motivations for using subword information is the ability to handle out-of-vocabulary words.", "Another benefit is the ability to help improve the semantics of rare words via subword sharing.", "Due to an observation that text corpuses follow Zipf's power law (Zipf, 1949) , words at the tail of the occurrence distribution appears much less frequently.", "Training these words to have a good semantic representation is challenging if done at the word level alone.", "However, an ngram such as 'abnorm' is trained during both occurrences of \"abnormal\" and \"abnormality\" in the corpus, hence further augments both words's semantics.", "Figure 3 shows the contribution of n-grams to the final representation.", "We filter out to show only the n-grams with the top-5 and bottom-5 similarity scores.", "We observe that the final representations of both words align with n-grams \"abno\", \"bnor\", \"abnorm\", \"anbnor\", \"<abn\".", "In fact, both \"abnormal\" and \"abnormality\" share the same top-5 n-grams.", "Due to the fact that many rare words such as \"autobiographer\", \"circumnavigations\", or \"hypersensitivity\" are composed from many common sub-words, the n-gram structure can help improve the representation quality.", "Numbers of Components It is possible to train our approach with K > 2 mixture components; however, Athiwaratkun and Wilson (2017) observe that dictionary-level Gaussian mixtures with K = 3 do not overall improve word similarity results, even though these mixtures can discover 3 distinct senses for certain words.", "Indeed, while K > 2 in principle allows for greater flexibility than K = 2, most words can be very flexibly modelled with a mixture of two Gaussians, leading to K = 2 representing a good balance between flexibility and Occam's razor.", "Even for words with single meanings, our PFT model with K = 2 often learns richer representations than a K = 1 model.", "For example, the two mixture components can learn to cluster to-gether to form a more heavy tailed unimodal distribution which captures a word with one dominant meaning but with close relationships to a wide range of other words.", "In addition, we observe that our model with K components can capture more than K meanings.", "For instance, in K = 1 model, the word pairs (\"cell\", \"jail\") and (\"cell\", \"biology\") and (\"cell\", \"phone\") will all have positive similarity scores based on K = 1 model.", "In general, if a word has multiple meanings, these meanings are usually compressed into the linear substructure of the embeddings (Arora et al., 2016) .", "However, the pairs of non-dominant words often have lower similarity scores, which might not accurately reflect their true similarities.", "Conclusion and Future Work We have proposed models for probabilistic word representations equipped with flexible sub-word structures, suitable for rare and out-of-vocabulary words.", "The proposed probabilistic formulation incorporates uncertainty information and naturally allows one to uncover multiple meanings with 
multimodal density representations.", "Our models offer better semantic quality, outperforming competing models on word similarity benchmarks.", "Moreover, our multimodal density models can provide interpretable and disentangled representations, and are the first multi-prototype embeddings that can handle rare words.", "Future work includes an investigation into the trade-off between learning full covariance matrices for each word distribution, computational complexity, and performance.", "This direction can potentially have a great impact on tasks where the variance information is crucial, such as for hierarchical modeling with probability distributions (Athiwaratkun and Wilson, 2018) .", "Other future work involves co-training PFT on many languages.", "Currently, existing work on multi-lingual embeddings align the word semantics on pre-trained vectors (Smith et al., 2017) , which can be suboptimal due to polysemies.", "We envision that the multi-prototype nature can help disambiguate words with multiple meanings and facilitate semantic alignment." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "3.5", "4", "4.1", "4.2", "4.3", "4.3.1", "4.3.2", "4.4", "4.5", "5", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Probabilistic FastText", "Probabilistic Subword Representation", "Similarity Measure between Words", "Loss Function", "Energy Simplification", "Word Sampling", "Experiments", "Training Details", "Qualitative Evaluation -Nearest neighbors", "Word Similarity Evaluation", "Comparison Against Dictionary-Level Density Embeddings and FASTTEXT", "Comparison Against Multi-Prototype Models", "Evaluation on Foreign Language Embeddings", "Qualitative Evaluation -Subword Decomposition", "Numbers of Components", "Conclusion and Future Work" ] }
GEM-SciDuet-train-72#paper-1163#slide-4
Dense representation of words
[Slide figures: a 2D projection of word vectors in which synonyms such as vindicate, vindicates, exculpate, absolve, and exonerate cluster together, and "Country and Capital Vectors Projected by PCA", illustrating analogy structure such as China - Beijing ~ Japan - Tokyo.]
[Slide figures: a 2D projection of word vectors in which synonyms such as vindicate, vindicates, exculpate, absolve, and exonerate cluster together, and "Country and Capital Vectors Projected by PCA", illustrating analogy structure such as China - Beijing ~ Japan - Tokyo.]
[]
GEM-SciDuet-train-72#paper-1163#slide-6
1163
Probabilistic FastText for Multi-Sense Word Embeddings
We introduce Probabilistic FastText, a new model for word embeddings that can capture multiple word senses, sub-word structure, and uncertainty information. In particular, we represent each word with a Gaussian mixture density, where the mean of a mixture component is given by the sum of n-grams. This representation allows the model to share statistical strength across sub-word structures (e.g. Latin roots), producing accurate representations of rare, misspelt, or even unseen words. Moreover, each component of the mixture can capture a different word sense. Probabilistic FastText outperforms both FASTTEXT, which has no probabilistic model, and dictionary-level probabilistic embeddings, which do not incorporate subword structures, on several word-similarity benchmarks, including English RareWord and foreign language datasets. We also achieve state-of-the-art performance on benchmarks that measure the ability to discern different meanings. Thus, the proposed model is the first to achieve multi-sense representations while having enriched semantics on rare words.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191 ], "paper_content_text": [ "Introduction Word embeddings are foundational to natural language processing.", "In order to model language, we need word representations to contain as much semantic information as possible.", "Most research has focused on vector word embeddings, such as WORD2VEC (Mikolov et al., 2013a) , where words with similar meanings are mapped to nearby points in a vector space.", "Following the * Work done partly during internship at Amazon.", "seminal work of Mikolov et al.", "(2013a) , there have been numerous works looking to learn efficient word embeddings.", "One shortcoming with the above approaches to word embedding that are based on a predefined dictionary (termed as dictionary-based embeddings) is their inability to learn representations of rare words.", "To overcome this limitation, character-level word embeddings have been proposed.", "FASTTEXT (Bojanowski et al., 2016) is the state-of-the-art character-level approach to embeddings.", "In FASTTEXT, each word is modeled by a sum of vectors, with each vector representing an n-gram.", "The benefit of this approach is that the training process can then share strength across words composed of common roots.", "For example, with individual representations for \"circum\" and \"navigation\", we can construct an informative representation for \"circumnavigation\", which would otherwise appear too infrequently to learn a dictionary-level embedding.", "In addition to effectively modelling rare words, character-level embeddings can also represent slang or misspelled words, such as \"dogz\", and can share strength across different languages that share roots, e.g.", "Romance languages share latent roots.", "A different promising direction involves representing words with probability distributions, instead of point vectors.", "For example, Vilnis and McCallum (2014) represents words with Gaussian distributions, which can capture uncertainty information.", "Athiwaratkun and Wilson (2017) generalizes this approach to multimodal probability distributions, which can naturally represent words with different meanings.", "For example, the distribution for \"rock\" could have mass near the word \"jazz\" and \"pop\", but also \"stone\" and \"basalt\".", "Athiwaratkun and Wilson (2018) further developed this approach to learn hierarchical word representations: for example, the word \"music\" can be learned to have a broad distribution, which encapsulates the distributions for \"jazz\" and \"rock\".", "In this paper, we propose Probabilistic Fast-Text (PFT), which provides probabilistic characterlevel representations of words.", "The resulting word embeddings are highly expressive, 
yet straightforward and interpretable, with simple, efficient, and intuitive training procedures.", "PFT can model rare words, uncertainty information, hierarchical representations, and multiple word senses.", "In particular, we represent each word with a Gaussian or a Gaussian mixture density, which we name PFT-G and PFT-GM respectively.", "Each component of the mixture can represent different word senses, and the mean vectors of each component decompose into vectors of n-grams, to capture character-level information.", "We also derive an efficient energybased max-margin training procedure for PFT.", "We perform comparison with FASTTEXT as well as existing density word embeddings W2G (Gaussian) and W2GM (Gaussian mixture).", "Our models extract high-quality semantics based on multiple word-similarity benchmarks, including the rare word dataset.", "We obtain an average weighted improvement of 3.7% over FASTTEXT (Bojanowski et al., 2016) and 3.1% over the dictionary-level density-based models.", "We also observe meaningful nearest neighbors, particularly in the multimodal density case, where each mode captures a distinct meaning.", "Our models are also directly portable to foreign languages without any hyperparameter modification, where we observe strong performance, outperforming FAST-TEXT on many foreign word similarity datasets.", "Our multimodal word representation can also disentangle meanings, and is able to separate different senses in foreign polysemies.", "In particular, our models attain state-of-the-art performance on SCWS, a benchmark to measure the ability to separate different word meanings, achieving 1.0% improvement over a recent density embedding model W2GM (Athiwaratkun and Wilson, 2017) .", "To the best of our knowledge, we are the first to develop multi-sense embeddings with high semantic quality for rare words.", "Our code and embeddings are publicly available.", "1 Related Work Early word embeddings which capture semantic information include Bengio et al.", "(2003) , Col-1 https://github.com/benathi/multisense-prob-fasttext lobert and Weston (2008 ), and Mikolov et al.", "(2011 ).", "Later, Mikolov et al.", "(2013a developed the popular WORD2VEC method, which proposes a log-linear model and negative sampling approach that efficiently extracts rich semantics from text.", "Another popular approach GLOVE learns word embeddings by factorizing co-occurrence matrices (Pennington et al., 2014) .", "Recently there has been a surge of interest in making dictionary-based word embeddings more flexible.", "This flexibility has valuable applications in many end-tasks such as language modeling (Kim et al., 2016) , named entity recognition (Kuru et al., 2016) , and machine translation (Zhao and Zhang, 2016; Lee et al., 2017) , where unseen words are frequent and proper handling of these words can greatly improve the performance.", "These works focus on modeling subword information in neural networks for tasks such as language modeling.", "Besides vector embeddings, there is recent work on multi-prototype embeddings where each word is represented by multiple vectors.", "The learning approach involves using a cluster centroid of context vectors (Huang et al., 2012) , or adapting the skip-gram model to learn multiple latent representations (Tian et al., 2014) .", "Neelakantan et al.", "(2014) furthers adapts skip-gram with a non-parametric approach to learn the embeddings with an arbitrary number of senses per word.", "incorporates an external dataset WORDNET to learn sense vectors.", 
"We compare these models with our multimodal embeddings in Section 4.", "Probabilistic FastText We introduce Probabilistic FastText, which combines a probabilistic word representation with the ability to capture subword structure.", "We describe the probabilistic subword representation in Section 3.1.", "We then describe the similarity measure and the loss function used to train the embeddings in Sections 3.2 and 3.3.", "We conclude by briefly presenting a simplified version of the energy function for isotropic Gaussian representations (Section 3.4), and the negative sampling scheme we use in training (Section 3.5).", "Probabilistic Subword Representation We represent each word with a Gaussian mixture with K Gaussian components.", "That is, a word w is associated with a density function f ( x) = K i=1 p w,i N (x; µ w,i , Σ w,i ) where {µ w,i } K k=1 are the mean vectors and {Σ w,i } are the covariance matrices, and {p w,i } K k=1 are the component probabilities which sum to 1.", "The mean vectors of Gaussian components hold much of the semantic information in density embeddings.", "While these models are successful based on word similarity and entailment benchmarks (Vilnis and McCallum, 2014; Athiwaratkun and Wilson, 2017) , the mean vectors are often dictionary-level, which can lead to poor semantic estimates for rare words, or the inability to handle words outside the training corpus.", "We propose using subword structures to estimate the mean vectors.", "We outline the formulation below.", "For word w, we estimate the mean vector µ w with the average over n-gram vectors and its dictionary-level vector.", "That is, µ w = 1 |N G w | + 1   v w + g∈N Gw z g   (1) where z g is a vector associated with an n-gram g, v w is the dictionary representation of word w, and N G w is a set of n-grams of word w. 
Examples of 3,4-grams for a word \"beautiful\", including the beginning-of-word character ' ' and end-of-word character ' ', are: • 3-grams: be, bea, eau, aut, uti, tif, ful, ul • 4-grams: bea, beau .., iful ,ful This structure is similar to that of FASTTEXT (Bojanowski et al., 2016) ; however, we note that FASTTEXT uses single-prototype deterministic embeddings as well as a training approach that maximizes the negative log-likelihood, whereas we use a multi-prototype probabilistic embedding and for training we maximize the similarity between the words' probability densities, as described in Sections 3.2 and 3.3 Figure 1a depicts the subword structure for the mean vector.", "Figure 1b and 1c depict our models, Gaussian probabilistic FASTTEXT (PFT-G) and Gaussian mixture probabilistic FASTTEXT (PFT-GM).", "In the Gaussian case, we represent each mean vector with a subword estimation.", "For the Gaussian mixture case, we represent one Gaussian component's mean vector with the subword structure whereas other components' mean vectors are dictionary-based.", "This model choice to use dictionary-based mean vectors for other components is to reduce to constraint imposed by the subword structure and promote independence for meaning discovery.", "Similarity Measure between Words Traditionally, if words are represented by vectors, a common similarity metric is a dot product.", "In the case where words are represented by distribution functions, we use the generalized dot product in Hilbert space ·, · L 2 , which is called the expected likelihood kernel (Jebara et al., 2004) .", "We define the energy E(f, g) between two words f and g to be E(f, g) = log f, g L 2 = log f (x)g(x) dx.", "With Gaussian mixtures f (x) = K i=1 p i N (x; µ f,i , Σ f,i ) and g(x) = K i=1 q i N (x; µ g,i , Σ g,i ), K i=1 p i = 1, and K i=1 q i = 1 , the energy has a closed form: E(f, g) = log K j=1 K i=1 p i q j e ξ i,j (2) where ξ j,j is the partial energy which corresponds to the similarity between component i of the first word f and component j of the second word g. 2 ξ i,j ≡ log N (0; µ f,i − µ g,j , Σ f,i + Σ g,j ) = − 1 2 log det(Σ f,i + Σ g,j ) − D 2 log(2π) − 1 2 ( µ f,i − µ g,j ) (Σ f,i + Σ g,j ) −1 ( µ f,i − µ g,j ) (3) Figure 2 demonstrates the partial energies among the Gaussian components of two words.", "Interaction between GM components rock:0 pop:0 pop:1 rock:1 ⇠ 0,1 ⇠ 0,0 ⇠ 1,1 ⇠ 1, Loss Function The model parameters that we seek to learn are v w for each word w and z g for each n-gram g. We train the model by pushing the energy of a true context pair w and c to be higher than the negative context pair w and n by a margin m. 
We use Adagrad (Duchi et al., 2011) to minimize the following loss to achieve this outcome: L(f, g) = max [0, m − E(f, g) + E(f, n)] .", "(4) We describe how to sample words as well as its positive and negative contexts in Section 3.5.", "This loss function together with the Gaussian mixture model with K > 1 has the ability to extract multiple senses of words.", "That is, for a word with multiple meanings, we can observe each mode to represent a distinct meaning.", "For instance, one density mode of \"star\" is close to the densities of \"celebrity\" and \"hollywood\" whereas another mode of \"star\" is near the densities of \"constellation\" and \"galaxy\".", "Energy Simplification In theory, it can be beneficial to have covariance matrices as learnable parameters.", "In practice, Athiwaratkun and Wilson (2017) observe that spherical covariances often perform on par with diagonal covariances with much less computational resources.", "Using spherical covariances for each component, we can further simplify the energy function as follows: ξ i,j = − α 2 · ||µ f,i − µ g,j || 2 , (5) where the hyperparameter α is the scale of the inverse covariance term in Equation 3.", "We note that Equation 5 is equivalent to Equation 3 up to an additive constant given that the covariance matrices are spherical and the same for all components.", "Word Sampling To generate a context word c of a given word w, we pick a nearby word within a context window of a fixed length .", "We also use a word sampling technique similar to Mikolov et al.", "(2013b) .", "This subsampling procedure selects words for training with lower probabilities if they appear frequently.", "This technique has an effect of reducing the importance of words such as 'the', 'a', 'to' which can be predominant in a text corpus but are not as meaningful as other less frequent words such as 'city', 'capital', 'animal', etc.", "In particular, word w has probability P (w) = 1 − t/f (w) where f (w) is the frequency of word w in the corpus and t is the frequency threshold.", "A negative context word is selected using a distribution P n (w) ∝ U (w) 3/4 where U (w) is a unigram probability of word w. 
The exponent 3/4 also diminishes the importance of frequent words and shifts the training focus to other less frequent words.", "Experiments We have proposed a probabilistic FASTTEXT model which combines the flexibility of subword structure with the density embedding approach.", "In this section, we show that our probabilistic representation with subword mean vectors with the simplified energy function outperforms many word similarity baselines and provides disentangled meanings for polysemies.", "First, we describe the training details in Section 4.1.", "We provide qualitative evaluation in Section 4.2, showing meaningful nearest neighbors for the Gaussian embeddings, as well as the ability to capture multiple meanings by Gaussian mixtures.", "Our quantitative evaluation in Section 4.3 demonstrates strong performance against the baseline models FASTTEXT (Bojanowski et al., 2016) and the dictionary-level Gaussian (W2G) (Vilnis and McCallum, 2014) and Gaussian mixture embeddings (Athiwaratkun and Wilson, 2017) (W2GM).", "We train our models on foreign language corpuses and show competitive results on foreign word similarity benchmarks in Section 4.4.", "Finally, we explain the importance of the n-gram structures for semantic sharing in Section 4.5.", "Training Details We train our models on both English and foreign language datasets.", "For English, we use the concatenation of UKWAC and WACKYPEDIA (Baroni et al., 2009) which consists of 3.376 billion words.", "We filter out word types that occur fewer than 5 times which results in a vocabulary size of 2,677,466.", "For foreign languages, we demonstrate the training of our model on French, German, and Italian text corpuses.", "We note that our model should be applicable for other languages as well.", "We use FRWAC (French), DEWAC (German), ITWAC (Italian) datasets (Baroni et al., 2009 ) for text corpuses, consisting of 1.634, 1.716 and 1.955 billion words respectively.", "We use the same threshold, filtering out words that occur less than 5 times in each corpus.", "We have dictionary sizes of 1.3, 2.7, and 1.4 million words for FRWAC, DEWAC, and ITWAC.", "We adjust the hyperparameters on the English corpus and use them for foreign languages.", "Note that the adjustable parameters for our models are the loss margin m in Equation 4 and the scale α in Equation 5.", "We search for the optimal hyperparameters in a grid m ∈ {0.01, 0.1, 1, 10, 100} and α ∈ { 1 5×10 −3 , 1 10 −3 , 1 2×10 −4 , 1 1×10 −4 } on our English corpus.", "The hyperpameter α affects the scale of the loss function; therefore, we adjust the learning rate appropriately for each α.", "In particular, the learning rates used are γ = {10 −4 , 10 −5 , 10 −6 } for the respective α values.", "Other fixed hyperparameters include the number of Gaussian components K = 2, the context window length = 10 and the subsampling threshold t = 10 −5 .", "Similar to the setup in FAST-TEXT, we use n-grams where n = 3, 4, 5, 6 to estimate the mean vectors.", "Qualitative Evaluation -Nearest neighbors We show that our embeddings learn the word semantics well by demonstrating meaningful nearest neighbors.", "Table 1 shows examples of polysemous words such as rock, star, and cell.", "Table 1 shows the nearest neighbors of polysemous words.", "We note that subword embeddings prefer words with overlapping characters as nearest neighbors.", "For instance, \"rock-y\", \"rockn\", and \"rock\" are both close to the word \"rock\".", "For the purpose of demonstration, we only show words with meaningful 
variations and omit words with small character-based variations previously mentioned.", "However, all words shown are in the top-100 nearest words.", "We observe the separation in meanings for the multi-component case; for instance, one component of the word \"bank\" corresponds to a financial bank whereas the other component corresponds to a river bank.", "The single-component case also has interesting behavior.", "We observe that the subword embeddings of polysemous words can represent both meanings.", "For instance, both \"lava-rock\" and \"rock-pop\" are among the closest words to \"rock\".", "Word Similarity Evaluation We evaluate our embeddings on several standard word similarity datasets, namely, SL-999 (Hill et al., 2014) , WS-353 (Finkelstein et al., 2002) , MEN-3k (Bruni et al., 2014) , MC-30 (Miller and Charles, 1991) , RG-65 (Rubenstein and Goodenough, 1965) , YP-130 (Yang and Powers, 2006) , MTurk(-287,-771) (Radinsky et al., 2011; Halawi et al., 2012) , and RW-2k (Luong et al., 2013) .", "Each dataset contains a list of word pairs with a human score of how related or similar the two words are.", "We use the notation DATASET-NUM to denote the number of word pairs NUM in each evaluation set.", "We note that the dataset RW focuses more on infrequent words and SimLex-999 focuses on the similarity of words rather than relatedness.", "We also compare PFT-GM with other multi-prototype embeddings in the literature using SCWS (Huang et al., 2012) , a word similarity dataset that is aimed to measure the ability of embeddings to discern multiple meanings.", "We calculate the Spearman correlation (Spearman, 1904) between the labels and our scores gen-Word Co.", "Nearest Neighbors rock 0 rock:0, rocks:0, rocky:0, mudrock:0, rockscape:0, boulders:0 , coutcrops:0, rock 1 rock:1, punk:0, punk-rock:0, indie:0, pop-rock:0, pop-punk:0, indie-rock:0, band:1 bank 0 bank:0, banks:0, banker:0, bankers:0, bankcard:0, Citibank:0, debits:0 bank 1 bank:1, banks:1, river:0, riverbank:0, embanking:0, banks:0, confluence:1 star 0 stars:0, stellar:0, nebula:0, starspot:0, stars.", ":0, stellas:0, constellation:1 star 1 star:1, stars:1, star-star:0, 5-stars:0, movie-star:0, mega-star:0, super-star:0 cell 0 cell:0, cellular:0, acellular:0, lymphocytes:0, T-cells:0, cytes:0, leukocytes:0 cell 1 cell:1, cells:1, cellular:0, cellular-phone:0, cellphone:0, transcellular:0 left 0 left:0, right:1, left-hand:0, right-left:0, left-right-left:0, right-hand:0, leftwards:0 left 1 left:1, leaving:0, leavings:0, remained:0, leave:1, enmained:0, leaving-age:0, sadly-departed:0 Word Nearest Neighbors rock rock, rock-y, rockn, rock-, rock-funk, rock/, lava-rock, nu-rock, rock-pop, rock/ice, coral-rock bank bank-, bank/, bank-account, bank., banky, bank-to-bank, banking, Bank, bank/cash, banks.", "** star movie-stars, star-planet, starsailor, Star, starsign, cell/tumour, left/joined, leaving, left, right, right, left) and, leftsided, lefted, leftside erated by the embeddings.", "The Spearman correlation is a rank-based correlation measure that assesses how well the scores describe the true labels.", "The scores we use are cosine-similarity scores between the mean vectors.", "In the case of Gaussian mixtures, we use the pairwise maximum score: s(f, g) = max i∈1,...,K max j∈1,...,K µ f,i · µ g,j ||µ f,i || · ||µ g,j || .", "(6) The pair (i, j) that achieves the maximum cosine similarity corresponds to the Gaussian component pair that is the closest in meanings.", "Therefore, this similarity score yields the most related senses 
of a given word pair.", "This score reduces to a cosine similarity in the Gaussian case (K = 1).", "Comparison Against Dictionary-Level Density Embeddings and FASTTEXT We compare our models against the dictionarylevel Gaussian and Gaussian mixture embeddings in Table 2 , with 50-dimensional and 300dimensional mean vectors.", "The 50-dimensional results for W2G and W2GM are obtained directly from Athiwaratkun and Wilson (2017) .", "For comparison, we use the public code 3 to train the 300dimensional W2G and W2GM models and the publicly available FASTTEXT model 4 .", "We calculate Spearman's correlations for each of the word similarity datasets.", "These datasets vary greatly in the number of word pairs; therefore, we mark each dataset with its size for visibil-ity.", "For a fair and objective comparison, we calculate a weighted average of the correlation scores for each model.", "Our PFT-GM achieves the highest average score among all competing models, outperforming both FASTTEXT and the dictionary-level embeddings W2G and W2GM.", "Our unimodal model PFT-G also outperforms the dictionary-level counterpart W2G and FASTTEXT.", "We note that the model W2GM appears quite strong according to Table 2 , beating PFT-GM on many word similarity datasets.", "However, the datasets that W2GM performs better than PFT-GM often have small sizes such as MC-30 or RG-65, where the Spearman's correlations are more subject to noise.", "Overall, PFT-GM outperforms W2GM by 3.1% and 8.7% in 300 and 50 dimensional models.", "In addition, PFT-G and PFT-GM also outperform FASTTEXT by 1.2% and 3.7% respectively.", "Comparison Against Multi-Prototype Models In Table 3 , we compare 50 and 300 dimensional PFT-GM models against the multi-prototype embeddings described in Section 2 and the existing multimodal density embeddings W2GM.", "We use the word similarity dataset SCWS (Huang et al., 2012) which contains words with potentially many meanings, and is a benchmark for distinguishing senses.", "We use the maximum similarity score (Equation 6), denoted as MAXSIM.", "AVESIM denotes the average of the similarity scores, rather than the maximum.", "We outperform the dictionary-based density embeddings W2GM in both 50 and 300 dimensions, demonstrating the benefits of subword information.", "Our model achieves state-of-the-art results, similar to that of Neelakantan et al.", "(2014) .", "Evaluation on Foreign Language Embeddings We evaluate the foreign-language embeddings on word similarity datasets in respective languages.", "We use Italian WORDSIM353 and Italian SIMLEX-999 (Leviant and Reichart, 2015) for Italian models, GUR350 and GUR65 (Gurevych, 2005) for German models, and French WORD-SIM353 (Finkelstein et al., 2002) for French models.", "For datasets GUR350 and GUR65, we use the results reported in the FASTTEXT publication (Bojanowski et al., 2016) .", "For other datasets, we train FASTTEXT models for comparison using the public code 5 on our text corpuses.", "We also train dictionary-level models W2G, and W2GM for comparison.", "Table 4 shows the Spearman's correlation results of our models.", "We outperform FASTTEXT on many word similarity benchmarks.", "Our results are also significantly better than the dictionary-based models, W2G and W2GM.", "We hypothesize that W2G and W2GM can perform better than the current reported results given proper pre-processing of words due to special characters such as accents.", "We investigate the nearest neighbors of polysemies in foreign languages and also observe clear sense 
separation.", "For example, piano in Italian can mean \"floor\" or \"slow\".", "These two meanings are reflected in the nearest neighbors where one component is close to piano-piano, pianod which mean \"slowly\" whereas the other component is close to piani (floors), istrutturazione (renovation) or infrastruttre (infrastructure).", "Table 5 shows additional results, demonstrating that the disentangled semantics can be observed in multiple languages.", "Qualitative Evaluation -Subword Decomposition One of the motivations for using subword information is the ability to handle out-of-vocabulary words.", "Another benefit is the ability to help improve the semantics of rare words via subword sharing.", "Due to an observation that text corpuses follow Zipf's power law (Zipf, 1949) , words at the tail of the occurrence distribution appears much less frequently.", "Training these words to have a good semantic representation is challenging if done at the word level alone.", "However, an ngram such as 'abnorm' is trained during both occurrences of \"abnormal\" and \"abnormality\" in the corpus, hence further augments both words's semantics.", "Figure 3 shows the contribution of n-grams to the final representation.", "We filter out to show only the n-grams with the top-5 and bottom-5 similarity scores.", "We observe that the final representations of both words align with n-grams \"abno\", \"bnor\", \"abnorm\", \"anbnor\", \"<abn\".", "In fact, both \"abnormal\" and \"abnormality\" share the same top-5 n-grams.", "Due to the fact that many rare words such as \"autobiographer\", \"circumnavigations\", or \"hypersensitivity\" are composed from many common sub-words, the n-gram structure can help improve the representation quality.", "Numbers of Components It is possible to train our approach with K > 2 mixture components; however, Athiwaratkun and Wilson (2017) observe that dictionary-level Gaussian mixtures with K = 3 do not overall improve word similarity results, even though these mixtures can discover 3 distinct senses for certain words.", "Indeed, while K > 2 in principle allows for greater flexibility than K = 2, most words can be very flexibly modelled with a mixture of two Gaussians, leading to K = 2 representing a good balance between flexibility and Occam's razor.", "Even for words with single meanings, our PFT model with K = 2 often learns richer representations than a K = 1 model.", "For example, the two mixture components can learn to cluster to-gether to form a more heavy tailed unimodal distribution which captures a word with one dominant meaning but with close relationships to a wide range of other words.", "In addition, we observe that our model with K components can capture more than K meanings.", "For instance, in K = 1 model, the word pairs (\"cell\", \"jail\") and (\"cell\", \"biology\") and (\"cell\", \"phone\") will all have positive similarity scores based on K = 1 model.", "In general, if a word has multiple meanings, these meanings are usually compressed into the linear substructure of the embeddings (Arora et al., 2016) .", "However, the pairs of non-dominant words often have lower similarity scores, which might not accurately reflect their true similarities.", "Conclusion and Future Work We have proposed models for probabilistic word representations equipped with flexible sub-word structures, suitable for rare and out-of-vocabulary words.", "The proposed probabilistic formulation incorporates uncertainty information and naturally allows one to uncover multiple meanings with 
multimodal density representations.", "Our models offer better semantic quality, outperforming competing models on word similarity benchmarks.", "Moreover, our multimodal density models can provide interpretable and disentangled representations, and are the first multi-prototype embeddings that can handle rare words.", "Future work includes an investigation into the trade-off between learning full covariance matrices for each word distribution, computational complexity, and performance.", "This direction can potentially have a great impact on tasks where the variance information is crucial, such as for hierarchical modeling with probability distributions (Athiwaratkun and Wilson, 2018) .", "Other future work involves co-training PFT on many languages.", "Currently, existing work on multi-lingual embeddings align the word semantics on pre-trained vectors (Smith et al., 2017) , which can be suboptimal due to polysemies.", "We envision that the multi-prototype nature can help disambiguate words with multiple meanings and facilitate semantic alignment." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "3.5", "4", "4.1", "4.2", "4.3", "4.3.1", "4.3.2", "4.4", "4.5", "5", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Probabilistic FastText", "Probabilistic Subword Representation", "Similarity Measure between Words", "Loss Function", "Energy Simplification", "Word Sampling", "Experiments", "Training Details", "Qualitative Evaluation -Nearest neighbors", "Word Similarity Evaluation", "Comparison Against Dictionary-Level Density Embeddings and FASTTEXT", "Comparison Against Multi-Prototype Models", "Evaluation on Foreign Language Embeddings", "Qualitative Evaluation -Subword Decomposition", "Numbers of Components", "Conclusion and Future Work" ] }
GEM-SciDuet-train-72#paper-1163#slide-6
Similarity score energy between
vector space function space
vector space function space
[]
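The slide above contrasts similarity in vector space with similarity in function space. As a rough sketch under assumed spherical Gaussian densities: the vector-space score is an ordinary dot product, while the function-space score is the (log) expected likelihood kernel between two densities, which for Gaussians has a known closed form. Function names here are illustrative.

```python
import numpy as np

def dot_similarity(u, v):
    # Vector-space similarity: an ordinary dot product.
    return float(u @ v)

def gaussian_log_elk(mu1, sig1, mu2, sig2):
    """Function-space similarity: the log expected likelihood kernel
    log \int N(x; mu1, sig1^2 I) N(x; mu2, sig2^2 I) dx, which has the
    closed form log N(mu1; mu2, (sig1^2 + sig2^2) I)."""
    d = mu1.shape[0]
    var = sig1 ** 2 + sig2 ** 2
    sq = float(np.sum((mu1 - mu2) ** 2))
    return -0.5 * (d * np.log(2 * np.pi * var) + sq / var)
```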
GEM-SciDuet-train-72#paper-1163#slide-7
1163
Probabilistic FastText for Multi-Sense Word Embeddings
We introduce Probabilistic FastText, a new model for word embeddings that can capture multiple word senses, sub-word structure, and uncertainty information. In particular, we represent each word with a Gaussian mixture density, where the mean of a mixture component is given by the sum of n-grams. This representation allows the model to share statistical strength across sub-word structures (e.g. Latin roots), producing accurate representations of rare, misspelt, or even unseen words. Moreover, each component of the mixture can capture a different word sense. Probabilistic FastText outperforms both FASTTEXT, which has no probabilistic model, and dictionary-level probabilistic embeddings, which do not incorporate subword structures, on several word-similarity benchmarks, including English RareWord and foreign language datasets. We also achieve state-of-the-art performance on benchmarks that measure the ability to discern different meanings. Thus, the proposed model is the first to achieve multi-sense representations while having enriched semantics on rare words.
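The abstract states that the mean of one mixture component is built from n-grams; in the paper this is an average of the word's dictionary-level vector and its n-gram vectors. A hedged sketch follows; the lookup-table names `dict_vecs` and `ngram_vecs` are assumptions, and the n-gram set is assumed to come from a helper like the `char_ngrams` sketch above.

```python
import numpy as np

def subword_mean(word, ngrams, dict_vecs, ngram_vecs):
    """Mean vector of the subword-based mixture component: the average
    of the word's dictionary-level vector (if the word is in-vocabulary)
    and its n-gram vectors, in the spirit of the paper's Equation 1.
    Out-of-table n-grams are simply skipped in this sketch."""
    vecs = [ngram_vecs[g] for g in ngrams if g in ngram_vecs]
    if word in dict_vecs:
        vecs.append(dict_vecs[word])
    return np.mean(np.stack(vecs), axis=0)
```

Because the mean falls back to n-gram vectors alone, the same function also yields a representation for out-of-vocabulary words, matching the motivation given in the abstract.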
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191 ], "paper_content_text": [ "Introduction Word embeddings are foundational to natural language processing.", "In order to model language, we need word representations to contain as much semantic information as possible.", "Most research has focused on vector word embeddings, such as WORD2VEC (Mikolov et al., 2013a) , where words with similar meanings are mapped to nearby points in a vector space.", "Following the * Work done partly during internship at Amazon.", "seminal work of Mikolov et al.", "(2013a) , there have been numerous works looking to learn efficient word embeddings.", "One shortcoming with the above approaches to word embedding that are based on a predefined dictionary (termed as dictionary-based embeddings) is their inability to learn representations of rare words.", "To overcome this limitation, character-level word embeddings have been proposed.", "FASTTEXT (Bojanowski et al., 2016) is the state-of-the-art character-level approach to embeddings.", "In FASTTEXT, each word is modeled by a sum of vectors, with each vector representing an n-gram.", "The benefit of this approach is that the training process can then share strength across words composed of common roots.", "For example, with individual representations for \"circum\" and \"navigation\", we can construct an informative representation for \"circumnavigation\", which would otherwise appear too infrequently to learn a dictionary-level embedding.", "In addition to effectively modelling rare words, character-level embeddings can also represent slang or misspelled words, such as \"dogz\", and can share strength across different languages that share roots, e.g.", "Romance languages share latent roots.", "A different promising direction involves representing words with probability distributions, instead of point vectors.", "For example, Vilnis and McCallum (2014) represents words with Gaussian distributions, which can capture uncertainty information.", "Athiwaratkun and Wilson (2017) generalizes this approach to multimodal probability distributions, which can naturally represent words with different meanings.", "For example, the distribution for \"rock\" could have mass near the word \"jazz\" and \"pop\", but also \"stone\" and \"basalt\".", "Athiwaratkun and Wilson (2018) further developed this approach to learn hierarchical word representations: for example, the word \"music\" can be learned to have a broad distribution, which encapsulates the distributions for \"jazz\" and \"rock\".", "In this paper, we propose Probabilistic Fast-Text (PFT), which provides probabilistic characterlevel representations of words.", "The resulting word embeddings are highly expressive, 
yet straightforward and interpretable, with simple, efficient, and intuitive training procedures.", "PFT can model rare words, uncertainty information, hierarchical representations, and multiple word senses.", "In particular, we represent each word with a Gaussian or a Gaussian mixture density, which we name PFT-G and PFT-GM respectively.", "Each component of the mixture can represent different word senses, and the mean vectors of each component decompose into vectors of n-grams, to capture character-level information.", "We also derive an efficient energybased max-margin training procedure for PFT.", "We perform comparison with FASTTEXT as well as existing density word embeddings W2G (Gaussian) and W2GM (Gaussian mixture).", "Our models extract high-quality semantics based on multiple word-similarity benchmarks, including the rare word dataset.", "We obtain an average weighted improvement of 3.7% over FASTTEXT (Bojanowski et al., 2016) and 3.1% over the dictionary-level density-based models.", "We also observe meaningful nearest neighbors, particularly in the multimodal density case, where each mode captures a distinct meaning.", "Our models are also directly portable to foreign languages without any hyperparameter modification, where we observe strong performance, outperforming FAST-TEXT on many foreign word similarity datasets.", "Our multimodal word representation can also disentangle meanings, and is able to separate different senses in foreign polysemies.", "In particular, our models attain state-of-the-art performance on SCWS, a benchmark to measure the ability to separate different word meanings, achieving 1.0% improvement over a recent density embedding model W2GM (Athiwaratkun and Wilson, 2017) .", "To the best of our knowledge, we are the first to develop multi-sense embeddings with high semantic quality for rare words.", "Our code and embeddings are publicly available.", "1 Related Work Early word embeddings which capture semantic information include Bengio et al.", "(2003) , Col-1 https://github.com/benathi/multisense-prob-fasttext lobert and Weston (2008 ), and Mikolov et al.", "(2011 ).", "Later, Mikolov et al.", "(2013a developed the popular WORD2VEC method, which proposes a log-linear model and negative sampling approach that efficiently extracts rich semantics from text.", "Another popular approach GLOVE learns word embeddings by factorizing co-occurrence matrices (Pennington et al., 2014) .", "Recently there has been a surge of interest in making dictionary-based word embeddings more flexible.", "This flexibility has valuable applications in many end-tasks such as language modeling (Kim et al., 2016) , named entity recognition (Kuru et al., 2016) , and machine translation (Zhao and Zhang, 2016; Lee et al., 2017) , where unseen words are frequent and proper handling of these words can greatly improve the performance.", "These works focus on modeling subword information in neural networks for tasks such as language modeling.", "Besides vector embeddings, there is recent work on multi-prototype embeddings where each word is represented by multiple vectors.", "The learning approach involves using a cluster centroid of context vectors (Huang et al., 2012) , or adapting the skip-gram model to learn multiple latent representations (Tian et al., 2014) .", "Neelakantan et al.", "(2014) furthers adapts skip-gram with a non-parametric approach to learn the embeddings with an arbitrary number of senses per word.", "incorporates an external dataset WORDNET to learn sense vectors.", 
"We compare these models with our multimodal embeddings in Section 4.", "Probabilistic FastText We introduce Probabilistic FastText, which combines a probabilistic word representation with the ability to capture subword structure.", "We describe the probabilistic subword representation in Section 3.1.", "We then describe the similarity measure and the loss function used to train the embeddings in Sections 3.2 and 3.3.", "We conclude by briefly presenting a simplified version of the energy function for isotropic Gaussian representations (Section 3.4), and the negative sampling scheme we use in training (Section 3.5).", "Probabilistic Subword Representation We represent each word with a Gaussian mixture with K Gaussian components.", "That is, a word w is associated with a density function f ( x) = K i=1 p w,i N (x; µ w,i , Σ w,i ) where {µ w,i } K k=1 are the mean vectors and {Σ w,i } are the covariance matrices, and {p w,i } K k=1 are the component probabilities which sum to 1.", "The mean vectors of Gaussian components hold much of the semantic information in density embeddings.", "While these models are successful based on word similarity and entailment benchmarks (Vilnis and McCallum, 2014; Athiwaratkun and Wilson, 2017) , the mean vectors are often dictionary-level, which can lead to poor semantic estimates for rare words, or the inability to handle words outside the training corpus.", "We propose using subword structures to estimate the mean vectors.", "We outline the formulation below.", "For word w, we estimate the mean vector µ w with the average over n-gram vectors and its dictionary-level vector.", "That is, µ w = 1 |N G w | + 1   v w + g∈N Gw z g   (1) where z g is a vector associated with an n-gram g, v w is the dictionary representation of word w, and N G w is a set of n-grams of word w. 
Examples of 3,4-grams for a word \"beautiful\", including the beginning-of-word character ' ' and end-of-word character ' ', are: • 3-grams: be, bea, eau, aut, uti, tif, ful, ul • 4-grams: bea, beau .., iful ,ful This structure is similar to that of FASTTEXT (Bojanowski et al., 2016) ; however, we note that FASTTEXT uses single-prototype deterministic embeddings as well as a training approach that maximizes the negative log-likelihood, whereas we use a multi-prototype probabilistic embedding and for training we maximize the similarity between the words' probability densities, as described in Sections 3.2 and 3.3 Figure 1a depicts the subword structure for the mean vector.", "Figure 1b and 1c depict our models, Gaussian probabilistic FASTTEXT (PFT-G) and Gaussian mixture probabilistic FASTTEXT (PFT-GM).", "In the Gaussian case, we represent each mean vector with a subword estimation.", "For the Gaussian mixture case, we represent one Gaussian component's mean vector with the subword structure whereas other components' mean vectors are dictionary-based.", "This model choice to use dictionary-based mean vectors for other components is to reduce to constraint imposed by the subword structure and promote independence for meaning discovery.", "Similarity Measure between Words Traditionally, if words are represented by vectors, a common similarity metric is a dot product.", "In the case where words are represented by distribution functions, we use the generalized dot product in Hilbert space ·, · L 2 , which is called the expected likelihood kernel (Jebara et al., 2004) .", "We define the energy E(f, g) between two words f and g to be E(f, g) = log f, g L 2 = log f (x)g(x) dx.", "With Gaussian mixtures f (x) = K i=1 p i N (x; µ f,i , Σ f,i ) and g(x) = K i=1 q i N (x; µ g,i , Σ g,i ), K i=1 p i = 1, and K i=1 q i = 1 , the energy has a closed form: E(f, g) = log K j=1 K i=1 p i q j e ξ i,j (2) where ξ j,j is the partial energy which corresponds to the similarity between component i of the first word f and component j of the second word g. 2 ξ i,j ≡ log N (0; µ f,i − µ g,j , Σ f,i + Σ g,j ) = − 1 2 log det(Σ f,i + Σ g,j ) − D 2 log(2π) − 1 2 ( µ f,i − µ g,j ) (Σ f,i + Σ g,j ) −1 ( µ f,i − µ g,j ) (3) Figure 2 demonstrates the partial energies among the Gaussian components of two words.", "Interaction between GM components rock:0 pop:0 pop:1 rock:1 ⇠ 0,1 ⇠ 0,0 ⇠ 1,1 ⇠ 1, Loss Function The model parameters that we seek to learn are v w for each word w and z g for each n-gram g. We train the model by pushing the energy of a true context pair w and c to be higher than the negative context pair w and n by a margin m. 
We use Adagrad (Duchi et al., 2011) to minimize the following loss to achieve this outcome: L(f, g) = max [0, m − E(f, g) + E(f, n)] .", "(4) We describe how to sample words as well as its positive and negative contexts in Section 3.5.", "This loss function together with the Gaussian mixture model with K > 1 has the ability to extract multiple senses of words.", "That is, for a word with multiple meanings, we can observe each mode to represent a distinct meaning.", "For instance, one density mode of \"star\" is close to the densities of \"celebrity\" and \"hollywood\" whereas another mode of \"star\" is near the densities of \"constellation\" and \"galaxy\".", "Energy Simplification In theory, it can be beneficial to have covariance matrices as learnable parameters.", "In practice, Athiwaratkun and Wilson (2017) observe that spherical covariances often perform on par with diagonal covariances with much less computational resources.", "Using spherical covariances for each component, we can further simplify the energy function as follows: ξ i,j = − α 2 · ||µ f,i − µ g,j || 2 , (5) where the hyperparameter α is the scale of the inverse covariance term in Equation 3.", "We note that Equation 5 is equivalent to Equation 3 up to an additive constant given that the covariance matrices are spherical and the same for all components.", "Word Sampling To generate a context word c of a given word w, we pick a nearby word within a context window of a fixed length .", "We also use a word sampling technique similar to Mikolov et al.", "(2013b) .", "This subsampling procedure selects words for training with lower probabilities if they appear frequently.", "This technique has an effect of reducing the importance of words such as 'the', 'a', 'to' which can be predominant in a text corpus but are not as meaningful as other less frequent words such as 'city', 'capital', 'animal', etc.", "In particular, word w has probability P (w) = 1 − t/f (w) where f (w) is the frequency of word w in the corpus and t is the frequency threshold.", "A negative context word is selected using a distribution P n (w) ∝ U (w) 3/4 where U (w) is a unigram probability of word w. 
The exponent 3/4 also diminishes the importance of frequent words and shifts the training focus to other less frequent words.", "Experiments We have proposed a probabilistic FASTTEXT model which combines the flexibility of subword structure with the density embedding approach.", "In this section, we show that our probabilistic representation with subword mean vectors with the simplified energy function outperforms many word similarity baselines and provides disentangled meanings for polysemies.", "First, we describe the training details in Section 4.1.", "We provide qualitative evaluation in Section 4.2, showing meaningful nearest neighbors for the Gaussian embeddings, as well as the ability to capture multiple meanings by Gaussian mixtures.", "Our quantitative evaluation in Section 4.3 demonstrates strong performance against the baseline models FASTTEXT (Bojanowski et al., 2016) and the dictionary-level Gaussian (W2G) (Vilnis and McCallum, 2014) and Gaussian mixture embeddings (Athiwaratkun and Wilson, 2017) (W2GM).", "We train our models on foreign language corpuses and show competitive results on foreign word similarity benchmarks in Section 4.4.", "Finally, we explain the importance of the n-gram structures for semantic sharing in Section 4.5.", "Training Details We train our models on both English and foreign language datasets.", "For English, we use the concatenation of UKWAC and WACKYPEDIA (Baroni et al., 2009) which consists of 3.376 billion words.", "We filter out word types that occur fewer than 5 times which results in a vocabulary size of 2,677,466.", "For foreign languages, we demonstrate the training of our model on French, German, and Italian text corpuses.", "We note that our model should be applicable for other languages as well.", "We use FRWAC (French), DEWAC (German), ITWAC (Italian) datasets (Baroni et al., 2009 ) for text corpuses, consisting of 1.634, 1.716 and 1.955 billion words respectively.", "We use the same threshold, filtering out words that occur less than 5 times in each corpus.", "We have dictionary sizes of 1.3, 2.7, and 1.4 million words for FRWAC, DEWAC, and ITWAC.", "We adjust the hyperparameters on the English corpus and use them for foreign languages.", "Note that the adjustable parameters for our models are the loss margin m in Equation 4 and the scale α in Equation 5.", "We search for the optimal hyperparameters in a grid m ∈ {0.01, 0.1, 1, 10, 100} and α ∈ { 1 5×10 −3 , 1 10 −3 , 1 2×10 −4 , 1 1×10 −4 } on our English corpus.", "The hyperpameter α affects the scale of the loss function; therefore, we adjust the learning rate appropriately for each α.", "In particular, the learning rates used are γ = {10 −4 , 10 −5 , 10 −6 } for the respective α values.", "Other fixed hyperparameters include the number of Gaussian components K = 2, the context window length = 10 and the subsampling threshold t = 10 −5 .", "Similar to the setup in FAST-TEXT, we use n-grams where n = 3, 4, 5, 6 to estimate the mean vectors.", "Qualitative Evaluation -Nearest neighbors We show that our embeddings learn the word semantics well by demonstrating meaningful nearest neighbors.", "Table 1 shows examples of polysemous words such as rock, star, and cell.", "Table 1 shows the nearest neighbors of polysemous words.", "We note that subword embeddings prefer words with overlapping characters as nearest neighbors.", "For instance, \"rock-y\", \"rockn\", and \"rock\" are both close to the word \"rock\".", "For the purpose of demonstration, we only show words with meaningful 
variations and omit words with small character-based variations previously mentioned.", "However, all words shown are in the top-100 nearest words.", "We observe the separation in meanings for the multi-component case; for instance, one component of the word \"bank\" corresponds to a financial bank whereas the other component corresponds to a river bank.", "The single-component case also has interesting behavior.", "We observe that the subword embeddings of polysemous words can represent both meanings.", "For instance, both \"lava-rock\" and \"rock-pop\" are among the closest words to \"rock\".", "Word Similarity Evaluation We evaluate our embeddings on several standard word similarity datasets, namely, SL-999 (Hill et al., 2014) , WS-353 (Finkelstein et al., 2002) , MEN-3k (Bruni et al., 2014) , MC-30 (Miller and Charles, 1991) , RG-65 (Rubenstein and Goodenough, 1965) , YP-130 (Yang and Powers, 2006) , MTurk(-287,-771) (Radinsky et al., 2011; Halawi et al., 2012) , and RW-2k (Luong et al., 2013) .", "Each dataset contains a list of word pairs with a human score of how related or similar the two words are.", "We use the notation DATASET-NUM to denote the number of word pairs NUM in each evaluation set.", "We note that the dataset RW focuses more on infrequent words and SimLex-999 focuses on the similarity of words rather than relatedness.", "We also compare PFT-GM with other multi-prototype embeddings in the literature using SCWS (Huang et al., 2012) , a word similarity dataset that is aimed to measure the ability of embeddings to discern multiple meanings.", "We calculate the Spearman correlation (Spearman, 1904) between the labels and our scores gen-Word Co.", "Nearest Neighbors rock 0 rock:0, rocks:0, rocky:0, mudrock:0, rockscape:0, boulders:0 , coutcrops:0, rock 1 rock:1, punk:0, punk-rock:0, indie:0, pop-rock:0, pop-punk:0, indie-rock:0, band:1 bank 0 bank:0, banks:0, banker:0, bankers:0, bankcard:0, Citibank:0, debits:0 bank 1 bank:1, banks:1, river:0, riverbank:0, embanking:0, banks:0, confluence:1 star 0 stars:0, stellar:0, nebula:0, starspot:0, stars.", ":0, stellas:0, constellation:1 star 1 star:1, stars:1, star-star:0, 5-stars:0, movie-star:0, mega-star:0, super-star:0 cell 0 cell:0, cellular:0, acellular:0, lymphocytes:0, T-cells:0, cytes:0, leukocytes:0 cell 1 cell:1, cells:1, cellular:0, cellular-phone:0, cellphone:0, transcellular:0 left 0 left:0, right:1, left-hand:0, right-left:0, left-right-left:0, right-hand:0, leftwards:0 left 1 left:1, leaving:0, leavings:0, remained:0, leave:1, enmained:0, leaving-age:0, sadly-departed:0 Word Nearest Neighbors rock rock, rock-y, rockn, rock-, rock-funk, rock/, lava-rock, nu-rock, rock-pop, rock/ice, coral-rock bank bank-, bank/, bank-account, bank., banky, bank-to-bank, banking, Bank, bank/cash, banks.", "** star movie-stars, star-planet, starsailor, Star, starsign, cell/tumour, left/joined, leaving, left, right, right, left) and, leftsided, lefted, leftside erated by the embeddings.", "The Spearman correlation is a rank-based correlation measure that assesses how well the scores describe the true labels.", "The scores we use are cosine-similarity scores between the mean vectors.", "In the case of Gaussian mixtures, we use the pairwise maximum score: s(f, g) = max i∈1,...,K max j∈1,...,K µ f,i · µ g,j ||µ f,i || · ||µ g,j || .", "(6) The pair (i, j) that achieves the maximum cosine similarity corresponds to the Gaussian component pair that is the closest in meanings.", "Therefore, this similarity score yields the most related senses 
of a given word pair.", "This score reduces to a cosine similarity in the Gaussian case (K = 1).", "Comparison Against Dictionary-Level Density Embeddings and FASTTEXT We compare our models against the dictionarylevel Gaussian and Gaussian mixture embeddings in Table 2 , with 50-dimensional and 300dimensional mean vectors.", "The 50-dimensional results for W2G and W2GM are obtained directly from Athiwaratkun and Wilson (2017) .", "For comparison, we use the public code 3 to train the 300dimensional W2G and W2GM models and the publicly available FASTTEXT model 4 .", "We calculate Spearman's correlations for each of the word similarity datasets.", "These datasets vary greatly in the number of word pairs; therefore, we mark each dataset with its size for visibil-ity.", "For a fair and objective comparison, we calculate a weighted average of the correlation scores for each model.", "Our PFT-GM achieves the highest average score among all competing models, outperforming both FASTTEXT and the dictionary-level embeddings W2G and W2GM.", "Our unimodal model PFT-G also outperforms the dictionary-level counterpart W2G and FASTTEXT.", "We note that the model W2GM appears quite strong according to Table 2 , beating PFT-GM on many word similarity datasets.", "However, the datasets that W2GM performs better than PFT-GM often have small sizes such as MC-30 or RG-65, where the Spearman's correlations are more subject to noise.", "Overall, PFT-GM outperforms W2GM by 3.1% and 8.7% in 300 and 50 dimensional models.", "In addition, PFT-G and PFT-GM also outperform FASTTEXT by 1.2% and 3.7% respectively.", "Comparison Against Multi-Prototype Models In Table 3 , we compare 50 and 300 dimensional PFT-GM models against the multi-prototype embeddings described in Section 2 and the existing multimodal density embeddings W2GM.", "We use the word similarity dataset SCWS (Huang et al., 2012) which contains words with potentially many meanings, and is a benchmark for distinguishing senses.", "We use the maximum similarity score (Equation 6), denoted as MAXSIM.", "AVESIM denotes the average of the similarity scores, rather than the maximum.", "We outperform the dictionary-based density embeddings W2GM in both 50 and 300 dimensions, demonstrating the benefits of subword information.", "Our model achieves state-of-the-art results, similar to that of Neelakantan et al.", "(2014) .", "Evaluation on Foreign Language Embeddings We evaluate the foreign-language embeddings on word similarity datasets in respective languages.", "We use Italian WORDSIM353 and Italian SIMLEX-999 (Leviant and Reichart, 2015) for Italian models, GUR350 and GUR65 (Gurevych, 2005) for German models, and French WORD-SIM353 (Finkelstein et al., 2002) for French models.", "For datasets GUR350 and GUR65, we use the results reported in the FASTTEXT publication (Bojanowski et al., 2016) .", "For other datasets, we train FASTTEXT models for comparison using the public code 5 on our text corpuses.", "We also train dictionary-level models W2G, and W2GM for comparison.", "Table 4 shows the Spearman's correlation results of our models.", "We outperform FASTTEXT on many word similarity benchmarks.", "Our results are also significantly better than the dictionary-based models, W2G and W2GM.", "We hypothesize that W2G and W2GM can perform better than the current reported results given proper pre-processing of words due to special characters such as accents.", "We investigate the nearest neighbors of polysemies in foreign languages and also observe clear sense 
separation.", "For example, piano in Italian can mean \"floor\" or \"slow\".", "These two meanings are reflected in the nearest neighbors where one component is close to piano-piano, pianod which mean \"slowly\" whereas the other component is close to piani (floors), istrutturazione (renovation) or infrastruttre (infrastructure).", "Table 5 shows additional results, demonstrating that the disentangled semantics can be observed in multiple languages.", "Qualitative Evaluation -Subword Decomposition One of the motivations for using subword information is the ability to handle out-of-vocabulary words.", "Another benefit is the ability to help improve the semantics of rare words via subword sharing.", "Due to an observation that text corpuses follow Zipf's power law (Zipf, 1949) , words at the tail of the occurrence distribution appears much less frequently.", "Training these words to have a good semantic representation is challenging if done at the word level alone.", "However, an ngram such as 'abnorm' is trained during both occurrences of \"abnormal\" and \"abnormality\" in the corpus, hence further augments both words's semantics.", "Figure 3 shows the contribution of n-grams to the final representation.", "We filter out to show only the n-grams with the top-5 and bottom-5 similarity scores.", "We observe that the final representations of both words align with n-grams \"abno\", \"bnor\", \"abnorm\", \"anbnor\", \"<abn\".", "In fact, both \"abnormal\" and \"abnormality\" share the same top-5 n-grams.", "Due to the fact that many rare words such as \"autobiographer\", \"circumnavigations\", or \"hypersensitivity\" are composed from many common sub-words, the n-gram structure can help improve the representation quality.", "Numbers of Components It is possible to train our approach with K > 2 mixture components; however, Athiwaratkun and Wilson (2017) observe that dictionary-level Gaussian mixtures with K = 3 do not overall improve word similarity results, even though these mixtures can discover 3 distinct senses for certain words.", "Indeed, while K > 2 in principle allows for greater flexibility than K = 2, most words can be very flexibly modelled with a mixture of two Gaussians, leading to K = 2 representing a good balance between flexibility and Occam's razor.", "Even for words with single meanings, our PFT model with K = 2 often learns richer representations than a K = 1 model.", "For example, the two mixture components can learn to cluster to-gether to form a more heavy tailed unimodal distribution which captures a word with one dominant meaning but with close relationships to a wide range of other words.", "In addition, we observe that our model with K components can capture more than K meanings.", "For instance, in K = 1 model, the word pairs (\"cell\", \"jail\") and (\"cell\", \"biology\") and (\"cell\", \"phone\") will all have positive similarity scores based on K = 1 model.", "In general, if a word has multiple meanings, these meanings are usually compressed into the linear substructure of the embeddings (Arora et al., 2016) .", "However, the pairs of non-dominant words often have lower similarity scores, which might not accurately reflect their true similarities.", "Conclusion and Future Work We have proposed models for probabilistic word representations equipped with flexible sub-word structures, suitable for rare and out-of-vocabulary words.", "The proposed probabilistic formulation incorporates uncertainty information and naturally allows one to uncover multiple meanings with 
multimodal density representations.", "Our models offer better semantic quality, outperforming competing models on word similarity benchmarks.", "Moreover, our multimodal density models can provide interpretable and disentangled representations, and are the first multi-prototype embeddings that can handle rare words.", "Future work includes an investigation into the trade-off between learning full covariance matrices for each word distribution, computational complexity, and performance.", "This direction can potentially have a great impact on tasks where the variance information is crucial, such as for hierarchical modeling with probability distributions (Athiwaratkun and Wilson, 2018) .", "Other future work involves co-training PFT on many languages.", "Currently, existing work on multi-lingual embeddings align the word semantics on pre-trained vectors (Smith et al., 2017) , which can be suboptimal due to polysemies.", "We envision that the multi-prototype nature can help disambiguate words with multiple meanings and facilitate semantic alignment." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "3.5", "4", "4.1", "4.2", "4.3", "4.3.1", "4.3.2", "4.4", "4.5", "5", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Probabilistic FastText", "Probabilistic Subword Representation", "Similarity Measure between Words", "Loss Function", "Energy Simplification", "Word Sampling", "Experiments", "Training Details", "Qualitative Evaluation -Nearest neighbors", "Word Similarity Evaluation", "Comparison Against Dictionary-Level Density Embeddings and FASTTEXT", "Comparison Against Multi-Prototype Models", "Evaluation on Foreign Language Embeddings", "Qualitative Evaluation -Subword Decomposition", "Numbers of Components", "Conclusion and Future Work" ] }
GEM-SciDuet-train-72#paper-1163#slide-7
Energy of two Gaussian mixtures
total energy = weighted sum of pairwise partial energies
total energy = weighted sum of pairwise partial energies
[]
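The slide's statement that the total energy is a weighted sum of pairwise partial energies can be sketched directly. The simplified spherical-covariance partial energy ξ_ij = −(α/2)·||μ_f,i − μ_g,j||² used by the paper is assumed here, and a log-sum-exp is used for numerical stability; names and shapes are illustrative.

```python
import numpy as np

def energy(p, mu_f, q, mu_g, alpha=1.0):
    """E(f, g) = log sum_{i,j} p_i * q_j * exp(xi_ij), i.e. the log of a
    weighted sum of pairwise partial energies, with the simplified
    partial energy xi_ij = -(alpha / 2) * ||mu_f[i] - mu_g[j]||^2.
    p, q: component weights of shape (K,); mu_f, mu_g: means of shape (K, D)."""
    diff = mu_f[:, None, :] - mu_g[None, :, :]        # (K, K, D)
    xi = -0.5 * alpha * np.sum(diff ** 2, axis=-1)    # partial energies
    logits = np.log(p)[:, None] + np.log(q)[None, :] + xi
    m = logits.max()                                  # log-sum-exp trick
    return float(m + np.log(np.exp(logits - m).sum()))
```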
GEM-SciDuet-train-72#paper-1163#slide-8
1163
Probabilistic FastText for Multi-Sense Word Embeddings
We introduce Probabilistic FastText, a new model for word embeddings that can capture multiple word senses, sub-word structure, and uncertainty information. In particular, we represent each word with a Gaussian mixture density, where the mean of a mixture component is given by the sum of n-grams. This representation allows the model to share statistical strength across sub-word structures (e.g. Latin roots), producing accurate representations of rare, misspelt, or even unseen words. Moreover, each component of the mixture can capture a different word sense. Probabilistic FastText outperforms both FASTTEXT, which has no probabilistic model, and dictionary-level probabilistic embeddings, which do not incorporate subword structures, on several word-similarity benchmarks, including English RareWord and foreign language datasets. We also achieve state-of-the-art performance on benchmarks that measure the ability to discern different meanings. Thus, the proposed model is the first to achieve multi-sense representations while having enriched semantics on rare words.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191 ], "paper_content_text": [ "Introduction Word embeddings are foundational to natural language processing.", "In order to model language, we need word representations to contain as much semantic information as possible.", "Most research has focused on vector word embeddings, such as WORD2VEC (Mikolov et al., 2013a) , where words with similar meanings are mapped to nearby points in a vector space.", "Following the * Work done partly during internship at Amazon.", "seminal work of Mikolov et al.", "(2013a) , there have been numerous works looking to learn efficient word embeddings.", "One shortcoming with the above approaches to word embedding that are based on a predefined dictionary (termed as dictionary-based embeddings) is their inability to learn representations of rare words.", "To overcome this limitation, character-level word embeddings have been proposed.", "FASTTEXT (Bojanowski et al., 2016) is the state-of-the-art character-level approach to embeddings.", "In FASTTEXT, each word is modeled by a sum of vectors, with each vector representing an n-gram.", "The benefit of this approach is that the training process can then share strength across words composed of common roots.", "For example, with individual representations for \"circum\" and \"navigation\", we can construct an informative representation for \"circumnavigation\", which would otherwise appear too infrequently to learn a dictionary-level embedding.", "In addition to effectively modelling rare words, character-level embeddings can also represent slang or misspelled words, such as \"dogz\", and can share strength across different languages that share roots, e.g.", "Romance languages share latent roots.", "A different promising direction involves representing words with probability distributions, instead of point vectors.", "For example, Vilnis and McCallum (2014) represents words with Gaussian distributions, which can capture uncertainty information.", "Athiwaratkun and Wilson (2017) generalizes this approach to multimodal probability distributions, which can naturally represent words with different meanings.", "For example, the distribution for \"rock\" could have mass near the word \"jazz\" and \"pop\", but also \"stone\" and \"basalt\".", "Athiwaratkun and Wilson (2018) further developed this approach to learn hierarchical word representations: for example, the word \"music\" can be learned to have a broad distribution, which encapsulates the distributions for \"jazz\" and \"rock\".", "In this paper, we propose Probabilistic Fast-Text (PFT), which provides probabilistic characterlevel representations of words.", "The resulting word embeddings are highly expressive, 
yet straightforward and interpretable, with simple, efficient, and intuitive training procedures.", "PFT can model rare words, uncertainty information, hierarchical representations, and multiple word senses.", "In particular, we represent each word with a Gaussian or a Gaussian mixture density, which we name PFT-G and PFT-GM respectively.", "Each component of the mixture can represent different word senses, and the mean vectors of each component decompose into vectors of n-grams, to capture character-level information.", "We also derive an efficient energybased max-margin training procedure for PFT.", "We perform comparison with FASTTEXT as well as existing density word embeddings W2G (Gaussian) and W2GM (Gaussian mixture).", "Our models extract high-quality semantics based on multiple word-similarity benchmarks, including the rare word dataset.", "We obtain an average weighted improvement of 3.7% over FASTTEXT (Bojanowski et al., 2016) and 3.1% over the dictionary-level density-based models.", "We also observe meaningful nearest neighbors, particularly in the multimodal density case, where each mode captures a distinct meaning.", "Our models are also directly portable to foreign languages without any hyperparameter modification, where we observe strong performance, outperforming FAST-TEXT on many foreign word similarity datasets.", "Our multimodal word representation can also disentangle meanings, and is able to separate different senses in foreign polysemies.", "In particular, our models attain state-of-the-art performance on SCWS, a benchmark to measure the ability to separate different word meanings, achieving 1.0% improvement over a recent density embedding model W2GM (Athiwaratkun and Wilson, 2017) .", "To the best of our knowledge, we are the first to develop multi-sense embeddings with high semantic quality for rare words.", "Our code and embeddings are publicly available.", "1 Related Work Early word embeddings which capture semantic information include Bengio et al.", "(2003) , Col-1 https://github.com/benathi/multisense-prob-fasttext lobert and Weston (2008 ), and Mikolov et al.", "(2011 ).", "Later, Mikolov et al.", "(2013a developed the popular WORD2VEC method, which proposes a log-linear model and negative sampling approach that efficiently extracts rich semantics from text.", "Another popular approach GLOVE learns word embeddings by factorizing co-occurrence matrices (Pennington et al., 2014) .", "Recently there has been a surge of interest in making dictionary-based word embeddings more flexible.", "This flexibility has valuable applications in many end-tasks such as language modeling (Kim et al., 2016) , named entity recognition (Kuru et al., 2016) , and machine translation (Zhao and Zhang, 2016; Lee et al., 2017) , where unseen words are frequent and proper handling of these words can greatly improve the performance.", "These works focus on modeling subword information in neural networks for tasks such as language modeling.", "Besides vector embeddings, there is recent work on multi-prototype embeddings where each word is represented by multiple vectors.", "The learning approach involves using a cluster centroid of context vectors (Huang et al., 2012) , or adapting the skip-gram model to learn multiple latent representations (Tian et al., 2014) .", "Neelakantan et al.", "(2014) furthers adapts skip-gram with a non-parametric approach to learn the embeddings with an arbitrary number of senses per word.", "incorporates an external dataset WORDNET to learn sense vectors.", 
"We compare these models with our multimodal embeddings in Section 4.", "Probabilistic FastText We introduce Probabilistic FastText, which combines a probabilistic word representation with the ability to capture subword structure.", "We describe the probabilistic subword representation in Section 3.1.", "We then describe the similarity measure and the loss function used to train the embeddings in Sections 3.2 and 3.3.", "We conclude by briefly presenting a simplified version of the energy function for isotropic Gaussian representations (Section 3.4), and the negative sampling scheme we use in training (Section 3.5).", "Probabilistic Subword Representation We represent each word with a Gaussian mixture with K Gaussian components.", "That is, a word w is associated with a density function f ( x) = K i=1 p w,i N (x; µ w,i , Σ w,i ) where {µ w,i } K k=1 are the mean vectors and {Σ w,i } are the covariance matrices, and {p w,i } K k=1 are the component probabilities which sum to 1.", "The mean vectors of Gaussian components hold much of the semantic information in density embeddings.", "While these models are successful based on word similarity and entailment benchmarks (Vilnis and McCallum, 2014; Athiwaratkun and Wilson, 2017) , the mean vectors are often dictionary-level, which can lead to poor semantic estimates for rare words, or the inability to handle words outside the training corpus.", "We propose using subword structures to estimate the mean vectors.", "We outline the formulation below.", "For word w, we estimate the mean vector µ w with the average over n-gram vectors and its dictionary-level vector.", "That is, µ w = 1 |N G w | + 1   v w + g∈N Gw z g   (1) where z g is a vector associated with an n-gram g, v w is the dictionary representation of word w, and N G w is a set of n-grams of word w. 
Examples of 3,4-grams for a word \"beautiful\", including the beginning-of-word character ' ' and end-of-word character ' ', are: • 3-grams: be, bea, eau, aut, uti, tif, ful, ul • 4-grams: bea, beau .., iful ,ful This structure is similar to that of FASTTEXT (Bojanowski et al., 2016) ; however, we note that FASTTEXT uses single-prototype deterministic embeddings as well as a training approach that maximizes the negative log-likelihood, whereas we use a multi-prototype probabilistic embedding and for training we maximize the similarity between the words' probability densities, as described in Sections 3.2 and 3.3 Figure 1a depicts the subword structure for the mean vector.", "Figure 1b and 1c depict our models, Gaussian probabilistic FASTTEXT (PFT-G) and Gaussian mixture probabilistic FASTTEXT (PFT-GM).", "In the Gaussian case, we represent each mean vector with a subword estimation.", "For the Gaussian mixture case, we represent one Gaussian component's mean vector with the subword structure whereas other components' mean vectors are dictionary-based.", "This model choice to use dictionary-based mean vectors for other components is to reduce to constraint imposed by the subword structure and promote independence for meaning discovery.", "Similarity Measure between Words Traditionally, if words are represented by vectors, a common similarity metric is a dot product.", "In the case where words are represented by distribution functions, we use the generalized dot product in Hilbert space ·, · L 2 , which is called the expected likelihood kernel (Jebara et al., 2004) .", "We define the energy E(f, g) between two words f and g to be E(f, g) = log f, g L 2 = log f (x)g(x) dx.", "With Gaussian mixtures f (x) = K i=1 p i N (x; µ f,i , Σ f,i ) and g(x) = K i=1 q i N (x; µ g,i , Σ g,i ), K i=1 p i = 1, and K i=1 q i = 1 , the energy has a closed form: E(f, g) = log K j=1 K i=1 p i q j e ξ i,j (2) where ξ j,j is the partial energy which corresponds to the similarity between component i of the first word f and component j of the second word g. 2 ξ i,j ≡ log N (0; µ f,i − µ g,j , Σ f,i + Σ g,j ) = − 1 2 log det(Σ f,i + Σ g,j ) − D 2 log(2π) − 1 2 ( µ f,i − µ g,j ) (Σ f,i + Σ g,j ) −1 ( µ f,i − µ g,j ) (3) Figure 2 demonstrates the partial energies among the Gaussian components of two words.", "Interaction between GM components rock:0 pop:0 pop:1 rock:1 ⇠ 0,1 ⇠ 0,0 ⇠ 1,1 ⇠ 1, Loss Function The model parameters that we seek to learn are v w for each word w and z g for each n-gram g. We train the model by pushing the energy of a true context pair w and c to be higher than the negative context pair w and n by a margin m. 
We use Adagrad (Duchi et al., 2011) to minimize the following loss to achieve this outcome: L(f, g) = max [0, m − E(f, g) + E(f, n)] .", "(4) We describe how to sample words as well as its positive and negative contexts in Section 3.5.", "This loss function together with the Gaussian mixture model with K > 1 has the ability to extract multiple senses of words.", "That is, for a word with multiple meanings, we can observe each mode to represent a distinct meaning.", "For instance, one density mode of \"star\" is close to the densities of \"celebrity\" and \"hollywood\" whereas another mode of \"star\" is near the densities of \"constellation\" and \"galaxy\".", "Energy Simplification In theory, it can be beneficial to have covariance matrices as learnable parameters.", "In practice, Athiwaratkun and Wilson (2017) observe that spherical covariances often perform on par with diagonal covariances with much less computational resources.", "Using spherical covariances for each component, we can further simplify the energy function as follows: ξ i,j = − α 2 · ||µ f,i − µ g,j || 2 , (5) where the hyperparameter α is the scale of the inverse covariance term in Equation 3.", "We note that Equation 5 is equivalent to Equation 3 up to an additive constant given that the covariance matrices are spherical and the same for all components.", "Word Sampling To generate a context word c of a given word w, we pick a nearby word within a context window of a fixed length .", "We also use a word sampling technique similar to Mikolov et al.", "(2013b) .", "This subsampling procedure selects words for training with lower probabilities if they appear frequently.", "This technique has an effect of reducing the importance of words such as 'the', 'a', 'to' which can be predominant in a text corpus but are not as meaningful as other less frequent words such as 'city', 'capital', 'animal', etc.", "In particular, word w has probability P (w) = 1 − t/f (w) where f (w) is the frequency of word w in the corpus and t is the frequency threshold.", "A negative context word is selected using a distribution P n (w) ∝ U (w) 3/4 where U (w) is a unigram probability of word w. 
The exponent 3/4 also diminishes the importance of frequent words and shifts the training focus to other less frequent words.", "Experiments We have proposed a probabilistic FASTTEXT model which combines the flexibility of subword structure with the density embedding approach.", "In this section, we show that our probabilistic representation with subword mean vectors with the simplified energy function outperforms many word similarity baselines and provides disentangled meanings for polysemies.", "First, we describe the training details in Section 4.1.", "We provide qualitative evaluation in Section 4.2, showing meaningful nearest neighbors for the Gaussian embeddings, as well as the ability to capture multiple meanings by Gaussian mixtures.", "Our quantitative evaluation in Section 4.3 demonstrates strong performance against the baseline models FASTTEXT (Bojanowski et al., 2016) and the dictionary-level Gaussian (W2G) (Vilnis and McCallum, 2014) and Gaussian mixture embeddings (Athiwaratkun and Wilson, 2017) (W2GM).", "We train our models on foreign language corpuses and show competitive results on foreign word similarity benchmarks in Section 4.4.", "Finally, we explain the importance of the n-gram structures for semantic sharing in Section 4.5.", "Training Details We train our models on both English and foreign language datasets.", "For English, we use the concatenation of UKWAC and WACKYPEDIA (Baroni et al., 2009) which consists of 3.376 billion words.", "We filter out word types that occur fewer than 5 times which results in a vocabulary size of 2,677,466.", "For foreign languages, we demonstrate the training of our model on French, German, and Italian text corpuses.", "We note that our model should be applicable for other languages as well.", "We use FRWAC (French), DEWAC (German), ITWAC (Italian) datasets (Baroni et al., 2009 ) for text corpuses, consisting of 1.634, 1.716 and 1.955 billion words respectively.", "We use the same threshold, filtering out words that occur less than 5 times in each corpus.", "We have dictionary sizes of 1.3, 2.7, and 1.4 million words for FRWAC, DEWAC, and ITWAC.", "We adjust the hyperparameters on the English corpus and use them for foreign languages.", "Note that the adjustable parameters for our models are the loss margin m in Equation 4 and the scale α in Equation 5.", "We search for the optimal hyperparameters in a grid m ∈ {0.01, 0.1, 1, 10, 100} and α ∈ { 1 5×10 −3 , 1 10 −3 , 1 2×10 −4 , 1 1×10 −4 } on our English corpus.", "The hyperpameter α affects the scale of the loss function; therefore, we adjust the learning rate appropriately for each α.", "In particular, the learning rates used are γ = {10 −4 , 10 −5 , 10 −6 } for the respective α values.", "Other fixed hyperparameters include the number of Gaussian components K = 2, the context window length = 10 and the subsampling threshold t = 10 −5 .", "Similar to the setup in FAST-TEXT, we use n-grams where n = 3, 4, 5, 6 to estimate the mean vectors.", "Qualitative Evaluation -Nearest neighbors We show that our embeddings learn the word semantics well by demonstrating meaningful nearest neighbors.", "Table 1 shows examples of polysemous words such as rock, star, and cell.", "Table 1 shows the nearest neighbors of polysemous words.", "We note that subword embeddings prefer words with overlapping characters as nearest neighbors.", "For instance, \"rock-y\", \"rockn\", and \"rock\" are both close to the word \"rock\".", "For the purpose of demonstration, we only show words with meaningful 
variations and omit words with small character-based variations previously mentioned.", "However, all words shown are in the top-100 nearest words.", "We observe the separation in meanings for the multi-component case; for instance, one component of the word \"bank\" corresponds to a financial bank whereas the other component corresponds to a river bank.", "The single-component case also has interesting behavior.", "We observe that the subword embeddings of polysemous words can represent both meanings.", "For instance, both \"lava-rock\" and \"rock-pop\" are among the closest words to \"rock\".", "Word Similarity Evaluation We evaluate our embeddings on several standard word similarity datasets, namely, SL-999 (Hill et al., 2014) , WS-353 (Finkelstein et al., 2002) , MEN-3k (Bruni et al., 2014) , MC-30 (Miller and Charles, 1991) , RG-65 (Rubenstein and Goodenough, 1965) , YP-130 (Yang and Powers, 2006) , MTurk(-287,-771) (Radinsky et al., 2011; Halawi et al., 2012) , and RW-2k (Luong et al., 2013) .", "Each dataset contains a list of word pairs with a human score of how related or similar the two words are.", "We use the notation DATASET-NUM to denote the number of word pairs NUM in each evaluation set.", "We note that the dataset RW focuses more on infrequent words and SimLex-999 focuses on the similarity of words rather than relatedness.", "We also compare PFT-GM with other multi-prototype embeddings in the literature using SCWS (Huang et al., 2012) , a word similarity dataset that is aimed to measure the ability of embeddings to discern multiple meanings.", "We calculate the Spearman correlation (Spearman, 1904) between the labels and our scores gen-Word Co.", "Nearest Neighbors rock 0 rock:0, rocks:0, rocky:0, mudrock:0, rockscape:0, boulders:0 , coutcrops:0, rock 1 rock:1, punk:0, punk-rock:0, indie:0, pop-rock:0, pop-punk:0, indie-rock:0, band:1 bank 0 bank:0, banks:0, banker:0, bankers:0, bankcard:0, Citibank:0, debits:0 bank 1 bank:1, banks:1, river:0, riverbank:0, embanking:0, banks:0, confluence:1 star 0 stars:0, stellar:0, nebula:0, starspot:0, stars.", ":0, stellas:0, constellation:1 star 1 star:1, stars:1, star-star:0, 5-stars:0, movie-star:0, mega-star:0, super-star:0 cell 0 cell:0, cellular:0, acellular:0, lymphocytes:0, T-cells:0, cytes:0, leukocytes:0 cell 1 cell:1, cells:1, cellular:0, cellular-phone:0, cellphone:0, transcellular:0 left 0 left:0, right:1, left-hand:0, right-left:0, left-right-left:0, right-hand:0, leftwards:0 left 1 left:1, leaving:0, leavings:0, remained:0, leave:1, enmained:0, leaving-age:0, sadly-departed:0 Word Nearest Neighbors rock rock, rock-y, rockn, rock-, rock-funk, rock/, lava-rock, nu-rock, rock-pop, rock/ice, coral-rock bank bank-, bank/, bank-account, bank., banky, bank-to-bank, banking, Bank, bank/cash, banks.", "** star movie-stars, star-planet, starsailor, Star, starsign, cell/tumour, left/joined, leaving, left, right, right, left) and, leftsided, lefted, leftside erated by the embeddings.", "The Spearman correlation is a rank-based correlation measure that assesses how well the scores describe the true labels.", "The scores we use are cosine-similarity scores between the mean vectors.", "In the case of Gaussian mixtures, we use the pairwise maximum score: s(f, g) = max i∈1,...,K max j∈1,...,K µ f,i · µ g,j ||µ f,i || · ||µ g,j || .", "(6) The pair (i, j) that achieves the maximum cosine similarity corresponds to the Gaussian component pair that is the closest in meanings.", "Therefore, this similarity score yields the most related senses 
of a given word pair.", "This score reduces to a cosine similarity in the Gaussian case (K = 1).", "Comparison Against Dictionary-Level Density Embeddings and FASTTEXT We compare our models against the dictionarylevel Gaussian and Gaussian mixture embeddings in Table 2 , with 50-dimensional and 300dimensional mean vectors.", "The 50-dimensional results for W2G and W2GM are obtained directly from Athiwaratkun and Wilson (2017) .", "For comparison, we use the public code 3 to train the 300dimensional W2G and W2GM models and the publicly available FASTTEXT model 4 .", "We calculate Spearman's correlations for each of the word similarity datasets.", "These datasets vary greatly in the number of word pairs; therefore, we mark each dataset with its size for visibil-ity.", "For a fair and objective comparison, we calculate a weighted average of the correlation scores for each model.", "Our PFT-GM achieves the highest average score among all competing models, outperforming both FASTTEXT and the dictionary-level embeddings W2G and W2GM.", "Our unimodal model PFT-G also outperforms the dictionary-level counterpart W2G and FASTTEXT.", "We note that the model W2GM appears quite strong according to Table 2 , beating PFT-GM on many word similarity datasets.", "However, the datasets that W2GM performs better than PFT-GM often have small sizes such as MC-30 or RG-65, where the Spearman's correlations are more subject to noise.", "Overall, PFT-GM outperforms W2GM by 3.1% and 8.7% in 300 and 50 dimensional models.", "In addition, PFT-G and PFT-GM also outperform FASTTEXT by 1.2% and 3.7% respectively.", "Comparison Against Multi-Prototype Models In Table 3 , we compare 50 and 300 dimensional PFT-GM models against the multi-prototype embeddings described in Section 2 and the existing multimodal density embeddings W2GM.", "We use the word similarity dataset SCWS (Huang et al., 2012) which contains words with potentially many meanings, and is a benchmark for distinguishing senses.", "We use the maximum similarity score (Equation 6), denoted as MAXSIM.", "AVESIM denotes the average of the similarity scores, rather than the maximum.", "We outperform the dictionary-based density embeddings W2GM in both 50 and 300 dimensions, demonstrating the benefits of subword information.", "Our model achieves state-of-the-art results, similar to that of Neelakantan et al.", "(2014) .", "Evaluation on Foreign Language Embeddings We evaluate the foreign-language embeddings on word similarity datasets in respective languages.", "We use Italian WORDSIM353 and Italian SIMLEX-999 (Leviant and Reichart, 2015) for Italian models, GUR350 and GUR65 (Gurevych, 2005) for German models, and French WORD-SIM353 (Finkelstein et al., 2002) for French models.", "For datasets GUR350 and GUR65, we use the results reported in the FASTTEXT publication (Bojanowski et al., 2016) .", "For other datasets, we train FASTTEXT models for comparison using the public code 5 on our text corpuses.", "We also train dictionary-level models W2G, and W2GM for comparison.", "Table 4 shows the Spearman's correlation results of our models.", "We outperform FASTTEXT on many word similarity benchmarks.", "Our results are also significantly better than the dictionary-based models, W2G and W2GM.", "We hypothesize that W2G and W2GM can perform better than the current reported results given proper pre-processing of words due to special characters such as accents.", "We investigate the nearest neighbors of polysemies in foreign languages and also observe clear sense 
separation.", "For example, piano in Italian can mean \"floor\" or \"slow\".", "These two meanings are reflected in the nearest neighbors where one component is close to piano-piano, pianod which mean \"slowly\" whereas the other component is close to piani (floors), istrutturazione (renovation) or infrastruttre (infrastructure).", "Table 5 shows additional results, demonstrating that the disentangled semantics can be observed in multiple languages.", "Qualitative Evaluation -Subword Decomposition One of the motivations for using subword information is the ability to handle out-of-vocabulary words.", "Another benefit is the ability to help improve the semantics of rare words via subword sharing.", "Due to an observation that text corpuses follow Zipf's power law (Zipf, 1949) , words at the tail of the occurrence distribution appears much less frequently.", "Training these words to have a good semantic representation is challenging if done at the word level alone.", "However, an ngram such as 'abnorm' is trained during both occurrences of \"abnormal\" and \"abnormality\" in the corpus, hence further augments both words's semantics.", "Figure 3 shows the contribution of n-grams to the final representation.", "We filter out to show only the n-grams with the top-5 and bottom-5 similarity scores.", "We observe that the final representations of both words align with n-grams \"abno\", \"bnor\", \"abnorm\", \"anbnor\", \"<abn\".", "In fact, both \"abnormal\" and \"abnormality\" share the same top-5 n-grams.", "Due to the fact that many rare words such as \"autobiographer\", \"circumnavigations\", or \"hypersensitivity\" are composed from many common sub-words, the n-gram structure can help improve the representation quality.", "Numbers of Components It is possible to train our approach with K > 2 mixture components; however, Athiwaratkun and Wilson (2017) observe that dictionary-level Gaussian mixtures with K = 3 do not overall improve word similarity results, even though these mixtures can discover 3 distinct senses for certain words.", "Indeed, while K > 2 in principle allows for greater flexibility than K = 2, most words can be very flexibly modelled with a mixture of two Gaussians, leading to K = 2 representing a good balance between flexibility and Occam's razor.", "Even for words with single meanings, our PFT model with K = 2 often learns richer representations than a K = 1 model.", "For example, the two mixture components can learn to cluster to-gether to form a more heavy tailed unimodal distribution which captures a word with one dominant meaning but with close relationships to a wide range of other words.", "In addition, we observe that our model with K components can capture more than K meanings.", "For instance, in K = 1 model, the word pairs (\"cell\", \"jail\") and (\"cell\", \"biology\") and (\"cell\", \"phone\") will all have positive similarity scores based on K = 1 model.", "In general, if a word has multiple meanings, these meanings are usually compressed into the linear substructure of the embeddings (Arora et al., 2016) .", "However, the pairs of non-dominant words often have lower similarity scores, which might not accurately reflect their true similarities.", "Conclusion and Future Work We have proposed models for probabilistic word representations equipped with flexible sub-word structures, suitable for rare and out-of-vocabulary words.", "The proposed probabilistic formulation incorporates uncertainty information and naturally allows one to uncover multiple meanings with 
"Conclusion and Future Work We have proposed models for probabilistic word representations equipped with flexible subword structures, suitable for rare and out-of-vocabulary words.", "The proposed probabilistic formulation incorporates uncertainty information and naturally allows one to uncover multiple meanings with multimodal density representations.", "Our models offer better semantic quality, outperforming competing models on word similarity benchmarks.", "Moreover, our multimodal density models can provide interpretable and disentangled representations, and are the first multi-prototype embeddings that can handle rare words.", "Future work includes an investigation into the trade-off between learning full covariance matrices for each word distribution, computational complexity, and performance.", "This direction can potentially have a great impact on tasks where the variance information is crucial, such as hierarchical modeling with probability distributions (Athiwaratkun and Wilson, 2018).", "Other future work involves co-training PFT on many languages.", "Currently, existing work on multi-lingual embeddings aligns the word semantics of pre-trained vectors (Smith et al., 2017), which can be suboptimal due to polysemies.", "We envision that the multi-prototype nature can help disambiguate words with multiple meanings and facilitate semantic alignment." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "3.5", "4", "4.1", "4.2", "4.3", "4.3.1", "4.3.2", "4.4", "4.5", "5", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Probabilistic FastText", "Probabilistic Subword Representation", "Similarity Measure between Words", "Loss Function", "Energy Simplification", "Word Sampling", "Experiments", "Training Details", "Qualitative Evaluation -Nearest neighbors", "Word Similarity Evaluation", "Comparison Against Dictionary-Level Density Embeddings and FASTTEXT", "Comparison Against Multi-Prototype Models", "Evaluation on Foreign Language Embeddings", "Qualitative Evaluation -Subword Decomposition", "Numbers of Components", "Conclusion and Future Work" ] }
GEM-SciDuet-train-72#paper-1163#slide-8
Word sampling
I like that rock band
I like that rock band
[]
GEM-SciDuet-train-72#paper-1163#slide-9
1163
Probabilistic FastText for Multi-Sense Word Embeddings
We introduce Probabilistic FastText, a new model for word embeddings that can capture multiple word senses, sub-word structure, and uncertainty information. In particular, we represent each word with a Gaussian mixture density, where the mean of a mixture component is given by the sum of n-grams. This representation allows the model to share statistical strength across sub-word structures (e.g. Latin roots), producing accurate representations of rare, misspelt, or even unseen words. Moreover, each component of the mixture can capture a different word sense. Probabilistic FastText outperforms both FASTTEXT, which has no probabilistic model, and dictionary-level probabilistic embeddings, which do not incorporate subword structures, on several word-similarity benchmarks, including English RareWord and foreign language datasets. We also achieve state-of-the-art performance on benchmarks that measure the ability to discern different meanings. Thus, the proposed model is the first to achieve multi-sense representations while having enriched semantics on rare words.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191 ], "paper_content_text": [ "Introduction Word embeddings are foundational to natural language processing.", "In order to model language, we need word representations to contain as much semantic information as possible.", "Most research has focused on vector word embeddings, such as WORD2VEC (Mikolov et al., 2013a) , where words with similar meanings are mapped to nearby points in a vector space.", "Following the * Work done partly during internship at Amazon.", "seminal work of Mikolov et al.", "(2013a) , there have been numerous works looking to learn efficient word embeddings.", "One shortcoming with the above approaches to word embedding that are based on a predefined dictionary (termed as dictionary-based embeddings) is their inability to learn representations of rare words.", "To overcome this limitation, character-level word embeddings have been proposed.", "FASTTEXT (Bojanowski et al., 2016) is the state-of-the-art character-level approach to embeddings.", "In FASTTEXT, each word is modeled by a sum of vectors, with each vector representing an n-gram.", "The benefit of this approach is that the training process can then share strength across words composed of common roots.", "For example, with individual representations for \"circum\" and \"navigation\", we can construct an informative representation for \"circumnavigation\", which would otherwise appear too infrequently to learn a dictionary-level embedding.", "In addition to effectively modelling rare words, character-level embeddings can also represent slang or misspelled words, such as \"dogz\", and can share strength across different languages that share roots, e.g.", "Romance languages share latent roots.", "A different promising direction involves representing words with probability distributions, instead of point vectors.", "For example, Vilnis and McCallum (2014) represents words with Gaussian distributions, which can capture uncertainty information.", "Athiwaratkun and Wilson (2017) generalizes this approach to multimodal probability distributions, which can naturally represent words with different meanings.", "For example, the distribution for \"rock\" could have mass near the word \"jazz\" and \"pop\", but also \"stone\" and \"basalt\".", "Athiwaratkun and Wilson (2018) further developed this approach to learn hierarchical word representations: for example, the word \"music\" can be learned to have a broad distribution, which encapsulates the distributions for \"jazz\" and \"rock\".", "In this paper, we propose Probabilistic Fast-Text (PFT), which provides probabilistic characterlevel representations of words.", "The resulting word embeddings are highly expressive, 
yet straightforward and interpretable, with simple, efficient, and intuitive training procedures.", "PFT can model rare words, uncertainty information, hierarchical representations, and multiple word senses.", "In particular, we represent each word with a Gaussian or a Gaussian mixture density, which we name PFT-G and PFT-GM respectively.", "Each component of the mixture can represent a different word sense, and the mean vectors of each component decompose into vectors of n-grams, to capture character-level information.", "We also derive an efficient energy-based max-margin training procedure for PFT.", "We compare against FASTTEXT as well as the existing density word embeddings W2G (Gaussian) and W2GM (Gaussian mixture).", "Our models extract high-quality semantics based on multiple word-similarity benchmarks, including the rare word dataset.", "We obtain an average weighted improvement of 3.7% over FASTTEXT (Bojanowski et al., 2016) and 3.1% over the dictionary-level density-based models.", "We also observe meaningful nearest neighbors, particularly in the multimodal density case, where each mode captures a distinct meaning.", "Our models are also directly portable to foreign languages without any hyperparameter modification, where we observe strong performance, outperforming FASTTEXT on many foreign word similarity datasets.", "Our multimodal word representation can also disentangle meanings, and is able to separate different senses in foreign polysemies.", "In particular, our models attain state-of-the-art performance on SCWS, a benchmark measuring the ability to separate different word meanings, achieving a 1.0% improvement over the recent density embedding model W2GM (Athiwaratkun and Wilson, 2017).", "To the best of our knowledge, we are the first to develop multi-sense embeddings with high semantic quality for rare words.", "Our code and embeddings are publicly available at https://github.com/benathi/multisense-prob-fasttext.", "Related Work Early word embeddings which capture semantic information include Bengio et al. (2003), Collobert and Weston (2008), and Mikolov et al. (2011).", "Later, Mikolov et al. (2013a) developed the popular WORD2VEC method, which proposes a log-linear model and negative sampling approach that efficiently extracts rich semantics from text.", "Another popular approach, GLOVE, learns word embeddings by factorizing co-occurrence matrices (Pennington et al., 2014).", "Recently there has been a surge of interest in making dictionary-based word embeddings more flexible.", "This flexibility has valuable applications in many end-tasks such as language modeling (Kim et al., 2016), named entity recognition (Kuru et al., 2016), and machine translation (Zhao and Zhang, 2016; Lee et al., 2017), where unseen words are frequent and proper handling of these words can greatly improve performance.", "These works focus on modeling subword information in neural networks for tasks such as language modeling.", "Besides vector embeddings, there is recent work on multi-prototype embeddings where each word is represented by multiple vectors.", "The learning approach involves using a cluster centroid of context vectors (Huang et al., 2012), or adapting the skip-gram model to learn multiple latent representations (Tian et al., 2014).", "Neelakantan et al. (2014) further adapts skip-gram with a non-parametric approach to learn embeddings with an arbitrary number of senses per word.", "Other work incorporates the external dataset WORDNET to learn sense vectors.", 
"We compare these models with our multimodal embeddings in Section 4.", "Probabilistic FastText We introduce Probabilistic FastText, which combines a probabilistic word representation with the ability to capture subword structure.", "We describe the probabilistic subword representation in Section 3.1.", "We then describe the similarity measure and the loss function used to train the embeddings in Sections 3.2 and 3.3.", "We conclude by briefly presenting a simplified version of the energy function for isotropic Gaussian representations (Section 3.4), and the negative sampling scheme we use in training (Section 3.5).", "Probabilistic Subword Representation We represent each word with a Gaussian mixture with K Gaussian components.", "That is, a word w is associated with a density function f ( x) = K i=1 p w,i N (x; µ w,i , Σ w,i ) where {µ w,i } K k=1 are the mean vectors and {Σ w,i } are the covariance matrices, and {p w,i } K k=1 are the component probabilities which sum to 1.", "The mean vectors of Gaussian components hold much of the semantic information in density embeddings.", "While these models are successful based on word similarity and entailment benchmarks (Vilnis and McCallum, 2014; Athiwaratkun and Wilson, 2017) , the mean vectors are often dictionary-level, which can lead to poor semantic estimates for rare words, or the inability to handle words outside the training corpus.", "We propose using subword structures to estimate the mean vectors.", "We outline the formulation below.", "For word w, we estimate the mean vector µ w with the average over n-gram vectors and its dictionary-level vector.", "That is, µ w = 1 |N G w | + 1   v w + g∈N Gw z g   (1) where z g is a vector associated with an n-gram g, v w is the dictionary representation of word w, and N G w is a set of n-grams of word w. 
"Examples of 3- and 4-grams for the word \"beautiful\", including the beginning-of-word character '<' and the end-of-word character '>', are: 3-grams: <be, bea, eau, aut, uti, tif, ful, ul>; 4-grams: <bea, beau, ..., iful, ful>.", "This structure is similar to that of FASTTEXT (Bojanowski et al., 2016); however, we note that FASTTEXT uses single-prototype deterministic embeddings and a training approach that minimizes the negative log-likelihood, whereas we use a multi-prototype probabilistic embedding and train by maximizing the similarity between the words' probability densities, as described in Sections 3.2 and 3.3.", "Figure 1a depicts the subword structure for the mean vector.", "Figures 1b and 1c depict our models, Gaussian probabilistic FASTTEXT (PFT-G) and Gaussian mixture probabilistic FASTTEXT (PFT-GM).", "In the Gaussian case, we represent each mean vector with a subword estimation.", "In the Gaussian mixture case, we represent one Gaussian component's mean vector with the subword structure, whereas the other components' mean vectors are dictionary-based.", "This choice of dictionary-based mean vectors for the other components reduces the constraint imposed by the subword structure and promotes independence for meaning discovery.", "Similarity Measure between Words Traditionally, if words are represented by vectors, a common similarity metric is the dot product.", "In the case where words are represented by distribution functions, we use the generalized dot product in Hilbert space, \langle \cdot, \cdot \rangle_{L_2}, which is called the expected likelihood kernel (Jebara et al., 2004).", "We define the energy E(f, g) between two words f and g to be E(f, g) = \log \langle f, g \rangle_{L_2} = \log \int f(x) g(x) \, dx.", "With Gaussian mixtures f(x) = \sum_{i=1}^{K} p_i \mathcal{N}(x; \mu_{f,i}, \Sigma_{f,i}) and g(x) = \sum_{i=1}^{K} q_i \mathcal{N}(x; \mu_{g,i}, \Sigma_{g,i}), where \sum_{i=1}^{K} p_i = 1 and \sum_{i=1}^{K} q_i = 1, the energy has the closed form E(f, g) = \log \sum_{j=1}^{K} \sum_{i=1}^{K} p_i q_j e^{\xi_{i,j}} \quad (2) where \xi_{i,j} is the partial energy corresponding to the similarity between component i of the first word f and component j of the second word g: \xi_{i,j} \equiv \log \mathcal{N}(0; \mu_{f,i} - \mu_{g,j}, \Sigma_{f,i} + \Sigma_{g,j}) = -\frac{1}{2} \log \det(\Sigma_{f,i} + \Sigma_{g,j}) - \frac{D}{2} \log(2\pi) - \frac{1}{2} (\mu_{f,i} - \mu_{g,j})^{\top} (\Sigma_{f,i} + \Sigma_{g,j})^{-1} (\mu_{f,i} - \mu_{g,j}) \quad (3)", "Figure 2 demonstrates the partial energies among the Gaussian components of two words.", "(Figure 2: interaction between the Gaussian mixture components of \"rock\" and \"pop\", with partial energies \xi_{0,0}, \xi_{0,1}, \xi_{1,0}, and \xi_{1,1}.)",
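For spherical covariances (the setting adopted in Section 3.4), Equations 2 and 3 can be computed stably with a log-sum-exp; a minimal sketch, with variances as scalars and names that are illustrative rather than the authors' implementation:

```python
# Sketch of the energy (Equations 2-3) for spherical covariances
# Sigma = var * I. `p`, `q` are component weights; `mu_f`, `mu_g` are
# arrays of shape (K, D).
import numpy as np
from scipy.special import logsumexp

def partial_energy(mu_fi, mu_gj, var_f, var_g):
    """log N(0; mu_fi - mu_gj, (var_f + var_g) I), i.e. Equation 3."""
    D = mu_fi.shape[0]
    v = var_f + var_g
    diff = mu_fi - mu_gj
    return (-0.5 * D * np.log(v) - 0.5 * D * np.log(2 * np.pi)
            - 0.5 * np.dot(diff, diff) / v)

def energy(p, mu_f, var_f, q, mu_g, var_g):
    """log sum_ij p_i q_j exp(xi_ij), i.e. Equation 2, via log-sum-exp."""
    K = len(p)
    terms = [np.log(p[i]) + np.log(q[j])
             + partial_energy(mu_f[i], mu_g[j], var_f, var_g)
             for i in range(K) for j in range(K)]
    return logsumexp(terms)
```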
"Loss Function The model parameters that we seek to learn are v_w for each word w and z_g for each n-gram g.", "We train the model by pushing the energy of a true context pair w and c to be higher than the energy of a negative context pair w and n by a margin m.", "We use Adagrad (Duchi et al., 2011) to minimize the following loss to achieve this outcome: L(f, g) = \max[0, m - E(f, g) + E(f, n)] \quad (4)", "We describe how we sample words, as well as their positive and negative contexts, in Section 3.5.", "This loss function, together with the Gaussian mixture model with K > 1, has the ability to extract multiple senses of words.", "That is, for a word with multiple meanings, we can observe each mode representing a distinct meaning.", "For instance, one density mode of \"star\" is close to the densities of \"celebrity\" and \"hollywood\", whereas another mode of \"star\" is near the densities of \"constellation\" and \"galaxy\".", "Energy Simplification In theory, it can be beneficial to have covariance matrices as learnable parameters.", "In practice, Athiwaratkun and Wilson (2017) observe that spherical covariances often perform on par with diagonal covariances while requiring far fewer computational resources.", "Using the same spherical covariance for each component, we can further simplify the energy function as follows: \xi_{i,j} = -\frac{\alpha}{2} \cdot \|\mu_{f,i} - \mu_{g,j}\|^2 \quad (5) where the hyperparameter \alpha is the scale of the inverse covariance term in Equation 3.", "We note that Equation 5 is equivalent to Equation 3 up to an additive constant, given that the covariance matrices are spherical and the same for all components.", "Word Sampling To generate a context word c of a given word w, we pick a nearby word within a context window of a fixed length \ell.", "We also use a word subsampling technique similar to Mikolov et al. (2013b).", "This subsampling procedure selects words for training with lower probabilities if they appear frequently.", "This technique has the effect of reducing the importance of words such as 'the', 'a', and 'to', which can be predominant in a text corpus but are not as meaningful as less frequent words such as 'city', 'capital', and 'animal'.", "In particular, word w is discarded with probability P(w) = 1 - \sqrt{t / f(w)}, where f(w) is the frequency of word w in the corpus and t is the frequency threshold.", "A negative context word is selected using the distribution P_n(w) \propto U(w)^{3/4}, where U(w) is the unigram probability of word w.", "The exponent 3/4 also diminishes the importance of frequent words and shifts the training focus to other, less frequent words.",
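Putting Equations 4 and 5 and the sampling scheme together, a single max-margin comparison might look like the sketch below; `mixture(word)`, returning component weights and mean vectors, is an illustrative accessor, and the Adagrad gradient step itself is omitted.

```python
# Sketch of one max-margin evaluation (Equation 4) using the simplified
# spherical energy (Equation 5). All accessors are placeholders, not the
# authors' implementation.
import numpy as np
from scipy.special import logsumexp

def simple_energy(p, mu_f, q, mu_g, alpha=1e3):
    """Equation 2 with the simplified partial energy of Equation 5."""
    terms = [np.log(p[i]) + np.log(q[j])
             - 0.5 * alpha * np.sum((mu_f[i] - mu_g[j]) ** 2)
             for i in range(len(p)) for j in range(len(q))]
    return logsumexp(terms)

def margin_loss(word, context, negative, mixture, m=1.0):
    """L = max(0, m - E(w, c) + E(w, n)), i.e. Equation 4."""
    pw, mw = mixture(word)
    pc, mc = mixture(context)
    pn, mn = mixture(negative)
    return max(0.0, m - simple_energy(pw, mw, pc, mc)
                      + simple_energy(pw, mw, pn, mn))
```

Here `context` would be drawn from the window around `word` after subsampling, and `negative` from the unigram distribution raised to the 3/4 power, as described above.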
"Experiments We have proposed a probabilistic FASTTEXT model which combines the flexibility of subword structure with the density embedding approach.", "In this section, we show that our probabilistic representation, with subword mean vectors and the simplified energy function, outperforms many word similarity baselines and provides disentangled meanings for polysemies.", "First, we describe the training details in Section 4.1.", "We provide qualitative evaluation in Section 4.2, showing meaningful nearest neighbors for the Gaussian embeddings, as well as the ability to capture multiple meanings with Gaussian mixtures.", "Our quantitative evaluation in Section 4.3 demonstrates strong performance against the baseline models FASTTEXT (Bojanowski et al., 2016) and the dictionary-level Gaussian (W2G) (Vilnis and McCallum, 2014) and Gaussian mixture (W2GM) (Athiwaratkun and Wilson, 2017) embeddings.", "We train our models on foreign language corpora and show competitive results on foreign word similarity benchmarks in Section 4.4.", "Finally, we explain the importance of the n-gram structures for semantic sharing in Section 4.5.", "Training Details We train our models on both English and foreign language datasets.", "For English, we use the concatenation of UKWAC and WACKYPEDIA (Baroni et al., 2009), which consists of 3.376 billion words.", "We filter out word types that occur fewer than 5 times, which results in a vocabulary size of 2,677,466.", "For foreign languages, we demonstrate the training of our model on French, German, and Italian text corpora.", "We note that our model should be applicable to other languages as well.", "We use the FRWAC (French), DEWAC (German), and ITWAC (Italian) datasets (Baroni et al., 2009) as text corpora, consisting of 1.634, 1.716, and 1.955 billion words respectively.", "We use the same threshold, filtering out words that occur fewer than 5 times in each corpus.", "We have dictionary sizes of 1.3, 2.7, and 1.4 million words for FRWAC, DEWAC, and ITWAC.", "We adjust the hyperparameters on the English corpus and use them for the foreign languages.", "Note that the adjustable parameters for our models are the loss margin m in Equation 4 and the scale \alpha in Equation 5.", "We search for the optimal hyperparameters in a grid m \in \{0.01, 0.1, 1, 10, 100\} and \alpha \in \{\frac{1}{5 \times 10^{-3}}, \frac{1}{10^{-3}}, \frac{1}{2 \times 10^{-4}}, \frac{1}{1 \times 10^{-4}}\} on our English corpus.", "The hyperparameter \alpha affects the scale of the loss function; therefore, we adjust the learning rate appropriately for each \alpha.", "In particular, the learning rates used are \gamma \in \{10^{-4}, 10^{-5}, 10^{-6}\} for the respective \alpha values.", "Other fixed hyperparameters include the number of Gaussian components K = 2, the context window length \ell = 10, and the subsampling threshold t = 10^{-5}.", "Similar to the setup in FASTTEXT, we use n-grams with n = 3, 4, 5, 6 to estimate the mean vectors.", "Qualitative Evaluation - Nearest Neighbors We show that our embeddings learn word semantics well by demonstrating meaningful nearest neighbors.", "Table 1 shows the nearest neighbors of polysemous words such as rock, star, and cell.", "We note that subword embeddings prefer words with overlapping characters as nearest neighbors.", "For instance, \"rock-y\", \"rockn\", and \"rock-\" are all close to the word \"rock\".",
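A sketch of how such component-wise nearest neighbors can be retrieved; this is an assumption about the procedure behind Table 1, ranking (word, component) pairs by the cosine similarity of their mean vectors, and `means` is an illustrative lookup table.

```python
# Sketch of component-wise nearest-neighbor retrieval for Table-1 style
# listings. `means` maps (word, component_index) -> mean vector.
import numpy as np

def nearest(query_word, query_k, means, topn=7):
    q = means[(query_word, query_k)]
    q = q / np.linalg.norm(q)
    scores = []
    for (word, k), mu in means.items():
        if word == query_word and k == query_k:
            continue
        scores.append((float(np.dot(q, mu / np.linalg.norm(mu))), f"{word}:{k}"))
    scores.sort(reverse=True)
    return [name for _, name in scores[:topn]]
```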
"For the purpose of demonstration, we only show words with meaningful variations and omit words with the small character-based variations mentioned previously.", "However, all words shown are among the top-100 nearest words.", "We observe the separation in meanings for the multi-component case; for instance, one component of the word \"bank\" corresponds to a financial bank, whereas the other component corresponds to a river bank.", "The single-component case also exhibits interesting behavior.", "We observe that the subword embeddings of polysemous words can represent both meanings.", "For instance, both \"lava-rock\" and \"rock-pop\" are among the closest words to \"rock\".", "Word Similarity Evaluation We evaluate our embeddings on several standard word similarity datasets, namely SL-999 (Hill et al., 2014), WS-353 (Finkelstein et al., 2002), MEN-3k (Bruni et al., 2014), MC-30 (Miller and Charles, 1991), RG-65 (Rubenstein and Goodenough, 1965), YP-130 (Yang and Powers, 2006), MTurk-287 and MTurk-771 (Radinsky et al., 2011; Halawi et al., 2012), and RW-2k (Luong et al., 2013).", "Each dataset contains a list of word pairs with a human score of how related or similar the two words are.", "We use the notation DATASET-NUM to denote the number of word pairs NUM in each evaluation set.", "We note that the dataset RW focuses more on infrequent words, and SimLex-999 focuses on the similarity of words rather than relatedness.", "We also compare PFT-GM with other multi-prototype embeddings in the literature using SCWS (Huang et al., 2012), a word similarity dataset that is aimed at measuring the ability of embeddings to discern multiple meanings.", "We calculate the Spearman correlation (Spearman, 1904) between the labels and our scores generated by the embeddings.", "(Table 1, nearest neighbors of the Gaussian mixture embedding, by word and component: rock:0 = rocks:0, rocky:0, mudrock:0, rockscape:0, boulders:0, coutcrops:0; rock:1 = punk:0, punk-rock:0, indie:0, pop-rock:0, pop-punk:0, indie-rock:0, band:1; bank:0 = banks:0, banker:0, bankers:0, bankcard:0, Citibank:0, debits:0; bank:1 = banks:1, river:0, riverbank:0, embanking:0, banks:0, confluence:1; star:0 = stars:0, stellar:0, nebula:0, starspot:0, stars.:0, stellas:0, constellation:1; star:1 = stars:1, star-star:0, 5-stars:0, movie-star:0, mega-star:0, super-star:0; cell:0 = cellular:0, acellular:0, lymphocytes:0, T-cells:0, cytes:0, leukocytes:0; cell:1 = cells:1, cellular:0, cellular-phone:0, cellphone:0, transcellular:0; left:0 = right:1, left-hand:0, right-left:0, left-right-left:0, right-hand:0, leftwards:0; left:1 = leaving:0, leavings:0, remained:0, leave:1, enmained:0, leaving-age:0, sadly-departed:0.)", "(Table 1, subword-only nearest neighbors: rock = rock-y, rockn, rock-, rock-funk, rock/, lava-rock, nu-rock, rock-pop, rock/ice, coral-rock; bank = bank-, bank/, bank-account, bank., banky, bank-to-bank, banking, Bank, bank/cash, banks.; star = movie-stars, star-planet, starsailor, Star, starsign; cell = cell/tumour; left = left/joined, leaving, left, right, leftsided, lefted, leftside.)", "The Spearman correlation is a rank-based correlation measure that assesses how well the scores describe the true labels.", "The scores we use are cosine-similarity scores between the mean vectors.", "In the case of Gaussian mixtures, we use the pairwise maximum score: s(f, g) = \max_{i \in 1, \dots, K} \max_{j \in 1, \dots, K} \frac{\mu_{f,i} \cdot \mu_{g,j}}{\|\mu_{f,i}\| \cdot \|\mu_{g,j}\|} \quad (6)", "The pair (i, j) that achieves the maximum cosine similarity corresponds to the Gaussian component pair that is closest in meaning.", "Therefore, this similarity score yields the most related senses of a given word pair.", "This score reduces to a cosine similarity in the Gaussian case (K = 1)." ] }
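A minimal sketch of Equation 6 and the AVESIM variant used in Section 4.3.2; `mu_f` and `mu_g` are assumed to be the (K, D) arrays of component mean vectors for the two words.

```python
# Sketch of MAXSIM (Equation 6) and AVESIM over mixture components.
import numpy as np

def max_sim(mu_f, mu_g):
    """Best cosine similarity over all pairs of mixture components."""
    f = mu_f / np.linalg.norm(mu_f, axis=1, keepdims=True)
    g = mu_g / np.linalg.norm(mu_g, axis=1, keepdims=True)
    return float(np.max(f @ g.T))   # max over all (i, j) component pairs

def ave_sim(mu_f, mu_g):
    """AVESIM: average of the pairwise cosine similarities."""
    f = mu_f / np.linalg.norm(mu_f, axis=1, keepdims=True)
    g = mu_g / np.linalg.norm(mu_g, axis=1, keepdims=True)
    return float(np.mean(f @ g.T))
```

With K = 1, both functions collapse to the ordinary cosine similarity between the two mean vectors, as noted above.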
GEM-SciDuet-train-72#paper-1163#slide-9
Loss function
word: w context word: c rock band high E(w,c) word: w negative context: c rock dog low E(w,c)
word: w context word: c rock band high E(w,c) word: w negative context: c rock dog low E(w,c)
[]
GEM-SciDuet-train-72#paper-1163#slide-12
1163
Probabilistic FastText for Multi-Sense Word Embeddings
We introduce Probabilistic FastText, a new model for word embeddings that can capture multiple word senses, sub-word structure, and uncertainty information. In particular, we represent each word with a Gaussian mixture density, where the mean of a mixture component is given by the sum of n-grams. This representation allows the model to share statistical strength across sub-word structures (e.g. Latin roots), producing accurate representations of rare, misspelt, or even unseen words. Moreover, each component of the mixture can capture a different word sense. Probabilistic FastText outperforms both FASTTEXT, which has no probabilistic model, and dictionary-level probabilistic embeddings, which do not incorporate subword structures, on several word-similarity benchmarks, including English RareWord and foreign language datasets. We also achieve state-ofart performance on benchmarks that measure ability to discern different meanings. Thus, the proposed model is the first to achieve multi-sense representations while having enriched semantics on rare words.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191 ], "paper_content_text": [ "Introduction Word embeddings are foundational to natural language processing.", "In order to model language, we need word representations to contain as much semantic information as possible.", "Most research has focused on vector word embeddings, such as WORD2VEC (Mikolov et al., 2013a) , where words with similar meanings are mapped to nearby points in a vector space.", "Following the * Work done partly during internship at Amazon.", "seminal work of Mikolov et al.", "(2013a) , there have been numerous works looking to learn efficient word embeddings.", "One shortcoming with the above approaches to word embedding that are based on a predefined dictionary (termed as dictionary-based embeddings) is their inability to learn representations of rare words.", "To overcome this limitation, character-level word embeddings have been proposed.", "FASTTEXT (Bojanowski et al., 2016) is the state-of-the-art character-level approach to embeddings.", "In FASTTEXT, each word is modeled by a sum of vectors, with each vector representing an n-gram.", "The benefit of this approach is that the training process can then share strength across words composed of common roots.", "For example, with individual representations for \"circum\" and \"navigation\", we can construct an informative representation for \"circumnavigation\", which would otherwise appear too infrequently to learn a dictionary-level embedding.", "In addition to effectively modelling rare words, character-level embeddings can also represent slang or misspelled words, such as \"dogz\", and can share strength across different languages that share roots, e.g.", "Romance languages share latent roots.", "A different promising direction involves representing words with probability distributions, instead of point vectors.", "For example, Vilnis and McCallum (2014) represents words with Gaussian distributions, which can capture uncertainty information.", "Athiwaratkun and Wilson (2017) generalizes this approach to multimodal probability distributions, which can naturally represent words with different meanings.", "For example, the distribution for \"rock\" could have mass near the word \"jazz\" and \"pop\", but also \"stone\" and \"basalt\".", "Athiwaratkun and Wilson (2018) further developed this approach to learn hierarchical word representations: for example, the word \"music\" can be learned to have a broad distribution, which encapsulates the distributions for \"jazz\" and \"rock\".", "In this paper, we propose Probabilistic Fast-Text (PFT), which provides probabilistic characterlevel representations of words.", "The resulting word embeddings are highly expressive, 
yet straightforward and interpretable, with simple, efficient, and intuitive training procedures.", "PFT can model rare words, uncertainty information, hierarchical representations, and multiple word senses.", "In particular, we represent each word with a Gaussian or a Gaussian mixture density, which we name PFT-G and PFT-GM respectively.", "Each component of the mixture can represent different word senses, and the mean vectors of each component decompose into vectors of n-grams, to capture character-level information.", "We also derive an efficient energybased max-margin training procedure for PFT.", "We perform comparison with FASTTEXT as well as existing density word embeddings W2G (Gaussian) and W2GM (Gaussian mixture).", "Our models extract high-quality semantics based on multiple word-similarity benchmarks, including the rare word dataset.", "We obtain an average weighted improvement of 3.7% over FASTTEXT (Bojanowski et al., 2016) and 3.1% over the dictionary-level density-based models.", "We also observe meaningful nearest neighbors, particularly in the multimodal density case, where each mode captures a distinct meaning.", "Our models are also directly portable to foreign languages without any hyperparameter modification, where we observe strong performance, outperforming FAST-TEXT on many foreign word similarity datasets.", "Our multimodal word representation can also disentangle meanings, and is able to separate different senses in foreign polysemies.", "In particular, our models attain state-of-the-art performance on SCWS, a benchmark to measure the ability to separate different word meanings, achieving 1.0% improvement over a recent density embedding model W2GM (Athiwaratkun and Wilson, 2017) .", "To the best of our knowledge, we are the first to develop multi-sense embeddings with high semantic quality for rare words.", "Our code and embeddings are publicly available.", "1 Related Work Early word embeddings which capture semantic information include Bengio et al.", "(2003) , Col-1 https://github.com/benathi/multisense-prob-fasttext lobert and Weston (2008 ), and Mikolov et al.", "(2011 ).", "Later, Mikolov et al.", "(2013a developed the popular WORD2VEC method, which proposes a log-linear model and negative sampling approach that efficiently extracts rich semantics from text.", "Another popular approach GLOVE learns word embeddings by factorizing co-occurrence matrices (Pennington et al., 2014) .", "Recently there has been a surge of interest in making dictionary-based word embeddings more flexible.", "This flexibility has valuable applications in many end-tasks such as language modeling (Kim et al., 2016) , named entity recognition (Kuru et al., 2016) , and machine translation (Zhao and Zhang, 2016; Lee et al., 2017) , where unseen words are frequent and proper handling of these words can greatly improve the performance.", "These works focus on modeling subword information in neural networks for tasks such as language modeling.", "Besides vector embeddings, there is recent work on multi-prototype embeddings where each word is represented by multiple vectors.", "The learning approach involves using a cluster centroid of context vectors (Huang et al., 2012) , or adapting the skip-gram model to learn multiple latent representations (Tian et al., 2014) .", "Neelakantan et al.", "(2014) furthers adapts skip-gram with a non-parametric approach to learn the embeddings with an arbitrary number of senses per word.", "incorporates an external dataset WORDNET to learn sense vectors.", 
"We compare these models with our multimodal embeddings in Section 4.", "Probabilistic FastText We introduce Probabilistic FastText, which combines a probabilistic word representation with the ability to capture subword structure.", "We describe the probabilistic subword representation in Section 3.1.", "We then describe the similarity measure and the loss function used to train the embeddings in Sections 3.2 and 3.3.", "We conclude by briefly presenting a simplified version of the energy function for isotropic Gaussian representations (Section 3.4), and the negative sampling scheme we use in training (Section 3.5).", "Probabilistic Subword Representation We represent each word with a Gaussian mixture with K Gaussian components.", "That is, a word w is associated with a density function f ( x) = K i=1 p w,i N (x; µ w,i , Σ w,i ) where {µ w,i } K k=1 are the mean vectors and {Σ w,i } are the covariance matrices, and {p w,i } K k=1 are the component probabilities which sum to 1.", "The mean vectors of Gaussian components hold much of the semantic information in density embeddings.", "While these models are successful based on word similarity and entailment benchmarks (Vilnis and McCallum, 2014; Athiwaratkun and Wilson, 2017) , the mean vectors are often dictionary-level, which can lead to poor semantic estimates for rare words, or the inability to handle words outside the training corpus.", "We propose using subword structures to estimate the mean vectors.", "We outline the formulation below.", "For word w, we estimate the mean vector µ w with the average over n-gram vectors and its dictionary-level vector.", "That is, µ w = 1 |N G w | + 1   v w + g∈N Gw z g   (1) where z g is a vector associated with an n-gram g, v w is the dictionary representation of word w, and N G w is a set of n-grams of word w. 
Examples of 3,4-grams for a word \"beautiful\", including the beginning-of-word character ' ' and end-of-word character ' ', are: • 3-grams: be, bea, eau, aut, uti, tif, ful, ul • 4-grams: bea, beau .., iful ,ful This structure is similar to that of FASTTEXT (Bojanowski et al., 2016) ; however, we note that FASTTEXT uses single-prototype deterministic embeddings as well as a training approach that maximizes the negative log-likelihood, whereas we use a multi-prototype probabilistic embedding and for training we maximize the similarity between the words' probability densities, as described in Sections 3.2 and 3.3 Figure 1a depicts the subword structure for the mean vector.", "Figure 1b and 1c depict our models, Gaussian probabilistic FASTTEXT (PFT-G) and Gaussian mixture probabilistic FASTTEXT (PFT-GM).", "In the Gaussian case, we represent each mean vector with a subword estimation.", "For the Gaussian mixture case, we represent one Gaussian component's mean vector with the subword structure whereas other components' mean vectors are dictionary-based.", "This model choice to use dictionary-based mean vectors for other components is to reduce to constraint imposed by the subword structure and promote independence for meaning discovery.", "Similarity Measure between Words Traditionally, if words are represented by vectors, a common similarity metric is a dot product.", "In the case where words are represented by distribution functions, we use the generalized dot product in Hilbert space ·, · L 2 , which is called the expected likelihood kernel (Jebara et al., 2004) .", "We define the energy E(f, g) between two words f and g to be E(f, g) = log f, g L 2 = log f (x)g(x) dx.", "With Gaussian mixtures f (x) = K i=1 p i N (x; µ f,i , Σ f,i ) and g(x) = K i=1 q i N (x; µ g,i , Σ g,i ), K i=1 p i = 1, and K i=1 q i = 1 , the energy has a closed form: E(f, g) = log K j=1 K i=1 p i q j e ξ i,j (2) where ξ j,j is the partial energy which corresponds to the similarity between component i of the first word f and component j of the second word g. 2 ξ i,j ≡ log N (0; µ f,i − µ g,j , Σ f,i + Σ g,j ) = − 1 2 log det(Σ f,i + Σ g,j ) − D 2 log(2π) − 1 2 ( µ f,i − µ g,j ) (Σ f,i + Σ g,j ) −1 ( µ f,i − µ g,j ) (3) Figure 2 demonstrates the partial energies among the Gaussian components of two words.", "Interaction between GM components rock:0 pop:0 pop:1 rock:1 ⇠ 0,1 ⇠ 0,0 ⇠ 1,1 ⇠ 1, Loss Function The model parameters that we seek to learn are v w for each word w and z g for each n-gram g. We train the model by pushing the energy of a true context pair w and c to be higher than the negative context pair w and n by a margin m. 
We use Adagrad (Duchi et al., 2011) to minimize the following loss to achieve this outcome: L(f, g) = max [0, m − E(f, g) + E(f, n)] .", "(4) We describe how to sample words as well as its positive and negative contexts in Section 3.5.", "This loss function together with the Gaussian mixture model with K > 1 has the ability to extract multiple senses of words.", "That is, for a word with multiple meanings, we can observe each mode to represent a distinct meaning.", "For instance, one density mode of \"star\" is close to the densities of \"celebrity\" and \"hollywood\" whereas another mode of \"star\" is near the densities of \"constellation\" and \"galaxy\".", "Energy Simplification In theory, it can be beneficial to have covariance matrices as learnable parameters.", "In practice, Athiwaratkun and Wilson (2017) observe that spherical covariances often perform on par with diagonal covariances with much less computational resources.", "Using spherical covariances for each component, we can further simplify the energy function as follows: ξ i,j = − α 2 · ||µ f,i − µ g,j || 2 , (5) where the hyperparameter α is the scale of the inverse covariance term in Equation 3.", "We note that Equation 5 is equivalent to Equation 3 up to an additive constant given that the covariance matrices are spherical and the same for all components.", "Word Sampling To generate a context word c of a given word w, we pick a nearby word within a context window of a fixed length .", "We also use a word sampling technique similar to Mikolov et al.", "(2013b) .", "This subsampling procedure selects words for training with lower probabilities if they appear frequently.", "This technique has an effect of reducing the importance of words such as 'the', 'a', 'to' which can be predominant in a text corpus but are not as meaningful as other less frequent words such as 'city', 'capital', 'animal', etc.", "In particular, word w has probability P (w) = 1 − t/f (w) where f (w) is the frequency of word w in the corpus and t is the frequency threshold.", "A negative context word is selected using a distribution P n (w) ∝ U (w) 3/4 where U (w) is a unigram probability of word w. 
The exponent 3/4 also diminishes the importance of frequent words and shifts the training focus to other less frequent words.", "Experiments We have proposed a probabilistic FASTTEXT model which combines the flexibility of subword structure with the density embedding approach.", "In this section, we show that our probabilistic representation with subword mean vectors with the simplified energy function outperforms many word similarity baselines and provides disentangled meanings for polysemies.", "First, we describe the training details in Section 4.1.", "We provide qualitative evaluation in Section 4.2, showing meaningful nearest neighbors for the Gaussian embeddings, as well as the ability to capture multiple meanings by Gaussian mixtures.", "Our quantitative evaluation in Section 4.3 demonstrates strong performance against the baseline models FASTTEXT (Bojanowski et al., 2016) and the dictionary-level Gaussian (W2G) (Vilnis and McCallum, 2014) and Gaussian mixture embeddings (Athiwaratkun and Wilson, 2017) (W2GM).", "We train our models on foreign language corpuses and show competitive results on foreign word similarity benchmarks in Section 4.4.", "Finally, we explain the importance of the n-gram structures for semantic sharing in Section 4.5.", "Training Details We train our models on both English and foreign language datasets.", "For English, we use the concatenation of UKWAC and WACKYPEDIA (Baroni et al., 2009) which consists of 3.376 billion words.", "We filter out word types that occur fewer than 5 times which results in a vocabulary size of 2,677,466.", "For foreign languages, we demonstrate the training of our model on French, German, and Italian text corpuses.", "We note that our model should be applicable for other languages as well.", "We use FRWAC (French), DEWAC (German), ITWAC (Italian) datasets (Baroni et al., 2009 ) for text corpuses, consisting of 1.634, 1.716 and 1.955 billion words respectively.", "We use the same threshold, filtering out words that occur less than 5 times in each corpus.", "We have dictionary sizes of 1.3, 2.7, and 1.4 million words for FRWAC, DEWAC, and ITWAC.", "We adjust the hyperparameters on the English corpus and use them for foreign languages.", "Note that the adjustable parameters for our models are the loss margin m in Equation 4 and the scale α in Equation 5.", "We search for the optimal hyperparameters in a grid m ∈ {0.01, 0.1, 1, 10, 100} and α ∈ { 1 5×10 −3 , 1 10 −3 , 1 2×10 −4 , 1 1×10 −4 } on our English corpus.", "The hyperpameter α affects the scale of the loss function; therefore, we adjust the learning rate appropriately for each α.", "In particular, the learning rates used are γ = {10 −4 , 10 −5 , 10 −6 } for the respective α values.", "Other fixed hyperparameters include the number of Gaussian components K = 2, the context window length = 10 and the subsampling threshold t = 10 −5 .", "Similar to the setup in FAST-TEXT, we use n-grams where n = 3, 4, 5, 6 to estimate the mean vectors.", "Qualitative Evaluation -Nearest neighbors We show that our embeddings learn the word semantics well by demonstrating meaningful nearest neighbors.", "Table 1 shows examples of polysemous words such as rock, star, and cell.", "Table 1 shows the nearest neighbors of polysemous words.", "We note that subword embeddings prefer words with overlapping characters as nearest neighbors.", "For instance, \"rock-y\", \"rockn\", and \"rock\" are both close to the word \"rock\".", "For the purpose of demonstration, we only show words with meaningful 
variations and omit words with small character-based variations previously mentioned.", "However, all words shown are in the top-100 nearest words.", "We observe the separation in meanings for the multi-component case; for instance, one component of the word \"bank\" corresponds to a financial bank whereas the other component corresponds to a river bank.", "The single-component case also has interesting behavior.", "We observe that the subword embeddings of polysemous words can represent both meanings.", "For instance, both \"lava-rock\" and \"rock-pop\" are among the closest words to \"rock\".", "Word Similarity Evaluation We evaluate our embeddings on several standard word similarity datasets, namely, SL-999 (Hill et al., 2014) , WS-353 (Finkelstein et al., 2002) , MEN-3k (Bruni et al., 2014) , MC-30 (Miller and Charles, 1991) , RG-65 (Rubenstein and Goodenough, 1965) , YP-130 (Yang and Powers, 2006) , MTurk(-287,-771) (Radinsky et al., 2011; Halawi et al., 2012) , and RW-2k (Luong et al., 2013) .", "Each dataset contains a list of word pairs with a human score of how related or similar the two words are.", "We use the notation DATASET-NUM to denote the number of word pairs NUM in each evaluation set.", "We note that the dataset RW focuses more on infrequent words and SimLex-999 focuses on the similarity of words rather than relatedness.", "We also compare PFT-GM with other multi-prototype embeddings in the literature using SCWS (Huang et al., 2012) , a word similarity dataset that is aimed to measure the ability of embeddings to discern multiple meanings.", "We calculate the Spearman correlation (Spearman, 1904) between the labels and our scores gen-Word Co.", "Nearest Neighbors rock 0 rock:0, rocks:0, rocky:0, mudrock:0, rockscape:0, boulders:0 , coutcrops:0, rock 1 rock:1, punk:0, punk-rock:0, indie:0, pop-rock:0, pop-punk:0, indie-rock:0, band:1 bank 0 bank:0, banks:0, banker:0, bankers:0, bankcard:0, Citibank:0, debits:0 bank 1 bank:1, banks:1, river:0, riverbank:0, embanking:0, banks:0, confluence:1 star 0 stars:0, stellar:0, nebula:0, starspot:0, stars.", ":0, stellas:0, constellation:1 star 1 star:1, stars:1, star-star:0, 5-stars:0, movie-star:0, mega-star:0, super-star:0 cell 0 cell:0, cellular:0, acellular:0, lymphocytes:0, T-cells:0, cytes:0, leukocytes:0 cell 1 cell:1, cells:1, cellular:0, cellular-phone:0, cellphone:0, transcellular:0 left 0 left:0, right:1, left-hand:0, right-left:0, left-right-left:0, right-hand:0, leftwards:0 left 1 left:1, leaving:0, leavings:0, remained:0, leave:1, enmained:0, leaving-age:0, sadly-departed:0 Word Nearest Neighbors rock rock, rock-y, rockn, rock-, rock-funk, rock/, lava-rock, nu-rock, rock-pop, rock/ice, coral-rock bank bank-, bank/, bank-account, bank., banky, bank-to-bank, banking, Bank, bank/cash, banks.", "** star movie-stars, star-planet, starsailor, Star, starsign, cell/tumour, left/joined, leaving, left, right, right, left) and, leftsided, lefted, leftside erated by the embeddings.", "The Spearman correlation is a rank-based correlation measure that assesses how well the scores describe the true labels.", "The scores we use are cosine-similarity scores between the mean vectors.", "In the case of Gaussian mixtures, we use the pairwise maximum score: s(f, g) = max i∈1,...,K max j∈1,...,K µ f,i · µ g,j ||µ f,i || · ||µ g,j || .", "(6) The pair (i, j) that achieves the maximum cosine similarity corresponds to the Gaussian component pair that is the closest in meanings.", "Therefore, this similarity score yields the most related senses 
of a given word pair.", "This score reduces to a cosine similarity in the Gaussian case (K = 1).", "Comparison Against Dictionary-Level Density Embeddings and FASTTEXT We compare our models against the dictionarylevel Gaussian and Gaussian mixture embeddings in Table 2 , with 50-dimensional and 300dimensional mean vectors.", "The 50-dimensional results for W2G and W2GM are obtained directly from Athiwaratkun and Wilson (2017) .", "For comparison, we use the public code 3 to train the 300dimensional W2G and W2GM models and the publicly available FASTTEXT model 4 .", "We calculate Spearman's correlations for each of the word similarity datasets.", "These datasets vary greatly in the number of word pairs; therefore, we mark each dataset with its size for visibil-ity.", "For a fair and objective comparison, we calculate a weighted average of the correlation scores for each model.", "Our PFT-GM achieves the highest average score among all competing models, outperforming both FASTTEXT and the dictionary-level embeddings W2G and W2GM.", "Our unimodal model PFT-G also outperforms the dictionary-level counterpart W2G and FASTTEXT.", "We note that the model W2GM appears quite strong according to Table 2 , beating PFT-GM on many word similarity datasets.", "However, the datasets that W2GM performs better than PFT-GM often have small sizes such as MC-30 or RG-65, where the Spearman's correlations are more subject to noise.", "Overall, PFT-GM outperforms W2GM by 3.1% and 8.7% in 300 and 50 dimensional models.", "In addition, PFT-G and PFT-GM also outperform FASTTEXT by 1.2% and 3.7% respectively.", "Comparison Against Multi-Prototype Models In Table 3 , we compare 50 and 300 dimensional PFT-GM models against the multi-prototype embeddings described in Section 2 and the existing multimodal density embeddings W2GM.", "We use the word similarity dataset SCWS (Huang et al., 2012) which contains words with potentially many meanings, and is a benchmark for distinguishing senses.", "We use the maximum similarity score (Equation 6), denoted as MAXSIM.", "AVESIM denotes the average of the similarity scores, rather than the maximum.", "We outperform the dictionary-based density embeddings W2GM in both 50 and 300 dimensions, demonstrating the benefits of subword information.", "Our model achieves state-of-the-art results, similar to that of Neelakantan et al.", "(2014) .", "Evaluation on Foreign Language Embeddings We evaluate the foreign-language embeddings on word similarity datasets in respective languages.", "We use Italian WORDSIM353 and Italian SIMLEX-999 (Leviant and Reichart, 2015) for Italian models, GUR350 and GUR65 (Gurevych, 2005) for German models, and French WORD-SIM353 (Finkelstein et al., 2002) for French models.", "For datasets GUR350 and GUR65, we use the results reported in the FASTTEXT publication (Bojanowski et al., 2016) .", "For other datasets, we train FASTTEXT models for comparison using the public code 5 on our text corpuses.", "We also train dictionary-level models W2G, and W2GM for comparison.", "Table 4 shows the Spearman's correlation results of our models.", "We outperform FASTTEXT on many word similarity benchmarks.", "Our results are also significantly better than the dictionary-based models, W2G and W2GM.", "We hypothesize that W2G and W2GM can perform better than the current reported results given proper pre-processing of words due to special characters such as accents.", "We investigate the nearest neighbors of polysemies in foreign languages and also observe clear sense 
separation.", "For example, piano in Italian can mean \"floor\" or \"slow\".", "These two meanings are reflected in the nearest neighbors where one component is close to piano-piano, pianod which mean \"slowly\" whereas the other component is close to piani (floors), istrutturazione (renovation) or infrastruttre (infrastructure).", "Table 5 shows additional results, demonstrating that the disentangled semantics can be observed in multiple languages.", "Qualitative Evaluation -Subword Decomposition One of the motivations for using subword information is the ability to handle out-of-vocabulary words.", "Another benefit is the ability to help improve the semantics of rare words via subword sharing.", "Due to an observation that text corpuses follow Zipf's power law (Zipf, 1949) , words at the tail of the occurrence distribution appears much less frequently.", "Training these words to have a good semantic representation is challenging if done at the word level alone.", "However, an ngram such as 'abnorm' is trained during both occurrences of \"abnormal\" and \"abnormality\" in the corpus, hence further augments both words's semantics.", "Figure 3 shows the contribution of n-grams to the final representation.", "We filter out to show only the n-grams with the top-5 and bottom-5 similarity scores.", "We observe that the final representations of both words align with n-grams \"abno\", \"bnor\", \"abnorm\", \"anbnor\", \"<abn\".", "In fact, both \"abnormal\" and \"abnormality\" share the same top-5 n-grams.", "Due to the fact that many rare words such as \"autobiographer\", \"circumnavigations\", or \"hypersensitivity\" are composed from many common sub-words, the n-gram structure can help improve the representation quality.", "Numbers of Components It is possible to train our approach with K > 2 mixture components; however, Athiwaratkun and Wilson (2017) observe that dictionary-level Gaussian mixtures with K = 3 do not overall improve word similarity results, even though these mixtures can discover 3 distinct senses for certain words.", "Indeed, while K > 2 in principle allows for greater flexibility than K = 2, most words can be very flexibly modelled with a mixture of two Gaussians, leading to K = 2 representing a good balance between flexibility and Occam's razor.", "Even for words with single meanings, our PFT model with K = 2 often learns richer representations than a K = 1 model.", "For example, the two mixture components can learn to cluster to-gether to form a more heavy tailed unimodal distribution which captures a word with one dominant meaning but with close relationships to a wide range of other words.", "In addition, we observe that our model with K components can capture more than K meanings.", "For instance, in K = 1 model, the word pairs (\"cell\", \"jail\") and (\"cell\", \"biology\") and (\"cell\", \"phone\") will all have positive similarity scores based on K = 1 model.", "In general, if a word has multiple meanings, these meanings are usually compressed into the linear substructure of the embeddings (Arora et al., 2016) .", "However, the pairs of non-dominant words often have lower similarity scores, which might not accurately reflect their true similarities.", "Conclusion and Future Work We have proposed models for probabilistic word representations equipped with flexible sub-word structures, suitable for rare and out-of-vocabulary words.", "The proposed probabilistic formulation incorporates uncertainty information and naturally allows one to uncover multiple meanings with 
"Our models offer better semantic quality, outperforming competing models on word similarity benchmarks.", "Moreover, our multimodal density models can provide interpretable and disentangled representations, and they are the first multi-prototype embeddings that can handle rare words.", "Future work includes an investigation into the trade-off between learning full covariance matrices for each word distribution, computational complexity, and performance.", "This direction can potentially have a great impact on tasks where variance information is crucial, such as hierarchical modeling with probability distributions (Athiwaratkun and Wilson, 2018).", "Other future work involves co-training PFT on many languages.", "Existing work on multilingual embeddings aligns word semantics on pre-trained vectors (Smith et al., 2017), which can be suboptimal due to polysemy.", "We envision that the multi-prototype nature of our embeddings can help disambiguate words with multiple meanings and facilitate semantic alignment." ] }
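As a back-of-envelope view of the covariance trade-off raised in the conclusion (our own arithmetic, not from the paper): per mixture component, a spherical covariance costs a single scale, a diagonal one costs D parameters, and a full symmetric covariance costs D(D+1)/2.

```python
D = 300  # mean-vector dimensionality of the larger models in this paper
print("spherical:", 1)                  # one shared scale per component
print("diagonal: ", D)                  # 300 parameters per component
print("full:     ", D * (D + 1) // 2)   # 45150 parameters per component
```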
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "3.5", "4", "4.1", "4.2", "4.3", "4.3.1", "4.3.2", "4.4", "4.5", "5", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Probabilistic FastText", "Probabilistic Subword Representation", "Similarity Measure between Words", "Loss Function", "Energy Simplification", "Word Sampling", "Experiments", "Training Details", "Qualitative Evaluation -Nearest neighbors", "Word Similarity Evaluation", "Comparison Against Dictionary-Level Density Embeddings and FASTTEXT", "Comparison Against Multi-Prototype Models", "Evaluation on Foreign Language Embeddings", "Qualitative Evaluation -Subword Decomposition", "Numbers of Components", "Conclusion and Future Work" ] }
GEM-SciDuet-train-72#paper-1163#slide-12
Spearman correlations
[Table (slide): Spearman correlations by WORDSIM dataset; columns: FASTTEXT, W2GM, PFT-GM; numeric values not recovered]
[Table (slide): Spearman correlations by WORDSIM dataset; columns: FASTTEXT, W2GM, PFT-GM; numeric values not recovered]
[]
GEM-SciDuet-train-72#paper-1163#slide-13
1163
Probabilistic FastText for Multi-Sense Word Embeddings
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191 ], "paper_content_text": [ "Introduction Word embeddings are foundational to natural language processing.", "In order to model language, we need word representations to contain as much semantic information as possible.", "Most research has focused on vector word embeddings, such as WORD2VEC (Mikolov et al., 2013a) , where words with similar meanings are mapped to nearby points in a vector space.", "Following the * Work done partly during internship at Amazon.", "seminal work of Mikolov et al.", "(2013a) , there have been numerous works looking to learn efficient word embeddings.", "One shortcoming with the above approaches to word embedding that are based on a predefined dictionary (termed as dictionary-based embeddings) is their inability to learn representations of rare words.", "To overcome this limitation, character-level word embeddings have been proposed.", "FASTTEXT (Bojanowski et al., 2016) is the state-of-the-art character-level approach to embeddings.", "In FASTTEXT, each word is modeled by a sum of vectors, with each vector representing an n-gram.", "The benefit of this approach is that the training process can then share strength across words composed of common roots.", "For example, with individual representations for \"circum\" and \"navigation\", we can construct an informative representation for \"circumnavigation\", which would otherwise appear too infrequently to learn a dictionary-level embedding.", "In addition to effectively modelling rare words, character-level embeddings can also represent slang or misspelled words, such as \"dogz\", and can share strength across different languages that share roots, e.g.", "Romance languages share latent roots.", "A different promising direction involves representing words with probability distributions, instead of point vectors.", "For example, Vilnis and McCallum (2014) represents words with Gaussian distributions, which can capture uncertainty information.", "Athiwaratkun and Wilson (2017) generalizes this approach to multimodal probability distributions, which can naturally represent words with different meanings.", "For example, the distribution for \"rock\" could have mass near the word \"jazz\" and \"pop\", but also \"stone\" and \"basalt\".", "Athiwaratkun and Wilson (2018) further developed this approach to learn hierarchical word representations: for example, the word \"music\" can be learned to have a broad distribution, which encapsulates the distributions for \"jazz\" and \"rock\".", "In this paper, we propose Probabilistic Fast-Text (PFT), which provides probabilistic characterlevel representations of words.", "The resulting word embeddings are highly expressive, 
yet straightforward and interpretable, with simple, efficient, and intuitive training procedures.", "PFT can model rare words, uncertainty information, hierarchical representations, and multiple word senses.", "In particular, we represent each word with a Gaussian or a Gaussian mixture density, which we name PFT-G and PFT-GM respectively.", "Each component of the mixture can represent different word senses, and the mean vectors of each component decompose into vectors of n-grams, to capture character-level information.", "We also derive an efficient energybased max-margin training procedure for PFT.", "We perform comparison with FASTTEXT as well as existing density word embeddings W2G (Gaussian) and W2GM (Gaussian mixture).", "Our models extract high-quality semantics based on multiple word-similarity benchmarks, including the rare word dataset.", "We obtain an average weighted improvement of 3.7% over FASTTEXT (Bojanowski et al., 2016) and 3.1% over the dictionary-level density-based models.", "We also observe meaningful nearest neighbors, particularly in the multimodal density case, where each mode captures a distinct meaning.", "Our models are also directly portable to foreign languages without any hyperparameter modification, where we observe strong performance, outperforming FAST-TEXT on many foreign word similarity datasets.", "Our multimodal word representation can also disentangle meanings, and is able to separate different senses in foreign polysemies.", "In particular, our models attain state-of-the-art performance on SCWS, a benchmark to measure the ability to separate different word meanings, achieving 1.0% improvement over a recent density embedding model W2GM (Athiwaratkun and Wilson, 2017) .", "To the best of our knowledge, we are the first to develop multi-sense embeddings with high semantic quality for rare words.", "Our code and embeddings are publicly available.", "1 Related Work Early word embeddings which capture semantic information include Bengio et al.", "(2003) , Col-1 https://github.com/benathi/multisense-prob-fasttext lobert and Weston (2008 ), and Mikolov et al.", "(2011 ).", "Later, Mikolov et al.", "(2013a developed the popular WORD2VEC method, which proposes a log-linear model and negative sampling approach that efficiently extracts rich semantics from text.", "Another popular approach GLOVE learns word embeddings by factorizing co-occurrence matrices (Pennington et al., 2014) .", "Recently there has been a surge of interest in making dictionary-based word embeddings more flexible.", "This flexibility has valuable applications in many end-tasks such as language modeling (Kim et al., 2016) , named entity recognition (Kuru et al., 2016) , and machine translation (Zhao and Zhang, 2016; Lee et al., 2017) , where unseen words are frequent and proper handling of these words can greatly improve the performance.", "These works focus on modeling subword information in neural networks for tasks such as language modeling.", "Besides vector embeddings, there is recent work on multi-prototype embeddings where each word is represented by multiple vectors.", "The learning approach involves using a cluster centroid of context vectors (Huang et al., 2012) , or adapting the skip-gram model to learn multiple latent representations (Tian et al., 2014) .", "Neelakantan et al.", "(2014) furthers adapts skip-gram with a non-parametric approach to learn the embeddings with an arbitrary number of senses per word.", "incorporates an external dataset WORDNET to learn sense vectors.", 
"We compare these models with our multimodal embeddings in Section 4.", "Probabilistic FastText We introduce Probabilistic FastText, which combines a probabilistic word representation with the ability to capture subword structure.", "We describe the probabilistic subword representation in Section 3.1.", "We then describe the similarity measure and the loss function used to train the embeddings in Sections 3.2 and 3.3.", "We conclude by briefly presenting a simplified version of the energy function for isotropic Gaussian representations (Section 3.4), and the negative sampling scheme we use in training (Section 3.5).", "Probabilistic Subword Representation We represent each word with a Gaussian mixture with K Gaussian components.", "That is, a word w is associated with a density function f ( x) = K i=1 p w,i N (x; µ w,i , Σ w,i ) where {µ w,i } K k=1 are the mean vectors and {Σ w,i } are the covariance matrices, and {p w,i } K k=1 are the component probabilities which sum to 1.", "The mean vectors of Gaussian components hold much of the semantic information in density embeddings.", "While these models are successful based on word similarity and entailment benchmarks (Vilnis and McCallum, 2014; Athiwaratkun and Wilson, 2017) , the mean vectors are often dictionary-level, which can lead to poor semantic estimates for rare words, or the inability to handle words outside the training corpus.", "We propose using subword structures to estimate the mean vectors.", "We outline the formulation below.", "For word w, we estimate the mean vector µ w with the average over n-gram vectors and its dictionary-level vector.", "That is, µ w = 1 |N G w | + 1   v w + g∈N Gw z g   (1) where z g is a vector associated with an n-gram g, v w is the dictionary representation of word w, and N G w is a set of n-grams of word w. 
Examples of 3,4-grams for a word \"beautiful\", including the beginning-of-word character ' ' and end-of-word character ' ', are: • 3-grams: be, bea, eau, aut, uti, tif, ful, ul • 4-grams: bea, beau .., iful ,ful This structure is similar to that of FASTTEXT (Bojanowski et al., 2016) ; however, we note that FASTTEXT uses single-prototype deterministic embeddings as well as a training approach that maximizes the negative log-likelihood, whereas we use a multi-prototype probabilistic embedding and for training we maximize the similarity between the words' probability densities, as described in Sections 3.2 and 3.3 Figure 1a depicts the subword structure for the mean vector.", "Figure 1b and 1c depict our models, Gaussian probabilistic FASTTEXT (PFT-G) and Gaussian mixture probabilistic FASTTEXT (PFT-GM).", "In the Gaussian case, we represent each mean vector with a subword estimation.", "For the Gaussian mixture case, we represent one Gaussian component's mean vector with the subword structure whereas other components' mean vectors are dictionary-based.", "This model choice to use dictionary-based mean vectors for other components is to reduce to constraint imposed by the subword structure and promote independence for meaning discovery.", "Similarity Measure between Words Traditionally, if words are represented by vectors, a common similarity metric is a dot product.", "In the case where words are represented by distribution functions, we use the generalized dot product in Hilbert space ·, · L 2 , which is called the expected likelihood kernel (Jebara et al., 2004) .", "We define the energy E(f, g) between two words f and g to be E(f, g) = log f, g L 2 = log f (x)g(x) dx.", "With Gaussian mixtures f (x) = K i=1 p i N (x; µ f,i , Σ f,i ) and g(x) = K i=1 q i N (x; µ g,i , Σ g,i ), K i=1 p i = 1, and K i=1 q i = 1 , the energy has a closed form: E(f, g) = log K j=1 K i=1 p i q j e ξ i,j (2) where ξ j,j is the partial energy which corresponds to the similarity between component i of the first word f and component j of the second word g. 2 ξ i,j ≡ log N (0; µ f,i − µ g,j , Σ f,i + Σ g,j ) = − 1 2 log det(Σ f,i + Σ g,j ) − D 2 log(2π) − 1 2 ( µ f,i − µ g,j ) (Σ f,i + Σ g,j ) −1 ( µ f,i − µ g,j ) (3) Figure 2 demonstrates the partial energies among the Gaussian components of two words.", "Interaction between GM components rock:0 pop:0 pop:1 rock:1 ⇠ 0,1 ⇠ 0,0 ⇠ 1,1 ⇠ 1, Loss Function The model parameters that we seek to learn are v w for each word w and z g for each n-gram g. We train the model by pushing the energy of a true context pair w and c to be higher than the negative context pair w and n by a margin m. 
We use Adagrad (Duchi et al., 2011) to minimize the following loss to achieve this outcome: L(f, g) = max [0, m − E(f, g) + E(f, n)] .", "(4) We describe how to sample words as well as its positive and negative contexts in Section 3.5.", "This loss function together with the Gaussian mixture model with K > 1 has the ability to extract multiple senses of words.", "That is, for a word with multiple meanings, we can observe each mode to represent a distinct meaning.", "For instance, one density mode of \"star\" is close to the densities of \"celebrity\" and \"hollywood\" whereas another mode of \"star\" is near the densities of \"constellation\" and \"galaxy\".", "Energy Simplification In theory, it can be beneficial to have covariance matrices as learnable parameters.", "In practice, Athiwaratkun and Wilson (2017) observe that spherical covariances often perform on par with diagonal covariances with much less computational resources.", "Using spherical covariances for each component, we can further simplify the energy function as follows: ξ i,j = − α 2 · ||µ f,i − µ g,j || 2 , (5) where the hyperparameter α is the scale of the inverse covariance term in Equation 3.", "We note that Equation 5 is equivalent to Equation 3 up to an additive constant given that the covariance matrices are spherical and the same for all components.", "Word Sampling To generate a context word c of a given word w, we pick a nearby word within a context window of a fixed length .", "We also use a word sampling technique similar to Mikolov et al.", "(2013b) .", "This subsampling procedure selects words for training with lower probabilities if they appear frequently.", "This technique has an effect of reducing the importance of words such as 'the', 'a', 'to' which can be predominant in a text corpus but are not as meaningful as other less frequent words such as 'city', 'capital', 'animal', etc.", "In particular, word w has probability P (w) = 1 − t/f (w) where f (w) is the frequency of word w in the corpus and t is the frequency threshold.", "A negative context word is selected using a distribution P n (w) ∝ U (w) 3/4 where U (w) is a unigram probability of word w. 
The exponent 3/4 also diminishes the importance of frequent words and shifts the training focus to other less frequent words.", "Experiments We have proposed a probabilistic FASTTEXT model which combines the flexibility of subword structure with the density embedding approach.", "In this section, we show that our probabilistic representation with subword mean vectors with the simplified energy function outperforms many word similarity baselines and provides disentangled meanings for polysemies.", "First, we describe the training details in Section 4.1.", "We provide qualitative evaluation in Section 4.2, showing meaningful nearest neighbors for the Gaussian embeddings, as well as the ability to capture multiple meanings by Gaussian mixtures.", "Our quantitative evaluation in Section 4.3 demonstrates strong performance against the baseline models FASTTEXT (Bojanowski et al., 2016) and the dictionary-level Gaussian (W2G) (Vilnis and McCallum, 2014) and Gaussian mixture embeddings (Athiwaratkun and Wilson, 2017) (W2GM).", "We train our models on foreign language corpuses and show competitive results on foreign word similarity benchmarks in Section 4.4.", "Finally, we explain the importance of the n-gram structures for semantic sharing in Section 4.5.", "Training Details We train our models on both English and foreign language datasets.", "For English, we use the concatenation of UKWAC and WACKYPEDIA (Baroni et al., 2009) which consists of 3.376 billion words.", "We filter out word types that occur fewer than 5 times which results in a vocabulary size of 2,677,466.", "For foreign languages, we demonstrate the training of our model on French, German, and Italian text corpuses.", "We note that our model should be applicable for other languages as well.", "We use FRWAC (French), DEWAC (German), ITWAC (Italian) datasets (Baroni et al., 2009 ) for text corpuses, consisting of 1.634, 1.716 and 1.955 billion words respectively.", "We use the same threshold, filtering out words that occur less than 5 times in each corpus.", "We have dictionary sizes of 1.3, 2.7, and 1.4 million words for FRWAC, DEWAC, and ITWAC.", "We adjust the hyperparameters on the English corpus and use them for foreign languages.", "Note that the adjustable parameters for our models are the loss margin m in Equation 4 and the scale α in Equation 5.", "We search for the optimal hyperparameters in a grid m ∈ {0.01, 0.1, 1, 10, 100} and α ∈ { 1 5×10 −3 , 1 10 −3 , 1 2×10 −4 , 1 1×10 −4 } on our English corpus.", "The hyperpameter α affects the scale of the loss function; therefore, we adjust the learning rate appropriately for each α.", "In particular, the learning rates used are γ = {10 −4 , 10 −5 , 10 −6 } for the respective α values.", "Other fixed hyperparameters include the number of Gaussian components K = 2, the context window length = 10 and the subsampling threshold t = 10 −5 .", "Similar to the setup in FAST-TEXT, we use n-grams where n = 3, 4, 5, 6 to estimate the mean vectors.", "Qualitative Evaluation -Nearest neighbors We show that our embeddings learn the word semantics well by demonstrating meaningful nearest neighbors.", "Table 1 shows examples of polysemous words such as rock, star, and cell.", "Table 1 shows the nearest neighbors of polysemous words.", "We note that subword embeddings prefer words with overlapping characters as nearest neighbors.", "For instance, \"rock-y\", \"rockn\", and \"rock\" are both close to the word \"rock\".", "For the purpose of demonstration, we only show words with meaningful 
variations and omit words with small character-based variations previously mentioned.", "However, all words shown are in the top-100 nearest words.", "We observe the separation in meanings for the multi-component case; for instance, one component of the word \"bank\" corresponds to a financial bank whereas the other component corresponds to a river bank.", "The single-component case also has interesting behavior.", "We observe that the subword embeddings of polysemous words can represent both meanings.", "For instance, both \"lava-rock\" and \"rock-pop\" are among the closest words to \"rock\".", "Word Similarity Evaluation We evaluate our embeddings on several standard word similarity datasets, namely, SL-999 (Hill et al., 2014) , WS-353 (Finkelstein et al., 2002) , MEN-3k (Bruni et al., 2014) , MC-30 (Miller and Charles, 1991) , RG-65 (Rubenstein and Goodenough, 1965) , YP-130 (Yang and Powers, 2006) , MTurk(-287,-771) (Radinsky et al., 2011; Halawi et al., 2012) , and RW-2k (Luong et al., 2013) .", "Each dataset contains a list of word pairs with a human score of how related or similar the two words are.", "We use the notation DATASET-NUM to denote the number of word pairs NUM in each evaluation set.", "We note that the dataset RW focuses more on infrequent words and SimLex-999 focuses on the similarity of words rather than relatedness.", "We also compare PFT-GM with other multi-prototype embeddings in the literature using SCWS (Huang et al., 2012) , a word similarity dataset that is aimed to measure the ability of embeddings to discern multiple meanings.", "We calculate the Spearman correlation (Spearman, 1904) between the labels and our scores gen-Word Co.", "Nearest Neighbors rock 0 rock:0, rocks:0, rocky:0, mudrock:0, rockscape:0, boulders:0 , coutcrops:0, rock 1 rock:1, punk:0, punk-rock:0, indie:0, pop-rock:0, pop-punk:0, indie-rock:0, band:1 bank 0 bank:0, banks:0, banker:0, bankers:0, bankcard:0, Citibank:0, debits:0 bank 1 bank:1, banks:1, river:0, riverbank:0, embanking:0, banks:0, confluence:1 star 0 stars:0, stellar:0, nebula:0, starspot:0, stars.", ":0, stellas:0, constellation:1 star 1 star:1, stars:1, star-star:0, 5-stars:0, movie-star:0, mega-star:0, super-star:0 cell 0 cell:0, cellular:0, acellular:0, lymphocytes:0, T-cells:0, cytes:0, leukocytes:0 cell 1 cell:1, cells:1, cellular:0, cellular-phone:0, cellphone:0, transcellular:0 left 0 left:0, right:1, left-hand:0, right-left:0, left-right-left:0, right-hand:0, leftwards:0 left 1 left:1, leaving:0, leavings:0, remained:0, leave:1, enmained:0, leaving-age:0, sadly-departed:0 Word Nearest Neighbors rock rock, rock-y, rockn, rock-, rock-funk, rock/, lava-rock, nu-rock, rock-pop, rock/ice, coral-rock bank bank-, bank/, bank-account, bank., banky, bank-to-bank, banking, Bank, bank/cash, banks.", "** star movie-stars, star-planet, starsailor, Star, starsign, cell/tumour, left/joined, leaving, left, right, right, left) and, leftsided, lefted, leftside erated by the embeddings.", "The Spearman correlation is a rank-based correlation measure that assesses how well the scores describe the true labels.", "The scores we use are cosine-similarity scores between the mean vectors.", "In the case of Gaussian mixtures, we use the pairwise maximum score: s(f, g) = max i∈1,...,K max j∈1,...,K µ f,i · µ g,j ||µ f,i || · ||µ g,j || .", "(6) The pair (i, j) that achieves the maximum cosine similarity corresponds to the Gaussian component pair that is the closest in meanings.", "Therefore, this similarity score yields the most related senses 
of a given word pair.", "This score reduces to a cosine similarity in the Gaussian case (K = 1).", "Comparison Against Dictionary-Level Density Embeddings and FASTTEXT We compare our models against the dictionarylevel Gaussian and Gaussian mixture embeddings in Table 2 , with 50-dimensional and 300dimensional mean vectors.", "The 50-dimensional results for W2G and W2GM are obtained directly from Athiwaratkun and Wilson (2017) .", "For comparison, we use the public code 3 to train the 300dimensional W2G and W2GM models and the publicly available FASTTEXT model 4 .", "We calculate Spearman's correlations for each of the word similarity datasets.", "These datasets vary greatly in the number of word pairs; therefore, we mark each dataset with its size for visibil-ity.", "For a fair and objective comparison, we calculate a weighted average of the correlation scores for each model.", "Our PFT-GM achieves the highest average score among all competing models, outperforming both FASTTEXT and the dictionary-level embeddings W2G and W2GM.", "Our unimodal model PFT-G also outperforms the dictionary-level counterpart W2G and FASTTEXT.", "We note that the model W2GM appears quite strong according to Table 2 , beating PFT-GM on many word similarity datasets.", "However, the datasets that W2GM performs better than PFT-GM often have small sizes such as MC-30 or RG-65, where the Spearman's correlations are more subject to noise.", "Overall, PFT-GM outperforms W2GM by 3.1% and 8.7% in 300 and 50 dimensional models.", "In addition, PFT-G and PFT-GM also outperform FASTTEXT by 1.2% and 3.7% respectively.", "Comparison Against Multi-Prototype Models In Table 3 , we compare 50 and 300 dimensional PFT-GM models against the multi-prototype embeddings described in Section 2 and the existing multimodal density embeddings W2GM.", "We use the word similarity dataset SCWS (Huang et al., 2012) which contains words with potentially many meanings, and is a benchmark for distinguishing senses.", "We use the maximum similarity score (Equation 6), denoted as MAXSIM.", "AVESIM denotes the average of the similarity scores, rather than the maximum.", "We outperform the dictionary-based density embeddings W2GM in both 50 and 300 dimensions, demonstrating the benefits of subword information.", "Our model achieves state-of-the-art results, similar to that of Neelakantan et al.", "(2014) .", "Evaluation on Foreign Language Embeddings We evaluate the foreign-language embeddings on word similarity datasets in respective languages.", "We use Italian WORDSIM353 and Italian SIMLEX-999 (Leviant and Reichart, 2015) for Italian models, GUR350 and GUR65 (Gurevych, 2005) for German models, and French WORD-SIM353 (Finkelstein et al., 2002) for French models.", "For datasets GUR350 and GUR65, we use the results reported in the FASTTEXT publication (Bojanowski et al., 2016) .", "For other datasets, we train FASTTEXT models for comparison using the public code 5 on our text corpuses.", "We also train dictionary-level models W2G, and W2GM for comparison.", "Table 4 shows the Spearman's correlation results of our models.", "We outperform FASTTEXT on many word similarity benchmarks.", "Our results are also significantly better than the dictionary-based models, W2G and W2GM.", "We hypothesize that W2G and W2GM can perform better than the current reported results given proper pre-processing of words due to special characters such as accents.", "We investigate the nearest neighbors of polysemies in foreign languages and also observe clear sense 
separation.", "For example, piano in Italian can mean \"floor\" or \"slow\".", "These two meanings are reflected in the nearest neighbors where one component is close to piano-piano, pianod which mean \"slowly\" whereas the other component is close to piani (floors), istrutturazione (renovation) or infrastruttre (infrastructure).", "Table 5 shows additional results, demonstrating that the disentangled semantics can be observed in multiple languages.", "Qualitative Evaluation -Subword Decomposition One of the motivations for using subword information is the ability to handle out-of-vocabulary words.", "Another benefit is the ability to help improve the semantics of rare words via subword sharing.", "Due to an observation that text corpuses follow Zipf's power law (Zipf, 1949) , words at the tail of the occurrence distribution appears much less frequently.", "Training these words to have a good semantic representation is challenging if done at the word level alone.", "However, an ngram such as 'abnorm' is trained during both occurrences of \"abnormal\" and \"abnormality\" in the corpus, hence further augments both words's semantics.", "Figure 3 shows the contribution of n-grams to the final representation.", "We filter out to show only the n-grams with the top-5 and bottom-5 similarity scores.", "We observe that the final representations of both words align with n-grams \"abno\", \"bnor\", \"abnorm\", \"anbnor\", \"<abn\".", "In fact, both \"abnormal\" and \"abnormality\" share the same top-5 n-grams.", "Due to the fact that many rare words such as \"autobiographer\", \"circumnavigations\", or \"hypersensitivity\" are composed from many common sub-words, the n-gram structure can help improve the representation quality.", "Numbers of Components It is possible to train our approach with K > 2 mixture components; however, Athiwaratkun and Wilson (2017) observe that dictionary-level Gaussian mixtures with K = 3 do not overall improve word similarity results, even though these mixtures can discover 3 distinct senses for certain words.", "Indeed, while K > 2 in principle allows for greater flexibility than K = 2, most words can be very flexibly modelled with a mixture of two Gaussians, leading to K = 2 representing a good balance between flexibility and Occam's razor.", "Even for words with single meanings, our PFT model with K = 2 often learns richer representations than a K = 1 model.", "For example, the two mixture components can learn to cluster to-gether to form a more heavy tailed unimodal distribution which captures a word with one dominant meaning but with close relationships to a wide range of other words.", "In addition, we observe that our model with K components can capture more than K meanings.", "For instance, in K = 1 model, the word pairs (\"cell\", \"jail\") and (\"cell\", \"biology\") and (\"cell\", \"phone\") will all have positive similarity scores based on K = 1 model.", "In general, if a word has multiple meanings, these meanings are usually compressed into the linear substructure of the embeddings (Arora et al., 2016) .", "However, the pairs of non-dominant words often have lower similarity scores, which might not accurately reflect their true similarities.", "Conclusion and Future Work We have proposed models for probabilistic word representations equipped with flexible sub-word structures, suitable for rare and out-of-vocabulary words.", "The proposed probabilistic formulation incorporates uncertainty information and naturally allows one to uncover multiple meanings with 
multimodal density representations.", "Our models offer better semantic quality, outperforming competing models on word similarity benchmarks.", "Moreover, our multimodal density models can provide interpretable and disentangled representations, and are the first multi-prototype embeddings that can handle rare words.", "Future work includes an investigation into the trade-off between learning full covariance matrices for each word distribution, computational complexity, and performance.", "This direction can potentially have a great impact on tasks where the variance information is crucial, such as for hierarchical modeling with probability distributions (Athiwaratkun and Wilson, 2018) .", "Other future work involves co-training PFT on many languages.", "Currently, existing work on multi-lingual embeddings align the word semantics on pre-trained vectors (Smith et al., 2017) , which can be suboptimal due to polysemies.", "We envision that the multi-prototype nature can help disambiguate words with multiple meanings and facilitate semantic alignment." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "3.5", "4", "4.1", "4.2", "4.3", "4.3.1", "4.3.2", "4.4", "4.5", "5", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Probabilistic FastText", "Probabilistic Subword Representation", "Similarity Measure between Words", "Loss Function", "Energy Simplification", "Word Sampling", "Experiments", "Training Details", "Qualitative Evaluation -Nearest neighbors", "Word Similarity Evaluation", "Comparison Against Dictionary-Level Density Embeddings and FASTTEXT", "Comparison Against Multi-Prototype Models", "Evaluation on Foreign Language Embeddings", "Qualitative Evaluation -Subword Decomposition", "Numbers of Components", "Conclusion and Future Work" ] }
GEM-SciDuet-train-72#paper-1163#slide-13
Comparison with other multi-prototype embeddings
PFT performs better than other multi-prototype embeddings on SCWS, a benchmark for word similarity with multiple meanings. [Table 3: Spearman's correlation ρ × 100 on the SCWS dataset: TIAN MAXSIM 63.6; W2GM MAXSIM 62.7; NEELAKANTAN AVGSIM 64.2; PFT-GM MAXSIM 63.…; CHEN-M AVGSIM 200 66.2; W2GM MAXSIM 200 65.5; NEELAKANTAN AVGSIM 300 67.… (trailing digits lost in extraction)]
PFT performs better than other multi-prototype embeddings on SCWS, a benchmark for word similarity with multiple meanings. [Table 3: Spearman's correlation ρ × 100 on the SCWS dataset: TIAN MAXSIM 63.6; W2GM MAXSIM 62.7; NEELAKANTAN AVGSIM 64.2; PFT-GM MAXSIM 63.…; CHEN-M AVGSIM 200 66.2; W2GM MAXSIM 200 65.5; NEELAKANTAN AVGSIM 300 67.… (trailing digits lost in extraction)]
[]
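The slide above mixes MAXSIM and AVGSIM rows from Table 3; under the same assumptions as the MAXSIM sketch earlier, AVGSIM simply averages rather than maximizes over the K x K component pairs.

```python
import numpy as np

def avg_sim(mu_f: np.ndarray, mu_g: np.ndarray) -> float:
    """AVGSIM: mean cosine similarity over all K x K component-mean pairs."""
    f = mu_f / np.linalg.norm(mu_f, axis=1, keepdims=True)
    g = mu_g / np.linalg.norm(mu_g, axis=1, keepdims=True)
    return float((f @ g.T).mean())
```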
GEM-SciDuet-train-72#paper-1163#slide-14
1163
Probabilistic FastText for Multi-Sense Word Embeddings
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191 ], "paper_content_text": [ "Introduction Word embeddings are foundational to natural language processing.", "In order to model language, we need word representations to contain as much semantic information as possible.", "Most research has focused on vector word embeddings, such as WORD2VEC (Mikolov et al., 2013a) , where words with similar meanings are mapped to nearby points in a vector space.", "Following the * Work done partly during internship at Amazon.", "seminal work of Mikolov et al.", "(2013a) , there have been numerous works looking to learn efficient word embeddings.", "One shortcoming with the above approaches to word embedding that are based on a predefined dictionary (termed as dictionary-based embeddings) is their inability to learn representations of rare words.", "To overcome this limitation, character-level word embeddings have been proposed.", "FASTTEXT (Bojanowski et al., 2016) is the state-of-the-art character-level approach to embeddings.", "In FASTTEXT, each word is modeled by a sum of vectors, with each vector representing an n-gram.", "The benefit of this approach is that the training process can then share strength across words composed of common roots.", "For example, with individual representations for \"circum\" and \"navigation\", we can construct an informative representation for \"circumnavigation\", which would otherwise appear too infrequently to learn a dictionary-level embedding.", "In addition to effectively modelling rare words, character-level embeddings can also represent slang or misspelled words, such as \"dogz\", and can share strength across different languages that share roots, e.g.", "Romance languages share latent roots.", "A different promising direction involves representing words with probability distributions, instead of point vectors.", "For example, Vilnis and McCallum (2014) represents words with Gaussian distributions, which can capture uncertainty information.", "Athiwaratkun and Wilson (2017) generalizes this approach to multimodal probability distributions, which can naturally represent words with different meanings.", "For example, the distribution for \"rock\" could have mass near the word \"jazz\" and \"pop\", but also \"stone\" and \"basalt\".", "Athiwaratkun and Wilson (2018) further developed this approach to learn hierarchical word representations: for example, the word \"music\" can be learned to have a broad distribution, which encapsulates the distributions for \"jazz\" and \"rock\".", "In this paper, we propose Probabilistic Fast-Text (PFT), which provides probabilistic characterlevel representations of words.", "The resulting word embeddings are highly expressive, 
yet straightforward and interpretable, with simple, efficient, and intuitive training procedures.", "PFT can model rare words, uncertainty information, hierarchical representations, and multiple word senses.", "In particular, we represent each word with a Gaussian or a Gaussian mixture density, which we name PFT-G and PFT-GM respectively.", "Each component of the mixture can represent different word senses, and the mean vectors of each component decompose into vectors of n-grams, to capture character-level information.", "We also derive an efficient energybased max-margin training procedure for PFT.", "We perform comparison with FASTTEXT as well as existing density word embeddings W2G (Gaussian) and W2GM (Gaussian mixture).", "Our models extract high-quality semantics based on multiple word-similarity benchmarks, including the rare word dataset.", "We obtain an average weighted improvement of 3.7% over FASTTEXT (Bojanowski et al., 2016) and 3.1% over the dictionary-level density-based models.", "We also observe meaningful nearest neighbors, particularly in the multimodal density case, where each mode captures a distinct meaning.", "Our models are also directly portable to foreign languages without any hyperparameter modification, where we observe strong performance, outperforming FAST-TEXT on many foreign word similarity datasets.", "Our multimodal word representation can also disentangle meanings, and is able to separate different senses in foreign polysemies.", "In particular, our models attain state-of-the-art performance on SCWS, a benchmark to measure the ability to separate different word meanings, achieving 1.0% improvement over a recent density embedding model W2GM (Athiwaratkun and Wilson, 2017) .", "To the best of our knowledge, we are the first to develop multi-sense embeddings with high semantic quality for rare words.", "Our code and embeddings are publicly available.", "1 Related Work Early word embeddings which capture semantic information include Bengio et al.", "(2003) , Col-1 https://github.com/benathi/multisense-prob-fasttext lobert and Weston (2008 ), and Mikolov et al.", "(2011 ).", "Later, Mikolov et al.", "(2013a developed the popular WORD2VEC method, which proposes a log-linear model and negative sampling approach that efficiently extracts rich semantics from text.", "Another popular approach GLOVE learns word embeddings by factorizing co-occurrence matrices (Pennington et al., 2014) .", "Recently there has been a surge of interest in making dictionary-based word embeddings more flexible.", "This flexibility has valuable applications in many end-tasks such as language modeling (Kim et al., 2016) , named entity recognition (Kuru et al., 2016) , and machine translation (Zhao and Zhang, 2016; Lee et al., 2017) , where unseen words are frequent and proper handling of these words can greatly improve the performance.", "These works focus on modeling subword information in neural networks for tasks such as language modeling.", "Besides vector embeddings, there is recent work on multi-prototype embeddings where each word is represented by multiple vectors.", "The learning approach involves using a cluster centroid of context vectors (Huang et al., 2012) , or adapting the skip-gram model to learn multiple latent representations (Tian et al., 2014) .", "Neelakantan et al.", "(2014) furthers adapts skip-gram with a non-parametric approach to learn the embeddings with an arbitrary number of senses per word.", "incorporates an external dataset WORDNET to learn sense vectors.", 
"We compare these models with our multimodal embeddings in Section 4.", "Probabilistic FastText We introduce Probabilistic FastText, which combines a probabilistic word representation with the ability to capture subword structure.", "We describe the probabilistic subword representation in Section 3.1.", "We then describe the similarity measure and the loss function used to train the embeddings in Sections 3.2 and 3.3.", "We conclude by briefly presenting a simplified version of the energy function for isotropic Gaussian representations (Section 3.4), and the negative sampling scheme we use in training (Section 3.5).", "Probabilistic Subword Representation We represent each word with a Gaussian mixture with K Gaussian components.", "That is, a word w is associated with a density function f ( x) = K i=1 p w,i N (x; µ w,i , Σ w,i ) where {µ w,i } K k=1 are the mean vectors and {Σ w,i } are the covariance matrices, and {p w,i } K k=1 are the component probabilities which sum to 1.", "The mean vectors of Gaussian components hold much of the semantic information in density embeddings.", "While these models are successful based on word similarity and entailment benchmarks (Vilnis and McCallum, 2014; Athiwaratkun and Wilson, 2017) , the mean vectors are often dictionary-level, which can lead to poor semantic estimates for rare words, or the inability to handle words outside the training corpus.", "We propose using subword structures to estimate the mean vectors.", "We outline the formulation below.", "For word w, we estimate the mean vector µ w with the average over n-gram vectors and its dictionary-level vector.", "That is, µ w = 1 |N G w | + 1   v w + g∈N Gw z g   (1) where z g is a vector associated with an n-gram g, v w is the dictionary representation of word w, and N G w is a set of n-grams of word w. 
Examples of 3,4-grams for a word \"beautiful\", including the beginning-of-word character ' ' and end-of-word character ' ', are: • 3-grams: be, bea, eau, aut, uti, tif, ful, ul • 4-grams: bea, beau .., iful ,ful This structure is similar to that of FASTTEXT (Bojanowski et al., 2016) ; however, we note that FASTTEXT uses single-prototype deterministic embeddings as well as a training approach that maximizes the negative log-likelihood, whereas we use a multi-prototype probabilistic embedding and for training we maximize the similarity between the words' probability densities, as described in Sections 3.2 and 3.3 Figure 1a depicts the subword structure for the mean vector.", "Figure 1b and 1c depict our models, Gaussian probabilistic FASTTEXT (PFT-G) and Gaussian mixture probabilistic FASTTEXT (PFT-GM).", "In the Gaussian case, we represent each mean vector with a subword estimation.", "For the Gaussian mixture case, we represent one Gaussian component's mean vector with the subword structure whereas other components' mean vectors are dictionary-based.", "This model choice to use dictionary-based mean vectors for other components is to reduce to constraint imposed by the subword structure and promote independence for meaning discovery.", "Similarity Measure between Words Traditionally, if words are represented by vectors, a common similarity metric is a dot product.", "In the case where words are represented by distribution functions, we use the generalized dot product in Hilbert space ·, · L 2 , which is called the expected likelihood kernel (Jebara et al., 2004) .", "We define the energy E(f, g) between two words f and g to be E(f, g) = log f, g L 2 = log f (x)g(x) dx.", "With Gaussian mixtures f (x) = K i=1 p i N (x; µ f,i , Σ f,i ) and g(x) = K i=1 q i N (x; µ g,i , Σ g,i ), K i=1 p i = 1, and K i=1 q i = 1 , the energy has a closed form: E(f, g) = log K j=1 K i=1 p i q j e ξ i,j (2) where ξ j,j is the partial energy which corresponds to the similarity between component i of the first word f and component j of the second word g. 2 ξ i,j ≡ log N (0; µ f,i − µ g,j , Σ f,i + Σ g,j ) = − 1 2 log det(Σ f,i + Σ g,j ) − D 2 log(2π) − 1 2 ( µ f,i − µ g,j ) (Σ f,i + Σ g,j ) −1 ( µ f,i − µ g,j ) (3) Figure 2 demonstrates the partial energies among the Gaussian components of two words.", "Interaction between GM components rock:0 pop:0 pop:1 rock:1 ⇠ 0,1 ⇠ 0,0 ⇠ 1,1 ⇠ 1, Loss Function The model parameters that we seek to learn are v w for each word w and z g for each n-gram g. We train the model by pushing the energy of a true context pair w and c to be higher than the negative context pair w and n by a margin m. 
We use Adagrad (Duchi et al., 2011) to minimize the following loss to achieve this outcome: L(f, g) = max [0, m − E(f, g) + E(f, n)] .", "(4) We describe how to sample words as well as its positive and negative contexts in Section 3.5.", "This loss function together with the Gaussian mixture model with K > 1 has the ability to extract multiple senses of words.", "That is, for a word with multiple meanings, we can observe each mode to represent a distinct meaning.", "For instance, one density mode of \"star\" is close to the densities of \"celebrity\" and \"hollywood\" whereas another mode of \"star\" is near the densities of \"constellation\" and \"galaxy\".", "Energy Simplification In theory, it can be beneficial to have covariance matrices as learnable parameters.", "In practice, Athiwaratkun and Wilson (2017) observe that spherical covariances often perform on par with diagonal covariances with much less computational resources.", "Using spherical covariances for each component, we can further simplify the energy function as follows: ξ i,j = − α 2 · ||µ f,i − µ g,j || 2 , (5) where the hyperparameter α is the scale of the inverse covariance term in Equation 3.", "We note that Equation 5 is equivalent to Equation 3 up to an additive constant given that the covariance matrices are spherical and the same for all components.", "Word Sampling To generate a context word c of a given word w, we pick a nearby word within a context window of a fixed length .", "We also use a word sampling technique similar to Mikolov et al.", "(2013b) .", "This subsampling procedure selects words for training with lower probabilities if they appear frequently.", "This technique has an effect of reducing the importance of words such as 'the', 'a', 'to' which can be predominant in a text corpus but are not as meaningful as other less frequent words such as 'city', 'capital', 'animal', etc.", "In particular, word w has probability P (w) = 1 − t/f (w) where f (w) is the frequency of word w in the corpus and t is the frequency threshold.", "A negative context word is selected using a distribution P n (w) ∝ U (w) 3/4 where U (w) is a unigram probability of word w. 
The exponent 3/4 also diminishes the importance of frequent words and shifts the training focus to other less frequent words.", "Experiments We have proposed a probabilistic FASTTEXT model which combines the flexibility of subword structure with the density embedding approach.", "In this section, we show that our probabilistic representation with subword mean vectors with the simplified energy function outperforms many word similarity baselines and provides disentangled meanings for polysemies.", "First, we describe the training details in Section 4.1.", "We provide qualitative evaluation in Section 4.2, showing meaningful nearest neighbors for the Gaussian embeddings, as well as the ability to capture multiple meanings by Gaussian mixtures.", "Our quantitative evaluation in Section 4.3 demonstrates strong performance against the baseline models FASTTEXT (Bojanowski et al., 2016) and the dictionary-level Gaussian (W2G) (Vilnis and McCallum, 2014) and Gaussian mixture embeddings (Athiwaratkun and Wilson, 2017) (W2GM).", "We train our models on foreign language corpuses and show competitive results on foreign word similarity benchmarks in Section 4.4.", "Finally, we explain the importance of the n-gram structures for semantic sharing in Section 4.5.", "Training Details We train our models on both English and foreign language datasets.", "For English, we use the concatenation of UKWAC and WACKYPEDIA (Baroni et al., 2009) which consists of 3.376 billion words.", "We filter out word types that occur fewer than 5 times which results in a vocabulary size of 2,677,466.", "For foreign languages, we demonstrate the training of our model on French, German, and Italian text corpuses.", "We note that our model should be applicable for other languages as well.", "We use FRWAC (French), DEWAC (German), ITWAC (Italian) datasets (Baroni et al., 2009 ) for text corpuses, consisting of 1.634, 1.716 and 1.955 billion words respectively.", "We use the same threshold, filtering out words that occur less than 5 times in each corpus.", "We have dictionary sizes of 1.3, 2.7, and 1.4 million words for FRWAC, DEWAC, and ITWAC.", "We adjust the hyperparameters on the English corpus and use them for foreign languages.", "Note that the adjustable parameters for our models are the loss margin m in Equation 4 and the scale α in Equation 5.", "We search for the optimal hyperparameters in a grid m ∈ {0.01, 0.1, 1, 10, 100} and α ∈ { 1 5×10 −3 , 1 10 −3 , 1 2×10 −4 , 1 1×10 −4 } on our English corpus.", "The hyperpameter α affects the scale of the loss function; therefore, we adjust the learning rate appropriately for each α.", "In particular, the learning rates used are γ = {10 −4 , 10 −5 , 10 −6 } for the respective α values.", "Other fixed hyperparameters include the number of Gaussian components K = 2, the context window length = 10 and the subsampling threshold t = 10 −5 .", "Similar to the setup in FAST-TEXT, we use n-grams where n = 3, 4, 5, 6 to estimate the mean vectors.", "Qualitative Evaluation -Nearest neighbors We show that our embeddings learn the word semantics well by demonstrating meaningful nearest neighbors.", "Table 1 shows examples of polysemous words such as rock, star, and cell.", "Table 1 shows the nearest neighbors of polysemous words.", "We note that subword embeddings prefer words with overlapping characters as nearest neighbors.", "For instance, \"rock-y\", \"rockn\", and \"rock\" are both close to the word \"rock\".", "For the purpose of demonstration, we only show words with meaningful 
variations and omit words with small character-based variations previously mentioned.", "However, all words shown are in the top-100 nearest words.", "We observe the separation in meanings for the multi-component case; for instance, one component of the word \"bank\" corresponds to a financial bank whereas the other component corresponds to a river bank.", "The single-component case also has interesting behavior.", "We observe that the subword embeddings of polysemous words can represent both meanings.", "For instance, both \"lava-rock\" and \"rock-pop\" are among the closest words to \"rock\".", "Word Similarity Evaluation We evaluate our embeddings on several standard word similarity datasets, namely, SL-999 (Hill et al., 2014) , WS-353 (Finkelstein et al., 2002) , MEN-3k (Bruni et al., 2014) , MC-30 (Miller and Charles, 1991) , RG-65 (Rubenstein and Goodenough, 1965) , YP-130 (Yang and Powers, 2006) , MTurk(-287,-771) (Radinsky et al., 2011; Halawi et al., 2012) , and RW-2k (Luong et al., 2013) .", "Each dataset contains a list of word pairs with a human score of how related or similar the two words are.", "We use the notation DATASET-NUM to denote the number of word pairs NUM in each evaluation set.", "We note that the dataset RW focuses more on infrequent words and SimLex-999 focuses on the similarity of words rather than relatedness.", "We also compare PFT-GM with other multi-prototype embeddings in the literature using SCWS (Huang et al., 2012) , a word similarity dataset that is aimed to measure the ability of embeddings to discern multiple meanings.", "We calculate the Spearman correlation (Spearman, 1904) between the labels and our scores gen-Word Co.", "Nearest Neighbors rock 0 rock:0, rocks:0, rocky:0, mudrock:0, rockscape:0, boulders:0 , coutcrops:0, rock 1 rock:1, punk:0, punk-rock:0, indie:0, pop-rock:0, pop-punk:0, indie-rock:0, band:1 bank 0 bank:0, banks:0, banker:0, bankers:0, bankcard:0, Citibank:0, debits:0 bank 1 bank:1, banks:1, river:0, riverbank:0, embanking:0, banks:0, confluence:1 star 0 stars:0, stellar:0, nebula:0, starspot:0, stars.", ":0, stellas:0, constellation:1 star 1 star:1, stars:1, star-star:0, 5-stars:0, movie-star:0, mega-star:0, super-star:0 cell 0 cell:0, cellular:0, acellular:0, lymphocytes:0, T-cells:0, cytes:0, leukocytes:0 cell 1 cell:1, cells:1, cellular:0, cellular-phone:0, cellphone:0, transcellular:0 left 0 left:0, right:1, left-hand:0, right-left:0, left-right-left:0, right-hand:0, leftwards:0 left 1 left:1, leaving:0, leavings:0, remained:0, leave:1, enmained:0, leaving-age:0, sadly-departed:0 Word Nearest Neighbors rock rock, rock-y, rockn, rock-, rock-funk, rock/, lava-rock, nu-rock, rock-pop, rock/ice, coral-rock bank bank-, bank/, bank-account, bank., banky, bank-to-bank, banking, Bank, bank/cash, banks.", "** star movie-stars, star-planet, starsailor, Star, starsign, cell/tumour, left/joined, leaving, left, right, right, left) and, leftsided, lefted, leftside erated by the embeddings.", "The Spearman correlation is a rank-based correlation measure that assesses how well the scores describe the true labels.", "The scores we use are cosine-similarity scores between the mean vectors.", "In the case of Gaussian mixtures, we use the pairwise maximum score: s(f, g) = max i∈1,...,K max j∈1,...,K µ f,i · µ g,j ||µ f,i || · ||µ g,j || .", "(6) The pair (i, j) that achieves the maximum cosine similarity corresponds to the Gaussian component pair that is the closest in meanings.", "Therefore, this similarity score yields the most related senses 
of a given word pair.", "This score reduces to a cosine similarity in the Gaussian case (K = 1).", "Comparison Against Dictionary-Level Density Embeddings and FASTTEXT We compare our models against the dictionary-level Gaussian and Gaussian mixture embeddings in Table 2, with 50-dimensional and 300-dimensional mean vectors.", "The 50-dimensional results for W2G and W2GM are obtained directly from Athiwaratkun and Wilson (2017).", "For comparison, we use the public code 3 to train the 300-dimensional W2G and W2GM models and the publicly available FASTTEXT model 4.", "We calculate Spearman's correlations for each of the word similarity datasets.", "These datasets vary greatly in the number of word pairs; therefore, we mark each dataset with its size for visibility.", "For a fair and objective comparison, we calculate a weighted average of the correlation scores for each model.", "Our PFT-GM achieves the highest average score among all competing models, outperforming both FASTTEXT and the dictionary-level embeddings W2G and W2GM.", "Our unimodal model PFT-G also outperforms the dictionary-level counterpart W2G and FASTTEXT.", "We note that the model W2GM appears quite strong according to Table 2, beating PFT-GM on many word similarity datasets.", "However, the datasets on which W2GM performs better than PFT-GM are often small, such as MC-30 or RG-65, where the Spearman's correlations are more subject to noise.", "Overall, PFT-GM outperforms W2GM by 3.1% and 8.7% in the 300- and 50-dimensional models.", "In addition, PFT-G and PFT-GM also outperform FASTTEXT by 1.2% and 3.7%, respectively.", "Comparison Against Multi-Prototype Models In Table 3, we compare the 50- and 300-dimensional PFT-GM models against the multi-prototype embeddings described in Section 2 and the existing multimodal density embeddings W2GM.", "We use the word similarity dataset SCWS (Huang et al., 2012), which contains words with potentially many meanings and is a benchmark for distinguishing senses.", "We use the maximum similarity score (Equation 6), denoted as MAXSIM.", "AVESIM denotes the average of the similarity scores, rather than the maximum.", "We outperform the dictionary-based density embeddings W2GM in both 50 and 300 dimensions, demonstrating the benefits of subword information.", "Our model achieves state-of-the-art results, similar to those of Neelakantan et al.", "(2014).", "Evaluation on Foreign Language Embeddings We evaluate the foreign-language embeddings on word similarity datasets in the respective languages.", "We use Italian WORDSIM353 and Italian SIMLEX-999 (Leviant and Reichart, 2015) for Italian models, GUR350 and GUR65 (Gurevych, 2005) for German models, and French WORDSIM353 (Finkelstein et al., 2002) for French models.", "For datasets GUR350 and GUR65, we use the results reported in the FASTTEXT publication (Bojanowski et al., 2016).", "For other datasets, we train FASTTEXT models for comparison using the public code 5 on our text corpuses.", "We also train the dictionary-level models W2G and W2GM for comparison.", "Table 4 shows the Spearman's correlation results of our models.", "We outperform FASTTEXT on many word similarity benchmarks.", "Our results are also significantly better than those of the dictionary-based models, W2G and W2GM.", "We hypothesize that W2G and W2GM could perform better than the currently reported results given proper pre-processing of words with special characters such as accents.", "We investigate the nearest neighbors of polysemies in foreign languages and also observe clear sense
separation.", "For example, piano in Italian can mean \"floor\" or \"slow\".", "These two meanings are reflected in the nearest neighbors where one component is close to piano-piano, pianod which mean \"slowly\" whereas the other component is close to piani (floors), istrutturazione (renovation) or infrastruttre (infrastructure).", "Table 5 shows additional results, demonstrating that the disentangled semantics can be observed in multiple languages.", "Qualitative Evaluation -Subword Decomposition One of the motivations for using subword information is the ability to handle out-of-vocabulary words.", "Another benefit is the ability to help improve the semantics of rare words via subword sharing.", "Due to an observation that text corpuses follow Zipf's power law (Zipf, 1949) , words at the tail of the occurrence distribution appears much less frequently.", "Training these words to have a good semantic representation is challenging if done at the word level alone.", "However, an ngram such as 'abnorm' is trained during both occurrences of \"abnormal\" and \"abnormality\" in the corpus, hence further augments both words's semantics.", "Figure 3 shows the contribution of n-grams to the final representation.", "We filter out to show only the n-grams with the top-5 and bottom-5 similarity scores.", "We observe that the final representations of both words align with n-grams \"abno\", \"bnor\", \"abnorm\", \"anbnor\", \"<abn\".", "In fact, both \"abnormal\" and \"abnormality\" share the same top-5 n-grams.", "Due to the fact that many rare words such as \"autobiographer\", \"circumnavigations\", or \"hypersensitivity\" are composed from many common sub-words, the n-gram structure can help improve the representation quality.", "Numbers of Components It is possible to train our approach with K > 2 mixture components; however, Athiwaratkun and Wilson (2017) observe that dictionary-level Gaussian mixtures with K = 3 do not overall improve word similarity results, even though these mixtures can discover 3 distinct senses for certain words.", "Indeed, while K > 2 in principle allows for greater flexibility than K = 2, most words can be very flexibly modelled with a mixture of two Gaussians, leading to K = 2 representing a good balance between flexibility and Occam's razor.", "Even for words with single meanings, our PFT model with K = 2 often learns richer representations than a K = 1 model.", "For example, the two mixture components can learn to cluster to-gether to form a more heavy tailed unimodal distribution which captures a word with one dominant meaning but with close relationships to a wide range of other words.", "In addition, we observe that our model with K components can capture more than K meanings.", "For instance, in K = 1 model, the word pairs (\"cell\", \"jail\") and (\"cell\", \"biology\") and (\"cell\", \"phone\") will all have positive similarity scores based on K = 1 model.", "In general, if a word has multiple meanings, these meanings are usually compressed into the linear substructure of the embeddings (Arora et al., 2016) .", "However, the pairs of non-dominant words often have lower similarity scores, which might not accurately reflect their true similarities.", "Conclusion and Future Work We have proposed models for probabilistic word representations equipped with flexible sub-word structures, suitable for rare and out-of-vocabulary words.", "The proposed probabilistic formulation incorporates uncertainty information and naturally allows one to uncover multiple meanings with 
multimodal density representations.", "Our models offer better semantic quality, outperforming competing models on word similarity benchmarks.", "Moreover, our multimodal density models can provide interpretable and disentangled representations, and are the first multi-prototype embeddings that can handle rare words.", "Future work includes an investigation into the trade-off between learning full covariance matrices for each word distribution, computational complexity, and performance.", "This direction can potentially have a great impact on tasks where the variance information is crucial, such as for hierarchical modeling with probability distributions (Athiwaratkun and Wilson, 2018).", "Other future work involves co-training PFT on many languages.", "Currently, existing work on multi-lingual embeddings aligns word semantics on pre-trained vectors (Smith et al., 2017), which can be suboptimal due to polysemies.", "We envision that the multi-prototype nature can help disambiguate words with multiple meanings and facilitate semantic alignment." ] }
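The MaxSim score in Equation 6 above is straightforward to compute. Below is a minimal sketch, assuming each word's embedding is available as a list of K component mean vectors; the function name and data layout are illustrative, not taken from the authors' released code.

```python
# Minimal sketch of the MaxSim score (Equation 6), assuming each word is
# represented by a list of K component mean vectors (numpy arrays).
import numpy as np

def max_sim(mus_f, mus_g):
    """Maximum cosine similarity over all pairs of mixture components."""
    best = -np.inf
    for mu_i in mus_f:
        for mu_j in mus_g:
            cos = mu_i @ mu_j / (np.linalg.norm(mu_i) * np.linalg.norm(mu_j))
            best = max(best, cos)
    return best
```

For K = 1 this reduces to the plain cosine similarity between the two mean vectors, matching the Gaussian (W2G / PFT-G) case described in the text.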
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "3.5", "4", "4.1", "4.2", "4.3", "4.3.1", "4.3.2", "4.4", "4.5", "5", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Probabilistic FastText", "Probabilistic Subword Representation", "Similarity Measure between Words", "Loss Function", "Energy Simplification", "Word Sampling", "Experiments", "Training Details", "Qualitative Evaluation -Nearest neighbors", "Word Similarity Evaluation", "Comparison Against Dictionary-Level Density Embeddings and FASTTEXT", "Comparison Against Multi-Prototype Models", "Evaluation on Foreign Language Embeddings", "Qualitative Evaluation -Subword Decomposition", "Numbers of Components", "Conclusion and Future Work" ] }
GEM-SciDuet-train-72#paper-1163#slide-14
Foreign language embeddings
Table 5: Nearest neighbors of polysemies based on our foreign-language PFT-GM models (Word | Meaning | Nearest Neighbors): (IT) secondo | according to | conformit (compliance), attenendosi (following), cui (which), conformemente (accordance with); (IT) porta | lead, bring | portano (lead), conduce (leads), portano, porter, portando (bring), costringe (forces); (IT) porta | door | porte (doors), finestrella (window), finestra (window), portone (doorway), serratura (door lock); (FR) voile | veil | voiles (veil), voiler (veil), voilent (veil), voilement, foulard (scarf), voils (veils), voilant (veiling); (FR) voile | sail | catamaran (catamaran), driveur (driver), nautiques (water), Voile (sail), driveurs (drivers); (FR) temps | weather | brouillard (fog), orageuses (stormy), nuageux (cloudy); (FR) temps | time | mi-temps (half-time), partiel (partial), Temps (time), annualis (annualized), horaires (schedule); (FR) voler | steal | envoler (fly), voleuse (thief), cambrioler (burgle), voleur (thief), violer (violate), picoler (tipple); (FR) voler | fly | airs (air), vol (flight), volent (fly), envoler (flying), atterrir (land). Table 4: Word similarity evaluation on foreign languages.
Table 5: Nearest neighbors of polysemies based on our foreign-language PFT-GM models (Word | Meaning | Nearest Neighbors): (IT) secondo | according to | conformit (compliance), attenendosi (following), cui (which), conformemente (accordance with); (IT) porta | lead, bring | portano (lead), conduce (leads), portano, porter, portando (bring), costringe (forces); (IT) porta | door | porte (doors), finestrella (window), finestra (window), portone (doorway), serratura (door lock); (FR) voile | veil | voiles (veil), voiler (veil), voilent (veil), voilement, foulard (scarf), voils (veils), voilant (veiling); (FR) voile | sail | catamaran (catamaran), driveur (driver), nautiques (water), Voile (sail), driveurs (drivers); (FR) temps | weather | brouillard (fog), orageuses (stormy), nuageux (cloudy); (FR) temps | time | mi-temps (half-time), partiel (partial), Temps (time), annualis (annualized), horaires (schedule); (FR) voler | steal | envoler (fly), voleuse (thief), cambrioler (burgle), voleur (thief), violer (violate), picoler (tipple); (FR) voler | fly | airs (air), vol (flight), volent (fly), envoler (flying), atterrir (land). Table 4: Word similarity evaluation on foreign languages.
[]
GEM-SciDuet-train-72#paper-1163#slide-15
1163
Probabilistic FastText for Multi-Sense Word Embeddings
We introduce Probabilistic FastText, a new model for word embeddings that can capture multiple word senses, sub-word structure, and uncertainty information. In particular, we represent each word with a Gaussian mixture density, where the mean of a mixture component is given by the sum of n-grams. This representation allows the model to share statistical strength across sub-word structures (e.g. Latin roots), producing accurate representations of rare, misspelt, or even unseen words. Moreover, each component of the mixture can capture a different word sense. Probabilistic FastText outperforms both FASTTEXT, which has no probabilistic model, and dictionary-level probabilistic embeddings, which do not incorporate subword structures, on several word-similarity benchmarks, including English RareWord and foreign language datasets. We also achieve state-of-the-art performance on benchmarks that measure the ability to discern different meanings. Thus, the proposed model is the first to achieve multi-sense representations while having enriched semantics on rare words.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191 ], "paper_content_text": [ "Introduction Word embeddings are foundational to natural language processing.", "In order to model language, we need word representations to contain as much semantic information as possible.", "Most research has focused on vector word embeddings, such as WORD2VEC (Mikolov et al., 2013a) , where words with similar meanings are mapped to nearby points in a vector space.", "Following the * Work done partly during internship at Amazon.", "seminal work of Mikolov et al.", "(2013a) , there have been numerous works looking to learn efficient word embeddings.", "One shortcoming with the above approaches to word embedding that are based on a predefined dictionary (termed as dictionary-based embeddings) is their inability to learn representations of rare words.", "To overcome this limitation, character-level word embeddings have been proposed.", "FASTTEXT (Bojanowski et al., 2016) is the state-of-the-art character-level approach to embeddings.", "In FASTTEXT, each word is modeled by a sum of vectors, with each vector representing an n-gram.", "The benefit of this approach is that the training process can then share strength across words composed of common roots.", "For example, with individual representations for \"circum\" and \"navigation\", we can construct an informative representation for \"circumnavigation\", which would otherwise appear too infrequently to learn a dictionary-level embedding.", "In addition to effectively modelling rare words, character-level embeddings can also represent slang or misspelled words, such as \"dogz\", and can share strength across different languages that share roots, e.g.", "Romance languages share latent roots.", "A different promising direction involves representing words with probability distributions, instead of point vectors.", "For example, Vilnis and McCallum (2014) represents words with Gaussian distributions, which can capture uncertainty information.", "Athiwaratkun and Wilson (2017) generalizes this approach to multimodal probability distributions, which can naturally represent words with different meanings.", "For example, the distribution for \"rock\" could have mass near the word \"jazz\" and \"pop\", but also \"stone\" and \"basalt\".", "Athiwaratkun and Wilson (2018) further developed this approach to learn hierarchical word representations: for example, the word \"music\" can be learned to have a broad distribution, which encapsulates the distributions for \"jazz\" and \"rock\".", "In this paper, we propose Probabilistic Fast-Text (PFT), which provides probabilistic characterlevel representations of words.", "The resulting word embeddings are highly expressive, 
yet straightforward and interpretable, with simple, efficient, and intuitive training procedures.", "PFT can model rare words, uncertainty information, hierarchical representations, and multiple word senses.", "In particular, we represent each word with a Gaussian or a Gaussian mixture density, which we name PFT-G and PFT-GM respectively.", "Each component of the mixture can represent different word senses, and the mean vectors of each component decompose into vectors of n-grams, to capture character-level information.", "We also derive an efficient energy-based max-margin training procedure for PFT.", "We compare against FASTTEXT as well as the existing density word embeddings W2G (Gaussian) and W2GM (Gaussian mixture).", "Our models extract high-quality semantics based on multiple word-similarity benchmarks, including the rare word dataset.", "We obtain an average weighted improvement of 3.7% over FASTTEXT (Bojanowski et al., 2016) and 3.1% over the dictionary-level density-based models.", "We also observe meaningful nearest neighbors, particularly in the multimodal density case, where each mode captures a distinct meaning.", "Our models are also directly portable to foreign languages without any hyperparameter modification, where we observe strong performance, outperforming FASTTEXT on many foreign word similarity datasets.", "Our multimodal word representation can also disentangle meanings, and is able to separate different senses in foreign polysemies.", "In particular, our models attain state-of-the-art performance on SCWS, a benchmark that measures the ability to separate different word meanings, achieving a 1.0% improvement over a recent density embedding model, W2GM (Athiwaratkun and Wilson, 2017).", "To the best of our knowledge, we are the first to develop multi-sense embeddings with high semantic quality for rare words.", "Our code and embeddings are publicly available at https://github.com/benathi/multisense-prob-fasttext.", "Related Work Early word embeddings which capture semantic information include Bengio et al.", "(2003), Collobert and Weston (2008), and Mikolov et al.", "(2011).", "Later, Mikolov et al.", "(2013a) developed the popular WORD2VEC method, which proposes a log-linear model and negative sampling approach that efficiently extracts rich semantics from text.", "Another popular approach, GLOVE, learns word embeddings by factorizing co-occurrence matrices (Pennington et al., 2014).", "Recently, there has been a surge of interest in making dictionary-based word embeddings more flexible.", "This flexibility has valuable applications in many end-tasks such as language modeling (Kim et al., 2016), named entity recognition (Kuru et al., 2016), and machine translation (Zhao and Zhang, 2016; Lee et al., 2017), where unseen words are frequent and proper handling of these words can greatly improve performance.", "These works focus on modeling subword information in neural networks for tasks such as language modeling.", "Besides vector embeddings, there is recent work on multi-prototype embeddings, where each word is represented by multiple vectors.", "The learning approach involves using a cluster centroid of context vectors (Huang et al., 2012), or adapting the skip-gram model to learn multiple latent representations (Tian et al., 2014).", "Neelakantan et al.", "(2014) further adapts skip-gram with a non-parametric approach to learn embeddings with an arbitrary number of senses per word.", "Further work incorporates an external dataset, WORDNET, to learn sense vectors.",
"We compare these models with our multimodal embeddings in Section 4.", "Probabilistic FastText We introduce Probabilistic FastText, which combines a probabilistic word representation with the ability to capture subword structure.", "We describe the probabilistic subword representation in Section 3.1.", "We then describe the similarity measure and the loss function used to train the embeddings in Sections 3.2 and 3.3.", "We conclude by briefly presenting a simplified version of the energy function for isotropic Gaussian representations (Section 3.4), and the negative sampling scheme we use in training (Section 3.5).", "Probabilistic Subword Representation We represent each word with a Gaussian mixture with K Gaussian components.", "That is, a word w is associated with a density function f ( x) = K i=1 p w,i N (x; µ w,i , Σ w,i ) where {µ w,i } K k=1 are the mean vectors and {Σ w,i } are the covariance matrices, and {p w,i } K k=1 are the component probabilities which sum to 1.", "The mean vectors of Gaussian components hold much of the semantic information in density embeddings.", "While these models are successful based on word similarity and entailment benchmarks (Vilnis and McCallum, 2014; Athiwaratkun and Wilson, 2017) , the mean vectors are often dictionary-level, which can lead to poor semantic estimates for rare words, or the inability to handle words outside the training corpus.", "We propose using subword structures to estimate the mean vectors.", "We outline the formulation below.", "For word w, we estimate the mean vector µ w with the average over n-gram vectors and its dictionary-level vector.", "That is, µ w = 1 |N G w | + 1   v w + g∈N Gw z g   (1) where z g is a vector associated with an n-gram g, v w is the dictionary representation of word w, and N G w is a set of n-grams of word w. 
Examples of 3- and 4-grams for the word \"beautiful\", including the beginning-of-word character '<' and end-of-word character '>', are: • 3-grams: <be, bea, eau, aut, uti, tif, ful, ul> • 4-grams: <bea, beau, ..., iful, ful>. This structure is similar to that of FASTTEXT (Bojanowski et al., 2016); however, we note that FASTTEXT uses single-prototype deterministic embeddings as well as a training approach that maximizes the negative log-likelihood, whereas we use a multi-prototype probabilistic embedding and for training we maximize the similarity between the words' probability densities, as described in Sections 3.2 and 3.3.", "Figure 1a depicts the subword structure for the mean vector.", "Figures 1b and 1c depict our models, Gaussian probabilistic FASTTEXT (PFT-G) and Gaussian mixture probabilistic FASTTEXT (PFT-GM).", "In the Gaussian case, we represent each mean vector with a subword estimation.", "For the Gaussian mixture case, we represent one Gaussian component's mean vector with the subword structure, whereas the other components' mean vectors are dictionary-based.", "This choice to use dictionary-based mean vectors for the other components is to reduce the constraint imposed by the subword structure and to promote independence in meaning discovery.", "Similarity Measure between Words Traditionally, if words are represented by vectors, a common similarity metric is the dot product.", "In the case where words are represented by distribution functions, we use the generalized dot product in Hilbert space, $\langle \cdot, \cdot \rangle_{L_2}$, which is called the expected likelihood kernel (Jebara et al., 2004).", "We define the energy E(f, g) between two words f and g to be $E(f, g) = \log \langle f, g \rangle_{L_2} = \log \int f(x) g(x) \, dx$.", "With Gaussian mixtures $f(x) = \sum_{i=1}^{K} p_i \, \mathcal{N}(x; \mu_{f,i}, \Sigma_{f,i})$ and $g(x) = \sum_{i=1}^{K} q_i \, \mathcal{N}(x; \mu_{g,i}, \Sigma_{g,i})$, where $\sum_{i=1}^{K} p_i = 1$ and $\sum_{i=1}^{K} q_i = 1$, the energy has a closed form: $E(f, g) = \log \sum_{j=1}^{K} \sum_{i=1}^{K} p_i q_j e^{\xi_{i,j}}$ (2), where $\xi_{i,j}$ is the partial energy which corresponds to the similarity between component i of the first word f and component j of the second word g: $\xi_{i,j} \equiv \log \mathcal{N}(0; \mu_{f,i} - \mu_{g,j}, \Sigma_{f,i} + \Sigma_{g,j}) = -\frac{1}{2} \log \det(\Sigma_{f,i} + \Sigma_{g,j}) - \frac{D}{2} \log(2\pi) - \frac{1}{2} (\mu_{f,i} - \mu_{g,j})^{\top} (\Sigma_{f,i} + \Sigma_{g,j})^{-1} (\mu_{f,i} - \mu_{g,j})$ (3).", "Figure 2 demonstrates the partial energies among the Gaussian components of two words.", "[Figure 2: Interaction between the Gaussian mixture components of \"rock\" and \"pop\", showing the partial energies $\xi_{0,0}$, $\xi_{0,1}$, $\xi_{1,0}$, $\xi_{1,1}$ between component pairs.]", "Loss Function The model parameters that we seek to learn are $v_w$ for each word w and $z_g$ for each n-gram g. We train the model by pushing the energy of a true context pair w and c to be higher than that of a negative context pair w and n by a margin m.
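The closed-form energy in Equations 2-3 can be written compactly with a log-sum-exp; the sketch below assumes spherical covariances shared across components (the simplified setting the paper adopts in Section 3.4) and uses illustrative variable names, not the authors' released code.

```python
# Sketch of the expected likelihood kernel energy (Equations 2-3) for
# spherical covariances Sigma = var * I.
import numpy as np
from scipy.special import logsumexp

def partial_energy(mu_f, mu_g, var_f, var_g):
    """xi_{i,j} = log N(0; mu_f - mu_g, (var_f + var_g) I)  (Equation 3)."""
    d = mu_f.shape[0]
    s = var_f + var_g                      # Sigma_{f,i} + Sigma_{g,j} = s * I
    diff = mu_f - mu_g
    return (-0.5 * d * np.log(s)           # -1/2 log det(s * I)
            - 0.5 * d * np.log(2 * np.pi)
            - 0.5 * (diff @ diff) / s)     # Mahalanobis term

def energy(mus_f, mus_g, p, q, var=1.0):
    """E(f, g) = log sum_{i,j} p_i q_j exp(xi_{i,j})  (Equation 2)."""
    K = len(p)
    xi = np.array([[partial_energy(mus_f[i], mus_g[j], var, var)
                    for j in range(K)] for i in range(K)])
    return logsumexp(xi + np.log(np.outer(p, q)))
```

The log-sum-exp keeps the computation numerically stable even when the partial energies are large and negative.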
We use Adagrad (Duchi et al., 2011) to minimize the following loss to achieve this outcome: $L(f, g) = \max[0, m - E(f, g) + E(f, n)]$ (4).", "We describe how to sample words as well as their positive and negative contexts in Section 3.5.", "This loss function, together with the Gaussian mixture model with K > 1, has the ability to extract multiple senses of words.", "That is, for a word with multiple meanings, we can observe each mode to represent a distinct meaning.", "For instance, one density mode of \"star\" is close to the densities of \"celebrity\" and \"hollywood\", whereas another mode of \"star\" is near the densities of \"constellation\" and \"galaxy\".", "Energy Simplification In theory, it can be beneficial to have covariance matrices as learnable parameters.", "In practice, Athiwaratkun and Wilson (2017) observe that spherical covariances often perform on par with diagonal covariances at a much lower computational cost.", "Using spherical covariances for each component, we can further simplify the energy function as follows: $\xi_{i,j} = -\frac{\alpha}{2} \, \|\mu_{f,i} - \mu_{g,j}\|^2$ (5), where the hyperparameter $\alpha$ is the scale of the inverse covariance term in Equation 3.", "We note that Equation 5 is equivalent to Equation 3 up to an additive constant, given that the covariance matrices are spherical and the same for all components.", "Word Sampling To generate a context word c for a given word w, we pick a nearby word within a context window of a fixed length $\ell$.", "We also use a word sampling technique similar to Mikolov et al.", "(2013b).", "This subsampling procedure selects words for training with lower probabilities if they appear frequently.", "This technique has the effect of reducing the importance of words such as 'the', 'a', and 'to', which can be predominant in a text corpus but are not as meaningful as other less frequent words such as 'city', 'capital', 'animal', etc.", "In particular, word w is discarded with probability $P(w) = 1 - \sqrt{t/f(w)}$, where $f(w)$ is the frequency of word w in the corpus and t is the frequency threshold.", "A negative context word is selected using the distribution $P_n(w) \propto U(w)^{3/4}$, where $U(w)$ is the unigram probability of word w.
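A minimal sketch of the max-margin objective (Equation 4) and the two sampling heuristics follows; `energy` refers to the function sketched above, and the remaining names are assumptions made for illustration, not the paper's training code.

```python
# Sketch of the max-margin loss (Equation 4) and the word-sampling heuristics.
import numpy as np

def margin_loss(E_pos, E_neg, m=1.0):
    """L(f, g) = max(0, m - E(f, g) + E(f, n))  (Equation 4)."""
    return max(0.0, m - E_pos + E_neg)

def discard_prob(freq, t=1e-5):
    """Subsampling: discard word w with probability 1 - sqrt(t / f(w))."""
    return max(0.0, 1.0 - np.sqrt(t / freq))

def negative_dist(unigram_probs):
    """Negative sampling distribution P_n(w) proportional to U(w)^(3/4)."""
    u = np.asarray(unigram_probs, dtype=float) ** 0.75
    return u / u.sum()
```

The loss is zero once the true context pair out-scores the negative pair by the margin m, so gradient updates concentrate on pairs the model still confuses.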
The exponent 3/4 also diminishes the importance of frequent words and shifts the training focus to other, less frequent words.", "Experiments We have proposed a probabilistic FASTTEXT model which combines the flexibility of subword structure with the density embedding approach.", "In this section, we show that our probabilistic representation, with subword mean vectors and the simplified energy function, outperforms many word similarity baselines and provides disentangled meanings for polysemies.", "First, we describe the training details in Section 4.1.", "We provide qualitative evaluation in Section 4.2, showing meaningful nearest neighbors for the Gaussian embeddings, as well as the ability to capture multiple meanings by Gaussian mixtures.", "Our quantitative evaluation in Section 4.3 demonstrates strong performance against the baseline models FASTTEXT (Bojanowski et al., 2016) and the dictionary-level Gaussian (W2G) (Vilnis and McCallum, 2014) and Gaussian mixture (W2GM) (Athiwaratkun and Wilson, 2017) embeddings.", "We train our models on foreign language corpuses and show competitive results on foreign word similarity benchmarks in Section 4.4.", "Finally, we explain the importance of the n-gram structures for semantic sharing in Section 4.5.", "Training Details We train our models on both English and foreign language datasets.", "For English, we use the concatenation of UKWAC and WACKYPEDIA (Baroni et al., 2009), which consists of 3.376 billion words.", "We filter out word types that occur fewer than 5 times, which results in a vocabulary size of 2,677,466.", "For foreign languages, we demonstrate the training of our model on French, German, and Italian text corpuses.", "We note that our model should be applicable to other languages as well.", "We use the FRWAC (French), DEWAC (German), and ITWAC (Italian) datasets (Baroni et al., 2009) as text corpuses, consisting of 1.634, 1.716, and 1.955 billion words, respectively.", "We use the same threshold, filtering out words that occur fewer than 5 times in each corpus.", "We have dictionary sizes of 1.3, 2.7, and 1.4 million words for FRWAC, DEWAC, and ITWAC.", "We adjust the hyperparameters on the English corpus and use them for the foreign languages.", "Note that the adjustable parameters for our models are the loss margin m in Equation 4 and the scale $\alpha$ in Equation 5.", "We search for the optimal hyperparameters in a grid $m \in \{0.01, 0.1, 1, 10, 100\}$ and $\alpha \in \{\frac{1}{5 \times 10^{-3}}, \frac{1}{10^{-3}}, \frac{1}{2 \times 10^{-4}}, \frac{1}{1 \times 10^{-4}}\}$ on our English corpus.", "The hyperparameter $\alpha$ affects the scale of the loss function; therefore, we adjust the learning rate appropriately for each $\alpha$.", "In particular, the learning rates used are $\gamma \in \{10^{-4}, 10^{-5}, 10^{-6}\}$ for the respective $\alpha$ values.", "Other fixed hyperparameters include the number of Gaussian components K = 2, the context window length $\ell = 10$, and the subsampling threshold $t = 10^{-5}$.", "Similar to the setup in FASTTEXT, we use n-grams with n = 3, 4, 5, 6 to estimate the mean vectors.", "Qualitative Evaluation -Nearest neighbors We show that our embeddings learn the word semantics well by demonstrating meaningful nearest neighbors.", "Table 1 shows the nearest neighbors of polysemous words such as rock, star, and cell.", "We note that subword embeddings prefer words with overlapping characters as nearest neighbors.", "For instance, \"rock-y\", \"rockn\", and \"rock\" are all close to the word \"rock\".", "For the purpose of demonstration, we only show words with meaningful
variations and omit words with small character-based variations previously mentioned.", "However, all words shown are in the top-100 nearest words.", "We observe the separation in meanings for the multi-component case; for instance, one component of the word \"bank\" corresponds to a financial bank whereas the other component corresponds to a river bank.", "The single-component case also has interesting behavior.", "We observe that the subword embeddings of polysemous words can represent both meanings.", "For instance, both \"lava-rock\" and \"rock-pop\" are among the closest words to \"rock\".", "Word Similarity Evaluation We evaluate our embeddings on several standard word similarity datasets, namely SL-999 (Hill et al., 2014), WS-353 (Finkelstein et al., 2002), MEN-3k (Bruni et al., 2014), MC-30 (Miller and Charles, 1991), RG-65 (Rubenstein and Goodenough, 1965), YP-130 (Yang and Powers, 2006), MTurk(-287, -771) (Radinsky et al., 2011; Halawi et al., 2012), and RW-2k (Luong et al., 2013).", "Each dataset contains a list of word pairs with a human score of how related or similar the two words are.", "We use the notation DATASET-NUM to denote the number of word pairs NUM in each evaluation set.", "We note that the dataset RW focuses more on infrequent words and SimLex-999 focuses on the similarity of words rather than relatedness.", "We also compare PFT-GM with other multi-prototype embeddings in the literature using SCWS (Huang et al., 2012), a word similarity dataset aimed at measuring the ability of embeddings to discern multiple meanings.", "We calculate the Spearman correlation (Spearman, 1904) between the labels and our scores generated by the embeddings.", "[Table 1, nearest neighbors of PFT-GM (top, by word and component) and PFT-G (bottom). PFT-GM: rock:0 → rocks:0, rocky:0, mudrock:0, rockscape:0, boulders:0, coutcrops:0; rock:1 → punk:0, punk-rock:0, indie:0, pop-rock:0, pop-punk:0, indie-rock:0, band:1; bank:0 → banks:0, banker:0, bankers:0, bankcard:0, Citibank:0, debits:0; bank:1 → banks:1, river:0, riverbank:0, embanking:0, banks:0, confluence:1; star:0 → stars:0, stellar:0, nebula:0, starspot:0, stars.:0, stellas:0, constellation:1; star:1 → stars:1, star-star:0, 5-stars:0, movie-star:0, mega-star:0, super-star:0; cell:0 → cellular:0, acellular:0, lymphocytes:0, T-cells:0, cytes:0, leukocytes:0; cell:1 → cells:1, cellular:0, cellular-phone:0, cellphone:0, transcellular:0; left:0 → right:1, left-hand:0, right-left:0, left-right-left:0, right-hand:0, leftwards:0; left:1 → leaving:0, leavings:0, remained:0, leave:1, enmained:0, leaving-age:0, sadly-departed:0. PFT-G: rock → rock-y, rockn, rock-, rock-funk, rock/, lava-rock, nu-rock, rock-pop, rock/ice, coral-rock; bank → bank-, bank/, bank-account, bank., banky, bank-to-bank, banking, Bank, bank/cash, banks.; star → movie-stars, star-planet, starsailor, Star, starsign; cell → cell/tumour; left → left/joined, leaving, left, right, leftsided, lefted, leftside.]", "The Spearman correlation is a rank-based correlation measure that assesses how well the scores describe the true labels.", "The scores we use are cosine-similarity scores between the mean vectors.", "In the case of Gaussian mixtures, we use the pairwise maximum score: $s(f, g) = \max_{i \in 1, \ldots, K} \max_{j \in 1, \ldots, K} \frac{\mu_{f,i} \cdot \mu_{g,j}}{\|\mu_{f,i}\| \, \|\mu_{g,j}\|}$ (6).", "The pair (i, j) that achieves the maximum cosine similarity corresponds to the Gaussian component pair that is the closest in meanings.", "Therefore, this similarity score yields the most related senses
of a given word pair.", "This score reduces to a cosine similarity in the Gaussian case (K = 1).", "Comparison Against Dictionary-Level Density Embeddings and FASTTEXT We compare our models against the dictionary-level Gaussian and Gaussian mixture embeddings in Table 2, with 50-dimensional and 300-dimensional mean vectors.", "The 50-dimensional results for W2G and W2GM are obtained directly from Athiwaratkun and Wilson (2017).", "For comparison, we use the public code 3 to train the 300-dimensional W2G and W2GM models and the publicly available FASTTEXT model 4.", "We calculate Spearman's correlations for each of the word similarity datasets.", "These datasets vary greatly in the number of word pairs; therefore, we mark each dataset with its size for visibility.", "For a fair and objective comparison, we calculate a weighted average of the correlation scores for each model.", "Our PFT-GM achieves the highest average score among all competing models, outperforming both FASTTEXT and the dictionary-level embeddings W2G and W2GM.", "Our unimodal model PFT-G also outperforms the dictionary-level counterpart W2G and FASTTEXT.", "We note that the model W2GM appears quite strong according to Table 2, beating PFT-GM on many word similarity datasets.", "However, the datasets on which W2GM performs better than PFT-GM are often small, such as MC-30 or RG-65, where the Spearman's correlations are more subject to noise.", "Overall, PFT-GM outperforms W2GM by 3.1% and 8.7% in the 300- and 50-dimensional models.", "In addition, PFT-G and PFT-GM also outperform FASTTEXT by 1.2% and 3.7%, respectively.", "Comparison Against Multi-Prototype Models In Table 3, we compare the 50- and 300-dimensional PFT-GM models against the multi-prototype embeddings described in Section 2 and the existing multimodal density embeddings W2GM.", "We use the word similarity dataset SCWS (Huang et al., 2012), which contains words with potentially many meanings and is a benchmark for distinguishing senses.", "We use the maximum similarity score (Equation 6), denoted as MAXSIM.", "AVESIM denotes the average of the similarity scores, rather than the maximum.", "We outperform the dictionary-based density embeddings W2GM in both 50 and 300 dimensions, demonstrating the benefits of subword information.", "Our model achieves state-of-the-art results, similar to those of Neelakantan et al.", "(2014).", "Evaluation on Foreign Language Embeddings We evaluate the foreign-language embeddings on word similarity datasets in the respective languages.", "We use Italian WORDSIM353 and Italian SIMLEX-999 (Leviant and Reichart, 2015) for Italian models, GUR350 and GUR65 (Gurevych, 2005) for German models, and French WORDSIM353 (Finkelstein et al., 2002) for French models.", "For datasets GUR350 and GUR65, we use the results reported in the FASTTEXT publication (Bojanowski et al., 2016).", "For other datasets, we train FASTTEXT models for comparison using the public code 5 on our text corpuses.", "We also train the dictionary-level models W2G and W2GM for comparison.", "Table 4 shows the Spearman's correlation results of our models.", "We outperform FASTTEXT on many word similarity benchmarks.", "Our results are also significantly better than those of the dictionary-based models, W2G and W2GM.", "We hypothesize that W2G and W2GM could perform better than the currently reported results given proper pre-processing of words with special characters such as accents.", "We investigate the nearest neighbors of polysemies in foreign languages and also observe clear sense
separation.", "For example, piano in Italian can mean \"floor\" or \"slow\".", "These two meanings are reflected in the nearest neighbors where one component is close to piano-piano, pianod which mean \"slowly\" whereas the other component is close to piani (floors), istrutturazione (renovation) or infrastruttre (infrastructure).", "Table 5 shows additional results, demonstrating that the disentangled semantics can be observed in multiple languages.", "Qualitative Evaluation -Subword Decomposition One of the motivations for using subword information is the ability to handle out-of-vocabulary words.", "Another benefit is the ability to help improve the semantics of rare words via subword sharing.", "Due to an observation that text corpuses follow Zipf's power law (Zipf, 1949) , words at the tail of the occurrence distribution appears much less frequently.", "Training these words to have a good semantic representation is challenging if done at the word level alone.", "However, an ngram such as 'abnorm' is trained during both occurrences of \"abnormal\" and \"abnormality\" in the corpus, hence further augments both words's semantics.", "Figure 3 shows the contribution of n-grams to the final representation.", "We filter out to show only the n-grams with the top-5 and bottom-5 similarity scores.", "We observe that the final representations of both words align with n-grams \"abno\", \"bnor\", \"abnorm\", \"anbnor\", \"<abn\".", "In fact, both \"abnormal\" and \"abnormality\" share the same top-5 n-grams.", "Due to the fact that many rare words such as \"autobiographer\", \"circumnavigations\", or \"hypersensitivity\" are composed from many common sub-words, the n-gram structure can help improve the representation quality.", "Numbers of Components It is possible to train our approach with K > 2 mixture components; however, Athiwaratkun and Wilson (2017) observe that dictionary-level Gaussian mixtures with K = 3 do not overall improve word similarity results, even though these mixtures can discover 3 distinct senses for certain words.", "Indeed, while K > 2 in principle allows for greater flexibility than K = 2, most words can be very flexibly modelled with a mixture of two Gaussians, leading to K = 2 representing a good balance between flexibility and Occam's razor.", "Even for words with single meanings, our PFT model with K = 2 often learns richer representations than a K = 1 model.", "For example, the two mixture components can learn to cluster to-gether to form a more heavy tailed unimodal distribution which captures a word with one dominant meaning but with close relationships to a wide range of other words.", "In addition, we observe that our model with K components can capture more than K meanings.", "For instance, in K = 1 model, the word pairs (\"cell\", \"jail\") and (\"cell\", \"biology\") and (\"cell\", \"phone\") will all have positive similarity scores based on K = 1 model.", "In general, if a word has multiple meanings, these meanings are usually compressed into the linear substructure of the embeddings (Arora et al., 2016) .", "However, the pairs of non-dominant words often have lower similarity scores, which might not accurately reflect their true similarities.", "Conclusion and Future Work We have proposed models for probabilistic word representations equipped with flexible sub-word structures, suitable for rare and out-of-vocabulary words.", "The proposed probabilistic formulation incorporates uncertainty information and naturally allows one to uncover multiple meanings with 
multimodal density representations.", "Our models offer better semantic quality, outperforming competing models on word similarity benchmarks.", "Moreover, our multimodal density models can provide interpretable and disentangled representations, and are the first multi-prototype embeddings that can handle rare words.", "Future work includes an investigation into the trade-off between learning full covariance matrices for each word distribution, computational complexity, and performance.", "This direction can potentially have a great impact on tasks where the variance information is crucial, such as for hierarchical modeling with probability distributions (Athiwaratkun and Wilson, 2018).", "Other future work involves co-training PFT on many languages.", "Currently, existing work on multi-lingual embeddings aligns word semantics on pre-trained vectors (Smith et al., 2017), which can be suboptimal due to polysemies.", "We envision that the multi-prototype nature can help disambiguate words with multiple meanings and facilitate semantic alignment." ] }
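The word similarity protocol described above (Sections 4.3-4.4) amounts to ranking model scores against human labels. A hedged sketch, reusing the `max_sim` function from the earlier snippet and assuming a word-to-component-means lookup, is:

```python
# Sketch of the word similarity evaluation: Spearman's rank correlation
# between human judgments and model MaxSim scores.
from scipy.stats import spearmanr

def evaluate_similarity(pairs, human_scores, mean_vectors):
    """pairs: list of (w1, w2); mean_vectors: word -> list of component means."""
    model_scores = [max_sim(mean_vectors[w1], mean_vectors[w2])
                    for w1, w2 in pairs]
    rho, _ = spearmanr(human_scores, model_scores)
    return rho
```

Because Spearman's correlation is rank-based, only the ordering of the scores matters, so the absolute scale of the cosine similarities has no effect on the reported numbers.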
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "3.5", "4", "4.1", "4.2", "4.3", "4.3.1", "4.3.2", "4.4", "4.5", "5", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Probabilistic FastText", "Probabilistic Subword Representation", "Similarity Measure between Words", "Loss Function", "Energy Simplification", "Word Sampling", "Experiments", "Training Details", "Qualitative Evaluation -Nearest neighbors", "Word Similarity Evaluation", "Comparison Against Dictionary-Level Density Embeddings and FASTTEXT", "Comparison Against Multi-Prototype Models", "Evaluation on Foreign Language Embeddings", "Qualitative Evaluation -Subword Decomposition", "Numbers of Components", "Conclusion and Future Work" ] }
GEM-SciDuet-train-72#paper-1163#slide-15
Future work: multi-lingual embeddings
Literature: align embeddings of many languages after training. Use disentangled embeddings to disambiguate alignment.
Literature: align embeddings of many languages after training. Use disentangled embeddings to disambiguate alignment.
[]
GEM-SciDuet-train-72#paper-1163#slide-16
1163
Probabilistic FastText for Multi-Sense Word Embeddings
We introduce Probabilistic FastText, a new model for word embeddings that can capture multiple word senses, sub-word structure, and uncertainty information. In particular, we represent each word with a Gaussian mixture density, where the mean of a mixture component is given by the sum of n-grams. This representation allows the model to share statistical strength across sub-word structures (e.g. Latin roots), producing accurate representations of rare, misspelt, or even unseen words. Moreover, each component of the mixture can capture a different word sense. Probabilistic FastText outperforms both FASTTEXT, which has no probabilistic model, and dictionary-level probabilistic embeddings, which do not incorporate subword structures, on several word-similarity benchmarks, including English RareWord and foreign language datasets. We also achieve state-of-the-art performance on benchmarks that measure the ability to discern different meanings. Thus, the proposed model is the first to achieve multi-sense representations while having enriched semantics on rare words.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191 ], "paper_content_text": [ "Introduction Word embeddings are foundational to natural language processing.", "In order to model language, we need word representations to contain as much semantic information as possible.", "Most research has focused on vector word embeddings, such as WORD2VEC (Mikolov et al., 2013a) , where words with similar meanings are mapped to nearby points in a vector space.", "Following the * Work done partly during internship at Amazon.", "seminal work of Mikolov et al.", "(2013a) , there have been numerous works looking to learn efficient word embeddings.", "One shortcoming with the above approaches to word embedding that are based on a predefined dictionary (termed as dictionary-based embeddings) is their inability to learn representations of rare words.", "To overcome this limitation, character-level word embeddings have been proposed.", "FASTTEXT (Bojanowski et al., 2016) is the state-of-the-art character-level approach to embeddings.", "In FASTTEXT, each word is modeled by a sum of vectors, with each vector representing an n-gram.", "The benefit of this approach is that the training process can then share strength across words composed of common roots.", "For example, with individual representations for \"circum\" and \"navigation\", we can construct an informative representation for \"circumnavigation\", which would otherwise appear too infrequently to learn a dictionary-level embedding.", "In addition to effectively modelling rare words, character-level embeddings can also represent slang or misspelled words, such as \"dogz\", and can share strength across different languages that share roots, e.g.", "Romance languages share latent roots.", "A different promising direction involves representing words with probability distributions, instead of point vectors.", "For example, Vilnis and McCallum (2014) represents words with Gaussian distributions, which can capture uncertainty information.", "Athiwaratkun and Wilson (2017) generalizes this approach to multimodal probability distributions, which can naturally represent words with different meanings.", "For example, the distribution for \"rock\" could have mass near the word \"jazz\" and \"pop\", but also \"stone\" and \"basalt\".", "Athiwaratkun and Wilson (2018) further developed this approach to learn hierarchical word representations: for example, the word \"music\" can be learned to have a broad distribution, which encapsulates the distributions for \"jazz\" and \"rock\".", "In this paper, we propose Probabilistic Fast-Text (PFT), which provides probabilistic characterlevel representations of words.", "The resulting word embeddings are highly expressive, 
yet straightforward and interpretable, with simple, efficient, and intuitive training procedures.", "PFT can model rare words, uncertainty information, hierarchical representations, and multiple word senses.", "In particular, we represent each word with a Gaussian or a Gaussian mixture density, which we name PFT-G and PFT-GM respectively.", "Each component of the mixture can represent different word senses, and the mean vectors of each component decompose into vectors of n-grams, to capture character-level information.", "We also derive an efficient energy-based max-margin training procedure for PFT.", "We compare against FASTTEXT as well as the existing density word embeddings W2G (Gaussian) and W2GM (Gaussian mixture).", "Our models extract high-quality semantics based on multiple word-similarity benchmarks, including the rare word dataset.", "We obtain an average weighted improvement of 3.7% over FASTTEXT (Bojanowski et al., 2016) and 3.1% over the dictionary-level density-based models.", "We also observe meaningful nearest neighbors, particularly in the multimodal density case, where each mode captures a distinct meaning.", "Our models are also directly portable to foreign languages without any hyperparameter modification, where we observe strong performance, outperforming FASTTEXT on many foreign word similarity datasets.", "Our multimodal word representation can also disentangle meanings, and is able to separate different senses in foreign polysemies.", "In particular, our models attain state-of-the-art performance on SCWS, a benchmark that measures the ability to separate different word meanings, achieving a 1.0% improvement over a recent density embedding model, W2GM (Athiwaratkun and Wilson, 2017).", "To the best of our knowledge, we are the first to develop multi-sense embeddings with high semantic quality for rare words.", "Our code and embeddings are publicly available at https://github.com/benathi/multisense-prob-fasttext.", "Related Work Early word embeddings which capture semantic information include Bengio et al.", "(2003), Collobert and Weston (2008), and Mikolov et al.", "(2011).", "Later, Mikolov et al.", "(2013a) developed the popular WORD2VEC method, which proposes a log-linear model and negative sampling approach that efficiently extracts rich semantics from text.", "Another popular approach, GLOVE, learns word embeddings by factorizing co-occurrence matrices (Pennington et al., 2014).", "Recently, there has been a surge of interest in making dictionary-based word embeddings more flexible.", "This flexibility has valuable applications in many end-tasks such as language modeling (Kim et al., 2016), named entity recognition (Kuru et al., 2016), and machine translation (Zhao and Zhang, 2016; Lee et al., 2017), where unseen words are frequent and proper handling of these words can greatly improve performance.", "These works focus on modeling subword information in neural networks for tasks such as language modeling.", "Besides vector embeddings, there is recent work on multi-prototype embeddings, where each word is represented by multiple vectors.", "The learning approach involves using a cluster centroid of context vectors (Huang et al., 2012), or adapting the skip-gram model to learn multiple latent representations (Tian et al., 2014).", "Neelakantan et al.", "(2014) further adapts skip-gram with a non-parametric approach to learn embeddings with an arbitrary number of senses per word.", "Further work incorporates an external dataset, WORDNET, to learn sense vectors.",
"We compare these models with our multimodal embeddings in Section 4.", "Probabilistic FastText We introduce Probabilistic FastText, which combines a probabilistic word representation with the ability to capture subword structure.", "We describe the probabilistic subword representation in Section 3.1.", "We then describe the similarity measure and the loss function used to train the embeddings in Sections 3.2 and 3.3.", "We conclude by briefly presenting a simplified version of the energy function for isotropic Gaussian representations (Section 3.4), and the negative sampling scheme we use in training (Section 3.5).", "Probabilistic Subword Representation We represent each word with a Gaussian mixture with K Gaussian components.", "That is, a word w is associated with a density function f ( x) = K i=1 p w,i N (x; µ w,i , Σ w,i ) where {µ w,i } K k=1 are the mean vectors and {Σ w,i } are the covariance matrices, and {p w,i } K k=1 are the component probabilities which sum to 1.", "The mean vectors of Gaussian components hold much of the semantic information in density embeddings.", "While these models are successful based on word similarity and entailment benchmarks (Vilnis and McCallum, 2014; Athiwaratkun and Wilson, 2017) , the mean vectors are often dictionary-level, which can lead to poor semantic estimates for rare words, or the inability to handle words outside the training corpus.", "We propose using subword structures to estimate the mean vectors.", "We outline the formulation below.", "For word w, we estimate the mean vector µ w with the average over n-gram vectors and its dictionary-level vector.", "That is, µ w = 1 |N G w | + 1   v w + g∈N Gw z g   (1) where z g is a vector associated with an n-gram g, v w is the dictionary representation of word w, and N G w is a set of n-grams of word w. 
Examples of 3- and 4-grams for the word \"beautiful\", including the beginning-of-word character '<' and end-of-word character '>', are: • 3-grams: <be, bea, eau, aut, uti, tif, ful, ul> • 4-grams: <bea, beau, ..., iful, ful>. This structure is similar to that of FASTTEXT (Bojanowski et al., 2016); however, we note that FASTTEXT uses single-prototype deterministic embeddings as well as a training approach that maximizes the negative log-likelihood, whereas we use a multi-prototype probabilistic embedding and for training we maximize the similarity between the words' probability densities, as described in Sections 3.2 and 3.3.", "Figure 1a depicts the subword structure for the mean vector.", "Figures 1b and 1c depict our models, Gaussian probabilistic FASTTEXT (PFT-G) and Gaussian mixture probabilistic FASTTEXT (PFT-GM).", "In the Gaussian case, we represent each mean vector with a subword estimation.", "For the Gaussian mixture case, we represent one Gaussian component's mean vector with the subword structure, whereas the other components' mean vectors are dictionary-based.", "This choice to use dictionary-based mean vectors for the other components is to reduce the constraint imposed by the subword structure and to promote independence in meaning discovery.", "Similarity Measure between Words Traditionally, if words are represented by vectors, a common similarity metric is the dot product.", "In the case where words are represented by distribution functions, we use the generalized dot product in Hilbert space, $\langle \cdot, \cdot \rangle_{L_2}$, which is called the expected likelihood kernel (Jebara et al., 2004).", "We define the energy E(f, g) between two words f and g to be $E(f, g) = \log \langle f, g \rangle_{L_2} = \log \int f(x) g(x) \, dx$.", "With Gaussian mixtures $f(x) = \sum_{i=1}^{K} p_i \, \mathcal{N}(x; \mu_{f,i}, \Sigma_{f,i})$ and $g(x) = \sum_{i=1}^{K} q_i \, \mathcal{N}(x; \mu_{g,i}, \Sigma_{g,i})$, where $\sum_{i=1}^{K} p_i = 1$ and $\sum_{i=1}^{K} q_i = 1$, the energy has a closed form: $E(f, g) = \log \sum_{j=1}^{K} \sum_{i=1}^{K} p_i q_j e^{\xi_{i,j}}$ (2), where $\xi_{i,j}$ is the partial energy which corresponds to the similarity between component i of the first word f and component j of the second word g: $\xi_{i,j} \equiv \log \mathcal{N}(0; \mu_{f,i} - \mu_{g,j}, \Sigma_{f,i} + \Sigma_{g,j}) = -\frac{1}{2} \log \det(\Sigma_{f,i} + \Sigma_{g,j}) - \frac{D}{2} \log(2\pi) - \frac{1}{2} (\mu_{f,i} - \mu_{g,j})^{\top} (\Sigma_{f,i} + \Sigma_{g,j})^{-1} (\mu_{f,i} - \mu_{g,j})$ (3).", "Figure 2 demonstrates the partial energies among the Gaussian components of two words.", "[Figure 2: Interaction between the Gaussian mixture components of \"rock\" and \"pop\", showing the partial energies $\xi_{0,0}$, $\xi_{0,1}$, $\xi_{1,0}$, $\xi_{1,1}$ between component pairs.]", "Loss Function The model parameters that we seek to learn are $v_w$ for each word w and $z_g$ for each n-gram g. We train the model by pushing the energy of a true context pair w and c to be higher than that of a negative context pair w and n by a margin m.
We use Adagrad (Duchi et al., 2011) to minimize the following loss to achieve this outcome: $L(f, g) = \max[0, m - E(f, g) + E(f, n)]$ (4).", "We describe how to sample words as well as their positive and negative contexts in Section 3.5.", "This loss function, together with the Gaussian mixture model with K > 1, has the ability to extract multiple senses of words.", "That is, for a word with multiple meanings, we can observe each mode to represent a distinct meaning.", "For instance, one density mode of \"star\" is close to the densities of \"celebrity\" and \"hollywood\", whereas another mode of \"star\" is near the densities of \"constellation\" and \"galaxy\".", "Energy Simplification In theory, it can be beneficial to have covariance matrices as learnable parameters.", "In practice, Athiwaratkun and Wilson (2017) observe that spherical covariances often perform on par with diagonal covariances at a much lower computational cost.", "Using spherical covariances for each component, we can further simplify the energy function as follows: $\xi_{i,j} = -\frac{\alpha}{2} \, \|\mu_{f,i} - \mu_{g,j}\|^2$ (5), where the hyperparameter $\alpha$ is the scale of the inverse covariance term in Equation 3.", "We note that Equation 5 is equivalent to Equation 3 up to an additive constant, given that the covariance matrices are spherical and the same for all components.", "Word Sampling To generate a context word c for a given word w, we pick a nearby word within a context window of a fixed length $\ell$.", "We also use a word sampling technique similar to Mikolov et al.", "(2013b).", "This subsampling procedure selects words for training with lower probabilities if they appear frequently.", "This technique has the effect of reducing the importance of words such as 'the', 'a', and 'to', which can be predominant in a text corpus but are not as meaningful as other less frequent words such as 'city', 'capital', 'animal', etc.", "In particular, word w is discarded with probability $P(w) = 1 - \sqrt{t/f(w)}$, where $f(w)$ is the frequency of word w in the corpus and t is the frequency threshold.", "A negative context word is selected using the distribution $P_n(w) \propto U(w)^{3/4}$, where $U(w)$ is the unigram probability of word w.
The exponent 3/4 also diminishes the importance of frequent words and shifts the training focus toward less frequent words.",
"Experiments: We have proposed a probabilistic FASTTEXT model which combines the flexibility of subword structure with the density embedding approach.", "In this section, we show that our probabilistic representation with subword mean vectors and the simplified energy function outperforms many word similarity baselines and provides disentangled meanings for polysemies.", "First, we describe the training details in Section 4.1.", "We provide qualitative evaluation in Section 4.2, showing meaningful nearest neighbors for the Gaussian embeddings, as well as the ability to capture multiple meanings by Gaussian mixtures.", "Our quantitative evaluation in Section 4.3 demonstrates strong performance against the baseline models FASTTEXT (Bojanowski et al., 2016), the dictionary-level Gaussian embeddings (W2G) (Vilnis and McCallum, 2014), and the Gaussian mixture embeddings (W2GM) (Athiwaratkun and Wilson, 2017).", "We train our models on foreign-language corpuses and show competitive results on foreign word similarity benchmarks in Section 4.4.", "Finally, we explain the importance of the n-gram structures for semantic sharing in Section 4.5.",
"Training Details: We train our models on both English and foreign-language datasets.", "For English, we use the concatenation of UKWAC and WACKYPEDIA (Baroni et al., 2009), which consists of 3.376 billion words.", "We filter out word types that occur fewer than 5 times, which results in a vocabulary size of 2,677,466.", "For foreign languages, we demonstrate the training of our model on French, German, and Italian text corpuses.", "We note that our model should be applicable to other languages as well.", "We use the FRWAC (French), DEWAC (German), and ITWAC (Italian) datasets (Baroni et al., 2009) as text corpuses, consisting of 1.634, 1.716, and 1.955 billion words respectively.", "We use the same threshold, filtering out words that occur fewer than 5 times in each corpus.", "We have dictionary sizes of 1.3, 2.7, and 1.4 million words for FRWAC, DEWAC, and ITWAC.", "We adjust the hyperparameters on the English corpus and use them for the foreign languages.", "Note that the adjustable parameters for our models are the loss margin $m$ in Equation 4 and the scale $\alpha$ in Equation 5.", "We search for the optimal hyperparameters on a grid $m \in \{0.01, 0.1, 1, 10, 100\}$ and $\alpha \in \{\frac{1}{5 \times 10^{-3}}, \frac{1}{10^{-3}}, \frac{1}{2 \times 10^{-4}}, \frac{1}{1 \times 10^{-4}}\}$ on our English corpus.", "The hyperparameter $\alpha$ affects the scale of the loss function; therefore, we adjust the learning rate appropriately for each $\alpha$.", "In particular, the learning rates used are $\gamma \in \{10^{-4}, 10^{-5}, 10^{-6}\}$ for the respective $\alpha$ values.", "Other fixed hyperparameters include the number of Gaussian components $K = 2$, the context window length $\ell = 10$, and the subsampling threshold $t = 10^{-5}$.", "Similar to the setup in FASTTEXT, we use n-grams with $n = 3, 4, 5, 6$ to estimate the mean vectors.",
"Qualitative Evaluation - Nearest Neighbors: We show that our embeddings learn word semantics well by demonstrating meaningful nearest neighbors.", "Table 1 shows the nearest neighbors of polysemous words such as rock, star, and cell.", "We note that subword embeddings prefer words with overlapping characters as nearest neighbors.", "For instance, \"rock-y\", \"rockn\", and \"rock-\" are all close to the word \"rock\".", "For the purpose of demonstration, we only show words with meaningful variations and omit words with the small character-based variations previously mentioned.
However, all words shown are in the top-100 nearest words.", "We observe the separation in meanings for the multi-component case; for instance, one component of the word \"bank\" corresponds to a financial bank, whereas the other component corresponds to a river bank.", "The single-component case also shows interesting behavior.", "We observe that the subword embeddings of polysemous words can represent both meanings.", "For instance, both \"lava-rock\" and \"rock-pop\" are among the closest words to \"rock\".",
Table 1: Nearest neighbors of polysemous words (top: PFT-GM, where ':0'/':1' marks the Gaussian component of each neighbor; bottom: PFT-G, single prototype).
rock (0): rock:0, rocks:0, rocky:0, mudrock:0, rockscape:0, boulders:0, outcrops:0
rock (1): rock:1, punk:0, punk-rock:0, indie:0, pop-rock:0, pop-punk:0, indie-rock:0, band:1
bank (0): bank:0, banks:0, banker:0, bankers:0, bankcard:0, Citibank:0, debits:0
bank (1): bank:1, banks:1, river:0, riverbank:0, embanking:0, banks:0, confluence:1
star (0): stars:0, stellar:0, nebula:0, starspot:0, stars.:0, stellas:0, constellation:1
star (1): star:1, stars:1, star-star:0, 5-stars:0, movie-star:0, mega-star:0, super-star:0
cell (0): cell:0, cellular:0, acellular:0, lymphocytes:0, T-cells:0, cytes:0, leukocytes:0
cell (1): cell:1, cells:1, cellular:0, cellular-phone:0, cellphone:0, transcellular:0
left (0): left:0, right:1, left-hand:0, right-left:0, left-right-left:0, right-hand:0, leftwards:0
left (1): left:1, leaving:0, leavings:0, remained:0, leave:1, enmained:0, leaving-age:0, sadly-departed:0
rock: rock, rock-y, rockn, rock-, rock-funk, rock/, lava-rock, nu-rock, rock-pop, rock/ice, coral-rock
bank: bank-, bank/, bank-account, bank., banky, bank-to-bank, banking, Bank, bank/cash, banks.**
star: movie-stars, star-planet, starsailor, Star, starsign
cell: cell/tumour
left: left/joined, leaving, leftsided, lefted, leftside
"Word Similarity Evaluation: We evaluate our embeddings on several standard word similarity datasets, namely SL-999 (Hill et al., 2014), WS-353 (Finkelstein et al., 2002), MEN-3k (Bruni et al., 2014), MC-30 (Miller and Charles, 1991), RG-65 (Rubenstein and Goodenough, 1965), YP-130 (Yang and Powers, 2006), MTurk(-287, -771) (Radinsky et al., 2011; Halawi et al., 2012), and RW-2k (Luong et al., 2013).", "Each dataset contains a list of word pairs with a human score of how related or similar the two words are.", "We use the notation DATASET-NUM to denote the number of word pairs NUM in each evaluation set.", "We note that the dataset RW focuses more on infrequent words, and SimLex-999 focuses on the similarity of words rather than relatedness.", "We also compare PFT-GM with other multi-prototype embeddings in the literature using SCWS (Huang et al., 2012), a word similarity dataset that aims to measure the ability of embeddings to discern multiple meanings.", "We calculate the Spearman correlation (Spearman, 1904) between the labels and the scores generated by the embeddings.", "The Spearman correlation is a rank-based correlation measure that assesses how well the scores describe the true labels.", "The scores we use are cosine similarities between the mean vectors.", "In the case of Gaussian mixtures, we use the pairwise maximum score: $s(f, g) = \max_{i \in 1, \ldots, K}\; \max_{j \in 1, \ldots, K}\; \frac{\mu_{f,i} \cdot \mu_{g,j}}{\lVert \mu_{f,i} \rVert \cdot \lVert \mu_{g,j} \rVert}$ (6).", "The pair $(i, j)$ that achieves the maximum cosine similarity corresponds to the Gaussian component pair that is closest in meaning.", "Therefore, this similarity score yields the most related senses of a given word pair.
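Equation 6 is easy to state in code; the short sketch below is an editorial illustration (the function name `max_sim` is an assumption), computing the pairwise maximum cosine similarity over component means.

```python
import numpy as np

def max_sim(mus_f, mus_g):
    """MaxSim score (Eq. 6): max over component pairs of cosine(mu_{f,i}, mu_{g,j}).

    mus_f, mus_g: arrays of shape (K, D) holding the K component means of each word.
    """
    # Normalize each component mean to unit length.
    f = mus_f / np.linalg.norm(mus_f, axis=1, keepdims=True)
    g = mus_g / np.linalg.norm(mus_g, axis=1, keepdims=True)
    cos = f @ g.T            # (K, K) matrix of cosine similarities
    return float(cos.max())  # reduces to plain cosine similarity when K = 1
```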
This score reduces to a cosine similarity in the Gaussian case ($K = 1$).",
"Comparison Against Dictionary-Level Density Embeddings and FASTTEXT: We compare our models against the dictionary-level Gaussian and Gaussian mixture embeddings in Table 2, with 50-dimensional and 300-dimensional mean vectors.", "The 50-dimensional results for W2G and W2GM are obtained directly from Athiwaratkun and Wilson (2017).", "For comparison, we use the public code to train the 300-dimensional W2G and W2GM models, along with the publicly available FASTTEXT model.", "We calculate Spearman's correlations for each of the word similarity datasets.", "These datasets vary greatly in the number of word pairs; therefore, we mark each dataset with its size for visibility.", "For a fair and objective comparison, we calculate a weighted average of the correlation scores for each model.", "Our PFT-GM achieves the highest average score among all competing models, outperforming both FASTTEXT and the dictionary-level embeddings W2G and W2GM.", "Our unimodal model PFT-G also outperforms its dictionary-level counterpart W2G as well as FASTTEXT.", "We note that the model W2GM appears quite strong according to Table 2, beating PFT-GM on many word similarity datasets.", "However, the datasets on which W2GM performs better than PFT-GM often have small sizes, such as MC-30 or RG-65, where Spearman's correlations are more subject to noise.", "Overall, PFT-GM outperforms W2GM by 3.1% and 8.7% in the 300- and 50-dimensional models respectively.", "In addition, PFT-G and PFT-GM also outperform FASTTEXT by 1.2% and 3.7% respectively.",
"Comparison Against Multi-Prototype Models: In Table 3, we compare the 50- and 300-dimensional PFT-GM models against the multi-prototype embeddings described in Section 2 and the existing multimodal density embeddings W2GM.", "We use the word similarity dataset SCWS (Huang et al., 2012), which contains words with potentially many meanings and is a benchmark for distinguishing senses.", "We use the maximum similarity score (Equation 6), denoted as MAXSIM.", "AVESIM denotes the average of the similarity scores, rather than the maximum.", "We outperform the dictionary-based density embeddings W2GM in both 50 and 300 dimensions, demonstrating the benefits of subword information.", "Our model achieves state-of-the-art results, similar to those of Neelakantan et al. (2014).",
"Evaluation on Foreign Language Embeddings: We evaluate the foreign-language embeddings on word similarity datasets in the respective languages.", "We use Italian WORDSIM353 and Italian SIMLEX-999 (Leviant and Reichart, 2015) for the Italian models, GUR350 and GUR65 (Gurevych, 2005) for the German models, and French WORDSIM353 (Finkelstein et al., 2002) for the French models.", "For the datasets GUR350 and GUR65, we use the results reported in the FASTTEXT publication (Bojanowski et al., 2016).", "For the other datasets, we train FASTTEXT models for comparison using the public code on our text corpuses.", "We also train the dictionary-level models W2G and W2GM for comparison.", "Table 4 shows the Spearman's correlation results of our models.", "We outperform FASTTEXT on many word similarity benchmarks.", "Our results are also significantly better than those of the dictionary-based models W2G and W2GM.", "We hypothesize that W2G and W2GM could perform better than the currently reported results given proper pre-processing of words, due to special characters such as accents.", "We investigate the nearest neighbors of polysemies in foreign languages and also observe clear sense separation.
For example, piano in Italian can mean \"floor\" or \"slow\".", "These two meanings are reflected in the nearest neighbors, where one component is close to piano-piano and pianod, which mean \"slowly\", whereas the other component is close to piani (floors), ristrutturazione (renovation), or infrastrutture (infrastructure).", "Table 5 shows additional results, demonstrating that the disentangled semantics can be observed in multiple languages.",
"Qualitative Evaluation - Subword Decomposition: One of the motivations for using subword information is the ability to handle out-of-vocabulary words.", "Another benefit is the ability to improve the semantics of rare words via subword sharing.", "Since text corpuses follow Zipf's power law (Zipf, 1949), words at the tail of the occurrence distribution appear much less frequently.", "Training these words to have a good semantic representation is challenging if done at the word level alone.", "However, an n-gram such as 'abnorm' is trained during occurrences of both \"abnormal\" and \"abnormality\" in the corpus, and hence further augments both words' semantics.", "Figure 3 shows the contribution of n-grams to the final representation.", "We filter the display to show only the n-grams with the top-5 and bottom-5 similarity scores.", "We observe that the final representations of both words align with the n-grams \"abno\", \"bnor\", \"abnorm\", \"anbnor\", and \"<abn\".", "In fact, both \"abnormal\" and \"abnormality\" share the same top-5 n-grams.", "Because many rare words such as \"autobiographer\", \"circumnavigations\", or \"hypersensitivity\" are composed of many common subwords, the n-gram structure can help improve representation quality.
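The subword-sharing analysis behind Figure 3 amounts to ranking a word's n-grams by how well their vectors align with the word's final representation. Here is a minimal sketch of that analysis, assuming FASTTEXT-style boundary symbols and averaging of the word vector with its n-gram vectors; the helper names and the averaging rule are editorial assumptions, not the authors' exact procedure.

```python
import numpy as np

def extract_ngrams(word, n_values=(3, 4, 5, 6)):
    """Character n-grams of '<word>' with boundary symbols, as in FASTTEXT."""
    s = "<" + word + ">"
    return [s[i:i + n] for n in n_values for i in range(len(s) - n + 1)]

def word_representation(word_vec, ngram_vecs):
    """One common convention: average the word vector with its n-gram vectors."""
    return np.mean(np.vstack([word_vec] + ngram_vecs), axis=0)

def ngram_contributions(word, word_vec, ngram_table):
    """Rank the word's n-grams by cosine similarity to its final representation."""
    grams = [g for g in extract_ngrams(word) if g in ngram_table]
    rep = word_representation(word_vec, [ngram_table[g] for g in grams])
    rep = rep / np.linalg.norm(rep)
    sims = {g: float(ngram_table[g] @ rep / np.linalg.norm(ngram_table[g]))
            for g in grams}
    return sorted(sims.items(), key=lambda kv: -kv[1])  # top-5 / bottom-5 as in Fig. 3
```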
Numbers of Components: It is possible to train our approach with $K > 2$ mixture components; however, Athiwaratkun and Wilson (2017) observe that dictionary-level Gaussian mixtures with $K = 3$ do not improve word similarity results overall, even though these mixtures can discover 3 distinct senses for certain words.", "Indeed, while $K > 2$ in principle allows for greater flexibility than $K = 2$, most words can be modelled very flexibly with a mixture of two Gaussians, so $K = 2$ represents a good balance between flexibility and Occam's razor.", "Even for words with single meanings, our PFT model with $K = 2$ often learns richer representations than a $K = 1$ model.", "For example, the two mixture components can learn to cluster together to form a more heavy-tailed unimodal distribution, which captures a word with one dominant meaning but with close relationships to a wide range of other words.", "In addition, we observe that our model with $K$ components can capture more than $K$ meanings.", "For instance, under the $K = 1$ model, the word pairs (\"cell\", \"jail\"), (\"cell\", \"biology\"), and (\"cell\", \"phone\") all have positive similarity scores.", "In general, if a word has multiple meanings, these meanings are usually compressed into the linear substructure of the embeddings (Arora et al., 2016).", "However, the pairs involving non-dominant senses often have lower similarity scores, which might not accurately reflect their true similarities.",
"Conclusion and Future Work: We have proposed models for probabilistic word representations equipped with flexible subword structures, suitable for rare and out-of-vocabulary words.", "The proposed probabilistic formulation incorporates uncertainty information and naturally allows one to uncover multiple meanings with multimodal density representations.", "Our models offer better semantic quality, outperforming competing models on word similarity benchmarks.", "Moreover, our multimodal density models can provide interpretable and disentangled representations, and are the first multi-prototype embeddings that can handle rare words.", "Future work includes an investigation into the trade-off between learning full covariance matrices for each word distribution, computational complexity, and performance.", "This direction can potentially have a great impact on tasks where the variance information is crucial, such as hierarchical modeling with probability distributions (Athiwaratkun and Wilson, 2018).", "Other future work involves co-training PFT on many languages.", "Currently, existing work on multilingual embeddings aligns word semantics on pre-trained vectors (Smith et al., 2017), which can be suboptimal due to polysemies.", "We envision that the multi-prototype nature can help disambiguate words with multiple meanings and facilitate semantic alignment." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "3.5", "4", "4.1", "4.2", "4.3", "4.3.1", "4.3.2", "4.4", "4.5", "5", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Probabilistic FastText", "Probabilistic Subword Representation", "Similarity Measure between Words", "Loss Function", "Energy Simplification", "Word Sampling", "Experiments", "Training Details", "Qualitative Evaluation -Nearest neighbors", "Word Similarity Evaluation", "Comparison Against Dictionary-Level Density Embeddings and FASTTEXT", "Comparison Against Multi-Prototype Models", "Evaluation on Foreign Language Embeddings", "Qualitative Evaluation -Subword Decomposition", "Numbers of Components", "Conclusion and Future Work" ] }
GEM-SciDuet-train-72#paper-1163#slide-16
Conclusion
Elegant representation of semantics using multimodal distributions; suitable for modeling words with multiple meanings; models words at the character level; better semantics for rare words; able to estimate semantics of unseen words
Elegant representation of semantics using multimodal distributions; suitable for modeling words with multiple meanings; models words at the character level; better semantics for rare words; able to estimate semantics of unseen words
[]
GEM-SciDuet-train-73#paper-1170#slide-0
1170
MediaMeter: A Global Monitor for Online News Coverage
This paper introduces MediaMeter, an application that works to detect and track emergent topics in the US online news media. What makes MediaMeter unique is its reliance on a labeling algorithm which we call WikiLabel, whose primary goal is to identify what news stories are about by looking up Wikipedia. We discuss some of the major news events that were successfully detected and how it compares to prior work.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56 ], "paper_content_text": [ "Introduction A long term goal of this project is to build a sociologically credible computational platform that enables the user to observe how social agenda evolve and spread across the globe and across the media, as they happen.", "To this end, we have built a prototype system we call MediaMeter, which is designed to detect and track trending topics in the online US news media.", "One important feature of the system lies in its making use of and building upon a particular approach called WikiLabel (Nomoto, 2011) .", "The idea was to identify topics of a document by mapping it into a conceptual space derived from Wikipedia, which consists of finding a Wikipedia page similar to the document and taking its page title as a possible topic label.", "Further, to deal with events not known to Wikipedia, it is equipped with the capability of re-creating a page title so as to make it better fit the content of the document.", "In the following, we look at what WikiLabel does and how it works before we discuss MediaMeter.", "WikiLabel WikiLabel takes as input a document which one likes to have labeled, and outputs a ranked list of label candidates along with the confidence scores.", "The document it takes as input needs to be in the form of a vector space model (VSM).", "Now assume that θ represents a VSM of document d. Let us define l * θ , a likely topic label for d, as follows.", "l * θ = arg max l:p[l]∈U Prox(p[l], θ| N ), (1) where p[l] denotes a Wikipedia page with a title l and θ| N a VSM with its elements limited to top N terms in d (as measured by TFIDF).", "Prox (p[l] , θ| N ) is given by: Prox(p[l], θ| N ) = λSr(p[l], θ| N )+(1−λ)Lo(l, θ).", "We let: Sr(r, q) = ( 1 + N ∑ t (q(t) − r(t)) 2 ) −1 and Lo(l, v) = ∑ |l| i I(l[i], v) | l | − 1 where I(w, v) = 1 if w ∈ v and 0 otherwise.", "Sr( x, y) represents the distance between x and y, normalized to vary between 0 and 1.", "Lo(l, v) measures how many terms l and v have in common, intended to quantify the relevance of l to v. l[i] indicates i-th term in l. Note that Lo works as a penalizing term: if one finds all the terms l has in v, there will be no penalty: if not, there will be a penalty, the degree of which depends on the number of terms in l that are missing in v. 
We refer to an approach based on the model in Eqn. 1 as 'WikiLabel.'", "We note that the prior work by Nomoto (2011), which the current approach builds on, is equivalent to the model in Eqn. 1 with $\lambda$ set to 1.", "One important feature of the present version, which is not shared by the previous one, is its ability to go beyond Wikipedia page titles: if it comes across a news story with a topic unknown to Wikipedia, WikiLabel will generalize a relevant page title by removing parts of it that are not warranted by the story, while making sure that its grammar stays intact.", "A principal driver of this process is sentence compression, which works to shorten a sentence or phrase, using a trellis created from a corresponding dependency structure (e.g., Figure 1).", "Upon receiving possible candidates from sentence compression, WikiLabel turns to the formula in Eqn. 1, and in particular Lo, to determine a compression that best fits the document in question.",
"North-Korean Agenda (Section 3): WikiLabel results are compared across the online news media of the US, South Korea, and Japan (the number of stories we covered was 2,230 (US), 2,271 (South Korea), and 2,815 (Japan)).", "Labels in the panels are given as they are generated by WikiLabel, except those for the Japanese media, which are translated from Japanese.", "(The horizontal axis in each panel represents the proportion of stories on a given topic.)", "Notice that there are interesting discrepancies among the countries in the way they talk about North Korea: the US tends to see the DPRK as a nuclear menace, while South Korea focuses on diplomatic and humanitarian issues surrounding North Korea; the Japanese media, on the other hand, depict the country as if it had nothing worth talking about except nuclear issues and its abduction of Japanese citizens.", "Table 2 shows how two human assessors, university graduates, rated on average the quality of labels generated by WikiLabel for articles discussing North Korea, on a scale of 1 (poor) to 5 (good), for English and Japanese.", "Curiously, a study of news broadcasts in South Korea and Japan (Gwangho, 2006) found that the South Korean media paid more attention to foreign relations and the open-door policies of North Korea, while the Japanese media were mostly engrossed with North Korean abductions of Japanese citizens and nuclear issues.", "In Figure 2, which reproduces some of his findings, we recognize a familiar tendency of the Japanese media to play up nuclear issues and dismiss North Korea's external relations, which resonates with what we have found here with WikiLabel.",
"MediaMeter: MediaMeter (demo at http://www.quantmedia.org/meter/demo.html) is a web application that draws on WikiLabel to detect trending topics in the US online news media (which includes CNN, ABC, MSNBC, BBC, Fox, Reuters, Yahoo! News, etc.).", "It is equipped with a visualization capability based on ThemeRiver (Havre et al., 2002; Byron and Wattenberg, 2008), enabling a simultaneous tracking of multiple topics over time.", "It performs the following routines on a daily basis: (1) collect news stories that appeared during the day; (2) generate topic labels for 600 of them chosen at random; (3) select labels whose score is 1 or above on the burstiness scale (Kleinberg, 2002); (4) find, for each of the top-ranking labels, how many stories carry that label; and (5) plot the numbers using the ThemeRiver, together with the associated labels.", "Topic labels are placed automatically through integer linear programming (Christensen et al., 1995).
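The five-step daily routine translates naturally into a small pipeline skeleton. The sketch below is an editorial illustration only: `fetch_stories`, `wikilabel`, and `burstiness` are hypothetical stand-ins (the paper uses Kleinberg's (2002) burst-detection scale, which is more involved than any one-line scoring function).

```python
import random
from collections import Counter

def daily_update(fetch_stories, wikilabel, burstiness, sample_size=600, threshold=1.0):
    """One day of MediaMeter's routine: collect, label, filter by burstiness, count."""
    stories = fetch_stories()                              # (1) today's stories
    sample = random.sample(stories, min(sample_size, len(stories)))
    labels = [wikilabel(s)[0] for s in sample]             # (2) top label per story
    trending = {l for l in set(labels)
                if burstiness(l) >= threshold}             # (3) bursty labels only
    counts = Counter(l for l in labels if l in trending)   # (4) stories per label
    return counts                                          # (5) plotted as a ThemeRiver
```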
Figure 4 gives a ThemeRiver visualization of trending topics for the period from July 10 to 23, 2014.", "Figures 5 and 6 show views focusing on particular topics, with the former looking at the World Cup and the latter at Malaysia.", "The media's attention to the World Cup mushroomed on July 14th, the day when the final match took place, and fizzled out on the following day.", "Meanwhile, in Figure 6, there is a sudden burst of stories related to Malaysia on July 17th, which coincides with the day when a Malaysian jetliner was shot down over Ukrainian airspace.", "While it is hard to tell how accurately MediaMeter reflects reality, our feeling is that it is doing reasonably well in picking up major trends in the US news media.",
"Evaluation: To find where we stand in comparison to prior work, we have done some experiments using the TDT-PILOT, NYT2013, and Fox News corpora.", "TDT-PILOT refers to a corpus containing 15,863 news stories from CNN and Reuters, published between July 1, 1994 and June 30, 1995.", "The Fox News corpus has a total of 11,014 articles, coming from the online Fox News site, which were published between January 2015 and April 2015.", "NYT2013 consists of articles we collected from the New York Times online between June and December 2013, totaling 19,952.", "We measured performance in terms of how well machine-generated labels match those by humans, based on the metric known as ROUGE-W (Lin, 2004).", "ROUGE-W gives a score indicating the degree of similarity between two strings in terms of the length of a subsequence shared by both strings.", "The score ranges from 0 to 1, with 0 indicating no match and 1 a perfect match.", "In the experiment, we ran TextRank (TRANK) (Mihalcea and Tarau, 2004), the current state of the art in topic extraction, and different renditions of WikiLabel: RM1 refers to the model in Eqn. 1 with $\lambda$ set to 0.5 and sentence compression turned off; RM1/X is like RM1 except that it makes use of sentence compression; RM0 is RM1 with $\lambda$ set to 1, disengaging Lo altogether.", "Table 3 gives a summary of what we found.", "Numbers in the table denote ROUGE-W scores of the relevant systems, averaged over the entire set of articles in each dataset.", "Per-document performance@1 means that we consider only labels that ranked first when measuring performance.", "One note about FOX: FOX has each story labeled with multiple topic descriptors, in contrast to NYT and TDT, where we have only one topic label associated with each article.", "Since there was no intrinsically correct way of choosing among the descriptors that FOX provides, we paired up a label candidate with each descriptor and ran ROUGE-W on each of the pairs, taking the highest score we got as a representative of the overall performance.", "Results in Table 3 clearly corroborate the superiority of RM0 through RM1/X over TextRank.",
"Conclusions: In this paper, we looked at a particular approach we call WikiLabel to detecting topics in online news articles, explaining some technical details of how it works, and presented MediaMeter, which showcases WikiLabel in action.", "We also demonstrated the empirical effectiveness of the approach through experiments with NYT2013, FOX News, and TDT-PILOT." ] }
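For reference, ROUGE-W is based on a weighted longest common subsequence that rewards consecutive matches. The sketch below follows Lin's (2004) WLCS recurrence with the common weighting f(k) = k^2; it is an editorial reconstruction, not the evaluation script used in the paper.

```python
def rouge_w(candidate, reference, alpha=2.0):
    """ROUGE-W F-score between two token sequences via weighted LCS (after Lin, 2004)."""
    def f(k):                      # weighting function favoring consecutive matches
        return k ** alpha
    def f_inv(v):                  # its inverse
        return v ** (1.0 / alpha)
    m, n = len(candidate), len(reference)
    c = [[0.0] * (n + 1) for _ in range(m + 1)]  # weighted LCS scores
    w = [[0] * (n + 1) for _ in range(m + 1)]    # consecutive-match run lengths
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if candidate[i - 1] == reference[j - 1]:
                k = w[i - 1][j - 1]
                c[i][j] = c[i - 1][j - 1] + f(k + 1) - f(k)
                w[i][j] = k + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    wlcs = c[m][n]
    if m == 0 or n == 0 or wlcs == 0.0:
        return 0.0
    precision = f_inv(wlcs / f(m))   # normalized by candidate length
    recall = f_inv(wlcs / f(n))      # normalized by reference length
    return 2 * precision * recall / (precision + recall)
```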
{ "paper_header_number": [ "1", "2", "4", "5", "6" ], "paper_header_content": [ "Introduction", "WikiLabel", "MediaMeter", "Evaluation", "Conclusions" ] }
GEM-SciDuet-train-73#paper-1170#slide-0
What we are aiming at
Finding novel topics in news streams. So far, not much success in the literature.
Finding novel topics in news streams. So far, not much success in the literature.
[]
GEM-SciDuet-train-73#paper-1170#slide-1
1170
GEM-SciDuet-train-73#paper-1170#slide-1
Problem
Frequencies of (manually assigned) topic descriptors that appeared in the New York Times from June to December, 2013 (x-axis: rank of topic descriptor).
Frequencies of (manually assigned) topic descriptors that appeared in the New York Times from June to December, 2013 (x-axis: rank of topic descriptor).
[]
GEM-SciDuet-train-73#paper-1170#slide-3
1170
GEM-SciDuet-train-73#paper-1170#slide-3
SVM cannot handle a huge taxonomy (Liu 2005)
The number of unique topics in NYT over 6 months
The number of unique topics in NYT over 6 months
[]
GEM-SciDuet-train-73#paper-1170#slide-4
1170
MediaMeter: A Global Monitor for Online News Coverage
This paper introduces MediaMeter, an application that works to detect and track emergent topics in the US online news media. What makes MediaMeter unique is its reliance on a labeling algorithm which we call WikiLabel, whose primary goal is to identify what news stories are about by looking up Wikipedia. We discuss some of the major news events that were successfully detected and how it compares to prior work.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56 ], "paper_content_text": [ "Introduction A long term goal of this project is to build a sociologically credible computational platform that enables the user to observe how social agenda evolve and spread across the globe and across the media, as they happen.", "To this end, we have built a prototype system we call MediaMeter, which is designed to detect and track trending topics in the online US news media.", "One important feature of the system lies in its making use of and building upon a particular approach called WikiLabel (Nomoto, 2011) .", "The idea was to identify topics of a document by mapping it into a conceptual space derived from Wikipedia, which consists of finding a Wikipedia page similar to the document and taking its page title as a possible topic label.", "Further, to deal with events not known to Wikipedia, it is equipped with the capability of re-creating a page title so as to make it better fit the content of the document.", "In the following, we look at what WikiLabel does and how it works before we discuss MediaMeter.", "WikiLabel WikiLabel takes as input a document which one likes to have labeled, and outputs a ranked list of label candidates along with the confidence scores.", "The document it takes as input needs to be in the form of a vector space model (VSM).", "Now assume that θ represents a VSM of document d. Let us define l * θ , a likely topic label for d, as follows.", "l * θ = arg max l:p[l]∈U Prox(p[l], θ| N ), (1) where p[l] denotes a Wikipedia page with a title l and θ| N a VSM with its elements limited to top N terms in d (as measured by TFIDF).", "Prox (p[l] , θ| N ) is given by: Prox(p[l], θ| N ) = λSr(p[l], θ| N )+(1−λ)Lo(l, θ).", "We let: Sr(r, q) = ( 1 + N ∑ t (q(t) − r(t)) 2 ) −1 and Lo(l, v) = ∑ |l| i I(l[i], v) | l | − 1 where I(w, v) = 1 if w ∈ v and 0 otherwise.", "Sr( x, y) represents the distance between x and y, normalized to vary between 0 and 1.", "Lo(l, v) measures how many terms l and v have in common, intended to quantify the relevance of l to v. l[i] indicates i-th term in l. Note that Lo works as a penalizing term: if one finds all the terms l has in v, there will be no penalty: if not, there will be a penalty, the degree of which depends on the number of terms in l that are missing in v. 
U represents the entire set of pages in Wikipedia whose namespace is 0.", "We refer to an approach based on the model in Eqn.", "1 as 'WikiLabel.'", "We note that the prior work by Nomoto (2011) which the current approach builds on, is equivalent to the model in Eqn.", "1 with λ set to 1.", "One important feature of the present version, which is not shared by the previous one, is its ability to go beyond Wikipedia page titles: if it comes across a news story with a topic unknown to Wikipedia, WikiLabel will generalize a relevant page title by removing parts of it that are not warranted by the story, while making sure that its grammar stays intact.", "A principal driver of this process is sentence compression, which works to shorten a sentence or phrase, using a trellis created from a corresponding dependency structure (e.g.", "Figure 1) .", "Upon receiving possible candidates from sentence compression, WikiLabel turns to the formula in Eqn.", "1 and in particular, Lo 1 to determine a compression that best fits the document in question.", "3 North-Korean Agenda South-Korea and Japan (the number of stories we covered was 2,230 (US), 2,271 (South-Korea), and 2,815 (Japan)).", "Labels in the panels are given as they are generated by WikiLabel, except those for the Japanese media, which are translated from Japanese.", "(The horizontal axis in each panel represents the proportion of stories on a given topic.)", "Notice that there are interesting discrepancies among the countries in the way they talk about North Korea: the US tends to see DPRK as a nuclear menace while South Korea focuses on diplomatic and humanitarian issues surrounding North Korea; the Japanese media, on the other hand, depict the country as if it had nothing worth talking about except nuclear issues and its abduction of the Japanese.", "Table 2 shows how two human assessors, university graduates, rated on average, the quality of labels generated by WikiLabel for articles discussing North-Korea, on a scale of 1 (poor) to 5 (good), for English and Japanese.", "Curiously, a study on news broadcasts in South Korean and Japan (Gwangho, 2006) found that the South Korean media paid more attention to foreign relations and open-door policies of North Korea, while the Japanese media were mostly engrossed with North Korean abductions of Japanese and nuclear issues.", "In Figure 2 , which reproduces some of his findings, we recognize a familiar tendency of the Japanese media to play up nuclear issues and dismiss North Korea's external relations, which resonate with things we have found here with WikiLabel.", "MediaMeter MediaMeter 2 is a web application that draws on WikiLabel to detect trending topics in the US online news media (which includes CNN, ABC, MSNBC, BBC, Fox, Reuters, Yahoo!", "News, etc).", "It is equipped with a visualization capability based on ThemeRiver (Havre et al., 2002; Byron and Wattenberg, 2008) , enabling a simultaneous tracking of multiple topics over time.", "It performs the following routines on a daily basis: (1) collect news stories that appeared during the day; (2) generate topic labels for 600 of them chosen at random; (3) select labels whose score is 1 or above on the burstiness scale (Kleinberg, 2002) ; (4) find for each of the top ranking labels how many stories carry that label; and (5) plot the numbers using the ThemeRiver, together with the associated labels.", "Topic labels are placed automatically through integer linear programming (Christensen et al., 1995) .", "Figure 4 gives a ThemeRiver 
visualization of trending topics for the period from July 10 to 23, 2014.", "Figures 5 and 6 show views focusing on particular topics, with the former looking at the World Cup and the latter at Malaysia.", "The media's attention to the World Cup mushroomed on July 14th, the day when the final match took place, and fizzled out on the following day.", "Meanwhile, in Figure 6 , there is a sudden burst of stories related to Malaysia on July 17th, which coincides with the day when a Malaysian jetliner was shot down over the Ukrainian air space.", "While it is hard to tell how accurately MediaMeter reflects the reality, our feeling is that it is doing reasonably well in picking up major trends in the US news media.", "Evaluation To find where we stand in comparison to prior work, we have done some experiments, using TDT-PILOT, NYT2013, and Fox News corpora.", "TDT-PILOT refers to a corpus containing 15,863 news stories from CNN and Reuters, published between July 1, 1994 and June 30, 1995 .", "The Fox News corpus has the total of 11,014 articles, coming from the online Fox news site, which were published between January, 2015 and April, 2015.", "NYT2013 consists of articles we collected from the New York Times online between June and December, 2013, totaling 19,952.", "We measured performance in terms of how well machine generated 2 http://www.quantmedia.org/meter/demo.html labels match those by humans, based on the metric known as ROUGE-W (Lin, 2004) .", "3 ROUGE-W gives a score indicating the degree of similarity between two strings in terms of the length of a subsequence shared by both strings.", "The score ranges from 0 to 1, with 0 indicating no match and 1 a perfect match.", "In the experiment, we ran Text-Rank (TRANK) (Mihalcea and Tarau, 2004 ) -the current state of the art in topic extraction -and different renditions of WikiLabel: RM1 refers to a model in Eqn 1 with λ set to 0.5 and sentence compression turned off; RM1/X is like RM1 except that it makes use of sentence compression; RM0 is a RM1 with λ set to 1, disengaging Lo altogether.", "Table 3 gives a summary of what we found.", "Numbers in the table denote ROUGE-W scores of relevant systems, averaged over the entire articles in each dataset.", "Per-document performance@1 means that we consider labels that ranked the first when measuring performance.", "One note about FOX.", "FOX has each story labeled with multiple topic descriptors, in contrast to NYT and TDT where we have only one topic label associated with each article.", "Since there was no intrinsically correct way of choosing among descriptors that FOX provides, we paired up a label candidate with each descriptor and ran ROUGE-W on each of the pairs, taking the highest score we got as a representative of the overall performance.", "Results in Table 3 clearly corroborate the superiority of RM0 through RM1/X over TextRank.", "Conclusions In this paper, we looked at a particular approach we call WikiLabel to detecting topics in online news articles, explaining some technical details of how it works, and presented MediaMeter, which showcases WikiLabel in action.", "We also demonstrated the empirical effectiveness of the approach through experiments with NYT2013, FOX News and TDT-PILOT." ] }
{ "paper_header_number": [ "1", "2", "4", "5", "6" ], "paper_header_content": [ "Introduction", "WikiLabel", "MediaMeter", "Evaluation", "Conclusions" ] }
GEM-SciDuet-train-73#paper-1170#slide-4
Approach
Memory Based Topic Label
[]
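The daily MediaMeter routine quoted in the paper content above (sample 600 of the day's stories, label them, keep labels scoring at least 1 on the burstiness scale, count carrier stories, plot) can be sketched roughly as below. label_fn and burstiness_fn are hypothetical stand-ins for WikiLabel and a Kleinberg-style burst score; the ThemeRiver rendering and the ILP-based label placement are left out.

    # Sketch of MediaMeter's daily update; label_fn and burstiness_fn are
    # caller-supplied stand-ins (WikiLabel and a Kleinberg burst score).
    import random
    from collections import Counter

    def daily_update(stories, label_fn, burstiness_fn, sample_size=600, threshold=1.0):
        sample = random.sample(stories, min(sample_size, len(stories)))
        counts = Counter(label_fn(s) for s in sample)   # steps (1)-(2): label a sample
        # steps (3)-(4): keep bursty labels with their story counts; step (5),
        # the ThemeRiver plot and ILP label placement, is outside this sketch
        return {l: n for l, n in counts.items() if burstiness_fn(l) >= threshold}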
GEM-SciDuet-train-73#paper-1170#slide-5
1170
MediaMeter: A Global Monitor for Online News Coverage
GEM-SciDuet-train-73#paper-1170#slide-5
How it works Overview
(1) Look up Wikipedia to find pages most relevant to a news story; (2) generate label candidates from page titles; (3) pick those that are deemed most fit to represent the content
[]
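A toy rendition of the three steps on this slide, assuming hypothetical callables prox (the Eqn. 1 scorer) and compress (the trellis-based compressor); the top-5 cutoff is an arbitrary choice for illustration.

    # Toy rendition of look up -> generate -> pick; prox and compress are
    # assumed callables, not part of the published system's API.
    def label_story(doc_terms, page_titles, prox, compress, k=5):
        ranked = sorted(page_titles, key=lambda t: prox(t, doc_terms),
                        reverse=True)                              # 1) look up Wikipedia
        cands = set(ranked[:k]) | {c for t in ranked[:k]
                                   for c in compress(t)}           # 2) generate candidates
        return max(cands, key=lambda c: prox(c, doc_terms))        # 3) pick the best fit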
GEM-SciDuet-train-73#paper-1170#slide-7
1170
MediaMeter: A Global Monitor for Online News Coverage
GEM-SciDuet-train-73#paper-1170#slide-7
Example
2009 detention of American hikers by Iran → detention of hikers by Iran. Making it shorter makes it more general
[]
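A runnable toy of why the shorter title wins in this example: under the overlap term Lo, title tokens the story does not warrant ('2009', 'American') drag the full title down. The story vocabulary and the stopword list are invented for illustration.

    # Toy check that the overlap term prefers the compression whose tokens
    # the story warrants; story vocabulary and stopwords are invented.
    def lo(label, doc_terms, stop=("of", "by", "the")):
        toks = [w for w in label.lower().split() if w not in stop]
        return sum(w in doc_terms for w in toks) / max(len(toks), 1)

    story_terms = {"iran", "detention", "hikers", "release"}
    cands = ["2009 detention of American hikers by Iran",
             "detention of hikers by Iran"]
    print(max(cands, key=lambda c: lo(c, story_terms)))
    # -> 'detention of hikers by Iran' (Lo = 3/3, vs. 3/5 for the full title)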
GEM-SciDuet-train-73#paper-1170#slide-8
1170
MediaMeter: A Global Monitor for Online News Coverage
GEM-SciDuet-train-73#paper-1170#slide-8
Dependency pruning
[Figure: dependency-pruning trellis; candidate nodes C2 and C3 over the word 'detention']
[]
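Only the node labels of the trellis figure survived extraction (see the placeholder above), so here is a rough sketch of candidate generation by dependency pruning: drop subtrees rooted at optional dependents of the head. The toy arcs mirror the 'detention' example; for brevity we prune only direct dependents of the head, whereas the real trellis also drops deeper modifiers such as 'American'.

    # Sketch: enumerate compressions by dropping subtrees rooted at optional
    # dependents of the head noun; the indices and arcs are our simplification.
    from itertools import combinations

    def compressions(tokens, head, deps):
        # deps maps dependent index -> head index
        def subtree(i):
            kids = [j for j, h in deps.items() if h == i]
            return {i} | {d for j in kids for d in subtree(j)}
        optional = [j for j, h in deps.items() if h == head]
        outs = []
        for r in range(len(optional) + 1):
            for drop in combinations(optional, r):
                gone = set().union(*(subtree(j) for j in drop)) if drop else set()
                outs.append(" ".join(t for i, t in enumerate(tokens) if i not in gone))
        return outs

    toks = "2009 detention of American hikers by Iran".split()
    deps = {0: 1, 4: 1, 2: 4, 3: 4, 6: 1, 5: 6}  # head: 'detention' (index 1)
    print(compressions(toks, 1, deps))
    # yields e.g. '2009 detention by Iran', 'detention by Iran', ..., 'detention'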
GEM-SciDuet-train-73#paper-1170#slide-10
1170
MediaMeter: A Global Monitor for Online News Coverage
GEM-SciDuet-train-73#paper-1170#slide-10
What you get with extension
Original approach: 2009 detention of American hikers by Iran (you start here) → 2009 detention of hikers by Iran → 2009 detention by Iran
[]
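To make 'making it shorter makes it more general' tangible: if a label is said to cover a story whenever all of its content words occur in the story, each pruning step lets the label cover more stories. The three toy story term sets below are ours.

    # Toy: coverage grows as modifiers are pruned from the label.
    stories = [{"iran", "detention", "hikers"},
               {"iran", "detention", "protests"},
               {"iran", "detention", "american", "hikers", "2009"}]
    labels = [{"2009", "american", "hikers", "detention", "iran"},
              {"hikers", "detention", "iran"},
              {"detention", "iran"}]
    for label in labels:
        covered = sum(all(w in s for w in label) for s in stories)
        print(sorted(label), "covers", covered, "of", len(stories), "stories")
    # coverage: 1 -> 2 -> 3 as the label gets shorter and hence more general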
GEM-SciDuet-train-73#paper-1170#slide-11
1170
MediaMeter: A Global Monitor for Online News Coverage
GEM-SciDuet-train-73#paper-1170#slide-11
Testing it out in the field
country     | media outlets                                                           | #outlets | #stories
us/uk       | the new york times, yahoo, cnn, msnbc, fox, washington post, abc,      |          |
south-korea | joongang ilbo (English edition), chosun ilbo (English edition)         |          |
japan       | asahi, jcast, jiji.com, mainichi, nhk, nikkei, sankei, tbs, tokyo, tv- |          |
[]
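The evaluation described in the paper content above scores labels with ROUGE-W. Below is a self-contained sketch of the weighted-LCS computation as we read Lin (2004), with weight f(k) = k^α and the balanced F-measure; the FOX case with multiple gold descriptors then reduces to a max over per-descriptor scores, as described in the text.

    # Sketch of ROUGE-W (weighted longest common subsequence) after Lin (2004).
    def rouge_w(cand, ref, alpha=2.0):
        x, y = cand.lower().split(), ref.lower().split()
        f = lambda k: k ** alpha          # consecutive matches earn more
        finv = lambda s: s ** (1.0 / alpha)
        m, n = len(x), len(y)
        c = [[0.0] * (n + 1) for _ in range(m + 1)]   # weighted LCS scores
        w = [[0] * (n + 1) for _ in range(m + 1)]     # length of current match run
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                if x[i - 1] == y[j - 1]:
                    k = w[i - 1][j - 1]
                    c[i][j] = c[i - 1][j - 1] + f(k + 1) - f(k)
                    w[i][j] = k + 1
                else:
                    c[i][j] = max(c[i - 1][j], c[i][j - 1])
        if m == 0 or n == 0:
            return 0.0
        r, p = finv(c[m][n] / f(n)), finv(c[m][n] / f(m))
        return 0.0 if r + p == 0 else 2 * r * p / (r + p)

    def rouge_w_multi(cand, descriptors, alpha=2.0):
        # FOX stories carry several gold descriptors: take the best pair score
        return max(rouge_w(cand, d, alpha) for d in descriptors)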
GEM-SciDuet-train-73#paper-1170#slide-12
1170
MediaMeter: A Global Monitor for Online News Coverage
GEM-SciDuet-train-73#paper-1170#slide-12
North Korean Agenda
North-Korea nuclear program; North-Korea relations; North-Korea Russia relations; North-Korea United-States relations; North-Korean test; North-Korean missile test; North-Korea weapons; North-Korean defectors; North-Korea South-Korea relations; North-Korean nuclear test; North-Korean famine; North-Korea program; North-Korean abductions; North-Korea weapons of mass destruction; North-Korean floods; North-Korean abductions of Japanese citizens; North-Korea women's team; Japan North-Korea relations; People's Republic North-Korea relations; North-Korean famine; rocket North-Korea; province North-Korea; North-Koreans; North-Korean abductions of Japanese citizens; First Secretary of the Workers' Party of Korea; Human rights in North-Korea; North-Korea sponsored schools in Japan; Prisons in North-Korea; North-South Summit; North Korean abductions of Japanese citizens>>Victims; Mount Kumgang>>Tourist Region; Korean Language; North-Korean Intelligence Agencies; Topic Popularity (South Korea); Kim Jong-il's visit to China; Culture in North-Korea; North-South relations; Japan; South-Korea; News Coverage Ratio; Topic Popularity (US)
North-Korea nuclear program; North-Korea relations; North-Korea Russia relations; North-Korea United-States relations; North-Korean test; North-Korean missile test; North-Korea weapons; North-Korean defectors; North-Korea South-Korea relations; North-Korean nuclear test; North-Korean famine; North-Korea program; North-Korean abductions; North-Korea weapons of mass destruction; North-Korean floods; North-Korean abductions of Japanese citizens; North-Korea women's team; Japan North-Korea relations; People's Republic North-Korea relations; North-Korean famine; rocket North-Korea; province North-Korea; North-Koreans; North-Korean abductions of Japanese citizens; First Secretary of the Workers' Party of Korea; Human rights in North-Korea; North-Korea sponsored schools in Japan; Prisons in North-Korea; North-South Summit; North Korean abductions of Japanese citizens>>Victims; Mount Kumgang>>Tourist Region; Korean Language; North-Korean Intelligence Agencies; Topic Popularity (South Korea); Kim Jong-il's visit to China; Culture in North-Korea; North-South relations; Japan; South-Korea; News Coverage Ratio; Topic Popularity (US)
[]
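The Prox scorer in Eqn. 1 above is straightforward to operationalize. Below is a minimal Python sketch, assuming documents and Wikipedia pages are given as sparse TFIDF vectors ({term: weight} dicts); the names `wiki_pages`, `theta_of`, and the setting lam=0.5 (the paper's RM1 configuration) are illustrative assumptions, not an API taken from the paper:

```python
# Minimal sketch of the WikiLabel scorer (Eqn. 1), assuming sparse
# TFIDF vectors as {term: weight} dicts. `wiki_pages` ({title: vector})
# is a hypothetical mapping, not a structure named in the paper.

def sr(r, q):
    """Proximity Sr(r, q) = (1 + sum_t (q(t) - r(t))**2) ** -1."""
    terms = set(r) | set(q)
    d2 = sum((q.get(t, 0.0) - r.get(t, 0.0)) ** 2 for t in terms)
    return 1.0 / (1.0 + d2)

def lo(label, v):
    """Title-coverage term Lo(l, v), following the reconstruction above:
    count of label terms found in v, normalized by |l| - 1."""
    words = label.split()
    hits = sum(1 for w in words if w in v)
    return hits / (len(words) - 1) if len(words) > 1 else float(hits)

def top_n(theta, n):
    """theta|_N: restrict a document vector to its top-N TFIDF terms."""
    keep = sorted(theta, key=theta.get, reverse=True)[:n]
    return {t: theta[t] for t in keep}

def wikilabel(theta, wiki_pages, lam=0.5, n=100):
    """Rank Wikipedia titles by Prox = lam*Sr + (1 - lam)*Lo."""
    theta_n = top_n(theta, n)
    scored = [(lam * sr(vec, theta_n) + (1 - lam) * lo(title, theta),
               title)
              for title, vec in wiki_pages.items()]
    return sorted(scored, reverse=True)
```

Note that, as in the paper's formula, Sr compares the page against the truncated vector theta|_N while Lo checks label terms against the full document vector theta; setting lam=1 disengages Lo and recovers the RM0 variant.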
GEM-SciDuet-train-73#paper-1170#slide-13
1170
MediaMeter: A Global Monitor for Online News Coverage
This paper introduces MediaMeter, an application that works to detect and track emergent topics in the US online news media. What makes MediaMeter unique is its reliance on a labeling algorithm which we call WikiLabel, whose primary goal is to identify what news stories are about by looking up Wikipedia. We discuss some of the major news events that were successfully detected and how it compares to prior work.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56 ], "paper_content_text": [ "Introduction A long term goal of this project is to build a sociologically credible computational platform that enables the user to observe how social agenda evolve and spread across the globe and across the media, as they happen.", "To this end, we have built a prototype system we call MediaMeter, which is designed to detect and track trending topics in the online US news media.", "One important feature of the system lies in its making use of and building upon a particular approach called WikiLabel (Nomoto, 2011) .", "The idea was to identify topics of a document by mapping it into a conceptual space derived from Wikipedia, which consists of finding a Wikipedia page similar to the document and taking its page title as a possible topic label.", "Further, to deal with events not known to Wikipedia, it is equipped with the capability of re-creating a page title so as to make it better fit the content of the document.", "In the following, we look at what WikiLabel does and how it works before we discuss MediaMeter.", "WikiLabel WikiLabel takes as input a document which one likes to have labeled, and outputs a ranked list of label candidates along with the confidence scores.", "The document it takes as input needs to be in the form of a vector space model (VSM).", "Now assume that θ represents a VSM of document d. Let us define l * θ , a likely topic label for d, as follows.", "l * θ = arg max l:p[l]∈U Prox(p[l], θ| N ), (1) where p[l] denotes a Wikipedia page with a title l and θ| N a VSM with its elements limited to top N terms in d (as measured by TFIDF).", "Prox (p[l] , θ| N ) is given by: Prox(p[l], θ| N ) = λSr(p[l], θ| N )+(1−λ)Lo(l, θ).", "We let: Sr(r, q) = ( 1 + N ∑ t (q(t) − r(t)) 2 ) −1 and Lo(l, v) = ∑ |l| i I(l[i], v) | l | − 1 where I(w, v) = 1 if w ∈ v and 0 otherwise.", "Sr( x, y) represents the distance between x and y, normalized to vary between 0 and 1.", "Lo(l, v) measures how many terms l and v have in common, intended to quantify the relevance of l to v. l[i] indicates i-th term in l. Note that Lo works as a penalizing term: if one finds all the terms l has in v, there will be no penalty: if not, there will be a penalty, the degree of which depends on the number of terms in l that are missing in v. 
U represents the entire set of pages in Wikipedia whose namespace is 0.", "We refer to an approach based on the model in Eqn.", "1 as 'WikiLabel.'", "We note that the prior work by Nomoto (2011) which the current approach builds on, is equivalent to the model in Eqn.", "1 with λ set to 1.", "One important feature of the present version, which is not shared by the previous one, is its ability to go beyond Wikipedia page titles: if it comes across a news story with a topic unknown to Wikipedia, WikiLabel will generalize a relevant page title by removing parts of it that are not warranted by the story, while making sure that its grammar stays intact.", "A principal driver of this process is sentence compression, which works to shorten a sentence or phrase, using a trellis created from a corresponding dependency structure (e.g.", "Figure 1) .", "Upon receiving possible candidates from sentence compression, WikiLabel turns to the formula in Eqn.", "1 and in particular, Lo 1 to determine a compression that best fits the document in question.", "3 North-Korean Agenda South-Korea and Japan (the number of stories we covered was 2,230 (US), 2,271 (South-Korea), and 2,815 (Japan)).", "Labels in the panels are given as they are generated by WikiLabel, except those for the Japanese media, which are translated from Japanese.", "(The horizontal axis in each panel represents the proportion of stories on a given topic.)", "Notice that there are interesting discrepancies among the countries in the way they talk about North Korea: the US tends to see DPRK as a nuclear menace while South Korea focuses on diplomatic and humanitarian issues surrounding North Korea; the Japanese media, on the other hand, depict the country as if it had nothing worth talking about except nuclear issues and its abduction of the Japanese.", "Table 2 shows how two human assessors, university graduates, rated on average, the quality of labels generated by WikiLabel for articles discussing North-Korea, on a scale of 1 (poor) to 5 (good), for English and Japanese.", "Curiously, a study on news broadcasts in South Korean and Japan (Gwangho, 2006) found that the South Korean media paid more attention to foreign relations and open-door policies of North Korea, while the Japanese media were mostly engrossed with North Korean abductions of Japanese and nuclear issues.", "In Figure 2 , which reproduces some of his findings, we recognize a familiar tendency of the Japanese media to play up nuclear issues and dismiss North Korea's external relations, which resonate with things we have found here with WikiLabel.", "MediaMeter MediaMeter 2 is a web application that draws on WikiLabel to detect trending topics in the US online news media (which includes CNN, ABC, MSNBC, BBC, Fox, Reuters, Yahoo!", "News, etc).", "It is equipped with a visualization capability based on ThemeRiver (Havre et al., 2002; Byron and Wattenberg, 2008) , enabling a simultaneous tracking of multiple topics over time.", "It performs the following routines on a daily basis: (1) collect news stories that appeared during the day; (2) generate topic labels for 600 of them chosen at random; (3) select labels whose score is 1 or above on the burstiness scale (Kleinberg, 2002) ; (4) find for each of the top ranking labels how many stories carry that label; and (5) plot the numbers using the ThemeRiver, together with the associated labels.", "Topic labels are placed automatically through integer linear programming (Christensen et al., 1995) .", "Figure 4 gives a ThemeRiver 
visualization of trending topics for the period from July 10 to 23, 2014.", "Figures 5 and 6 show views focusing on particular topics, with the former looking at the World Cup and the latter at Malaysia.", "The media's attention to the World Cup mushroomed on July 14th, the day when the final match took place, and fizzled out on the following day.", "Meanwhile, in Figure 6 , there is a sudden burst of stories related to Malaysia on July 17th, which coincides with the day when a Malaysian jetliner was shot down over the Ukrainian air space.", "While it is hard to tell how accurately MediaMeter reflects the reality, our feeling is that it is doing reasonably well in picking up major trends in the US news media.", "Evaluation To find where we stand in comparison to prior work, we have done some experiments, using TDT-PILOT, NYT2013, and Fox News corpora.", "TDT-PILOT refers to a corpus containing 15,863 news stories from CNN and Reuters, published between July 1, 1994 and June 30, 1995 .", "The Fox News corpus has the total of 11,014 articles, coming from the online Fox news site, which were published between January, 2015 and April, 2015.", "NYT2013 consists of articles we collected from the New York Times online between June and December, 2013, totaling 19,952.", "We measured performance in terms of how well machine generated 2 http://www.quantmedia.org/meter/demo.html labels match those by humans, based on the metric known as ROUGE-W (Lin, 2004) .", "3 ROUGE-W gives a score indicating the degree of similarity between two strings in terms of the length of a subsequence shared by both strings.", "The score ranges from 0 to 1, with 0 indicating no match and 1 a perfect match.", "In the experiment, we ran Text-Rank (TRANK) (Mihalcea and Tarau, 2004 ) -the current state of the art in topic extraction -and different renditions of WikiLabel: RM1 refers to a model in Eqn 1 with λ set to 0.5 and sentence compression turned off; RM1/X is like RM1 except that it makes use of sentence compression; RM0 is a RM1 with λ set to 1, disengaging Lo altogether.", "Table 3 gives a summary of what we found.", "Numbers in the table denote ROUGE-W scores of relevant systems, averaged over the entire articles in each dataset.", "Per-document performance@1 means that we consider labels that ranked the first when measuring performance.", "One note about FOX.", "FOX has each story labeled with multiple topic descriptors, in contrast to NYT and TDT where we have only one topic label associated with each article.", "Since there was no intrinsically correct way of choosing among descriptors that FOX provides, we paired up a label candidate with each descriptor and ran ROUGE-W on each of the pairs, taking the highest score we got as a representative of the overall performance.", "Results in Table 3 clearly corroborate the superiority of RM0 through RM1/X over TextRank.", "Conclusions In this paper, we looked at a particular approach we call WikiLabel to detecting topics in online news articles, explaining some technical details of how it works, and presented MediaMeter, which showcases WikiLabel in action.", "We also demonstrated the empirical effectiveness of the approach through experiments with NYT2013, FOX News and TDT-PILOT." ] }
{ "paper_header_number": [ "1", "2", "4", "5", "6" ], "paper_header_content": [ "Introduction", "WikiLabel", "MediaMeter", "Evaluation", "Conclusions" ] }
GEM-SciDuet-train-73#paper-1170#slide-13
Human Evaluation
Rating rubric (scale: 1 = poor to 5 = good):
Title is one of major topics in Article. Article gives a particular [...]
Part of Article deals with Title. Article makes a clear reference [...]
Part of Title has some relevance to a dominant theme of Article. Example: Title European Tax System is partially relevant to an article discussing US Tax System.
Article makes a reference to part of Title.
Title has no relevance to Article, in whatever way.
Results table: language / rating / #instances, with rows for english and japanese.
Rating rubric (scale: 1 = poor to 5 = good):
Title is one of major topics in Article. Article gives a particular [...]
Part of Article deals with Title. Article makes a clear reference [...]
Part of Title has some relevance to a dominant theme of Article. Example: Title European Tax System is partially relevant to an article discussing US Tax System.
Article makes a reference to part of Title.
Title has no relevance to Article, in whatever way.
Results table: language / rating / #instances, with rows for english and japanese.
[]
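The paper's label-generalization step (ranking sentence-compression candidates by Eqn. 1, and in particular by Lo) can be sketched in a few lines, reusing lo() from the scorer sketch above. The `compressions` callable is a hypothetical stand-in: the paper derives candidates from a dependency-structure trellis, which is not reproduced here:

```python
# Sketch of label generalization: among compression candidates for a page
# title, keep the one the document supports best via the coverage term Lo.
# `compressions(title)` is assumed to yield grammatical shortenings.

def best_compression(title, theta, compressions):
    candidates = list(compressions(title)) or [title]
    return max(candidates, key=lambda c: lo(c, theta))
```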
GEM-SciDuet-train-73#paper-1170#slide-14
GEM-SciDuet-train-73#paper-1170#slide-14
Evaluation Metric ROUGE-W
The United States of
The United States of America
The United States of
The United States of America
[]
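ROUGE-W, the metric behind the slide above, scores the weighted longest common subsequence of two strings, so the consecutive run "The United States of" inside "The United States of America" earns more credit than scattered matches would. A self-contained sketch following Lin (2004), with the common weighting f(k) = k**2; treat it as an illustration rather than the evaluation code used in the paper:

```python
# Weighted-LCS / ROUGE-W sketch after Lin (2004), with f(k) = k**alpha.

def rouge_w(reference, candidate, alpha=2.0):
    x, y = reference.split(), candidate.split()
    m, n = len(x), len(y)
    f = lambda k: k ** alpha              # weighting function
    f_inv = lambda s: s ** (1.0 / alpha)  # its inverse
    c = [[0.0] * (n + 1) for _ in range(m + 1)]  # WLCS scores
    w = [[0] * (n + 1) for _ in range(m + 1)]    # consecutive-match runs
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                k = w[i - 1][j - 1]
                c[i][j] = c[i - 1][j - 1] + f(k + 1) - f(k)
                w[i][j] = k + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
                w[i][j] = 0
    if m == 0 or n == 0:
        return 0.0
    r = f_inv(c[m][n] / f(m))  # recall against the reference
    p = f_inv(c[m][n] / f(n))  # precision against the candidate
    return 0.0 if r + p == 0 else 2 * r * p / (r + p)

# e.g. rouge_w("The United States of America", "The United States of")
```

Identical strings score 1.0 and disjoint strings 0.0, matching the 0-to-1 range described in the paper's evaluation section.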
GEM-SciDuet-train-73#paper-1170#slide-15
GEM-SciDuet-train-73#paper-1170#slide-15
Results
Text Rank vs. WikiLabel (systems: trank, rm0, rm1, rm1/x)
Text Rank vs. WikiLabel (systems: trank, rm0, rm1, rm1/x)
[]
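The evaluation section of the paper also spells out how the FOX corpus is scored: because each story carries several gold topic descriptors, a system label is credited with its best match, and per-document performance@1 is then averaged over the dataset. A small sketch of that rule, reusing rouge_w() from the sketch above; the function names are illustrative:

```python
# Sketch of the FOX scoring rule: pair the label candidate with each gold
# descriptor, run ROUGE-W on each pair, and take the highest score.

def fox_score(label, descriptors):
    return max(rouge_w(d, label) for d in descriptors)

def mean_performance_at_1(top_labels, gold_descriptors):
    """Average per-document performance@1 over a dataset."""
    scores = [fox_score(lab, descs)
              for lab, descs in zip(top_labels, gold_descriptors)]
    return sum(scores) / len(scores)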
GEM-SciDuet-train-73#paper-1170#slide-16
GEM-SciDuet-train-73#paper-1170#slide-16
Summary
Talked about topic detection using WikiLabel
Generalizing concept with sentence compression
Use of sentence compression led to a huge improvement, producing performance twice as good as that of TextRank
Online topic learning seems promising
Talked about topic detection using WikiLabel
Generalizing concept with sentence compression
Use of sentence compression led to a huge improvement, producing performance twice as good as that of TextRank
Online topic learning seems promising
[]
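MediaMeter's five daily routines, listed in the paper text above, compose naturally into a small driver loop. A sketch under stated assumptions: fetch_stories(), burstiness(), and plot_themeriver() are hypothetical stand-ins, since the real system scores burstiness per Kleinberg (2002) and places ThemeRiver labels via integer linear programming; wikilabel() is the scorer sketch from earlier:

```python
import random
from collections import Counter

# Sketch of MediaMeter's daily routine, steps (1)-(5) in the paper.
# The three injected callables are placeholders, not APIs from the paper.

def daily_update(fetch_stories, theta_of, wiki_pages,
                 burstiness, plot_themeriver, sample_size=600):
    stories = fetch_stories()                                    # (1) collect
    sample = random.sample(stories, min(sample_size, len(stories)))
    labels = [wikilabel(theta_of(s), wiki_pages)[0][1]           # (2) label
              for s in sample]
    trending = {l for l in set(labels) if burstiness(l) >= 1.0}  # (3) filter
    counts = Counter(l for l in labels if l in trending)         # (4) count
    plot_themeriver(counts)                                      # (5) plot
```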
GEM-SciDuet-train-73#paper-1170#slide-17
1170
MediaMeter: A Global Monitor for Online News Coverage
This paper introduces MediaMeter, an application that works to detect and track emergent topics in the US online news media. What makes MediaMeter unique is its reliance on a labeling algorithm which we call WikiLabel, whose primary goal is to identify what news stories are about by looking up Wikipedia. We discuss some of the major news events that were successfully detected and how it compares to prior work.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56 ], "paper_content_text": [ "Introduction A long term goal of this project is to build a sociologically credible computational platform that enables the user to observe how social agenda evolve and spread across the globe and across the media, as they happen.", "To this end, we have built a prototype system we call MediaMeter, which is designed to detect and track trending topics in the online US news media.", "One important feature of the system lies in its making use of and building upon a particular approach called WikiLabel (Nomoto, 2011) .", "The idea was to identify topics of a document by mapping it into a conceptual space derived from Wikipedia, which consists of finding a Wikipedia page similar to the document and taking its page title as a possible topic label.", "Further, to deal with events not known to Wikipedia, it is equipped with the capability of re-creating a page title so as to make it better fit the content of the document.", "In the following, we look at what WikiLabel does and how it works before we discuss MediaMeter.", "WikiLabel WikiLabel takes as input a document which one likes to have labeled, and outputs a ranked list of label candidates along with the confidence scores.", "The document it takes as input needs to be in the form of a vector space model (VSM).", "Now assume that θ represents a VSM of document d. Let us define l * θ , a likely topic label for d, as follows.", "l * θ = arg max l:p[l]∈U Prox(p[l], θ| N ), (1) where p[l] denotes a Wikipedia page with a title l and θ| N a VSM with its elements limited to top N terms in d (as measured by TFIDF).", "Prox (p[l] , θ| N ) is given by: Prox(p[l], θ| N ) = λSr(p[l], θ| N )+(1−λ)Lo(l, θ).", "We let: Sr(r, q) = ( 1 + N ∑ t (q(t) − r(t)) 2 ) −1 and Lo(l, v) = ∑ |l| i I(l[i], v) | l | − 1 where I(w, v) = 1 if w ∈ v and 0 otherwise.", "Sr( x, y) represents the distance between x and y, normalized to vary between 0 and 1.", "Lo(l, v) measures how many terms l and v have in common, intended to quantify the relevance of l to v. l[i] indicates i-th term in l. Note that Lo works as a penalizing term: if one finds all the terms l has in v, there will be no penalty: if not, there will be a penalty, the degree of which depends on the number of terms in l that are missing in v. 
U represents the entire set of pages in Wikipedia whose namespace is 0.", "We refer to an approach based on the model in Eqn.", "1 as 'WikiLabel.'", "We note that the prior work by Nomoto (2011) which the current approach builds on, is equivalent to the model in Eqn.", "1 with λ set to 1.", "One important feature of the present version, which is not shared by the previous one, is its ability to go beyond Wikipedia page titles: if it comes across a news story with a topic unknown to Wikipedia, WikiLabel will generalize a relevant page title by removing parts of it that are not warranted by the story, while making sure that its grammar stays intact.", "A principal driver of this process is sentence compression, which works to shorten a sentence or phrase, using a trellis created from a corresponding dependency structure (e.g.", "Figure 1) .", "Upon receiving possible candidates from sentence compression, WikiLabel turns to the formula in Eqn.", "1 and in particular, Lo 1 to determine a compression that best fits the document in question.", "3 North-Korean Agenda South-Korea and Japan (the number of stories we covered was 2,230 (US), 2,271 (South-Korea), and 2,815 (Japan)).", "Labels in the panels are given as they are generated by WikiLabel, except those for the Japanese media, which are translated from Japanese.", "(The horizontal axis in each panel represents the proportion of stories on a given topic.)", "Notice that there are interesting discrepancies among the countries in the way they talk about North Korea: the US tends to see DPRK as a nuclear menace while South Korea focuses on diplomatic and humanitarian issues surrounding North Korea; the Japanese media, on the other hand, depict the country as if it had nothing worth talking about except nuclear issues and its abduction of the Japanese.", "Table 2 shows how two human assessors, university graduates, rated on average, the quality of labels generated by WikiLabel for articles discussing North-Korea, on a scale of 1 (poor) to 5 (good), for English and Japanese.", "Curiously, a study on news broadcasts in South Korean and Japan (Gwangho, 2006) found that the South Korean media paid more attention to foreign relations and open-door policies of North Korea, while the Japanese media were mostly engrossed with North Korean abductions of Japanese and nuclear issues.", "In Figure 2 , which reproduces some of his findings, we recognize a familiar tendency of the Japanese media to play up nuclear issues and dismiss North Korea's external relations, which resonate with things we have found here with WikiLabel.", "MediaMeter MediaMeter 2 is a web application that draws on WikiLabel to detect trending topics in the US online news media (which includes CNN, ABC, MSNBC, BBC, Fox, Reuters, Yahoo!", "News, etc).", "It is equipped with a visualization capability based on ThemeRiver (Havre et al., 2002; Byron and Wattenberg, 2008) , enabling a simultaneous tracking of multiple topics over time.", "It performs the following routines on a daily basis: (1) collect news stories that appeared during the day; (2) generate topic labels for 600 of them chosen at random; (3) select labels whose score is 1 or above on the burstiness scale (Kleinberg, 2002) ; (4) find for each of the top ranking labels how many stories carry that label; and (5) plot the numbers using the ThemeRiver, together with the associated labels.", "Topic labels are placed automatically through integer linear programming (Christensen et al., 1995) .", "Figure 4 gives a ThemeRiver 
visualization of trending topics for the period from July 10 to 23, 2014.", "Figures 5 and 6 show views focusing on particular topics, with the former looking at the World Cup and the latter at Malaysia.", "The media's attention to the World Cup mushroomed on July 14th, the day when the final match took place, and fizzled out on the following day.", "Meanwhile, in Figure 6 , there is a sudden burst of stories related to Malaysia on July 17th, which coincides with the day when a Malaysian jetliner was shot down over the Ukrainian air space.", "While it is hard to tell how accurately MediaMeter reflects the reality, our feeling is that it is doing reasonably well in picking up major trends in the US news media.", "Evaluation To find where we stand in comparison to prior work, we have done some experiments, using TDT-PILOT, NYT2013, and Fox News corpora.", "TDT-PILOT refers to a corpus containing 15,863 news stories from CNN and Reuters, published between July 1, 1994 and June 30, 1995 .", "The Fox News corpus has the total of 11,014 articles, coming from the online Fox news site, which were published between January, 2015 and April, 2015.", "NYT2013 consists of articles we collected from the New York Times online between June and December, 2013, totaling 19,952.", "We measured performance in terms of how well machine generated 2 http://www.quantmedia.org/meter/demo.html labels match those by humans, based on the metric known as ROUGE-W (Lin, 2004) .", "3 ROUGE-W gives a score indicating the degree of similarity between two strings in terms of the length of a subsequence shared by both strings.", "The score ranges from 0 to 1, with 0 indicating no match and 1 a perfect match.", "In the experiment, we ran Text-Rank (TRANK) (Mihalcea and Tarau, 2004 ) -the current state of the art in topic extraction -and different renditions of WikiLabel: RM1 refers to a model in Eqn 1 with λ set to 0.5 and sentence compression turned off; RM1/X is like RM1 except that it makes use of sentence compression; RM0 is a RM1 with λ set to 1, disengaging Lo altogether.", "Table 3 gives a summary of what we found.", "Numbers in the table denote ROUGE-W scores of relevant systems, averaged over the entire articles in each dataset.", "Per-document performance@1 means that we consider labels that ranked the first when measuring performance.", "One note about FOX.", "FOX has each story labeled with multiple topic descriptors, in contrast to NYT and TDT where we have only one topic label associated with each article.", "Since there was no intrinsically correct way of choosing among descriptors that FOX provides, we paired up a label candidate with each descriptor and ran ROUGE-W on each of the pairs, taking the highest score we got as a representative of the overall performance.", "Results in Table 3 clearly corroborate the superiority of RM0 through RM1/X over TextRank.", "Conclusions In this paper, we looked at a particular approach we call WikiLabel to detecting topics in online news articles, explaining some technical details of how it works, and presented MediaMeter, which showcases WikiLabel in action.", "We also demonstrated the empirical effectiveness of the approach through experiments with NYT2013, FOX News and TDT-PILOT." ] }
{ "paper_header_number": [ "1", "2", "4", "5", "6" ], "paper_header_content": [ "Introduction", "WikiLabel", "MediaMeter", "Evaluation", "Conclusions" ] }
GEM-SciDuet-train-73#paper-1170#slide-17
Solution to Problem
Frequencies of (manually assigned) topic descriptors that appeared in the New York Times from June to December, 2013. (Chart axis: Rank of Topic Descriptor.)
Frequencies of (manually assigned) topic descriptors that appeared in the New York Times from June to December, 2013. (Chart axis: Rank of Topic Descriptor.)
[]
GEM-SciDuet-train-74#paper-1171#slide-0
1171
Automatic Extraction of Parallel Speech Corpora from Dubbed Movies
This paper presents a methodology to extract parallel speech corpora based on any language pair from dubbed movies, together with an application framework in which some corresponding prosodic parameters are extracted. The obtained parallel corpora are especially suitable for speech-to-speech translation applications when a prosody transfer between source and target languages is desired.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94 ], "paper_content_text": [ "Introduction The availability of large parallel corpora is one of the major challenges in developing translation systems.", "Bilingual corpora, which are needed to train statistical translation models, are harder to acquire than monolingual corpora since they presuppose the implication of labour in translation or interpretation.", "Working in the speech domain introduces even more difficulties since interpretations are not sufficient to capture the paralinguistic aspects of speech.", "Several attempts have been recently made to acquire spoken parallel corpora of considerable size.", "However, these corpora either do not reflect the prosodic aspects in the interpreted speech or do not carry the traits of natural speech.", "Or they simply do not align well the source and the target language sides.", "To account for this deficit, we propose to exploit dubbed movies where expressive speech is readily available in multiple languages and their corresponding aligned scripts are easily accessible through subtitles.", "Movies and TV shows have been a good resource for collecting parallel bilingual data because of the availability and open access of subtitles in different languages.", "With 1850 bitexts of 65 languages, the OpenSubtitles project (Lison and Tiedemann, 2016) is the largest re-source of translated movie subtitles compiled so far.", "The time information in subtitles makes it easy to align sentences of different languages since timing is correlated to the same audio (Itamar and Itai, 2008) .", "In the presence of multiple aligned audio for the same movie, the alignment can be extended to obtain parallel speech corpora.", "Popular movies, TV shows and documentaries are released with dubbed audio in many countries.", "Dubbing requires the voice acting of the original speech in another language.", "Because of this, the dubbed speech carries more or less the same paralinguistic aspects of the original speech.", "In what follows, we describe our methodology for the extraction of a speech parallel corpus based on any language pair from dubbed movies.", "Unlike Tsiartas et al.", "(2011) , who propose a method based on machine learning for automatically extracting bilingual audio-subtitle pairs from movies, we only need raw movie data, and do not require any training.", "Moreover, our methodology ensures the fulfilment of the following requirements: (a) it is easily expandable, (b) it supports multiple pairs of languages, (c) it can handle any domain and speech style, and (d) it delivers a parallel spoken language corpus with annotated expressive speech.", "\"Expressive speech\" annotation means that the corpus is prosodically rich, which is essential to be able to deal with non-neutral speech emotions, as done in increasingly popular speech-to-speech translation applications that try to cope with prosody transfer between source and target utterances (Agüero et al., 2006; Sridhar et al., 2008; Anumanchipalli et al., 2012) .", "The remainder of the paper is structured as follows.", "Section 2 reviews the main multilingual parallel speech corpora available to the research 
community.", "Section 3 presents the methodology used in the current paper, and Section 4 discusses the current state of the obtained parallel corpora so far.", "In Section 5, finally, some conclusions are drawn and some aspects of our future work in the context of parallel speech corpora are mentioned.", "Available Parallel Speech Corpora As already mentioned above, several attempts have been made to compile large spoken parallel corpora.", "Such corpora of considerable size are, e.g., the EPIC corpus (Bendazzoli and Sandrelli, 2005) , the EMIME Bilingual Database (Wester, 2010) , and the Microsoft Speech Language Translation (MSLT) corpus (Federmann and Lewis, 2016) .", "All of them have been manually compiled, and all of them show one or several shortcomings.", "The EPIC corpus, which has been compiled from speeches from the European Parliament and their interpretations, falls short in reflecting the prosodic aspects in the interpreted speech.", "The EMIME database is a compilation of prompted speeches and does not capture the natural spoken language traits.", "The MSLT corpus has been collected in bilingual conversation settings, but there is no one-to-one alignment between sentences in different languages.", "A summary of the available bilingual speech corpora is listed in Table 1 .", "Methodology Our multimodal parallel corpus creation consists of three main stages: (1) movie sentence segmentation, (2) prosodic parameter extraction, and (3) parallel sentence alignment.", "The first and second stages can be seen as a monolingual data creation, as they take the audio and subtitle pairs as input in one language, and output speech/text/prosodic parameters at the sentence level.", "The resulting monolingual data from stages 1 and 2 are fed into stage 3, where corresponding sentences are aligned and reordered to create the corresponding parallel data.", "A general overview of the system is presented in Figure 1 .", "Let us discuss each of these stages in turn.", "Segmentation of movie audio into sentences This stage involves the extraction of audio and complete sentences from the original audio and the corresponding subtitles of the movie.", "For subtitles, the SubRip text file format 1 (SRT) is accepted.", "Each subtitle entry contains the following information: (i) start time, (ii) end time, and (iii) text of the speech spoken at that time in the movie.", "The subtitle entries do not necessarily correspond to sentences: a subtitle entry may include more than one sentence, and a sentence can spread over many subtitle entries; consider an example portion of a subtitle: The sentence segmentation stage starts with a preprocessing step in which elements that do not correspond to speech are removed.", "These include: Speaker name markers (e.g., JAMES: .", ".", ".", "), text formatting tags, non-verbal information (laughter, horn, etc.)", "and speech dashes.", "Audio is initially segmented according to the timestamps in subtitle entries, with extra 0.5 seconds at each end.", "Then, each audio segment and its respective subtitle text are sent to the speech aligner software (Vocapia Scribe 2 ) to detect word boundaries.", "This pre-segmentation helps to detect the times of the words that end with a sentence-ending punctuation mark ('.", "', '?", "', '!", "', ':', '...').", "Average word boundary confidence score of the word alignment is used to determine whether the sentence will be extracted successfully or not.", "If the confidence score is above a threshold of 0.5, the initial segment is cut 
from occurrences of sentence-endings.", "In a second pass, cut segments that do not end with a sentenceending punctuation mark are merged with the subsequent segments to form full sentences.", "We used Libav 3 library to perform the audio cuts.", "Prosodic parameter extraction This stage involves prosodic parameter extraction for each sentence segment detected in stage 1.", "The ProsodyPro library (Xu, 2013 ) (a script developed for the Praat software (Boersma and Weenink, 2001) ) is used to extract prosodic features from speech.", "As input, ProsodyPro takes the audio of Corpus Languages Speech style EPIC English, Italian, Spanish spontaneous/interpreted MSLT English, French, German constrained conversations EMIME Finnish/English, German/English prompted EMIME Mandarin Mandarin/English prompted MDA (Almeman et al., 2013) Four Arabic dialects prompted Farsi-English (Melvin et al., 2004) Farsi/English read/semi-spontaneous an utterance and a TextGrid file containing word boundaries and outputs a set of objective measurements suitable for statistical analysis.", "We run ProsodyPro for each audio and TextGrid pair of sentences to generate the prosodic analysis files.", "See Table 2 for the list of analyses performed by ProsodyPro (Information taken from ProsodyPro webpage 4 ).", "The TextGrid file with word boundaries is produced by sending the sentence audio and transcript to the word-aligner software and then converting the alignment information in XML into TextGrid format.", "Having word boundaries makes it possible to align continuous prosodic parameters (such as pitch contour) with the words in the sentence.", "Parallel sentence alignment This stage involves the creation of the parallel data from two monolingual data obtained from different audio and subtitle pairs of the same movie.", "The goal is to find the corresponding sentence s 2 in language 2, given a sentence s 1 in language 1.", "For each s 1 with timestamps (s s 1 , e s 1 ), s 2 is searched within a sliding window among sentences that start in the time interval [s s 1 -5, s s 1 + 5].", "Among candidate sentences within the range, the most similar to s 1 is found by first translating s 1 to language 2 and then choosing the {s 1 , s 2 } pair that gives the best translation similarity measure above a certain threshold.", "For translation, the Yandex Translate API 5 and for similarity measure the Meteor library (Denkowski and Lavie, 2014) is used.", "Obtained Corpus and Discussion We have tested our methodology on three movies, which we retrieved from the University Library: The Man Who Knew Too Much (1956), Slow West (2015) and The Perfect Guy (2015).", "The movies are originally in English, but also have dubbed Spanish audio.", "English and Spanish subtitles were ProsodyPro output file Description rawf0 Raw f0 contour in Hz f0 Smoothed f0 with trimming algorithm (Hz) smoothf0 Smoothed f0 with triangular window (Hz) semitonef0 f0 contour in semitones samplef0 f0 values at fixed time intervals (Hz) f0velocity First derivative of f0 means f0, intensity and velocity parameters (mean, max, min) for each word normtimef0 Constant number of f0 values for each word normtimeIntensity Constant number of intensity values for each word Table 2 : Some of the files generated by ProsodyPro.", "acquired from the opensubtitles webpage 6 .", "At the time of the submission, we have automatically extracted 2603 sentences in English and 1963 sentences in Spanish summing up to 80 and 49 minutes of audio respectively and annotated with prosodic 
parameters.", "1328 of these sentences were aligned to create our current parallel bilingual corpora.", "We are in the process of expanding our dataset.", "Due to the copyright on the movies, we are unable to distribute the corpus that we extracted.", "However, using our software, it is easy for any researcher to compile a corpus on their own.", "For testing purposes, English and Spanish subtitles and audio of a small portion of the movie The Man Who Knew Too Much, as well as the parallel data extracted with this methodology are made available on the github page of the project.", "Table 3 lists the number of monolingual and 6 https://www.opensubtitles.org/ parallel sentences obtained from the three movies so far.", "We observe that the number of Spanish sentences extracted in stage 2 is sometimes lower than the number of English sentences.", "This is mainly because of the translation difference between the Spanish subtitles and the dubbed Spanish audio.", "Subtitles in languages other than the original language of the movie do not always correspond with the transcript used in dubbing.", "If the audio and the text obtained from the subtitle do not match, the word aligner software performs poorly and that sentence is skipped.", "This results in fewer number of extracted sentences in dubbed languages of the movie.", "Table 4 shows more in detail the effect of this.", "Poor audio-text alignment results in loss of 15.0% of the sentences in original audio, whereas in dubbed audio this loss increases to 49.6%.", "Movie Another major effect on detection of sentences is the background noise.", "This again interferes with the performance of the word aligner software.", "But since samples with less background noise is desired for a speech database, elimination of these samples is not considered as a problem.", "Conclusions and Future Work We have presented a methodology for the extraction of multimodal speech, text and prosody parallel corpora from dubbed movies.", "Movies contain large samples of conversational speech, which makes the obtained corpus especially useful for speech-to-speech translation applications.", "It is also useful for other research fields such as large comparative linguistic and prosodic studies.", "As long as we have access to a matching pair of audio and subtitles of movies, the corpora obtained can be extended as a multilingual speech parallel corpora adaptable to any language pair.", "Moreover, it is an open-source tool and it can be adapted to any other prosodic feature extraction module in order to obtain a customized prosody parallel corpus for any specific application.", "The code to extract multilingual parallel corpora together with a processed sample movie excerpt is open source and available to use 7 under the GNU General Public License 8 .", "As future work, we plan to extend our corpus in size and make the parallel prosodic parameters available online.", "We also plan to replace the proprietary word aligner tool we are using with an open source alternative with better precision and speed." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "4", "5" ], "paper_header_content": [ "Introduction", "Available Parallel Speech Corpora", "Methodology", "Segmentation of movie audio into sentences", "Prosodic parameter extraction", "Parallel sentence alignment", "Obtained Corpus and Discussion", "Conclusions and Future Work" ] }
GEM-SciDuet-train-74#paper-1171#slide-0
Parallel Speech Corpora
Spoken parallel corpora are useful in building speech-to-speech applications. Costly: laborious with respect to translation and interpretation. Contain unexpressive speech (e.g. interpreted). Do not capture spontaneous spoken language traits. Lack one-to-one alignment between words/sentences.
Spoken parallel corpora are useful in building speech-to-speech applications. Costly: laborious with respect to translation and interpretation. Contain unexpressive speech (e.g. interpreted). Do not capture spontaneous spoken language traits. Lack one-to-one alignment between words/sentences.
[]
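Stage 3, the parallel sentence alignment, reduces to a small search once the monolingual sentence lists from stages 1 and 2 exist. The sketch below mirrors the procedure described in the paper, with translate and similarity injected as stand-ins for the external Yandex Translate and Meteor calls; the 0.3 threshold is an assumed value, since the paper only says "a certain threshold".

```python
def align_sentences(src, tgt, translate, similarity,
                    window=5.0, threshold=0.3):
    """src/tgt: lists of (start_sec, end_sec, text) in languages 1 and 2.
    `translate(text)` maps language-1 text into language 2;
    `similarity(a, b)` returns a score in [0, 1] (e.g. Meteor).
    Returns a list of (src_index, tgt_index, score) pairs."""
    pairs = []
    for i, (s_start, _, s_text) in enumerate(src):
        hyp = translate(s_text)
        # candidates: target sentences starting within +/- window seconds
        candidates = [(j, t_text) for j, (t_start, _, t_text) in enumerate(tgt)
                      if abs(t_start - s_start) <= window]
        scored = [(similarity(hyp, t_text), j) for j, t_text in candidates]
        if not scored:
            continue
        best_score, best_j = max(scored)
        if best_score >= threshold:
            pairs.append((i, best_j, best_score))
    return pairs
```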
GEM-SciDuet-train-74#paper-1171#slide-1
GEM-SciDuet-train-74#paper-1171#slide-1
Dubbed movies as a resource
Popular movies, documentaries, TV shows are dubbed in many countries*. A good resource for obtaining bilingual data: (1) available parallel audio data in dubbed movies; (2) transcripts available with time information in subtitles. Speech! (Movie still from The Man Who Knew Too Much (1956), Universal Pictures.)
Popular movies, documentaries, TV shows are dubbed in many countries*. A good resource for obtaining bilingual data: (1) available parallel audio data in dubbed movies; (2) transcripts available with time information in subtitles. Speech! (Movie still from The Man Who Knew Too Much (1956), Universal Pictures.)
[]
GEM-SciDuet-train-74#paper-1171#slide-2
GEM-SciDuet-train-74#paper-1171#slide-2
Example Dubbing in European Countries
Image source: Wikipedia - Dubbing (filmmaking)
Image source: Wikipedia - Dubbing (filmmaking)
[]
GEM-SciDuet-train-74#paper-1171#slide-3
1171
Automatic Extraction of Parallel Speech Corpora from Dubbed Movies
This paper presents a methodology to extract parallel speech corpora based on any language pair from dubbed movies, together with an application framework in which some corresponding prosodic parameters are extracted. The obtained parallel corpora are especially suitable for speech-to-speech translation applications when a prosody transfer between source and target languages is desired.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94 ], "paper_content_text": [ "Introduction The availability of large parallel corpora is one of the major challenges in developing translation systems.", "Bilingual corpora, which are needed to train statistical translation models, are harder to acquire than monolingual corpora since they presuppose the implication of labour in translation or interpretation.", "Working in the speech domain introduces even more difficulties since interpretations are not sufficient to capture the paralinguistic aspects of speech.", "Several attempts have been recently made to acquire spoken parallel corpora of considerable size.", "However, these corpora either do not reflect the prosodic aspects in the interpreted speech or do not carry the traits of natural speech.", "Or they simply do not align well the source and the target language sides.", "To account for this deficit, we propose to exploit dubbed movies where expressive speech is readily available in multiple languages and their corresponding aligned scripts are easily accessible through subtitles.", "Movies and TV shows have been a good resource for collecting parallel bilingual data because of the availability and open access of subtitles in different languages.", "With 1850 bitexts of 65 languages, the OpenSubtitles project (Lison and Tiedemann, 2016) is the largest re-source of translated movie subtitles compiled so far.", "The time information in subtitles makes it easy to align sentences of different languages since timing is correlated to the same audio (Itamar and Itai, 2008) .", "In the presence of multiple aligned audio for the same movie, the alignment can be extended to obtain parallel speech corpora.", "Popular movies, TV shows and documentaries are released with dubbed audio in many countries.", "Dubbing requires the voice acting of the original speech in another language.", "Because of this, the dubbed speech carries more or less the same paralinguistic aspects of the original speech.", "In what follows, we describe our methodology for the extraction of a speech parallel corpus based on any language pair from dubbed movies.", "Unlike Tsiartas et al.", "(2011) , who propose a method based on machine learning for automatically extracting bilingual audio-subtitle pairs from movies, we only need raw movie data, and do not require any training.", "Moreover, our methodology ensures the fulfilment of the following requirements: (a) it is easily expandable, (b) it supports multiple pairs of languages, (c) it can handle any domain and speech style, and (d) it delivers a parallel spoken language corpus with annotated expressive speech.", "\"Expressive speech\" annotation means that the corpus is prosodically rich, which is essential to be able to deal with non-neutral speech emotions, as done in increasingly popular speech-to-speech translation applications that try to cope with prosody transfer between source and target utterances (Agüero et al., 2006; Sridhar et al., 2008; Anumanchipalli et al., 2012) .", "The remainder of the paper is structured as follows.", "Section 2 reviews the main multilingual parallel speech corpora available to the research 
community.", "Section 3 presents the methodology used in the current paper, and Section 4 discusses the current state of the obtained parallel corpora so far.", "In Section 5, finally, some conclusions are drawn and some aspects of our future work in the context of parallel speech corpora are mentioned.", "Available Parallel Speech Corpora As already mentioned above, several attempts have been made to compile large spoken parallel corpora.", "Such corpora of considerable size are, e.g., the EPIC corpus (Bendazzoli and Sandrelli, 2005) , the EMIME Bilingual Database (Wester, 2010) , and the Microsoft Speech Language Translation (MSLT) corpus (Federmann and Lewis, 2016) .", "All of them have been manually compiled, and all of them show one or several shortcomings.", "The EPIC corpus, which has been compiled from speeches from the European Parliament and their interpretations, falls short in reflecting the prosodic aspects in the interpreted speech.", "The EMIME database is a compilation of prompted speeches and does not capture the natural spoken language traits.", "The MSLT corpus has been collected in bilingual conversation settings, but there is no one-to-one alignment between sentences in different languages.", "A summary of the available bilingual speech corpora is listed in Table 1 .", "Methodology Our multimodal parallel corpus creation consists of three main stages: (1) movie sentence segmentation, (2) prosodic parameter extraction, and (3) parallel sentence alignment.", "The first and second stages can be seen as a monolingual data creation, as they take the audio and subtitle pairs as input in one language, and output speech/text/prosodic parameters at the sentence level.", "The resulting monolingual data from stages 1 and 2 are fed into stage 3, where corresponding sentences are aligned and reordered to create the corresponding parallel data.", "A general overview of the system is presented in Figure 1 .", "Let us discuss each of these stages in turn.", "Segmentation of movie audio into sentences This stage involves the extraction of audio and complete sentences from the original audio and the corresponding subtitles of the movie.", "For subtitles, the SubRip text file format 1 (SRT) is accepted.", "Each subtitle entry contains the following information: (i) start time, (ii) end time, and (iii) text of the speech spoken at that time in the movie.", "The subtitle entries do not necessarily correspond to sentences: a subtitle entry may include more than one sentence, and a sentence can spread over many subtitle entries; consider an example portion of a subtitle: The sentence segmentation stage starts with a preprocessing step in which elements that do not correspond to speech are removed.", "These include: Speaker name markers (e.g., JAMES: .", ".", ".", "), text formatting tags, non-verbal information (laughter, horn, etc.)", "and speech dashes.", "Audio is initially segmented according to the timestamps in subtitle entries, with extra 0.5 seconds at each end.", "Then, each audio segment and its respective subtitle text are sent to the speech aligner software (Vocapia Scribe 2 ) to detect word boundaries.", "This pre-segmentation helps to detect the times of the words that end with a sentence-ending punctuation mark ('.", "', '?", "', '!", "', ':', '...').", "Average word boundary confidence score of the word alignment is used to determine whether the sentence will be extracted successfully or not.", "If the confidence score is above a threshold of 0.5, the initial segment is cut 
from occurrences of sentence-endings.", "In a second pass, cut segments that do not end with a sentenceending punctuation mark are merged with the subsequent segments to form full sentences.", "We used Libav 3 library to perform the audio cuts.", "Prosodic parameter extraction This stage involves prosodic parameter extraction for each sentence segment detected in stage 1.", "The ProsodyPro library (Xu, 2013 ) (a script developed for the Praat software (Boersma and Weenink, 2001) ) is used to extract prosodic features from speech.", "As input, ProsodyPro takes the audio of Corpus Languages Speech style EPIC English, Italian, Spanish spontaneous/interpreted MSLT English, French, German constrained conversations EMIME Finnish/English, German/English prompted EMIME Mandarin Mandarin/English prompted MDA (Almeman et al., 2013) Four Arabic dialects prompted Farsi-English (Melvin et al., 2004) Farsi/English read/semi-spontaneous an utterance and a TextGrid file containing word boundaries and outputs a set of objective measurements suitable for statistical analysis.", "We run ProsodyPro for each audio and TextGrid pair of sentences to generate the prosodic analysis files.", "See Table 2 for the list of analyses performed by ProsodyPro (Information taken from ProsodyPro webpage 4 ).", "The TextGrid file with word boundaries is produced by sending the sentence audio and transcript to the word-aligner software and then converting the alignment information in XML into TextGrid format.", "Having word boundaries makes it possible to align continuous prosodic parameters (such as pitch contour) with the words in the sentence.", "Parallel sentence alignment This stage involves the creation of the parallel data from two monolingual data obtained from different audio and subtitle pairs of the same movie.", "The goal is to find the corresponding sentence s 2 in language 2, given a sentence s 1 in language 1.", "For each s 1 with timestamps (s s 1 , e s 1 ), s 2 is searched within a sliding window among sentences that start in the time interval [s s 1 -5, s s 1 + 5].", "Among candidate sentences within the range, the most similar to s 1 is found by first translating s 1 to language 2 and then choosing the {s 1 , s 2 } pair that gives the best translation similarity measure above a certain threshold.", "For translation, the Yandex Translate API 5 and for similarity measure the Meteor library (Denkowski and Lavie, 2014) is used.", "Obtained Corpus and Discussion We have tested our methodology on three movies, which we retrieved from the University Library: The Man Who Knew Too Much (1956), Slow West (2015) and The Perfect Guy (2015).", "The movies are originally in English, but also have dubbed Spanish audio.", "English and Spanish subtitles were ProsodyPro output file Description rawf0 Raw f0 contour in Hz f0 Smoothed f0 with trimming algorithm (Hz) smoothf0 Smoothed f0 with triangular window (Hz) semitonef0 f0 contour in semitones samplef0 f0 values at fixed time intervals (Hz) f0velocity First derivative of f0 means f0, intensity and velocity parameters (mean, max, min) for each word normtimef0 Constant number of f0 values for each word normtimeIntensity Constant number of intensity values for each word Table 2 : Some of the files generated by ProsodyPro.", "acquired from the opensubtitles webpage 6 .", "At the time of the submission, we have automatically extracted 2603 sentences in English and 1963 sentences in Spanish summing up to 80 and 49 minutes of audio respectively and annotated with prosodic 
parameters.", "1328 of these sentences were aligned to create our current parallel bilingual corpora.", "We are in the process of expanding our dataset.", "Due to the copyright on the movies, we are unable to distribute the corpus that we extracted.", "However, using our software, it is easy for any researcher to compile a corpus on their own.", "For testing purposes, English and Spanish subtitles and audio of a small portion of the movie The Man Who Knew Too Much, as well as the parallel data extracted with this methodology are made available on the github page of the project.", "Table 3 lists the number of monolingual and 6 https://www.opensubtitles.org/ parallel sentences obtained from the three movies so far.", "We observe that the number of Spanish sentences extracted in stage 2 is sometimes lower than the number of English sentences.", "This is mainly because of the translation difference between the Spanish subtitles and the dubbed Spanish audio.", "Subtitles in languages other than the original language of the movie do not always correspond with the transcript used in dubbing.", "If the audio and the text obtained from the subtitle do not match, the word aligner software performs poorly and that sentence is skipped.", "This results in fewer number of extracted sentences in dubbed languages of the movie.", "Table 4 shows more in detail the effect of this.", "Poor audio-text alignment results in loss of 15.0% of the sentences in original audio, whereas in dubbed audio this loss increases to 49.6%.", "Movie Another major effect on detection of sentences is the background noise.", "This again interferes with the performance of the word aligner software.", "But since samples with less background noise is desired for a speech database, elimination of these samples is not considered as a problem.", "Conclusions and Future Work We have presented a methodology for the extraction of multimodal speech, text and prosody parallel corpora from dubbed movies.", "Movies contain large samples of conversational speech, which makes the obtained corpus especially useful for speech-to-speech translation applications.", "It is also useful for other research fields such as large comparative linguistic and prosodic studies.", "As long as we have access to a matching pair of audio and subtitles of movies, the corpora obtained can be extended as a multilingual speech parallel corpora adaptable to any language pair.", "Moreover, it is an open-source tool and it can be adapted to any other prosodic feature extraction module in order to obtain a customized prosody parallel corpus for any specific application.", "The code to extract multilingual parallel corpora together with a processed sample movie excerpt is open source and available to use 7 under the GNU General Public License 8 .", "As future work, we plan to extend our corpus in size and make the parallel prosodic parameters available online.", "We also plan to replace the proprietary word aligner tool we are using with an open source alternative with better precision and speed." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "4", "5" ], "paper_header_content": [ "Introduction", "Available Parallel Speech Corpora", "Methodology", "Segmentation of movie audio into sentences", "Prosodic parameter extraction", "Parallel sentence alignment", "Obtained Corpus and Discussion", "Conclusions and Future Work" ] }
GEM-SciDuet-train-74#paper-1171#slide-3
Proposed Method
Automatic extraction of segmented parallel sentences with prosodic parameters. Input: Bilingual audio and subtitles pair. Output: Aligned bilingual sentences annotated with prosodic features. Supports any language pair. Aligned at sentence level.
Automatic extraction of segmented parallel sentences with prosodic parameters. Input: Bilingual audio and subtitles pair. Output: Aligned bilingual sentences annotated with prosodic features. Supports any language pair. Aligned at sentence level.
[]
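As a rough orientation, the three-stage pipeline behind this slide can be wired up as below. All names here are illustrative stubs under our own assumptions, not the authors' released code:

```python
from dataclasses import dataclass, field

@dataclass
class Sentence:
    text: str                                    # transcript recovered from the subtitles
    start: float                                 # segment start in the movie audio (s)
    end: float                                   # segment end in the movie audio (s)
    prosody: dict = field(default_factory=dict)  # stage-2 output per word

def segment(audio_path: str, srt_path: str) -> list:
    """Stage 1 (stub): cut the movie audio into sentence-level segments."""
    return []

def add_prosody(sentences: list) -> None:
    """Stage 2 (stub): attach ProsodyPro-style parameters to each sentence."""
    for s in sentences:
        s.prosody = {"means": []}

def align(src: list, tgt: list) -> list:
    """Stage 3 (stub): pair each source sentence with its dubbed counterpart."""
    return []

def build_parallel_corpus(audio1, srt1, audio2, srt2):
    lang1, lang2 = segment(audio1, srt1), segment(audio2, srt2)
    add_prosody(lang1)
    add_prosody(lang2)
    return align(lang1, lang2)
```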
GEM-SciDuet-train-74#paper-1171#slide-5
1171
GEM-SciDuet-train-74#paper-1171#slide-5
Stage 1 Sentence Segmentation
Use subtitle time-information to find script location in audio
Use subtitle time-information to find script location in audio
[]
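A self-contained sketch of the subtitle side of stage 1 (SRT parsing, speech-only cleanup, and merging entries into full sentences). The 0.5 s padding and the sentence-ending marks follow the paper; the forced-alignment and 0.5-confidence steps are omitted, and all helper names are ours:

```python
import re

TS = re.compile(r"(\d+):(\d+):(\d+)[,.](\d+)")
SENT_END = (".", "?", "!", ":", "...")

def to_sec(ts):
    """Convert an SRT timestamp like '00:01:02,500' to seconds."""
    h, m, s, ms = map(int, TS.match(ts.strip()).groups())
    return h * 3600 + m * 60 + s + ms / 1000.0

def parse_srt(srt_text):
    """Yield (start, end, text) per subtitle entry, padded by 0.5 s."""
    for block in re.split(r"\n\s*\n", srt_text.strip()):
        lines = [l for l in block.splitlines() if l.strip()]
        timing = next((l for l in lines if "-->" in l), None)
        if timing is None:
            continue
        start_ts, end_ts = timing.split("-->")[:2]
        text = " ".join(lines[lines.index(timing) + 1:])
        text = re.sub(r"<[^>]+>", "", text)          # formatting tags
        text = re.sub(r"\([^)]*\)", "", text)        # (laughter), (horn), ...
        text = re.sub(r"\b[A-Z]{2,}:\s*", "", text)  # speaker name markers
        text = text.replace("- ", " ").strip()       # speech dashes
        if text:
            yield max(0.0, to_sec(start_ts) - 0.5), to_sec(end_ts) + 0.5, text

def merge_into_sentences(entries):
    """Merge consecutive entries until a sentence-ending punctuation mark."""
    start, parts = None, []
    for s, e, text in entries:
        start = s if start is None else start
        parts.append(text)
        if text.endswith(SENT_END):
            yield start, e, " ".join(parts)
            start, parts = None, []
```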
GEM-SciDuet-train-74#paper-1171#slide-6
1171
GEM-SciDuet-train-74#paper-1171#slide-6
Stage 2 Prosodic Parameter Extraction
ProsodyPro library used for prosodic feature extraction
ProsodyPro library used for prosodic feature extraction
[]
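ProsodyPro is a Praat script that is normally run from the Praat GUI over a folder of paired .wav/.TextGrid files. Below is a hedged batch-style sketch that assumes a `praat` binary on PATH and that the script can be driven non-interactively; the exact script arguments and the output file naming are assumptions to check against the ProsodyPro documentation:

```python
import subprocess
from pathlib import Path

def run_prosodypro(script="prosodypro.praat", folder="sentences/"):
    """Drive Praat in batch mode over a folder of .wav/.TextGrid pairs."""
    subprocess.run(["praat", "--run", script, folder], check=True)

def collect_means(folder="sentences/"):
    """Gather the per-word mean/max/min f0, intensity and velocity tables."""
    tables = {}
    for path in Path(folder).glob("*.means"):  # naming assumed, not verified
        rows = [line.split("\t") for line in path.read_text().splitlines()]
        tables[path.stem] = rows               # header row + one row per word
    return tables
```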
GEM-SciDuet-train-74#paper-1171#slide-7
1171
GEM-SciDuet-train-74#paper-1171#slide-7
Stage 3 Parallel Sentence Alignment
Goal: Given sentence s1 in lang. 1, find the corresponding sentence s2 in lang. 2. Translation via Yandex Translate; similarity via the Meteor library (Denkowski and Lavie, 2014)
Goal: Given sentence s1 in lang. 1, find the corresponding sentence s2 in lang. 2. Translation via Yandex Translate; similarity via the Meteor library (Denkowski and Lavie, 2014)
[]
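The stage-3 search can be sketched as below. Here `translate` and `similarity` are caller-supplied stand-ins for the Yandex Translate API and the Meteor scorer (neither is bundled), and since the paper does not state its similarity threshold, the default is an arbitrary placeholder:

```python
def align_sentences(src, tgt, translate, similarity, window=5.0, threshold=0.3):
    """src, tgt: lists of (start, end, text) sorted by start time.

    For each source sentence s1, candidate targets are those starting in
    [start(s1) - window, start(s1) + window]; the candidate scoring best
    against translate(s1), if above the threshold, becomes its pair."""
    pairs = []
    for s in src:
        cands = [t for t in tgt if s[0] - window <= t[0] <= s[0] + window]
        if not cands:
            continue
        ref = translate(s[2])                       # s1 rendered in language 2
        scored = [(similarity(ref, t[2]), t) for t in cands]
        score, best = max(scored, key=lambda st: st[0])
        if score >= threshold:
            pairs.append((s, best))
    return pairs
```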
GEM-SciDuet-train-74#paper-1171#slide-8
1171
GEM-SciDuet-train-74#paper-1171#slide-8
Applying the Methodology
The Man Who Knew Too Much (1956). Films originally in English, dubbed to Spanish. Audio extracted from DVD using Libav. English and Spanish subtitles obtained from opensubtitles.
The Man Who Knew Too Much (1956). Films originally in English, dubbed to Spanish. Audio extracted from DVD using Libav. English and Spanish subtitles obtained from opensubtitles.
[]
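A minimal sketch of the sliding-window sentence alignment described in the paper content above. Assumptions: translate() is an identity placeholder for the Yandex Translate API call, similarity() is a crude token-overlap stand-in for the Meteor score, and the threshold default is invented (the paper only says "a certain threshold").

from dataclasses import dataclass

@dataclass
class Sentence:
    text: str
    start: float  # sentence start time in seconds
    end: float    # sentence end time in seconds

def translate(text):
    # Identity placeholder; a real pipeline would call an MT API here
    # (the paper uses the Yandex Translate API).
    return text

def similarity(hyp, ref):
    # Token-overlap (Jaccard) stand-in for the Meteor similarity measure.
    h, r = set(hyp.lower().split()), set(ref.lower().split())
    return len(h & r) / max(len(h | r), 1)

def align(src_sents, tgt_sents, window=5.0, threshold=0.5):
    # For each source sentence s1, consider target sentences that start
    # within +/- `window` seconds of s1's start time, translate s1, and
    # keep the most similar candidate if it clears the threshold.
    pairs = []
    for s1 in src_sents:
        candidates = [s2 for s2 in tgt_sents
                      if s1.start - window <= s2.start <= s1.start + window]
        if not candidates:
            continue
        mt = translate(s1.text)
        best = max(candidates, key=lambda s2: similarity(mt, s2.text))
        if similarity(mt, best.text) >= threshold:
            pairs.append((s1, best))
    return pairs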
GEM-SciDuet-train-74#paper-1171#slide-10
1171
GEM-SciDuet-train-74#paper-1171#slide-10
Shortcomings
Copyright restrictions on distributing the corpus. Main bottlenecks in capturing data (processing The Man Who Knew Too Much): 15% of sentences lost in the original language; 49% of sentences lost in the dubbed language. Translation differences between dubbed audio and subtitles.
Copyright restrictions on distributing the corpus. Main bottlenecks in capturing data (processing The Man Who Knew Too Much): 15% of sentences lost in the original language; 49% of sentences lost in the dubbed language. Translation differences between dubbed audio and subtitles.
[]
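The two-pass segment merge from the segmentation stage (joining cut segments until one ends with a sentence-ending punctuation mark) can be sketched as follows. Assumptions: segments arrive in temporal order as (text, start, end) tuples, and the punctuation set is the one the paper lists.

ENDINGS = ('.', '?', '!', ':', '...')  # sentence-ending marks from the paper

def merge_segments(segments):
    # segments: list of (text, start, end) in temporal order.
    # Returns full sentences, each as a (text, start, end) tuple.
    merged, buf = [], None
    for text, start, end in segments:
        if buf is None:
            buf = [text, start, end]
        else:
            buf[0] += ' ' + text  # extend the still-open sentence
            buf[2] = end
        if buf[0].rstrip().endswith(ENDINGS):
            merged.append(tuple(buf))
            buf = None
    if buf is not None:  # trailing fragment with no ending mark
        merged.append(tuple(buf))
    return merged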
GEM-SciDuet-train-74#paper-1171#slide-11
1171
GEM-SciDuet-train-74#paper-1171#slide-11
Sub dub differences
Extracted | English (Sub + audio) | Spanish Sub | Spanish Dub
yes | Daddy, you're sure I've never been to Africa before? | Papa, estas seguro de que nunca estuve antes en Africa? | Papa, estas seguro que no habiamos estado ya en Africa?
no | It looks familiar. | Me parece conocido. | Todo esto ya lo conozco.
yes | You saw the same scenery last summer driving to Las Vegas. | Viste el mismo panorama el verano pasado cuando manejamos a Las Vegas. | Vimos un paisaje muy parecido cuando fuimos a Las Vegas
yes | Where Daddy lost all that money at the crap | Claro, donde papa perdio todo ese dinero en la mesa | Ah claro, donde papa perdio toda el dinero en la mesa de juego?
no | Hey, look! | Miren! | Hey mirad!
no | A Camel. | Un camello! | Un camello!
yes | Of course this isn't really Africa, honey. | Y esto no es realmente Africa. | Realmente esto no es Africa, carino.
yes | It's the French Morocco. | Es el Marruecos frances. | Es el Marruecos Frances.
yes | Well, it's northern Africa. | Es Africa del Norte. | Bueno, es Africa del Norte.
yes | Still seems like Las Vegas. | Aun se parece a Las Vegas. | Pues, sigue pareciendose a Las Vegas.
Extracted | English (Sub + audio) | Spanish Sub | Spanish Dub
yes | Daddy, you're sure I've never been to Africa before? | Papa, estas seguro de que nunca estuve antes en Africa? | Papa, estas seguro que no habiamos estado ya en Africa?
no | It looks familiar. | Me parece conocido. | Todo esto ya lo conozco.
yes | You saw the same scenery last summer driving to Las Vegas. | Viste el mismo panorama el verano pasado cuando manejamos a Las Vegas. | Vimos un paisaje muy parecido cuando fuimos a Las Vegas
yes | Where Daddy lost all that money at the crap | Claro, donde papa perdio todo ese dinero en la mesa | Ah claro, donde papa perdio toda el dinero en la mesa de juego?
no | Hey, look! | Miren! | Hey mirad!
no | A Camel. | Un camello! | Un camello!
yes | Of course this isn't really Africa, honey. | Y esto no es realmente Africa. | Realmente esto no es Africa, carino.
yes | It's the French Morocco. | Es el Marruecos frances. | Es el Marruecos Frances.
yes | Well, it's northern Africa. | Es Africa del Norte. | Bueno, es Africa del Norte.
yes | Still seems like Las Vegas. | Aun se parece a Las Vegas. | Pues, sigue pareciendose a Las Vegas.
[]
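For the audio cuts, the paper only states that Libav is used; one plausible way to invoke it from Python, assuming the standard avconv -ss/-t seek-and-duration flags (the project's actual invocation may differ):

import subprocess

def cut_audio(src_wav, start, end, out_wav):
    # Cut [start, end] seconds from src_wav into out_wav using Libav's avconv.
    subprocess.run(
        ["avconv", "-y",
         "-i", src_wav,
         "-ss", f"{start:.3f}",       # seek to the sentence start
         "-t", f"{end - start:.3f}",  # sentence duration
         out_wav],
        check=True)

# e.g. cut_audio("movie_en.wav", 12.48, 15.91, "sent_0001_en.wav")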
GEM-SciDuet-train-74#paper-1171#slide-12
1171
GEM-SciDuet-train-74#paper-1171#slide-12
Conclusions
Automatic building of multimodal bilingual corpora from dubbed media. Conversational speech, useful for speech-to-speech translation applications. Works on any language pair (with a trained acoustic model). No further training needed. Code available at http://www.github.com/TalnUPF/movie2parallelDB
Automatic building of multimodal bilingual corpora from dubbed media. Conversational speech, useful for speech-to-speech translation applications. Works on any language pair (with a trained acoustic model). No further training needed. Code available at http://www.github.com/TalnUPF/movie2parallelDB
[]
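To illustrate the shape of the per-word prosodic parameters listed in Table 2 (the "means" file: mean, max and min f0 per word), here is a toy re-computation from an f0 track plus TextGrid word boundaries; this mimics the output format only, not ProsodyPro's algorithms.

def word_f0_stats(f0_track, words):
    # f0_track: list of (time_sec, f0_hz) samples, unvoiced frames omitted.
    # words: list of (label, start_sec, end_sec) from the TextGrid.
    # Returns one (label, mean_f0, max_f0, min_f0) row per voiced word.
    rows = []
    for label, start, end in words:
        vals = [f0 for t, f0 in f0_track if start <= t < end]
        if vals:
            rows.append((label, sum(vals) / len(vals), max(vals), min(vals)))
    return rows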
GEM-SciDuet-train-74#paper-1171#slide-13
1171
Automatic Extraction of Parallel Speech Corpora from Dubbed Movies
This paper presents a methodology to extract parallel speech corpora based on any language pair from dubbed movies, together with an application framework in which some corresponding prosodic parameters are extracted. The obtained parallel corpora are especially suitable for speech-to-speech translation applications when a prosody transfer between source and target languages is desired.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94 ], "paper_content_text": [ "Introduction The availability of large parallel corpora is one of the major challenges in developing translation systems.", "Bilingual corpora, which are needed to train statistical translation models, are harder to acquire than monolingual corpora since they presuppose the implication of labour in translation or interpretation.", "Working in the speech domain introduces even more difficulties since interpretations are not sufficient to capture the paralinguistic aspects of speech.", "Several attempts have been recently made to acquire spoken parallel corpora of considerable size.", "However, these corpora either do not reflect the prosodic aspects in the interpreted speech or do not carry the traits of natural speech.", "Or they simply do not align well the source and the target language sides.", "To account for this deficit, we propose to exploit dubbed movies where expressive speech is readily available in multiple languages and their corresponding aligned scripts are easily accessible through subtitles.", "Movies and TV shows have been a good resource for collecting parallel bilingual data because of the availability and open access of subtitles in different languages.", "With 1850 bitexts of 65 languages, the OpenSubtitles project (Lison and Tiedemann, 2016) is the largest re-source of translated movie subtitles compiled so far.", "The time information in subtitles makes it easy to align sentences of different languages since timing is correlated to the same audio (Itamar and Itai, 2008) .", "In the presence of multiple aligned audio for the same movie, the alignment can be extended to obtain parallel speech corpora.", "Popular movies, TV shows and documentaries are released with dubbed audio in many countries.", "Dubbing requires the voice acting of the original speech in another language.", "Because of this, the dubbed speech carries more or less the same paralinguistic aspects of the original speech.", "In what follows, we describe our methodology for the extraction of a speech parallel corpus based on any language pair from dubbed movies.", "Unlike Tsiartas et al.", "(2011) , who propose a method based on machine learning for automatically extracting bilingual audio-subtitle pairs from movies, we only need raw movie data, and do not require any training.", "Moreover, our methodology ensures the fulfilment of the following requirements: (a) it is easily expandable, (b) it supports multiple pairs of languages, (c) it can handle any domain and speech style, and (d) it delivers a parallel spoken language corpus with annotated expressive speech.", "\"Expressive speech\" annotation means that the corpus is prosodically rich, which is essential to be able to deal with non-neutral speech emotions, as done in increasingly popular speech-to-speech translation applications that try to cope with prosody transfer between source and target utterances (Agüero et al., 2006; Sridhar et al., 2008; Anumanchipalli et al., 2012) .", "The remainder of the paper is structured as follows.", "Section 2 reviews the main multilingual parallel speech corpora available to the research 
community.", "Section 3 presents the methodology used in the current paper, and Section 4 discusses the current state of the obtained parallel corpora so far.", "In Section 5, finally, some conclusions are drawn and some aspects of our future work in the context of parallel speech corpora are mentioned.", "Available Parallel Speech Corpora As already mentioned above, several attempts have been made to compile large spoken parallel corpora.", "Such corpora of considerable size are, e.g., the EPIC corpus (Bendazzoli and Sandrelli, 2005) , the EMIME Bilingual Database (Wester, 2010) , and the Microsoft Speech Language Translation (MSLT) corpus (Federmann and Lewis, 2016) .", "All of them have been manually compiled, and all of them show one or several shortcomings.", "The EPIC corpus, which has been compiled from speeches from the European Parliament and their interpretations, falls short in reflecting the prosodic aspects in the interpreted speech.", "The EMIME database is a compilation of prompted speeches and does not capture the natural spoken language traits.", "The MSLT corpus has been collected in bilingual conversation settings, but there is no one-to-one alignment between sentences in different languages.", "A summary of the available bilingual speech corpora is listed in Table 1 .", "Methodology Our multimodal parallel corpus creation consists of three main stages: (1) movie sentence segmentation, (2) prosodic parameter extraction, and (3) parallel sentence alignment.", "The first and second stages can be seen as a monolingual data creation, as they take the audio and subtitle pairs as input in one language, and output speech/text/prosodic parameters at the sentence level.", "The resulting monolingual data from stages 1 and 2 are fed into stage 3, where corresponding sentences are aligned and reordered to create the corresponding parallel data.", "A general overview of the system is presented in Figure 1 .", "Let us discuss each of these stages in turn.", "Segmentation of movie audio into sentences This stage involves the extraction of audio and complete sentences from the original audio and the corresponding subtitles of the movie.", "For subtitles, the SubRip text file format 1 (SRT) is accepted.", "Each subtitle entry contains the following information: (i) start time, (ii) end time, and (iii) text of the speech spoken at that time in the movie.", "The subtitle entries do not necessarily correspond to sentences: a subtitle entry may include more than one sentence, and a sentence can spread over many subtitle entries; consider an example portion of a subtitle: The sentence segmentation stage starts with a preprocessing step in which elements that do not correspond to speech are removed.", "These include: Speaker name markers (e.g., JAMES: .", ".", ".", "), text formatting tags, non-verbal information (laughter, horn, etc.)", "and speech dashes.", "Audio is initially segmented according to the timestamps in subtitle entries, with extra 0.5 seconds at each end.", "Then, each audio segment and its respective subtitle text are sent to the speech aligner software (Vocapia Scribe 2 ) to detect word boundaries.", "This pre-segmentation helps to detect the times of the words that end with a sentence-ending punctuation mark ('.", "', '?", "', '!", "', ':', '...').", "Average word boundary confidence score of the word alignment is used to determine whether the sentence will be extracted successfully or not.", "If the confidence score is above a threshold of 0.5, the initial segment is cut 
from occurrences of sentence-endings.", "In a second pass, cut segments that do not end with a sentenceending punctuation mark are merged with the subsequent segments to form full sentences.", "We used Libav 3 library to perform the audio cuts.", "Prosodic parameter extraction This stage involves prosodic parameter extraction for each sentence segment detected in stage 1.", "The ProsodyPro library (Xu, 2013 ) (a script developed for the Praat software (Boersma and Weenink, 2001) ) is used to extract prosodic features from speech.", "As input, ProsodyPro takes the audio of Corpus Languages Speech style EPIC English, Italian, Spanish spontaneous/interpreted MSLT English, French, German constrained conversations EMIME Finnish/English, German/English prompted EMIME Mandarin Mandarin/English prompted MDA (Almeman et al., 2013) Four Arabic dialects prompted Farsi-English (Melvin et al., 2004) Farsi/English read/semi-spontaneous an utterance and a TextGrid file containing word boundaries and outputs a set of objective measurements suitable for statistical analysis.", "We run ProsodyPro for each audio and TextGrid pair of sentences to generate the prosodic analysis files.", "See Table 2 for the list of analyses performed by ProsodyPro (Information taken from ProsodyPro webpage 4 ).", "The TextGrid file with word boundaries is produced by sending the sentence audio and transcript to the word-aligner software and then converting the alignment information in XML into TextGrid format.", "Having word boundaries makes it possible to align continuous prosodic parameters (such as pitch contour) with the words in the sentence.", "Parallel sentence alignment This stage involves the creation of the parallel data from two monolingual data obtained from different audio and subtitle pairs of the same movie.", "The goal is to find the corresponding sentence s 2 in language 2, given a sentence s 1 in language 1.", "For each s 1 with timestamps (s s 1 , e s 1 ), s 2 is searched within a sliding window among sentences that start in the time interval [s s 1 -5, s s 1 + 5].", "Among candidate sentences within the range, the most similar to s 1 is found by first translating s 1 to language 2 and then choosing the {s 1 , s 2 } pair that gives the best translation similarity measure above a certain threshold.", "For translation, the Yandex Translate API 5 and for similarity measure the Meteor library (Denkowski and Lavie, 2014) is used.", "Obtained Corpus and Discussion We have tested our methodology on three movies, which we retrieved from the University Library: The Man Who Knew Too Much (1956), Slow West (2015) and The Perfect Guy (2015).", "The movies are originally in English, but also have dubbed Spanish audio.", "English and Spanish subtitles were ProsodyPro output file Description rawf0 Raw f0 contour in Hz f0 Smoothed f0 with trimming algorithm (Hz) smoothf0 Smoothed f0 with triangular window (Hz) semitonef0 f0 contour in semitones samplef0 f0 values at fixed time intervals (Hz) f0velocity First derivative of f0 means f0, intensity and velocity parameters (mean, max, min) for each word normtimef0 Constant number of f0 values for each word normtimeIntensity Constant number of intensity values for each word Table 2 : Some of the files generated by ProsodyPro.", "acquired from the opensubtitles webpage 6 .", "At the time of the submission, we have automatically extracted 2603 sentences in English and 1963 sentences in Spanish summing up to 80 and 49 minutes of audio respectively and annotated with prosodic 
parameters.", "1328 of these sentences were aligned to create our current parallel bilingual corpora.", "We are in the process of expanding our dataset.", "Due to the copyright on the movies, we are unable to distribute the corpus that we extracted.", "However, using our software, it is easy for any researcher to compile a corpus on their own.", "For testing purposes, English and Spanish subtitles and audio of a small portion of the movie The Man Who Knew Too Much, as well as the parallel data extracted with this methodology are made available on the github page of the project.", "Table 3 lists the number of monolingual and 6 https://www.opensubtitles.org/ parallel sentences obtained from the three movies so far.", "We observe that the number of Spanish sentences extracted in stage 2 is sometimes lower than the number of English sentences.", "This is mainly because of the translation difference between the Spanish subtitles and the dubbed Spanish audio.", "Subtitles in languages other than the original language of the movie do not always correspond with the transcript used in dubbing.", "If the audio and the text obtained from the subtitle do not match, the word aligner software performs poorly and that sentence is skipped.", "This results in fewer number of extracted sentences in dubbed languages of the movie.", "Table 4 shows more in detail the effect of this.", "Poor audio-text alignment results in loss of 15.0% of the sentences in original audio, whereas in dubbed audio this loss increases to 49.6%.", "Movie Another major effect on detection of sentences is the background noise.", "This again interferes with the performance of the word aligner software.", "But since samples with less background noise is desired for a speech database, elimination of these samples is not considered as a problem.", "Conclusions and Future Work We have presented a methodology for the extraction of multimodal speech, text and prosody parallel corpora from dubbed movies.", "Movies contain large samples of conversational speech, which makes the obtained corpus especially useful for speech-to-speech translation applications.", "It is also useful for other research fields such as large comparative linguistic and prosodic studies.", "As long as we have access to a matching pair of audio and subtitles of movies, the corpora obtained can be extended as a multilingual speech parallel corpora adaptable to any language pair.", "Moreover, it is an open-source tool and it can be adapted to any other prosodic feature extraction module in order to obtain a customized prosody parallel corpus for any specific application.", "The code to extract multilingual parallel corpora together with a processed sample movie excerpt is open source and available to use 7 under the GNU General Public License 8 .", "As future work, we plan to extend our corpus in size and make the parallel prosodic parameters available online.", "We also plan to replace the proprietary word aligner tool we are using with an open source alternative with better precision and speed." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "4", "5" ], "paper_header_content": [ "Introduction", "Available Parallel Speech Corpora", "Methodology", "Segmentation of movie audio into sentences", "Prosodic parameter extraction", "Parallel sentence alignment", "Obtained Corpus and Discussion", "Conclusions and Future Work" ] }
GEM-SciDuet-train-74#paper-1171#slide-13
Future Work
Switch from proprietary audio-text aligner software to open source
  E.g. p2fa (based on CMU Sphinx ASR system)
XML based structure as corpus metadata
  Instead of directory structure only
Identifying the speaker of each sentence
Extend and publish the corpus
  Depending on agreement with Copyright holders
Switch from proprietary audio-text aligner software to open source
  E.g. p2fa (based on CMU Sphinx ASR system)
XML based structure as corpus metadata
  Instead of directory structure only
Identifying the speaker of each sentence
Extend and publish the corpus
  Depending on agreement with Copyright holders
[]
GEM-SciDuet-train-74#paper-1171#slide-15
1171
Automatic Extraction of Parallel Speech Corpora from Dubbed Movies
This paper presents a methodology for extracting parallel speech corpora for any language pair from dubbed movies, together with an application framework in which some corresponding prosodic parameters are extracted. The obtained parallel corpora are especially suitable for speech-to-speech translation applications when prosody transfer between source and target languages is desired.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94 ], "paper_content_text": [ "Introduction The availability of large parallel corpora is one of the major challenges in developing translation systems.", "Bilingual corpora, which are needed to train statistical translation models, are harder to acquire than monolingual corpora since they presuppose the implication of labour in translation or interpretation.", "Working in the speech domain introduces even more difficulties since interpretations are not sufficient to capture the paralinguistic aspects of speech.", "Several attempts have been recently made to acquire spoken parallel corpora of considerable size.", "However, these corpora either do not reflect the prosodic aspects in the interpreted speech or do not carry the traits of natural speech.", "Or they simply do not align well the source and the target language sides.", "To account for this deficit, we propose to exploit dubbed movies where expressive speech is readily available in multiple languages and their corresponding aligned scripts are easily accessible through subtitles.", "Movies and TV shows have been a good resource for collecting parallel bilingual data because of the availability and open access of subtitles in different languages.", "With 1850 bitexts of 65 languages, the OpenSubtitles project (Lison and Tiedemann, 2016) is the largest re-source of translated movie subtitles compiled so far.", "The time information in subtitles makes it easy to align sentences of different languages since timing is correlated to the same audio (Itamar and Itai, 2008) .", "In the presence of multiple aligned audio for the same movie, the alignment can be extended to obtain parallel speech corpora.", "Popular movies, TV shows and documentaries are released with dubbed audio in many countries.", "Dubbing requires the voice acting of the original speech in another language.", "Because of this, the dubbed speech carries more or less the same paralinguistic aspects of the original speech.", "In what follows, we describe our methodology for the extraction of a speech parallel corpus based on any language pair from dubbed movies.", "Unlike Tsiartas et al.", "(2011) , who propose a method based on machine learning for automatically extracting bilingual audio-subtitle pairs from movies, we only need raw movie data, and do not require any training.", "Moreover, our methodology ensures the fulfilment of the following requirements: (a) it is easily expandable, (b) it supports multiple pairs of languages, (c) it can handle any domain and speech style, and (d) it delivers a parallel spoken language corpus with annotated expressive speech.", "\"Expressive speech\" annotation means that the corpus is prosodically rich, which is essential to be able to deal with non-neutral speech emotions, as done in increasingly popular speech-to-speech translation applications that try to cope with prosody transfer between source and target utterances (Agüero et al., 2006; Sridhar et al., 2008; Anumanchipalli et al., 2012) .", "The remainder of the paper is structured as follows.", "Section 2 reviews the main multilingual parallel speech corpora available to the research 
community.", "Section 3 presents the methodology used in the current paper, and Section 4 discusses the current state of the obtained parallel corpora so far.", "In Section 5, finally, some conclusions are drawn and some aspects of our future work in the context of parallel speech corpora are mentioned.", "Available Parallel Speech Corpora As already mentioned above, several attempts have been made to compile large spoken parallel corpora.", "Such corpora of considerable size are, e.g., the EPIC corpus (Bendazzoli and Sandrelli, 2005) , the EMIME Bilingual Database (Wester, 2010) , and the Microsoft Speech Language Translation (MSLT) corpus (Federmann and Lewis, 2016) .", "All of them have been manually compiled, and all of them show one or several shortcomings.", "The EPIC corpus, which has been compiled from speeches from the European Parliament and their interpretations, falls short in reflecting the prosodic aspects in the interpreted speech.", "The EMIME database is a compilation of prompted speeches and does not capture the natural spoken language traits.", "The MSLT corpus has been collected in bilingual conversation settings, but there is no one-to-one alignment between sentences in different languages.", "A summary of the available bilingual speech corpora is listed in Table 1 .", "Methodology Our multimodal parallel corpus creation consists of three main stages: (1) movie sentence segmentation, (2) prosodic parameter extraction, and (3) parallel sentence alignment.", "The first and second stages can be seen as a monolingual data creation, as they take the audio and subtitle pairs as input in one language, and output speech/text/prosodic parameters at the sentence level.", "The resulting monolingual data from stages 1 and 2 are fed into stage 3, where corresponding sentences are aligned and reordered to create the corresponding parallel data.", "A general overview of the system is presented in Figure 1 .", "Let us discuss each of these stages in turn.", "Segmentation of movie audio into sentences This stage involves the extraction of audio and complete sentences from the original audio and the corresponding subtitles of the movie.", "For subtitles, the SubRip text file format 1 (SRT) is accepted.", "Each subtitle entry contains the following information: (i) start time, (ii) end time, and (iii) text of the speech spoken at that time in the movie.", "The subtitle entries do not necessarily correspond to sentences: a subtitle entry may include more than one sentence, and a sentence can spread over many subtitle entries; consider an example portion of a subtitle: The sentence segmentation stage starts with a preprocessing step in which elements that do not correspond to speech are removed.", "These include: Speaker name markers (e.g., JAMES: .", ".", ".", "), text formatting tags, non-verbal information (laughter, horn, etc.)", "and speech dashes.", "Audio is initially segmented according to the timestamps in subtitle entries, with extra 0.5 seconds at each end.", "Then, each audio segment and its respective subtitle text are sent to the speech aligner software (Vocapia Scribe 2 ) to detect word boundaries.", "This pre-segmentation helps to detect the times of the words that end with a sentence-ending punctuation mark ('.", "', '?", "', '!", "', ':', '...').", "Average word boundary confidence score of the word alignment is used to determine whether the sentence will be extracted successfully or not.", "If the confidence score is above a threshold of 0.5, the initial segment is cut 
from occurrences of sentence-endings.", "In a second pass, cut segments that do not end with a sentenceending punctuation mark are merged with the subsequent segments to form full sentences.", "We used Libav 3 library to perform the audio cuts.", "Prosodic parameter extraction This stage involves prosodic parameter extraction for each sentence segment detected in stage 1.", "The ProsodyPro library (Xu, 2013 ) (a script developed for the Praat software (Boersma and Weenink, 2001) ) is used to extract prosodic features from speech.", "As input, ProsodyPro takes the audio of Corpus Languages Speech style EPIC English, Italian, Spanish spontaneous/interpreted MSLT English, French, German constrained conversations EMIME Finnish/English, German/English prompted EMIME Mandarin Mandarin/English prompted MDA (Almeman et al., 2013) Four Arabic dialects prompted Farsi-English (Melvin et al., 2004) Farsi/English read/semi-spontaneous an utterance and a TextGrid file containing word boundaries and outputs a set of objective measurements suitable for statistical analysis.", "We run ProsodyPro for each audio and TextGrid pair of sentences to generate the prosodic analysis files.", "See Table 2 for the list of analyses performed by ProsodyPro (Information taken from ProsodyPro webpage 4 ).", "The TextGrid file with word boundaries is produced by sending the sentence audio and transcript to the word-aligner software and then converting the alignment information in XML into TextGrid format.", "Having word boundaries makes it possible to align continuous prosodic parameters (such as pitch contour) with the words in the sentence.", "Parallel sentence alignment This stage involves the creation of the parallel data from two monolingual data obtained from different audio and subtitle pairs of the same movie.", "The goal is to find the corresponding sentence s 2 in language 2, given a sentence s 1 in language 1.", "For each s 1 with timestamps (s s 1 , e s 1 ), s 2 is searched within a sliding window among sentences that start in the time interval [s s 1 -5, s s 1 + 5].", "Among candidate sentences within the range, the most similar to s 1 is found by first translating s 1 to language 2 and then choosing the {s 1 , s 2 } pair that gives the best translation similarity measure above a certain threshold.", "For translation, the Yandex Translate API 5 and for similarity measure the Meteor library (Denkowski and Lavie, 2014) is used.", "Obtained Corpus and Discussion We have tested our methodology on three movies, which we retrieved from the University Library: The Man Who Knew Too Much (1956), Slow West (2015) and The Perfect Guy (2015).", "The movies are originally in English, but also have dubbed Spanish audio.", "English and Spanish subtitles were ProsodyPro output file Description rawf0 Raw f0 contour in Hz f0 Smoothed f0 with trimming algorithm (Hz) smoothf0 Smoothed f0 with triangular window (Hz) semitonef0 f0 contour in semitones samplef0 f0 values at fixed time intervals (Hz) f0velocity First derivative of f0 means f0, intensity and velocity parameters (mean, max, min) for each word normtimef0 Constant number of f0 values for each word normtimeIntensity Constant number of intensity values for each word Table 2 : Some of the files generated by ProsodyPro.", "acquired from the opensubtitles webpage 6 .", "At the time of the submission, we have automatically extracted 2603 sentences in English and 1963 sentences in Spanish summing up to 80 and 49 minutes of audio respectively and annotated with prosodic 
parameters.", "1328 of these sentences were aligned to create our current parallel bilingual corpora.", "We are in the process of expanding our dataset.", "Due to the copyright on the movies, we are unable to distribute the corpus that we extracted.", "However, using our software, it is easy for any researcher to compile a corpus on their own.", "For testing purposes, English and Spanish subtitles and audio of a small portion of the movie The Man Who Knew Too Much, as well as the parallel data extracted with this methodology are made available on the github page of the project.", "Table 3 lists the number of monolingual and 6 https://www.opensubtitles.org/ parallel sentences obtained from the three movies so far.", "We observe that the number of Spanish sentences extracted in stage 2 is sometimes lower than the number of English sentences.", "This is mainly because of the translation difference between the Spanish subtitles and the dubbed Spanish audio.", "Subtitles in languages other than the original language of the movie do not always correspond with the transcript used in dubbing.", "If the audio and the text obtained from the subtitle do not match, the word aligner software performs poorly and that sentence is skipped.", "This results in fewer number of extracted sentences in dubbed languages of the movie.", "Table 4 shows more in detail the effect of this.", "Poor audio-text alignment results in loss of 15.0% of the sentences in original audio, whereas in dubbed audio this loss increases to 49.6%.", "Movie Another major effect on detection of sentences is the background noise.", "This again interferes with the performance of the word aligner software.", "But since samples with less background noise is desired for a speech database, elimination of these samples is not considered as a problem.", "Conclusions and Future Work We have presented a methodology for the extraction of multimodal speech, text and prosody parallel corpora from dubbed movies.", "Movies contain large samples of conversational speech, which makes the obtained corpus especially useful for speech-to-speech translation applications.", "It is also useful for other research fields such as large comparative linguistic and prosodic studies.", "As long as we have access to a matching pair of audio and subtitles of movies, the corpora obtained can be extended as a multilingual speech parallel corpora adaptable to any language pair.", "Moreover, it is an open-source tool and it can be adapted to any other prosodic feature extraction module in order to obtain a customized prosody parallel corpus for any specific application.", "The code to extract multilingual parallel corpora together with a processed sample movie excerpt is open source and available to use 7 under the GNU General Public License 8 .", "As future work, we plan to extend our corpus in size and make the parallel prosodic parameters available online.", "We also plan to replace the proprietary word aligner tool we are using with an open source alternative with better precision and speed." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "4", "5" ], "paper_header_content": [ "Introduction", "Available Parallel Speech Corpora", "Methodology", "Segmentation of movie audio into sentences", "Prosodic parameter extraction", "Parallel sentence alignment", "Obtained Corpus and Discussion", "Conclusions and Future Work" ] }
GEM-SciDuet-train-74#paper-1171#slide-15
Appendix B: ProsodyPro Files
Some of the files generated by ProsodyPro
Some of the files generated by ProsodyPro
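As described in the paper content, ProsodyPro consumes a TextGrid with word boundaries, produced by converting the word aligner's XML output. A minimal sketch of that conversion endpoint, writing a single "words" IntervalTier in Praat's long TextGrid format from sorted, non-overlapping (label, start, end) triples (the XML parsing step upstream is omitted):

def write_textgrid(words, path):
    """words: sorted, non-overlapping (label, start, end) triples.
    Pads gaps with empty intervals so the tier tiles [0, xmax],
    as Praat requires for a valid IntervalTier."""
    xmax = words[-1][2]
    intervals, t = [], 0.0
    for label, start, end in words:
        if start > t:
            intervals.append(("", t, start))   # silence / gap
        intervals.append((label, start, end))
        t = end
    lines = ['File type = "ooTextFile"', 'Object class = "TextGrid"', "",
             "xmin = 0", f"xmax = {xmax}", "tiers? <exists>", "size = 1",
             "item []:", "    item [1]:", '        class = "IntervalTier"',
             '        name = "words"', "        xmin = 0",
             f"        xmax = {xmax}",
             f"        intervals: size = {len(intervals)}"]
    for i, (label, start, end) in enumerate(intervals, 1):
        lines += [f"        intervals [{i}]:",
                  f"            xmin = {start}",
                  f"            xmax = {end}",
                  f'            text = "{label}"']
    with open(path, "w", encoding="utf-8") as f:
        f.write("\n".join(lines) + "\n")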
[]
GEM-SciDuet-train-75#paper-1188#slide-2
1188
Self-disclosure topic model for classifying and analyzing Twitter conversations
Self-disclosure, the act of revealing oneself to others, is an important social behavior that strengthens interpersonal relationships and increases social support. Although there are many social science studies of self-disclosure, they are based on manual coding of small datasets and questionnaires. We conduct a computational analysis of self-disclosure with a large dataset of naturally-occurring conversations, a semi-supervised machine learning algorithm, and a computational analysis of the effects of self-disclosure on subsequent conversations. We use a longitudinal dataset of 17 million tweets, all of which occurred in conversations that consist of five or more tweets directly replying to the previous tweet, and from dyads with twenty or more conversations each. We develop the self-disclosure topic model (SDTM), a variant of latent Dirichlet allocation (LDA), for automatically classifying the level of self-disclosure for each tweet. We take the results of SDTM and analyze the effects of self-disclosure on subsequent conversations. Our model significantly outperforms several comparable methods on classifying the level of self-disclosure, and the analysis of the longitudinal data using SDTM uncovers a significant and positive correlation between self-disclosure and conversation frequency and length.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238 ], "paper_content_text": [ "Introduction Self-disclosure is an important and pervasive social behavior.", "People disclose personal information about themselves to improve and maintain * This work was done when JinYeong Bak was a visiting student at Microsoft Research, Beijing, China.", "relationships (Jourard, 1971; Joinson and Paine, 2007) .", "A common instance of self-disclosure is the start of a conversation with an exchange of names and additional self-introductions.", "Another example of self-disclosure, shown in Figure 1c , where the information disclosed about a family member's serious illness, is much more personal than the exchange of names.", "In this paper, we seek to understand this important social behavior using a large-scale Twitter conversation data, automatically classifying the level of self-disclosure using machine learning and correlating the patterns with conversational behaviors which can serve as proxies for measuring intimacy between two conversational partners.", "Twitter conversation data, explained in more detail in section 4.1, enable an extremely large scale study of naturally-occurring self-disclosure behavior, compared to traditional social science studies.", "One challenge of such large scale study, though, remains in the lack of labeled groundtruth data of self-disclosure level.", "That is, naturally-occurring Twitter conversations do not come tagged with the level of self-disclosure in each conversation.", "To overcome that challenge, we propose a semi-supervised machine learning approach using probabilistic topic modeling.", "Our self-disclosure topic model (SDTM) assumes that self-disclosure behavior can be modeled using a combination of simple linguistic features (e.g., pronouns) with automatically discovered semantic themes (i.e., topics).", "For instance, an utterance \"I am finally through with this disastrous relationship\" uses a first-person pronoun and contains a topic about personal relationships.", "In comparison with various other models, SDTM shows the highest accuracy, and the resulting conversation frequency and length patterns on self-disclosure are shown different over time.", "Our contributions to the research community include the following: • We present key features and prior knowledge for identifying self-disclosure level, and show relevance of it with experiment results (Sec.", "2).", "• We present a topic model that explicitly includes the 
level of self-disclosure in a conversation using linguistic features and the latent semantic topics (Sec.", "3).", "• We collect a large dataset of Twitter conversations over three years and annotate a small subset with self-disclosure level (Sec.", "4).", "• We compare the classification accuracy of SDTM with other models and show that it performs the best (Sec.", "5).", "• We correlate the self-disclosure patterns and conversation behaviors to show that there is significant relationship over time (Sec.", "6).", "Self-Disclosure In this section, we look at social science literature for definition of the levels of self-disclosure.", "Using that definition, we devise an approach to automatically identify the levels of self-disclosure in a large corpus of OSN conversations.", "We discuss three approaches, first, using first-person pronoun features, and second, extracting seed words and phrases from the Twitter conversation corpus, and third, extracting seed words and phrases from an external corpus of anonymously posted secrets, and we demonstrate the efficacy of those approaches with an annotated corpus.", "Self-disclosure (SD) level To analyze self-disclosure, researchers categorize self-disclosure language into three levels: G (general) for no disclosure, M for medium disclosure, and H for high disclosure (Vondracek and Von dracek, 1971; Barak and Gluck-Ofri, 2007 G Level of Self-Disclosure An obvious clue of self-disclosure is the use of first-person pronouns.", "For example, phrases such as 'I live' or 'My name is' indicate that the utterance contains personal information.", "In previous research, the simple method of counting first-person pronouns was used to measure the degree of self-disclosure (Joinson, 2001; Barak and Gluck-Ofri, 2007) .", "Consequently, the absence of a first-person pronoun signals that the utterance belongs in the G level of self-disclosure.", "We verify this pattern with a dataset of Tweets annotated with G, M, and H levels.", "We divide the annotated Tweets into two classes, G and M/H.", "Then we compute mutual information of each unigram, bigram, or trigram feature to see which features are most discriminative.", "As Table 1 shows, 18 out of 30 M Level of Self-Disclosure Utterances with M level include two types: 1) information related with past events and future plans, and 2) general information about self (Barak and Gluck-Ofri, 2007) .", "For the former, we add as seed trigrams 'I have been' and 'I will'.", "For the latter, we use seven types of information generally accepted to be personally identifiable information (McCallister, 2010) , as listed in the left column of Table 2 .", "To find the appropriate trigrams for those, we take Twitter conversation data (described in Section 4.1) and look for trigrams that begin with 'I' and 'my' and occur more than 200 times.", "We then check each one to see whether it is related with any of the seven types listed in the table.", "As a result, we find 57 seed trigrams for M level.", "H Level of Self-Disclosure Utterances with H level express secretive wishes or sensitive information that exposes self or someone close (Barak and Gluck-Ofri, 2007) .", "These are generally kept as secrets.", "With this intuition, we crawled 26,523 posts from Six Billion Secrets 1 site where users post secrets anonymously 2 .", "We call this external dataset SECRET.", "Unlike G and M levels, evidence of H level of self-disclosure tends to be topical, such as physical appearance, mental and physical illnesses, and family problems, so we 
take an approach of fitting a topic model driven by seed words.", "A similar approach has been successful in sentiment classification (Jo and Oh, 2011; Kim et al., 2013) .", "A critical component of this approach is the set of seed words with which to drive the discovery of topics that are most indicative of H level selfdisclosure.", "To extract the seed words that express secretive personal information, we compute mutual information (Manning et al., 2008) with SE-CRET and 24,610 randomly selected tweets.", "We select 1,000 words with high mutual information and filter out stop words.", "Table 3 shows some of these words.", "To extract seed trigrams of secretive wishes, we again look for trigrams that start with 'I' or 'my', occur more than 200 times, and select trigrams of wishful thinking, such as 'I want to', and 'I wish I'.", "In total, there are 88 seed words and 8 seed trigrams for H. Since SECRET is quite different from Twitter, we must show that posts in SECRET are semantically similar to the H level Tweets.", "Rather than directly comparing SECRET posts and Tweets, we use the same method of extracting discriminative word features from the annotated H level Tweets (see Section 4.2).", "Table 3 shows the seed words extracted from SECRET as well as the annotated Tweets.", "Because the annotated dataset consists of only 200 conversations, the coverage of the topics seems narrower than the much larger SECRETS, but both datasets show similarities in the topics.", "This, combined with the results of the model with the two sets of seed words (see Section 5 for the results), shows that SECRETS is an effective and simple-to-obtain substitute for an annotated corpus of H level of self-disclosure.", "This section describes our model, the selfdisclosure topic model (SDTM), for classifying self-disclosure level and discovering topics for each self-disclosure level.", "SD level of tweet ct πc SD level proportion of conversation c θ G c ; θ M c ; θ H c Topic proportion of {G; M; H} in con- versation c φ G ; φ M ; φ H Word distribution of {G; M; H} α; γ Dirichlet prior for θ; π β G , β M ; β H Dirichlet prior for φ G ; φ M ; φ H n cl Model In section 2, we discussed different approaches to identifying each level of self-disclosure, based on social science literature, annotated and unannotated Tweets, and an external corpus of secret posts.", "In this section, we describe our self-disclosure topic model, based on the widely used latent Dirichlet allocation (Blei et al., 2003) , which incorporates those approaches.", "Figure 2 illustrates the graphical model of 1.", "For each level l ∈ {G, M, H}: For each topic k ∈ {1, .", ".", ".", ", K l }: Draw φ l k ∼ Dir(β l ) 2.", "For each conversation c ∈ {1, .", ".", ".", ", C}: (a) Draw θ G c ∼ Dir(α) (b) Draw θ M c ∼ Dir(α) (c) Draw θ H c ∼ Dir(α) (d) Draw π c ∼ Dir(γ) (e) For each message t ∈ {1, .", ".", ".", ", T }: i.", "Observe first-person pronouns features x ct ii.", "Draw ω ct ∼ M axEnt(x ct , λ) iii.", "Draw y ct ∼ Bernoulli(ω ct ) iv.", "If y ct = 0 which is G level: A.", "Draw z ct ∼ M ult(θ G c ) B.", "For each word n ∈ {1, .", ".", ".", ", N }: Draw word w ctn ∼ M ult(φ G zct ) Else which can be M or H level: A.", "Draw r ct ∼ M ult(π c ) B.", "Draw z ct ∼ M ult(θ rct c ) C. 
For each word n ∈ {1, .", ".", ".", ", N }: Draw word w ctn ∼ M ult(φ rct zct ) Figure 3: Generative process of SDTM.", "SDTM and how those approaches are embodied in it.", "The first approach based on the first-person pronouns is implemented by the observed variable x ct and the parameters λ from a maximum entropy classifier for G vs. M/H level.", "The approach of seed words and phrases for levels M and H is implemented by the three separate word-topic probability vectors for the three levels of SD: φ l which has a Bayesian informative prior β l where l ∈ {G, M, H}, the three levels of self-disclosure.", "Table 4 lists the notations used in the model and the generative process, and Figure 3 describes the generative process.", "Classifying G vs M/H levels Classifying the SD level for each tweet is done in two parts, and the first part classifies G vs. M/H levels with first-person pronouns (I, my, me).", "In the graphical model, y is the latent variable that represents this classification, and ω is the distribution over y. x is the observation of the firstperson pronoun in the tweets, and λ are the parameters learned from the maximum entropy classifier.", "With the annotated Twitter conversation dataset (described in Section 4.2), we experimented with several classifiers (Decision tree, Naive Bayes) and chose the maximum entropy classifier because it performed the best, similar to other joint topic models (Zhao et al., 2010; Mukherjee et al., 2013) .", "Classifying M vs H levels The second part of the classification, the M and the H level, is driven by informative priors with seed words and seed trigrams.", "In the graphical model, r is the latent variable that represents this classification, and π is the distribution over r. γ is a non-informative prior for π, and β l is an informative prior for each SD level by seed words.", "For example, we assign a high value for the seed word 'acne' for β H , and a low value for 'My name is'.", "This approach is the same as joint models of topic and sentiment (Jo and Oh, 2011; Kim et al., 2013) .", "Inference For posterior inference of SDTM, we use collapsed Gibbs sampling which integrates out latent random variables ω, π, θ, and φ.", "Then we only need to compute y, r and z for each tweet.", "We compute full conditional distribution p(y ct = j , r ct = l , z ct = k |y −ct , r −ct , z −ct , w, x) for tweet ct as follows: p(y ct = 0, z ct = k |y −ct , r −ct , z −ct , w, x) ∝ exp(λ 0 · x ct ) 1 j=0 exp(λ j · x ct ) g(c, t, l , k ), p(y ct = 1, r ct = l , z ct = k |y −ct , r −ct , z −ct , w, x) ∝ exp(λ 1 · x ct ) 1 j=0 exp(λ j · x ct ) (γ l + n (−ct) cl ) g(c, t, l , k ), where z −ct , r −ct , y −ct are z, r, y without tweet ct, m ctk (·) is the marginalized sum over word v of m ctk v and the function g(c, t, l , k ) as follows: g(c, t, l , k ) = Γ( V v=1 β l v + n l −(ct) k v ) Γ( V v=1 β l v + n l −(ct) k v + m ctk (·) ) α k + n l (−ct) ck K k=1 α k + n l ck V v=1 Γ(β l v + n l −(ct) k v + m ctk v ) Γ(β l v + n l −(ct) k v ) .", "Data Collection and Annotation To test our self-disclosure topic model, we use a large dataset of conversations consisting of Tweets over three years such that we can analyze the relationship between self-disclosure behavior and conversation frequency and length over time.", "We chose to crawl Twitter because it offers a practical and large source of conversations (Ritter et al., 2010) .", "Others have also analyzed Twitter conversations for natural language and social media Conv's Tweets 101,686 61,451 1,956,993 17,178,638 
Table 5 : Dataset of Twitter conversations.", "We chose conversations consisting of five or more tweets each.", "We chose dyads with twenty or more conversations.", "Users Dyads research (boyd et al., 2010; Danescu-Niculescu-Mizil et al., 2011) , but we collect conversations from the same set of dyads over several months for a unique longitudinal dataset.", "We also make sure that each conversation is at least five tweets, and that each dyad has at least twenty conversations.", "Collecting Twitter conversations We define a Twitter conversation as a chain of tweets where two users are consecutively replying to each other's tweets using the Twitter reply button.", "We initialize the set of users by randomly sampling thirteen users who reply to other users in English from the Twitter public streams 3 .", "Then we crawl each user's public tweets, and look at users who are mentioned in those tweets.", "It is a breadth-first search in the network defined by users as nodes and edges as conversations.", "We run this search for dyads until the depth of four, and filter out users who tweet in a non-English language.", "We use an open source tool for detecting English tweets 4 .", "To protect users' privacy, we replace Twitter userid, usernames and url in tweets with random strings.", "This dataset consists of 101,686 users, 61,451 dyads, 1,956,993 conversations and 17,178,638 tweets which were posted between August 2007 to July 2013.", "Table 5 summarizes the dataset.", "Annotating self-disclosure level To measure the accuracy of our model, we randomly sample 301 conversations, each with ten or fewer tweets, and ask three judges, fluent in English and graduate students/researchers, to annotate each tweet with the level of self-disclosure.", "Judges first read and discussed the definitions and examples of self-disclosure level shown in (Barak and Gluck-Ofri, 2007) , then they worked separately on a Web-based platform.", "As a result of annotation, there are 122 G level converstaions, 147 M level and 32 H level con- versations, and inter-rater agreement using Fleiss kappa (Fleiss, 1971 ) is 0.68, which is substantial agreement result (Landis and Koch, 1977) .", "Classification of Self-Disclosure Level This section describes experiments and results of SDTM as well as several other methods for classification of self-disclosure level.", "We first start with the annotated dataset in section 4.2 in which each tweet is annotated with SD level.", "We then aggregate all of the tweets of a conversation, and we compute the proportions of tweets in each SD level.", "When the proportion of tweets at M or H level is equal to or greater than 0.2, we take the level of the larger proportion and assign that level to the conversation.", "When the proportions of tweets at M or H level are both less than 0.2, we assign G to the SD level.", "The reason for setting 0.2 as the threshold is that a conversation containing tweets with H or M level of selfdisclosure usually starts with a greeting or a general comment, and contains one or more questions or comments before or after the self-disclosure tweet.", "We compare SDTM with the following methods for classifying conversations for SD level: • LDA (Blei et al., 2003) : A Bayesian topic model.", "Each conversation is treated as a document.", "Used in previous work (Bak et al., 2012) .", "• MedLDA (Zhu et al., 2012) : A supervised topic model for document classification.", "Each conversation is treated as a document and response variable can be mapped to a SD level.", "• LIWC 
(Tausczik and Pennebaker, 2010): Word counts of particular categories 5 .", "Used in previous work (Houghton and Joinson, 2012).", "• Bag of Words + Bigrams + Trigrams (BOW+): A bag of words, bigram and trigram features.", "We exclude features that appear only once or twice.", "• Seed words and trigrams (SEED): Occurrences of seed words/trigrams from SECRET which are described in section 3.3.", "• SDTM with seed words from annotated Tweets (SDTM−): To compare with SDTM below using seed words from SECRET, this uses seed words from the annotated data described in section 2.4.", "• ASUM (Jo and Oh, 2011 ): A joint model of sentiments and topics.", "We map each SD level to one sentiment and use the same seed words/trigrams from SECRET as in SDTM below.", "Used in previous work (Bak et al., 2012) .", "• First-person pronouns (FirstP): Occurrence of first-person pronouns which are described in section 3.2.", "To identify first-person pronouns, we tagged parts of speech in each tweet with the Twitter POS tagger (Owoputi et al., 2013) .", "• First-person pronouns + Seed words/trigrams (FP+SE1): First-person pronouns and seed words/trigrams from SECRET.", "• Two stage classifier with First-person pronouns + Seed words/trigrams (FP+SE2): A Method Acc G F 1 M F 1 H F Table 6 : SD level classification accuracies and Fmeasures using annotated data.", "Acc is accuracy, and G F 1 is F-measure for classifying the G level.", "Avg F 1 is the macroaveraged value of G F 1 , M F 1 and H F 1 .", "SDTM outperforms all other methods compared.", "The difference between SDTM and FirstP is statistically significant (p-value < 0.05 for accuracy, < 0.0001 for Avg F 1 ).", "two stage classifier with first-person pronouns and seed words/trigrams from SE-CRET.", "In the first stage, the classifier identifies G with first-person pronouns.", "Then in the second stage, the classifier uses seed words and trigrams to identify M and H levels.", "• SDTM: Our model with first-person pronouns and seed words/trigrams from SE-CRET.", "SEED, LIWC, LDA and FirstP cannot be used directly for classification, so we use Maximum entropy model with outputs of each of those models as features 6 .", "BOW+ uses SVM with a radial basis kernel which performs better than all other settings tried including maximum entropy.", "We split the data randomly into 80/20 for train/test.", "We run MedLDA, ASUM and SDTM 20 times each and compute the average accuracies and F-measure for each level.", "We run LDA and MedLDA with various number of topics from 80 to 140, and 120 topics shows best outputs.", "So we set 120 topics for LDA, MedLDA and ASUM, 60; 40; 40 topics for SDTM K G , K M and K H respectively which is best perform from 40; 40; 40 to 60; 60; 60 topics.", "We assume that a conversation has few topics and self-disclosure levels, so we set α = γ = 0.1 (Tang et al., 2014) .", "To incorporate the seed words and trigrams into ASUM and SDTM, we initialize β G , β M and β H differently.", "We assign a high value of 2.0 for each seed word and trigram for that level, and a low value of 10 −6 for each word that is a seed word for another level, and a default value of 0.01 for all other words.", "This approach is the same as previous papers (Jo and Oh, 2011; Kim et al., 2013) .", "As Table 6 shows, SDTM performs better than the other methods for accuracy as well as Fmeasure.", "LDA and MedLDA generally show the lowest performance, which is not surprising given these models are quite general and not tuned specifically for this type of semi-supervised 
classification task.", "BOW which is simple word features also does not perform well, showing especially low F-measure for the H level.", "LIWC and SEED perform better than LDA, but these have quite low F-measure for G and H levels.", "ASUM shows better performance for classifying H level than others, confirming the effectiveness of a topic modeling approach to this difficult task, but not as well as SDTM.", "FirstP shows good F-measure for the G level, but the H level F-measure is quite low, even lower than SEED.", "Combining first-person pronouns and seed words and trigrams (FP+SE1) shows better than each feature alone, and the two stage classifier (FP+SE2) which is a similar approach taken in SDTM shows better results.", "Finally, SDTM classifies G and M level at a similar accuracy with FirstP, FP+SE1 and FP+SE2, but it significantly improves accuracy for the H level compared to all other methods.", "Relations of Self-Disclosure and Conversation Behaviors In this section, we investigate whether there is a relationship between self-disclosure and conversation behaviors over time.", "Self-disclosure is one way to maintain and improve relationships (Jourard, 1971; Joinson and Paine, 2007) .", "So two people's intimacy changes over time has relationship with self-disclosure in their conversation.", "However, it is hard to identify intimacy between users in large scale online social network.", "So we choose conversation behaviors such as conversation frequency and length which can be treated as proxies for measuring intimacy between two people (Emmers- Sommer, 2004; Bak et al., 2012) .", "With SDTM, we can automatically classify the SD level of a large number of conversations, so we investigate whether there is a similar relationship between self-disclosure in conversations and subsequent conversation behaviors with the same partner on Twitter.", "For comparing conversation behaviors over time, we divided the conversations into two sets for each dyad.", "For the initial period, we include conversations from the dyad's first conversation to 20 days later.", "And for the subsequent period, we include conversations during the subsequent 10 days.", "We compute proportions of conversation for each SD level for each dyad in the initial and subsequent periods.", "More specifically, we ask the following three questions: 1.", "If a dyad shows high conversation frequency at a particular time period, would they display higher SD in their subsequent conversations?", "2.", "If a dyad displays high SD level in their conversations at a particular time period, would their subsequent conversations be longer?", "3.", "If a dyad displays high overall SD level, would their conversations increase in length over time more than dyads with lower overall SD level?", "Experiment Setup We first run SDTM with all of our Twitter conversation data with 150; 120; 120 topics for SDTM K G , K M and K H respectively.", "The hyper-parameters are the same as in section 5.", "To handle a large dataset, we employ a distributed algorithm (Newman et al., 2009) , and run with 28 threads.", "Table 7 shows some of the topics that were prominent in each SD level by KL-divergence.", "As expected, G level includes general topics such as food, celebrity, soccer and IT devices, M level includes personal communication and birthday, and finally, H level includes sickness and profanity.", "We define a new measurement, SD level score for a dyad in the period, which is a weighted sum of each conversation with SD levels mapped to 1, 2, and 3, 
for the levels G, M, and H, respectively.", "Figure 5 : Relationship between initial conversation frequency and subsequent SD level.", "The solid line is the linear regression line, and the coefficient is 0.0020 with p < 0.0001, which shows a significant positive relationship.", "Subsequent SD level 6.2 Does high frequency of conversation lead to more self-disclosure?", "We investigate whether the initial conversation frequency is correlated with the SD level in the subsequent period.", "We run linear regression with the initial conversation frequency as the independent variable, and SD level in the subsequent period as the dependent variable.", "The regression coefficient is 0.0020 with low pvalue (p < 0.0001).", "Figure 5 shows the scatter plot.", "We can see that the slope of the regression line is positive.", "Does high self-disclosure lead to longer conversations?", "Now we investigate the effect of the selfdisclosure level to conversation length.", "We run linear regression with the intial SD level score as the independent variable, and the rate of change in conversation length between initial period and subsequent period as the dependent variable.", "Conversation length is measured by the number of tweets in a conversation.", "The result of regression is that the independent variable's coefficient is 0.048 with a low p-value (p < 0.0001).", "Figure 6 shows the scatter plot with the regression line, and we can see that the slope of regression line is positive.", "H level 101 184 176 36 104 82 113 33 19 chocolate obama league send twitter going ass better lips butter he's win email follow party bitch sick kisses good romney game i'll tumblr weekend fuck feel love cake vote season sent tweet day yo throat smiles peanut right team dm following night shit cold softly milk president cup address account dinner fucking hope hand sugar people city know fb birthday lmao pain eyes cream good arsenal check followers Now we investigate the conversation length changes over time with three groups, low, medium, and high, by overall SD level.", "Then we investigate changes in conversation length over time.", "Figure 7 shows the results of this investigation.", "First, conversations are generally lengthier when SD level is high.", "This phenomenon is also ob- We divide dyads into three groups by SD level score as low, medium, and high.", "Conversation length noticeably increases over time in the medium and high groups, but only slight in the low group.", "served in figure 6 , but here we can see it as a long-term persistent pattern.", "Second, conversation length increases consistently and significantly for the high and medium groups, but for the low SD group, there is not a significant increase of conversation length over time.", "G level M level Related Work Prior work on quantitatively analyzing selfdisclosure has relied on user surveys (Ledbetter et al., 2011; Trepte and Reinecke, 2013) or human annotation (Barak and Gluck-Ofri, 2007; Courtney Walton and Rice, 2013) .", "These methods consume much time and effort, so they are not suitable for large-scale studies.", "In prior work closest to ours, Bak et al.", "(2012) showed that a topic model can be used to identify self-disclosure, but that work applies a two-step process in which a basic topic model is first applied to find the topics, and then the topics are post-processed for binary classification of self-disclosure.", "We improve upon this work by applying a single unified model of topics and self-disclosure for high accuracy in classifying 
the three levels of self-disclosure.", "Subjectivity which is aspect of expressing opinions (Pang and Lee, 2008; Wiebe et al., 2004) is related with self-disclosure, but they are different dimensions of linguistic behavior.", "Because there indeed are many high self-disclosure tweets that are subjective, but there are also counter examples in annotated dataset.", "The tweet \"England manager is Roy Hodgson.\"", "is low self-disclosure and low subjectivity, \"I have barely any hair left.\"", "is high self-disclosure but low subjectivity, and \"Senator stop lying!\"", "is low self-disclosure but high subjectivity.", "Conclusion and Future Work In this paper, we have presented the self-disclosure topic model (SDTM) for discovering topics and classifying SD levels from Twitter conversation data.", "We devised a set of effective seed words and trigrams, mined from a dataset of secrets.", "We also annotated Twitter conversations to make a ground-truth dataset for SD level.", "With annotated data, we showed that SDTM outperforms previous methods in classification accuracy and Fmeasure.", "We publish the source code of SDTM and the dataset include annotated Twitter conversations and SECRET publicly 7 .", "We also analyzed the relationship between SD level and conversation behaviors over time.", "We found that there is a positive correlation between initial SD level and subsequent conversation length.", "Also, dyads show higher level of SD if they initially display high conversation frequency.", "Finally, dyads with overall medium and high SD level will have longer conversations over time.", "These results support previous results in so-7 http://uilab.kaist.ac.kr/research/ EMNLP2014 cial psychology research with more robust results from a large-scale dataset, and show the effectiveness of computationally analyzing at SD behavior.", "There are several future directions for this research.", "First, we can improve our modeling for higher accuracy and better interpretability.", "For instance, SDTM only considers first-person pronouns and topics.", "Naturally, there are other linguistic patterns that can be identified by humans but not captured by pronouns and topics.", "Second, the number of topics for each level is varied, and so we can explore nonparametric topic models (Teh et al., 2006) which infer the number of topics from the data.", "Third, we can look at the relationship between self-disclosure behavior and general online social network usage beyond conversations.", "We will explore these directions in our future work." ] }
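The seed mining in the paper's Sections 2.3-2.4 is plain frequency filtering plus mutual-information feature ranking. A sketch using scikit-learn and the standard library; the exact MI estimator differs from the Manning et al. formulation cited in the paper, so treat the ranking as approximate:

from collections import Counter

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import mutual_info_classif

def top_mi_features(secret_posts, random_tweets, n=1000, ngram_range=(1, 3)):
    """Rank n-gram features by mutual information with the secret-vs-random
    label, mirroring the paper's H-level seed word extraction."""
    docs = list(secret_posts) + list(random_tweets)
    labels = [1] * len(secret_posts) + [0] * len(random_tweets)
    vec = CountVectorizer(ngram_range=ngram_range, binary=True)
    X = vec.fit_transform(docs)
    mi = mutual_info_classif(X, labels, discrete_features=True)
    order = np.argsort(mi)[::-1][:n]
    feats = np.array(vec.get_feature_names_out())
    return list(zip(feats[order], mi[order]))

def candidate_seed_trigrams(tweets, min_count=200):
    """Trigrams starting with 'I' or 'my' occurring more than `min_count`
    times: the paper's candidate pool for M-level seed trigrams."""
    counts = Counter()
    for t in tweets:
        toks = t.lower().split()
        for i in range(len(toks) - 2):
            if toks[i] in ("i", "my"):
                counts[" ".join(toks[i:i + 3])] += 1
    return [(tri, c) for tri, c in counts.most_common() if c > min_count]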
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "2.4", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "5", "6", "6.1", "6.3", "7", "8" ], "paper_header_content": [ "Introduction", "Self-Disclosure", "Self-disclosure (SD) level", "G Level of Self-Disclosure", "M Level of Self-Disclosure", "H Level of Self-Disclosure", "Model", "Classifying G vs M/H levels", "Classifying M vs H levels", "Inference", "Data Collection and Annotation", "Collecting Twitter conversations", "Annotating self-disclosure level", "Classification of Self-Disclosure Level", "Relations of Self-Disclosure and Conversation Behaviors", "Experiment Setup", "Does high self-disclosure lead to longer conversations?", "Related Work", "Conclusion and Future Work" ] }
GEM-SciDuet-train-75#paper-1188#slide-2
Self-disclosure Definition
"The verbal expressions by which a person reveals aspects of self to others" [Jourard1971b]
"Process of making the self known to others" [Jourard&Lasakow1958]
"The verbal expressions by which a person reveals aspects of self to others" [Jourard1971b]
"Process of making the self known to others" [Jourard&Lasakow1958]
[]
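The analyses in the paper's Section 6 rest on a per-dyad SD level score (a weighted sum with G, M, H mapped to 1, 2, 3) followed by simple linear regressions. A sketch with scipy; the paper does not specify any normalization of the score, so none is applied here:

from scipy.stats import linregress

LEVEL_WEIGHT = {"G": 1, "M": 2, "H": 3}

def sd_level_score(conversation_levels):
    """Weighted sum over a dyad's conversations in a period,
    with G/M/H mapped to 1/2/3 as in the paper."""
    return sum(LEVEL_WEIGHT[lvl] for lvl in conversation_levels)

def slope_and_p(x, y):
    """Simple linear regression, e.g. initial conversation frequency (x)
    against subsequent SD level score (y); returns (coefficient, p-value)."""
    fit = linregress(x, y)
    return fit.slope, fit.pvalue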
GEM-SciDuet-train-75#paper-1188#slide-3
1188
Self-disclosure topic model for classifying and analyzing Twitter conversations
Self-disclosure, the act of revealing oneself to others, is an important social behavior that strengthens interpersonal relationships and increases social support. Although there are many social science studies of self-disclosure, they are based on manual coding of small datasets and questionnaires. We conduct a computational analysis of self-disclosure with a large dataset of naturally-occurring conversations, a semi-supervised machine learning algorithm, and a computational analysis of the effects of self-disclosure on subsequent conversations. We use a longitudinal dataset of 17 million tweets, all of which occurred in conversations that consist of five or more tweets directly replying to the previous tweet, and from dyads with twenty or more conversations each. We develop the self-disclosure topic model (SDTM), a variant of latent Dirichlet allocation (LDA), for automatically classifying the level of self-disclosure for each tweet. We take the results of SDTM and analyze the effects of self-disclosure on subsequent conversations. Our model significantly outperforms several comparable methods on classifying the level of self-disclosure, and the analysis of the longitudinal data using SDTM uncovers a significant and positive correlation between self-disclosure and conversation frequency and length.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238 ], "paper_content_text": [ "Introduction Self-disclosure is an important and pervasive social behavior.", "People disclose personal information about themselves to improve and maintain * This work was done when JinYeong Bak was a visiting student at Microsoft Research, Beijing, China.", "relationships (Jourard, 1971; Joinson and Paine, 2007) .", "A common instance of self-disclosure is the start of a conversation with an exchange of names and additional self-introductions.", "Another example of self-disclosure, shown in Figure 1c , where the information disclosed about a family member's serious illness, is much more personal than the exchange of names.", "In this paper, we seek to understand this important social behavior using a large-scale Twitter conversation data, automatically classifying the level of self-disclosure using machine learning and correlating the patterns with conversational behaviors which can serve as proxies for measuring intimacy between two conversational partners.", "Twitter conversation data, explained in more detail in section 4.1, enable an extremely large scale study of naturally-occurring self-disclosure behavior, compared to traditional social science studies.", "One challenge of such large scale study, though, remains in the lack of labeled groundtruth data of self-disclosure level.", "That is, naturally-occurring Twitter conversations do not come tagged with the level of self-disclosure in each conversation.", "To overcome that challenge, we propose a semi-supervised machine learning approach using probabilistic topic modeling.", "Our self-disclosure topic model (SDTM) assumes that self-disclosure behavior can be modeled using a combination of simple linguistic features (e.g., pronouns) with automatically discovered semantic themes (i.e., topics).", "For instance, an utterance \"I am finally through with this disastrous relationship\" uses a first-person pronoun and contains a topic about personal relationships.", "In comparison with various other models, SDTM shows the highest accuracy, and the resulting conversation frequency and length patterns on self-disclosure are shown different over time.", "Our contributions to the research community include the following: • We present key features and prior knowledge for identifying self-disclosure level, and show relevance of it with experiment results (Sec.", "2).", "• We present a topic model that explicitly includes the 
level of self-disclosure in a conversation using linguistic features and the latent semantic topics (Sec.", "3).", "• We collect a large dataset of Twitter conversations over three years and annotate a small subset with self-disclosure level (Sec.", "4).", "• We compare the classification accuracy of SDTM with other models and show that it performs the best (Sec.", "5).", "• We correlate the self-disclosure patterns and conversation behaviors to show that there is significant relationship over time (Sec.", "6).", "Self-Disclosure In this section, we look at social science literature for definition of the levels of self-disclosure.", "Using that definition, we devise an approach to automatically identify the levels of self-disclosure in a large corpus of OSN conversations.", "We discuss three approaches, first, using first-person pronoun features, and second, extracting seed words and phrases from the Twitter conversation corpus, and third, extracting seed words and phrases from an external corpus of anonymously posted secrets, and we demonstrate the efficacy of those approaches with an annotated corpus.", "Self-disclosure (SD) level To analyze self-disclosure, researchers categorize self-disclosure language into three levels: G (general) for no disclosure, M for medium disclosure, and H for high disclosure (Vondracek and Von dracek, 1971; Barak and Gluck-Ofri, 2007 G Level of Self-Disclosure An obvious clue of self-disclosure is the use of first-person pronouns.", "For example, phrases such as 'I live' or 'My name is' indicate that the utterance contains personal information.", "In previous research, the simple method of counting first-person pronouns was used to measure the degree of self-disclosure (Joinson, 2001; Barak and Gluck-Ofri, 2007) .", "Consequently, the absence of a first-person pronoun signals that the utterance belongs in the G level of self-disclosure.", "We verify this pattern with a dataset of Tweets annotated with G, M, and H levels.", "We divide the annotated Tweets into two classes, G and M/H.", "Then we compute mutual information of each unigram, bigram, or trigram feature to see which features are most discriminative.", "As Table 1 shows, 18 out of 30 M Level of Self-Disclosure Utterances with M level include two types: 1) information related with past events and future plans, and 2) general information about self (Barak and Gluck-Ofri, 2007) .", "For the former, we add as seed trigrams 'I have been' and 'I will'.", "For the latter, we use seven types of information generally accepted to be personally identifiable information (McCallister, 2010) , as listed in the left column of Table 2 .", "To find the appropriate trigrams for those, we take Twitter conversation data (described in Section 4.1) and look for trigrams that begin with 'I' and 'my' and occur more than 200 times.", "We then check each one to see whether it is related with any of the seven types listed in the table.", "As a result, we find 57 seed trigrams for M level.", "H Level of Self-Disclosure Utterances with H level express secretive wishes or sensitive information that exposes self or someone close (Barak and Gluck-Ofri, 2007) .", "These are generally kept as secrets.", "With this intuition, we crawled 26,523 posts from Six Billion Secrets 1 site where users post secrets anonymously 2 .", "We call this external dataset SECRET.", "Unlike G and M levels, evidence of H level of self-disclosure tends to be topical, such as physical appearance, mental and physical illnesses, and family problems, so we 
take an approach of fitting a topic model driven by seed words.", "A similar approach has been successful in sentiment classification (Jo and Oh, 2011; Kim et al., 2013).", "A critical component of this approach is the set of seed words with which to drive the discovery of topics that are most indicative of H level self-disclosure.", "To extract the seed words that express secretive personal information, we compute mutual information (Manning et al., 2008) with SECRET and 24,610 randomly selected tweets.", "We select 1,000 words with high mutual information and filter out stop words.", "Table 3 shows some of these words.", "To extract seed trigrams of secretive wishes, we again look for trigrams that start with 'I' or 'my', occur more than 200 times, and select trigrams of wishful thinking, such as 'I want to' and 'I wish I'.", "In total, there are 88 seed words and 8 seed trigrams for H. Since SECRET is quite different from Twitter, we must show that posts in SECRET are semantically similar to the H level Tweets.", "Rather than directly comparing SECRET posts and Tweets, we use the same method of extracting discriminative word features from the annotated H level Tweets (see Section 4.2).", "Table 3 shows the seed words extracted from SECRET as well as from the annotated Tweets.", "Because the annotated dataset consists of only 200 conversations, the coverage of the topics seems narrower than in the much larger SECRET, but both datasets show similarities in the topics.", "This, combined with the results of the model with the two sets of seed words (see Section 5 for the results), shows that SECRET is an effective and simple-to-obtain substitute for an annotated corpus of the H level of self-disclosure.", "This section describes our model, the self-disclosure topic model (SDTM), for classifying the self-disclosure level and discovering topics for each self-disclosure level.", "Table 4 (notation): r_ct: SD level of tweet ct; π_c: SD level proportion of conversation c; θ^G_c, θ^M_c, θ^H_c: topic proportions of G, M, H in conversation c; φ^G, φ^M, φ^H: word distributions of G, M, H; α, γ: Dirichlet priors for θ and π; β^G, β^M, β^H: Dirichlet priors for φ^G, φ^M, φ^H; n_cl: number of tweets in conversation c assigned SD level l.", "Model In section 2, we discussed different approaches to identifying each level of self-disclosure, based on social science literature, annotated and unannotated Tweets, and an external corpus of secret posts.", "In this section, we describe our self-disclosure topic model, based on the widely used latent Dirichlet allocation (Blei et al., 2003), which incorporates those approaches.", "Figure 2 illustrates the graphical model of SDTM and how those approaches are embodied in it.", "Figure 3 (generative process of SDTM): 1. For each level l ∈ {G, M, H}: for each topic k ∈ {1, ..., K_l}: draw φ^l_k ∼ Dir(β^l). 2. For each conversation c ∈ {1, ..., C}: (a) draw θ^G_c ∼ Dir(α); (b) draw θ^M_c ∼ Dir(α); (c) draw θ^H_c ∼ Dir(α); (d) draw π_c ∼ Dir(γ); (e) for each message t ∈ {1, ..., T}: i. observe first-person pronoun features x_ct; ii. draw ω_ct ∼ MaxEnt(x_ct, λ); iii. draw y_ct ∼ Bernoulli(ω_ct); iv. if y_ct = 0 (G level): A. draw z_ct ∼ Mult(θ^G_c); B. for each word n ∈ {1, ..., N}: draw word w_ctn ∼ Mult(φ^G_{z_ct}); else (M or H level): A. draw r_ct ∼ Mult(π_c); B. draw z_ct ∼ Mult(θ^{r_ct}_c); C. for each word n ∈ {1, ..., N}: draw word w_ctn ∼ Mult(φ^{r_ct}_{z_ct}).", "The first approach, based on the first-person pronouns, is implemented by the observed variable x_ct and the parameters λ from a maximum entropy classifier for the G vs. M/H level.", "The approach of seed words and phrases for levels M and H is implemented by three separate word-topic probability vectors, one per SD level: φ^l, which has a Bayesian informative prior β^l, where l ∈ {G, M, H} ranges over the three levels of self-disclosure.", "Table 4 lists the notation used in the model, and Figure 3 describes the generative process.", "Classifying G vs M/H levels Classifying the SD level for each tweet is done in two parts, and the first part classifies G vs. M/H levels with first-person pronouns (I, my, me).", "In the graphical model, y is the latent variable that represents this classification, and ω is the distribution over y. x is the observation of the first-person pronouns in the tweets, and λ are the parameters learned from the maximum entropy classifier.", "With the annotated Twitter conversation dataset (described in Section 4.2), we experimented with several classifiers (decision tree, naive Bayes) and chose the maximum entropy classifier because it performed the best, similar to other joint topic models (Zhao et al., 2010; Mukherjee et al., 2013).", "Classifying M vs H levels The second part of the classification, between the M and the H level, is driven by informative priors with seed words and seed trigrams.", "In the graphical model, r is the latent variable that represents this classification, and π is the distribution over r. γ is a non-informative prior for π, and β^l is an informative prior for each SD level by seed words.", "For example, we assign a high value for the seed word 'acne' in β^H, and a low value for 'My name is'.", "This approach is the same as in joint models of topic and sentiment (Jo and Oh, 2011; Kim et al., 2013).", "Inference For posterior inference of SDTM, we use collapsed Gibbs sampling, which integrates out the latent random variables ω, π, θ, and φ.", "Then we only need to compute y, r and z for each tweet.", "We compute the full conditional distribution p(y_ct = j′, r_ct = l′, z_ct = k′ | y_−ct, r_−ct, z_−ct, w, x) for tweet ct as follows: p(y_ct = 0, z_ct = k′ | y_−ct, r_−ct, z_−ct, w, x) ∝ [exp(λ_0 · x_ct) / Σ_{j=0}^{1} exp(λ_j · x_ct)] g(c, t, l′, k′), and p(y_ct = 1, r_ct = l′, z_ct = k′ | y_−ct, r_−ct, z_−ct, w, x) ∝ [exp(λ_1 · x_ct) / Σ_{j=0}^{1} exp(λ_j · x_ct)] (γ_{l′} + n^{(−ct)}_{cl′}) g(c, t, l′, k′), where z_−ct, r_−ct, y_−ct are z, r, y without tweet ct, m_{ctk′(·)} is the marginalized sum over words v of m_{ctk′v}, and the function g is g(c, t, l′, k′) = [Γ(Σ_{v=1}^{V} (β^{l′}_v + n^{l′,−(ct)}_{k′v})) / Γ(Σ_{v=1}^{V} (β^{l′}_v + n^{l′,−(ct)}_{k′v}) + m_{ctk′(·)})] · [(α_{k′} + n^{l′,(−ct)}_{ck′}) / Σ_{k=1}^{K} (α_k + n^{l′}_{ck})] · Π_{v=1}^{V} [Γ(β^{l′}_v + n^{l′,−(ct)}_{k′v} + m_{ctk′v}) / Γ(β^{l′}_v + n^{l′,−(ct)}_{k′v})].", "Data Collection and Annotation To test our self-disclosure topic model, we use a large dataset of conversations consisting of Tweets over three years, such that we can analyze the relationship between self-disclosure behavior and conversation frequency and length over time.", "We chose to crawl Twitter because it offers a practical and large source of conversations (Ritter et al., 2010).", "Others have also analyzed Twitter conversations for natural language and social media 
research (boyd et al., 2010; Danescu-Niculescu-Mizil et al., 2011), but we collect conversations from the same set of dyads over several months for a unique longitudinal dataset.", "Table 5 (dataset of Twitter conversations): Users: 101,686; Dyads: 61,451; Conversations: 1,956,993; Tweets: 17,178,638.", "We chose conversations consisting of five or more tweets each.", "We chose dyads with twenty or more conversations.", "We also make sure that each conversation is at least five tweets, and that each dyad has at least twenty conversations.", "Collecting Twitter conversations We define a Twitter conversation as a chain of tweets where two users are consecutively replying to each other's tweets using the Twitter reply button.", "We initialize the set of users by randomly sampling thirteen users who reply to other users in English from the Twitter public streams.", "Then we crawl each user's public tweets, and look at users who are mentioned in those tweets.", "It is a breadth-first search in the network defined by users as nodes and conversations as edges.", "We run this search for dyads to a depth of four, and filter out users who tweet in a non-English language.", "We use an open source tool for detecting English tweets.", "To protect users' privacy, we replace Twitter user IDs, usernames and URLs in tweets with random strings.", "This dataset consists of 101,686 users, 61,451 dyads, 1,956,993 conversations and 17,178,638 tweets, which were posted between August 2007 and July 2013.", "Table 5 summarizes the dataset.", "Annotating self-disclosure level To measure the accuracy of our model, we randomly sample 301 conversations, each with ten or fewer tweets, and ask three judges, fluent in English and graduate students/researchers, to annotate each tweet with the level of self-disclosure.", "Judges first read and discussed the definitions and examples of self-disclosure level shown in (Barak and Gluck-Ofri, 2007), then they worked separately on a Web-based platform.", "As a result of annotation, there are 122 G level conversations, 147 M level and 32 H level conversations, and inter-rater agreement using Fleiss' kappa (Fleiss, 1971) is 0.68, which indicates substantial agreement (Landis and Koch, 1977).", "Classification of Self-Disclosure Level This section describes experiments and results of SDTM as well as several other methods for classification of self-disclosure level.", "We first start with the annotated dataset in section 4.2, in which each tweet is annotated with an SD level.", "We then aggregate all of the tweets of a conversation, and we compute the proportions of tweets at each SD level.", "When the proportion of tweets at the M or H level is equal to or greater than 0.2, we take the level of the larger proportion and assign that level to the conversation.", "When the proportions of tweets at the M or H level are both less than 0.2, we assign G as the SD level.", "The reason for setting 0.2 as the threshold is that a conversation containing tweets with H or M level of self-disclosure usually starts with a greeting or a general comment, and contains one or more questions or comments before or after the self-disclosure tweet.", "We compare SDTM with the following methods for classifying conversations by SD level: • LDA (Blei et al., 2003): A Bayesian topic model.", "Each conversation is treated as a document.", "Used in previous work (Bak et al., 2012).", "• MedLDA (Zhu et al., 2012): A supervised topic model for document classification.", "Each conversation is treated as a document, and the response variable can be mapped to an SD level.", "• LIWC 
(Tausczik and Pennebaker, 2010): Word counts of particular categories.", "Used in previous work (Houghton and Joinson, 2012).", "• Bag of Words + Bigrams + Trigrams (BOW+): Bag-of-words, bigram and trigram features.", "We exclude features that appear only once or twice.", "• Seed words and trigrams (SEED): Occurrences of the seed words/trigrams from SECRET, which are described in section 3.3.", "• SDTM with seed words from annotated Tweets (SDTM−): To compare with SDTM below, which uses seed words from SECRET, this uses seed words from the annotated data described in section 2.4.", "• ASUM (Jo and Oh, 2011): A joint model of sentiments and topics.", "We map each SD level to one sentiment and use the same seed words/trigrams from SECRET as in SDTM below.", "Used in previous work (Bak et al., 2012).", "• First-person pronouns (FirstP): Occurrence of the first-person pronouns described in section 3.2.", "To identify first-person pronouns, we tagged parts of speech in each tweet with the Twitter POS tagger (Owoputi et al., 2013).", "• First-person pronouns + Seed words/trigrams (FP+SE1): First-person pronouns and seed words/trigrams from SECRET.", "• Two stage classifier with First-person pronouns + Seed words/trigrams (FP+SE2): A two stage classifier with first-person pronouns and seed words/trigrams from SECRET.", "In the first stage, the classifier identifies G with first-person pronouns.", "Then in the second stage, the classifier uses seed words and trigrams to identify the M and H levels.", "• SDTM: Our model with first-person pronouns and seed words/trigrams from SECRET.", "Table 6 (SD level classification accuracies and F-measures using annotated data; columns: Method, Acc, G F1, M F1, H F1, Avg F1): Acc is accuracy, and G F1 is the F-measure for classifying the G level.", "Avg F1 is the macro-averaged value of G F1, M F1 and H F1.", "SDTM outperforms all other methods compared.", "The difference between SDTM and FirstP is statistically significant (p-value < 0.05 for accuracy, < 0.0001 for Avg F1).", "SEED, LIWC, LDA and FirstP cannot be used directly for classification, so we use a maximum entropy model with the outputs of each of those models as features.", "BOW+ uses an SVM with a radial basis kernel, which performs better than all other settings tried, including maximum entropy.", "We split the data randomly into 80/20 for train/test.", "We run MedLDA, ASUM and SDTM 20 times each and compute the average accuracies and F-measures for each level.", "We run LDA and MedLDA with various numbers of topics from 80 to 140, and 120 topics shows the best outputs.", "So we set 120 topics for LDA, MedLDA and ASUM, and 60; 40; 40 topics for SDTM's K_G, K_M and K_H respectively, which performs best among the settings from 40; 40; 40 to 60; 60; 60 topics.", "We assume that a conversation has few topics and self-disclosure levels, so we set α = γ = 0.1 (Tang et al., 2014).", "To incorporate the seed words and trigrams into ASUM and SDTM, we initialize β^G, β^M and β^H differently.", "We assign a high value of 2.0 for each seed word and trigram of that level, a low value of 10^−6 for each word that is a seed word for another level, and a default value of 0.01 for all other words.", "This approach is the same as in previous papers (Jo and Oh, 2011; Kim et al., 2013).", "As Table 6 shows, SDTM performs better than the other methods in accuracy as well as F-measure.", "LDA and MedLDA generally show the lowest performance, which is not surprising given that these models are quite general and not tuned specifically for this type of semi-supervised 
classification task.", "BOW+, which uses simple word features, also does not perform well, showing an especially low F-measure for the H level.", "LIWC and SEED perform better than LDA, but these have quite low F-measures for the G and H levels.", "ASUM shows better performance for classifying the H level than the other baselines, confirming the effectiveness of a topic modeling approach to this difficult task, but still not as well as SDTM.", "FirstP shows a good F-measure for the G level, but its H level F-measure is quite low, even lower than SEED's.", "Combining first-person pronouns and seed words and trigrams (FP+SE1) performs better than either feature set alone, and the two stage classifier (FP+SE2), which takes an approach similar to SDTM's, shows better results still.", "Finally, SDTM classifies the G and M levels with accuracy similar to FirstP, FP+SE1 and FP+SE2, but it significantly improves accuracy for the H level compared to all other methods.", "Relations of Self-Disclosure and Conversation Behaviors In this section, we investigate whether there is a relationship between self-disclosure and conversation behaviors over time.", "Self-disclosure is one way to maintain and improve relationships (Jourard, 1971; Joinson and Paine, 2007).", "So how two people's intimacy changes over time is related to the self-disclosure in their conversations.", "However, it is hard to identify intimacy between users in a large-scale online social network.", "So we choose conversation behaviors such as conversation frequency and length, which can be treated as proxies for measuring intimacy between two people (Emmers-Sommer, 2004; Bak et al., 2012).", "With SDTM, we can automatically classify the SD level of a large number of conversations, so we investigate whether there is a similar relationship between self-disclosure in conversations and subsequent conversation behaviors with the same partner on Twitter.", "To compare conversation behaviors over time, we divided the conversations into two sets for each dyad.", "For the initial period, we include conversations from the dyad's first conversation to 20 days later.", "And for the subsequent period, we include conversations during the subsequent 10 days.", "We compute the proportions of conversations at each SD level for each dyad in the initial and subsequent periods.", "More specifically, we ask the following three questions: 1.", "If a dyad shows high conversation frequency in a particular time period, would they display higher SD in their subsequent conversations?", "2.", "If a dyad displays a high SD level in their conversations in a particular time period, would their subsequent conversations be longer?", "3.", "If a dyad displays a high overall SD level, would their conversations increase in length over time more than those of dyads with a lower overall SD level?", "Experiment Setup We first run SDTM on all of our Twitter conversation data with 150; 120; 120 topics for K_G, K_M and K_H respectively.", "The hyper-parameters are the same as in section 5.", "To handle the large dataset, we employ a distributed algorithm (Newman et al., 2009), and run with 28 threads.", "Table 7 shows some of the topics that were prominent in each SD level by KL-divergence.", "As expected, the G level includes general topics such as food, celebrities, soccer and IT devices, the M level includes personal communication and birthdays, and finally, the H level includes sickness and profanity.", "We define a new measurement, the SD level score for a dyad in a period, which is a weighted sum over its conversations with the SD levels mapped to 1, 2, and 3, 
for the levels G, M, and H, respectively.", "Figure 5 (relationship between initial conversation frequency and subsequent SD level): The solid line is the linear regression line, and the coefficient is 0.0020 with p < 0.0001, which shows a significant positive relationship.", "Does high frequency of conversation lead to more self-disclosure? We investigate whether the initial conversation frequency is correlated with the SD level in the subsequent period.", "We run a linear regression with the initial conversation frequency as the independent variable, and the SD level in the subsequent period as the dependent variable.", "The regression coefficient is 0.0020 with a low p-value (p < 0.0001).", "Figure 5 shows the scatter plot.", "We can see that the slope of the regression line is positive.", "Does high self-disclosure lead to longer conversations? Now we investigate the effect of the self-disclosure level on conversation length.", "We run a linear regression with the initial SD level score as the independent variable, and the rate of change in conversation length between the initial period and the subsequent period as the dependent variable.", "Conversation length is measured by the number of tweets in a conversation.", "The result of the regression is that the independent variable's coefficient is 0.048 with a low p-value (p < 0.0001).", "Figure 6 shows the scatter plot with the regression line, and we can see that the slope of the regression line is positive.", "Table 7 (example topics prominent at each SD level, with topic ids and top words): G level topics: 101 (chocolate, butter, good, cake, peanut, milk, sugar, cream), 184 (obama, he's, romney, vote, right, president, people), 176 (league, win, game, season, team, cup, city, arsenal); M level topics: 36 (send, email, i'll, sent, dm, address, know, check), 104 (twitter, follow, tumblr, tweet, following, account, fb, followers), 82 (going, party, weekend, day, night, dinner, birthday); H level topics: 113 (ass, bitch, fuck, yo, shit, fucking, lmao), 33 (better, sick, feel, throat, cold, hope, pain), 19 (lips, kisses, love, smiles, softly, hand, eyes).", "Now we investigate the conversation length changes over time with three groups, low, medium, and high, by overall SD level.", "Then we investigate changes in conversation length over time.", "Figure 7 shows the results of this investigation.", "Figure 7 (conversation length over time): We divide dyads into three groups by SD level score as low, medium, and high; conversation length noticeably increases over time in the medium and high groups, but only slightly in the low group.", "First, conversations are generally lengthier when the SD level is high.", "This phenomenon is also observed in figure 6, but here we can see it as a long-term persistent pattern.", "Second, conversation length increases consistently and significantly for the high and medium groups, but for the low SD group, there is not a significant increase in conversation length over time.", "Related Work Prior work on quantitatively analyzing self-disclosure has relied on user surveys (Ledbetter et al., 2011; Trepte and Reinecke, 2013) or human annotation (Barak and Gluck-Ofri, 2007; Courtney Walton and Rice, 2013).", "These methods consume much time and effort, so they are not suitable for large-scale studies.", "In prior work closest to ours, Bak et al. (2012) showed that a topic model can be used to identify self-disclosure, but that work applies a two-step process in which a basic topic model is first applied to find the topics, and then the topics are post-processed for binary classification of self-disclosure.", "We improve upon this work by applying a single unified model of topics and self-disclosure for high accuracy in classifying 
the three levels of self-disclosure.", "Subjectivity, the aspect of expressing opinions (Pang and Lee, 2008; Wiebe et al., 2004), is related to self-disclosure, but they are different dimensions of linguistic behavior.", "There are indeed many high self-disclosure tweets that are subjective, but there are also counterexamples in the annotated dataset.", "The tweet \"England manager is Roy Hodgson.\" is low self-disclosure and low subjectivity, \"I have barely any hair left.\" is high self-disclosure but low subjectivity, and \"Senator stop lying!\" is low self-disclosure but high subjectivity.", "Conclusion and Future Work In this paper, we have presented the self-disclosure topic model (SDTM) for discovering topics and classifying SD levels from Twitter conversation data.", "We devised a set of effective seed words and trigrams, mined from a dataset of secrets.", "We also annotated Twitter conversations to make a ground-truth dataset for SD level.", "With the annotated data, we showed that SDTM outperforms previous methods in classification accuracy and F-measure.", "We publicly release the source code of SDTM and the dataset, including the annotated Twitter conversations and SECRET (http://uilab.kaist.ac.kr/research/EMNLP2014).", "We also analyzed the relationship between SD level and conversation behaviors over time.", "We found that there is a positive correlation between the initial SD level and subsequent conversation length.", "Also, dyads show a higher level of SD if they initially display high conversation frequency.", "Finally, dyads with an overall medium or high SD level have longer conversations over time.", "These results support previous findings in social psychology research with more robust results from a large-scale dataset, and show the effectiveness of computationally analyzing SD behavior.", "There are several future directions for this research.", "First, we can improve our modeling for higher accuracy and better interpretability.", "For instance, SDTM only considers first-person pronouns and topics.", "Naturally, there are other linguistic patterns that can be identified by humans but not captured by pronouns and topics.", "Second, the number of topics for each level varies, so we can explore nonparametric topic models (Teh et al., 2006), which infer the number of topics from the data.", "Third, we can look at the relationship between self-disclosure behavior and general online social network usage beyond conversations.", "We will explore these directions in our future work." ] }
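The decision flow at the heart of SDTM (first-person pronouns separate G from M/H; seed words and trigrams then separate M from H) can be sketched as a plain rule-based pipeline. This is a minimal illustration of the logic only, not the authors' Gibbs-sampled model, and the seed lists here are small hypothetical stand-ins for the 57 M-level trigrams and the 88 words plus 8 trigrams for H described above.

import re

# Hypothetical excerpts standing in for the paper's seed lists (Sec. 2.3-2.4);
# the full model uses 57 M-level trigrams and 88 words + 8 trigrams for H.
FIRST_PERSON = {"i", "my", "me"}
M_SEED_TRIGRAMS = ["i have been", "i will", "my name is", "i live in"]
H_SEED_WORDS = {"acne", "secret", "depressed"}
H_SEED_TRIGRAMS = ["i want to", "i wish i"]

def sd_level(tweet: str) -> str:
    """Rule-based sketch of SDTM's decision flow: G vs. M/H by first-person
    pronouns, then M vs. H by seed evidence (H checked first, as it is rarer)."""
    tokens = re.findall(r"[a-z']+", tweet.lower())
    text = " ".join(tokens)
    if not FIRST_PERSON & set(tokens):
        return "G"  # no first-person pronoun, so no disclosure
    if H_SEED_WORDS & set(tokens) or any(t in text for t in H_SEED_TRIGRAMS):
        return "H"  # secretive wish or sensitive topic
    if any(t in text for t in M_SEED_TRIGRAMS):
        return "M"  # personally identifiable information
    return "G"  # first-person pronoun but no disclosure evidence

print(sd_level("England manager is Roy Hodgson."))        # -> G
print(sd_level("My name is Sam and I live in Austin."))   # -> M
print(sd_level("I wish I could hide my acne."))           # -> H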
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "2.4", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "5", "6", "6.1", "6.3", "7", "8" ], "paper_header_content": [ "Introduction", "Self-Disclosure", "Self-disclosure (SD) level", "G Level of Self-Disclosure", "M Level of Self-Disclosure", "H Level of Self-Disclosure", "Model", "Classifying G vs M/H levels", "Classifying M vs H levels", "Inference", "Data Collection and Annotation", "Collecting Twitter conversations", "Annotating self-disclosure level", "Classification of Self-Disclosure Level", "Relations of Self-Disclosure and Conversation Behaviors", "Experiment Setup", "Does high self-disclosure lead to longer conversations?", "Related Work", "Conclusion and Future Work" ] }
GEM-SciDuet-train-75#paper-1188#slide-3
Self disclosure Level
General level (No disclosure) Medium level (Medium disclosure) High level (High disclosure)
General level (No disclosure) Medium level (Medium disclosure) High level (High disclosure)
[]
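The asymmetric prior initialization that injects seed knowledge into ASUM and SDTM in Section 5 (2.0 for a level's own seed terms, 10^-6 for seed terms of other levels, 0.01 elsewhere) amounts to building one beta vector over the vocabulary per SD level. A sketch with a toy vocabulary and hypothetical seed sets; only the three prior values come from the paper.

import numpy as np

def build_beta(vocab, own_seeds, other_seeds,
               seed_val=2.0, anti_val=1e-6, default=0.01):
    """Informative Dirichlet prior over the vocabulary for one SD level,
    mirroring the beta initialization described in Section 5."""
    index = {w: i for i, w in enumerate(vocab)}
    beta = np.full(len(vocab), default)
    for w in own_seeds:       # boost this level's own seed terms
        if w in index:
            beta[index[w]] = seed_val
    for w in other_seeds:     # suppress seeds that belong to other levels
        if w in index:
            beta[index[w]] = anti_val
    return beta

vocab = ["acne", "name", "soccer", "wish"]   # toy vocabulary
beta_H = build_beta(vocab, own_seeds={"acne", "wish"}, other_seeds={"name"})
print(beta_H)   # [2.e+00 1.e-06 1.e-02 2.e+00]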
GEM-SciDuet-train-75#paper-1188#slide-4
1188
Self-disclosure topic model for classifying and analyzing Twitter conversations
Self-disclosure, the act of revealing oneself to others, is an important social behavior that strengthens interpersonal relationships and increases social support. Although there are many social science studies of self-disclosure, they are based on manual coding of small datasets and questionnaires. We conduct a computational analysis of self-disclosure with a large dataset of naturally-occurring conversations, a semi-supervised machine learning algorithm, and a computational analysis of the effects of self-disclosure on subsequent conversations. We use a longitudinal dataset of 17 million tweets, all of which occurred in conversations that consist of five or more tweets directly replying to the previous tweet, and from dyads with twenty of more conversations each. We develop self-disclosure topic model (SDTM), a variant of latent Dirichlet allocation (LDA) for automatically classifying the level of self-disclosure for each tweet. We take the results of SDTM and analyze the effects of self-disclosure on subsequent conversations. Our model significantly outperforms several comparable methods on classifying the level of selfdisclosure, and the analysis of the longitudinal data using SDTM uncovers significant and positive correlation between selfdisclosure and conversation frequency and length.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238 ], "paper_content_text": [ "Introduction Self-disclosure is an important and pervasive social behavior.", "People disclose personal information about themselves to improve and maintain * This work was done when JinYeong Bak was a visiting student at Microsoft Research, Beijing, China.", "relationships (Jourard, 1971; Joinson and Paine, 2007) .", "A common instance of self-disclosure is the start of a conversation with an exchange of names and additional self-introductions.", "Another example of self-disclosure, shown in Figure 1c , where the information disclosed about a family member's serious illness, is much more personal than the exchange of names.", "In this paper, we seek to understand this important social behavior using a large-scale Twitter conversation data, automatically classifying the level of self-disclosure using machine learning and correlating the patterns with conversational behaviors which can serve as proxies for measuring intimacy between two conversational partners.", "Twitter conversation data, explained in more detail in section 4.1, enable an extremely large scale study of naturally-occurring self-disclosure behavior, compared to traditional social science studies.", "One challenge of such large scale study, though, remains in the lack of labeled groundtruth data of self-disclosure level.", "That is, naturally-occurring Twitter conversations do not come tagged with the level of self-disclosure in each conversation.", "To overcome that challenge, we propose a semi-supervised machine learning approach using probabilistic topic modeling.", "Our self-disclosure topic model (SDTM) assumes that self-disclosure behavior can be modeled using a combination of simple linguistic features (e.g., pronouns) with automatically discovered semantic themes (i.e., topics).", "For instance, an utterance \"I am finally through with this disastrous relationship\" uses a first-person pronoun and contains a topic about personal relationships.", "In comparison with various other models, SDTM shows the highest accuracy, and the resulting conversation frequency and length patterns on self-disclosure are shown different over time.", "Our contributions to the research community include the following: • We present key features and prior knowledge for identifying self-disclosure level, and show relevance of it with experiment results (Sec.", "2).", "• We present a topic model that explicitly includes the 
level of self-disclosure in a conversation using linguistic features and the latent semantic topics (Sec.", "3).", "• We collect a large dataset of Twitter conversations over three years and annotate a small subset with self-disclosure level (Sec.", "4).", "• We compare the classification accuracy of SDTM with other models and show that it performs the best (Sec.", "5).", "• We correlate the self-disclosure patterns and conversation behaviors to show that there is significant relationship over time (Sec.", "6).", "Self-Disclosure In this section, we look at social science literature for definition of the levels of self-disclosure.", "Using that definition, we devise an approach to automatically identify the levels of self-disclosure in a large corpus of OSN conversations.", "We discuss three approaches, first, using first-person pronoun features, and second, extracting seed words and phrases from the Twitter conversation corpus, and third, extracting seed words and phrases from an external corpus of anonymously posted secrets, and we demonstrate the efficacy of those approaches with an annotated corpus.", "Self-disclosure (SD) level To analyze self-disclosure, researchers categorize self-disclosure language into three levels: G (general) for no disclosure, M for medium disclosure, and H for high disclosure (Vondracek and Von dracek, 1971; Barak and Gluck-Ofri, 2007 G Level of Self-Disclosure An obvious clue of self-disclosure is the use of first-person pronouns.", "For example, phrases such as 'I live' or 'My name is' indicate that the utterance contains personal information.", "In previous research, the simple method of counting first-person pronouns was used to measure the degree of self-disclosure (Joinson, 2001; Barak and Gluck-Ofri, 2007) .", "Consequently, the absence of a first-person pronoun signals that the utterance belongs in the G level of self-disclosure.", "We verify this pattern with a dataset of Tweets annotated with G, M, and H levels.", "We divide the annotated Tweets into two classes, G and M/H.", "Then we compute mutual information of each unigram, bigram, or trigram feature to see which features are most discriminative.", "As Table 1 shows, 18 out of 30 M Level of Self-Disclosure Utterances with M level include two types: 1) information related with past events and future plans, and 2) general information about self (Barak and Gluck-Ofri, 2007) .", "For the former, we add as seed trigrams 'I have been' and 'I will'.", "For the latter, we use seven types of information generally accepted to be personally identifiable information (McCallister, 2010) , as listed in the left column of Table 2 .", "To find the appropriate trigrams for those, we take Twitter conversation data (described in Section 4.1) and look for trigrams that begin with 'I' and 'my' and occur more than 200 times.", "We then check each one to see whether it is related with any of the seven types listed in the table.", "As a result, we find 57 seed trigrams for M level.", "H Level of Self-Disclosure Utterances with H level express secretive wishes or sensitive information that exposes self or someone close (Barak and Gluck-Ofri, 2007) .", "These are generally kept as secrets.", "With this intuition, we crawled 26,523 posts from Six Billion Secrets 1 site where users post secrets anonymously 2 .", "We call this external dataset SECRET.", "Unlike G and M levels, evidence of H level of self-disclosure tends to be topical, such as physical appearance, mental and physical illnesses, and family problems, so we 
take an approach of fitting a topic model driven by seed words.", "A similar approach has been successful in sentiment classification (Jo and Oh, 2011; Kim et al., 2013) .", "A critical component of this approach is the set of seed words with which to drive the discovery of topics that are most indicative of H level selfdisclosure.", "To extract the seed words that express secretive personal information, we compute mutual information (Manning et al., 2008) with SE-CRET and 24,610 randomly selected tweets.", "We select 1,000 words with high mutual information and filter out stop words.", "Table 3 shows some of these words.", "To extract seed trigrams of secretive wishes, we again look for trigrams that start with 'I' or 'my', occur more than 200 times, and select trigrams of wishful thinking, such as 'I want to', and 'I wish I'.", "In total, there are 88 seed words and 8 seed trigrams for H. Since SECRET is quite different from Twitter, we must show that posts in SECRET are semantically similar to the H level Tweets.", "Rather than directly comparing SECRET posts and Tweets, we use the same method of extracting discriminative word features from the annotated H level Tweets (see Section 4.2).", "Table 3 shows the seed words extracted from SECRET as well as the annotated Tweets.", "Because the annotated dataset consists of only 200 conversations, the coverage of the topics seems narrower than the much larger SECRETS, but both datasets show similarities in the topics.", "This, combined with the results of the model with the two sets of seed words (see Section 5 for the results), shows that SECRETS is an effective and simple-to-obtain substitute for an annotated corpus of H level of self-disclosure.", "This section describes our model, the selfdisclosure topic model (SDTM), for classifying self-disclosure level and discovering topics for each self-disclosure level.", "SD level of tweet ct πc SD level proportion of conversation c θ G c ; θ M c ; θ H c Topic proportion of {G; M; H} in con- versation c φ G ; φ M ; φ H Word distribution of {G; M; H} α; γ Dirichlet prior for θ; π β G , β M ; β H Dirichlet prior for φ G ; φ M ; φ H n cl Model In section 2, we discussed different approaches to identifying each level of self-disclosure, based on social science literature, annotated and unannotated Tweets, and an external corpus of secret posts.", "In this section, we describe our self-disclosure topic model, based on the widely used latent Dirichlet allocation (Blei et al., 2003) , which incorporates those approaches.", "Figure 2 illustrates the graphical model of 1.", "For each level l ∈ {G, M, H}: For each topic k ∈ {1, .", ".", ".", ", K l }: Draw φ l k ∼ Dir(β l ) 2.", "For each conversation c ∈ {1, .", ".", ".", ", C}: (a) Draw θ G c ∼ Dir(α) (b) Draw θ M c ∼ Dir(α) (c) Draw θ H c ∼ Dir(α) (d) Draw π c ∼ Dir(γ) (e) For each message t ∈ {1, .", ".", ".", ", T }: i.", "Observe first-person pronouns features x ct ii.", "Draw ω ct ∼ M axEnt(x ct , λ) iii.", "Draw y ct ∼ Bernoulli(ω ct ) iv.", "If y ct = 0 which is G level: A.", "Draw z ct ∼ M ult(θ G c ) B.", "For each word n ∈ {1, .", ".", ".", ", N }: Draw word w ctn ∼ M ult(φ G zct ) Else which can be M or H level: A.", "Draw r ct ∼ M ult(π c ) B.", "Draw z ct ∼ M ult(θ rct c ) C. 
For each word n ∈ {1, .", ".", ".", ", N }: Draw word w ctn ∼ M ult(φ rct zct ) Figure 3: Generative process of SDTM.", "SDTM and how those approaches are embodied in it.", "The first approach based on the first-person pronouns is implemented by the observed variable x ct and the parameters λ from a maximum entropy classifier for G vs. M/H level.", "The approach of seed words and phrases for levels M and H is implemented by the three separate word-topic probability vectors for the three levels of SD: φ l which has a Bayesian informative prior β l where l ∈ {G, M, H}, the three levels of self-disclosure.", "Table 4 lists the notations used in the model and the generative process, and Figure 3 describes the generative process.", "Classifying G vs M/H levels Classifying the SD level for each tweet is done in two parts, and the first part classifies G vs. M/H levels with first-person pronouns (I, my, me).", "In the graphical model, y is the latent variable that represents this classification, and ω is the distribution over y. x is the observation of the firstperson pronoun in the tweets, and λ are the parameters learned from the maximum entropy classifier.", "With the annotated Twitter conversation dataset (described in Section 4.2), we experimented with several classifiers (Decision tree, Naive Bayes) and chose the maximum entropy classifier because it performed the best, similar to other joint topic models (Zhao et al., 2010; Mukherjee et al., 2013) .", "Classifying M vs H levels The second part of the classification, the M and the H level, is driven by informative priors with seed words and seed trigrams.", "In the graphical model, r is the latent variable that represents this classification, and π is the distribution over r. γ is a non-informative prior for π, and β l is an informative prior for each SD level by seed words.", "For example, we assign a high value for the seed word 'acne' for β H , and a low value for 'My name is'.", "This approach is the same as joint models of topic and sentiment (Jo and Oh, 2011; Kim et al., 2013) .", "Inference For posterior inference of SDTM, we use collapsed Gibbs sampling which integrates out latent random variables ω, π, θ, and φ.", "Then we only need to compute y, r and z for each tweet.", "We compute full conditional distribution p(y ct = j , r ct = l , z ct = k |y −ct , r −ct , z −ct , w, x) for tweet ct as follows: p(y ct = 0, z ct = k |y −ct , r −ct , z −ct , w, x) ∝ exp(λ 0 · x ct ) 1 j=0 exp(λ j · x ct ) g(c, t, l , k ), p(y ct = 1, r ct = l , z ct = k |y −ct , r −ct , z −ct , w, x) ∝ exp(λ 1 · x ct ) 1 j=0 exp(λ j · x ct ) (γ l + n (−ct) cl ) g(c, t, l , k ), where z −ct , r −ct , y −ct are z, r, y without tweet ct, m ctk (·) is the marginalized sum over word v of m ctk v and the function g(c, t, l , k ) as follows: g(c, t, l , k ) = Γ( V v=1 β l v + n l −(ct) k v ) Γ( V v=1 β l v + n l −(ct) k v + m ctk (·) ) α k + n l (−ct) ck K k=1 α k + n l ck V v=1 Γ(β l v + n l −(ct) k v + m ctk v ) Γ(β l v + n l −(ct) k v ) .", "Data Collection and Annotation To test our self-disclosure topic model, we use a large dataset of conversations consisting of Tweets over three years such that we can analyze the relationship between self-disclosure behavior and conversation frequency and length over time.", "We chose to crawl Twitter because it offers a practical and large source of conversations (Ritter et al., 2010) .", "Others have also analyzed Twitter conversations for natural language and social media Conv's Tweets 101,686 61,451 1,956,993 17,178,638 
Table 5 : Dataset of Twitter conversations.", "We chose conversations consisting of five or more tweets each.", "We chose dyads with twenty or more conversations.", "Users Dyads research (boyd et al., 2010; Danescu-Niculescu-Mizil et al., 2011) , but we collect conversations from the same set of dyads over several months for a unique longitudinal dataset.", "We also make sure that each conversation is at least five tweets, and that each dyad has at least twenty conversations.", "Collecting Twitter conversations We define a Twitter conversation as a chain of tweets where two users are consecutively replying to each other's tweets using the Twitter reply button.", "We initialize the set of users by randomly sampling thirteen users who reply to other users in English from the Twitter public streams 3 .", "Then we crawl each user's public tweets, and look at users who are mentioned in those tweets.", "It is a breadth-first search in the network defined by users as nodes and edges as conversations.", "We run this search for dyads until the depth of four, and filter out users who tweet in a non-English language.", "We use an open source tool for detecting English tweets 4 .", "To protect users' privacy, we replace Twitter userid, usernames and url in tweets with random strings.", "This dataset consists of 101,686 users, 61,451 dyads, 1,956,993 conversations and 17,178,638 tweets which were posted between August 2007 to July 2013.", "Table 5 summarizes the dataset.", "Annotating self-disclosure level To measure the accuracy of our model, we randomly sample 301 conversations, each with ten or fewer tweets, and ask three judges, fluent in English and graduate students/researchers, to annotate each tweet with the level of self-disclosure.", "Judges first read and discussed the definitions and examples of self-disclosure level shown in (Barak and Gluck-Ofri, 2007) , then they worked separately on a Web-based platform.", "As a result of annotation, there are 122 G level converstaions, 147 M level and 32 H level con- versations, and inter-rater agreement using Fleiss kappa (Fleiss, 1971 ) is 0.68, which is substantial agreement result (Landis and Koch, 1977) .", "Classification of Self-Disclosure Level This section describes experiments and results of SDTM as well as several other methods for classification of self-disclosure level.", "We first start with the annotated dataset in section 4.2 in which each tweet is annotated with SD level.", "We then aggregate all of the tweets of a conversation, and we compute the proportions of tweets in each SD level.", "When the proportion of tweets at M or H level is equal to or greater than 0.2, we take the level of the larger proportion and assign that level to the conversation.", "When the proportions of tweets at M or H level are both less than 0.2, we assign G to the SD level.", "The reason for setting 0.2 as the threshold is that a conversation containing tweets with H or M level of selfdisclosure usually starts with a greeting or a general comment, and contains one or more questions or comments before or after the self-disclosure tweet.", "We compare SDTM with the following methods for classifying conversations for SD level: • LDA (Blei et al., 2003) : A Bayesian topic model.", "Each conversation is treated as a document.", "Used in previous work (Bak et al., 2012) .", "• MedLDA (Zhu et al., 2012) : A supervised topic model for document classification.", "Each conversation is treated as a document and response variable can be mapped to a SD level.", "• LIWC 
(Tausczik and Pennebaker, 2010): Word counts of particular categories 5 .", "Used in previous work (Houghton and Joinson, 2012).", "• Bag of Words + Bigrams + Trigrams (BOW+): A bag of words, bigram and trigram features.", "We exclude features that appear only once or twice.", "• Seed words and trigrams (SEED): Occurrences of seed words/trigrams from SECRET which are described in section 3.3.", "• SDTM with seed words from annotated Tweets (SDTM−): To compare with SDTM below using seed words from SECRET, this uses seed words from the annotated data described in section 2.4.", "• ASUM (Jo and Oh, 2011 ): A joint model of sentiments and topics.", "We map each SD level to one sentiment and use the same seed words/trigrams from SECRET as in SDTM below.", "Used in previous work (Bak et al., 2012) .", "• First-person pronouns (FirstP): Occurrence of first-person pronouns which are described in section 3.2.", "To identify first-person pronouns, we tagged parts of speech in each tweet with the Twitter POS tagger (Owoputi et al., 2013) .", "• First-person pronouns + Seed words/trigrams (FP+SE1): First-person pronouns and seed words/trigrams from SECRET.", "• Two stage classifier with First-person pronouns + Seed words/trigrams (FP+SE2): A Method Acc G F 1 M F 1 H F Table 6 : SD level classification accuracies and Fmeasures using annotated data.", "Acc is accuracy, and G F 1 is F-measure for classifying the G level.", "Avg F 1 is the macroaveraged value of G F 1 , M F 1 and H F 1 .", "SDTM outperforms all other methods compared.", "The difference between SDTM and FirstP is statistically significant (p-value < 0.05 for accuracy, < 0.0001 for Avg F 1 ).", "two stage classifier with first-person pronouns and seed words/trigrams from SE-CRET.", "In the first stage, the classifier identifies G with first-person pronouns.", "Then in the second stage, the classifier uses seed words and trigrams to identify M and H levels.", "• SDTM: Our model with first-person pronouns and seed words/trigrams from SE-CRET.", "SEED, LIWC, LDA and FirstP cannot be used directly for classification, so we use Maximum entropy model with outputs of each of those models as features 6 .", "BOW+ uses SVM with a radial basis kernel which performs better than all other settings tried including maximum entropy.", "We split the data randomly into 80/20 for train/test.", "We run MedLDA, ASUM and SDTM 20 times each and compute the average accuracies and F-measure for each level.", "We run LDA and MedLDA with various number of topics from 80 to 140, and 120 topics shows best outputs.", "So we set 120 topics for LDA, MedLDA and ASUM, 60; 40; 40 topics for SDTM K G , K M and K H respectively which is best perform from 40; 40; 40 to 60; 60; 60 topics.", "We assume that a conversation has few topics and self-disclosure levels, so we set α = γ = 0.1 (Tang et al., 2014) .", "To incorporate the seed words and trigrams into ASUM and SDTM, we initialize β G , β M and β H differently.", "We assign a high value of 2.0 for each seed word and trigram for that level, and a low value of 10 −6 for each word that is a seed word for another level, and a default value of 0.01 for all other words.", "This approach is the same as previous papers (Jo and Oh, 2011; Kim et al., 2013) .", "As Table 6 shows, SDTM performs better than the other methods for accuracy as well as Fmeasure.", "LDA and MedLDA generally show the lowest performance, which is not surprising given these models are quite general and not tuned specifically for this type of semi-supervised 
classification task.", "BOW which is simple word features also does not perform well, showing especially low F-measure for the H level.", "LIWC and SEED perform better than LDA, but these have quite low F-measure for G and H levels.", "ASUM shows better performance for classifying H level than others, confirming the effectiveness of a topic modeling approach to this difficult task, but not as well as SDTM.", "FirstP shows good F-measure for the G level, but the H level F-measure is quite low, even lower than SEED.", "Combining first-person pronouns and seed words and trigrams (FP+SE1) shows better than each feature alone, and the two stage classifier (FP+SE2) which is a similar approach taken in SDTM shows better results.", "Finally, SDTM classifies G and M level at a similar accuracy with FirstP, FP+SE1 and FP+SE2, but it significantly improves accuracy for the H level compared to all other methods.", "Relations of Self-Disclosure and Conversation Behaviors In this section, we investigate whether there is a relationship between self-disclosure and conversation behaviors over time.", "Self-disclosure is one way to maintain and improve relationships (Jourard, 1971; Joinson and Paine, 2007) .", "So two people's intimacy changes over time has relationship with self-disclosure in their conversation.", "However, it is hard to identify intimacy between users in large scale online social network.", "So we choose conversation behaviors such as conversation frequency and length which can be treated as proxies for measuring intimacy between two people (Emmers- Sommer, 2004; Bak et al., 2012) .", "With SDTM, we can automatically classify the SD level of a large number of conversations, so we investigate whether there is a similar relationship between self-disclosure in conversations and subsequent conversation behaviors with the same partner on Twitter.", "For comparing conversation behaviors over time, we divided the conversations into two sets for each dyad.", "For the initial period, we include conversations from the dyad's first conversation to 20 days later.", "And for the subsequent period, we include conversations during the subsequent 10 days.", "We compute proportions of conversation for each SD level for each dyad in the initial and subsequent periods.", "More specifically, we ask the following three questions: 1.", "If a dyad shows high conversation frequency at a particular time period, would they display higher SD in their subsequent conversations?", "2.", "If a dyad displays high SD level in their conversations at a particular time period, would their subsequent conversations be longer?", "3.", "If a dyad displays high overall SD level, would their conversations increase in length over time more than dyads with lower overall SD level?", "Experiment Setup We first run SDTM with all of our Twitter conversation data with 150; 120; 120 topics for SDTM K G , K M and K H respectively.", "The hyper-parameters are the same as in section 5.", "To handle a large dataset, we employ a distributed algorithm (Newman et al., 2009) , and run with 28 threads.", "Table 7 shows some of the topics that were prominent in each SD level by KL-divergence.", "As expected, G level includes general topics such as food, celebrity, soccer and IT devices, M level includes personal communication and birthday, and finally, H level includes sickness and profanity.", "We define a new measurement, SD level score for a dyad in the period, which is a weighted sum of each conversation with SD levels mapped to 1, 2, and 3, 
for the levels G, M, and H, respectively.", "Figure 5 : Relationship between initial conversation frequency and subsequent SD level.", "The solid line is the linear regression line, and the coefficient is 0.0020 with p < 0.0001, which shows a significant positive relationship.", "Subsequent SD level 6.2 Does high frequency of conversation lead to more self-disclosure?", "We investigate whether the initial conversation frequency is correlated with the SD level in the subsequent period.", "We run linear regression with the initial conversation frequency as the independent variable, and SD level in the subsequent period as the dependent variable.", "The regression coefficient is 0.0020 with low pvalue (p < 0.0001).", "Figure 5 shows the scatter plot.", "We can see that the slope of the regression line is positive.", "Does high self-disclosure lead to longer conversations?", "Now we investigate the effect of the selfdisclosure level to conversation length.", "We run linear regression with the intial SD level score as the independent variable, and the rate of change in conversation length between initial period and subsequent period as the dependent variable.", "Conversation length is measured by the number of tweets in a conversation.", "The result of regression is that the independent variable's coefficient is 0.048 with a low p-value (p < 0.0001).", "Figure 6 shows the scatter plot with the regression line, and we can see that the slope of regression line is positive.", "H level 101 184 176 36 104 82 113 33 19 chocolate obama league send twitter going ass better lips butter he's win email follow party bitch sick kisses good romney game i'll tumblr weekend fuck feel love cake vote season sent tweet day yo throat smiles peanut right team dm following night shit cold softly milk president cup address account dinner fucking hope hand sugar people city know fb birthday lmao pain eyes cream good arsenal check followers Now we investigate the conversation length changes over time with three groups, low, medium, and high, by overall SD level.", "Then we investigate changes in conversation length over time.", "Figure 7 shows the results of this investigation.", "First, conversations are generally lengthier when SD level is high.", "This phenomenon is also ob- We divide dyads into three groups by SD level score as low, medium, and high.", "Conversation length noticeably increases over time in the medium and high groups, but only slight in the low group.", "served in figure 6 , but here we can see it as a long-term persistent pattern.", "Second, conversation length increases consistently and significantly for the high and medium groups, but for the low SD group, there is not a significant increase of conversation length over time.", "G level M level Related Work Prior work on quantitatively analyzing selfdisclosure has relied on user surveys (Ledbetter et al., 2011; Trepte and Reinecke, 2013) or human annotation (Barak and Gluck-Ofri, 2007; Courtney Walton and Rice, 2013) .", "These methods consume much time and effort, so they are not suitable for large-scale studies.", "In prior work closest to ours, Bak et al.", "(2012) showed that a topic model can be used to identify self-disclosure, but that work applies a two-step process in which a basic topic model is first applied to find the topics, and then the topics are post-processed for binary classification of self-disclosure.", "We improve upon this work by applying a single unified model of topics and self-disclosure for high accuracy in classifying 
the three levels of self-disclosure.", "Subjectivity, which is an aspect of expressing opinions (Pang and Lee, 2008; Wiebe et al., 2004) , is related to self-disclosure, but the two are different dimensions of linguistic behavior.", "There are indeed many high self-disclosure tweets that are subjective, but there are also counterexamples in the annotated dataset.", "The tweet \"England manager is Roy Hodgson.\" is low self-disclosure and low subjectivity, \"I have barely any hair left.\" is high self-disclosure but low subjectivity, and \"Senator stop lying!\" is low self-disclosure but high subjectivity.", "Conclusion and Future Work In this paper, we have presented the self-disclosure topic model (SDTM) for discovering topics and classifying SD levels from Twitter conversation data.", "We devised a set of effective seed words and trigrams, mined from a dataset of secrets.", "We also annotated Twitter conversations to make a ground-truth dataset for SD level.", "With the annotated data, we showed that SDTM outperforms previous methods in classification accuracy and F-measure.", "We publicly release the source code of SDTM and the dataset, including the annotated Twitter conversations and SECRET 7 .", "7 http://uilab.kaist.ac.kr/research/EMNLP2014", "We also analyzed the relationship between SD level and conversation behaviors over time.", "We found that there is a positive correlation between initial SD level and subsequent conversation length.", "Also, dyads show a higher level of SD if they initially display high conversation frequency.", "Finally, dyads with an overall medium or high SD level have longer conversations over time.", "These results support previous results in social psychology research with more robust results from a large-scale dataset, and show the effectiveness of computationally analyzing SD behavior.", "There are several future directions for this research.", "First, we can improve our modeling for higher accuracy and better interpretability.", "For instance, SDTM only considers first-person pronouns and topics.", "Naturally, there are other linguistic patterns that can be identified by humans but not captured by pronouns and topics.", "Second, the number of topics for each level is varied, and so we can explore nonparametric topic models (Teh et al., 2006) which infer the number of topics from the data.", "Third, we can look at the relationship between self-disclosure behavior and general online social network usage beyond conversations.", "We will explore these directions in our future work." ] }
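The regression analyses in Sections 6.2 and 6.3 above reduce to two ordinary-least-squares fits over per-dyad statistics. A minimal sketch follows, assuming a per-dyad table with hypothetical column and file names (none of these come from the released dataset), and assuming the rate of change in conversation length is the relative difference between the two periods.

```python
import pandas as pd
from scipy.stats import linregress

# One row per dyad; column names are hypothetical stand-ins for the
# quantities defined in Sections 6.1-6.3.
dyads = pd.read_csv("dyad_periods.csv")

# Section 6.2: initial conversation frequency -> subsequent SD level score.
r1 = linregress(dyads["init_conv_freq"], dyads["subs_sd_score"])
print(f"6.2 coefficient {r1.slope:.4f}, p = {r1.pvalue:.2g}")  # paper: 0.0020, p < 0.0001

# Section 6.3: initial SD level score -> rate of change in conversation
# length (assumed here to be the relative change between the two periods).
growth = (dyads["subs_conv_len"] - dyads["init_conv_len"]) / dyads["init_conv_len"]
r2 = linregress(dyads["init_sd_score"], growth)
print(f"6.3 coefficient {r2.slope:.3f}, p = {r2.pvalue:.2g}")  # paper: 0.048, p < 0.0001
```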
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "2.4", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "5", "6", "6.1", "6.3", "7", "8" ], "paper_header_content": [ "Introduction", "Self-Disclosure", "Self-disclosure (SD) level", "G Level of Self-Disclosure", "M Level of Self-Disclosure", "H Level of Self-Disclosure", "Model", "Classifying G vs M/H levels", "Classifying M vs H levels", "Inference", "Data Collection and Annotation", "Collecting Twitter conversations", "Annotating self-disclosure level", "Classification of Self-Disclosure Level", "Relations of Self-Disclosure and Conversation Behaviors", "Experiment Setup", "Does high self-disclosure lead to longer conversations?", "Related Work", "Conclusion and Future Work" ] }
GEM-SciDuet-train-75#paper-1188#slide-4
Self disclosure G level
General information and ideas: no information about self or someone close to him
General information and ideas: no information about self or someone close to him
[]
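For working with records like the one above, a minimal loading sketch, assuming the dump is serialized as one JSON object per line with exactly the field names shown (gem_id, paper_content, slide_title, target, references); adjust if the corpus is stored as a single JSON array or another container.

```python
import json

def iter_records(path):
    """Yield a few fields from a GEM-SciDuet dump, assuming one JSON
    object per line with the field layout shown in this file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            rec = json.loads(line)
            # paper_content holds parallel lists of sentence ids and texts
            sentences = rec["paper_content"]["paper_content_text"]
            yield rec["slide_title"], rec["target"], len(sentences)

# Example: count source sentences available for each slide's target text
for slide_title, target, n_sents in iter_records("sciduet_train.jsonl"):
    print(f"{slide_title}: {n_sents} source sentences, target length {len(target)}")
```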
GEM-SciDuet-train-75#paper-1188#slide-5
1188
Self-disclosure topic model for classifying and analyzing Twitter conversations
Self-disclosure, the act of revealing oneself to others, is an important social behavior that strengthens interpersonal relationships and increases social support. Although there are many social science studies of self-disclosure, they are based on manual coding of small datasets and questionnaires. We conduct a computational analysis of self-disclosure with a large dataset of naturally-occurring conversations, a semi-supervised machine learning algorithm, and a computational analysis of the effects of self-disclosure on subsequent conversations. We use a longitudinal dataset of 17 million tweets, all of which occurred in conversations that consist of five or more tweets directly replying to the previous tweet, and from dyads with twenty or more conversations each. We develop the self-disclosure topic model (SDTM), a variant of latent Dirichlet allocation (LDA) for automatically classifying the level of self-disclosure for each tweet. We take the results of SDTM and analyze the effects of self-disclosure on subsequent conversations. Our model significantly outperforms several comparable methods on classifying the level of self-disclosure, and the analysis of the longitudinal data using SDTM uncovers a significant and positive correlation between self-disclosure and conversation frequency and length.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238 ], "paper_content_text": [ "Introduction Self-disclosure is an important and pervasive social behavior.", "People disclose personal information about themselves to improve and maintain * This work was done when JinYeong Bak was a visiting student at Microsoft Research, Beijing, China.", "relationships (Jourard, 1971; Joinson and Paine, 2007) .", "A common instance of self-disclosure is the start of a conversation with an exchange of names and additional self-introductions.", "Another example of self-disclosure, shown in Figure 1c , where the information disclosed about a family member's serious illness, is much more personal than the exchange of names.", "In this paper, we seek to understand this important social behavior using a large-scale Twitter conversation data, automatically classifying the level of self-disclosure using machine learning and correlating the patterns with conversational behaviors which can serve as proxies for measuring intimacy between two conversational partners.", "Twitter conversation data, explained in more detail in section 4.1, enable an extremely large scale study of naturally-occurring self-disclosure behavior, compared to traditional social science studies.", "One challenge of such large scale study, though, remains in the lack of labeled groundtruth data of self-disclosure level.", "That is, naturally-occurring Twitter conversations do not come tagged with the level of self-disclosure in each conversation.", "To overcome that challenge, we propose a semi-supervised machine learning approach using probabilistic topic modeling.", "Our self-disclosure topic model (SDTM) assumes that self-disclosure behavior can be modeled using a combination of simple linguistic features (e.g., pronouns) with automatically discovered semantic themes (i.e., topics).", "For instance, an utterance \"I am finally through with this disastrous relationship\" uses a first-person pronoun and contains a topic about personal relationships.", "In comparison with various other models, SDTM shows the highest accuracy, and the resulting conversation frequency and length patterns on self-disclosure are shown different over time.", "Our contributions to the research community include the following: • We present key features and prior knowledge for identifying self-disclosure level, and show relevance of it with experiment results (Sec.", "2).", "• We present a topic model that explicitly includes the 
level of self-disclosure in a conversation using linguistic features and the latent semantic topics (Sec.", "3).", "• We collect a large dataset of Twitter conversations over three years and annotate a small subset with self-disclosure level (Sec.", "4).", "• We compare the classification accuracy of SDTM with other models and show that it performs the best (Sec.", "5).", "• We correlate the self-disclosure patterns and conversation behaviors to show that there is a significant relationship over time (Sec.", "6).", "Self-Disclosure In this section, we look at the social science literature for the definition of the levels of self-disclosure.", "Using that definition, we devise an approach to automatically identify the levels of self-disclosure in a large corpus of OSN conversations.", "We discuss three approaches: first, using first-person pronoun features; second, extracting seed words and phrases from the Twitter conversation corpus; and third, extracting seed words and phrases from an external corpus of anonymously posted secrets; and we demonstrate the efficacy of those approaches with an annotated corpus.", "Self-disclosure (SD) level To analyze self-disclosure, researchers categorize self-disclosure language into three levels: G (general) for no disclosure, M for medium disclosure, and H for high disclosure (Vondracek and Vondracek, 1971; Barak and Gluck-Ofri, 2007) .", "G Level of Self-Disclosure An obvious clue of self-disclosure is the use of first-person pronouns.", "For example, phrases such as 'I live' or 'My name is' indicate that the utterance contains personal information.", "In previous research, the simple method of counting first-person pronouns was used to measure the degree of self-disclosure (Joinson, 2001; Barak and Gluck-Ofri, 2007) .", "Consequently, the absence of a first-person pronoun signals that the utterance belongs in the G level of self-disclosure.", "We verify this pattern with a dataset of Tweets annotated with G, M, and H levels.", "We divide the annotated Tweets into two classes, G and M/H.", "Then we compute the mutual information of each unigram, bigram, or trigram feature to see which features are most discriminative.", "As Table 1 shows, 18 out of the 30 most discriminative features contain first-person pronouns.", "M Level of Self-Disclosure Utterances with M level include two types: 1) information related to past events and future plans, and 2) general information about self (Barak and Gluck-Ofri, 2007) .", "For the former, we add as seed trigrams 'I have been' and 'I will'.", "For the latter, we use seven types of information generally accepted to be personally identifiable information (McCallister, 2010) , as listed in the left column of Table 2 .", "To find the appropriate trigrams for those, we take Twitter conversation data (described in Section 4.1) and look for trigrams that begin with 'I' and 'my' and occur more than 200 times.", "We then check each one to see whether it is related to any of the seven types listed in the table.", "As a result, we find 57 seed trigrams for the M level.", "H Level of Self-Disclosure Utterances with H level express secretive wishes or sensitive information that exposes self or someone close (Barak and Gluck-Ofri, 2007) .", "These are generally kept as secrets.", "With this intuition, we crawled 26,523 posts from the Six Billion Secrets 1 site where users post secrets anonymously 2 .", "We call this external dataset SECRET.", "Unlike the G and M levels, evidence of H level self-disclosure tends to be topical, such as physical appearance, mental and physical illnesses, and family problems, so we 
take an approach of fitting a topic model driven by seed words.", "A similar approach has been successful in sentiment classification (Jo and Oh, 2011; Kim et al., 2013) .", "A critical component of this approach is the set of seed words with which to drive the discovery of topics that are most indicative of H level self-disclosure.", "To extract the seed words that express secretive personal information, we compute mutual information (Manning et al., 2008) with SECRET and 24,610 randomly selected tweets.", "We select 1,000 words with high mutual information and filter out stop words.", "Table 3 shows some of these words.", "To extract seed trigrams of secretive wishes, we again look for trigrams that start with 'I' or 'my', occur more than 200 times, and select trigrams of wishful thinking, such as 'I want to' and 'I wish I'.", "In total, there are 88 seed words and 8 seed trigrams for H. Since SECRET is quite different from Twitter, we must show that posts in SECRET are semantically similar to the H level Tweets.", "Rather than directly comparing SECRET posts and Tweets, we use the same method of extracting discriminative word features from the annotated H level Tweets (see Section 4.2).", "Table 3 shows the seed words extracted from SECRET as well as the annotated Tweets.", "Because the annotated dataset consists of only 200 conversations, the coverage of the topics seems narrower than the much larger SECRET, but both datasets show similarities in the topics.", "This, combined with the results of the model with the two sets of seed words (see Section 5 for the results), shows that SECRET is an effective and simple-to-obtain substitute for an annotated corpus of the H level of self-disclosure.", "This section describes our model, the self-disclosure topic model (SDTM), for classifying the self-disclosure level and discovering topics for each self-disclosure level.", "Table 4 : Notation. r_{ct}: SD level of tweet ct; \\pi_c: SD level proportion of conversation c; \\theta^G_c, \\theta^M_c, \\theta^H_c: topic proportions of {G, M, H} in conversation c; \\phi^G, \\phi^M, \\phi^H: word distributions of {G, M, H}; \\alpha, \\gamma: Dirichlet priors for \\theta, \\pi; \\beta^G, \\beta^M, \\beta^H: Dirichlet priors for \\phi^G, \\phi^M, \\phi^H; n_{cl}: number of tweets in conversation c assigned SD level l.", "Model In section 2, we discussed different approaches to identifying each level of self-disclosure, based on social science literature, annotated and unannotated Tweets, and an external corpus of secret posts.", "In this section, we describe our self-disclosure topic model, based on the widely used latent Dirichlet allocation (Blei et al., 2003) , which incorporates those approaches.", "Figure 2 illustrates the graphical model of SDTM and how those approaches are embodied in it.", "Figure 3 : Generative process of SDTM.", "1. For each level l \\in {G, M, H}: for each topic k \\in {1, ..., K_l}: draw \\phi^l_k \\sim Dir(\\beta^l).", "2. For each conversation c \\in {1, ..., C}: (a) draw \\theta^G_c \\sim Dir(\\alpha); (b) draw \\theta^M_c \\sim Dir(\\alpha); (c) draw \\theta^H_c \\sim Dir(\\alpha); (d) draw \\pi_c \\sim Dir(\\gamma); (e) for each message t \\in {1, ..., T}: i. observe first-person pronoun features x_{ct}; ii. draw \\omega_{ct} \\sim MaxEnt(x_{ct}, \\lambda); iii. draw y_{ct} \\sim Bernoulli(\\omega_{ct}); iv. if y_{ct} = 0 (G level): A. draw z_{ct} \\sim Mult(\\theta^G_c); B. for each word n \\in {1, ..., N}: draw word w_{ctn} \\sim Mult(\\phi^G_{z_{ct}}); else (M or H level): A. draw r_{ct} \\sim Mult(\\pi_c); B. draw z_{ct} \\sim Mult(\\theta^{r_{ct}}_c); C. for each word n \\in {1, ..., N}: draw word w_{ctn} \\sim Mult(\\phi^{r_{ct}}_{z_{ct}}).", "The first approach, based on the first-person pronouns, is implemented by the observed variable x_{ct} and the parameters \\lambda from a maximum entropy classifier for the G vs. M/H level.", "The approach of seed words and phrases for levels M and H is implemented by three separate word-topic probability vectors for the three levels of SD: \\phi^l, which has a Bayesian informative prior \\beta^l, where l \\in {G, M, H}, the three levels of self-disclosure.", "Table 4 lists the notations used in the model and the generative process, and Figure 3 describes the generative process.", "Classifying G vs M/H levels Classifying the SD level for each tweet is done in two parts, and the first part classifies G vs. M/H levels with first-person pronouns (I, my, me).", "In the graphical model, y is the latent variable that represents this classification, and \\omega is the distribution over y.", "x is the observation of the first-person pronoun in the tweets, and \\lambda are the parameters learned from the maximum entropy classifier.", "With the annotated Twitter conversation dataset (described in Section 4.2), we experimented with several classifiers (decision tree, Naive Bayes) and chose the maximum entropy classifier because it performed the best, similar to other joint topic models (Zhao et al., 2010; Mukherjee et al., 2013) .", "Classifying M vs H levels The second part of the classification, between the M and the H level, is driven by informative priors with seed words and seed trigrams.", "In the graphical model, r is the latent variable that represents this classification, and \\pi is the distribution over r.", "\\gamma is a non-informative prior for \\pi, and \\beta^l is an informative prior for each SD level by seed words.", "For example, we assign a high value for the seed word 'acne' for \\beta^H , and a low value for 'My name is'.", "This approach is the same as in joint models of topic and sentiment (Jo and Oh, 2011; Kim et al., 2013) .", "Inference For posterior inference of SDTM, we use collapsed Gibbs sampling, which integrates out the latent random variables \\omega, \\pi, \\theta, and \\phi.", "Then we only need to compute y, r and z for each tweet.", "We compute the full conditional distribution p(y_{ct} = j', r_{ct} = l', z_{ct} = k' | \\mathbf{y}^{-ct}, \\mathbf{r}^{-ct}, \\mathbf{z}^{-ct}, \\mathbf{w}, \\mathbf{x}) for tweet ct as follows: p(y_{ct} = 0, z_{ct} = k' | \\mathbf{y}^{-ct}, \\mathbf{r}^{-ct}, \\mathbf{z}^{-ct}, \\mathbf{w}, \\mathbf{x}) \\propto \\frac{\\exp(\\lambda_0 \\cdot x_{ct})}{\\sum_{j=0}^{1} \\exp(\\lambda_j \\cdot x_{ct})} \\, g(c, t, G, k') and p(y_{ct} = 1, r_{ct} = l', z_{ct} = k' | \\mathbf{y}^{-ct}, \\mathbf{r}^{-ct}, \\mathbf{z}^{-ct}, \\mathbf{w}, \\mathbf{x}) \\propto \\frac{\\exp(\\lambda_1 \\cdot x_{ct})}{\\sum_{j=0}^{1} \\exp(\\lambda_j \\cdot x_{ct})} \\, (\\gamma_{l'} + n_{cl'}^{(-ct)}) \\, g(c, t, l', k'), where \\mathbf{y}^{-ct}, \\mathbf{r}^{-ct}, \\mathbf{z}^{-ct} are y, r, z without tweet ct, m_{ctk'(\\cdot)} is m_{ctk'v} marginalized over words v, and the function g(c, t, l', k') is g(c, t, l', k') = \\frac{\\alpha_{k'} + n_{ck'}^{l'(-ct)}}{\\sum_{k=1}^{K} (\\alpha_k + n_{ck}^{l'})} \\cdot \\frac{\\Gamma(\\sum_{v=1}^{V} (\\beta_v^{l'} + n_{k'v}^{l'(-ct)}))}{\\Gamma(\\sum_{v=1}^{V} (\\beta_v^{l'} + n_{k'v}^{l'(-ct)}) + m_{ctk'(\\cdot)})} \\cdot \\prod_{v=1}^{V} \\frac{\\Gamma(\\beta_v^{l'} + n_{k'v}^{l'(-ct)} + m_{ctk'v})}{\\Gamma(\\beta_v^{l'} + n_{k'v}^{l'(-ct)})}.", "Data Collection and Annotation To test our self-disclosure topic model, we use a large dataset of conversations consisting of Tweets over three years, such that we can analyze the relationship between self-disclosure behavior and conversation frequency and length over time.", "We chose to crawl Twitter because it offers a practical and large source of conversations (Ritter et al., 2010) .", "Others have also analyzed Twitter conversations for natural language and social media 
research (boyd et al., 2010; Danescu-Niculescu-Mizil et al., 2011) , but we collect conversations from the same set of dyads over several months for a unique longitudinal dataset.", "Table 5 : Dataset of Twitter conversations (Users: 101,686; Dyads: 61,451; Conversations: 1,956,993; Tweets: 17,178,638).", "We chose conversations consisting of five or more tweets each.", "We chose dyads with twenty or more conversations.", "We also make sure that each conversation is at least five tweets, and that each dyad has at least twenty conversations.", "Collecting Twitter conversations We define a Twitter conversation as a chain of tweets where two users are consecutively replying to each other's tweets using the Twitter reply button.", "We initialize the set of users by randomly sampling thirteen users who reply to other users in English from the Twitter public streams 3 .", "Then we crawl each user's public tweets, and look at users who are mentioned in those tweets.", "It is a breadth-first search in the network defined by users as nodes and edges as conversations.", "We run this search for dyads until a depth of four, and filter out users who tweet in a non-English language.", "We use an open source tool for detecting English tweets 4 .", "To protect users' privacy, we replace Twitter user ids, usernames and urls in tweets with random strings.", "This dataset consists of 101,686 users, 61,451 dyads, 1,956,993 conversations and 17,178,638 tweets which were posted between August 2007 and July 2013.", "Table 5 summarizes the dataset.", "Annotating self-disclosure level To measure the accuracy of our model, we randomly sample 301 conversations, each with ten or fewer tweets, and ask three judges, fluent in English and graduate students/researchers, to annotate each tweet with the level of self-disclosure.", "Judges first read and discussed the definitions and examples of self-disclosure level shown in (Barak and Gluck-Ofri, 2007) , then they worked separately on a Web-based platform.", "As a result of annotation, there are 122 G level conversations, 147 M level and 32 H level conversations, and the inter-rater agreement using Fleiss' kappa (Fleiss, 1971) is 0.68, which is a substantial agreement result (Landis and Koch, 1977) .", "Classification of Self-Disclosure Level This section describes experiments and results of SDTM as well as several other methods for classification of self-disclosure level.", "We start with the annotated dataset in section 4.2, in which each tweet is annotated with an SD level.", "We then aggregate all of the tweets of a conversation, and we compute the proportions of tweets in each SD level.", "When the proportion of tweets at the M or H level is equal to or greater than 0.2, we take the level of the larger proportion and assign that level to the conversation.", "When the proportions of tweets at the M or H level are both less than 0.2, we assign G to the SD level.", "The reason for setting 0.2 as the threshold is that a conversation containing tweets with H or M level of self-disclosure usually starts with a greeting or a general comment, and contains one or more questions or comments before or after the self-disclosure tweet.", "We compare SDTM with the following methods for classifying conversations for SD level: • LDA (Blei et al., 2003) : A Bayesian topic model.", "Each conversation is treated as a document.", "Used in previous work (Bak et al., 2012) .", "• MedLDA (Zhu et al., 2012) : A supervised topic model for document classification.", "Each conversation is treated as a document and the response variable can be mapped to an SD level.", "• LIWC 
(Tausczik and Pennebaker, 2010): Word counts of particular categories 5 .", "Used in previous work (Houghton and Joinson, 2012).", "• Bag of Words + Bigrams + Trigrams (BOW+): A bag of words, bigram and trigram features.", "We exclude features that appear only once or twice.", "• Seed words and trigrams (SEED): Occurrences of seed words/trigrams from SECRET, which are described in section 3.3.", "• SDTM with seed words from annotated Tweets (SDTM−): To compare with SDTM below, which uses seed words from SECRET, this uses seed words from the annotated data described in section 2.4.", "• ASUM (Jo and Oh, 2011): A joint model of sentiments and topics.", "We map each SD level to one sentiment and use the same seed words/trigrams from SECRET as in SDTM below.", "Used in previous work (Bak et al., 2012) .", "• First-person pronouns (FirstP): Occurrence of first-person pronouns, which are described in section 3.2.", "To identify first-person pronouns, we tagged parts of speech in each tweet with the Twitter POS tagger (Owoputi et al., 2013) .", "• First-person pronouns + Seed words/trigrams (FP+SE1): First-person pronouns and seed words/trigrams from SECRET.", "• Two stage classifier with First-person pronouns + Seed words/trigrams (FP+SE2): A two stage classifier with first-person pronouns and seed words/trigrams from SECRET.", "In the first stage, the classifier identifies G with first-person pronouns.", "Then in the second stage, the classifier uses seed words and trigrams to identify M and H levels.", "• SDTM: Our model with first-person pronouns and seed words/trigrams from SECRET.", "Table 6 : SD level classification accuracies and F-measures using annotated data.", "Acc is accuracy, and G F1 is the F-measure for classifying the G level.", "Avg F1 is the macroaveraged value of G F1, M F1 and H F1.", "SDTM outperforms all other methods compared.", "The difference between SDTM and FirstP is statistically significant (p-value < 0.05 for accuracy, < 0.0001 for Avg F1).", "SEED, LIWC, LDA and FirstP cannot be used directly for classification, so we use a maximum entropy model with the outputs of each of those models as features 6 .", "BOW+ uses an SVM with a radial basis kernel, which performs better than all other settings tried, including maximum entropy.", "We split the data randomly into 80/20 for train/test.", "We run MedLDA, ASUM and SDTM 20 times each and compute the average accuracies and F-measures for each level.", "We run LDA and MedLDA with various numbers of topics from 80 to 140, and 120 topics shows the best outputs.", "So we set 120 topics for LDA, MedLDA and ASUM, and 60; 40; 40 topics for the SDTM K G , K M and K H respectively, which performs best among settings from 40; 40; 40 to 60; 60; 60 topics.", "We assume that a conversation has few topics and self-disclosure levels, so we set α = γ = 0.1 (Tang et al., 2014) .", "To incorporate the seed words and trigrams into ASUM and SDTM, we initialize β G , β M and β H differently.", "We assign a high value of 2.0 for each seed word and trigram for that level, a low value of 10 −6 for each word that is a seed word for another level, and a default value of 0.01 for all other words.", "This approach is the same as in previous papers (Jo and Oh, 2011; Kim et al., 2013) .", "As Table 6 shows, SDTM performs better than the other methods for accuracy as well as F-measure.", "LDA and MedLDA generally show the lowest performance, which is not surprising given these models are quite general and not tuned specifically for this type of semi-supervised classification task.", "BOW+, which uses simple word features, also does not perform well, showing an especially low F-measure for the H level.", "LIWC and SEED perform better than LDA, but these have quite low F-measures for the G and H levels.", "ASUM shows better performance for classifying the H level than the others, confirming the effectiveness of a topic modeling approach to this difficult task, but not as well as SDTM.", "FirstP shows a good F-measure for the G level, but its H level F-measure is quite low, even lower than SEED.", "Combining first-person pronouns and seed words and trigrams (FP+SE1) performs better than each feature alone, and the two stage classifier (FP+SE2), which takes a similar approach to SDTM, shows better results.", "Finally, SDTM classifies the G and M levels at a similar accuracy to FirstP, FP+SE1 and FP+SE2, but it significantly improves accuracy for the H level compared to all other methods.", "Relations of Self-Disclosure and Conversation Behaviors In this section, we investigate whether there is a relationship between self-disclosure and conversation behaviors over time.", "Self-disclosure is one way to maintain and improve relationships (Jourard, 1971; Joinson and Paine, 2007) .", "So changes in two people's intimacy over time are related to the self-disclosure in their conversations.", "However, it is hard to identify intimacy between users in a large-scale online social network.", "So we choose conversation behaviors such as conversation frequency and length, which can be treated as proxies for measuring intimacy between two people (Emmers-Sommer, 2004; Bak et al., 2012) .", "With SDTM, we can automatically classify the SD level of a large number of conversations, so we investigate whether there is a similar relationship between self-disclosure in conversations and subsequent conversation behaviors with the same partner on Twitter.", "For comparing conversation behaviors over time, we divided the conversations into two sets for each dyad.", "For the initial period, we include conversations from the dyad's first conversation to 20 days later.", "And for the subsequent period, we include conversations during the subsequent 10 days.", "We compute proportions of conversations for each SD level for each dyad in the initial and subsequent periods.", "More specifically, we ask the following three questions: 1.", "If a dyad shows high conversation frequency at a particular time period, would they display higher SD in their subsequent conversations?", "2.", "If a dyad displays a high SD level in their conversations at a particular time period, would their subsequent conversations be longer?", "3.", "If a dyad displays a high overall SD level, would their conversations increase in length over time more than dyads with a lower overall SD level?", "Experiment Setup We first run SDTM with all of our Twitter conversation data with 150; 120; 120 topics for the SDTM K G , K M and K H respectively.", "The hyper-parameters are the same as in section 5.", "To handle a large dataset, we employ a distributed algorithm (Newman et al., 2009) , and run with 28 threads.", "Table 7 shows some of the topics that were prominent in each SD level by KL-divergence.", "As expected, the G level includes general topics such as food, celebrity, soccer and IT devices, the M level includes personal communication and birthdays, and finally, the H level includes sickness and profanity.", "We define a new measurement, the SD level score for a dyad in a period, which is a weighted sum of each conversation with the SD levels mapped to 1, 2, and 3, 
for the levels G, M, and H, respectively.", "Figure 5 : Relationship between initial conversation frequency and subsequent SD level.", "The solid line is the linear regression line, and the coefficient is 0.0020 with p < 0.0001, which shows a significant positive relationship.", "6.2 Does high frequency of conversation lead to more self-disclosure?", "We investigate whether the initial conversation frequency is correlated with the SD level in the subsequent period.", "We run linear regression with the initial conversation frequency as the independent variable, and the SD level in the subsequent period as the dependent variable.", "The regression coefficient is 0.0020 with a low p-value (p < 0.0001).", "Figure 5 shows the scatter plot.", "We can see that the slope of the regression line is positive.", "6.3 Does high self-disclosure lead to longer conversations?", "Now we investigate the effect of the self-disclosure level on conversation length.", "We run linear regression with the initial SD level score as the independent variable, and the rate of change in conversation length between the initial period and the subsequent period as the dependent variable.", "Conversation length is measured by the number of tweets in a conversation.", "The result of the regression is that the independent variable's coefficient is 0.048 with a low p-value (p < 0.0001).", "Figure 6 shows the scatter plot with the regression line, and we can see that the slope of the regression line is positive.", "Table 7 : Topics prominent in each SD level with their top words (word lists omitted).", "Now we investigate how conversation length changes over time with three groups, low, medium, and high, by overall SD level.", "Figure 7 shows the results of this investigation.", "First, conversations are generally lengthier when the SD level is high.", "This phenomenon is also observed in Figure 6 , but here we can see it as a long-term persistent pattern.", "Second, conversation length increases consistently and significantly for the high and medium groups, but for the low SD group, there is no significant increase in conversation length over time.", "Figure 7 : We divide dyads into three groups by SD level score as low, medium, and high.", "Conversation length noticeably increases over time in the medium and high groups, but only slightly in the low group.", "Related Work Prior work on quantitatively analyzing self-disclosure has relied on user surveys (Ledbetter et al., 2011; Trepte and Reinecke, 2013) or human annotation (Barak and Gluck-Ofri, 2007; Courtney Walton and Rice, 2013) .", "These methods consume much time and effort, so they are not suitable for large-scale studies.", "In prior work closest to ours, Bak et al. (2012) showed that a topic model can be used to identify self-disclosure, but that work applies a two-step process in which a basic topic model is first applied to find the topics, and then the topics are post-processed for binary classification of self-disclosure.", "We improve upon this work by applying a single unified model of topics and self-disclosure for high accuracy in classifying 
the three levels of self-disclosure.", "Subjectivity, which is an aspect of expressing opinions (Pang and Lee, 2008; Wiebe et al., 2004) , is related to self-disclosure, but the two are different dimensions of linguistic behavior.", "There are indeed many high self-disclosure tweets that are subjective, but there are also counterexamples in the annotated dataset.", "The tweet \"England manager is Roy Hodgson.\" is low self-disclosure and low subjectivity, \"I have barely any hair left.\" is high self-disclosure but low subjectivity, and \"Senator stop lying!\" is low self-disclosure but high subjectivity.", "Conclusion and Future Work In this paper, we have presented the self-disclosure topic model (SDTM) for discovering topics and classifying SD levels from Twitter conversation data.", "We devised a set of effective seed words and trigrams, mined from a dataset of secrets.", "We also annotated Twitter conversations to make a ground-truth dataset for SD level.", "With the annotated data, we showed that SDTM outperforms previous methods in classification accuracy and F-measure.", "We publicly release the source code of SDTM and the dataset, including the annotated Twitter conversations and SECRET 7 .", "7 http://uilab.kaist.ac.kr/research/EMNLP2014", "We also analyzed the relationship between SD level and conversation behaviors over time.", "We found that there is a positive correlation between initial SD level and subsequent conversation length.", "Also, dyads show a higher level of SD if they initially display high conversation frequency.", "Finally, dyads with an overall medium or high SD level have longer conversations over time.", "These results support previous results in social psychology research with more robust results from a large-scale dataset, and show the effectiveness of computationally analyzing SD behavior.", "There are several future directions for this research.", "First, we can improve our modeling for higher accuracy and better interpretability.", "For instance, SDTM only considers first-person pronouns and topics.", "Naturally, there are other linguistic patterns that can be identified by humans but not captured by pronouns and topics.", "Second, the number of topics for each level is varied, and so we can explore nonparametric topic models (Teh et al., 2006) which infer the number of topics from the data.", "Third, we can look at the relationship between self-disclosure behavior and general online social network usage beyond conversations.", "We will explore these directions in our future work." ] }
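To make the sampler in Section 3.4 above concrete: for each tweet, every (level, topic) pair is scored and one is drawn. The sketch below is a simplification, not the exact update: it replaces the ratios of Gamma functions over the tweet's words with point estimates phi_hat of the per-level topic-word distributions, and all array and argument names are illustrative assumptions.

```python
import numpy as np

LEVELS = ("G", "M", "H")

def sample_level_and_topic(word_ids, p_y, n_cl, n_ck, phi_hat, alpha, gamma, rng):
    """Draw (SD level, topic) for one tweet from an approximate full
    conditional. p_y: MaxEnt probabilities [P(y=0|x), P(y=1|x)] from the
    pronoun features; n_cl: counts of M/H tweets in this conversation
    (length 2); n_ck: dict level -> conversation-topic count vector;
    phi_hat: dict level -> (K_l, V) row-stochastic topic-word estimates.
    For long tweets, work in log space to avoid underflow."""
    weights, labels = [], []
    for i, level in enumerate(LEVELS):
        theta = (alpha + n_ck[level]) / (alpha + n_ck[level]).sum()
        word_lik = phi_hat[level][:, word_ids].prod(axis=1)  # p(words | topic)
        if level == "G":
            w = p_y[0] * theta * word_lik
        else:
            w = p_y[1] * (gamma + n_cl[i - 1]) * theta * word_lik
        weights.append(w)
        labels += [(level, k) for k in range(len(w))]
    w = np.concatenate(weights)
    return labels[rng.choice(len(labels), p=w / w.sum())]
```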
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "2.4", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "5", "6", "6.1", "6.3", "7", "8" ], "paper_header_content": [ "Introduction", "Self-Disclosure", "Self-disclosure (SD) level", "G Level of Self-Disclosure", "M Level of Self-Disclosure", "H Level of Self-Disclosure", "Model", "Classifying G vs M/H levels", "Classifying M vs H levels", "Inference", "Data Collection and Annotation", "Collecting Twitter conversations", "Annotating self-disclosure level", "Classification of Self-Disclosure Level", "Relations of Self-Disclosure and Conversation Behaviors", "Experiment Setup", "Does high self-disclosure lead to longer conversations?", "Related Work", "Conclusion and Future Work" ] }
GEM-SciDuet-train-75#paper-1188#slide-5
Self disclosure M level
General information about self or someone close to him: personal events, age, occupation and family members
General information about self or someone close to him: personal events, age, occupation and family members
[]
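The conversation-level labeling rule in Section 5 of the paper content above (assign G unless at least 20% of a conversation's tweets are M or H; otherwise take the level with the larger proportion) translates directly to code. The tie-breaking toward M below is an assumption, since the paper does not specify ties.

```python
def conversation_sd_level(tweet_levels):
    """Assign a conversation-level SD label from per-tweet labels,
    using the 0.2 threshold described in Section 5."""
    n = len(tweet_levels)
    p_m = tweet_levels.count("M") / n
    p_h = tweet_levels.count("H") / n
    if p_m < 0.2 and p_h < 0.2:
        return "G"
    return "M" if p_m >= p_h else "H"  # ties broken toward M (assumption)

assert conversation_sd_level(["G", "G", "M", "G", "G"]) == "M"  # exactly 0.2 are M
assert conversation_sd_level(["G", "G", "G", "G", "H"]) == "H"
assert conversation_sd_level(["G"] * 9 + ["M"]) == "G"          # below threshold
```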
GEM-SciDuet-train-75#paper-1188#slide-6
1188
Self-disclosure topic model for classifying and analyzing Twitter conversations
Self-disclosure, the act of revealing oneself to others, is an important social behavior that strengthens interpersonal relationships and increases social support. Although there are many social science studies of self-disclosure, they are based on manual coding of small datasets and questionnaires. We conduct a computational analysis of self-disclosure with a large dataset of naturally-occurring conversations, a semi-supervised machine learning algorithm, and a computational analysis of the effects of self-disclosure on subsequent conversations. We use a longitudinal dataset of 17 million tweets, all of which occurred in conversations that consist of five or more tweets directly replying to the previous tweet, and from dyads with twenty or more conversations each. We develop the self-disclosure topic model (SDTM), a variant of latent Dirichlet allocation (LDA) for automatically classifying the level of self-disclosure for each tweet. We take the results of SDTM and analyze the effects of self-disclosure on subsequent conversations. Our model significantly outperforms several comparable methods on classifying the level of self-disclosure, and the analysis of the longitudinal data using SDTM uncovers a significant and positive correlation between self-disclosure and conversation frequency and length.
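The H-level seed words described in Section 2.4 above are chosen by mutual information between word occurrence and the SECRET-vs-random-tweets label. A sketch using scikit-learn's estimator of that quantity; the document lists are placeholders, and mutual_info_classif is a stand-in for the exact MI computation cited from Manning et al. (2008).

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import mutual_info_classif

def top_seed_words(secret_docs, tweet_docs, k=1000):
    """Rank words by mutual information with the SECRET-vs-tweets label,
    as in Section 2.4 (stop words removed, as in the paper)."""
    docs = list(secret_docs) + list(tweet_docs)
    y = np.array([1] * len(secret_docs) + [0] * len(tweet_docs))
    vec = CountVectorizer(binary=True, stop_words="english")
    X = vec.fit_transform(docs)
    mi = mutual_info_classif(X, y, discrete_features=True)
    order = np.argsort(mi)[::-1][:k]
    vocab = vec.get_feature_names_out()
    return [vocab[i] for i in order]
```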
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238 ], "paper_content_text": [ "Introduction Self-disclosure is an important and pervasive social behavior.", "People disclose personal information about themselves to improve and maintain * This work was done when JinYeong Bak was a visiting student at Microsoft Research, Beijing, China.", "relationships (Jourard, 1971; Joinson and Paine, 2007) .", "A common instance of self-disclosure is the start of a conversation with an exchange of names and additional self-introductions.", "Another example of self-disclosure, shown in Figure 1c , where the information disclosed about a family member's serious illness, is much more personal than the exchange of names.", "In this paper, we seek to understand this important social behavior using a large-scale Twitter conversation data, automatically classifying the level of self-disclosure using machine learning and correlating the patterns with conversational behaviors which can serve as proxies for measuring intimacy between two conversational partners.", "Twitter conversation data, explained in more detail in section 4.1, enable an extremely large scale study of naturally-occurring self-disclosure behavior, compared to traditional social science studies.", "One challenge of such large scale study, though, remains in the lack of labeled groundtruth data of self-disclosure level.", "That is, naturally-occurring Twitter conversations do not come tagged with the level of self-disclosure in each conversation.", "To overcome that challenge, we propose a semi-supervised machine learning approach using probabilistic topic modeling.", "Our self-disclosure topic model (SDTM) assumes that self-disclosure behavior can be modeled using a combination of simple linguistic features (e.g., pronouns) with automatically discovered semantic themes (i.e., topics).", "For instance, an utterance \"I am finally through with this disastrous relationship\" uses a first-person pronoun and contains a topic about personal relationships.", "In comparison with various other models, SDTM shows the highest accuracy, and the resulting conversation frequency and length patterns on self-disclosure are shown different over time.", "Our contributions to the research community include the following: • We present key features and prior knowledge for identifying self-disclosure level, and show relevance of it with experiment results (Sec.", "2).", "• We present a topic model that explicitly includes the 
level of self-disclosure in a conversation using linguistic features and the latent semantic topics (Sec.", "3).", "• We collect a large dataset of Twitter conversations over three years and annotate a small subset with self-disclosure level (Sec.", "4).", "• We compare the classification accuracy of SDTM with other models and show that it performs the best (Sec.", "5).", "• We correlate the self-disclosure patterns and conversation behaviors to show that there is a significant relationship over time (Sec.", "6).", "Self-Disclosure In this section, we look at the social science literature for the definition of the levels of self-disclosure.", "Using that definition, we devise an approach to automatically identify the levels of self-disclosure in a large corpus of OSN conversations.", "We discuss three approaches: first, using first-person pronoun features; second, extracting seed words and phrases from the Twitter conversation corpus; and third, extracting seed words and phrases from an external corpus of anonymously posted secrets; and we demonstrate the efficacy of those approaches with an annotated corpus.", "Self-disclosure (SD) level To analyze self-disclosure, researchers categorize self-disclosure language into three levels: G (general) for no disclosure, M for medium disclosure, and H for high disclosure (Vondracek and Vondracek, 1971; Barak and Gluck-Ofri, 2007) .", "G Level of Self-Disclosure An obvious clue of self-disclosure is the use of first-person pronouns.", "For example, phrases such as 'I live' or 'My name is' indicate that the utterance contains personal information.", "In previous research, the simple method of counting first-person pronouns was used to measure the degree of self-disclosure (Joinson, 2001; Barak and Gluck-Ofri, 2007) .", "Consequently, the absence of a first-person pronoun signals that the utterance belongs in the G level of self-disclosure.", "We verify this pattern with a dataset of Tweets annotated with G, M, and H levels.", "We divide the annotated Tweets into two classes, G and M/H.", "Then we compute the mutual information of each unigram, bigram, or trigram feature to see which features are most discriminative.", "As Table 1 shows, 18 out of the 30 most discriminative features contain first-person pronouns.", "M Level of Self-Disclosure Utterances with M level include two types: 1) information related to past events and future plans, and 2) general information about self (Barak and Gluck-Ofri, 2007) .", "For the former, we add as seed trigrams 'I have been' and 'I will'.", "For the latter, we use seven types of information generally accepted to be personally identifiable information (McCallister, 2010) , as listed in the left column of Table 2 .", "To find the appropriate trigrams for those, we take Twitter conversation data (described in Section 4.1) and look for trigrams that begin with 'I' and 'my' and occur more than 200 times.", "We then check each one to see whether it is related to any of the seven types listed in the table.", "As a result, we find 57 seed trigrams for the M level.", "H Level of Self-Disclosure Utterances with H level express secretive wishes or sensitive information that exposes self or someone close (Barak and Gluck-Ofri, 2007) .", "These are generally kept as secrets.", "With this intuition, we crawled 26,523 posts from the Six Billion Secrets 1 site where users post secrets anonymously 2 .", "We call this external dataset SECRET.", "Unlike the G and M levels, evidence of H level self-disclosure tends to be topical, such as physical appearance, mental and physical illnesses, and family problems, so we 
take an approach of fitting a topic model driven by seed words.", "A similar approach has been successful in sentiment classification (Jo and Oh, 2011; Kim et al., 2013) .", "A critical component of this approach is the set of seed words with which to drive the discovery of topics that are most indicative of H level self-disclosure.", "To extract the seed words that express secretive personal information, we compute mutual information (Manning et al., 2008) with SECRET and 24,610 randomly selected tweets.", "We select 1,000 words with high mutual information and filter out stop words.", "Table 3 shows some of these words.", "To extract seed trigrams of secretive wishes, we again look for trigrams that start with 'I' or 'my', occur more than 200 times, and select trigrams of wishful thinking, such as 'I want to' and 'I wish I'.", "In total, there are 88 seed words and 8 seed trigrams for H. Since SECRET is quite different from Twitter, we must show that posts in SECRET are semantically similar to the H level Tweets.", "Rather than directly comparing SECRET posts and Tweets, we use the same method of extracting discriminative word features from the annotated H level Tweets (see Section 4.2).", "Table 3 shows the seed words extracted from SECRET as well as the annotated Tweets.", "Because the annotated dataset consists of only 200 conversations, the coverage of the topics seems narrower than the much larger SECRET, but both datasets show similarities in the topics.", "This, combined with the results of the model with the two sets of seed words (see Section 5 for the results), shows that SECRET is an effective and simple-to-obtain substitute for an annotated corpus of the H level of self-disclosure.", "This section describes our model, the self-disclosure topic model (SDTM), for classifying the self-disclosure level and discovering topics for each self-disclosure level.", "Table 4 : Notation. r_{ct}: SD level of tweet ct; \\pi_c: SD level proportion of conversation c; \\theta^G_c, \\theta^M_c, \\theta^H_c: topic proportions of {G, M, H} in conversation c; \\phi^G, \\phi^M, \\phi^H: word distributions of {G, M, H}; \\alpha, \\gamma: Dirichlet priors for \\theta, \\pi; \\beta^G, \\beta^M, \\beta^H: Dirichlet priors for \\phi^G, \\phi^M, \\phi^H; n_{cl}: number of tweets in conversation c assigned SD level l.", "Model In section 2, we discussed different approaches to identifying each level of self-disclosure, based on social science literature, annotated and unannotated Tweets, and an external corpus of secret posts.", "In this section, we describe our self-disclosure topic model, based on the widely used latent Dirichlet allocation (Blei et al., 2003) , which incorporates those approaches.", "Figure 2 illustrates the graphical model of SDTM and how those approaches are embodied in it.", "Figure 3 : Generative process of SDTM.", "1. For each level l \\in {G, M, H}: for each topic k \\in {1, ..., K_l}: draw \\phi^l_k \\sim Dir(\\beta^l).", "2. For each conversation c \\in {1, ..., C}: (a) draw \\theta^G_c \\sim Dir(\\alpha); (b) draw \\theta^M_c \\sim Dir(\\alpha); (c) draw \\theta^H_c \\sim Dir(\\alpha); (d) draw \\pi_c \\sim Dir(\\gamma); (e) for each message t \\in {1, ..., T}: i. observe first-person pronoun features x_{ct}; ii. draw \\omega_{ct} \\sim MaxEnt(x_{ct}, \\lambda); iii. draw y_{ct} \\sim Bernoulli(\\omega_{ct}); iv. if y_{ct} = 0 (G level): A. draw z_{ct} \\sim Mult(\\theta^G_c); B. for each word n \\in {1, ..., N}: draw word w_{ctn} \\sim Mult(\\phi^G_{z_{ct}}); else (M or H level): A. draw r_{ct} \\sim Mult(\\pi_c); B. draw z_{ct} \\sim Mult(\\theta^{r_{ct}}_c); C. for each word n \\in {1, ..., N}: draw word w_{ctn} \\sim Mult(\\phi^{r_{ct}}_{z_{ct}}).", "The first approach, based on the first-person pronouns, is implemented by the observed variable x_{ct} and the parameters \\lambda from a maximum entropy classifier for the G vs. M/H level.", "The approach of seed words and phrases for levels M and H is implemented by three separate word-topic probability vectors for the three levels of SD: \\phi^l, which has a Bayesian informative prior \\beta^l, where l \\in {G, M, H}, the three levels of self-disclosure.", "Table 4 lists the notations used in the model and the generative process, and Figure 3 describes the generative process.", "Classifying G vs M/H levels Classifying the SD level for each tweet is done in two parts, and the first part classifies G vs. M/H levels with first-person pronouns (I, my, me).", "In the graphical model, y is the latent variable that represents this classification, and \\omega is the distribution over y.", "x is the observation of the first-person pronoun in the tweets, and \\lambda are the parameters learned from the maximum entropy classifier.", "With the annotated Twitter conversation dataset (described in Section 4.2), we experimented with several classifiers (decision tree, Naive Bayes) and chose the maximum entropy classifier because it performed the best, similar to other joint topic models (Zhao et al., 2010; Mukherjee et al., 2013) .", "Classifying M vs H levels The second part of the classification, between the M and the H level, is driven by informative priors with seed words and seed trigrams.", "In the graphical model, r is the latent variable that represents this classification, and \\pi is the distribution over r.", "\\gamma is a non-informative prior for \\pi, and \\beta^l is an informative prior for each SD level by seed words.", "For example, we assign a high value for the seed word 'acne' for \\beta^H , and a low value for 'My name is'.", "This approach is the same as in joint models of topic and sentiment (Jo and Oh, 2011; Kim et al., 2013) .", "Inference For posterior inference of SDTM, we use collapsed Gibbs sampling, which integrates out the latent random variables \\omega, \\pi, \\theta, and \\phi.", "Then we only need to compute y, r and z for each tweet.", "We compute the full conditional distribution p(y_{ct} = j', r_{ct} = l', z_{ct} = k' | \\mathbf{y}^{-ct}, \\mathbf{r}^{-ct}, \\mathbf{z}^{-ct}, \\mathbf{w}, \\mathbf{x}) for tweet ct as follows: p(y_{ct} = 0, z_{ct} = k' | \\mathbf{y}^{-ct}, \\mathbf{r}^{-ct}, \\mathbf{z}^{-ct}, \\mathbf{w}, \\mathbf{x}) \\propto \\frac{\\exp(\\lambda_0 \\cdot x_{ct})}{\\sum_{j=0}^{1} \\exp(\\lambda_j \\cdot x_{ct})} \\, g(c, t, G, k') and p(y_{ct} = 1, r_{ct} = l', z_{ct} = k' | \\mathbf{y}^{-ct}, \\mathbf{r}^{-ct}, \\mathbf{z}^{-ct}, \\mathbf{w}, \\mathbf{x}) \\propto \\frac{\\exp(\\lambda_1 \\cdot x_{ct})}{\\sum_{j=0}^{1} \\exp(\\lambda_j \\cdot x_{ct})} \\, (\\gamma_{l'} + n_{cl'}^{(-ct)}) \\, g(c, t, l', k'), where \\mathbf{y}^{-ct}, \\mathbf{r}^{-ct}, \\mathbf{z}^{-ct} are y, r, z without tweet ct, m_{ctk'(\\cdot)} is m_{ctk'v} marginalized over words v, and the function g(c, t, l', k') is g(c, t, l', k') = \\frac{\\alpha_{k'} + n_{ck'}^{l'(-ct)}}{\\sum_{k=1}^{K} (\\alpha_k + n_{ck}^{l'})} \\cdot \\frac{\\Gamma(\\sum_{v=1}^{V} (\\beta_v^{l'} + n_{k'v}^{l'(-ct)}))}{\\Gamma(\\sum_{v=1}^{V} (\\beta_v^{l'} + n_{k'v}^{l'(-ct)}) + m_{ctk'(\\cdot)})} \\cdot \\prod_{v=1}^{V} \\frac{\\Gamma(\\beta_v^{l'} + n_{k'v}^{l'(-ct)} + m_{ctk'v})}{\\Gamma(\\beta_v^{l'} + n_{k'v}^{l'(-ct)})}.", "Data Collection and Annotation To test our self-disclosure topic model, we use a large dataset of conversations consisting of Tweets over three years, such that we can analyze the relationship between self-disclosure behavior and conversation frequency and length over time.", "We chose to crawl Twitter because it offers a practical and large source of conversations (Ritter et al., 2010) .", "Others have also analyzed Twitter conversations for natural language and social media 
research (boyd et al., 2010; Danescu-Niculescu-Mizil et al., 2011) , but we collect conversations from the same set of dyads over several months for a unique longitudinal dataset.", "Table 5 : Dataset of Twitter conversations (Users: 101,686; Dyads: 61,451; Conversations: 1,956,993; Tweets: 17,178,638).", "We chose conversations consisting of five or more tweets each.", "We chose dyads with twenty or more conversations.", "We also make sure that each conversation is at least five tweets, and that each dyad has at least twenty conversations.", "Collecting Twitter conversations We define a Twitter conversation as a chain of tweets where two users are consecutively replying to each other's tweets using the Twitter reply button.", "We initialize the set of users by randomly sampling thirteen users who reply to other users in English from the Twitter public streams 3 .", "Then we crawl each user's public tweets, and look at users who are mentioned in those tweets.", "It is a breadth-first search in the network defined by users as nodes and edges as conversations.", "We run this search for dyads until a depth of four, and filter out users who tweet in a non-English language.", "We use an open source tool for detecting English tweets 4 .", "To protect users' privacy, we replace Twitter user ids, usernames and urls in tweets with random strings.", "This dataset consists of 101,686 users, 61,451 dyads, 1,956,993 conversations and 17,178,638 tweets which were posted between August 2007 and July 2013.", "Table 5 summarizes the dataset.", "Annotating self-disclosure level To measure the accuracy of our model, we randomly sample 301 conversations, each with ten or fewer tweets, and ask three judges, fluent in English and graduate students/researchers, to annotate each tweet with the level of self-disclosure.", "Judges first read and discussed the definitions and examples of self-disclosure level shown in (Barak and Gluck-Ofri, 2007) , then they worked separately on a Web-based platform.", "As a result of annotation, there are 122 G level conversations, 147 M level and 32 H level conversations, and the inter-rater agreement using Fleiss' kappa (Fleiss, 1971) is 0.68, which is a substantial agreement result (Landis and Koch, 1977) .", "Classification of Self-Disclosure Level This section describes experiments and results of SDTM as well as several other methods for classification of self-disclosure level.", "We start with the annotated dataset in section 4.2, in which each tweet is annotated with an SD level.", "We then aggregate all of the tweets of a conversation, and we compute the proportions of tweets in each SD level.", "When the proportion of tweets at the M or H level is equal to or greater than 0.2, we take the level of the larger proportion and assign that level to the conversation.", "When the proportions of tweets at the M or H level are both less than 0.2, we assign G to the SD level.", "The reason for setting 0.2 as the threshold is that a conversation containing tweets with H or M level of self-disclosure usually starts with a greeting or a general comment, and contains one or more questions or comments before or after the self-disclosure tweet.", "We compare SDTM with the following methods for classifying conversations for SD level: • LDA (Blei et al., 2003) : A Bayesian topic model.", "Each conversation is treated as a document.", "Used in previous work (Bak et al., 2012) .", "• MedLDA (Zhu et al., 2012) : A supervised topic model for document classification.", "Each conversation is treated as a document and the response variable can be mapped to an SD level.", "• LIWC 
(Tausczik and Pennebaker, 2010): Word counts of particular categories 5 .", "Used in previous work (Houghton and Joinson, 2012).", "• Bag of Words + Bigrams + Trigrams (BOW+): A bag of words, bigram and trigram features.", "We exclude features that appear only once or twice.", "• Seed words and trigrams (SEED): Occurrences of seed words/trigrams from SECRET which are described in section 3.3.", "• SDTM with seed words from annotated Tweets (SDTM−): To compare with SDTM below using seed words from SECRET, this uses seed words from the annotated data described in section 2.4.", "• ASUM (Jo and Oh, 2011 ): A joint model of sentiments and topics.", "We map each SD level to one sentiment and use the same seed words/trigrams from SECRET as in SDTM below.", "Used in previous work (Bak et al., 2012) .", "• First-person pronouns (FirstP): Occurrence of first-person pronouns which are described in section 3.2.", "To identify first-person pronouns, we tagged parts of speech in each tweet with the Twitter POS tagger (Owoputi et al., 2013) .", "• First-person pronouns + Seed words/trigrams (FP+SE1): First-person pronouns and seed words/trigrams from SECRET.", "• Two stage classifier with First-person pronouns + Seed words/trigrams (FP+SE2): A Method Acc G F 1 M F 1 H F Table 6 : SD level classification accuracies and Fmeasures using annotated data.", "Acc is accuracy, and G F 1 is F-measure for classifying the G level.", "Avg F 1 is the macroaveraged value of G F 1 , M F 1 and H F 1 .", "SDTM outperforms all other methods compared.", "The difference between SDTM and FirstP is statistically significant (p-value < 0.05 for accuracy, < 0.0001 for Avg F 1 ).", "two stage classifier with first-person pronouns and seed words/trigrams from SE-CRET.", "In the first stage, the classifier identifies G with first-person pronouns.", "Then in the second stage, the classifier uses seed words and trigrams to identify M and H levels.", "• SDTM: Our model with first-person pronouns and seed words/trigrams from SE-CRET.", "SEED, LIWC, LDA and FirstP cannot be used directly for classification, so we use Maximum entropy model with outputs of each of those models as features 6 .", "BOW+ uses SVM with a radial basis kernel which performs better than all other settings tried including maximum entropy.", "We split the data randomly into 80/20 for train/test.", "We run MedLDA, ASUM and SDTM 20 times each and compute the average accuracies and F-measure for each level.", "We run LDA and MedLDA with various number of topics from 80 to 140, and 120 topics shows best outputs.", "So we set 120 topics for LDA, MedLDA and ASUM, 60; 40; 40 topics for SDTM K G , K M and K H respectively which is best perform from 40; 40; 40 to 60; 60; 60 topics.", "We assume that a conversation has few topics and self-disclosure levels, so we set α = γ = 0.1 (Tang et al., 2014) .", "To incorporate the seed words and trigrams into ASUM and SDTM, we initialize β G , β M and β H differently.", "We assign a high value of 2.0 for each seed word and trigram for that level, and a low value of 10 −6 for each word that is a seed word for another level, and a default value of 0.01 for all other words.", "This approach is the same as previous papers (Jo and Oh, 2011; Kim et al., 2013) .", "As Table 6 shows, SDTM performs better than the other methods for accuracy as well as Fmeasure.", "LDA and MedLDA generally show the lowest performance, which is not surprising given these models are quite general and not tuned specifically for this type of semi-supervised 
classification task.", "BOW+, which uses simple word features, also does not perform well, showing an especially low F-measure for the H level.", "LIWC and SEED perform better than LDA, but both have quite low F-measures for the G and H levels.", "ASUM shows better performance for classifying the H level than the others, confirming the effectiveness of a topic modeling approach to this difficult task, but not as well as SDTM.", "FirstP shows a good F-measure for the G level, but its H level F-measure is quite low, even lower than SEED's.", "Combining first-person pronouns with seed words and trigrams (FP+SE1) performs better than either feature alone, and the two-stage classifier (FP+SE2), which takes an approach similar to SDTM's, shows better results.", "Finally, SDTM classifies the G and M levels at a similar accuracy to FirstP, FP+SE1 and FP+SE2, but it significantly improves accuracy for the H level compared to all other methods.", "Relations of Self-Disclosure and Conversation Behaviors In this section, we investigate whether there is a relationship between self-disclosure and conversation behaviors over time.", "Self-disclosure is one way to maintain and improve relationships (Jourard, 1971; Joinson and Paine, 2007).", "So changes in two people's intimacy over time are related to the self-disclosure in their conversations.", "However, it is hard to identify intimacy between users in a large-scale online social network.", "So we choose conversation behaviors such as conversation frequency and length, which can be treated as proxies for measuring intimacy between two people (Emmers-Sommer, 2004; Bak et al., 2012).", "With SDTM, we can automatically classify the SD level of a large number of conversations, so we investigate whether there is a similar relationship between self-disclosure in conversations and subsequent conversation behaviors with the same partner on Twitter.", "For comparing conversation behaviors over time, we divided the conversations into two sets for each dyad.", "For the initial period, we include conversations from the dyad's first conversation to 20 days later.", "And for the subsequent period, we include conversations during the subsequent 10 days.", "We compute the proportions of conversations at each SD level for each dyad in the initial and subsequent periods.", "More specifically, we ask the following three questions: 1. If a dyad shows high conversation frequency at a particular time period, would they display higher SD in their subsequent conversations?", "2. If a dyad displays a high SD level in their conversations at a particular time period, would their subsequent conversations be longer?", "3. If a dyad displays a high overall SD level, would their conversations increase in length over time more than the conversations of dyads with a lower overall SD level?", "Experiment Setup We first run SDTM with all of our Twitter conversation data, with 150, 120 and 120 topics for SDTM's K_G, K_M and K_H respectively.", "The hyper-parameters are the same as in section 5.", "To handle the large dataset, we employ a distributed algorithm (Newman et al., 2009), and run with 28 threads.", "Table 7 shows some of the topics that were prominent in each SD level by KL-divergence.", "As expected, the G level includes general topics such as food, celebrity, soccer and IT devices, the M level includes personal communication and birthdays, and finally, the H level includes sickness and profanity.", "We define a new measurement, the SD level score for a dyad in a period, which is a weighted sum over the dyad's conversations with SD levels mapped to 1, 2, and 3,
for the levels G, M, and H, respectively (see the code sketch below).", "Figure 5: Relationship between initial conversation frequency and subsequent SD level.", "The solid line is the linear regression line, and the coefficient is 0.0020 with p < 0.0001, which shows a significant positive relationship.", "Does high frequency of conversation lead to more self-disclosure?", "We investigate whether the initial conversation frequency is correlated with the SD level in the subsequent period.", "We run a linear regression with the initial conversation frequency as the independent variable, and the SD level in the subsequent period as the dependent variable.", "The regression coefficient is 0.0020 with a low p-value (p < 0.0001).", "Figure 5 shows the scatter plot.", "We can see that the slope of the regression line is positive.", "Does high self-disclosure lead to longer conversations?", "Now we investigate the effect of the self-disclosure level on conversation length.", "We run a linear regression with the initial SD level score as the independent variable, and the rate of change in conversation length between the initial period and the subsequent period as the dependent variable.", "Conversation length is measured by the number of tweets in a conversation.", "The result of the regression is that the independent variable's coefficient is 0.048 with a low p-value (p < 0.0001).", "Figure 6 shows the scatter plot with the regression line, and we can see that the slope of the regression line is positive.", "Table 7: Topics prominent in each SD level (topic id: top words). G level — 101: chocolate, butter, good, cake, peanut, milk, sugar, cream; 184: obama, he's, romney, vote, right, president, people, good; 176: league, win, game, season, team, cup, city, arsenal. M level — 36: send, email, i'll, sent, dm, address, know, check; 104: twitter, follow, tumblr, tweet, following, account, fb, followers; 82: going, party, weekend, day, night, dinner, birthday. H level — 113: ass, bitch, fuck, yo, shit, fucking, lmao; 33: better, sick, feel, throat, cold, hope, pain; 19: lips, kisses, love, smiles, softly, hand, eyes.", "Now we investigate how conversation length changes over time in three groups — low, medium, and high — by overall SD level.", "Figure 7 shows the results of this investigation.", "Figure 7: We divide dyads into three groups by SD level score as low, medium, and high; conversation length noticeably increases over time in the medium and high groups, but only slightly in the low group.", "First, conversations are generally lengthier when the SD level is high.", "This phenomenon is also observed in Figure 6, but here we can see it as a long-term persistent pattern.", "Second, conversation length increases consistently and significantly for the high and medium groups, but for the low SD group, there is not a significant increase in conversation length over time.", "Related Work Prior work on quantitatively analyzing self-disclosure has relied on user surveys (Ledbetter et al., 2011; Trepte and Reinecke, 2013) or human annotation (Barak and Gluck-Ofri, 2007; Courtney Walton and Rice, 2013).", "These methods consume much time and effort, so they are not suitable for large-scale studies.", "In prior work closest to ours, Bak et al. (2012) showed that a topic model can be used to identify self-disclosure, but that work applies a two-step process in which a basic topic model is first applied to find the topics, and then the topics are post-processed for a binary classification of self-disclosure.", "We improve upon this work by applying a single unified model of topics and self-disclosure for high accuracy in classifying
the three levels of self-disclosure.", "Subjectivity, an aspect of expressing opinions (Pang and Lee, 2008; Wiebe et al., 2004), is related to self-disclosure, but they are different dimensions of linguistic behavior.", "There are indeed many high self-disclosure tweets that are subjective, but there are also counterexamples in the annotated dataset.", "The tweet \"England manager is Roy Hodgson.\" is low self-disclosure and low subjectivity, \"I have barely any hair left.\" is high self-disclosure but low subjectivity, and \"Senator stop lying!\" is low self-disclosure but high subjectivity.", "Conclusion and Future Work In this paper, we have presented the self-disclosure topic model (SDTM) for discovering topics and classifying SD levels from Twitter conversation data.", "We devised a set of effective seed words and trigrams, mined from a dataset of secrets.", "We also annotated Twitter conversations to make a ground-truth dataset for SD level.", "With the annotated data, we showed that SDTM outperforms previous methods in classification accuracy and F-measure.", "We publicly release the source code of SDTM and the dataset, including the annotated Twitter conversations and SECRET (http://uilab.kaist.ac.kr/research/EMNLP2014).", "We also analyzed the relationship between SD level and conversation behaviors over time.", "We found that there is a positive correlation between initial SD level and subsequent conversation length.", "Also, dyads show a higher level of SD if they initially display high conversation frequency.", "Finally, dyads with an overall medium or high SD level have longer conversations over time.", "These results support previous results in social psychology research with more robust findings from a large-scale dataset, and show the effectiveness of computationally analyzing SD behavior.", "There are several future directions for this research.", "First, we can improve our modeling for higher accuracy and better interpretability.", "For instance, SDTM only considers first-person pronouns and topics.", "Naturally, there are other linguistic patterns that can be identified by humans but not captured by pronouns and topics.", "Second, the number of topics for each level varies, and so we can explore nonparametric topic models (Teh et al., 2006), which infer the number of topics from the data.", "Third, we can look at the relationship between self-disclosure behavior and general online social network usage beyond conversations.", "We will explore these directions in our future work." ] }
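The conversation-level labelling rule in the content above (assign M or H when at least 20% of a conversation's tweets carry that level, otherwise G) is simple enough to state in code. The following is a minimal Python sketch, not the authors' released implementation; in particular, how an exact tie between the M and H proportions is broken is an assumption.

```python
# Minimal sketch of the conversation-level SD labelling rule described above.
from collections import Counter

def conversation_sd_level(tweet_levels, threshold=0.2):
    """tweet_levels: per-tweet labels, each one of 'G', 'M', 'H'."""
    counts = Counter(tweet_levels)
    n = len(tweet_levels)
    prop_m, prop_h = counts["M"] / n, counts["H"] / n
    if prop_m < threshold and prop_h < threshold:
        return "G"
    # Take the level with the larger proportion; breaking an exact
    # M/H tie in favour of M is our assumption, not from the paper.
    return "M" if prop_m >= prop_h else "H"

print(conversation_sd_level(["G", "G", "G", "G", "M"]))  # -> 'M' (proportion 0.2)
```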
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "2.4", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "5", "6", "6.1", "6.3", "7", "8" ], "paper_header_content": [ "Introduction", "Self-Disclosure", "Self-disclosure (SD) level", "G Level of Self-Disclosure", "M Level of Self-Disclosure", "H Level of Self-Disclosure", "Model", "Classifying G vs M/H levels", "Classifying M vs H levels", "Inference", "Data Collection and Annotation", "Collecting Twitter conversations", "Annotating self-disclosure level", "Classification of Self-Disclosure Level", "Relations of Self-Disclosure and Conversation Behaviors", "Experiment Setup", "Does high self-disclosure lead to longer conversations?", "Related Work", "Conclusion and Future Work" ] }
GEM-SciDuet-train-75#paper-1188#slide-6
Self disclosure H level
Sensitive information about self or someone close to him Problematic behaviors of self and family members Physical appearance, health, death, sexual topics
Sensitive information about self or someone close to him Problematic behaviors of self and family members Physical appearance, health, death, sexual topics
[]
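The asymmetric prior initialization described in this record (2.0 for a level's own seed words/trigrams, 10^-6 for seed words of another level, 0.01 otherwise) can be written down directly. A minimal sketch with a placeholder vocabulary and seed sets:

```python
# Sketch of the informative Dirichlet prior initialization for the beta vectors.
import numpy as np

def build_beta(vocab, seeds_by_level, high=2.0, low=1e-6, default=0.01):
    all_seeds = set().union(*seeds_by_level.values())
    beta = {}
    for level, seeds in seeds_by_level.items():
        vec = np.full(len(vocab), default)
        for i, word in enumerate(vocab):
            if word in seeds:
                vec[i] = high          # seed word/trigram of this level
            elif word in all_seeds:
                vec[i] = low           # seed word of a different level
        beta[level] = vec
    return beta

vocab = ["my_name_is", "i_want_to", "acne", "game"]   # placeholder vocabulary
seeds = {"G": set(), "M": {"my_name_is"}, "H": {"i_want_to", "acne"}}
print(build_beta(vocab, seeds)["H"])   # other level's seed -> 1e-06; own -> 2.0; rest -> 0.01
```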
GEM-SciDuet-train-75#paper-1188#slide-7
1188
Self-disclosure topic model for classifying and analyzing Twitter conversations
Self-disclosure, the act of revealing oneself to others, is an important social behavior that strengthens interpersonal relationships and increases social support. Although there are many social science studies of self-disclosure, they are based on manual coding of small datasets and questionnaires. We conduct a computational analysis of self-disclosure with a large dataset of naturally-occurring conversations, a semi-supervised machine learning algorithm, and a computational analysis of the effects of self-disclosure on subsequent conversations. We use a longitudinal dataset of 17 million tweets, all of which occurred in conversations that consist of five or more tweets directly replying to the previous tweet, and from dyads with twenty or more conversations each. We develop the self-disclosure topic model (SDTM), a variant of latent Dirichlet allocation (LDA), for automatically classifying the level of self-disclosure for each tweet. We take the results of SDTM and analyze the effects of self-disclosure on subsequent conversations. Our model significantly outperforms several comparable methods on classifying the level of self-disclosure, and the analysis of the longitudinal data using SDTM uncovers a significant and positive correlation between self-disclosure and conversation frequency and length.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238 ], "paper_content_text": [ "Introduction Self-disclosure is an important and pervasive social behavior.", "People disclose personal information about themselves to improve and maintain * This work was done when JinYeong Bak was a visiting student at Microsoft Research, Beijing, China.", "relationships (Jourard, 1971; Joinson and Paine, 2007) .", "A common instance of self-disclosure is the start of a conversation with an exchange of names and additional self-introductions.", "Another example of self-disclosure, shown in Figure 1c , where the information disclosed about a family member's serious illness, is much more personal than the exchange of names.", "In this paper, we seek to understand this important social behavior using a large-scale Twitter conversation data, automatically classifying the level of self-disclosure using machine learning and correlating the patterns with conversational behaviors which can serve as proxies for measuring intimacy between two conversational partners.", "Twitter conversation data, explained in more detail in section 4.1, enable an extremely large scale study of naturally-occurring self-disclosure behavior, compared to traditional social science studies.", "One challenge of such large scale study, though, remains in the lack of labeled groundtruth data of self-disclosure level.", "That is, naturally-occurring Twitter conversations do not come tagged with the level of self-disclosure in each conversation.", "To overcome that challenge, we propose a semi-supervised machine learning approach using probabilistic topic modeling.", "Our self-disclosure topic model (SDTM) assumes that self-disclosure behavior can be modeled using a combination of simple linguistic features (e.g., pronouns) with automatically discovered semantic themes (i.e., topics).", "For instance, an utterance \"I am finally through with this disastrous relationship\" uses a first-person pronoun and contains a topic about personal relationships.", "In comparison with various other models, SDTM shows the highest accuracy, and the resulting conversation frequency and length patterns on self-disclosure are shown different over time.", "Our contributions to the research community include the following: • We present key features and prior knowledge for identifying self-disclosure level, and show relevance of it with experiment results (Sec.", "2).", "• We present a topic model that explicitly includes the 
level of self-disclosure in a conversation using linguistic features and the latent semantic topics (Sec. 3).", "• We collect a large dataset of Twitter conversations over three years and annotate a small subset with self-disclosure level (Sec. 4).", "• We compare the classification accuracy of SDTM with other models and show that it performs the best (Sec. 5).", "• We correlate the self-disclosure patterns and conversation behaviors to show that there is a significant relationship over time (Sec. 6).", "Self-Disclosure In this section, we look at the social science literature for a definition of the levels of self-disclosure.", "Using that definition, we devise an approach to automatically identify the levels of self-disclosure in a large corpus of OSN conversations.", "We discuss three approaches: first, using first-person pronoun features; second, extracting seed words and phrases from the Twitter conversation corpus; and third, extracting seed words and phrases from an external corpus of anonymously posted secrets; and we demonstrate the efficacy of those approaches with an annotated corpus.", "Self-disclosure (SD) level To analyze self-disclosure, researchers categorize self-disclosure language into three levels: G (general) for no disclosure, M for medium disclosure, and H for high disclosure (Vondracek and Vondracek, 1971; Barak and Gluck-Ofri, 2007).", "G Level of Self-Disclosure An obvious clue of self-disclosure is the use of first-person pronouns.", "For example, phrases such as 'I live' or 'My name is' indicate that the utterance contains personal information.", "In previous research, the simple method of counting first-person pronouns was used to measure the degree of self-disclosure (Joinson, 2001; Barak and Gluck-Ofri, 2007).", "Consequently, the absence of a first-person pronoun signals that the utterance belongs in the G level of self-disclosure.", "We verify this pattern with a dataset of Tweets annotated with G, M, and H levels.", "We divide the annotated Tweets into two classes, G and M/H.", "Then we compute the mutual information of each unigram, bigram, or trigram feature to see which features are most discriminative.", "As Table 1 shows, 18 out of 30 of the most discriminative features contain first-person pronouns.", "M Level of Self-Disclosure Utterances with M level include two types: 1) information related to past events and future plans, and 2) general information about oneself (Barak and Gluck-Ofri, 2007).", "For the former, we add as seed trigrams 'I have been' and 'I will'.", "For the latter, we use seven types of information generally accepted to be personally identifiable information (McCallister, 2010), as listed in the left column of Table 2.", "To find the appropriate trigrams for those, we take Twitter conversation data (described in Section 4.1) and look for trigrams that begin with 'I' and 'my' and occur more than 200 times.", "We then check each one to see whether it is related to any of the seven types listed in the table.", "As a result, we find 57 seed trigrams for the M level.", "H Level of Self-Disclosure Utterances with H level express secretive wishes or sensitive information that exposes self or someone close (Barak and Gluck-Ofri, 2007).", "These are generally kept as secrets.", "With this intuition, we crawled 26,523 posts from the Six Billion Secrets site, where users post secrets anonymously.", "We call this external dataset SECRET.", "Unlike the G and M levels, evidence of the H level of self-disclosure tends to be topical, such as physical appearance, mental and physical illnesses, and family problems, so we
take an approach of fitting a topic model driven by seed words.", "A similar approach has been successful in sentiment classification (Jo and Oh, 2011; Kim et al., 2013).", "A critical component of this approach is the set of seed words with which to drive the discovery of topics that are most indicative of H level self-disclosure.", "To extract the seed words that express secretive personal information, we compute mutual information (Manning et al., 2008) between SECRET and 24,610 randomly selected tweets (see the code sketch below).", "We select 1,000 words with high mutual information and filter out stop words.", "Table 3 shows some of these words.", "To extract seed trigrams of secretive wishes, we again look for trigrams that start with 'I' or 'my' and occur more than 200 times, and select trigrams of wishful thinking, such as 'I want to' and 'I wish I'.", "In total, there are 88 seed words and 8 seed trigrams for H. Since SECRET is quite different from Twitter, we must show that posts in SECRET are semantically similar to the H level Tweets.", "Rather than directly comparing SECRET posts and Tweets, we use the same method of extracting discriminative word features from the annotated H level Tweets (see Section 4.2).", "Table 3 shows the seed words extracted from SECRET as well as from the annotated Tweets.", "Because the annotated dataset consists of only 200 conversations, the coverage of its topics seems narrower than that of the much larger SECRET, but both datasets show similarities in the topics.", "This, combined with the results of the model with the two sets of seed words (see Section 5 for the results), shows that SECRET is an effective and simple-to-obtain substitute for an annotated corpus of the H level of self-disclosure.", "Model This section describes our model, the self-disclosure topic model (SDTM), for classifying self-disclosure levels and discovering topics for each self-disclosure level.", "Table 4 (notation): r_ct — SD level of tweet ct; π_c — SD level proportion of conversation c; θ^G_c, θ^M_c, θ^H_c — topic proportions of {G, M, H} in conversation c; φ^G, φ^M, φ^H — word distributions of {G, M, H}; α, γ — Dirichlet priors for θ and π; β^G, β^M, β^H — Dirichlet priors for φ^G, φ^M, φ^H; n_cl — the count for conversation c and level l.", "In section 2, we discussed different approaches to identifying each level of self-disclosure, based on the social science literature, annotated and unannotated Tweets, and an external corpus of secret posts.", "In this section, we describe our self-disclosure topic model, based on the widely used latent Dirichlet allocation (Blei et al., 2003), which incorporates those approaches.", "Figure 3 (generative process of SDTM): 1. For each level l ∈ {G, M, H}: for each topic k ∈ {1, ..., K_l}: draw φ^l_k ∼ Dir(β^l). 2. For each conversation c ∈ {1, ..., C}: (a) draw θ^G_c ∼ Dir(α); (b) draw θ^M_c ∼ Dir(α); (c) draw θ^H_c ∼ Dir(α); (d) draw π_c ∼ Dir(γ); (e) for each message t ∈ {1, ..., T}: i. observe first-person pronoun features x_ct; ii. draw ω_ct ∼ MaxEnt(x_ct, λ); iii. draw y_ct ∼ Bernoulli(ω_ct); iv. if y_ct = 0 (the G level): A. draw z_ct ∼ Mult(θ^G_c); B. for each word n ∈ {1, ..., N}: draw w_ctn ∼ Mult(φ^G_{z_ct}); otherwise (the M or H level): A. draw r_ct ∼ Mult(π_c); B. draw z_ct ∼ Mult(θ^{r_ct}_c); C. for each word n ∈ {1, ..., N}: draw w_ctn ∼ Mult(φ^{r_ct}_{z_ct}).", "Figure 2 illustrates the graphical model of SDTM and how those approaches are embodied in it.", "The first approach, based on the first-person pronouns, is implemented by the observed variable x_ct and the parameters λ from a maximum entropy classifier for the G vs. M/H level.", "The approach of seed words and phrases for levels M and H is implemented by three separate word-topic probability vectors, φ^l, each with a Bayesian informative prior β^l, where l ∈ {G, M, H}, the three levels of self-disclosure.", "Table 4 lists the notations used in the model and the generative process, and Figure 3 describes the generative process.", "Classifying G vs M/H levels Classifying the SD level for each tweet is done in two parts, and the first part classifies the G vs. M/H levels with first-person pronouns (I, my, me).", "In the graphical model, y is the latent variable that represents this classification, and ω is the distribution over y; x is the observation of the first-person pronouns in the tweets, and λ are the parameters learned from the maximum entropy classifier.", "With the annotated Twitter conversation dataset (described in Section 4.2), we experimented with several classifiers (decision tree, naive Bayes) and chose the maximum entropy classifier because it performed the best, similar to other joint topic models (Zhao et al., 2010; Mukherjee et al., 2013).", "Classifying M vs H levels The second part of the classification, into the M and H levels, is driven by informative priors with seed words and seed trigrams.", "In the graphical model, r is the latent variable that represents this classification, and π is the distribution over r; γ is a non-informative prior for π, and β^l is an informative prior for each SD level via seed words.", "For example, we assign a high value for the seed word 'acne' in β^H, and a low value for 'My name is'.", "This approach is the same as in joint models of topic and sentiment (Jo and Oh, 2011; Kim et al., 2013).", "Inference For posterior inference of SDTM, we use collapsed Gibbs sampling, which integrates out the latent random variables ω, π, θ, and φ.", "Then we only need to compute y, r and z for each tweet.", "We compute the full conditional distribution p(y_ct = j′, r_ct = l′, z_ct = k′ | y_−ct, r_−ct, z_−ct, w, x) for tweet ct as follows: p(y_ct = 0, z_ct = k′ | y_−ct, r_−ct, z_−ct, w, x) ∝ [exp(λ_0 · x_ct) / Σ_{j=0}^{1} exp(λ_j · x_ct)] · g(c, t, G, k′), and p(y_ct = 1, r_ct = l′, z_ct = k′ | y_−ct, r_−ct, z_−ct, w, x) ∝ [exp(λ_1 · x_ct) / Σ_{j=0}^{1} exp(λ_j · x_ct)] · (γ_{l′} + n^{(−ct)}_{cl′}) · g(c, t, l′, k′), where y_−ct, r_−ct, z_−ct are y, r, z without tweet ct, m_{ctk′(·)} is the marginalized sum over words v of m_{ctk′v}, and g(c, t, l′, k′) = [Γ(Σ_{v=1}^{V} (β^{l′}_v + n^{l′(−ct)}_{k′v})) / Γ(Σ_{v=1}^{V} (β^{l′}_v + n^{l′(−ct)}_{k′v}) + m_{ctk′(·)})] · [(α_{k′} + n^{l′(−ct)}_{ck′}) / (Σ_{k=1}^{K} (α_k + n^{l′}_{ck}))] · Π_{v=1}^{V} [Γ(β^{l′}_v + n^{l′(−ct)}_{k′v} + m_{ctk′v}) / Γ(β^{l′}_v + n^{l′(−ct)}_{k′v})] (a simplified sampling sketch in code appears after this content block).", "Data Collection and Annotation To test our self-disclosure topic model, we use a large dataset of conversations consisting of Tweets over three years, such that we can analyze the relationship between self-disclosure behavior and conversation frequency and length over time.", "We chose to crawl Twitter because it offers a practical and large source of conversations (Ritter et al., 2010).", "Others have also analyzed Twitter conversations for natural language and social media
research (boyd et al., 2010; Danescu-Niculescu-Mizil et al., 2011), but we collect conversations from the same set of dyads over several months for a unique longitudinal dataset.", "We also make sure that each conversation is at least five tweets, and that each dyad has at least twenty conversations.", "Table 5: Dataset of Twitter conversations — Users: 101,686; Dyads: 61,451; Conversations: 1,956,993; Tweets: 17,178,638.", "We chose conversations consisting of five or more tweets each.", "We chose dyads with twenty or more conversations.", "Collecting Twitter conversations We define a Twitter conversation as a chain of tweets where two users are consecutively replying to each other's tweets using the Twitter reply button.", "We initialize the set of users by randomly sampling thirteen users who reply to other users in English from the Twitter public streams.", "Then we crawl each user's public tweets, and look at users who are mentioned in those tweets.", "It is a breadth-first search in the network defined by users as nodes and edges as conversations.", "We run this search for dyads until a depth of four, and filter out users who tweet in a non-English language.", "We use an open-source tool for detecting English tweets.", "To protect users' privacy, we replace Twitter user ids, usernames and URLs in tweets with random strings.", "This dataset consists of 101,686 users, 61,451 dyads, 1,956,993 conversations and 17,178,638 tweets, posted between August 2007 and July 2013.", "Table 5 summarizes the dataset.", "Annotating self-disclosure level To measure the accuracy of our model, we randomly sample 301 conversations, each with ten or fewer tweets, and ask three judges, fluent in English and graduate students/researchers, to annotate each tweet with the level of self-disclosure.", "Judges first read and discussed the definitions and examples of self-disclosure levels shown in (Barak and Gluck-Ofri, 2007), then they worked separately on a Web-based platform.", "As a result of the annotation, there are 122 G level conversations, 147 M level and 32 H level conversations, and inter-rater agreement using Fleiss' kappa (Fleiss, 1971) is 0.68, which indicates substantial agreement (Landis and Koch, 1977).", "Classification of Self-Disclosure Level This section describes experiments and results of SDTM as well as several other methods for classification of self-disclosure level.", "We start with the annotated dataset in section 4.2, in which each tweet is annotated with its SD level.", "We then aggregate all of the tweets of a conversation, and we compute the proportions of tweets in each SD level.", "When the proportion of tweets at the M or H level is equal to or greater than 0.2, we take the level with the larger proportion and assign that level to the conversation.", "When the proportions of tweets at the M and H levels are both less than 0.2, we assign G as the conversation's SD level.", "The reason for setting 0.2 as the threshold is that a conversation containing tweets with H or M level self-disclosure usually starts with a greeting or a general comment, and contains one or more questions or comments before or after the self-disclosure tweet.", "We compare SDTM with the following methods for classifying conversations by SD level: • LDA (Blei et al., 2003): A Bayesian topic model.", "Each conversation is treated as a document.", "Used in previous work (Bak et al., 2012).", "• MedLDA (Zhu et al., 2012): A supervised topic model for document classification.", "Each conversation is treated as a document, and the response variable can be mapped to an SD level.", "• LIWC
(Tausczik and Pennebaker, 2010): Word counts of particular categories.", "Used in previous work (Houghton and Joinson, 2012).", "• Bag of words + bigrams + trigrams (BOW+): Bag-of-words, bigram and trigram features.", "We exclude features that appear only once or twice.", "• Seed words and trigrams (SEED): Occurrences of seed words/trigrams from SECRET, which are described in section 3.3.", "• SDTM with seed words from annotated Tweets (SDTM−): To compare with SDTM below, which uses seed words from SECRET, this uses seed words from the annotated data described in section 2.4.", "• ASUM (Jo and Oh, 2011): A joint model of sentiments and topics.", "We map each SD level to one sentiment and use the same seed words/trigrams from SECRET as in SDTM below.", "Used in previous work (Bak et al., 2012).", "• First-person pronouns (FirstP): Occurrence of the first-person pronouns described in section 3.2.", "To identify first-person pronouns, we tagged parts of speech in each tweet with the Twitter POS tagger (Owoputi et al., 2013).", "• First-person pronouns + seed words/trigrams (FP+SE1): First-person pronouns and seed words/trigrams from SECRET.", "• Two-stage classifier with first-person pronouns + seed words/trigrams (FP+SE2): A two-stage classifier with first-person pronouns and seed words/trigrams from SECRET (a code sketch of this rule appears at the end of this record).", "In the first stage, the classifier identifies G with first-person pronouns.", "Then in the second stage, the classifier uses seed words and trigrams to identify the M and H levels.", "• SDTM: Our model with first-person pronouns and seed words/trigrams from SECRET.", "Table 6: SD level classification accuracies and F-measures using annotated data (columns: Method, Acc, G F1, M F1, H F1, Avg F1).", "Acc is accuracy, and G F1 is the F-measure for classifying the G level.", "Avg F1 is the macro-averaged value of G F1, M F1 and H F1.", "SDTM outperforms all other methods compared.", "The difference between SDTM and FirstP is statistically significant (p-value < 0.05 for accuracy, < 0.0001 for Avg F1).", "SEED, LIWC, LDA and FirstP cannot be used directly for classification, so we use a maximum entropy model with the outputs of each of those models as features.", "BOW+ uses an SVM with a radial basis kernel, which performs better than all other settings tried, including maximum entropy.", "We split the data randomly into 80/20 for train/test.", "We run MedLDA, ASUM and SDTM 20 times each and compute the average accuracies and F-measures for each level.", "We run LDA and MedLDA with various numbers of topics from 80 to 140, and 120 topics gives the best outputs.", "So we set 120 topics for LDA, MedLDA and ASUM, and 60, 40 and 40 topics for SDTM's K_G, K_M and K_H respectively, which performed best among settings ranging from 40/40/40 to 60/60/60 topics.", "We assume that a conversation has few topics and self-disclosure levels, so we set α = γ = 0.1 (Tang et al., 2014).", "To incorporate the seed words and trigrams into ASUM and SDTM, we initialize β^G, β^M and β^H differently.", "We assign a high value of 2.0 for each seed word and trigram of that level, a low value of 10^-6 for each word that is a seed word for another level, and a default value of 0.01 for all other words.", "This approach is the same as in previous papers (Jo and Oh, 2011; Kim et al., 2013).", "As Table 6 shows, SDTM performs better than the other methods for accuracy as well as F-measure.", "LDA and MedLDA generally show the lowest performance, which is not surprising given these models are quite general and not tuned specifically for this type of semi-supervised
classification task.", "BOW+, which uses simple word features, also does not perform well, showing an especially low F-measure for the H level.", "LIWC and SEED perform better than LDA, but both have quite low F-measures for the G and H levels.", "ASUM shows better performance for classifying the H level than the others, confirming the effectiveness of a topic modeling approach to this difficult task, but not as well as SDTM.", "FirstP shows a good F-measure for the G level, but its H level F-measure is quite low, even lower than SEED's.", "Combining first-person pronouns with seed words and trigrams (FP+SE1) performs better than either feature alone, and the two-stage classifier (FP+SE2), which takes an approach similar to SDTM's, shows better results.", "Finally, SDTM classifies the G and M levels at a similar accuracy to FirstP, FP+SE1 and FP+SE2, but it significantly improves accuracy for the H level compared to all other methods.", "Relations of Self-Disclosure and Conversation Behaviors In this section, we investigate whether there is a relationship between self-disclosure and conversation behaviors over time.", "Self-disclosure is one way to maintain and improve relationships (Jourard, 1971; Joinson and Paine, 2007).", "So changes in two people's intimacy over time are related to the self-disclosure in their conversations.", "However, it is hard to identify intimacy between users in a large-scale online social network.", "So we choose conversation behaviors such as conversation frequency and length, which can be treated as proxies for measuring intimacy between two people (Emmers-Sommer, 2004; Bak et al., 2012).", "With SDTM, we can automatically classify the SD level of a large number of conversations, so we investigate whether there is a similar relationship between self-disclosure in conversations and subsequent conversation behaviors with the same partner on Twitter.", "For comparing conversation behaviors over time, we divided the conversations into two sets for each dyad.", "For the initial period, we include conversations from the dyad's first conversation to 20 days later.", "And for the subsequent period, we include conversations during the subsequent 10 days.", "We compute the proportions of conversations at each SD level for each dyad in the initial and subsequent periods.", "More specifically, we ask the following three questions: 1. If a dyad shows high conversation frequency at a particular time period, would they display higher SD in their subsequent conversations?", "2. If a dyad displays a high SD level in their conversations at a particular time period, would their subsequent conversations be longer?", "3. If a dyad displays a high overall SD level, would their conversations increase in length over time more than the conversations of dyads with a lower overall SD level?", "Experiment Setup We first run SDTM with all of our Twitter conversation data, with 150, 120 and 120 topics for SDTM's K_G, K_M and K_H respectively.", "The hyper-parameters are the same as in section 5.", "To handle the large dataset, we employ a distributed algorithm (Newman et al., 2009), and run with 28 threads.", "Table 7 shows some of the topics that were prominent in each SD level by KL-divergence.", "As expected, the G level includes general topics such as food, celebrity, soccer and IT devices, the M level includes personal communication and birthdays, and finally, the H level includes sickness and profanity.", "We define a new measurement, the SD level score for a dyad in a period, which is a weighted sum over the dyad's conversations with SD levels mapped to 1, 2, and 3,
for the levels G, M, and H, respectively.", "Figure 5: Relationship between initial conversation frequency and subsequent SD level.", "The solid line is the linear regression line, and the coefficient is 0.0020 with p < 0.0001, which shows a significant positive relationship.", "Does high frequency of conversation lead to more self-disclosure?", "We investigate whether the initial conversation frequency is correlated with the SD level in the subsequent period.", "We run a linear regression with the initial conversation frequency as the independent variable, and the SD level in the subsequent period as the dependent variable.", "The regression coefficient is 0.0020 with a low p-value (p < 0.0001).", "Figure 5 shows the scatter plot.", "We can see that the slope of the regression line is positive.", "Does high self-disclosure lead to longer conversations?", "Now we investigate the effect of the self-disclosure level on conversation length.", "We run a linear regression with the initial SD level score as the independent variable, and the rate of change in conversation length between the initial period and the subsequent period as the dependent variable.", "Conversation length is measured by the number of tweets in a conversation.", "The result of the regression is that the independent variable's coefficient is 0.048 with a low p-value (p < 0.0001).", "Figure 6 shows the scatter plot with the regression line, and we can see that the slope of the regression line is positive.", "Table 7: Topics prominent in each SD level (topic id: top words). G level — 101: chocolate, butter, good, cake, peanut, milk, sugar, cream; 184: obama, he's, romney, vote, right, president, people, good; 176: league, win, game, season, team, cup, city, arsenal. M level — 36: send, email, i'll, sent, dm, address, know, check; 104: twitter, follow, tumblr, tweet, following, account, fb, followers; 82: going, party, weekend, day, night, dinner, birthday. H level — 113: ass, bitch, fuck, yo, shit, fucking, lmao; 33: better, sick, feel, throat, cold, hope, pain; 19: lips, kisses, love, smiles, softly, hand, eyes.", "Now we investigate how conversation length changes over time in three groups — low, medium, and high — by overall SD level.", "Figure 7 shows the results of this investigation.", "Figure 7: We divide dyads into three groups by SD level score as low, medium, and high; conversation length noticeably increases over time in the medium and high groups, but only slightly in the low group.", "First, conversations are generally lengthier when the SD level is high.", "This phenomenon is also observed in Figure 6, but here we can see it as a long-term persistent pattern.", "Second, conversation length increases consistently and significantly for the high and medium groups, but for the low SD group, there is not a significant increase in conversation length over time.", "Related Work Prior work on quantitatively analyzing self-disclosure has relied on user surveys (Ledbetter et al., 2011; Trepte and Reinecke, 2013) or human annotation (Barak and Gluck-Ofri, 2007; Courtney Walton and Rice, 2013).", "These methods consume much time and effort, so they are not suitable for large-scale studies.", "In prior work closest to ours, Bak et al. (2012) showed that a topic model can be used to identify self-disclosure, but that work applies a two-step process in which a basic topic model is first applied to find the topics, and then the topics are post-processed for a binary classification of self-disclosure.", "We improve upon this work by applying a single unified model of topics and self-disclosure for high accuracy in classifying
the three levels of self-disclosure.", "Subjectivity, an aspect of expressing opinions (Pang and Lee, 2008; Wiebe et al., 2004), is related to self-disclosure, but they are different dimensions of linguistic behavior.", "There are indeed many high self-disclosure tweets that are subjective, but there are also counterexamples in the annotated dataset.", "The tweet \"England manager is Roy Hodgson.\" is low self-disclosure and low subjectivity, \"I have barely any hair left.\" is high self-disclosure but low subjectivity, and \"Senator stop lying!\" is low self-disclosure but high subjectivity.", "Conclusion and Future Work In this paper, we have presented the self-disclosure topic model (SDTM) for discovering topics and classifying SD levels from Twitter conversation data.", "We devised a set of effective seed words and trigrams, mined from a dataset of secrets.", "We also annotated Twitter conversations to make a ground-truth dataset for SD level.", "With the annotated data, we showed that SDTM outperforms previous methods in classification accuracy and F-measure.", "We publicly release the source code of SDTM and the dataset, including the annotated Twitter conversations and SECRET (http://uilab.kaist.ac.kr/research/EMNLP2014).", "We also analyzed the relationship between SD level and conversation behaviors over time.", "We found that there is a positive correlation between initial SD level and subsequent conversation length.", "Also, dyads show a higher level of SD if they initially display high conversation frequency.", "Finally, dyads with an overall medium or high SD level have longer conversations over time.", "These results support previous results in social psychology research with more robust findings from a large-scale dataset, and show the effectiveness of computationally analyzing SD behavior.", "There are several future directions for this research.", "First, we can improve our modeling for higher accuracy and better interpretability.", "For instance, SDTM only considers first-person pronouns and topics.", "Naturally, there are other linguistic patterns that can be identified by humans but not captured by pronouns and topics.", "Second, the number of topics for each level varies, and so we can explore nonparametric topic models (Teh et al., 2006), which infer the number of topics from the data.", "Third, we can look at the relationship between self-disclosure behavior and general online social network usage beyond conversations.", "We will explore these directions in our future work." ] }
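The collapsed Gibbs conditionals reconstructed in the content above can be sketched as one sampling step per tweet. This is a simplified, illustrative implementation, not the released SDTM code: count arrays are assumed to already exclude the current tweet (the −ct convention), the MaxEnt weights are taken as given, and the demo call uses random toy counts.

```python
# Simplified sketch of one collapsed-Gibbs step for a single tweet in SDTM.
import numpy as np
from scipy.special import gammaln

def log_g(beta_l, n_kv, n_ck, alpha, m_ct, k):
    """log g(c,t,l,k). beta_l: (V,) prior; n_kv: (K,V) topic-word counts for
    level l; n_ck: (K,) topic counts for level l in this conversation;
    m_ct: (V,) word counts of the current tweet."""
    base = beta_l + n_kv[k]
    left = gammaln(base.sum()) - gammaln(base.sum() + m_ct.sum())
    theta = np.log(alpha[k] + n_ck[k]) - np.log((alpha + n_ck).sum())
    right = (gammaln(base + m_ct) - gammaln(base)).sum()
    return left + theta + right

def sample_tweet(x_ct, lam, gamma, n_cl, levels, m_ct):
    """levels: dict level -> (beta_l, n_kv, n_ck, alpha). Returns (y, level, z)."""
    logits = lam @ x_ct                      # MaxEnt scores for y in {0, 1}
    log_p_y = logits - np.logaddexp(logits[0], logits[1])
    cand, scores = [], []
    bG, nGkv, nGck, aG = levels["G"]
    for k in range(nGkv.shape[0]):           # y = 0: the G level
        cand.append((0, "G", k))
        scores.append(log_p_y[0] + log_g(bG, nGkv, nGck, aG, m_ct, k))
    for li, l in enumerate(("M", "H")):      # y = 1: the M or H level
        bl, nkv, nck, al = levels[l]
        for k in range(nkv.shape[0]):
            cand.append((1, l, k))
            scores.append(log_p_y[1] + np.log(gamma[li] + n_cl[li])
                          + log_g(bl, nkv, nck, al, m_ct, k))
    scores = np.array(scores)
    p = np.exp(scores - scores.max()); p /= p.sum()
    return cand[np.random.choice(len(cand), p=p)]

rng = np.random.default_rng(0)
V, K = 5, 2
mk = lambda: (np.full(V, 0.01), rng.integers(0, 3, (K, V)).astype(float),
              rng.integers(0, 3, K).astype(float), np.full(K, 0.1))
levels = {l: mk() for l in ("G", "M", "H")}
print(sample_tweet(np.ones(2), np.zeros((2, 2)), np.array([0.1, 0.1]),
                   np.array([1.0, 2.0]), levels, np.ones(V)))
```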
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "2.4", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "5", "6", "6.1", "6.3", "7", "8" ], "paper_header_content": [ "Introduction", "Self-Disclosure", "Self-disclosure (SD) level", "G Level of Self-Disclosure", "M Level of Self-Disclosure", "H Level of Self-Disclosure", "Model", "Classifying G vs M/H levels", "Classifying M vs H levels", "Inference", "Data Collection and Annotation", "Collecting Twitter conversations", "Annotating self-disclosure level", "Classification of Self-Disclosure Level", "Relations of Self-Disclosure and Conversation Behaviors", "Experiment Setup", "Does high self-disclosure lead to longer conversations?", "Related Work", "Conclusion and Future Work" ] }
GEM-SciDuet-train-75#paper-1188#slide-7
Self disclosure Relations
Degree of self-disclosure in a relationship depends on the strength of the relationship [Duck2007] Strategic self-disclosure can strengthen the relationship Can get social support from others [Derlega et al.1993] Can cope with stress [Derlega et al.1993,Tamir and Mitchell2012]
Degree of self-disclosure in a relationship depends on the strength of the relationship [Duck2007] Strategic self-disclosure can strengthen the relationship Can get social support from others [Derlega et al.1993] Can cope with stress [Derlega et al.1993,Tamir and Mitchell2012]
[]
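The FP+SE2 baseline compared in this record combines the two signals in sequence: first-person pronouns to split G from M/H, then seed matches to split M from H. A minimal illustrative sketch; the pronoun list follows the paper (I, my, me), while the seed set is a placeholder, so the last example below is misclassified as M even though the paper treats it as high self-disclosure.

```python
# Sketch of the two-stage FP+SE2 rule: pronouns first, then seed matches.
FIRST_PERSON = {"i", "my", "me"}
H_SEEDS = {"acne", "i want to", "i wish i"}   # illustrative, not the full set

def has_first_person(tweet):
    return any(tok in FIRST_PERSON for tok in tweet.lower().split())

def two_stage_level(tweet):
    if not has_first_person(tweet):   # stage 1: no first-person pronoun -> G
        return "G"
    text = tweet.lower()
    if any(seed in text for seed in H_SEEDS):   # stage 2: seed match -> H
        return "H"
    return "M"

for t in ["England manager is Roy Hodgson.",   # -> G
          "I hate my acne.",                   # -> H
          "I have barely any hair left."]:     # -> M under these toy seeds
    print(t, "->", two_stage_level(t))
```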
GEM-SciDuet-train-75#paper-1188#slide-8
1188
Self-disclosure topic model for classifying and analyzing Twitter conversations
Self-disclosure, the act of revealing oneself to others, is an important social behavior that strengthens interpersonal relationships and increases social support. Although there are many social science studies of self-disclosure, they are based on manual coding of small datasets and questionnaires. We conduct a computational analysis of self-disclosure with a large dataset of naturally-occurring conversations, a semi-supervised machine learning algorithm, and a computational analysis of the effects of self-disclosure on subsequent conversations. We use a longitudinal dataset of 17 million tweets, all of which occurred in conversations that consist of five or more tweets directly replying to the previous tweet, and from dyads with twenty or more conversations each. We develop the self-disclosure topic model (SDTM), a variant of latent Dirichlet allocation (LDA), for automatically classifying the level of self-disclosure for each tweet. We take the results of SDTM and analyze the effects of self-disclosure on subsequent conversations. Our model significantly outperforms several comparable methods on classifying the level of self-disclosure, and the analysis of the longitudinal data using SDTM uncovers a significant and positive correlation between self-disclosure and conversation frequency and length.
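The dataset filter stated in this abstract — conversations of five or more tweets, from dyads with twenty or more such conversations — is easy to express directly. A minimal sketch; the data layout (a list of dyad-id/tweet-list pairs) is an assumption, not the release format.

```python
# Sketch of the conversation/dyad filtering described in the abstract.
from collections import defaultdict

def filter_dataset(conversations):
    """conversations: iterable of (dyad_id, [tweets]) pairs."""
    per_dyad = defaultdict(list)
    for dyad, tweets in conversations:
        if len(tweets) >= 5:              # keep conversations of 5+ tweets
            per_dyad[dyad].append(tweets)
    return {d: convs for d, convs in per_dyad.items() if len(convs) >= 20}

toy = [("a-b", ["t"] * 6)] * 20 + [("c-d", ["t"] * 3)] * 30
print({d: len(c) for d, c in filter_dataset(toy).items()})  # {'a-b': 20}
```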
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238 ], "paper_content_text": [ "Introduction Self-disclosure is an important and pervasive social behavior.", "People disclose personal information about themselves to improve and maintain * This work was done when JinYeong Bak was a visiting student at Microsoft Research, Beijing, China.", "relationships (Jourard, 1971; Joinson and Paine, 2007) .", "A common instance of self-disclosure is the start of a conversation with an exchange of names and additional self-introductions.", "Another example of self-disclosure, shown in Figure 1c , where the information disclosed about a family member's serious illness, is much more personal than the exchange of names.", "In this paper, we seek to understand this important social behavior using a large-scale Twitter conversation data, automatically classifying the level of self-disclosure using machine learning and correlating the patterns with conversational behaviors which can serve as proxies for measuring intimacy between two conversational partners.", "Twitter conversation data, explained in more detail in section 4.1, enable an extremely large scale study of naturally-occurring self-disclosure behavior, compared to traditional social science studies.", "One challenge of such large scale study, though, remains in the lack of labeled groundtruth data of self-disclosure level.", "That is, naturally-occurring Twitter conversations do not come tagged with the level of self-disclosure in each conversation.", "To overcome that challenge, we propose a semi-supervised machine learning approach using probabilistic topic modeling.", "Our self-disclosure topic model (SDTM) assumes that self-disclosure behavior can be modeled using a combination of simple linguistic features (e.g., pronouns) with automatically discovered semantic themes (i.e., topics).", "For instance, an utterance \"I am finally through with this disastrous relationship\" uses a first-person pronoun and contains a topic about personal relationships.", "In comparison with various other models, SDTM shows the highest accuracy, and the resulting conversation frequency and length patterns on self-disclosure are shown different over time.", "Our contributions to the research community include the following: • We present key features and prior knowledge for identifying self-disclosure level, and show relevance of it with experiment results (Sec.", "2).", "• We present a topic model that explicitly includes the 
level of self-disclosure in a conversation using linguistic features and the latent semantic topics (Sec. 3).", "• We collect a large dataset of Twitter conversations over three years and annotate a small subset with self-disclosure level (Sec. 4).", "• We compare the classification accuracy of SDTM with other models and show that it performs the best (Sec. 5).", "• We correlate the self-disclosure patterns and conversation behaviors to show that there is a significant relationship over time (Sec. 6).", "Self-Disclosure In this section, we look at the social science literature for a definition of the levels of self-disclosure.", "Using that definition, we devise an approach to automatically identify the levels of self-disclosure in a large corpus of OSN conversations.", "We discuss three approaches: first, using first-person pronoun features; second, extracting seed words and phrases from the Twitter conversation corpus; and third, extracting seed words and phrases from an external corpus of anonymously posted secrets; and we demonstrate the efficacy of those approaches with an annotated corpus.", "Self-disclosure (SD) level To analyze self-disclosure, researchers categorize self-disclosure language into three levels: G (general) for no disclosure, M for medium disclosure, and H for high disclosure (Vondracek and Vondracek, 1971; Barak and Gluck-Ofri, 2007).", "G Level of Self-Disclosure An obvious clue of self-disclosure is the use of first-person pronouns.", "For example, phrases such as 'I live' or 'My name is' indicate that the utterance contains personal information.", "In previous research, the simple method of counting first-person pronouns was used to measure the degree of self-disclosure (Joinson, 2001; Barak and Gluck-Ofri, 2007).", "Consequently, the absence of a first-person pronoun signals that the utterance belongs in the G level of self-disclosure.", "We verify this pattern with a dataset of Tweets annotated with G, M, and H levels.", "We divide the annotated Tweets into two classes, G and M/H.", "Then we compute the mutual information of each unigram, bigram, or trigram feature to see which features are most discriminative.", "As Table 1 shows, 18 out of 30 of the most discriminative features contain first-person pronouns.", "M Level of Self-Disclosure Utterances with M level include two types: 1) information related to past events and future plans, and 2) general information about oneself (Barak and Gluck-Ofri, 2007).", "For the former, we add as seed trigrams 'I have been' and 'I will'.", "For the latter, we use seven types of information generally accepted to be personally identifiable information (McCallister, 2010), as listed in the left column of Table 2.", "To find the appropriate trigrams for those, we take Twitter conversation data (described in Section 4.1) and look for trigrams that begin with 'I' and 'my' and occur more than 200 times.", "We then check each one to see whether it is related to any of the seven types listed in the table.", "As a result, we find 57 seed trigrams for the M level.", "H Level of Self-Disclosure Utterances with H level express secretive wishes or sensitive information that exposes self or someone close (Barak and Gluck-Ofri, 2007).", "These are generally kept as secrets.", "With this intuition, we crawled 26,523 posts from the Six Billion Secrets site, where users post secrets anonymously.", "We call this external dataset SECRET.", "Unlike the G and M levels, evidence of the H level of self-disclosure tends to be topical, such as physical appearance, mental and physical illnesses, and family problems, so we
take an approach of fitting a topic model driven by seed words.", "A similar approach has been successful in sentiment classification (Jo and Oh, 2011; Kim et al., 2013).", "A critical component of this approach is the set of seed words with which to drive the discovery of topics that are most indicative of H level self-disclosure.", "To extract the seed words that express secretive personal information, we compute mutual information (Manning et al., 2008) between SECRET and 24,610 randomly selected tweets.", "We select 1,000 words with high mutual information and filter out stop words.", "Table 3 shows some of these words.", "To extract seed trigrams of secretive wishes, we again look for trigrams that start with 'I' or 'my' and occur more than 200 times, and select trigrams of wishful thinking, such as 'I want to' and 'I wish I'.", "In total, there are 88 seed words and 8 seed trigrams for H. Since SECRET is quite different from Twitter, we must show that posts in SECRET are semantically similar to the H level Tweets.", "Rather than directly comparing SECRET posts and Tweets, we use the same method of extracting discriminative word features from the annotated H level Tweets (see Section 4.2).", "Table 3 shows the seed words extracted from SECRET as well as from the annotated Tweets.", "Because the annotated dataset consists of only 200 conversations, the coverage of its topics seems narrower than that of the much larger SECRET, but both datasets show similarities in the topics.", "This, combined with the results of the model with the two sets of seed words (see Section 5 for the results), shows that SECRET is an effective and simple-to-obtain substitute for an annotated corpus of the H level of self-disclosure.", "Model This section describes our model, the self-disclosure topic model (SDTM), for classifying self-disclosure levels and discovering topics for each self-disclosure level.", "Table 4 (notation): r_ct — SD level of tweet ct; π_c — SD level proportion of conversation c; θ^G_c, θ^M_c, θ^H_c — topic proportions of {G, M, H} in conversation c; φ^G, φ^M, φ^H — word distributions of {G, M, H}; α, γ — Dirichlet priors for θ and π; β^G, β^M, β^H — Dirichlet priors for φ^G, φ^M, φ^H; n_cl — the count for conversation c and level l.", "In section 2, we discussed different approaches to identifying each level of self-disclosure, based on the social science literature, annotated and unannotated Tweets, and an external corpus of secret posts.", "In this section, we describe our self-disclosure topic model, based on the widely used latent Dirichlet allocation (Blei et al., 2003), which incorporates those approaches.", "Figure 3 (generative process of SDTM): 1. For each level l ∈ {G, M, H}: for each topic k ∈ {1, ..., K_l}: draw φ^l_k ∼ Dir(β^l). 2. For each conversation c ∈ {1, ..., C}: (a) draw θ^G_c ∼ Dir(α); (b) draw θ^M_c ∼ Dir(α); (c) draw θ^H_c ∼ Dir(α); (d) draw π_c ∼ Dir(γ); (e) for each message t ∈ {1, ..., T}: i. observe first-person pronoun features x_ct; ii. draw ω_ct ∼ MaxEnt(x_ct, λ); iii. draw y_ct ∼ Bernoulli(ω_ct); iv. if y_ct = 0 (the G level): A. draw z_ct ∼ Mult(θ^G_c); B. for each word n ∈ {1, ..., N}: draw w_ctn ∼ Mult(φ^G_{z_ct}); otherwise (the M or H level): A. draw r_ct ∼ Mult(π_c); B. draw z_ct ∼ Mult(θ^{r_ct}_c); C. for each word n ∈ {1, ..., N}: draw w_ctn ∼ Mult(φ^{r_ct}_{z_ct}).", "Figure 2 illustrates the graphical model of SDTM and how those approaches are embodied in it.", "The first approach, based on the first-person pronouns, is implemented by the observed variable x_ct and the parameters λ from a maximum entropy classifier for the G vs. M/H level.", "The approach of seed words and phrases for levels M and H is implemented by three separate word-topic probability vectors, φ^l, each with a Bayesian informative prior β^l, where l ∈ {G, M, H}, the three levels of self-disclosure.", "Table 4 lists the notations used in the model and the generative process, and Figure 3 describes the generative process.", "Classifying G vs M/H levels Classifying the SD level for each tweet is done in two parts, and the first part classifies the G vs. M/H levels with first-person pronouns (I, my, me).", "In the graphical model, y is the latent variable that represents this classification, and ω is the distribution over y; x is the observation of the first-person pronouns in the tweets, and λ are the parameters learned from the maximum entropy classifier.", "With the annotated Twitter conversation dataset (described in Section 4.2), we experimented with several classifiers (decision tree, naive Bayes) and chose the maximum entropy classifier because it performed the best, similar to other joint topic models (Zhao et al., 2010; Mukherjee et al., 2013).", "Classifying M vs H levels The second part of the classification, into the M and H levels, is driven by informative priors with seed words and seed trigrams.", "In the graphical model, r is the latent variable that represents this classification, and π is the distribution over r; γ is a non-informative prior for π, and β^l is an informative prior for each SD level via seed words.", "For example, we assign a high value for the seed word 'acne' in β^H, and a low value for 'My name is'.", "This approach is the same as in joint models of topic and sentiment (Jo and Oh, 2011; Kim et al., 2013).", "Inference For posterior inference of SDTM, we use collapsed Gibbs sampling, which integrates out the latent random variables ω, π, θ, and φ.", "Then we only need to compute y, r and z for each tweet.", "We compute the full conditional distribution p(y_ct = j′, r_ct = l′, z_ct = k′ | y_−ct, r_−ct, z_−ct, w, x) for tweet ct as follows: p(y_ct = 0, z_ct = k′ | y_−ct, r_−ct, z_−ct, w, x) ∝ [exp(λ_0 · x_ct) / Σ_{j=0}^{1} exp(λ_j · x_ct)] · g(c, t, G, k′), and p(y_ct = 1, r_ct = l′, z_ct = k′ | y_−ct, r_−ct, z_−ct, w, x) ∝ [exp(λ_1 · x_ct) / Σ_{j=0}^{1} exp(λ_j · x_ct)] · (γ_{l′} + n^{(−ct)}_{cl′}) · g(c, t, l′, k′), where y_−ct, r_−ct, z_−ct are y, r, z without tweet ct, m_{ctk′(·)} is the marginalized sum over words v of m_{ctk′v}, and g(c, t, l′, k′) = [Γ(Σ_{v=1}^{V} (β^{l′}_v + n^{l′(−ct)}_{k′v})) / Γ(Σ_{v=1}^{V} (β^{l′}_v + n^{l′(−ct)}_{k′v}) + m_{ctk′(·)})] · [(α_{k′} + n^{l′(−ct)}_{ck′}) / (Σ_{k=1}^{K} (α_k + n^{l′}_{ck}))] · Π_{v=1}^{V} [Γ(β^{l′}_v + n^{l′(−ct)}_{k′v} + m_{ctk′v}) / Γ(β^{l′}_v + n^{l′(−ct)}_{k′v})].", "Data Collection and Annotation To test our self-disclosure topic model, we use a large dataset of conversations consisting of Tweets over three years, such that we can analyze the relationship between self-disclosure behavior and conversation frequency and length over time.", "We chose to crawl Twitter because it offers a practical and large source of conversations (Ritter et al., 2010).", "Others have also analyzed Twitter conversations for natural language and social media
Data Collection and Annotation
To test the self-disclosure topic model, we use a large dataset of Twitter conversations spanning three years, so that we can analyze the relationship between self-disclosure behavior and conversation frequency and length over time. We chose to crawl Twitter because it offers a practical and large source of conversations (Ritter et al., 2010). Others have also analyzed Twitter conversations for natural language and social media research (boyd et al., 2010; Danescu-Niculescu-Mizil et al., 2011), but we collect conversations from the same set of dyads over several months for a unique longitudinal dataset. We also make sure that each conversation has at least five tweets and that each dyad has at least twenty conversations.

Table 5: Dataset of Twitter conversations (conversations consist of five or more tweets each; dyads have twenty or more conversations each).
Users 101,686 | Dyads 61,451 | Conversations 1,956,993 | Tweets 17,178,638

Collecting Twitter conversations
We define a Twitter conversation as a chain of tweets in which two users consecutively reply to each other's tweets using the Twitter reply button. We initialize the set of users by randomly sampling thirteen users who reply to other users in English from the Twitter public streams. We then crawl each user's public tweets and look at the users who are mentioned in those tweets. This is a breadth-first search over the network defined by users as nodes and conversations as edges. We run this search for dyads to a depth of four and filter out users who tweet in a non-English language, using an open-source tool for detecting English tweets. To protect users' privacy, we replace Twitter user ids, usernames, and URLs in tweets with random strings. The dataset consists of 101,686 users, 61,451 dyads, 1,956,993 conversations, and 17,178,638 tweets posted between August 2007 and July 2013; Table 5 summarizes it.

Annotating self-disclosure level
To measure the accuracy of our model, we randomly sample 301 conversations, each with ten or fewer tweets, and ask three judges, fluent in English and graduate students/researchers, to annotate each tweet with its level of self-disclosure. The judges first read and discussed the definitions and examples of self-disclosure levels in (Barak and Gluck-Ofri, 2007), then worked separately on a Web-based platform. The annotation yields 122 G-level conversations, 147 M-level conversations, and 32 H-level conversations, and inter-rater agreement by Fleiss' kappa (Fleiss, 1971) is 0.68, which indicates substantial agreement (Landis and Koch, 1977).

Classification of Self-Disclosure Level
This section describes experiments and results for SDTM and several other methods on classifying self-disclosure level. We start with the annotated dataset of Section 4.2, in which each tweet is annotated with an SD level. We then aggregate all tweets of a conversation and compute the proportion of tweets at each SD level. When the proportion of tweets at the M or H level is equal to or greater than 0.2, we assign the conversation the level with the larger proportion; when both proportions are below 0.2, we assign level G. We set the threshold to 0.2 because a conversation containing tweets with H or M levels of self-disclosure usually starts with a greeting or a general comment, and contains one or more questions or comments before or after the self-disclosure tweet.
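As a minimal sketch, the conversation-level labeling rule just described could be implemented as follows; the function name and the tie-breaking choice when the M and H proportions are exactly equal are our assumptions, since the text does not specify them.

```python
def conversation_sd_level(tweet_levels, threshold=0.2):
    """Aggregate per-tweet SD labels ('G', 'M', 'H') into one label
    for the whole conversation, using the 0.2 proportion threshold."""
    n = len(tweet_levels)
    p_m = tweet_levels.count('M') / n
    p_h = tweet_levels.count('H') / n
    if max(p_m, p_h) >= threshold:
        # Take the level with the larger proportion (ties favor H here).
        return 'H' if p_h >= p_m else 'M'
    return 'G'

# Example: M and H each occur once in six tweets (proportion 0.17 < 0.2),
# so the conversation falls back to the G level.
print(conversation_sd_level(['G', 'G', 'M', 'G', 'H', 'G']))
```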
We compare SDTM with the following methods for classifying conversations by SD level:
• LDA (Blei et al., 2003): a Bayesian topic model, with each conversation treated as a document. Used in previous work (Bak et al., 2012).
• MedLDA (Zhu et al., 2012): a supervised topic model for document classification. Each conversation is treated as a document, and the response variable is mapped to an SD level.
• LIWC (Tausczik and Pennebaker, 2010): word counts of particular categories. Used in previous work (Houghton and Joinson, 2012).
• Bag of words + bigrams + trigrams (BOW+): bag-of-words, bigram, and trigram features, excluding features that appear only once or twice.
• Seed words and trigrams (SEED): occurrences of the seed words/trigrams from SECRET described in Section 3.3.
• SDTM with seed words from annotated tweets (SDTM−): to compare against SDTM with seed words from SECRET, this variant uses seed words from the annotated data described in Section 2.4.
• ASUM (Jo and Oh, 2011): a joint model of sentiments and topics. We map each SD level to one sentiment and use the same seed words/trigrams from SECRET as in SDTM. Used in previous work (Bak et al., 2012).
• First-person pronouns (FirstP): occurrences of the first-person pronouns described in Section 3.2. To identify them, we tag parts of speech in each tweet with the Twitter POS tagger (Owoputi et al., 2013).
• First-person pronouns + seed words/trigrams (FP+SE1): first-person pronouns plus seed words/trigrams from SECRET.
• Two-stage classifier (FP+SE2): a two-stage classifier with first-person pronouns and seed words/trigrams from SECRET. The first stage identifies G using first-person pronouns; the second stage uses the seed words and trigrams to separate the M and H levels.
• SDTM: our model with first-person pronouns and seed words/trigrams from SECRET.

Table 6: SD level classification accuracies and F-measures on the annotated data. Acc is accuracy, and G F1 is the F-measure for classifying the G level; Avg F1 is the macro-averaged value of G F1, M F1, and H F1. SDTM outperforms all other methods compared, and the difference between SDTM and FirstP is statistically significant (p < 0.05 for accuracy, p < 0.0001 for Avg F1).

SEED, LIWC, LDA, and FirstP cannot be used directly for classification, so we feed the outputs of each of those models as features to a maximum entropy model. BOW+ uses an SVM with a radial basis kernel, which performs better than all other settings we tried, including maximum entropy. We split the data randomly into 80/20 for train/test. We run MedLDA, ASUM, and SDTM 20 times each and report the average accuracies and per-level F-measures. We run LDA and MedLDA with various numbers of topics from 80 to 140; 120 topics performs best, so we set 120 topics for LDA, MedLDA, and ASUM, and 60, 40, and 40 topics for SDTM's K^G, K^M, and K^H respectively, the best-performing setting in the range from (40, 40, 40) to (60, 60, 60). We assume that a conversation has few topics and self-disclosure levels, so we set α = γ = 0.1 (Tang et al., 2014). To incorporate the seed words and trigrams into ASUM and SDTM, we initialize β^G, β^M, and β^H differently: we assign a high value of 2.0 to each seed word and trigram of the corresponding level, a low value of 10^-6 to each word that is a seed word for another level, and a default value of 0.01 to all other words, following previous work (Jo and Oh, 2011; Kim et al., 2013).
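A minimal sketch of this asymmetric prior construction, assuming a vocabulary list and per-level seed sets with seed trigrams treated as single vocabulary entries; the function and variable names are our own, not the authors' code.

```python
import numpy as np

def build_beta_priors(vocab, seeds, high=2.0, low=1e-6, default=0.01):
    """Build asymmetric Dirichlet priors beta^G, beta^M, beta^H.

    vocab: list of word types (seed trigrams treated as single types)
    seeds: dict mapping levels (here 'M' and 'H') to their seed term sets
    """
    levels = ('G', 'M', 'H')
    beta = {l: np.full(len(vocab), default) for l in levels}
    for i, w in enumerate(vocab):
        for l, seed_set in seeds.items():
            if w in seed_set:
                beta[l][i] = high               # boost at the seed's own level
                for other in levels:
                    if other != l:
                        beta[other][i] = low    # suppress at the other levels
    return beta
```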
As Table 6 shows, SDTM performs better than the other methods in both accuracy and F-measure. LDA and MedLDA generally show the lowest performance, which is not surprising given that these models are quite general and not tuned specifically for this type of semi-supervised classification task. BOW+, which uses simple word features, also does not perform well, with an especially low F-measure for the H level. LIWC and SEED perform better than LDA, but both have quite low F-measures for the G and H levels. ASUM classifies the H level better than the other baselines, confirming the effectiveness of a topic modeling approach to this difficult task, but it still falls short of SDTM. FirstP shows a good F-measure for the G level, but its H-level F-measure is quite low, even lower than SEED's. Combining first-person pronouns with the seed words and trigrams (FP+SE1) performs better than either feature set alone, and the two-stage classifier (FP+SE2), whose approach is similar to SDTM's, does better still. Finally, SDTM classifies the G and M levels with accuracy similar to FirstP, FP+SE1, and FP+SE2, but it significantly improves accuracy on the H level compared to all other methods.

Relations of Self-Disclosure and Conversation Behaviors
In this section, we investigate whether there is a relationship between self-disclosure and conversation behaviors over time. Self-disclosure is one way to maintain and improve relationships (Jourard, 1971; Joinson and Paine, 2007), so changes in two people's intimacy over time are related to the self-disclosure in their conversations. However, it is hard to measure intimacy between users in a large-scale online social network directly, so we use conversation behaviors such as conversation frequency and length, which can be treated as proxies for intimacy between two people (Emmers-Sommer, 2004; Bak et al., 2012). With SDTM, we can automatically classify the SD level of a large number of conversations, so we investigate whether a similar relationship holds between self-disclosure in conversations and subsequent conversation behaviors with the same partner on Twitter. To compare conversation behaviors over time, we divide the conversations into two sets for each dyad: the initial period includes conversations from the dyad's first conversation through 20 days later, and the subsequent period includes conversations during the following 10 days. We compute the proportion of conversations at each SD level for each dyad in the initial and subsequent periods. More specifically, we ask the following three questions:
1. If a dyad shows high conversation frequency in a particular time period, do they display higher SD in their subsequent conversations?
2. If a dyad displays a high SD level in their conversations in a particular time period, are their subsequent conversations longer?
3. If a dyad displays a high overall SD level, do their conversations increase in length over time more than those of dyads with a lower overall SD level?

Experiment Setup
We first run SDTM on all of our Twitter conversation data with 150, 120, and 120 topics for K^G, K^M, and K^H respectively. The hyperparameters are the same as in Section 5. To handle the large dataset, we employ a distributed algorithm (Newman et al., 2009) and run it with 28 threads. Table 7 shows some of the topics that are prominent in each SD level by KL divergence. As expected, the G level includes general topics such as food, celebrities, soccer, and IT devices; the M level includes personal communication and birthdays; and the H level includes sickness and profanity.

Table 7: Topics prominent at each SD level (topic number and top words).
G level: topic 101 (chocolate, butter, good, cake, peanut, milk, sugar, cream); topic 184 (obama, he's, romney, vote, right, president, people, good); topic 176 (league, win, game, season, team, cup, city, arsenal).
M level: topic 36 (send, email, i'll, sent, dm, address, know, check); topic 104 (twitter, follow, tumblr, tweet, following, account, fb, followers); topic 82 (going, party, weekend, day, night, dinner, birthday).
H level: topic 113 (ass, bitch, fuck, yo, shit, fucking, lmao); topic 33 (better, sick, feel, throat, cold, hope, pain); topic 19 (lips, kisses, love, smiles, softly, hand, eyes).

We define a new measurement, the SD level score of a dyad in a period: a weighted sum over the dyad's conversations in that period, with the SD levels G, M, and H mapped to 1, 2, and 3, respectively.
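A minimal sketch of this score; the paper defines it only as a weighted sum over a dyad's conversations, so whether it is further normalized (for example, by the number of conversations in the period) is an open assumption here, and the names are ours.

```python
LEVEL_WEIGHT = {'G': 1, 'M': 2, 'H': 3}

def sd_level_score(conversation_levels):
    """SD level score of a dyad in a period: a weighted sum over its
    conversations, with levels G, M, H mapped to 1, 2, 3."""
    return sum(LEVEL_WEIGHT[level] for level in conversation_levels)

# Example: two G-level conversations and one H-level conversation
# give a score of 1 + 1 + 3 = 5 for the period.
print(sd_level_score(['G', 'G', 'H']))
```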
Does high frequency of conversation lead to more self-disclosure?
We investigate whether initial conversation frequency is correlated with the SD level in the subsequent period. We run a linear regression with the initial conversation frequency as the independent variable and the SD level in the subsequent period as the dependent variable. The regression coefficient is 0.0020 with a low p-value (p < 0.0001). Figure 5 shows the scatter plot, and the slope of the regression line is positive.

Figure 5: Relationship between initial conversation frequency and subsequent SD level. The solid line is the linear regression line, and the coefficient is 0.0020 with p < 0.0001, which shows a significant positive relationship.

Does high self-disclosure lead to longer conversations?
We next investigate the effect of the self-disclosure level on conversation length. We run a linear regression with the initial SD level score as the independent variable and the rate of change in conversation length between the initial and subsequent periods as the dependent variable. Conversation length is measured as the number of tweets in a conversation. The independent variable's coefficient is 0.048 with a low p-value (p < 0.0001). Figure 6 shows the scatter plot with the regression line, and the slope of the regression line is positive.

We then investigate changes in conversation length over time for three dyad groups, low, medium, and high, defined by overall SD level score. Figure 7 shows the results of this investigation. First, conversations are generally longer when the SD level is high. This phenomenon is also observed in Figure 6, but here we can see it as a long-term, persistent pattern. Second, conversation length increases consistently and significantly for the high and medium groups, but for the low-SD group there is no significant increase in conversation length over time.

Figure 7: We divide dyads into three groups by SD level score: low, medium, and high. Conversation length noticeably increases over time in the medium and high groups, but only slightly in the low group.
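Both analyses are ordinary least-squares fits with a single predictor; a minimal SciPy sketch is below. The variable names are ours, and the arrays stand in for the per-dyad measurements described above.

```python
from scipy import stats

def slope_and_pvalue(x, y):
    """OLS regression of y on x, reporting the slope and its p-value,
    as used for both dyad-level analyses above."""
    result = stats.linregress(x, y)
    return result.slope, result.pvalue

# E.g., x = initial conversation frequency per dyad,
#       y = SD level score in the subsequent period per dyad.
```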
Related Work
Prior work on quantitatively analyzing self-disclosure has relied on user surveys (Ledbetter et al., 2011; Trepte and Reinecke, 2013) or human annotation (Barak and Gluck-Ofri, 2007; Courtney Walton and Rice, 2013). These methods take much time and effort, so they are not suitable for large-scale studies. In prior work closest to ours, Bak et al. (2012) showed that a topic model can be used to identify self-disclosure, but that work applies a two-step process in which a basic topic model is first applied to find the topics, and the topics are then post-processed for a binary classification of self-disclosure. We improve upon this work by applying a single unified model of topics and self-disclosure for high accuracy in classifying the three levels of self-disclosure. Subjectivity, the expression of opinions (Pang and Lee, 2008; Wiebe et al., 2004), is related to self-disclosure, but the two are different dimensions of linguistic behavior: many high self-disclosure tweets are indeed subjective, but the annotated dataset also contains counterexamples. The tweet "England manager is Roy Hodgson." is low self-disclosure and low subjectivity, "I have barely any hair left." is high self-disclosure but low subjectivity, and "Senator stop lying!" is low self-disclosure but high subjectivity.

Conclusion and Future Work
In this paper, we have presented the self-disclosure topic model (SDTM) for discovering topics and classifying SD levels in Twitter conversation data. We devised a set of effective seed words and trigrams mined from a dataset of secrets, and we annotated Twitter conversations to build a ground-truth dataset for SD level. With the annotated data, we showed that SDTM outperforms previous methods in classification accuracy and F-measure. We publicly release the source code of SDTM and the dataset, including the annotated Twitter conversations and SECRET, at http://uilab.kaist.ac.kr/research/EMNLP2014. We also analyzed the relationship between SD level and conversation behaviors over time. We found a positive correlation between initial SD level and subsequent conversation length; dyads show a higher level of SD if they initially display high conversation frequency; and dyads with overall medium and high SD levels have longer conversations over time. These results reinforce previous findings in social psychology with more robust evidence from a large-scale dataset, and they show the effectiveness of computationally analyzing SD behavior.

There are several future directions for this research. First, we can improve our modeling for higher accuracy and better interpretability: SDTM considers only first-person pronouns and topics, and there are other linguistic patterns that humans can identify but that are not captured by pronouns and topics. Second, the number of topics for each level is varied, so we can explore nonparametric topic models (Teh et al., 2006), which infer the number of topics from the data. Third, we can look at the relationship between self-disclosure behavior and general online social network usage beyond conversations. We will explore these directions in our future work.
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "2.4", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "5", "6", "6.1", "6.3", "7", "8" ], "paper_header_content": [ "Introduction", "Self-Disclosure", "Self-disclosure (SD) level", "G Level of Self-Disclosure", "M Level of Self-Disclosure", "H Level of Self-Disclosure", "Model", "Classifying G vs M/H levels", "Classifying M vs H levels", "Inference", "Data Collection and Annotation", "Collecting Twitter conversations", "Annotating self-disclosure level", "Classification of Self-Disclosure Level", "Relations of Self-Disclosure and Conversation Behaviors", "Experiment Setup", "Does high self-disclosure lead to longer conversations?", "Related Work", "Conclusion and Future Work" ] }
GEM-SciDuet-train-75#paper-1188#slide-8
Limitations in Previous Works
Asking questions to participants Cons) Biased by participants' memory Analyzing dataset by human Cons) Cannot apply to large dataset Experiments held in lab or artificial environment Cons) Not real/naturally occurring dataset
Asking questions to participants Cons) Biased by participants' memory Analyzing dataset by human Cons) Cannot apply to large dataset Experiments held in lab or artificial environment Cons) Not real/naturally occurring dataset
[]
GEM-SciDuet-train-75#paper-1188#slide-9
1188
Self-disclosure topic model for classifying and analyzing Twitter conversations
Self-disclosure, the act of revealing oneself to others, is an important social behavior that strengthens interpersonal relationships and increases social support. Although there are many social science studies of self-disclosure, they are based on manual coding of small datasets and questionnaires. We conduct a computational analysis of self-disclosure with a large dataset of naturally-occurring conversations, a semi-supervised machine learning algorithm, and a computational analysis of the effects of self-disclosure on subsequent conversations. We use a longitudinal dataset of 17 million tweets, all of which occurred in conversations that consist of five or more tweets directly replying to the previous tweet, and from dyads with twenty or more conversations each. We develop the self-disclosure topic model (SDTM), a variant of latent Dirichlet allocation (LDA), for automatically classifying the level of self-disclosure of each tweet. We take the results of SDTM and analyze the effects of self-disclosure on subsequent conversations. Our model significantly outperforms several comparable methods on classifying the level of self-disclosure, and the analysis of the longitudinal data using SDTM uncovers significant and positive correlations between self-disclosure and conversation frequency and length.
paper_content:
Introduction
Self-disclosure is an important and pervasive social behavior. People disclose personal information about themselves to improve and maintain relationships (Jourard, 1971; Joinson and Paine, 2007). (This work was done when JinYeong Bak was a visiting student at Microsoft Research, Beijing, China.) A common instance of self-disclosure is the start of a conversation with an exchange of names and additional self-introductions. Another example of self-disclosure, shown in Figure 1c, where the disclosed information concerns a family member's serious illness, is much more personal than an exchange of names. In this paper, we seek to understand this important social behavior using large-scale Twitter conversation data, automatically classifying the level of self-disclosure using machine learning and correlating the patterns with conversational behaviors that can serve as proxies for measuring intimacy between two conversational partners. Twitter conversation data, explained in more detail in Section 4.1, enable an extremely large-scale study of naturally-occurring self-disclosure behavior compared to traditional social science studies. One challenge of such a large-scale study, though, is the lack of labeled ground-truth data on self-disclosure level; naturally-occurring Twitter conversations do not come tagged with the level of self-disclosure of each conversation. To overcome that challenge, we propose a semi-supervised machine learning approach using probabilistic topic modeling. Our self-disclosure topic model (SDTM) assumes that self-disclosure behavior can be modeled using a combination of simple linguistic features (e.g., pronouns) with automatically discovered semantic themes (i.e., topics). For instance, the utterance "I am finally through with this disastrous relationship" uses a first-person pronoun and contains a topic about personal relationships. In comparison with various other models, SDTM shows the highest accuracy, and the resulting self-disclosure patterns reveal differences in conversation frequency and length over time. Our contributions to the research community include the following:
• We present key features and prior knowledge for identifying self-disclosure level, and show their relevance with experimental results (Sec. 2).
• We present a topic model that explicitly includes the level of self-disclosure in a conversation using linguistic features and latent semantic topics (Sec. 3).
• We collect a large dataset of Twitter conversations over three years and annotate a small subset with self-disclosure level (Sec. 4).
• We compare the classification accuracy of SDTM with other models and show that it performs best (Sec. 5).
• We correlate the self-disclosure patterns and conversation behaviors to show that there is a significant relationship over time (Sec. 6).

Self-Disclosure
In this section, we look at the social science literature for a definition of the levels of self-disclosure. Using that definition, we devise an approach to automatically identify the levels of self-disclosure in a large corpus of OSN conversations. We discuss three approaches: first, using first-person pronoun features; second, extracting seed words and phrases from the Twitter conversation corpus; and third, extracting seed words and phrases from an external corpus of anonymously posted secrets. We demonstrate the efficacy of these approaches with an annotated corpus.

Self-disclosure (SD) level
To analyze self-disclosure, researchers categorize self-disclosure language into three levels: G (general) for no disclosure, M for medium disclosure, and H for high disclosure (Vondracek and Vondracek, 1971; Barak and Gluck-Ofri, 2007).

G Level of Self-Disclosure
An obvious clue of self-disclosure is the use of first-person pronouns. For example, phrases such as 'I live' or 'My name is' indicate that the utterance contains personal information. In previous research, the simple method of counting first-person pronouns was used to measure the degree of self-disclosure (Joinson, 2001; Barak and Gluck-Ofri, 2007). Consequently, the absence of a first-person pronoun signals that the utterance belongs in the G level of self-disclosure. We verify this pattern with a dataset of tweets annotated with the G, M, and H levels: we divide the annotated tweets into two classes, G and M/H, and compute the mutual information of each unigram, bigram, or trigram feature to see which features are most discriminative. As Table 1 shows, 18 out of the 30 most discriminative features contain a first-person pronoun.

M Level of Self-Disclosure
Utterances at the M level include two types: 1) information related to past events and future plans, and 2) general information about oneself (Barak and Gluck-Ofri, 2007). For the former, we add 'I have been' and 'I will' as seed trigrams. For the latter, we use seven types of information generally accepted to be personally identifiable information (McCallister, 2010), as listed in the left column of Table 2. To find the appropriate trigrams for those, we take the Twitter conversation data (described in Section 4.1) and look for trigrams that begin with 'I' or 'my' and occur more than 200 times. We then check each one to see whether it is related to any of the seven types listed in the table. As a result, we find 57 seed trigrams for the M level.

H Level of Self-Disclosure
Utterances at the H level express secretive wishes or sensitive information that exposes oneself or someone close (Barak and Gluck-Ofri, 2007). These are generally kept as secrets. With this intuition, we crawled 26,523 posts from the Six Billion Secrets site, where users post secrets anonymously. We call this external dataset SECRET. Unlike the G and M levels, evidence of the H level of self-disclosure tends to be topical, such as physical appearance, mental and physical illnesses, and family problems, so we take an approach of fitting a topic model driven by seed words.
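A minimal sketch of the mutual information computation used both for ranking the n-gram features above and for extracting the SECRET seed words; the counting scheme follows the contingency-table formulation of Manning et al. (2008), and the function and variable names are our own.

```python
import math

def mutual_information(docs, labels, feature):
    """MI between a binary feature (n-gram present in the text or not)
    and a binary class label, e.g. G vs. M/H or SECRET vs. random tweets.

    docs:   list of sets of n-grams, one set per text
    labels: list of 0/1 class labels, aligned with docs
    """
    n = len(docs)
    n11 = sum(1 for d, y in zip(docs, labels) if feature in d and y == 1)
    n10 = sum(1 for d, y in zip(docs, labels) if feature in d and y == 0)
    n01 = sum(1 for d, y in zip(docs, labels) if feature not in d and y == 1)
    n00 = n - n11 - n10 - n01
    mi = 0.0
    for n_ef, has_feature, is_class1 in ((n11, 1, 1), (n10, 1, 0),
                                         (n01, 0, 1), (n00, 0, 0)):
        if n_ef == 0:
            continue  # empty cells contribute nothing to the sum
        p_ef = n_ef / n
        p_e = (n11 + n10) / n if has_feature else (n01 + n00) / n
        p_f = (n11 + n01) / n if is_class1 else (n10 + n00) / n
        mi += p_ef * math.log2(p_ef / (p_e * p_f))
    return mi
```

Ranking every candidate n-gram by this score and keeping the top entries reproduces the selection step described in the text.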
GEM-SciDuet-train-75#paper-1188#slide-9
Research Questions
How can we find self-disclosure in a large & naturally occurring corpus automatically? What are the relations between self-disclosure and social dynamics in a large & naturally occurring corpus? Q1) Does high self-disclosure lead to longer conversations? Q2) Is there a difference in conversation length patterns over time depending on overall self-disclosure level?
How can we find self-disclosure in a large & naturally occurring corpus automatically? What are the relations between self-disclosure and social dynamics in a large & naturally occurring corpus? Q1) Does high self-disclosure lead to longer conversations? Q2) Is there a difference in conversation length patterns over time depending on overall self-disclosure level?
[]
GEM-SciDuet-train-75#paper-1188#slide-10
1188
Table 5 : Dataset of Twitter conversations.", "We chose conversations consisting of five or more tweets each.", "We chose dyads with twenty or more conversations.", "Users Dyads research (boyd et al., 2010; Danescu-Niculescu-Mizil et al., 2011) , but we collect conversations from the same set of dyads over several months for a unique longitudinal dataset.", "We also make sure that each conversation is at least five tweets, and that each dyad has at least twenty conversations.", "Collecting Twitter conversations We define a Twitter conversation as a chain of tweets where two users are consecutively replying to each other's tweets using the Twitter reply button.", "We initialize the set of users by randomly sampling thirteen users who reply to other users in English from the Twitter public streams 3 .", "Then we crawl each user's public tweets, and look at users who are mentioned in those tweets.", "It is a breadth-first search in the network defined by users as nodes and edges as conversations.", "We run this search for dyads until the depth of four, and filter out users who tweet in a non-English language.", "We use an open source tool for detecting English tweets 4 .", "To protect users' privacy, we replace Twitter userid, usernames and url in tweets with random strings.", "This dataset consists of 101,686 users, 61,451 dyads, 1,956,993 conversations and 17,178,638 tweets which were posted between August 2007 to July 2013.", "Table 5 summarizes the dataset.", "Annotating self-disclosure level To measure the accuracy of our model, we randomly sample 301 conversations, each with ten or fewer tweets, and ask three judges, fluent in English and graduate students/researchers, to annotate each tweet with the level of self-disclosure.", "Judges first read and discussed the definitions and examples of self-disclosure level shown in (Barak and Gluck-Ofri, 2007) , then they worked separately on a Web-based platform.", "As a result of annotation, there are 122 G level converstaions, 147 M level and 32 H level con- versations, and inter-rater agreement using Fleiss kappa (Fleiss, 1971 ) is 0.68, which is substantial agreement result (Landis and Koch, 1977) .", "Classification of Self-Disclosure Level This section describes experiments and results of SDTM as well as several other methods for classification of self-disclosure level.", "We first start with the annotated dataset in section 4.2 in which each tweet is annotated with SD level.", "We then aggregate all of the tweets of a conversation, and we compute the proportions of tweets in each SD level.", "When the proportion of tweets at M or H level is equal to or greater than 0.2, we take the level of the larger proportion and assign that level to the conversation.", "When the proportions of tweets at M or H level are both less than 0.2, we assign G to the SD level.", "The reason for setting 0.2 as the threshold is that a conversation containing tweets with H or M level of selfdisclosure usually starts with a greeting or a general comment, and contains one or more questions or comments before or after the self-disclosure tweet.", "We compare SDTM with the following methods for classifying conversations for SD level: • LDA (Blei et al., 2003) : A Bayesian topic model.", "Each conversation is treated as a document.", "Used in previous work (Bak et al., 2012) .", "• MedLDA (Zhu et al., 2012) : A supervised topic model for document classification.", "Each conversation is treated as a document and response variable can be mapped to a SD level.", "• LIWC 
(Tausczik and Pennebaker, 2010): Word counts of particular categories 5 .", "Used in previous work (Houghton and Joinson, 2012).", "• Bag of Words + Bigrams + Trigrams (BOW+): A bag of words, bigram and trigram features.", "We exclude features that appear only once or twice.", "• Seed words and trigrams (SEED): Occurrences of seed words/trigrams from SECRET which are described in section 3.3.", "• SDTM with seed words from annotated Tweets (SDTM−): To compare with SDTM below using seed words from SECRET, this uses seed words from the annotated data described in section 2.4.", "• ASUM (Jo and Oh, 2011 ): A joint model of sentiments and topics.", "We map each SD level to one sentiment and use the same seed words/trigrams from SECRET as in SDTM below.", "Used in previous work (Bak et al., 2012) .", "• First-person pronouns (FirstP): Occurrence of first-person pronouns which are described in section 3.2.", "To identify first-person pronouns, we tagged parts of speech in each tweet with the Twitter POS tagger (Owoputi et al., 2013) .", "• First-person pronouns + Seed words/trigrams (FP+SE1): First-person pronouns and seed words/trigrams from SECRET.", "• Two stage classifier with First-person pronouns + Seed words/trigrams (FP+SE2): A Method Acc G F 1 M F 1 H F Table 6 : SD level classification accuracies and Fmeasures using annotated data.", "Acc is accuracy, and G F 1 is F-measure for classifying the G level.", "Avg F 1 is the macroaveraged value of G F 1 , M F 1 and H F 1 .", "SDTM outperforms all other methods compared.", "The difference between SDTM and FirstP is statistically significant (p-value < 0.05 for accuracy, < 0.0001 for Avg F 1 ).", "two stage classifier with first-person pronouns and seed words/trigrams from SE-CRET.", "In the first stage, the classifier identifies G with first-person pronouns.", "Then in the second stage, the classifier uses seed words and trigrams to identify M and H levels.", "• SDTM: Our model with first-person pronouns and seed words/trigrams from SE-CRET.", "SEED, LIWC, LDA and FirstP cannot be used directly for classification, so we use Maximum entropy model with outputs of each of those models as features 6 .", "BOW+ uses SVM with a radial basis kernel which performs better than all other settings tried including maximum entropy.", "We split the data randomly into 80/20 for train/test.", "We run MedLDA, ASUM and SDTM 20 times each and compute the average accuracies and F-measure for each level.", "We run LDA and MedLDA with various number of topics from 80 to 140, and 120 topics shows best outputs.", "So we set 120 topics for LDA, MedLDA and ASUM, 60; 40; 40 topics for SDTM K G , K M and K H respectively which is best perform from 40; 40; 40 to 60; 60; 60 topics.", "We assume that a conversation has few topics and self-disclosure levels, so we set α = γ = 0.1 (Tang et al., 2014) .", "To incorporate the seed words and trigrams into ASUM and SDTM, we initialize β G , β M and β H differently.", "We assign a high value of 2.0 for each seed word and trigram for that level, and a low value of 10 −6 for each word that is a seed word for another level, and a default value of 0.01 for all other words.", "This approach is the same as previous papers (Jo and Oh, 2011; Kim et al., 2013) .", "As Table 6 shows, SDTM performs better than the other methods for accuracy as well as Fmeasure.", "LDA and MedLDA generally show the lowest performance, which is not surprising given these models are quite general and not tuned specifically for this type of semi-supervised 
classification task.", "BOW which is simple word features also does not perform well, showing especially low F-measure for the H level.", "LIWC and SEED perform better than LDA, but these have quite low F-measure for G and H levels.", "ASUM shows better performance for classifying H level than others, confirming the effectiveness of a topic modeling approach to this difficult task, but not as well as SDTM.", "FirstP shows good F-measure for the G level, but the H level F-measure is quite low, even lower than SEED.", "Combining first-person pronouns and seed words and trigrams (FP+SE1) shows better than each feature alone, and the two stage classifier (FP+SE2) which is a similar approach taken in SDTM shows better results.", "Finally, SDTM classifies G and M level at a similar accuracy with FirstP, FP+SE1 and FP+SE2, but it significantly improves accuracy for the H level compared to all other methods.", "Relations of Self-Disclosure and Conversation Behaviors In this section, we investigate whether there is a relationship between self-disclosure and conversation behaviors over time.", "Self-disclosure is one way to maintain and improve relationships (Jourard, 1971; Joinson and Paine, 2007) .", "So two people's intimacy changes over time has relationship with self-disclosure in their conversation.", "However, it is hard to identify intimacy between users in large scale online social network.", "So we choose conversation behaviors such as conversation frequency and length which can be treated as proxies for measuring intimacy between two people (Emmers- Sommer, 2004; Bak et al., 2012) .", "With SDTM, we can automatically classify the SD level of a large number of conversations, so we investigate whether there is a similar relationship between self-disclosure in conversations and subsequent conversation behaviors with the same partner on Twitter.", "For comparing conversation behaviors over time, we divided the conversations into two sets for each dyad.", "For the initial period, we include conversations from the dyad's first conversation to 20 days later.", "And for the subsequent period, we include conversations during the subsequent 10 days.", "We compute proportions of conversation for each SD level for each dyad in the initial and subsequent periods.", "More specifically, we ask the following three questions: 1.", "If a dyad shows high conversation frequency at a particular time period, would they display higher SD in their subsequent conversations?", "2.", "If a dyad displays high SD level in their conversations at a particular time period, would their subsequent conversations be longer?", "3.", "If a dyad displays high overall SD level, would their conversations increase in length over time more than dyads with lower overall SD level?", "Experiment Setup We first run SDTM with all of our Twitter conversation data with 150; 120; 120 topics for SDTM K G , K M and K H respectively.", "The hyper-parameters are the same as in section 5.", "To handle a large dataset, we employ a distributed algorithm (Newman et al., 2009) , and run with 28 threads.", "Table 7 shows some of the topics that were prominent in each SD level by KL-divergence.", "As expected, G level includes general topics such as food, celebrity, soccer and IT devices, M level includes personal communication and birthday, and finally, H level includes sickness and profanity.", "We define a new measurement, SD level score for a dyad in the period, which is a weighted sum of each conversation with SD levels mapped to 1, 2, and 3, 
for the levels G, M, and H, respectively.", "Figure 5 : Relationship between initial conversation frequency and subsequent SD level.", "The solid line is the linear regression line, and the coefficient is 0.0020 with p < 0.0001, which shows a significant positive relationship.", "Subsequent SD level 6.2 Does high frequency of conversation lead to more self-disclosure?", "We investigate whether the initial conversation frequency is correlated with the SD level in the subsequent period.", "We run linear regression with the initial conversation frequency as the independent variable, and SD level in the subsequent period as the dependent variable.", "The regression coefficient is 0.0020 with low pvalue (p < 0.0001).", "Figure 5 shows the scatter plot.", "We can see that the slope of the regression line is positive.", "Does high self-disclosure lead to longer conversations?", "Now we investigate the effect of the selfdisclosure level to conversation length.", "We run linear regression with the intial SD level score as the independent variable, and the rate of change in conversation length between initial period and subsequent period as the dependent variable.", "Conversation length is measured by the number of tweets in a conversation.", "The result of regression is that the independent variable's coefficient is 0.048 with a low p-value (p < 0.0001).", "Figure 6 shows the scatter plot with the regression line, and we can see that the slope of regression line is positive.", "H level 101 184 176 36 104 82 113 33 19 chocolate obama league send twitter going ass better lips butter he's win email follow party bitch sick kisses good romney game i'll tumblr weekend fuck feel love cake vote season sent tweet day yo throat smiles peanut right team dm following night shit cold softly milk president cup address account dinner fucking hope hand sugar people city know fb birthday lmao pain eyes cream good arsenal check followers Now we investigate the conversation length changes over time with three groups, low, medium, and high, by overall SD level.", "Then we investigate changes in conversation length over time.", "Figure 7 shows the results of this investigation.", "First, conversations are generally lengthier when SD level is high.", "This phenomenon is also ob- We divide dyads into three groups by SD level score as low, medium, and high.", "Conversation length noticeably increases over time in the medium and high groups, but only slight in the low group.", "served in figure 6 , but here we can see it as a long-term persistent pattern.", "Second, conversation length increases consistently and significantly for the high and medium groups, but for the low SD group, there is not a significant increase of conversation length over time.", "G level M level Related Work Prior work on quantitatively analyzing selfdisclosure has relied on user surveys (Ledbetter et al., 2011; Trepte and Reinecke, 2013) or human annotation (Barak and Gluck-Ofri, 2007; Courtney Walton and Rice, 2013) .", "These methods consume much time and effort, so they are not suitable for large-scale studies.", "In prior work closest to ours, Bak et al.", "(2012) showed that a topic model can be used to identify self-disclosure, but that work applies a two-step process in which a basic topic model is first applied to find the topics, and then the topics are post-processed for binary classification of self-disclosure.", "We improve upon this work by applying a single unified model of topics and self-disclosure for high accuracy in classifying 
the three levels of self-disclosure.", "Subjectivity which is aspect of expressing opinions (Pang and Lee, 2008; Wiebe et al., 2004) is related with self-disclosure, but they are different dimensions of linguistic behavior.", "Because there indeed are many high self-disclosure tweets that are subjective, but there are also counter examples in annotated dataset.", "The tweet \"England manager is Roy Hodgson.\"", "is low self-disclosure and low subjectivity, \"I have barely any hair left.\"", "is high self-disclosure but low subjectivity, and \"Senator stop lying!\"", "is low self-disclosure but high subjectivity.", "Conclusion and Future Work In this paper, we have presented the self-disclosure topic model (SDTM) for discovering topics and classifying SD levels from Twitter conversation data.", "We devised a set of effective seed words and trigrams, mined from a dataset of secrets.", "We also annotated Twitter conversations to make a ground-truth dataset for SD level.", "With annotated data, we showed that SDTM outperforms previous methods in classification accuracy and Fmeasure.", "We publish the source code of SDTM and the dataset include annotated Twitter conversations and SECRET publicly 7 .", "We also analyzed the relationship between SD level and conversation behaviors over time.", "We found that there is a positive correlation between initial SD level and subsequent conversation length.", "Also, dyads show higher level of SD if they initially display high conversation frequency.", "Finally, dyads with overall medium and high SD level will have longer conversations over time.", "These results support previous results in so-7 http://uilab.kaist.ac.kr/research/ EMNLP2014 cial psychology research with more robust results from a large-scale dataset, and show the effectiveness of computationally analyzing at SD behavior.", "There are several future directions for this research.", "First, we can improve our modeling for higher accuracy and better interpretability.", "For instance, SDTM only considers first-person pronouns and topics.", "Naturally, there are other linguistic patterns that can be identified by humans but not captured by pronouns and topics.", "Second, the number of topics for each level is varied, and so we can explore nonparametric topic models (Teh et al., 2006) which infer the number of topics from the data.", "Third, we can look at the relationship between self-disclosure behavior and general online social network usage beyond conversations.", "We will explore these directions in our future work." ] }
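The conversation-level aggregation rule, the dyad-level score, and the regression analyses in the paper content above are given only in prose (Sections 5 and 6). Below is a minimal Python sketch of them; the toy arrays are hypothetical, and only the 0.2 threshold, the 1/2/3 level weights, and the reported slope of 0.0020 come from the paper.

from collections import Counter
import numpy as np

def conversation_sd_level(tweet_levels):
    """Aggregate per-tweet labels to one conversation label (Section 5):
    assign M or H when its tweet proportion reaches 0.2, otherwise G."""
    n = len(tweet_levels)
    counts = Counter(tweet_levels)
    p_m, p_h = counts['M'] / n, counts['H'] / n
    if max(p_m, p_h) >= 0.2:
        return 'M' if p_m >= p_h else 'H'
    return 'G'

def sd_level_score(conversation_levels):
    """Weighted sum over a dyad's conversations, G/M/H -> 1/2/3 (Section 6.1)."""
    weight = {'G': 1, 'M': 2, 'H': 3}
    return sum(weight[l] for l in conversation_levels)

# Section 6 fits simple linear regressions, e.g. initial conversation
# frequency (x) against subsequent SD level (y); np.polyfit recovers the
# slope (the paper reports 0.0020 with p < 0.0001 for this pair).
freq = np.array([5.0, 12.0, 30.0])          # hypothetical dyad frequencies
subsequent_sd = np.array([1.1, 1.3, 1.6])   # hypothetical SD level scores
slope, intercept = np.polyfit(freq, subsequent_sd, 1)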
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "2.4", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "5", "6", "6.1", "6.3", "7", "8" ], "paper_header_content": [ "Introduction", "Self-Disclosure", "Self-disclosure (SD) level", "G Level of Self-Disclosure", "M Level of Self-Disclosure", "H Level of Self-Disclosure", "Model", "Classifying G vs M/H levels", "Classifying M vs H levels", "Inference", "Data Collection and Annotation", "Collecting Twitter conversations", "Annotating self-disclosure level", "Classification of Self-Disclosure Level", "Relations of Self-Disclosure and Conversation Behaviors", "Experiment Setup", "Does high self-disclosure lead to longer conversations?", "Related Work", "Conclusion and Future Work" ] }
GEM-SciDuet-train-75#paper-1188#slide-10
Twitter Conversations
5 or more tweets At least one reply by each user
5 or more tweets At least one reply by each user
[]
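The slide above summarizes the corpus filters of Section 4.1: a conversation qualifies when it has five or more tweets with both members of the dyad replying, and a dyad is kept only with twenty or more such conversations. A minimal sketch, assuming a hypothetical `corpus` dict mapping a dyad (user pair) to its conversations, each a list of (user_id, text) tweets.

def keep_conversation(conv):
    """Five or more tweets, and at least one reply by each of the two users."""
    users = {uid for uid, _ in conv}
    return len(conv) >= 5 and len(users) == 2

def filter_corpus(corpus):
    kept = {}
    for dyad, convs in corpus.items():
        qualifying = [c for c in convs if keep_conversation(c)]
        if len(qualifying) >= 20:    # dyads with twenty or more conversations
            kept[dyad] = qualifying
    return kept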
GEM-SciDuet-train-75#paper-1188#slide-12
1188
Self-disclosure topic model for classifying and analyzing Twitter conversations
Self-disclosure, the act of revealing oneself to others, is an important social behavior that strengthens interpersonal relationships and increases social support. Although there are many social science studies of self-disclosure, they are based on manual coding of small datasets and questionnaires. We conduct a computational analysis of self-disclosure with a large dataset of naturally-occurring conversations, a semi-supervised machine learning algorithm, and a computational analysis of the effects of self-disclosure on subsequent conversations. We use a longitudinal dataset of 17 million tweets, all of which occurred in conversations that consist of five or more tweets directly replying to the previous tweet, and from dyads with twenty or more conversations each. We develop the self-disclosure topic model (SDTM), a variant of latent Dirichlet allocation (LDA), for automatically classifying the level of self-disclosure for each tweet. We take the results of SDTM and analyze the effects of self-disclosure on subsequent conversations. Our model significantly outperforms several comparable methods on classifying the level of self-disclosure, and the analysis of the longitudinal data using SDTM uncovers a significant and positive correlation between self-disclosure and conversation frequency and length.
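The classification described in this abstract starts, in the model section above, with a maximum entropy classifier over first-person pronoun features that separates G from M/H tweets. A minimal sketch with scikit-learn, using logistic regression as the maximum entropy model; the tokenizer is a stand-in for the Twitter POS tagger the paper uses, and the labeled toy tweets are hypothetical (the first two sentences are the paper's own examples).

import re
from sklearn.linear_model import LogisticRegression

FIRST_PERSON = {'i', 'my', 'me'}

def pronoun_features(tweet):
    tokens = re.findall(r"[a-z']+", tweet.lower())
    return [sum(tok in FIRST_PERSON for tok in tokens)]

tweets = ["England manager is Roy Hodgson.", "I have barely any hair left.",
          "Senator stop lying!", "My name is Alex."]   # toy labeled data
y = [0, 1, 0, 1]                                       # 0 = G, 1 = M/H
clf = LogisticRegression().fit([pronoun_features(t) for t in tweets], y)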
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238 ], "paper_content_text": [ "Introduction Self-disclosure is an important and pervasive social behavior.", "People disclose personal information about themselves to improve and maintain * This work was done when JinYeong Bak was a visiting student at Microsoft Research, Beijing, China.", "relationships (Jourard, 1971; Joinson and Paine, 2007) .", "A common instance of self-disclosure is the start of a conversation with an exchange of names and additional self-introductions.", "Another example of self-disclosure, shown in Figure 1c , where the information disclosed about a family member's serious illness, is much more personal than the exchange of names.", "In this paper, we seek to understand this important social behavior using a large-scale Twitter conversation data, automatically classifying the level of self-disclosure using machine learning and correlating the patterns with conversational behaviors which can serve as proxies for measuring intimacy between two conversational partners.", "Twitter conversation data, explained in more detail in section 4.1, enable an extremely large scale study of naturally-occurring self-disclosure behavior, compared to traditional social science studies.", "One challenge of such large scale study, though, remains in the lack of labeled groundtruth data of self-disclosure level.", "That is, naturally-occurring Twitter conversations do not come tagged with the level of self-disclosure in each conversation.", "To overcome that challenge, we propose a semi-supervised machine learning approach using probabilistic topic modeling.", "Our self-disclosure topic model (SDTM) assumes that self-disclosure behavior can be modeled using a combination of simple linguistic features (e.g., pronouns) with automatically discovered semantic themes (i.e., topics).", "For instance, an utterance \"I am finally through with this disastrous relationship\" uses a first-person pronoun and contains a topic about personal relationships.", "In comparison with various other models, SDTM shows the highest accuracy, and the resulting conversation frequency and length patterns on self-disclosure are shown different over time.", "Our contributions to the research community include the following: • We present key features and prior knowledge for identifying self-disclosure level, and show relevance of it with experiment results (Sec.", "2).", "• We present a topic model that explicitly includes the 
level of self-disclosure in a conversation using linguistic features and the latent semantic topics (Sec.", "3).", "• We collect a large dataset of Twitter conversations over three years and annotate a small subset with self-disclosure level (Sec.", "4).", "• We compare the classification accuracy of SDTM with other models and show that it performs the best (Sec.", "5).", "• We correlate the self-disclosure patterns and conversation behaviors to show that there is significant relationship over time (Sec.", "6).", "Self-Disclosure In this section, we look at social science literature for definition of the levels of self-disclosure.", "Using that definition, we devise an approach to automatically identify the levels of self-disclosure in a large corpus of OSN conversations.", "We discuss three approaches, first, using first-person pronoun features, and second, extracting seed words and phrases from the Twitter conversation corpus, and third, extracting seed words and phrases from an external corpus of anonymously posted secrets, and we demonstrate the efficacy of those approaches with an annotated corpus.", "Self-disclosure (SD) level To analyze self-disclosure, researchers categorize self-disclosure language into three levels: G (general) for no disclosure, M for medium disclosure, and H for high disclosure (Vondracek and Von dracek, 1971; Barak and Gluck-Ofri, 2007 G Level of Self-Disclosure An obvious clue of self-disclosure is the use of first-person pronouns.", "For example, phrases such as 'I live' or 'My name is' indicate that the utterance contains personal information.", "In previous research, the simple method of counting first-person pronouns was used to measure the degree of self-disclosure (Joinson, 2001; Barak and Gluck-Ofri, 2007) .", "Consequently, the absence of a first-person pronoun signals that the utterance belongs in the G level of self-disclosure.", "We verify this pattern with a dataset of Tweets annotated with G, M, and H levels.", "We divide the annotated Tweets into two classes, G and M/H.", "Then we compute mutual information of each unigram, bigram, or trigram feature to see which features are most discriminative.", "As Table 1 shows, 18 out of 30 M Level of Self-Disclosure Utterances with M level include two types: 1) information related with past events and future plans, and 2) general information about self (Barak and Gluck-Ofri, 2007) .", "For the former, we add as seed trigrams 'I have been' and 'I will'.", "For the latter, we use seven types of information generally accepted to be personally identifiable information (McCallister, 2010) , as listed in the left column of Table 2 .", "To find the appropriate trigrams for those, we take Twitter conversation data (described in Section 4.1) and look for trigrams that begin with 'I' and 'my' and occur more than 200 times.", "We then check each one to see whether it is related with any of the seven types listed in the table.", "As a result, we find 57 seed trigrams for M level.", "H Level of Self-Disclosure Utterances with H level express secretive wishes or sensitive information that exposes self or someone close (Barak and Gluck-Ofri, 2007) .", "These are generally kept as secrets.", "With this intuition, we crawled 26,523 posts from Six Billion Secrets 1 site where users post secrets anonymously 2 .", "We call this external dataset SECRET.", "Unlike G and M levels, evidence of H level of self-disclosure tends to be topical, such as physical appearance, mental and physical illnesses, and family problems, so we 
take an approach of fitting a topic model driven by seed words.", "A similar approach has been successful in sentiment classification (Jo and Oh, 2011; Kim et al., 2013) .", "A critical component of this approach is the set of seed words with which to drive the discovery of topics that are most indicative of H level selfdisclosure.", "To extract the seed words that express secretive personal information, we compute mutual information (Manning et al., 2008) with SE-CRET and 24,610 randomly selected tweets.", "We select 1,000 words with high mutual information and filter out stop words.", "Table 3 shows some of these words.", "To extract seed trigrams of secretive wishes, we again look for trigrams that start with 'I' or 'my', occur more than 200 times, and select trigrams of wishful thinking, such as 'I want to', and 'I wish I'.", "In total, there are 88 seed words and 8 seed trigrams for H. Since SECRET is quite different from Twitter, we must show that posts in SECRET are semantically similar to the H level Tweets.", "Rather than directly comparing SECRET posts and Tweets, we use the same method of extracting discriminative word features from the annotated H level Tweets (see Section 4.2).", "Table 3 shows the seed words extracted from SECRET as well as the annotated Tweets.", "Because the annotated dataset consists of only 200 conversations, the coverage of the topics seems narrower than the much larger SECRETS, but both datasets show similarities in the topics.", "This, combined with the results of the model with the two sets of seed words (see Section 5 for the results), shows that SECRETS is an effective and simple-to-obtain substitute for an annotated corpus of H level of self-disclosure.", "This section describes our model, the selfdisclosure topic model (SDTM), for classifying self-disclosure level and discovering topics for each self-disclosure level.", "SD level of tweet ct πc SD level proportion of conversation c θ G c ; θ M c ; θ H c Topic proportion of {G; M; H} in con- versation c φ G ; φ M ; φ H Word distribution of {G; M; H} α; γ Dirichlet prior for θ; π β G , β M ; β H Dirichlet prior for φ G ; φ M ; φ H n cl Model In section 2, we discussed different approaches to identifying each level of self-disclosure, based on social science literature, annotated and unannotated Tweets, and an external corpus of secret posts.", "In this section, we describe our self-disclosure topic model, based on the widely used latent Dirichlet allocation (Blei et al., 2003) , which incorporates those approaches.", "Figure 2 illustrates the graphical model of 1.", "For each level l ∈ {G, M, H}: For each topic k ∈ {1, .", ".", ".", ", K l }: Draw φ l k ∼ Dir(β l ) 2.", "For each conversation c ∈ {1, .", ".", ".", ", C}: (a) Draw θ G c ∼ Dir(α) (b) Draw θ M c ∼ Dir(α) (c) Draw θ H c ∼ Dir(α) (d) Draw π c ∼ Dir(γ) (e) For each message t ∈ {1, .", ".", ".", ", T }: i.", "Observe first-person pronouns features x ct ii.", "Draw ω ct ∼ M axEnt(x ct , λ) iii.", "Draw y ct ∼ Bernoulli(ω ct ) iv.", "If y ct = 0 which is G level: A.", "Draw z ct ∼ M ult(θ G c ) B.", "For each word n ∈ {1, .", ".", ".", ", N }: Draw word w ctn ∼ M ult(φ G zct ) Else which can be M or H level: A.", "Draw r ct ∼ M ult(π c ) B.", "Draw z ct ∼ M ult(θ rct c ) C. 
For each word n ∈ {1, .", ".", ".", ", N }: Draw word w ctn ∼ M ult(φ rct zct ) Figure 3: Generative process of SDTM.", "SDTM and how those approaches are embodied in it.", "The first approach based on the first-person pronouns is implemented by the observed variable x ct and the parameters λ from a maximum entropy classifier for G vs. M/H level.", "The approach of seed words and phrases for levels M and H is implemented by the three separate word-topic probability vectors for the three levels of SD: φ l which has a Bayesian informative prior β l where l ∈ {G, M, H}, the three levels of self-disclosure.", "Table 4 lists the notations used in the model and the generative process, and Figure 3 describes the generative process.", "Classifying G vs M/H levels Classifying the SD level for each tweet is done in two parts, and the first part classifies G vs. M/H levels with first-person pronouns (I, my, me).", "In the graphical model, y is the latent variable that represents this classification, and ω is the distribution over y. x is the observation of the firstperson pronoun in the tweets, and λ are the parameters learned from the maximum entropy classifier.", "With the annotated Twitter conversation dataset (described in Section 4.2), we experimented with several classifiers (Decision tree, Naive Bayes) and chose the maximum entropy classifier because it performed the best, similar to other joint topic models (Zhao et al., 2010; Mukherjee et al., 2013) .", "Classifying M vs H levels The second part of the classification, the M and the H level, is driven by informative priors with seed words and seed trigrams.", "In the graphical model, r is the latent variable that represents this classification, and π is the distribution over r. γ is a non-informative prior for π, and β l is an informative prior for each SD level by seed words.", "For example, we assign a high value for the seed word 'acne' for β H , and a low value for 'My name is'.", "This approach is the same as joint models of topic and sentiment (Jo and Oh, 2011; Kim et al., 2013) .", "Inference For posterior inference of SDTM, we use collapsed Gibbs sampling which integrates out latent random variables ω, π, θ, and φ.", "Then we only need to compute y, r and z for each tweet.", "We compute full conditional distribution p(y ct = j , r ct = l , z ct = k |y −ct , r −ct , z −ct , w, x) for tweet ct as follows: p(y ct = 0, z ct = k |y −ct , r −ct , z −ct , w, x) ∝ exp(λ 0 · x ct ) 1 j=0 exp(λ j · x ct ) g(c, t, l , k ), p(y ct = 1, r ct = l , z ct = k |y −ct , r −ct , z −ct , w, x) ∝ exp(λ 1 · x ct ) 1 j=0 exp(λ j · x ct ) (γ l + n (−ct) cl ) g(c, t, l , k ), where z −ct , r −ct , y −ct are z, r, y without tweet ct, m ctk (·) is the marginalized sum over word v of m ctk v and the function g(c, t, l , k ) as follows: g(c, t, l , k ) = Γ( V v=1 β l v + n l −(ct) k v ) Γ( V v=1 β l v + n l −(ct) k v + m ctk (·) ) α k + n l (−ct) ck K k=1 α k + n l ck V v=1 Γ(β l v + n l −(ct) k v + m ctk v ) Γ(β l v + n l −(ct) k v ) .", "Data Collection and Annotation To test our self-disclosure topic model, we use a large dataset of conversations consisting of Tweets over three years such that we can analyze the relationship between self-disclosure behavior and conversation frequency and length over time.", "We chose to crawl Twitter because it offers a practical and large source of conversations (Ritter et al., 2010) .", "Others have also analyzed Twitter conversations for natural language and social media Conv's Tweets 101,686 61,451 1,956,993 17,178,638 
Table 5 : Dataset of Twitter conversations.", "We chose conversations consisting of five or more tweets each.", "We chose dyads with twenty or more conversations.", "Users Dyads research (boyd et al., 2010; Danescu-Niculescu-Mizil et al., 2011) , but we collect conversations from the same set of dyads over several months for a unique longitudinal dataset.", "We also make sure that each conversation is at least five tweets, and that each dyad has at least twenty conversations.", "Collecting Twitter conversations We define a Twitter conversation as a chain of tweets where two users are consecutively replying to each other's tweets using the Twitter reply button.", "We initialize the set of users by randomly sampling thirteen users who reply to other users in English from the Twitter public streams 3 .", "Then we crawl each user's public tweets, and look at users who are mentioned in those tweets.", "It is a breadth-first search in the network defined by users as nodes and edges as conversations.", "We run this search for dyads until the depth of four, and filter out users who tweet in a non-English language.", "We use an open source tool for detecting English tweets 4 .", "To protect users' privacy, we replace Twitter userid, usernames and url in tweets with random strings.", "This dataset consists of 101,686 users, 61,451 dyads, 1,956,993 conversations and 17,178,638 tweets which were posted between August 2007 to July 2013.", "Table 5 summarizes the dataset.", "Annotating self-disclosure level To measure the accuracy of our model, we randomly sample 301 conversations, each with ten or fewer tweets, and ask three judges, fluent in English and graduate students/researchers, to annotate each tweet with the level of self-disclosure.", "Judges first read and discussed the definitions and examples of self-disclosure level shown in (Barak and Gluck-Ofri, 2007) , then they worked separately on a Web-based platform.", "As a result of annotation, there are 122 G level converstaions, 147 M level and 32 H level con- versations, and inter-rater agreement using Fleiss kappa (Fleiss, 1971 ) is 0.68, which is substantial agreement result (Landis and Koch, 1977) .", "Classification of Self-Disclosure Level This section describes experiments and results of SDTM as well as several other methods for classification of self-disclosure level.", "We first start with the annotated dataset in section 4.2 in which each tweet is annotated with SD level.", "We then aggregate all of the tweets of a conversation, and we compute the proportions of tweets in each SD level.", "When the proportion of tweets at M or H level is equal to or greater than 0.2, we take the level of the larger proportion and assign that level to the conversation.", "When the proportions of tweets at M or H level are both less than 0.2, we assign G to the SD level.", "The reason for setting 0.2 as the threshold is that a conversation containing tweets with H or M level of selfdisclosure usually starts with a greeting or a general comment, and contains one or more questions or comments before or after the self-disclosure tweet.", "We compare SDTM with the following methods for classifying conversations for SD level: • LDA (Blei et al., 2003) : A Bayesian topic model.", "Each conversation is treated as a document.", "Used in previous work (Bak et al., 2012) .", "• MedLDA (Zhu et al., 2012) : A supervised topic model for document classification.", "Each conversation is treated as a document and response variable can be mapped to a SD level.", "• LIWC 
(Tausczik and Pennebaker, 2010): Word counts of particular categories 5 .", "Used in previous work (Houghton and Joinson, 2012).", "• Bag of Words + Bigrams + Trigrams (BOW+): A bag of words, bigram and trigram features.", "We exclude features that appear only once or twice.", "• Seed words and trigrams (SEED): Occurrences of seed words/trigrams from SECRET which are described in section 3.3.", "• SDTM with seed words from annotated Tweets (SDTM−): To compare with SDTM below using seed words from SECRET, this uses seed words from the annotated data described in section 2.4.", "• ASUM (Jo and Oh, 2011 ): A joint model of sentiments and topics.", "We map each SD level to one sentiment and use the same seed words/trigrams from SECRET as in SDTM below.", "Used in previous work (Bak et al., 2012) .", "• First-person pronouns (FirstP): Occurrence of first-person pronouns which are described in section 3.2.", "To identify first-person pronouns, we tagged parts of speech in each tweet with the Twitter POS tagger (Owoputi et al., 2013) .", "• First-person pronouns + Seed words/trigrams (FP+SE1): First-person pronouns and seed words/trigrams from SECRET.", "• Two stage classifier with First-person pronouns + Seed words/trigrams (FP+SE2): A Method Acc G F 1 M F 1 H F Table 6 : SD level classification accuracies and Fmeasures using annotated data.", "Acc is accuracy, and G F 1 is F-measure for classifying the G level.", "Avg F 1 is the macroaveraged value of G F 1 , M F 1 and H F 1 .", "SDTM outperforms all other methods compared.", "The difference between SDTM and FirstP is statistically significant (p-value < 0.05 for accuracy, < 0.0001 for Avg F 1 ).", "two stage classifier with first-person pronouns and seed words/trigrams from SE-CRET.", "In the first stage, the classifier identifies G with first-person pronouns.", "Then in the second stage, the classifier uses seed words and trigrams to identify M and H levels.", "• SDTM: Our model with first-person pronouns and seed words/trigrams from SE-CRET.", "SEED, LIWC, LDA and FirstP cannot be used directly for classification, so we use Maximum entropy model with outputs of each of those models as features 6 .", "BOW+ uses SVM with a radial basis kernel which performs better than all other settings tried including maximum entropy.", "We split the data randomly into 80/20 for train/test.", "We run MedLDA, ASUM and SDTM 20 times each and compute the average accuracies and F-measure for each level.", "We run LDA and MedLDA with various number of topics from 80 to 140, and 120 topics shows best outputs.", "So we set 120 topics for LDA, MedLDA and ASUM, 60; 40; 40 topics for SDTM K G , K M and K H respectively which is best perform from 40; 40; 40 to 60; 60; 60 topics.", "We assume that a conversation has few topics and self-disclosure levels, so we set α = γ = 0.1 (Tang et al., 2014) .", "To incorporate the seed words and trigrams into ASUM and SDTM, we initialize β G , β M and β H differently.", "We assign a high value of 2.0 for each seed word and trigram for that level, and a low value of 10 −6 for each word that is a seed word for another level, and a default value of 0.01 for all other words.", "This approach is the same as previous papers (Jo and Oh, 2011; Kim et al., 2013) .", "As Table 6 shows, SDTM performs better than the other methods for accuracy as well as Fmeasure.", "LDA and MedLDA generally show the lowest performance, which is not surprising given these models are quite general and not tuned specifically for this type of semi-supervised 
classification task.", "BOW which is simple word features also does not perform well, showing especially low F-measure for the H level.", "LIWC and SEED perform better than LDA, but these have quite low F-measure for G and H levels.", "ASUM shows better performance for classifying H level than others, confirming the effectiveness of a topic modeling approach to this difficult task, but not as well as SDTM.", "FirstP shows good F-measure for the G level, but the H level F-measure is quite low, even lower than SEED.", "Combining first-person pronouns and seed words and trigrams (FP+SE1) shows better than each feature alone, and the two stage classifier (FP+SE2) which is a similar approach taken in SDTM shows better results.", "Finally, SDTM classifies G and M level at a similar accuracy with FirstP, FP+SE1 and FP+SE2, but it significantly improves accuracy for the H level compared to all other methods.", "Relations of Self-Disclosure and Conversation Behaviors In this section, we investigate whether there is a relationship between self-disclosure and conversation behaviors over time.", "Self-disclosure is one way to maintain and improve relationships (Jourard, 1971; Joinson and Paine, 2007) .", "So two people's intimacy changes over time has relationship with self-disclosure in their conversation.", "However, it is hard to identify intimacy between users in large scale online social network.", "So we choose conversation behaviors such as conversation frequency and length which can be treated as proxies for measuring intimacy between two people (Emmers- Sommer, 2004; Bak et al., 2012) .", "With SDTM, we can automatically classify the SD level of a large number of conversations, so we investigate whether there is a similar relationship between self-disclosure in conversations and subsequent conversation behaviors with the same partner on Twitter.", "For comparing conversation behaviors over time, we divided the conversations into two sets for each dyad.", "For the initial period, we include conversations from the dyad's first conversation to 20 days later.", "And for the subsequent period, we include conversations during the subsequent 10 days.", "We compute proportions of conversation for each SD level for each dyad in the initial and subsequent periods.", "More specifically, we ask the following three questions: 1.", "If a dyad shows high conversation frequency at a particular time period, would they display higher SD in their subsequent conversations?", "2.", "If a dyad displays high SD level in their conversations at a particular time period, would their subsequent conversations be longer?", "3.", "If a dyad displays high overall SD level, would their conversations increase in length over time more than dyads with lower overall SD level?", "Experiment Setup We first run SDTM with all of our Twitter conversation data with 150; 120; 120 topics for SDTM K G , K M and K H respectively.", "The hyper-parameters are the same as in section 5.", "To handle a large dataset, we employ a distributed algorithm (Newman et al., 2009) , and run with 28 threads.", "Table 7 shows some of the topics that were prominent in each SD level by KL-divergence.", "As expected, G level includes general topics such as food, celebrity, soccer and IT devices, M level includes personal communication and birthday, and finally, H level includes sickness and profanity.", "We define a new measurement, SD level score for a dyad in the period, which is a weighted sum of each conversation with SD levels mapped to 1, 2, and 3, 
for the levels G, M, and H, respectively.", "Figure 5 : Relationship between initial conversation frequency and subsequent SD level.", "The solid line is the linear regression line, and the coefficient is 0.0020 with p < 0.0001, which shows a significant positive relationship.", "Subsequent SD level 6.2 Does high frequency of conversation lead to more self-disclosure?", "We investigate whether the initial conversation frequency is correlated with the SD level in the subsequent period.", "We run linear regression with the initial conversation frequency as the independent variable, and SD level in the subsequent period as the dependent variable.", "The regression coefficient is 0.0020 with low pvalue (p < 0.0001).", "Figure 5 shows the scatter plot.", "We can see that the slope of the regression line is positive.", "Does high self-disclosure lead to longer conversations?", "Now we investigate the effect of the selfdisclosure level to conversation length.", "We run linear regression with the intial SD level score as the independent variable, and the rate of change in conversation length between initial period and subsequent period as the dependent variable.", "Conversation length is measured by the number of tweets in a conversation.", "The result of regression is that the independent variable's coefficient is 0.048 with a low p-value (p < 0.0001).", "Figure 6 shows the scatter plot with the regression line, and we can see that the slope of regression line is positive.", "H level 101 184 176 36 104 82 113 33 19 chocolate obama league send twitter going ass better lips butter he's win email follow party bitch sick kisses good romney game i'll tumblr weekend fuck feel love cake vote season sent tweet day yo throat smiles peanut right team dm following night shit cold softly milk president cup address account dinner fucking hope hand sugar people city know fb birthday lmao pain eyes cream good arsenal check followers Now we investigate the conversation length changes over time with three groups, low, medium, and high, by overall SD level.", "Then we investigate changes in conversation length over time.", "Figure 7 shows the results of this investigation.", "First, conversations are generally lengthier when SD level is high.", "This phenomenon is also ob- We divide dyads into three groups by SD level score as low, medium, and high.", "Conversation length noticeably increases over time in the medium and high groups, but only slight in the low group.", "served in figure 6 , but here we can see it as a long-term persistent pattern.", "Second, conversation length increases consistently and significantly for the high and medium groups, but for the low SD group, there is not a significant increase of conversation length over time.", "G level M level Related Work Prior work on quantitatively analyzing selfdisclosure has relied on user surveys (Ledbetter et al., 2011; Trepte and Reinecke, 2013) or human annotation (Barak and Gluck-Ofri, 2007; Courtney Walton and Rice, 2013) .", "These methods consume much time and effort, so they are not suitable for large-scale studies.", "In prior work closest to ours, Bak et al.", "(2012) showed that a topic model can be used to identify self-disclosure, but that work applies a two-step process in which a basic topic model is first applied to find the topics, and then the topics are post-processed for binary classification of self-disclosure.", "We improve upon this work by applying a single unified model of topics and self-disclosure for high accuracy in classifying 
the three levels of self-disclosure.", "Subjectivity which is aspect of expressing opinions (Pang and Lee, 2008; Wiebe et al., 2004) is related with self-disclosure, but they are different dimensions of linguistic behavior.", "Because there indeed are many high self-disclosure tweets that are subjective, but there are also counter examples in annotated dataset.", "The tweet \"England manager is Roy Hodgson.\"", "is low self-disclosure and low subjectivity, \"I have barely any hair left.\"", "is high self-disclosure but low subjectivity, and \"Senator stop lying!\"", "is low self-disclosure but high subjectivity.", "Conclusion and Future Work In this paper, we have presented the self-disclosure topic model (SDTM) for discovering topics and classifying SD levels from Twitter conversation data.", "We devised a set of effective seed words and trigrams, mined from a dataset of secrets.", "We also annotated Twitter conversations to make a ground-truth dataset for SD level.", "With annotated data, we showed that SDTM outperforms previous methods in classification accuracy and Fmeasure.", "We publish the source code of SDTM and the dataset include annotated Twitter conversations and SECRET publicly 7 .", "We also analyzed the relationship between SD level and conversation behaviors over time.", "We found that there is a positive correlation between initial SD level and subsequent conversation length.", "Also, dyads show higher level of SD if they initially display high conversation frequency.", "Finally, dyads with overall medium and high SD level will have longer conversations over time.", "These results support previous results in so-7 http://uilab.kaist.ac.kr/research/ EMNLP2014 cial psychology research with more robust results from a large-scale dataset, and show the effectiveness of computationally analyzing at SD behavior.", "There are several future directions for this research.", "First, we can improve our modeling for higher accuracy and better interpretability.", "For instance, SDTM only considers first-person pronouns and topics.", "Naturally, there are other linguistic patterns that can be identified by humans but not captured by pronouns and topics.", "Second, the number of topics for each level is varied, and so we can explore nonparametric topic models (Teh et al., 2006) which infer the number of topics from the data.", "Third, we can look at the relationship between self-disclosure behavior and general online social network usage beyond conversations.", "We will explore these directions in our future work." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "2.4", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "5", "6", "6.1", "6.3", "7", "8" ], "paper_header_content": [ "Introduction", "Self-Disclosure", "Self-disclosure (SD) level", "G Level of Self-Disclosure", "M Level of Self-Disclosure", "H Level of Self-Disclosure", "Model", "Classifying G vs M/H levels", "Classifying M vs H levels", "Inference", "Data Collection and Annotation", "Collecting Twitter conversations", "Annotating self-disclosure level", "Classification of Self-Disclosure Level", "Relations of Self-Disclosure and Conversation Behaviors", "Experiment Setup", "Does high self-disclosure lead to longer conversations?", "Related Work", "Conclusion and Future Work" ] }
GEM-SciDuet-train-75#paper-1188#slide-12
Conversation Topics
Users discuss several topics with others
Users discuss several topics with others
[]
GEM-SciDuet-train-75#paper-1188#slide-14
1188
Self-disclosure topic model for classifying and analyzing Twitter conversations
Self-disclosure, the act of revealing oneself to others, is an important social behavior that strengthens interpersonal relationships and increases social support. Although there are many social science studies of self-disclosure, they are based on manual coding of small datasets and questionnaires. We conduct a computational analysis of self-disclosure with a large dataset of naturally-occurring conversations, a semi-supervised machine learning algorithm, and a computational analysis of the effects of self-disclosure on subsequent conversations. We use a longitudinal dataset of 17 million tweets, all of which occurred in conversations that consist of five or more tweets directly replying to the previous tweet, and from dyads with twenty or more conversations each. We develop the self-disclosure topic model (SDTM), a variant of latent Dirichlet allocation (LDA), for automatically classifying the level of self-disclosure for each tweet. We take the results of SDTM and analyze the effects of self-disclosure on subsequent conversations. Our model significantly outperforms several comparable methods on classifying the level of self-disclosure, and the analysis of the longitudinal data using SDTM uncovers a significant and positive correlation between self-disclosure and conversation frequency and length.
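The abstract's corpus criteria, conversations of five or more tweets from dyads with twenty or more conversations each, amount to a two-stage filter. Below is a minimal sketch assuming a hypothetical (dyad_id, tweets) input format; whether the per-dyad count is taken before or after the length filter is an assumption here.

```python
# Sketch of the corpus filter implied by the abstract; not the released code.
from collections import defaultdict

def filter_corpus(conversations, min_tweets=5, min_convs=20):
    by_dyad = defaultdict(list)
    for dyad_id, tweets in conversations:
        if len(tweets) >= min_tweets:  # conversations of five or more tweets
            by_dyad[dyad_id].append(tweets)
    # keep dyads with twenty or more qualifying conversations
    return {d: c for d, c in by_dyad.items() if len(c) >= min_convs}
```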
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238 ], "paper_content_text": [ "Introduction Self-disclosure is an important and pervasive social behavior.", "People disclose personal information about themselves to improve and maintain * This work was done when JinYeong Bak was a visiting student at Microsoft Research, Beijing, China.", "relationships (Jourard, 1971; Joinson and Paine, 2007) .", "A common instance of self-disclosure is the start of a conversation with an exchange of names and additional self-introductions.", "Another example of self-disclosure, shown in Figure 1c , where the information disclosed about a family member's serious illness, is much more personal than the exchange of names.", "In this paper, we seek to understand this important social behavior using a large-scale Twitter conversation data, automatically classifying the level of self-disclosure using machine learning and correlating the patterns with conversational behaviors which can serve as proxies for measuring intimacy between two conversational partners.", "Twitter conversation data, explained in more detail in section 4.1, enable an extremely large scale study of naturally-occurring self-disclosure behavior, compared to traditional social science studies.", "One challenge of such large scale study, though, remains in the lack of labeled groundtruth data of self-disclosure level.", "That is, naturally-occurring Twitter conversations do not come tagged with the level of self-disclosure in each conversation.", "To overcome that challenge, we propose a semi-supervised machine learning approach using probabilistic topic modeling.", "Our self-disclosure topic model (SDTM) assumes that self-disclosure behavior can be modeled using a combination of simple linguistic features (e.g., pronouns) with automatically discovered semantic themes (i.e., topics).", "For instance, an utterance \"I am finally through with this disastrous relationship\" uses a first-person pronoun and contains a topic about personal relationships.", "In comparison with various other models, SDTM shows the highest accuracy, and the resulting conversation frequency and length patterns on self-disclosure are shown different over time.", "Our contributions to the research community include the following: • We present key features and prior knowledge for identifying self-disclosure level, and show relevance of it with experiment results (Sec.", "2).", "• We present a topic model that explicitly includes the 
level of self-disclosure in a conversation using linguistic features and the latent semantic topics (Sec.", "3).", "• We collect a large dataset of Twitter conversations over three years and annotate a small subset with self-disclosure level (Sec.", "4).", "• We compare the classification accuracy of SDTM with other models and show that it performs the best (Sec.", "5).", "• We correlate the self-disclosure patterns and conversation behaviors to show that there is significant relationship over time (Sec.", "6).", "Self-Disclosure In this section, we look at social science literature for definition of the levels of self-disclosure.", "Using that definition, we devise an approach to automatically identify the levels of self-disclosure in a large corpus of OSN conversations.", "We discuss three approaches, first, using first-person pronoun features, and second, extracting seed words and phrases from the Twitter conversation corpus, and third, extracting seed words and phrases from an external corpus of anonymously posted secrets, and we demonstrate the efficacy of those approaches with an annotated corpus.", "Self-disclosure (SD) level To analyze self-disclosure, researchers categorize self-disclosure language into three levels: G (general) for no disclosure, M for medium disclosure, and H for high disclosure (Vondracek and Von dracek, 1971; Barak and Gluck-Ofri, 2007 G Level of Self-Disclosure An obvious clue of self-disclosure is the use of first-person pronouns.", "For example, phrases such as 'I live' or 'My name is' indicate that the utterance contains personal information.", "In previous research, the simple method of counting first-person pronouns was used to measure the degree of self-disclosure (Joinson, 2001; Barak and Gluck-Ofri, 2007) .", "Consequently, the absence of a first-person pronoun signals that the utterance belongs in the G level of self-disclosure.", "We verify this pattern with a dataset of Tweets annotated with G, M, and H levels.", "We divide the annotated Tweets into two classes, G and M/H.", "Then we compute mutual information of each unigram, bigram, or trigram feature to see which features are most discriminative.", "As Table 1 shows, 18 out of 30 M Level of Self-Disclosure Utterances with M level include two types: 1) information related with past events and future plans, and 2) general information about self (Barak and Gluck-Ofri, 2007) .", "For the former, we add as seed trigrams 'I have been' and 'I will'.", "For the latter, we use seven types of information generally accepted to be personally identifiable information (McCallister, 2010) , as listed in the left column of Table 2 .", "To find the appropriate trigrams for those, we take Twitter conversation data (described in Section 4.1) and look for trigrams that begin with 'I' and 'my' and occur more than 200 times.", "We then check each one to see whether it is related with any of the seven types listed in the table.", "As a result, we find 57 seed trigrams for M level.", "H Level of Self-Disclosure Utterances with H level express secretive wishes or sensitive information that exposes self or someone close (Barak and Gluck-Ofri, 2007) .", "These are generally kept as secrets.", "With this intuition, we crawled 26,523 posts from Six Billion Secrets 1 site where users post secrets anonymously 2 .", "We call this external dataset SECRET.", "Unlike G and M levels, evidence of H level of self-disclosure tends to be topical, such as physical appearance, mental and physical illnesses, and family problems, so we 
take an approach of fitting a topic model driven by seed words.", "A similar approach has been successful in sentiment classification (Jo and Oh, 2011; Kim et al., 2013) .", "A critical component of this approach is the set of seed words with which to drive the discovery of topics that are most indicative of H level selfdisclosure.", "To extract the seed words that express secretive personal information, we compute mutual information (Manning et al., 2008) with SE-CRET and 24,610 randomly selected tweets.", "We select 1,000 words with high mutual information and filter out stop words.", "Table 3 shows some of these words.", "To extract seed trigrams of secretive wishes, we again look for trigrams that start with 'I' or 'my', occur more than 200 times, and select trigrams of wishful thinking, such as 'I want to', and 'I wish I'.", "In total, there are 88 seed words and 8 seed trigrams for H. Since SECRET is quite different from Twitter, we must show that posts in SECRET are semantically similar to the H level Tweets.", "Rather than directly comparing SECRET posts and Tweets, we use the same method of extracting discriminative word features from the annotated H level Tweets (see Section 4.2).", "Table 3 shows the seed words extracted from SECRET as well as the annotated Tweets.", "Because the annotated dataset consists of only 200 conversations, the coverage of the topics seems narrower than the much larger SECRETS, but both datasets show similarities in the topics.", "This, combined with the results of the model with the two sets of seed words (see Section 5 for the results), shows that SECRETS is an effective and simple-to-obtain substitute for an annotated corpus of H level of self-disclosure.", "This section describes our model, the selfdisclosure topic model (SDTM), for classifying self-disclosure level and discovering topics for each self-disclosure level.", "SD level of tweet ct πc SD level proportion of conversation c θ G c ; θ M c ; θ H c Topic proportion of {G; M; H} in con- versation c φ G ; φ M ; φ H Word distribution of {G; M; H} α; γ Dirichlet prior for θ; π β G , β M ; β H Dirichlet prior for φ G ; φ M ; φ H n cl Model In section 2, we discussed different approaches to identifying each level of self-disclosure, based on social science literature, annotated and unannotated Tweets, and an external corpus of secret posts.", "In this section, we describe our self-disclosure topic model, based on the widely used latent Dirichlet allocation (Blei et al., 2003) , which incorporates those approaches.", "Figure 2 illustrates the graphical model of 1.", "For each level l ∈ {G, M, H}: For each topic k ∈ {1, .", ".", ".", ", K l }: Draw φ l k ∼ Dir(β l ) 2.", "For each conversation c ∈ {1, .", ".", ".", ", C}: (a) Draw θ G c ∼ Dir(α) (b) Draw θ M c ∼ Dir(α) (c) Draw θ H c ∼ Dir(α) (d) Draw π c ∼ Dir(γ) (e) For each message t ∈ {1, .", ".", ".", ", T }: i.", "Observe first-person pronouns features x ct ii.", "Draw ω ct ∼ M axEnt(x ct , λ) iii.", "Draw y ct ∼ Bernoulli(ω ct ) iv.", "If y ct = 0 which is G level: A.", "Draw z ct ∼ M ult(θ G c ) B.", "For each word n ∈ {1, .", ".", ".", ", N }: Draw word w ctn ∼ M ult(φ G zct ) Else which can be M or H level: A.", "Draw r ct ∼ M ult(π c ) B.", "Draw z ct ∼ M ult(θ rct c ) C. 
For each word n ∈ {1, .", ".", ".", ", N }: Draw word w ctn ∼ M ult(φ rct zct ) Figure 3: Generative process of SDTM.", "SDTM and how those approaches are embodied in it.", "The first approach based on the first-person pronouns is implemented by the observed variable x ct and the parameters λ from a maximum entropy classifier for G vs. M/H level.", "The approach of seed words and phrases for levels M and H is implemented by the three separate word-topic probability vectors for the three levels of SD: φ l which has a Bayesian informative prior β l where l ∈ {G, M, H}, the three levels of self-disclosure.", "Table 4 lists the notations used in the model and the generative process, and Figure 3 describes the generative process.", "Classifying G vs M/H levels Classifying the SD level for each tweet is done in two parts, and the first part classifies G vs. M/H levels with first-person pronouns (I, my, me).", "In the graphical model, y is the latent variable that represents this classification, and ω is the distribution over y. x is the observation of the firstperson pronoun in the tweets, and λ are the parameters learned from the maximum entropy classifier.", "With the annotated Twitter conversation dataset (described in Section 4.2), we experimented with several classifiers (Decision tree, Naive Bayes) and chose the maximum entropy classifier because it performed the best, similar to other joint topic models (Zhao et al., 2010; Mukherjee et al., 2013) .", "Classifying M vs H levels The second part of the classification, the M and the H level, is driven by informative priors with seed words and seed trigrams.", "In the graphical model, r is the latent variable that represents this classification, and π is the distribution over r. γ is a non-informative prior for π, and β l is an informative prior for each SD level by seed words.", "For example, we assign a high value for the seed word 'acne' for β H , and a low value for 'My name is'.", "This approach is the same as joint models of topic and sentiment (Jo and Oh, 2011; Kim et al., 2013) .", "Inference For posterior inference of SDTM, we use collapsed Gibbs sampling which integrates out latent random variables ω, π, θ, and φ.", "Then we only need to compute y, r and z for each tweet.", "We compute full conditional distribution p(y ct = j , r ct = l , z ct = k |y −ct , r −ct , z −ct , w, x) for tweet ct as follows: p(y ct = 0, z ct = k |y −ct , r −ct , z −ct , w, x) ∝ exp(λ 0 · x ct ) 1 j=0 exp(λ j · x ct ) g(c, t, l , k ), p(y ct = 1, r ct = l , z ct = k |y −ct , r −ct , z −ct , w, x) ∝ exp(λ 1 · x ct ) 1 j=0 exp(λ j · x ct ) (γ l + n (−ct) cl ) g(c, t, l , k ), where z −ct , r −ct , y −ct are z, r, y without tweet ct, m ctk (·) is the marginalized sum over word v of m ctk v and the function g(c, t, l , k ) as follows: g(c, t, l , k ) = Γ( V v=1 β l v + n l −(ct) k v ) Γ( V v=1 β l v + n l −(ct) k v + m ctk (·) ) α k + n l (−ct) ck K k=1 α k + n l ck V v=1 Γ(β l v + n l −(ct) k v + m ctk v ) Γ(β l v + n l −(ct) k v ) .", "Data Collection and Annotation To test our self-disclosure topic model, we use a large dataset of conversations consisting of Tweets over three years such that we can analyze the relationship between self-disclosure behavior and conversation frequency and length over time.", "We chose to crawl Twitter because it offers a practical and large source of conversations (Ritter et al., 2010) .", "Others have also analyzed Twitter conversations for natural language and social media Conv's Tweets 101,686 61,451 1,956,993 17,178,638 
Table 5 : Dataset of Twitter conversations.", "We chose conversations consisting of five or more tweets each.", "We chose dyads with twenty or more conversations.", "Users Dyads research (boyd et al., 2010; Danescu-Niculescu-Mizil et al., 2011) , but we collect conversations from the same set of dyads over several months for a unique longitudinal dataset.", "We also make sure that each conversation is at least five tweets, and that each dyad has at least twenty conversations.", "Collecting Twitter conversations We define a Twitter conversation as a chain of tweets where two users are consecutively replying to each other's tweets using the Twitter reply button.", "We initialize the set of users by randomly sampling thirteen users who reply to other users in English from the Twitter public streams 3 .", "Then we crawl each user's public tweets, and look at users who are mentioned in those tweets.", "It is a breadth-first search in the network defined by users as nodes and edges as conversations.", "We run this search for dyads until the depth of four, and filter out users who tweet in a non-English language.", "We use an open source tool for detecting English tweets 4 .", "To protect users' privacy, we replace Twitter userid, usernames and url in tweets with random strings.", "This dataset consists of 101,686 users, 61,451 dyads, 1,956,993 conversations and 17,178,638 tweets which were posted between August 2007 to July 2013.", "Table 5 summarizes the dataset.", "Annotating self-disclosure level To measure the accuracy of our model, we randomly sample 301 conversations, each with ten or fewer tweets, and ask three judges, fluent in English and graduate students/researchers, to annotate each tweet with the level of self-disclosure.", "Judges first read and discussed the definitions and examples of self-disclosure level shown in (Barak and Gluck-Ofri, 2007) , then they worked separately on a Web-based platform.", "As a result of annotation, there are 122 G level converstaions, 147 M level and 32 H level con- versations, and inter-rater agreement using Fleiss kappa (Fleiss, 1971 ) is 0.68, which is substantial agreement result (Landis and Koch, 1977) .", "Classification of Self-Disclosure Level This section describes experiments and results of SDTM as well as several other methods for classification of self-disclosure level.", "We first start with the annotated dataset in section 4.2 in which each tweet is annotated with SD level.", "We then aggregate all of the tweets of a conversation, and we compute the proportions of tweets in each SD level.", "When the proportion of tweets at M or H level is equal to or greater than 0.2, we take the level of the larger proportion and assign that level to the conversation.", "When the proportions of tweets at M or H level are both less than 0.2, we assign G to the SD level.", "The reason for setting 0.2 as the threshold is that a conversation containing tweets with H or M level of selfdisclosure usually starts with a greeting or a general comment, and contains one or more questions or comments before or after the self-disclosure tweet.", "We compare SDTM with the following methods for classifying conversations for SD level: • LDA (Blei et al., 2003) : A Bayesian topic model.", "Each conversation is treated as a document.", "Used in previous work (Bak et al., 2012) .", "• MedLDA (Zhu et al., 2012) : A supervised topic model for document classification.", "Each conversation is treated as a document and response variable can be mapped to a SD level.", "• LIWC 
(Tausczik and Pennebaker, 2010): Word counts of particular categories 5 .", "Used in previous work (Houghton and Joinson, 2012).", "• Bag of Words + Bigrams + Trigrams (BOW+): A bag of words, bigram and trigram features.", "We exclude features that appear only once or twice.", "• Seed words and trigrams (SEED): Occurrences of seed words/trigrams from SECRET which are described in section 3.3.", "• SDTM with seed words from annotated Tweets (SDTM−): To compare with SDTM below using seed words from SECRET, this uses seed words from the annotated data described in section 2.4.", "• ASUM (Jo and Oh, 2011 ): A joint model of sentiments and topics.", "We map each SD level to one sentiment and use the same seed words/trigrams from SECRET as in SDTM below.", "Used in previous work (Bak et al., 2012) .", "• First-person pronouns (FirstP): Occurrence of first-person pronouns which are described in section 3.2.", "To identify first-person pronouns, we tagged parts of speech in each tweet with the Twitter POS tagger (Owoputi et al., 2013) .", "• First-person pronouns + Seed words/trigrams (FP+SE1): First-person pronouns and seed words/trigrams from SECRET.", "• Two stage classifier with First-person pronouns + Seed words/trigrams (FP+SE2): A Method Acc G F 1 M F 1 H F Table 6 : SD level classification accuracies and Fmeasures using annotated data.", "Acc is accuracy, and G F 1 is F-measure for classifying the G level.", "Avg F 1 is the macroaveraged value of G F 1 , M F 1 and H F 1 .", "SDTM outperforms all other methods compared.", "The difference between SDTM and FirstP is statistically significant (p-value < 0.05 for accuracy, < 0.0001 for Avg F 1 ).", "two stage classifier with first-person pronouns and seed words/trigrams from SE-CRET.", "In the first stage, the classifier identifies G with first-person pronouns.", "Then in the second stage, the classifier uses seed words and trigrams to identify M and H levels.", "• SDTM: Our model with first-person pronouns and seed words/trigrams from SE-CRET.", "SEED, LIWC, LDA and FirstP cannot be used directly for classification, so we use Maximum entropy model with outputs of each of those models as features 6 .", "BOW+ uses SVM with a radial basis kernel which performs better than all other settings tried including maximum entropy.", "We split the data randomly into 80/20 for train/test.", "We run MedLDA, ASUM and SDTM 20 times each and compute the average accuracies and F-measure for each level.", "We run LDA and MedLDA with various number of topics from 80 to 140, and 120 topics shows best outputs.", "So we set 120 topics for LDA, MedLDA and ASUM, 60; 40; 40 topics for SDTM K G , K M and K H respectively which is best perform from 40; 40; 40 to 60; 60; 60 topics.", "We assume that a conversation has few topics and self-disclosure levels, so we set α = γ = 0.1 (Tang et al., 2014) .", "To incorporate the seed words and trigrams into ASUM and SDTM, we initialize β G , β M and β H differently.", "We assign a high value of 2.0 for each seed word and trigram for that level, and a low value of 10 −6 for each word that is a seed word for another level, and a default value of 0.01 for all other words.", "This approach is the same as previous papers (Jo and Oh, 2011; Kim et al., 2013) .", "As Table 6 shows, SDTM performs better than the other methods for accuracy as well as Fmeasure.", "LDA and MedLDA generally show the lowest performance, which is not surprising given these models are quite general and not tuned specifically for this type of semi-supervised 
classification task.", "BOW which is simple word features also does not perform well, showing especially low F-measure for the H level.", "LIWC and SEED perform better than LDA, but these have quite low F-measure for G and H levels.", "ASUM shows better performance for classifying H level than others, confirming the effectiveness of a topic modeling approach to this difficult task, but not as well as SDTM.", "FirstP shows good F-measure for the G level, but the H level F-measure is quite low, even lower than SEED.", "Combining first-person pronouns and seed words and trigrams (FP+SE1) shows better than each feature alone, and the two stage classifier (FP+SE2) which is a similar approach taken in SDTM shows better results.", "Finally, SDTM classifies G and M level at a similar accuracy with FirstP, FP+SE1 and FP+SE2, but it significantly improves accuracy for the H level compared to all other methods.", "Relations of Self-Disclosure and Conversation Behaviors In this section, we investigate whether there is a relationship between self-disclosure and conversation behaviors over time.", "Self-disclosure is one way to maintain and improve relationships (Jourard, 1971; Joinson and Paine, 2007) .", "So two people's intimacy changes over time has relationship with self-disclosure in their conversation.", "However, it is hard to identify intimacy between users in large scale online social network.", "So we choose conversation behaviors such as conversation frequency and length which can be treated as proxies for measuring intimacy between two people (Emmers- Sommer, 2004; Bak et al., 2012) .", "With SDTM, we can automatically classify the SD level of a large number of conversations, so we investigate whether there is a similar relationship between self-disclosure in conversations and subsequent conversation behaviors with the same partner on Twitter.", "For comparing conversation behaviors over time, we divided the conversations into two sets for each dyad.", "For the initial period, we include conversations from the dyad's first conversation to 20 days later.", "And for the subsequent period, we include conversations during the subsequent 10 days.", "We compute proportions of conversation for each SD level for each dyad in the initial and subsequent periods.", "More specifically, we ask the following three questions: 1.", "If a dyad shows high conversation frequency at a particular time period, would they display higher SD in their subsequent conversations?", "2.", "If a dyad displays high SD level in their conversations at a particular time period, would their subsequent conversations be longer?", "3.", "If a dyad displays high overall SD level, would their conversations increase in length over time more than dyads with lower overall SD level?", "Experiment Setup We first run SDTM with all of our Twitter conversation data with 150; 120; 120 topics for SDTM K G , K M and K H respectively.", "The hyper-parameters are the same as in section 5.", "To handle a large dataset, we employ a distributed algorithm (Newman et al., 2009) , and run with 28 threads.", "Table 7 shows some of the topics that were prominent in each SD level by KL-divergence.", "As expected, G level includes general topics such as food, celebrity, soccer and IT devices, M level includes personal communication and birthday, and finally, H level includes sickness and profanity.", "We define a new measurement, SD level score for a dyad in the period, which is a weighted sum of each conversation with SD levels mapped to 1, 2, and 3, 
for the levels G, M, and H, respectively.", "Figure 5 : Relationship between initial conversation frequency and subsequent SD level.", "The solid line is the linear regression line, and the coefficient is 0.0020 with p < 0.0001, which shows a significant positive relationship.", "Subsequent SD level 6.2 Does high frequency of conversation lead to more self-disclosure?", "We investigate whether the initial conversation frequency is correlated with the SD level in the subsequent period.", "We run linear regression with the initial conversation frequency as the independent variable, and SD level in the subsequent period as the dependent variable.", "The regression coefficient is 0.0020 with low pvalue (p < 0.0001).", "Figure 5 shows the scatter plot.", "We can see that the slope of the regression line is positive.", "Does high self-disclosure lead to longer conversations?", "Now we investigate the effect of the selfdisclosure level to conversation length.", "We run linear regression with the intial SD level score as the independent variable, and the rate of change in conversation length between initial period and subsequent period as the dependent variable.", "Conversation length is measured by the number of tweets in a conversation.", "The result of regression is that the independent variable's coefficient is 0.048 with a low p-value (p < 0.0001).", "Figure 6 shows the scatter plot with the regression line, and we can see that the slope of regression line is positive.", "H level 101 184 176 36 104 82 113 33 19 chocolate obama league send twitter going ass better lips butter he's win email follow party bitch sick kisses good romney game i'll tumblr weekend fuck feel love cake vote season sent tweet day yo throat smiles peanut right team dm following night shit cold softly milk president cup address account dinner fucking hope hand sugar people city know fb birthday lmao pain eyes cream good arsenal check followers Now we investigate the conversation length changes over time with three groups, low, medium, and high, by overall SD level.", "Then we investigate changes in conversation length over time.", "Figure 7 shows the results of this investigation.", "First, conversations are generally lengthier when SD level is high.", "This phenomenon is also ob- We divide dyads into three groups by SD level score as low, medium, and high.", "Conversation length noticeably increases over time in the medium and high groups, but only slight in the low group.", "served in figure 6 , but here we can see it as a long-term persistent pattern.", "Second, conversation length increases consistently and significantly for the high and medium groups, but for the low SD group, there is not a significant increase of conversation length over time.", "G level M level Related Work Prior work on quantitatively analyzing selfdisclosure has relied on user surveys (Ledbetter et al., 2011; Trepte and Reinecke, 2013) or human annotation (Barak and Gluck-Ofri, 2007; Courtney Walton and Rice, 2013) .", "These methods consume much time and effort, so they are not suitable for large-scale studies.", "In prior work closest to ours, Bak et al.", "(2012) showed that a topic model can be used to identify self-disclosure, but that work applies a two-step process in which a basic topic model is first applied to find the topics, and then the topics are post-processed for binary classification of self-disclosure.", "We improve upon this work by applying a single unified model of topics and self-disclosure for high accuracy in classifying 
the three levels of self-disclosure.", "Subjectivity which is aspect of expressing opinions (Pang and Lee, 2008; Wiebe et al., 2004) is related with self-disclosure, but they are different dimensions of linguistic behavior.", "Because there indeed are many high self-disclosure tweets that are subjective, but there are also counter examples in annotated dataset.", "The tweet \"England manager is Roy Hodgson.\"", "is low self-disclosure and low subjectivity, \"I have barely any hair left.\"", "is high self-disclosure but low subjectivity, and \"Senator stop lying!\"", "is low self-disclosure but high subjectivity.", "Conclusion and Future Work In this paper, we have presented the self-disclosure topic model (SDTM) for discovering topics and classifying SD levels from Twitter conversation data.", "We devised a set of effective seed words and trigrams, mined from a dataset of secrets.", "We also annotated Twitter conversations to make a ground-truth dataset for SD level.", "With annotated data, we showed that SDTM outperforms previous methods in classification accuracy and Fmeasure.", "We publish the source code of SDTM and the dataset include annotated Twitter conversations and SECRET publicly 7 .", "We also analyzed the relationship between SD level and conversation behaviors over time.", "We found that there is a positive correlation between initial SD level and subsequent conversation length.", "Also, dyads show higher level of SD if they initially display high conversation frequency.", "Finally, dyads with overall medium and high SD level will have longer conversations over time.", "These results support previous results in so-7 http://uilab.kaist.ac.kr/research/ EMNLP2014 cial psychology research with more robust results from a large-scale dataset, and show the effectiveness of computationally analyzing at SD behavior.", "There are several future directions for this research.", "First, we can improve our modeling for higher accuracy and better interpretability.", "For instance, SDTM only considers first-person pronouns and topics.", "Naturally, there are other linguistic patterns that can be identified by humans but not captured by pronouns and topics.", "Second, the number of topics for each level is varied, and so we can explore nonparametric topic models (Teh et al., 2006) which infer the number of topics from the data.", "Third, we can look at the relationship between self-disclosure behavior and general online social network usage beyond conversations.", "We will explore these directions in our future work." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "2.4", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "5", "6", "6.1", "6.3", "7", "8" ], "paper_header_content": [ "Introduction", "Self-Disclosure", "Self-disclosure (SD) level", "G Level of Self-Disclosure", "M Level of Self-Disclosure", "H Level of Self-Disclosure", "Model", "Classifying G vs M/H levels", "Classifying M vs H levels", "Inference", "Data Collection and Annotation", "Collecting Twitter conversations", "Annotating self-disclosure level", "Classification of Self-Disclosure Level", "Relations of Self-Disclosure and Conversation Behaviors", "Experiment Setup", "Does high self-disclosure lead to longer conversations?", "Related Work", "Conclusion and Future Work" ] }
GEM-SciDuet-train-75#paper-1188#slide-14
Challenges for SD research
Lack of ground-truth dataset of SD level No tagged dataset for Twitter conversation No accessible self-disclosure datasets Lack of study about SD in computational linguistics Definitions and examples in social psychology Related word categories in LIWC [Houghton2012]
Lack of ground-truth dataset of SD level No tagged dataset for Twitter conversation No accessible self-disclosure datasets Lack of study about SD in computational linguistics Definitions and examples in social psychology Related word categories in LIWC [Houghton2012]
[]
GEM-SciDuet-train-75#paper-1188#slide-15
1188
the three levels of self-disclosure.", "Subjectivity which is aspect of expressing opinions (Pang and Lee, 2008; Wiebe et al., 2004) is related with self-disclosure, but they are different dimensions of linguistic behavior.", "Because there indeed are many high self-disclosure tweets that are subjective, but there are also counter examples in annotated dataset.", "The tweet \"England manager is Roy Hodgson.\"", "is low self-disclosure and low subjectivity, \"I have barely any hair left.\"", "is high self-disclosure but low subjectivity, and \"Senator stop lying!\"", "is low self-disclosure but high subjectivity.", "Conclusion and Future Work In this paper, we have presented the self-disclosure topic model (SDTM) for discovering topics and classifying SD levels from Twitter conversation data.", "We devised a set of effective seed words and trigrams, mined from a dataset of secrets.", "We also annotated Twitter conversations to make a ground-truth dataset for SD level.", "With annotated data, we showed that SDTM outperforms previous methods in classification accuracy and Fmeasure.", "We publish the source code of SDTM and the dataset include annotated Twitter conversations and SECRET publicly 7 .", "We also analyzed the relationship between SD level and conversation behaviors over time.", "We found that there is a positive correlation between initial SD level and subsequent conversation length.", "Also, dyads show higher level of SD if they initially display high conversation frequency.", "Finally, dyads with overall medium and high SD level will have longer conversations over time.", "These results support previous results in so-7 http://uilab.kaist.ac.kr/research/ EMNLP2014 cial psychology research with more robust results from a large-scale dataset, and show the effectiveness of computationally analyzing at SD behavior.", "There are several future directions for this research.", "First, we can improve our modeling for higher accuracy and better interpretability.", "For instance, SDTM only considers first-person pronouns and topics.", "Naturally, there are other linguistic patterns that can be identified by humans but not captured by pronouns and topics.", "Second, the number of topics for each level is varied, and so we can explore nonparametric topic models (Teh et al., 2006) which infer the number of topics from the data.", "Third, we can look at the relationship between self-disclosure behavior and general online social network usage beyond conversations.", "We will explore these directions in our future work." ] }
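The asymmetric prior initialization described in the experiment setup above (2.0 for a level's own seeds, 10^-6 for another level's seeds, 0.01 for everything else) is mechanical but easy to get wrong, so here is a minimal sketch of it. The function name, toy vocabulary and seed lists are hypothetical; this is an illustration under the paper's stated scheme, not the authors' implementation.

```python
import numpy as np

def init_seed_priors(vocab, seeds_by_level, high=2.0, low=1e-6, default=0.01):
    """Build informative Dirichlet priors beta_G, beta_M, beta_H.

    A level's own seed words/trigrams get `high`, words that seed a
    different level get `low`, and everything else keeps `default`,
    matching the 2.0 / 10^-6 / 0.01 scheme described above.
    """
    index = {w: i for i, w in enumerate(vocab)}
    levels = ("G", "M", "H")
    betas = {lvl: np.full(len(vocab), default) for lvl in levels}
    for lvl in levels:
        for seed_level in levels:
            for seed in seeds_by_level.get(seed_level, []):
                if seed in index:
                    betas[lvl][index[seed]] = high if seed_level == lvl else low
    return betas

# Hypothetical toy vocabulary: 'acne' seeds the H level, 'name' the M level.
vocab = ["acne", "name", "soccer"]
betas = init_seed_priors(vocab, {"M": ["name"], "H": ["acne"]})
print(betas["H"])  # -> approximately [2.0, 1e-06, 0.01]
```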
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "2.4", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "5", "6", "6.1", "6.3", "7", "8" ], "paper_header_content": [ "Introduction", "Self-Disclosure", "Self-disclosure (SD) level", "G Level of Self-Disclosure", "M Level of Self-Disclosure", "H Level of Self-Disclosure", "Model", "Classifying G vs M/H levels", "Classifying M vs H levels", "Inference", "Data Collection and Annotation", "Collecting Twitter conversations", "Annotating self-disclosure level", "Classification of Self-Disclosure Level", "Relations of Self-Disclosure and Conversation Behaviors", "Experiment Setup", "Does high self-disclosure lead to longer conversations?", "Related Work", "Conclusion and Future Work" ] }
GEM-SciDuet-train-75#paper-1188#slide-15
Ground truth Dataset
Ask it to three judges Work on a web-based platform Screenshot of annotation web-based platform
Ask it to three judges Work on a web-based platform Screenshot of annotation web-based platform
[]
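Since the slide above concerns the ground-truth annotation, whose inter-rater agreement the paper reports as Fleiss' kappa of 0.68, a self-contained sketch of that statistic may help readers reproducing the annotation study. The rating matrix below is toy data, not the actual judgments.

```python
import numpy as np

def fleiss_kappa(ratings):
    """Fleiss' kappa for an (items x categories) count matrix, where
    ratings[i, j] is the number of judges assigning item i to category j.
    Assumes every item was rated by the same number of judges."""
    ratings = np.asarray(ratings, dtype=float)
    n_items, _ = ratings.shape
    n_raters = ratings[0].sum()
    # Per-item agreement P_i and its mean P_bar.
    p_i = (np.square(ratings).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_i.mean()
    # Chance agreement P_e from the category marginals.
    p_j = ratings.sum(axis=0) / (n_items * n_raters)
    p_e = np.square(p_j).sum()
    return (p_bar - p_e) / (1 - p_e)

# Toy matrix: 4 tweets, 3 judges, categories (G, M, H).
toy = [[3, 0, 0],
       [0, 3, 0],
       [1, 2, 0],
       [0, 1, 2]]
print(round(fleiss_kappa(toy), 2))
```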
GEM-SciDuet-train-75#paper-1188#slide-16
1188
Self-disclosure topic model for classifying and analyzing Twitter conversations
Self-disclosure, the act of revealing oneself to others, is an important social behavior that strengthens interpersonal relationships and increases social support. Although there are many social science studies of self-disclosure, they are based on manual coding of small datasets and questionnaires. We conduct a computational analysis of self-disclosure with a large dataset of naturally-occurring conversations, a semi-supervised machine learning algorithm, and a computational analysis of the effects of self-disclosure on subsequent conversations. We use a longitudinal dataset of 17 million tweets, all of which occurred in conversations that consist of five or more tweets directly replying to the previous tweet, and from dyads with twenty of more conversations each. We develop self-disclosure topic model (SDTM), a variant of latent Dirichlet allocation (LDA) for automatically classifying the level of self-disclosure for each tweet. We take the results of SDTM and analyze the effects of self-disclosure on subsequent conversations. Our model significantly outperforms several comparable methods on classifying the level of selfdisclosure, and the analysis of the longitudinal data using SDTM uncovers significant and positive correlation between selfdisclosure and conversation frequency and length.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238 ], "paper_content_text": [ "Introduction Self-disclosure is an important and pervasive social behavior.", "People disclose personal information about themselves to improve and maintain * This work was done when JinYeong Bak was a visiting student at Microsoft Research, Beijing, China.", "relationships (Jourard, 1971; Joinson and Paine, 2007) .", "A common instance of self-disclosure is the start of a conversation with an exchange of names and additional self-introductions.", "Another example of self-disclosure, shown in Figure 1c , where the information disclosed about a family member's serious illness, is much more personal than the exchange of names.", "In this paper, we seek to understand this important social behavior using a large-scale Twitter conversation data, automatically classifying the level of self-disclosure using machine learning and correlating the patterns with conversational behaviors which can serve as proxies for measuring intimacy between two conversational partners.", "Twitter conversation data, explained in more detail in section 4.1, enable an extremely large scale study of naturally-occurring self-disclosure behavior, compared to traditional social science studies.", "One challenge of such large scale study, though, remains in the lack of labeled groundtruth data of self-disclosure level.", "That is, naturally-occurring Twitter conversations do not come tagged with the level of self-disclosure in each conversation.", "To overcome that challenge, we propose a semi-supervised machine learning approach using probabilistic topic modeling.", "Our self-disclosure topic model (SDTM) assumes that self-disclosure behavior can be modeled using a combination of simple linguistic features (e.g., pronouns) with automatically discovered semantic themes (i.e., topics).", "For instance, an utterance \"I am finally through with this disastrous relationship\" uses a first-person pronoun and contains a topic about personal relationships.", "In comparison with various other models, SDTM shows the highest accuracy, and the resulting conversation frequency and length patterns on self-disclosure are shown different over time.", "Our contributions to the research community include the following: • We present key features and prior knowledge for identifying self-disclosure level, and show relevance of it with experiment results (Sec.", "2).", "• We present a topic model that explicitly includes the 
level of self-disclosure in a conversation using linguistic features and the latent semantic topics (Sec.", "3).", "• We collect a large dataset of Twitter conversations over three years and annotate a small subset with self-disclosure level (Sec.", "4).", "• We compare the classification accuracy of SDTM with other models and show that it performs the best (Sec.", "5).", "• We correlate the self-disclosure patterns and conversation behaviors to show that there is significant relationship over time (Sec.", "6).", "Self-Disclosure In this section, we look at social science literature for definition of the levels of self-disclosure.", "Using that definition, we devise an approach to automatically identify the levels of self-disclosure in a large corpus of OSN conversations.", "We discuss three approaches, first, using first-person pronoun features, and second, extracting seed words and phrases from the Twitter conversation corpus, and third, extracting seed words and phrases from an external corpus of anonymously posted secrets, and we demonstrate the efficacy of those approaches with an annotated corpus.", "Self-disclosure (SD) level To analyze self-disclosure, researchers categorize self-disclosure language into three levels: G (general) for no disclosure, M for medium disclosure, and H for high disclosure (Vondracek and Von dracek, 1971; Barak and Gluck-Ofri, 2007 G Level of Self-Disclosure An obvious clue of self-disclosure is the use of first-person pronouns.", "For example, phrases such as 'I live' or 'My name is' indicate that the utterance contains personal information.", "In previous research, the simple method of counting first-person pronouns was used to measure the degree of self-disclosure (Joinson, 2001; Barak and Gluck-Ofri, 2007) .", "Consequently, the absence of a first-person pronoun signals that the utterance belongs in the G level of self-disclosure.", "We verify this pattern with a dataset of Tweets annotated with G, M, and H levels.", "We divide the annotated Tweets into two classes, G and M/H.", "Then we compute mutual information of each unigram, bigram, or trigram feature to see which features are most discriminative.", "As Table 1 shows, 18 out of 30 M Level of Self-Disclosure Utterances with M level include two types: 1) information related with past events and future plans, and 2) general information about self (Barak and Gluck-Ofri, 2007) .", "For the former, we add as seed trigrams 'I have been' and 'I will'.", "For the latter, we use seven types of information generally accepted to be personally identifiable information (McCallister, 2010) , as listed in the left column of Table 2 .", "To find the appropriate trigrams for those, we take Twitter conversation data (described in Section 4.1) and look for trigrams that begin with 'I' and 'my' and occur more than 200 times.", "We then check each one to see whether it is related with any of the seven types listed in the table.", "As a result, we find 57 seed trigrams for M level.", "H Level of Self-Disclosure Utterances with H level express secretive wishes or sensitive information that exposes self or someone close (Barak and Gluck-Ofri, 2007) .", "These are generally kept as secrets.", "With this intuition, we crawled 26,523 posts from Six Billion Secrets 1 site where users post secrets anonymously 2 .", "We call this external dataset SECRET.", "Unlike G and M levels, evidence of H level of self-disclosure tends to be topical, such as physical appearance, mental and physical illnesses, and family problems, so we 
take an approach of fitting a topic model driven by seed words.", "A similar approach has been successful in sentiment classification (Jo and Oh, 2011; Kim et al., 2013) .", "A critical component of this approach is the set of seed words with which to drive the discovery of topics that are most indicative of H level selfdisclosure.", "To extract the seed words that express secretive personal information, we compute mutual information (Manning et al., 2008) with SE-CRET and 24,610 randomly selected tweets.", "We select 1,000 words with high mutual information and filter out stop words.", "Table 3 shows some of these words.", "To extract seed trigrams of secretive wishes, we again look for trigrams that start with 'I' or 'my', occur more than 200 times, and select trigrams of wishful thinking, such as 'I want to', and 'I wish I'.", "In total, there are 88 seed words and 8 seed trigrams for H. Since SECRET is quite different from Twitter, we must show that posts in SECRET are semantically similar to the H level Tweets.", "Rather than directly comparing SECRET posts and Tweets, we use the same method of extracting discriminative word features from the annotated H level Tweets (see Section 4.2).", "Table 3 shows the seed words extracted from SECRET as well as the annotated Tweets.", "Because the annotated dataset consists of only 200 conversations, the coverage of the topics seems narrower than the much larger SECRETS, but both datasets show similarities in the topics.", "This, combined with the results of the model with the two sets of seed words (see Section 5 for the results), shows that SECRETS is an effective and simple-to-obtain substitute for an annotated corpus of H level of self-disclosure.", "This section describes our model, the selfdisclosure topic model (SDTM), for classifying self-disclosure level and discovering topics for each self-disclosure level.", "SD level of tweet ct πc SD level proportion of conversation c θ G c ; θ M c ; θ H c Topic proportion of {G; M; H} in con- versation c φ G ; φ M ; φ H Word distribution of {G; M; H} α; γ Dirichlet prior for θ; π β G , β M ; β H Dirichlet prior for φ G ; φ M ; φ H n cl Model In section 2, we discussed different approaches to identifying each level of self-disclosure, based on social science literature, annotated and unannotated Tweets, and an external corpus of secret posts.", "In this section, we describe our self-disclosure topic model, based on the widely used latent Dirichlet allocation (Blei et al., 2003) , which incorporates those approaches.", "Figure 2 illustrates the graphical model of 1.", "For each level l ∈ {G, M, H}: For each topic k ∈ {1, .", ".", ".", ", K l }: Draw φ l k ∼ Dir(β l ) 2.", "For each conversation c ∈ {1, .", ".", ".", ", C}: (a) Draw θ G c ∼ Dir(α) (b) Draw θ M c ∼ Dir(α) (c) Draw θ H c ∼ Dir(α) (d) Draw π c ∼ Dir(γ) (e) For each message t ∈ {1, .", ".", ".", ", T }: i.", "Observe first-person pronouns features x ct ii.", "Draw ω ct ∼ M axEnt(x ct , λ) iii.", "Draw y ct ∼ Bernoulli(ω ct ) iv.", "If y ct = 0 which is G level: A.", "Draw z ct ∼ M ult(θ G c ) B.", "For each word n ∈ {1, .", ".", ".", ", N }: Draw word w ctn ∼ M ult(φ G zct ) Else which can be M or H level: A.", "Draw r ct ∼ M ult(π c ) B.", "Draw z ct ∼ M ult(θ rct c ) C. 
For each word n ∈ {1, .", ".", ".", ", N }: Draw word w ctn ∼ M ult(φ rct zct ) Figure 3: Generative process of SDTM.", "SDTM and how those approaches are embodied in it.", "The first approach based on the first-person pronouns is implemented by the observed variable x ct and the parameters λ from a maximum entropy classifier for G vs. M/H level.", "The approach of seed words and phrases for levels M and H is implemented by the three separate word-topic probability vectors for the three levels of SD: φ l which has a Bayesian informative prior β l where l ∈ {G, M, H}, the three levels of self-disclosure.", "Table 4 lists the notations used in the model and the generative process, and Figure 3 describes the generative process.", "Classifying G vs M/H levels Classifying the SD level for each tweet is done in two parts, and the first part classifies G vs. M/H levels with first-person pronouns (I, my, me).", "In the graphical model, y is the latent variable that represents this classification, and ω is the distribution over y. x is the observation of the firstperson pronoun in the tweets, and λ are the parameters learned from the maximum entropy classifier.", "With the annotated Twitter conversation dataset (described in Section 4.2), we experimented with several classifiers (Decision tree, Naive Bayes) and chose the maximum entropy classifier because it performed the best, similar to other joint topic models (Zhao et al., 2010; Mukherjee et al., 2013) .", "Classifying M vs H levels The second part of the classification, the M and the H level, is driven by informative priors with seed words and seed trigrams.", "In the graphical model, r is the latent variable that represents this classification, and π is the distribution over r. γ is a non-informative prior for π, and β l is an informative prior for each SD level by seed words.", "For example, we assign a high value for the seed word 'acne' for β H , and a low value for 'My name is'.", "This approach is the same as joint models of topic and sentiment (Jo and Oh, 2011; Kim et al., 2013) .", "Inference For posterior inference of SDTM, we use collapsed Gibbs sampling which integrates out latent random variables ω, π, θ, and φ.", "Then we only need to compute y, r and z for each tweet.", "We compute full conditional distribution p(y ct = j , r ct = l , z ct = k |y −ct , r −ct , z −ct , w, x) for tweet ct as follows: p(y ct = 0, z ct = k |y −ct , r −ct , z −ct , w, x) ∝ exp(λ 0 · x ct ) 1 j=0 exp(λ j · x ct ) g(c, t, l , k ), p(y ct = 1, r ct = l , z ct = k |y −ct , r −ct , z −ct , w, x) ∝ exp(λ 1 · x ct ) 1 j=0 exp(λ j · x ct ) (γ l + n (−ct) cl ) g(c, t, l , k ), where z −ct , r −ct , y −ct are z, r, y without tweet ct, m ctk (·) is the marginalized sum over word v of m ctk v and the function g(c, t, l , k ) as follows: g(c, t, l , k ) = Γ( V v=1 β l v + n l −(ct) k v ) Γ( V v=1 β l v + n l −(ct) k v + m ctk (·) ) α k + n l (−ct) ck K k=1 α k + n l ck V v=1 Γ(β l v + n l −(ct) k v + m ctk v ) Γ(β l v + n l −(ct) k v ) .", "Data Collection and Annotation To test our self-disclosure topic model, we use a large dataset of conversations consisting of Tweets over three years such that we can analyze the relationship between self-disclosure behavior and conversation frequency and length over time.", "We chose to crawl Twitter because it offers a practical and large source of conversations (Ritter et al., 2010) .", "Others have also analyzed Twitter conversations for natural language and social media Conv's Tweets 101,686 61,451 1,956,993 17,178,638 
Table 5 : Dataset of Twitter conversations.", "We chose conversations consisting of five or more tweets each.", "We chose dyads with twenty or more conversations.", "Users Dyads research (boyd et al., 2010; Danescu-Niculescu-Mizil et al., 2011) , but we collect conversations from the same set of dyads over several months for a unique longitudinal dataset.", "We also make sure that each conversation is at least five tweets, and that each dyad has at least twenty conversations.", "Collecting Twitter conversations We define a Twitter conversation as a chain of tweets where two users are consecutively replying to each other's tweets using the Twitter reply button.", "We initialize the set of users by randomly sampling thirteen users who reply to other users in English from the Twitter public streams 3 .", "Then we crawl each user's public tweets, and look at users who are mentioned in those tweets.", "It is a breadth-first search in the network defined by users as nodes and edges as conversations.", "We run this search for dyads until the depth of four, and filter out users who tweet in a non-English language.", "We use an open source tool for detecting English tweets 4 .", "To protect users' privacy, we replace Twitter userid, usernames and url in tweets with random strings.", "This dataset consists of 101,686 users, 61,451 dyads, 1,956,993 conversations and 17,178,638 tweets which were posted between August 2007 to July 2013.", "Table 5 summarizes the dataset.", "Annotating self-disclosure level To measure the accuracy of our model, we randomly sample 301 conversations, each with ten or fewer tweets, and ask three judges, fluent in English and graduate students/researchers, to annotate each tweet with the level of self-disclosure.", "Judges first read and discussed the definitions and examples of self-disclosure level shown in (Barak and Gluck-Ofri, 2007) , then they worked separately on a Web-based platform.", "As a result of annotation, there are 122 G level converstaions, 147 M level and 32 H level con- versations, and inter-rater agreement using Fleiss kappa (Fleiss, 1971 ) is 0.68, which is substantial agreement result (Landis and Koch, 1977) .", "Classification of Self-Disclosure Level This section describes experiments and results of SDTM as well as several other methods for classification of self-disclosure level.", "We first start with the annotated dataset in section 4.2 in which each tweet is annotated with SD level.", "We then aggregate all of the tweets of a conversation, and we compute the proportions of tweets in each SD level.", "When the proportion of tweets at M or H level is equal to or greater than 0.2, we take the level of the larger proportion and assign that level to the conversation.", "When the proportions of tweets at M or H level are both less than 0.2, we assign G to the SD level.", "The reason for setting 0.2 as the threshold is that a conversation containing tweets with H or M level of selfdisclosure usually starts with a greeting or a general comment, and contains one or more questions or comments before or after the self-disclosure tweet.", "We compare SDTM with the following methods for classifying conversations for SD level: • LDA (Blei et al., 2003) : A Bayesian topic model.", "Each conversation is treated as a document.", "Used in previous work (Bak et al., 2012) .", "• MedLDA (Zhu et al., 2012) : A supervised topic model for document classification.", "Each conversation is treated as a document and response variable can be mapped to a SD level.", "• LIWC 
(Tausczik and Pennebaker, 2010): Word counts of particular categories 5 .", "Used in previous work (Houghton and Joinson, 2012).", "• Bag of Words + Bigrams + Trigrams (BOW+): A bag of words, bigram and trigram features.", "We exclude features that appear only once or twice.", "• Seed words and trigrams (SEED): Occurrences of seed words/trigrams from SECRET which are described in section 3.3.", "• SDTM with seed words from annotated Tweets (SDTM−): To compare with SDTM below using seed words from SECRET, this uses seed words from the annotated data described in section 2.4.", "• ASUM (Jo and Oh, 2011 ): A joint model of sentiments and topics.", "We map each SD level to one sentiment and use the same seed words/trigrams from SECRET as in SDTM below.", "Used in previous work (Bak et al., 2012) .", "• First-person pronouns (FirstP): Occurrence of first-person pronouns which are described in section 3.2.", "To identify first-person pronouns, we tagged parts of speech in each tweet with the Twitter POS tagger (Owoputi et al., 2013) .", "• First-person pronouns + Seed words/trigrams (FP+SE1): First-person pronouns and seed words/trigrams from SECRET.", "• Two stage classifier with First-person pronouns + Seed words/trigrams (FP+SE2): A Method Acc G F 1 M F 1 H F Table 6 : SD level classification accuracies and Fmeasures using annotated data.", "Acc is accuracy, and G F 1 is F-measure for classifying the G level.", "Avg F 1 is the macroaveraged value of G F 1 , M F 1 and H F 1 .", "SDTM outperforms all other methods compared.", "The difference between SDTM and FirstP is statistically significant (p-value < 0.05 for accuracy, < 0.0001 for Avg F 1 ).", "two stage classifier with first-person pronouns and seed words/trigrams from SE-CRET.", "In the first stage, the classifier identifies G with first-person pronouns.", "Then in the second stage, the classifier uses seed words and trigrams to identify M and H levels.", "• SDTM: Our model with first-person pronouns and seed words/trigrams from SE-CRET.", "SEED, LIWC, LDA and FirstP cannot be used directly for classification, so we use Maximum entropy model with outputs of each of those models as features 6 .", "BOW+ uses SVM with a radial basis kernel which performs better than all other settings tried including maximum entropy.", "We split the data randomly into 80/20 for train/test.", "We run MedLDA, ASUM and SDTM 20 times each and compute the average accuracies and F-measure for each level.", "We run LDA and MedLDA with various number of topics from 80 to 140, and 120 topics shows best outputs.", "So we set 120 topics for LDA, MedLDA and ASUM, 60; 40; 40 topics for SDTM K G , K M and K H respectively which is best perform from 40; 40; 40 to 60; 60; 60 topics.", "We assume that a conversation has few topics and self-disclosure levels, so we set α = γ = 0.1 (Tang et al., 2014) .", "To incorporate the seed words and trigrams into ASUM and SDTM, we initialize β G , β M and β H differently.", "We assign a high value of 2.0 for each seed word and trigram for that level, and a low value of 10 −6 for each word that is a seed word for another level, and a default value of 0.01 for all other words.", "This approach is the same as previous papers (Jo and Oh, 2011; Kim et al., 2013) .", "As Table 6 shows, SDTM performs better than the other methods for accuracy as well as Fmeasure.", "LDA and MedLDA generally show the lowest performance, which is not surprising given these models are quite general and not tuned specifically for this type of semi-supervised 
classification task.", "BOW which is simple word features also does not perform well, showing especially low F-measure for the H level.", "LIWC and SEED perform better than LDA, but these have quite low F-measure for G and H levels.", "ASUM shows better performance for classifying H level than others, confirming the effectiveness of a topic modeling approach to this difficult task, but not as well as SDTM.", "FirstP shows good F-measure for the G level, but the H level F-measure is quite low, even lower than SEED.", "Combining first-person pronouns and seed words and trigrams (FP+SE1) shows better than each feature alone, and the two stage classifier (FP+SE2) which is a similar approach taken in SDTM shows better results.", "Finally, SDTM classifies G and M level at a similar accuracy with FirstP, FP+SE1 and FP+SE2, but it significantly improves accuracy for the H level compared to all other methods.", "Relations of Self-Disclosure and Conversation Behaviors In this section, we investigate whether there is a relationship between self-disclosure and conversation behaviors over time.", "Self-disclosure is one way to maintain and improve relationships (Jourard, 1971; Joinson and Paine, 2007) .", "So two people's intimacy changes over time has relationship with self-disclosure in their conversation.", "However, it is hard to identify intimacy between users in large scale online social network.", "So we choose conversation behaviors such as conversation frequency and length which can be treated as proxies for measuring intimacy between two people (Emmers- Sommer, 2004; Bak et al., 2012) .", "With SDTM, we can automatically classify the SD level of a large number of conversations, so we investigate whether there is a similar relationship between self-disclosure in conversations and subsequent conversation behaviors with the same partner on Twitter.", "For comparing conversation behaviors over time, we divided the conversations into two sets for each dyad.", "For the initial period, we include conversations from the dyad's first conversation to 20 days later.", "And for the subsequent period, we include conversations during the subsequent 10 days.", "We compute proportions of conversation for each SD level for each dyad in the initial and subsequent periods.", "More specifically, we ask the following three questions: 1.", "If a dyad shows high conversation frequency at a particular time period, would they display higher SD in their subsequent conversations?", "2.", "If a dyad displays high SD level in their conversations at a particular time period, would their subsequent conversations be longer?", "3.", "If a dyad displays high overall SD level, would their conversations increase in length over time more than dyads with lower overall SD level?", "Experiment Setup We first run SDTM with all of our Twitter conversation data with 150; 120; 120 topics for SDTM K G , K M and K H respectively.", "The hyper-parameters are the same as in section 5.", "To handle a large dataset, we employ a distributed algorithm (Newman et al., 2009) , and run with 28 threads.", "Table 7 shows some of the topics that were prominent in each SD level by KL-divergence.", "As expected, G level includes general topics such as food, celebrity, soccer and IT devices, M level includes personal communication and birthday, and finally, H level includes sickness and profanity.", "We define a new measurement, SD level score for a dyad in the period, which is a weighted sum of each conversation with SD levels mapped to 1, 2, and 3, 
for the levels G, M, and H, respectively.", "Figure 5 : Relationship between initial conversation frequency and subsequent SD level.", "The solid line is the linear regression line, and the coefficient is 0.0020 with p < 0.0001, which shows a significant positive relationship.", "Subsequent SD level 6.2 Does high frequency of conversation lead to more self-disclosure?", "We investigate whether the initial conversation frequency is correlated with the SD level in the subsequent period.", "We run linear regression with the initial conversation frequency as the independent variable, and SD level in the subsequent period as the dependent variable.", "The regression coefficient is 0.0020 with low pvalue (p < 0.0001).", "Figure 5 shows the scatter plot.", "We can see that the slope of the regression line is positive.", "Does high self-disclosure lead to longer conversations?", "Now we investigate the effect of the selfdisclosure level to conversation length.", "We run linear regression with the intial SD level score as the independent variable, and the rate of change in conversation length between initial period and subsequent period as the dependent variable.", "Conversation length is measured by the number of tweets in a conversation.", "The result of regression is that the independent variable's coefficient is 0.048 with a low p-value (p < 0.0001).", "Figure 6 shows the scatter plot with the regression line, and we can see that the slope of regression line is positive.", "H level 101 184 176 36 104 82 113 33 19 chocolate obama league send twitter going ass better lips butter he's win email follow party bitch sick kisses good romney game i'll tumblr weekend fuck feel love cake vote season sent tweet day yo throat smiles peanut right team dm following night shit cold softly milk president cup address account dinner fucking hope hand sugar people city know fb birthday lmao pain eyes cream good arsenal check followers Now we investigate the conversation length changes over time with three groups, low, medium, and high, by overall SD level.", "Then we investigate changes in conversation length over time.", "Figure 7 shows the results of this investigation.", "First, conversations are generally lengthier when SD level is high.", "This phenomenon is also ob- We divide dyads into three groups by SD level score as low, medium, and high.", "Conversation length noticeably increases over time in the medium and high groups, but only slight in the low group.", "served in figure 6 , but here we can see it as a long-term persistent pattern.", "Second, conversation length increases consistently and significantly for the high and medium groups, but for the low SD group, there is not a significant increase of conversation length over time.", "G level M level Related Work Prior work on quantitatively analyzing selfdisclosure has relied on user surveys (Ledbetter et al., 2011; Trepte and Reinecke, 2013) or human annotation (Barak and Gluck-Ofri, 2007; Courtney Walton and Rice, 2013) .", "These methods consume much time and effort, so they are not suitable for large-scale studies.", "In prior work closest to ours, Bak et al.", "(2012) showed that a topic model can be used to identify self-disclosure, but that work applies a two-step process in which a basic topic model is first applied to find the topics, and then the topics are post-processed for binary classification of self-disclosure.", "We improve upon this work by applying a single unified model of topics and self-disclosure for high accuracy in classifying 
the three levels of self-disclosure.", "Subjectivity which is aspect of expressing opinions (Pang and Lee, 2008; Wiebe et al., 2004) is related with self-disclosure, but they are different dimensions of linguistic behavior.", "Because there indeed are many high self-disclosure tweets that are subjective, but there are also counter examples in annotated dataset.", "The tweet \"England manager is Roy Hodgson.\"", "is low self-disclosure and low subjectivity, \"I have barely any hair left.\"", "is high self-disclosure but low subjectivity, and \"Senator stop lying!\"", "is low self-disclosure but high subjectivity.", "Conclusion and Future Work In this paper, we have presented the self-disclosure topic model (SDTM) for discovering topics and classifying SD levels from Twitter conversation data.", "We devised a set of effective seed words and trigrams, mined from a dataset of secrets.", "We also annotated Twitter conversations to make a ground-truth dataset for SD level.", "With annotated data, we showed that SDTM outperforms previous methods in classification accuracy and Fmeasure.", "We publish the source code of SDTM and the dataset include annotated Twitter conversations and SECRET publicly 7 .", "We also analyzed the relationship between SD level and conversation behaviors over time.", "We found that there is a positive correlation between initial SD level and subsequent conversation length.", "Also, dyads show higher level of SD if they initially display high conversation frequency.", "Finally, dyads with overall medium and high SD level will have longer conversations over time.", "These results support previous results in so-7 http://uilab.kaist.ac.kr/research/ EMNLP2014 cial psychology research with more robust results from a large-scale dataset, and show the effectiveness of computationally analyzing at SD behavior.", "There are several future directions for this research.", "First, we can improve our modeling for higher accuracy and better interpretability.", "For instance, SDTM only considers first-person pronouns and topics.", "Naturally, there are other linguistic patterns that can be identified by humans but not captured by pronouns and topics.", "Second, the number of topics for each level is varied, and so we can explore nonparametric topic models (Teh et al., 2006) which infer the number of topics from the data.", "Third, we can look at the relationship between self-disclosure behavior and general online social network usage beyond conversations.", "We will explore these directions in our future work." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "2.4", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "5", "6", "6.1", "6.3", "7", "8" ], "paper_header_content": [ "Introduction", "Self-Disclosure", "Self-disclosure (SD) level", "G Level of Self-Disclosure", "M Level of Self-Disclosure", "H Level of Self-Disclosure", "Model", "Classifying G vs M/H levels", "Classifying M vs H levels", "Inference", "Data Collection and Annotation", "Collecting Twitter conversations", "Annotating self-disclosure level", "Classification of Self-Disclosure Level", "Relations of Self-Disclosure and Conversation Behaviors", "Experiment Setup", "Does high self-disclosure lead to longer conversations?", "Related Work", "Conclusion and Future Work" ] }
GEM-SciDuet-train-75#paper-1188#slide-16
Assumptions First person pronouns
First person pronouns are good indicators for self-disclosure Used in previous research [Joinson et al.2001, Barak et al.2007] Observed as highly discriminative features between G and M/H in annotated dataset my I love I have a I I was is going to Im I have to go to but my dad want to go was go to and I was Ive my mom going to miss
First person pronouns are good indicators for self-disclosure Used in previous research [Joinson et al.2001, Barak et al.2007] Observed as highly discriminative features between G and M/H in annotated dataset my I love I have a I I was is going to Im I have to go to but my dad want to go was go to and I was Ive my mom going to miss
[]
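The slide above argues that first-person pronouns separate G from M/H, which can be illustrated with a rough feature check. The token pattern and pronoun list below are simplifications (the paper identifies pronouns with the Twitter POS tagger of Owoputi et al., 2013), so treat this as a sketch rather than the model's actual first stage.

```python
import re

# First-person pronouns used as the G vs. M/H signal; contractions are
# added here as an assumption beyond the paper's I / my / me list.
FIRST_PERSON = {"i", "my", "me", "i'm", "i've", "i'll"}

def has_first_person(tweet):
    """True if the tweet contains a first-person pronoun (a rough
    word-level check standing in for the POS-tagging pipeline)."""
    tokens = re.findall(r"[a-z']+", tweet.lower())
    return any(tok in FIRST_PERSON for tok in tokens)

# No first-person pronoun -> treated as G (no disclosure); otherwise the
# tweet may be M or H and is passed on to the topic component.
print(has_first_person("England manager is Roy Hodgson."))  # False
print(has_first_person("I have barely any hair left."))     # True
```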
GEM-SciDuet-train-75#paper-1188#slide-17
1188
Self-disclosure topic model for classifying and analyzing Twitter conversations
Self-disclosure, the act of revealing oneself to others, is an important social behavior that strengthens interpersonal relationships and increases social support. Although there are many social science studies of self-disclosure, they are based on manual coding of small datasets and questionnaires. We conduct a computational analysis of self-disclosure with a large dataset of naturally-occurring conversations, a semi-supervised machine learning algorithm, and a computational analysis of the effects of self-disclosure on subsequent conversations. We use a longitudinal dataset of 17 million tweets, all of which occurred in conversations that consist of five or more tweets directly replying to the previous tweet, and from dyads with twenty of more conversations each. We develop self-disclosure topic model (SDTM), a variant of latent Dirichlet allocation (LDA) for automatically classifying the level of self-disclosure for each tweet. We take the results of SDTM and analyze the effects of self-disclosure on subsequent conversations. Our model significantly outperforms several comparable methods on classifying the level of selfdisclosure, and the analysis of the longitudinal data using SDTM uncovers significant and positive correlation between selfdisclosure and conversation frequency and length.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238 ], "paper_content_text": [ "Introduction Self-disclosure is an important and pervasive social behavior.", "People disclose personal information about themselves to improve and maintain * This work was done when JinYeong Bak was a visiting student at Microsoft Research, Beijing, China.", "relationships (Jourard, 1971; Joinson and Paine, 2007) .", "A common instance of self-disclosure is the start of a conversation with an exchange of names and additional self-introductions.", "Another example of self-disclosure, shown in Figure 1c , where the information disclosed about a family member's serious illness, is much more personal than the exchange of names.", "In this paper, we seek to understand this important social behavior using a large-scale Twitter conversation data, automatically classifying the level of self-disclosure using machine learning and correlating the patterns with conversational behaviors which can serve as proxies for measuring intimacy between two conversational partners.", "Twitter conversation data, explained in more detail in section 4.1, enable an extremely large scale study of naturally-occurring self-disclosure behavior, compared to traditional social science studies.", "One challenge of such large scale study, though, remains in the lack of labeled groundtruth data of self-disclosure level.", "That is, naturally-occurring Twitter conversations do not come tagged with the level of self-disclosure in each conversation.", "To overcome that challenge, we propose a semi-supervised machine learning approach using probabilistic topic modeling.", "Our self-disclosure topic model (SDTM) assumes that self-disclosure behavior can be modeled using a combination of simple linguistic features (e.g., pronouns) with automatically discovered semantic themes (i.e., topics).", "For instance, an utterance \"I am finally through with this disastrous relationship\" uses a first-person pronoun and contains a topic about personal relationships.", "In comparison with various other models, SDTM shows the highest accuracy, and the resulting conversation frequency and length patterns on self-disclosure are shown different over time.", "Our contributions to the research community include the following: • We present key features and prior knowledge for identifying self-disclosure level, and show relevance of it with experiment results (Sec.", "2).", "• We present a topic model that explicitly includes the 
level of self-disclosure in a conversation using linguistic features and the latent semantic topics (Sec.", "3).", "• We collect a large dataset of Twitter conversations over three years and annotate a small subset with self-disclosure level (Sec.", "4).", "• We compare the classification accuracy of SDTM with other models and show that it performs the best (Sec.", "5).", "• We correlate the self-disclosure patterns and conversation behaviors to show that there is significant relationship over time (Sec.", "6).", "Self-Disclosure In this section, we look at social science literature for definition of the levels of self-disclosure.", "Using that definition, we devise an approach to automatically identify the levels of self-disclosure in a large corpus of OSN conversations.", "We discuss three approaches, first, using first-person pronoun features, and second, extracting seed words and phrases from the Twitter conversation corpus, and third, extracting seed words and phrases from an external corpus of anonymously posted secrets, and we demonstrate the efficacy of those approaches with an annotated corpus.", "Self-disclosure (SD) level To analyze self-disclosure, researchers categorize self-disclosure language into three levels: G (general) for no disclosure, M for medium disclosure, and H for high disclosure (Vondracek and Von dracek, 1971; Barak and Gluck-Ofri, 2007 G Level of Self-Disclosure An obvious clue of self-disclosure is the use of first-person pronouns.", "For example, phrases such as 'I live' or 'My name is' indicate that the utterance contains personal information.", "In previous research, the simple method of counting first-person pronouns was used to measure the degree of self-disclosure (Joinson, 2001; Barak and Gluck-Ofri, 2007) .", "Consequently, the absence of a first-person pronoun signals that the utterance belongs in the G level of self-disclosure.", "We verify this pattern with a dataset of Tweets annotated with G, M, and H levels.", "We divide the annotated Tweets into two classes, G and M/H.", "Then we compute mutual information of each unigram, bigram, or trigram feature to see which features are most discriminative.", "As Table 1 shows, 18 out of 30 M Level of Self-Disclosure Utterances with M level include two types: 1) information related with past events and future plans, and 2) general information about self (Barak and Gluck-Ofri, 2007) .", "For the former, we add as seed trigrams 'I have been' and 'I will'.", "For the latter, we use seven types of information generally accepted to be personally identifiable information (McCallister, 2010) , as listed in the left column of Table 2 .", "To find the appropriate trigrams for those, we take Twitter conversation data (described in Section 4.1) and look for trigrams that begin with 'I' and 'my' and occur more than 200 times.", "We then check each one to see whether it is related with any of the seven types listed in the table.", "As a result, we find 57 seed trigrams for M level.", "H Level of Self-Disclosure Utterances with H level express secretive wishes or sensitive information that exposes self or someone close (Barak and Gluck-Ofri, 2007) .", "These are generally kept as secrets.", "With this intuition, we crawled 26,523 posts from Six Billion Secrets 1 site where users post secrets anonymously 2 .", "We call this external dataset SECRET.", "Unlike G and M levels, evidence of H level of self-disclosure tends to be topical, such as physical appearance, mental and physical illnesses, and family problems, so we 
take an approach of fitting a topic model driven by seed words.", "A similar approach has been successful in sentiment classification (Jo and Oh, 2011; Kim et al., 2013) .", "A critical component of this approach is the set of seed words with which to drive the discovery of topics that are most indicative of H level selfdisclosure.", "To extract the seed words that express secretive personal information, we compute mutual information (Manning et al., 2008) with SE-CRET and 24,610 randomly selected tweets.", "We select 1,000 words with high mutual information and filter out stop words.", "Table 3 shows some of these words.", "To extract seed trigrams of secretive wishes, we again look for trigrams that start with 'I' or 'my', occur more than 200 times, and select trigrams of wishful thinking, such as 'I want to', and 'I wish I'.", "In total, there are 88 seed words and 8 seed trigrams for H. Since SECRET is quite different from Twitter, we must show that posts in SECRET are semantically similar to the H level Tweets.", "Rather than directly comparing SECRET posts and Tweets, we use the same method of extracting discriminative word features from the annotated H level Tweets (see Section 4.2).", "Table 3 shows the seed words extracted from SECRET as well as the annotated Tweets.", "Because the annotated dataset consists of only 200 conversations, the coverage of the topics seems narrower than the much larger SECRETS, but both datasets show similarities in the topics.", "This, combined with the results of the model with the two sets of seed words (see Section 5 for the results), shows that SECRETS is an effective and simple-to-obtain substitute for an annotated corpus of H level of self-disclosure.", "This section describes our model, the selfdisclosure topic model (SDTM), for classifying self-disclosure level and discovering topics for each self-disclosure level.", "SD level of tweet ct πc SD level proportion of conversation c θ G c ; θ M c ; θ H c Topic proportion of {G; M; H} in con- versation c φ G ; φ M ; φ H Word distribution of {G; M; H} α; γ Dirichlet prior for θ; π β G , β M ; β H Dirichlet prior for φ G ; φ M ; φ H n cl Model In section 2, we discussed different approaches to identifying each level of self-disclosure, based on social science literature, annotated and unannotated Tweets, and an external corpus of secret posts.", "In this section, we describe our self-disclosure topic model, based on the widely used latent Dirichlet allocation (Blei et al., 2003) , which incorporates those approaches.", "Figure 2 illustrates the graphical model of 1.", "For each level l ∈ {G, M, H}: For each topic k ∈ {1, .", ".", ".", ", K l }: Draw φ l k ∼ Dir(β l ) 2.", "For each conversation c ∈ {1, .", ".", ".", ", C}: (a) Draw θ G c ∼ Dir(α) (b) Draw θ M c ∼ Dir(α) (c) Draw θ H c ∼ Dir(α) (d) Draw π c ∼ Dir(γ) (e) For each message t ∈ {1, .", ".", ".", ", T }: i.", "Observe first-person pronouns features x ct ii.", "Draw ω ct ∼ M axEnt(x ct , λ) iii.", "Draw y ct ∼ Bernoulli(ω ct ) iv.", "If y ct = 0 which is G level: A.", "Draw z ct ∼ M ult(θ G c ) B.", "For each word n ∈ {1, .", ".", ".", ", N }: Draw word w ctn ∼ M ult(φ G zct ) Else which can be M or H level: A.", "Draw r ct ∼ M ult(π c ) B.", "Draw z ct ∼ M ult(θ rct c ) C. 
For each word n ∈ {1, .", ".", ".", ", N }: Draw word w ctn ∼ M ult(φ rct zct ) Figure 3: Generative process of SDTM.", "SDTM and how those approaches are embodied in it.", "The first approach based on the first-person pronouns is implemented by the observed variable x ct and the parameters λ from a maximum entropy classifier for G vs. M/H level.", "The approach of seed words and phrases for levels M and H is implemented by the three separate word-topic probability vectors for the three levels of SD: φ l which has a Bayesian informative prior β l where l ∈ {G, M, H}, the three levels of self-disclosure.", "Table 4 lists the notations used in the model and the generative process, and Figure 3 describes the generative process.", "Classifying G vs M/H levels Classifying the SD level for each tweet is done in two parts, and the first part classifies G vs. M/H levels with first-person pronouns (I, my, me).", "In the graphical model, y is the latent variable that represents this classification, and ω is the distribution over y. x is the observation of the firstperson pronoun in the tweets, and λ are the parameters learned from the maximum entropy classifier.", "With the annotated Twitter conversation dataset (described in Section 4.2), we experimented with several classifiers (Decision tree, Naive Bayes) and chose the maximum entropy classifier because it performed the best, similar to other joint topic models (Zhao et al., 2010; Mukherjee et al., 2013) .", "Classifying M vs H levels The second part of the classification, the M and the H level, is driven by informative priors with seed words and seed trigrams.", "In the graphical model, r is the latent variable that represents this classification, and π is the distribution over r. γ is a non-informative prior for π, and β l is an informative prior for each SD level by seed words.", "For example, we assign a high value for the seed word 'acne' for β H , and a low value for 'My name is'.", "This approach is the same as joint models of topic and sentiment (Jo and Oh, 2011; Kim et al., 2013) .", "Inference For posterior inference of SDTM, we use collapsed Gibbs sampling which integrates out latent random variables ω, π, θ, and φ.", "Then we only need to compute y, r and z for each tweet.", "We compute full conditional distribution p(y ct = j , r ct = l , z ct = k |y −ct , r −ct , z −ct , w, x) for tweet ct as follows: p(y ct = 0, z ct = k |y −ct , r −ct , z −ct , w, x) ∝ exp(λ 0 · x ct ) 1 j=0 exp(λ j · x ct ) g(c, t, l , k ), p(y ct = 1, r ct = l , z ct = k |y −ct , r −ct , z −ct , w, x) ∝ exp(λ 1 · x ct ) 1 j=0 exp(λ j · x ct ) (γ l + n (−ct) cl ) g(c, t, l , k ), where z −ct , r −ct , y −ct are z, r, y without tweet ct, m ctk (·) is the marginalized sum over word v of m ctk v and the function g(c, t, l , k ) as follows: g(c, t, l , k ) = Γ( V v=1 β l v + n l −(ct) k v ) Γ( V v=1 β l v + n l −(ct) k v + m ctk (·) ) α k + n l (−ct) ck K k=1 α k + n l ck V v=1 Γ(β l v + n l −(ct) k v + m ctk v ) Γ(β l v + n l −(ct) k v ) .", "Data Collection and Annotation To test our self-disclosure topic model, we use a large dataset of conversations consisting of Tweets over three years such that we can analyze the relationship between self-disclosure behavior and conversation frequency and length over time.", "We chose to crawl Twitter because it offers a practical and large source of conversations (Ritter et al., 2010) .", "Others have also analyzed Twitter conversations for natural language and social media Conv's Tweets 101,686 61,451 1,956,993 17,178,638 
Table 5 : Dataset of Twitter conversations.", "We chose conversations consisting of five or more tweets each.", "We chose dyads with twenty or more conversations.", "Users Dyads research (boyd et al., 2010; Danescu-Niculescu-Mizil et al., 2011) , but we collect conversations from the same set of dyads over several months for a unique longitudinal dataset.", "We also make sure that each conversation is at least five tweets, and that each dyad has at least twenty conversations.", "Collecting Twitter conversations We define a Twitter conversation as a chain of tweets where two users are consecutively replying to each other's tweets using the Twitter reply button.", "We initialize the set of users by randomly sampling thirteen users who reply to other users in English from the Twitter public streams 3 .", "Then we crawl each user's public tweets, and look at users who are mentioned in those tweets.", "It is a breadth-first search in the network defined by users as nodes and edges as conversations.", "We run this search for dyads until the depth of four, and filter out users who tweet in a non-English language.", "We use an open source tool for detecting English tweets 4 .", "To protect users' privacy, we replace Twitter userid, usernames and url in tweets with random strings.", "This dataset consists of 101,686 users, 61,451 dyads, 1,956,993 conversations and 17,178,638 tweets which were posted between August 2007 to July 2013.", "Table 5 summarizes the dataset.", "Annotating self-disclosure level To measure the accuracy of our model, we randomly sample 301 conversations, each with ten or fewer tweets, and ask three judges, fluent in English and graduate students/researchers, to annotate each tweet with the level of self-disclosure.", "Judges first read and discussed the definitions and examples of self-disclosure level shown in (Barak and Gluck-Ofri, 2007) , then they worked separately on a Web-based platform.", "As a result of annotation, there are 122 G level converstaions, 147 M level and 32 H level con- versations, and inter-rater agreement using Fleiss kappa (Fleiss, 1971 ) is 0.68, which is substantial agreement result (Landis and Koch, 1977) .", "Classification of Self-Disclosure Level This section describes experiments and results of SDTM as well as several other methods for classification of self-disclosure level.", "We first start with the annotated dataset in section 4.2 in which each tweet is annotated with SD level.", "We then aggregate all of the tweets of a conversation, and we compute the proportions of tweets in each SD level.", "When the proportion of tweets at M or H level is equal to or greater than 0.2, we take the level of the larger proportion and assign that level to the conversation.", "When the proportions of tweets at M or H level are both less than 0.2, we assign G to the SD level.", "The reason for setting 0.2 as the threshold is that a conversation containing tweets with H or M level of selfdisclosure usually starts with a greeting or a general comment, and contains one or more questions or comments before or after the self-disclosure tweet.", "We compare SDTM with the following methods for classifying conversations for SD level: • LDA (Blei et al., 2003) : A Bayesian topic model.", "Each conversation is treated as a document.", "Used in previous work (Bak et al., 2012) .", "• MedLDA (Zhu et al., 2012) : A supervised topic model for document classification.", "Each conversation is treated as a document and response variable can be mapped to a SD level.", "• LIWC 
(Tausczik and Pennebaker, 2010): Word counts of particular categories.", "Used in previous work (Houghton and Joinson, 2012).", "• Bag of Words + Bigrams + Trigrams (BOW+): Bag-of-words, bigram, and trigram features.", "We exclude features that appear only once or twice.", "• Seed words and trigrams (SEED): Occurrences of seed words/trigrams from SECRET, which are described in section 3.3.", "• SDTM with seed words from annotated Tweets (SDTM−): To compare with SDTM below, which uses seed words from SECRET, this uses seed words from the annotated data described in section 2.4.", "• ASUM (Jo and Oh, 2011): A joint model of sentiments and topics.", "We map each SD level to one sentiment and use the same seed words/trigrams from SECRET as in SDTM below.", "Used in previous work (Bak et al., 2012).", "• First-person pronouns (FirstP): Occurrence of first-person pronouns, which are described in section 3.2.", "To identify first-person pronouns, we tagged parts of speech in each tweet with the Twitter POS tagger (Owoputi et al., 2013).", "• First-person pronouns + Seed words/trigrams (FP+SE1): First-person pronouns and seed words/trigrams from SECRET.", "• Two-stage classifier with First-person pronouns + Seed words/trigrams (FP+SE2): A two-stage classifier with first-person pronouns and seed words/trigrams from SECRET.", "In the first stage, the classifier identifies G with first-person pronouns.", "Then in the second stage, the classifier uses seed words and trigrams to identify the M and H levels.", "• SDTM: Our model with first-person pronouns and seed words/trigrams from SECRET.", "Table 6: SD level classification accuracies and F-measures (Method, Acc, G F1, M F1, H F1) using annotated data.", "Acc is accuracy, and G F1 is the F-measure for classifying the G level.", "Avg F1 is the macro-averaged value of G F1, M F1, and H F1.", "SDTM outperforms all other methods compared.", "The difference between SDTM and FirstP is statistically significant (p < 0.05 for accuracy, < 0.0001 for Avg F1).", "SEED, LIWC, LDA, and FirstP cannot be used directly for classification, so we use a maximum entropy model with the outputs of each of those models as features.", "BOW+ uses an SVM with a radial basis kernel, which performs better than all other settings tried, including maximum entropy.", "We split the data randomly into 80/20 for train/test.", "We run MedLDA, ASUM, and SDTM 20 times each and compute the average accuracies and F-measures for each level.", "We run LDA and MedLDA with various numbers of topics from 80 to 140, and 120 topics gives the best outputs.", "So we set 120 topics for LDA, MedLDA, and ASUM, and 60, 40, and 40 topics for SDTM's K_G, K_M, and K_H respectively, the best-performing setting among those tried from 40/40/40 to 60/60/60.", "We assume that a conversation has few topics and self-disclosure levels, so we set α = γ = 0.1 (Tang et al., 2014).", "To incorporate the seed words and trigrams into ASUM and SDTM, we initialize β_G, β_M, and β_H differently.", "We assign a high value of 2.0 for each seed word and trigram of that level, a low value of 10^-6 for each word that is a seed word for another level, and a default value of 0.01 for all other words.", "This approach is the same as in previous papers (Jo and Oh, 2011; Kim et al., 2013).", "As Table 6 shows, SDTM performs better than the other methods in accuracy as well as F-measure.", "LDA and MedLDA generally show the lowest performance, which is not surprising given these models are quite general and not tuned specifically for this type of semi-supervised
classification task.", "BOW+, which uses simple word features, also does not perform well, showing an especially low F-measure for the H level.", "LIWC and SEED perform better than LDA, but these have quite low F-measures for the G and H levels.", "ASUM shows better performance for classifying the H level than the others, confirming the effectiveness of a topic modeling approach to this difficult task, but not as well as SDTM.", "FirstP shows a good F-measure for the G level, but its H level F-measure is quite low, even lower than SEED's.", "Combining first-person pronouns with seed words and trigrams (FP+SE1) performs better than either feature set alone, and the two-stage classifier (FP+SE2), which takes an approach similar to SDTM's, shows better results still.", "Finally, SDTM classifies the G and M levels with accuracy similar to FirstP, FP+SE1, and FP+SE2, but it significantly improves accuracy for the H level compared to all other methods.", "Relations of Self-Disclosure and Conversation Behaviors In this section, we investigate whether there is a relationship between self-disclosure and conversation behaviors over time.", "Self-disclosure is one way to maintain and improve relationships (Jourard, 1971; Joinson and Paine, 2007).", "Thus, how two people's intimacy changes over time is related to the self-disclosure in their conversations.", "However, it is hard to identify intimacy between users in a large-scale online social network.", "So we choose conversation behaviors, such as conversation frequency and length, which can be treated as proxies for measuring intimacy between two people (Emmers-Sommer, 2004; Bak et al., 2012).", "With SDTM, we can automatically classify the SD level of a large number of conversations, so we investigate whether there is a similar relationship between self-disclosure in conversations and subsequent conversation behaviors with the same partner on Twitter.", "For comparing conversation behaviors over time, we divided the conversations into two sets for each dyad.", "For the initial period, we include conversations from the dyad's first conversation to 20 days later.", "For the subsequent period, we include conversations during the subsequent 10 days.", "We compute the proportion of conversations at each SD level for each dyad in the initial and subsequent periods.", "More specifically, we ask the following three questions: 1.", "If a dyad shows high conversation frequency in a particular time period, would they display higher SD in their subsequent conversations?", "2.", "If a dyad displays a high SD level in their conversations in a particular time period, would their subsequent conversations be longer?", "3.", "If a dyad displays a high overall SD level, would their conversations increase in length over time more than those of dyads with a lower overall SD level?", "Experiment Setup We first run SDTM on all of our Twitter conversation data with 150, 120, and 120 topics for SDTM's K_G, K_M, and K_H respectively.", "The hyper-parameters are the same as in section 5.", "To handle the large dataset, we employ a distributed algorithm (Newman et al., 2009) and run with 28 threads.", "Table 7 shows some of the topics that were prominent in each SD level by KL-divergence.", "As expected, the G level includes general topics such as food, celebrity, soccer, and IT devices; the M level includes personal communication and birthdays; and finally, the H level includes sickness and profanity.", "We define a new measurement, the SD level score of a dyad in a period, which is a weighted sum over its conversations with SD levels mapped to 1, 2, and 3,
for the levels G, M, and H, respectively.", "Figure 5: Relationship between initial conversation frequency and subsequent SD level.", "The solid line is the linear regression line, and the coefficient is 0.0020 with p < 0.0001, which shows a significant positive relationship.", "Does high frequency of conversation lead to more self-disclosure?", "We investigate whether the initial conversation frequency is correlated with the SD level in the subsequent period.", "We run a linear regression with the initial conversation frequency as the independent variable and the SD level in the subsequent period as the dependent variable.", "The regression coefficient is 0.0020 with a low p-value (p < 0.0001).", "Figure 5 shows the scatter plot.", "We can see that the slope of the regression line is positive.", "Does high self-disclosure lead to longer conversations?", "Now we investigate the effect of the self-disclosure level on conversation length.", "We run a linear regression with the initial SD level score as the independent variable and the rate of change in conversation length between the initial and subsequent periods as the dependent variable.", "Conversation length is measured by the number of tweets in a conversation.", "The result of the regression is that the independent variable's coefficient is 0.048 with a low p-value (p < 0.0001).", "Figure 6 shows the scatter plot with the regression line, and we can see that the slope of the regression line is positive.", "Table 7: Example topics prominent at each SD level, shown as top words per topic (food, politics, and soccer at the G level; email, Twitter contact, and birthdays at the M level; sickness and profanity at the H level).", "Now we investigate how conversation length changes over time for three groups, low, medium, and high, by overall SD level.", "Then we investigate changes in conversation length over time.", "Figure 7 shows the results of this investigation.", "First, conversations are generally lengthier when the SD level is high.", "Figure 7: We divide dyads into three groups by SD level score: low, medium, and high.", "Conversation length noticeably increases over time in the medium and high groups, but only slightly in the low group.", "This phenomenon is also observed in Figure 6, but here we can see it as a long-term persistent pattern.", "Second, conversation length increases consistently and significantly for the high and medium groups, but for the low SD group, there is no significant increase in conversation length over time.", "Related Work Prior work on quantitatively analyzing self-disclosure has relied on user surveys (Ledbetter et al., 2011; Trepte and Reinecke, 2013) or human annotation (Barak and Gluck-Ofri, 2007; Courtney Walton and Rice, 2013).", "These methods require much time and effort, so they are not suitable for large-scale studies.", "In prior work closest to ours, Bak et al. (2012) showed that a topic model can be used to identify self-disclosure, but that work applies a two-step process in which a basic topic model is first applied to find the topics, and then the topics are post-processed for binary classification of self-disclosure.", "We improve upon this work by applying a single unified model of topics and self-disclosure for high accuracy in classifying
the three levels of self-disclosure.", "Subjectivity, an aspect of expressing opinions (Pang and Lee, 2008; Wiebe et al., 2004), is related to self-disclosure, but the two are different dimensions of linguistic behavior.", "There are indeed many high self-disclosure tweets that are subjective, but there are also counterexamples in the annotated dataset.", "The tweet \"England manager is Roy Hodgson.\" is low self-disclosure and low subjectivity, \"I have barely any hair left.\" is high self-disclosure but low subjectivity, and \"Senator stop lying!\" is low self-disclosure but high subjectivity.", "Conclusion and Future Work In this paper, we have presented the self-disclosure topic model (SDTM) for discovering topics and classifying SD levels from Twitter conversation data.", "We devised a set of effective seed words and trigrams, mined from a dataset of secrets.", "We also annotated Twitter conversations to make a ground-truth dataset for SD level.", "With the annotated data, we showed that SDTM outperforms previous methods in classification accuracy and F-measure.", "We publicly release the source code of SDTM and the dataset, including the annotated Twitter conversations and SECRET (http://uilab.kaist.ac.kr/research/EMNLP2014).", "We also analyzed the relationship between SD level and conversation behaviors over time.", "We found that there is a positive correlation between initial SD level and subsequent conversation length.", "Also, dyads show a higher level of SD if they initially display high conversation frequency.", "Finally, dyads with overall medium and high SD levels have longer conversations over time.", "These results support previous findings in social psychology research with more robust evidence from a large-scale dataset, and show the effectiveness of computationally analyzing SD behavior.", "There are several future directions for this research.", "First, we can improve our modeling for higher accuracy and better interpretability.", "For instance, SDTM only considers first-person pronouns and topics.", "Naturally, there are other linguistic patterns that can be identified by humans but not captured by pronouns and topics.", "Second, the number of topics for each level is a free parameter, so we can explore nonparametric topic models (Teh et al., 2006), which infer the number of topics from the data.", "Third, we can look at the relationship between self-disclosure behavior and general online social network usage beyond conversations.", "We will explore these directions in our future work." ] }
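For concreteness, the conversation-level labeling rule used in the classification experiments above (the 0.2 threshold) can be written as a short function. This is a minimal sketch: the function name and the tie-breaking choice toward M are ours; only the threshold and level mapping come from the paper.

```python
from collections import Counter

def conversation_sd_level(tweet_levels, threshold=0.2):
    """Aggregate per-tweet SD levels ('G', 'M', 'H') into one conversation label."""
    counts = Counter(tweet_levels)
    n = len(tweet_levels)
    prop_m, prop_h = counts["M"] / n, counts["H"] / n
    if prop_m < threshold and prop_h < threshold:
        return "G"
    # Otherwise assign the level with the larger proportion (ties go to M here).
    return "M" if prop_m >= prop_h else "H"

print(conversation_sd_level(["G", "G", "G", "M", "H"]))  # both at 0.2 -> "M"
```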
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "2.4", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "5", "6", "6.1", "6.3", "7", "8" ], "paper_header_content": [ "Introduction", "Self-Disclosure", "Self-disclosure (SD) level", "G Level of Self-Disclosure", "M Level of Self-Disclosure", "H Level of Self-Disclosure", "Model", "Classifying G vs M/H levels", "Classifying M vs H levels", "Inference", "Data Collection and Annotation", "Collecting Twitter conversations", "Annotating self-disclosure level", "Classification of Self-Disclosure Level", "Relations of Self-Disclosure and Conversation Behaviors", "Experiment Setup", "Does high self-disclosure lead to longer conversations?", "Related Work", "Conclusion and Future Work" ] }
GEM-SciDuet-train-75#paper-1188#slide-17
Assumptions Topics
M and H level have different topics [General vs Sensitive] information about self or intimate Self-disclosure related topics by LDA [Bak2012] Location Time Adult Health Family Profanity san tonight pants teeth family nigga live time wear doctor brother lmao state tomorrow boobs dr sister shit texas good naked dentist uncle ass south ill wearing tooth cousin bitch Can be formalized as topics General information about self Ex) name, location, email address, job, Ex) physical appearance, health, sexuality, death,
M and H level have different topics [General vs Sensitive] information about self or intimate Self-disclosure related topics by LDA [Bak2012] Location Time Adult Health Family Profanity san tonight pants teeth family nigga live time wear doctor brother lmao state tomorrow boobs dr sister shit texas good naked dentist uncle ass south ill wearing tooth cousin bitch Can be formalized as topics General information about self Ex) name, location, email address, job, Ex) physical appearance, health, sexuality, death,
[]
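The dyad-level analyses in section 6 boil down to mapping SD levels to scores and fitting simple linear regressions. A sketch with synthetic stand-in data follows; the reported coefficients (0.0020 and 0.048) come from the paper, while `sd_level_score` here is simplified to a plain mean rather than the paper's weighted sum.

```python
import numpy as np
from scipy.stats import linregress

LEVEL_SCORE = {"G": 1, "M": 2, "H": 3}

def sd_level_score(conversation_levels):
    """SD score of a dyad's conversations in one period (simplified to a mean)."""
    return np.mean([LEVEL_SCORE[l] for l in conversation_levels])

rng = np.random.default_rng(0)
initial_freq = rng.poisson(10, size=200)  # conversations in the first 20 days
subsequent_sd = 1.0 + 0.002 * initial_freq + rng.normal(0, 0.1, size=200)
fit = linregress(initial_freq, subsequent_sd)
print(fit.slope, fit.pvalue)  # positive slope, analogous to Figure 5
```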
GEM-SciDuet-train-75#paper-1188#slide-18
1188
Self-disclosure topic model for classifying and analyzing Twitter conversations
Self-disclosure, the act of revealing oneself to others, is an important social behavior that strengthens interpersonal relationships and increases social support. Although there are many social science studies of self-disclosure, they are based on manual coding of small datasets and questionnaires. We conduct a computational analysis of self-disclosure with a large dataset of naturally-occurring conversations, a semi-supervised machine learning algorithm, and a computational analysis of the effects of self-disclosure on subsequent conversations. We use a longitudinal dataset of 17 million tweets, all of which occurred in conversations that consist of five or more tweets directly replying to the previous tweet, and from dyads with twenty or more conversations each. We develop the self-disclosure topic model (SDTM), a variant of latent Dirichlet allocation (LDA), for automatically classifying the level of self-disclosure of each tweet. We take the results of SDTM and analyze the effects of self-disclosure on subsequent conversations. Our model significantly outperforms several comparable methods on classifying the level of self-disclosure, and the analysis of the longitudinal data using SDTM uncovers a significant and positive correlation between self-disclosure and conversation frequency and length.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238 ], "paper_content_text": [ "Introduction Self-disclosure is an important and pervasive social behavior.", "People disclose personal information about themselves to improve and maintain * This work was done when JinYeong Bak was a visiting student at Microsoft Research, Beijing, China.", "relationships (Jourard, 1971; Joinson and Paine, 2007) .", "A common instance of self-disclosure is the start of a conversation with an exchange of names and additional self-introductions.", "Another example of self-disclosure, shown in Figure 1c , where the information disclosed about a family member's serious illness, is much more personal than the exchange of names.", "In this paper, we seek to understand this important social behavior using a large-scale Twitter conversation data, automatically classifying the level of self-disclosure using machine learning and correlating the patterns with conversational behaviors which can serve as proxies for measuring intimacy between two conversational partners.", "Twitter conversation data, explained in more detail in section 4.1, enable an extremely large scale study of naturally-occurring self-disclosure behavior, compared to traditional social science studies.", "One challenge of such large scale study, though, remains in the lack of labeled groundtruth data of self-disclosure level.", "That is, naturally-occurring Twitter conversations do not come tagged with the level of self-disclosure in each conversation.", "To overcome that challenge, we propose a semi-supervised machine learning approach using probabilistic topic modeling.", "Our self-disclosure topic model (SDTM) assumes that self-disclosure behavior can be modeled using a combination of simple linguistic features (e.g., pronouns) with automatically discovered semantic themes (i.e., topics).", "For instance, an utterance \"I am finally through with this disastrous relationship\" uses a first-person pronoun and contains a topic about personal relationships.", "In comparison with various other models, SDTM shows the highest accuracy, and the resulting conversation frequency and length patterns on self-disclosure are shown different over time.", "Our contributions to the research community include the following: • We present key features and prior knowledge for identifying self-disclosure level, and show relevance of it with experiment results (Sec.", "2).", "• We present a topic model that explicitly includes the 
level of self-disclosure in a conversation using linguistic features and the latent semantic topics (Sec.", "3).", "• We collect a large dataset of Twitter conversations over three years and annotate a small subset with self-disclosure level (Sec.", "4).", "• We compare the classification accuracy of SDTM with other models and show that it performs the best (Sec.", "5).", "• We correlate the self-disclosure patterns and conversation behaviors to show that there is a significant relationship over time (Sec.", "6).", "Self-Disclosure In this section, we look at the social science literature for a definition of the levels of self-disclosure.", "Using that definition, we devise an approach to automatically identify the levels of self-disclosure in a large corpus of OSN conversations.", "We discuss three approaches: first, using first-person pronoun features; second, extracting seed words and phrases from the Twitter conversation corpus; and third, extracting seed words and phrases from an external corpus of anonymously posted secrets; and we demonstrate the efficacy of those approaches with an annotated corpus.", "Self-disclosure (SD) level To analyze self-disclosure, researchers categorize self-disclosure language into three levels: G (general) for no disclosure, M for medium disclosure, and H for high disclosure (Vondracek and Vondracek, 1971; Barak and Gluck-Ofri, 2007).", "G Level of Self-Disclosure An obvious clue of self-disclosure is the use of first-person pronouns.", "For example, phrases such as 'I live' or 'My name is' indicate that the utterance contains personal information.", "In previous research, the simple method of counting first-person pronouns was used to measure the degree of self-disclosure (Joinson, 2001; Barak and Gluck-Ofri, 2007).", "Consequently, the absence of a first-person pronoun signals that the utterance belongs in the G level of self-disclosure.", "We verify this pattern with a dataset of Tweets annotated with G, M, and H levels.", "We divide the annotated Tweets into two classes, G and M/H.", "Then we compute the mutual information of each unigram, bigram, or trigram feature to see which features are most discriminative.", "As Table 1 shows, 18 out of the 30 most discriminative features contain first-person pronouns.", "M Level of Self-Disclosure Utterances at the M level include two types: 1) information related to past events and future plans, and 2) general information about self (Barak and Gluck-Ofri, 2007).", "For the former, we add as seed trigrams 'I have been' and 'I will'.", "For the latter, we use seven types of information generally accepted to be personally identifiable information (McCallister, 2010), as listed in the left column of Table 2.", "To find the appropriate trigrams for those, we take Twitter conversation data (described in Section 4.1) and look for trigrams that begin with 'I' and 'my' and occur more than 200 times.", "We then check each one to see whether it is related to any of the seven types listed in the table.", "As a result, we find 57 seed trigrams for the M level.", "H Level of Self-Disclosure Utterances at the H level express secretive wishes or sensitive information that exposes self or someone close (Barak and Gluck-Ofri, 2007).", "These are generally kept as secrets.", "With this intuition, we crawled 26,523 posts from the Six Billion Secrets site, where users post secrets anonymously.", "We call this external dataset SECRET.", "Unlike the G and M levels, evidence of H level self-disclosure tends to be topical, such as physical appearance, mental and physical illnesses, and family problems, so we
take an approach of fitting a topic model driven by seed words.", "A similar approach has been successful in sentiment classification (Jo and Oh, 2011; Kim et al., 2013).", "A critical component of this approach is the set of seed words with which to drive the discovery of topics that are most indicative of H level self-disclosure.", "To extract the seed words that express secretive personal information, we compute mutual information (Manning et al., 2008) between SECRET and 24,610 randomly selected tweets.", "We select 1,000 words with high mutual information and filter out stop words.", "Table 3 shows some of these words.", "To extract seed trigrams of secretive wishes, we again look for trigrams that start with 'I' or 'my' and occur more than 200 times, and select trigrams of wishful thinking, such as 'I want to' and 'I wish I'.", "In total, there are 88 seed words and 8 seed trigrams for H. Since SECRET is quite different from Twitter, we must show that posts in SECRET are semantically similar to the H level Tweets.", "Rather than directly comparing SECRET posts and Tweets, we use the same method of extracting discriminative word features from the annotated H level Tweets (see Section 4.2).", "Table 3 shows the seed words extracted from SECRET as well as from the annotated Tweets.", "Because the annotated dataset consists of only 200 conversations, the coverage of its topics seems narrower than that of the much larger SECRET, but both datasets show similarities in the topics.", "This, combined with the results of the model with the two sets of seed words (see Section 5 for the results), shows that SECRET is an effective and simple-to-obtain substitute for an annotated corpus of H level self-disclosure.", "Model This section describes our model, the self-disclosure topic model (SDTM), for classifying self-disclosure level and discovering topics for each self-disclosure level.", "Table 4 (notation): y_ct, r_ct — SD level of tweet ct; π_c — SD level proportion of conversation c; θ^G_c, θ^M_c, θ^H_c — topic proportions of {G, M, H} in conversation c; φ^G, φ^M, φ^H — word distributions of {G, M, H}; α, γ — Dirichlet priors for θ, π; β^G, β^M, β^H — Dirichlet priors for φ^G, φ^M, φ^H; n_cl — number of tweets in conversation c assigned SD level l.", "In section 2, we discussed different approaches to identifying each level of self-disclosure, based on social science literature, annotated and unannotated Tweets, and an external corpus of secret posts.", "In this section, we describe our self-disclosure topic model, based on the widely used latent Dirichlet allocation (Blei et al., 2003), which incorporates those approaches.", "Figure 2 illustrates the graphical model of SDTM and how those approaches are embodied in it.", "Figure 3 (generative process of SDTM): 1. For each level l ∈ {G, M, H}: for each topic k ∈ {1, ..., K_l}, draw φ^l_k ~ Dir(β^l). 2. For each conversation c ∈ {1, ..., C}: (a) draw θ^G_c ~ Dir(α); (b) draw θ^M_c ~ Dir(α); (c) draw θ^H_c ~ Dir(α); (d) draw π_c ~ Dir(γ); (e) for each message t ∈ {1, ..., T}: (i) observe first-person pronoun features x_ct; (ii) draw ω_ct ~ MaxEnt(x_ct, λ); (iii) draw y_ct ~ Bernoulli(ω_ct); (iv) if y_ct = 0 (the G level), draw z_ct ~ Mult(θ^G_c) and, for each word n ∈ {1, ..., N}, draw w_ctn ~ Mult(φ^G_{z_ct}); otherwise (the M or H level), draw r_ct ~ Mult(π_c), draw z_ct ~ Mult(θ^{r_ct}_c), and,
for each word n ∈ {1, ..., N}, draw w_ctn ~ Mult(φ^{r_ct}_{z_ct}).", "The first approach, based on the first-person pronouns, is implemented by the observed variable x_ct and the parameters λ from a maximum entropy classifier for the G vs. M/H level.", "The approach of seed words and phrases for the M and H levels is implemented by the three separate word-topic probability vectors for the three levels of SD: φ^l, which has a Bayesian informative prior β^l, where l ∈ {G, M, H} ranges over the three levels of self-disclosure.", "Table 4 lists the notation used in the model and the generative process, and Figure 3 describes the generative process.", "Classifying G vs M/H levels Classifying the SD level for each tweet is done in two parts, and the first part classifies the G vs. M/H levels with first-person pronouns (I, my, me).", "In the graphical model, y is the latent variable that represents this classification, and ω is the distribution over y. x is the observation of the first-person pronouns in the tweets, and λ are the parameters learned from the maximum entropy classifier.", "With the annotated Twitter conversation dataset (described in Section 4.2), we experimented with several classifiers (decision tree, Naive Bayes) and chose the maximum entropy classifier because it performed the best, similar to other joint topic models (Zhao et al., 2010; Mukherjee et al., 2013).", "Classifying M vs H levels The second part of the classification, between the M and the H level, is driven by informative priors with seed words and seed trigrams.", "In the graphical model, r is the latent variable that represents this classification, and π is the distribution over r. γ is a non-informative prior for π, and β^l is an informative prior for each SD level based on seed words.", "For example, we assign a high value for the seed word 'acne' in β^H, and a low value for 'My name is'.", "This approach is the same as in joint models of topic and sentiment (Jo and Oh, 2011; Kim et al., 2013).", "Inference For posterior inference of SDTM, we use collapsed Gibbs sampling, which integrates out the latent random variables ω, π, θ, and φ.", "Then we only need to sample y, r, and z for each tweet.", "We compute the full conditional distribution $p(y_{ct}=j', r_{ct}=l', z_{ct}=k' \mid \mathbf{y}_{-ct}, \mathbf{r}_{-ct}, \mathbf{z}_{-ct}, \mathbf{w}, \mathbf{x})$ for tweet $ct$ as follows: $p(y_{ct}=0, z_{ct}=k' \mid \mathbf{y}_{-ct}, \mathbf{r}_{-ct}, \mathbf{z}_{-ct}, \mathbf{w}, \mathbf{x}) \propto \frac{\exp(\lambda_0 \cdot x_{ct})}{\sum_{j=0}^{1}\exp(\lambda_j \cdot x_{ct})}\, g(c,t,G,k')$ and $p(y_{ct}=1, r_{ct}=l', z_{ct}=k' \mid \mathbf{y}_{-ct}, \mathbf{r}_{-ct}, \mathbf{z}_{-ct}, \mathbf{w}, \mathbf{x}) \propto \frac{\exp(\lambda_1 \cdot x_{ct})}{\sum_{j=0}^{1}\exp(\lambda_j \cdot x_{ct})}\,\bigl(\gamma_{l'} + n_{cl'}^{(-ct)}\bigr)\, g(c,t,l',k')$, where $\mathbf{y}_{-ct}, \mathbf{r}_{-ct}, \mathbf{z}_{-ct}$ are $\mathbf{y}, \mathbf{r}, \mathbf{z}$ without tweet $ct$, $m_{ctk'(\cdot)}$ is the sum over words $v$ of $m_{ctk'v}$, and $g(c,t,l',k') = \frac{\alpha_{k'} + n_{ck'}^{l'(-ct)}}{\sum_{k=1}^{K}\bigl(\alpha_{k} + n_{ck}^{l'}\bigr)} \cdot \frac{\Gamma\bigl(\sum_{v=1}^{V}\beta_{v}^{l'} + n_{k'v}^{l'(-ct)}\bigr)}{\Gamma\bigl(\sum_{v=1}^{V}\beta_{v}^{l'} + n_{k'v}^{l'(-ct)} + m_{ctk'(\cdot)}\bigr)} \cdot \prod_{v=1}^{V}\frac{\Gamma\bigl(\beta_{v}^{l'} + n_{k'v}^{l'(-ct)} + m_{ctk'v}\bigr)}{\Gamma\bigl(\beta_{v}^{l'} + n_{k'v}^{l'(-ct)}\bigr)}$.", "Data Collection and Annotation To test our self-disclosure topic model, we use a large dataset of conversations consisting of Tweets over three years, such that we can analyze the relationship between self-disclosure behavior and conversation frequency and length over time.", "We chose to crawl Twitter because it offers a practical and large source of conversations (Ritter et al., 2010).", "Others have also analyzed Twitter conversations for natural language and social media research (boyd et al., 2010; Danescu-Niculescu-Mizil et al., 2011), but we collect conversations from the same set of dyads over several months for a unique longitudinal dataset.",
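As a concrete (and heavily simplified) illustration of the sampler above, here is a sketch of the per-tweet Gibbs step in Python. The count arrays, their bookkeeping, and all function names are our assumptions, not code from the paper.

```python
# A minimal sketch of one collapsed-Gibbs step of SDTM for a single tweet,
# following the conditional above. The count arrays (n_ck per level, n_kv per
# level, tweet word counts m_ct) and the MaxEnt weights `lam` are assumed to be
# maintained elsewhere, with the current tweet's counts already subtracted.
import numpy as np
from scipy.special import gammaln

def level_topic_log_weights(alpha, beta_l, n_ck_l, n_kv_l, m_ct):
    """log g(c,t,l,k) for all topics k of one level l: the document-topic term
    plus the log-ratio of Dirichlet-multinomial normalizers for the tweet's words."""
    doc = np.log(alpha + n_ck_l) - np.log(np.sum(alpha + n_ck_l))
    word = (gammaln((beta_l + n_kv_l).sum(axis=1))
            - gammaln((beta_l + n_kv_l + m_ct).sum(axis=1))
            + (gammaln(beta_l + n_kv_l + m_ct) - gammaln(beta_l + n_kv_l)).sum(axis=1))
    return doc + word  # one log-weight per topic k

def sample_assignment(x_ct, lam, gamma, n_cl, log_w, rng):
    """Jointly sample (level, topic); log_w maps 'G'/'M'/'H' to per-topic log-weights."""
    logits = lam @ x_ct  # unnormalized MaxEnt scores for y = 0 (G) and y = 1 (M/H)
    outcomes, log_p = [], []
    for k, w in enumerate(log_w["G"]):
        outcomes.append(("G", k)); log_p.append(logits[0] + w)
    for li, level in enumerate(("M", "H")):
        for k, w in enumerate(log_w[level]):
            outcomes.append((level, k))
            log_p.append(logits[1] + np.log(gamma + n_cl[li]) + w)
    p = np.exp(np.array(log_p) - max(log_p))
    return outcomes[rng.choice(len(p), p=p / p.sum())]
```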
Table 5: Dataset of Twitter conversations.", "We chose conversations consisting of five or more tweets each.", "We chose dyads with twenty or more conversations.", "We also make sure that each conversation is at least five tweets, and that each dyad has at least twenty conversations.", "Collecting Twitter conversations We define a Twitter conversation as a chain of tweets where two users are consecutively replying to each other's tweets using the Twitter reply button.", "We initialize the set of users by randomly sampling thirteen users who reply to other users in English from the Twitter public streams.", "Then we crawl each user's public tweets, and look at users who are mentioned in those tweets.", "This is a breadth-first search in the network with users as nodes and conversations as edges.", "We run this search to a depth of four, and filter out users who tweet in a non-English language.", "We use an open-source tool for detecting English tweets.", "To protect users' privacy, we replace Twitter user ids, usernames, and URLs in tweets with random strings.", "This dataset consists of 101,686 users, 61,451 dyads, 1,956,993 conversations, and 17,178,638 tweets, posted between August 2007 and July 2013.", "Table 5 summarizes the dataset.", "Annotating self-disclosure level To measure the accuracy of our model, we randomly sample 301 conversations, each with ten or fewer tweets, and ask three judges, graduate students or researchers fluent in English, to annotate each tweet with the level of self-disclosure.", "Judges first read and discussed the definitions and examples of self-disclosure level shown in (Barak and Gluck-Ofri, 2007), then they worked separately on a Web-based platform.", "As a result of annotation, there are 122 G level conversations, 147 M level, and 32 H level conversations, and inter-rater agreement using Fleiss' kappa (Fleiss, 1971) is 0.68, which indicates substantial agreement (Landis and Koch, 1977).", "Classification of Self-Disclosure Level This section describes experiments and results of SDTM as well as several other methods for classification of self-disclosure level.", "We start with the annotated dataset from section 4.2, in which each tweet is annotated with an SD level.", "We then aggregate all of the tweets of a conversation and compute the proportions of tweets in each SD level.", "When the proportion of tweets at the M or H level is equal to or greater than 0.2, we take the level with the larger proportion and assign that level to the conversation.", "When the proportions of tweets at the M and H levels are both less than 0.2, we assign G as the SD level.", "The reason for setting 0.2 as the threshold is that a conversation containing tweets with H or M level of self-disclosure usually starts with a greeting or a general comment, and contains one or more questions or comments before or after the self-disclosure tweet.", "We compare SDTM with the following methods for classifying conversations by SD level: • LDA (Blei et al., 2003): A Bayesian topic model.", "Each conversation is treated as a document.", "Used in previous work (Bak et al., 2012).", "• MedLDA (Zhu et al., 2012): A supervised topic model for document classification.", "Each conversation is treated as a document, and the response variable can be mapped to an SD level.", "• LIWC
(Tausczik and Pennebaker, 2010): Word counts of particular categories.", "Used in previous work (Houghton and Joinson, 2012).", "• Bag of Words + Bigrams + Trigrams (BOW+): Bag-of-words, bigram, and trigram features.", "We exclude features that appear only once or twice.", "• Seed words and trigrams (SEED): Occurrences of seed words/trigrams from SECRET, which are described in section 3.3.", "• SDTM with seed words from annotated Tweets (SDTM−): To compare with SDTM below, which uses seed words from SECRET, this uses seed words from the annotated data described in section 2.4.", "• ASUM (Jo and Oh, 2011): A joint model of sentiments and topics.", "We map each SD level to one sentiment and use the same seed words/trigrams from SECRET as in SDTM below.", "Used in previous work (Bak et al., 2012).", "• First-person pronouns (FirstP): Occurrence of first-person pronouns, which are described in section 3.2.", "To identify first-person pronouns, we tagged parts of speech in each tweet with the Twitter POS tagger (Owoputi et al., 2013).", "• First-person pronouns + Seed words/trigrams (FP+SE1): First-person pronouns and seed words/trigrams from SECRET.", "• Two-stage classifier with First-person pronouns + Seed words/trigrams (FP+SE2): A two-stage classifier with first-person pronouns and seed words/trigrams from SECRET.", "In the first stage, the classifier identifies G with first-person pronouns.", "Then in the second stage, the classifier uses seed words and trigrams to identify the M and H levels.", "• SDTM: Our model with first-person pronouns and seed words/trigrams from SECRET.", "Table 6: SD level classification accuracies and F-measures (Method, Acc, G F1, M F1, H F1) using annotated data.", "Acc is accuracy, and G F1 is the F-measure for classifying the G level.", "Avg F1 is the macro-averaged value of G F1, M F1, and H F1.", "SDTM outperforms all other methods compared.", "The difference between SDTM and FirstP is statistically significant (p < 0.05 for accuracy, < 0.0001 for Avg F1).", "SEED, LIWC, LDA, and FirstP cannot be used directly for classification, so we use a maximum entropy model with the outputs of each of those models as features.", "BOW+ uses an SVM with a radial basis kernel, which performs better than all other settings tried, including maximum entropy.", "We split the data randomly into 80/20 for train/test.", "We run MedLDA, ASUM, and SDTM 20 times each and compute the average accuracies and F-measures for each level.", "We run LDA and MedLDA with various numbers of topics from 80 to 140, and 120 topics gives the best outputs.", "So we set 120 topics for LDA, MedLDA, and ASUM, and 60, 40, and 40 topics for SDTM's K_G, K_M, and K_H respectively, the best-performing setting among those tried from 40/40/40 to 60/60/60.", "We assume that a conversation has few topics and self-disclosure levels, so we set α = γ = 0.1 (Tang et al., 2014).", "To incorporate the seed words and trigrams into ASUM and SDTM, we initialize β_G, β_M, and β_H differently.", "We assign a high value of 2.0 for each seed word and trigram of that level, a low value of 10^-6 for each word that is a seed word for another level, and a default value of 0.01 for all other words.", "This approach is the same as in previous papers (Jo and Oh, 2011; Kim et al., 2013).", "As Table 6 shows, SDTM performs better than the other methods in accuracy as well as F-measure.", "LDA and MedLDA generally show the lowest performance, which is not surprising given these models are quite general and not tuned specifically for this type of semi-supervised
classification task.", "BOW+, which uses simple word features, also does not perform well, showing an especially low F-measure for the H level.", "LIWC and SEED perform better than LDA, but these have quite low F-measures for the G and H levels.", "ASUM shows better performance for classifying the H level than the others, confirming the effectiveness of a topic modeling approach to this difficult task, but not as well as SDTM.", "FirstP shows a good F-measure for the G level, but its H level F-measure is quite low, even lower than SEED's.", "Combining first-person pronouns with seed words and trigrams (FP+SE1) performs better than either feature set alone, and the two-stage classifier (FP+SE2), which takes an approach similar to SDTM's, shows better results still.", "Finally, SDTM classifies the G and M levels with accuracy similar to FirstP, FP+SE1, and FP+SE2, but it significantly improves accuracy for the H level compared to all other methods.", "Relations of Self-Disclosure and Conversation Behaviors In this section, we investigate whether there is a relationship between self-disclosure and conversation behaviors over time.", "Self-disclosure is one way to maintain and improve relationships (Jourard, 1971; Joinson and Paine, 2007).", "Thus, how two people's intimacy changes over time is related to the self-disclosure in their conversations.", "However, it is hard to identify intimacy between users in a large-scale online social network.", "So we choose conversation behaviors, such as conversation frequency and length, which can be treated as proxies for measuring intimacy between two people (Emmers-Sommer, 2004; Bak et al., 2012).", "With SDTM, we can automatically classify the SD level of a large number of conversations, so we investigate whether there is a similar relationship between self-disclosure in conversations and subsequent conversation behaviors with the same partner on Twitter.", "For comparing conversation behaviors over time, we divided the conversations into two sets for each dyad.", "For the initial period, we include conversations from the dyad's first conversation to 20 days later.", "For the subsequent period, we include conversations during the subsequent 10 days.", "We compute the proportion of conversations at each SD level for each dyad in the initial and subsequent periods.", "More specifically, we ask the following three questions: 1.", "If a dyad shows high conversation frequency in a particular time period, would they display higher SD in their subsequent conversations?", "2.", "If a dyad displays a high SD level in their conversations in a particular time period, would their subsequent conversations be longer?", "3.", "If a dyad displays a high overall SD level, would their conversations increase in length over time more than those of dyads with a lower overall SD level?", "Experiment Setup We first run SDTM on all of our Twitter conversation data with 150, 120, and 120 topics for SDTM's K_G, K_M, and K_H respectively.", "The hyper-parameters are the same as in section 5.", "To handle the large dataset, we employ a distributed algorithm (Newman et al., 2009) and run with 28 threads.", "Table 7 shows some of the topics that were prominent in each SD level by KL-divergence.", "As expected, the G level includes general topics such as food, celebrity, soccer, and IT devices; the M level includes personal communication and birthdays; and finally, the H level includes sickness and profanity.", "We define a new measurement, the SD level score of a dyad in a period, which is a weighted sum over its conversations with SD levels mapped to 1, 2, and 3,
for the levels G, M, and H, respectively.", "Figure 5: Relationship between initial conversation frequency and subsequent SD level.", "The solid line is the linear regression line, and the coefficient is 0.0020 with p < 0.0001, which shows a significant positive relationship.", "Does high frequency of conversation lead to more self-disclosure?", "We investigate whether the initial conversation frequency is correlated with the SD level in the subsequent period.", "We run a linear regression with the initial conversation frequency as the independent variable and the SD level in the subsequent period as the dependent variable.", "The regression coefficient is 0.0020 with a low p-value (p < 0.0001).", "Figure 5 shows the scatter plot.", "We can see that the slope of the regression line is positive.", "Does high self-disclosure lead to longer conversations?", "Now we investigate the effect of the self-disclosure level on conversation length.", "We run a linear regression with the initial SD level score as the independent variable and the rate of change in conversation length between the initial and subsequent periods as the dependent variable.", "Conversation length is measured by the number of tweets in a conversation.", "The result of the regression is that the independent variable's coefficient is 0.048 with a low p-value (p < 0.0001).", "Figure 6 shows the scatter plot with the regression line, and we can see that the slope of the regression line is positive.", "Table 7: Example topics prominent at each SD level, shown as top words per topic (food, politics, and soccer at the G level; email, Twitter contact, and birthdays at the M level; sickness and profanity at the H level).", "Now we investigate how conversation length changes over time for three groups, low, medium, and high, by overall SD level.", "Then we investigate changes in conversation length over time.", "Figure 7 shows the results of this investigation.", "First, conversations are generally lengthier when the SD level is high.", "Figure 7: We divide dyads into three groups by SD level score: low, medium, and high.", "Conversation length noticeably increases over time in the medium and high groups, but only slightly in the low group.", "This phenomenon is also observed in Figure 6, but here we can see it as a long-term persistent pattern.", "Second, conversation length increases consistently and significantly for the high and medium groups, but for the low SD group, there is no significant increase in conversation length over time.", "Related Work Prior work on quantitatively analyzing self-disclosure has relied on user surveys (Ledbetter et al., 2011; Trepte and Reinecke, 2013) or human annotation (Barak and Gluck-Ofri, 2007; Courtney Walton and Rice, 2013).", "These methods require much time and effort, so they are not suitable for large-scale studies.", "In prior work closest to ours, Bak et al. (2012) showed that a topic model can be used to identify self-disclosure, but that work applies a two-step process in which a basic topic model is first applied to find the topics, and then the topics are post-processed for binary classification of self-disclosure.", "We improve upon this work by applying a single unified model of topics and self-disclosure for high accuracy in classifying
the three levels of self-disclosure.", "Subjectivity, an aspect of expressing opinions (Pang and Lee, 2008; Wiebe et al., 2004), is related to self-disclosure, but the two are different dimensions of linguistic behavior.", "There are indeed many high self-disclosure tweets that are subjective, but there are also counterexamples in the annotated dataset.", "The tweet \"England manager is Roy Hodgson.\" is low self-disclosure and low subjectivity, \"I have barely any hair left.\" is high self-disclosure but low subjectivity, and \"Senator stop lying!\" is low self-disclosure but high subjectivity.", "Conclusion and Future Work In this paper, we have presented the self-disclosure topic model (SDTM) for discovering topics and classifying SD levels from Twitter conversation data.", "We devised a set of effective seed words and trigrams, mined from a dataset of secrets.", "We also annotated Twitter conversations to make a ground-truth dataset for SD level.", "With the annotated data, we showed that SDTM outperforms previous methods in classification accuracy and F-measure.", "We publicly release the source code of SDTM and the dataset, including the annotated Twitter conversations and SECRET (http://uilab.kaist.ac.kr/research/EMNLP2014).", "We also analyzed the relationship between SD level and conversation behaviors over time.", "We found that there is a positive correlation between initial SD level and subsequent conversation length.", "Also, dyads show a higher level of SD if they initially display high conversation frequency.", "Finally, dyads with overall medium and high SD levels have longer conversations over time.", "These results support previous findings in social psychology research with more robust evidence from a large-scale dataset, and show the effectiveness of computationally analyzing SD behavior.", "There are several future directions for this research.", "First, we can improve our modeling for higher accuracy and better interpretability.", "For instance, SDTM only considers first-person pronouns and topics.", "Naturally, there are other linguistic patterns that can be identified by humans but not captured by pronouns and topics.", "Second, the number of topics for each level is a free parameter, so we can explore nonparametric topic models (Teh et al., 2006), which infer the number of topics from the data.", "Third, we can look at the relationship between self-disclosure behavior and general online social network usage beyond conversations.", "We will explore these directions in our future work." ] }
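The inter-rater agreement reported above (Fleiss' kappa of 0.68) can be recomputed from an item-by-category count matrix; statsmodels provides this directly. The tiny matrix below is a toy stand-in for the 301 annotated conversations.

```python
import numpy as np
from statsmodels.stats.inter_rater import fleiss_kappa

# Rows are annotated tweets, columns are the categories (G, M, H); each entry
# counts how many of the three judges chose that category for that tweet.
ratings = np.array([
    [3, 0, 0],
    [2, 1, 0],
    [0, 3, 0],
    [0, 1, 2],
])
print(fleiss_kappa(ratings))  # the paper reports 0.68, substantial agreement
```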
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "2.4", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "5", "6", "6.1", "6.3", "7", "8" ], "paper_header_content": [ "Introduction", "Self-Disclosure", "Self-disclosure (SD) level", "G Level of Self-Disclosure", "M Level of Self-Disclosure", "H Level of Self-Disclosure", "Model", "Classifying G vs M/H levels", "Classifying M vs H levels", "Inference", "Data Collection and Annotation", "Collecting Twitter conversations", "Annotating self-disclosure level", "Classification of Self-Disclosure Level", "Relations of Self-Disclosure and Conversation Behaviors", "Experiment Setup", "Does high self-disclosure lead to longer conversations?", "Related Work", "Conclusion and Future Work" ] }
GEM-SciDuet-train-75#paper-1188#slide-18
Self Disclosure Topic Model SDTM
Graphical model of Self-Disclosure Topic Model Classifying G and M/H level Classifying M and H level Seed words for each level Rough figure of how to infer self-disclosure in SDTM
Graphical model of Self-Disclosure Topic Model Classifying G and M/H level Classifying M and H level Seed words for each level Rough figure of how to infer self-disclosure in SDTM
[]
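The first stage sketched in the slide above, separating G from M/H with first-person pronoun features, amounts to a maximum-entropy (logistic regression) classifier; the feature extractor and the toy training data below are our assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

PRONOUNS = ("i", "my", "me")

def pronoun_features(tweet):
    tokens = tweet.lower().split()
    return [tokens.count(p) for p in PRONOUNS]

X = np.array([pronoun_features(t) for t in (
    "england manager is roy hodgson", "i live in texas", "my name is jin")])
y = np.array([0, 1, 1])  # 0 = G level, 1 = M/H
clf = LogisticRegression().fit(X, y)
print(clf.predict([pronoun_features("i got my results today")]))  # -> [1]
```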
GEM-SciDuet-train-75#paper-1188#slide-19
1188
Self-disclosure topic model for classifying and analyzing Twitter conversations
Self-disclosure, the act of revealing oneself to others, is an important social behavior that strengthens interpersonal relationships and increases social support. Although there are many social science studies of self-disclosure, they are based on manual coding of small datasets and questionnaires. We conduct a computational analysis of self-disclosure with a large dataset of naturally-occurring conversations, a semi-supervised machine learning algorithm, and a computational analysis of the effects of self-disclosure on subsequent conversations. We use a longitudinal dataset of 17 million tweets, all of which occurred in conversations that consist of five or more tweets directly replying to the previous tweet, and from dyads with twenty or more conversations each. We develop the self-disclosure topic model (SDTM), a variant of latent Dirichlet allocation (LDA), for automatically classifying the level of self-disclosure of each tweet. We take the results of SDTM and analyze the effects of self-disclosure on subsequent conversations. Our model significantly outperforms several comparable methods on classifying the level of self-disclosure, and the analysis of the longitudinal data using SDTM uncovers a significant and positive correlation between self-disclosure and conversation frequency and length.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238 ], "paper_content_text": [ "Introduction Self-disclosure is an important and pervasive social behavior.", "People disclose personal information about themselves to improve and maintain * This work was done when JinYeong Bak was a visiting student at Microsoft Research, Beijing, China.", "relationships (Jourard, 1971; Joinson and Paine, 2007) .", "A common instance of self-disclosure is the start of a conversation with an exchange of names and additional self-introductions.", "Another example of self-disclosure, shown in Figure 1c , where the information disclosed about a family member's serious illness, is much more personal than the exchange of names.", "In this paper, we seek to understand this important social behavior using a large-scale Twitter conversation data, automatically classifying the level of self-disclosure using machine learning and correlating the patterns with conversational behaviors which can serve as proxies for measuring intimacy between two conversational partners.", "Twitter conversation data, explained in more detail in section 4.1, enable an extremely large scale study of naturally-occurring self-disclosure behavior, compared to traditional social science studies.", "One challenge of such large scale study, though, remains in the lack of labeled groundtruth data of self-disclosure level.", "That is, naturally-occurring Twitter conversations do not come tagged with the level of self-disclosure in each conversation.", "To overcome that challenge, we propose a semi-supervised machine learning approach using probabilistic topic modeling.", "Our self-disclosure topic model (SDTM) assumes that self-disclosure behavior can be modeled using a combination of simple linguistic features (e.g., pronouns) with automatically discovered semantic themes (i.e., topics).", "For instance, an utterance \"I am finally through with this disastrous relationship\" uses a first-person pronoun and contains a topic about personal relationships.", "In comparison with various other models, SDTM shows the highest accuracy, and the resulting conversation frequency and length patterns on self-disclosure are shown different over time.", "Our contributions to the research community include the following: • We present key features and prior knowledge for identifying self-disclosure level, and show relevance of it with experiment results (Sec.", "2).", "• We present a topic model that explicitly includes the 
level of self-disclosure in a conversation using linguistic features and the latent semantic topics (Sec.", "3).", "• We collect a large dataset of Twitter conversations over three years and annotate a small subset with self-disclosure level (Sec.", "4).", "• We compare the classification accuracy of SDTM with other models and show that it performs the best (Sec.", "5).", "• We correlate the self-disclosure patterns and conversation behaviors to show that there is a significant relationship over time (Sec.", "6).", "Self-Disclosure In this section, we look at the social science literature for a definition of the levels of self-disclosure.", "Using that definition, we devise an approach to automatically identify the levels of self-disclosure in a large corpus of OSN conversations.", "We discuss three approaches: first, using first-person pronoun features; second, extracting seed words and phrases from the Twitter conversation corpus; and third, extracting seed words and phrases from an external corpus of anonymously posted secrets; and we demonstrate the efficacy of those approaches with an annotated corpus.", "Self-disclosure (SD) level To analyze self-disclosure, researchers categorize self-disclosure language into three levels: G (general) for no disclosure, M for medium disclosure, and H for high disclosure (Vondracek and Vondracek, 1971; Barak and Gluck-Ofri, 2007).", "G Level of Self-Disclosure An obvious clue of self-disclosure is the use of first-person pronouns.", "For example, phrases such as 'I live' or 'My name is' indicate that the utterance contains personal information.", "In previous research, the simple method of counting first-person pronouns was used to measure the degree of self-disclosure (Joinson, 2001; Barak and Gluck-Ofri, 2007).", "Consequently, the absence of a first-person pronoun signals that the utterance belongs in the G level of self-disclosure.", "We verify this pattern with a dataset of Tweets annotated with G, M, and H levels.", "We divide the annotated Tweets into two classes, G and M/H.", "Then we compute the mutual information of each unigram, bigram, or trigram feature to see which features are most discriminative.", "As Table 1 shows, 18 out of the 30 most discriminative features contain first-person pronouns.", "M Level of Self-Disclosure Utterances at the M level include two types: 1) information related to past events and future plans, and 2) general information about self (Barak and Gluck-Ofri, 2007).", "For the former, we add as seed trigrams 'I have been' and 'I will'.", "For the latter, we use seven types of information generally accepted to be personally identifiable information (McCallister, 2010), as listed in the left column of Table 2.", "To find the appropriate trigrams for those, we take Twitter conversation data (described in Section 4.1) and look for trigrams that begin with 'I' and 'my' and occur more than 200 times.", "We then check each one to see whether it is related to any of the seven types listed in the table.", "As a result, we find 57 seed trigrams for the M level.", "H Level of Self-Disclosure Utterances at the H level express secretive wishes or sensitive information that exposes self or someone close (Barak and Gluck-Ofri, 2007).", "These are generally kept as secrets.", "With this intuition, we crawled 26,523 posts from the Six Billion Secrets site, where users post secrets anonymously.", "We call this external dataset SECRET.", "Unlike the G and M levels, evidence of H level self-disclosure tends to be topical, such as physical appearance, mental and physical illnesses, and family problems, so we
take an approach of fitting a topic model driven by seed words.", "A similar approach has been successful in sentiment classification (Jo and Oh, 2011; Kim et al., 2013).", "A critical component of this approach is the set of seed words with which to drive the discovery of topics that are most indicative of H level self-disclosure.", "To extract the seed words that express secretive personal information, we compute mutual information (Manning et al., 2008) between SECRET and 24,610 randomly selected tweets.", "We select 1,000 words with high mutual information and filter out stop words.", "Table 3 shows some of these words.", "To extract seed trigrams of secretive wishes, we again look for trigrams that start with 'I' or 'my' and occur more than 200 times, and select trigrams of wishful thinking, such as 'I want to' and 'I wish I'.", "In total, there are 88 seed words and 8 seed trigrams for H.", "Since SECRET is quite different from Twitter, we must show that posts in SECRET are semantically similar to the H level Tweets.", "Rather than directly comparing SECRET posts and Tweets, we use the same method of extracting discriminative word features from the annotated H level Tweets (see Section 4.2).", "Table 3 shows the seed words extracted from SECRET as well as from the annotated Tweets.", "Because the annotated dataset consists of only 200 conversations, the coverage of the topics seems narrower than in the much larger SECRET, but both datasets show similarities in the topics.", "This, combined with the results of the model with the two sets of seed words (see Section 5 for the results), shows that SECRET is an effective and simple-to-obtain substitute for an annotated corpus of the H level of self-disclosure.", "This section describes our model, the self-disclosure topic model (SDTM), for classifying self-disclosure level and discovering topics for each self-disclosure level.", "Table 4 (notation): $y_{ct}$ is the SD level of tweet $ct$; $\pi_c$ is the SD level proportion of conversation $c$; $\theta^G_c, \theta^M_c, \theta^H_c$ are the topic proportions of {G, M, H} in conversation $c$; $\phi^G, \phi^M, \phi^H$ are the word distributions of {G, M, H}; $\alpha, \gamma$ are Dirichlet priors for $\theta, \pi$; $\beta^G, \beta^M, \beta^H$ are Dirichlet priors for $\phi^G, \phi^M, \phi^H$; $n_{cl}$ is the number of tweets in conversation $c$ assigned SD level $l$.", "Model In section 2, we discussed different approaches to identifying each level of self-disclosure, based on the social science literature, annotated and unannotated Tweets, and an external corpus of secret posts.", "In this section, we describe our self-disclosure topic model, based on the widely used latent Dirichlet allocation (Blei et al., 2003), which incorporates those approaches.", "Figure 2 illustrates the graphical model of SDTM and how those approaches are embodied in it.", "Figure 3 (generative process of SDTM): 1. For each level $l \in \{G, M, H\}$: for each topic $k \in \{1, \dots, K^l\}$: draw $\phi^l_k \sim \mathrm{Dir}(\beta^l)$. 2. For each conversation $c \in \{1, \dots, C\}$: (a) draw $\theta^G_c \sim \mathrm{Dir}(\alpha)$; (b) draw $\theta^M_c \sim \mathrm{Dir}(\alpha)$; (c) draw $\theta^H_c \sim \mathrm{Dir}(\alpha)$; (d) draw $\pi_c \sim \mathrm{Dir}(\gamma)$; (e) for each message $t \in \{1, \dots, T\}$: i. observe first-person pronoun features $x_{ct}$; ii. draw $\omega_{ct} \sim \mathrm{MaxEnt}(x_{ct}, \lambda)$; iii. draw $y_{ct} \sim \mathrm{Bernoulli}(\omega_{ct})$; iv. if $y_{ct} = 0$ (G level): A. draw $z_{ct} \sim \mathrm{Mult}(\theta^G_c)$; B. for each word $n \in \{1, \dots, N\}$: draw $w_{ctn} \sim \mathrm{Mult}(\phi^G_{z_{ct}})$; otherwise (M or H level): A. draw $r_{ct} \sim \mathrm{Mult}(\pi_c)$; B. draw $z_{ct} \sim \mathrm{Mult}(\theta^{r_{ct}}_c)$; C. for each word $n \in \{1, \dots, N\}$: draw $w_{ctn} \sim \mathrm{Mult}(\phi^{r_{ct}}_{z_{ct}})$.",
"The first approach, based on the first-person pronouns, is implemented by the observed variable $x_{ct}$ and the parameters $\lambda$ from a maximum entropy classifier for G vs. M/H level.", "The approach of seed words and phrases for levels M and H is implemented by three separate word-topic probability vectors, one per level of SD: $\phi^l$, which has a Bayesian informative prior $\beta^l$, where $l \in \{G, M, H\}$ ranges over the three levels of self-disclosure.", "Table 4 lists the notations used in the model and the generative process, and Figure 3 describes the generative process.", "Classifying G vs M/H levels Classifying the SD level for each tweet is done in two parts, and the first part classifies G vs. M/H levels with first-person pronouns (I, my, me).", "In the graphical model, $y$ is the latent variable that represents this classification, and $\omega$ is the distribution over $y$; $x$ is the observation of the first-person pronoun in the tweets, and $\lambda$ are the parameters learned from the maximum entropy classifier.", "With the annotated Twitter conversation dataset (described in Section 4.2), we experimented with several classifiers (decision tree, Naive Bayes) and chose the maximum entropy classifier because it performed the best, similar to other joint topic models (Zhao et al., 2010; Mukherjee et al., 2013).", "Classifying M vs H levels The second part of the classification, into the M and H levels, is driven by informative priors with seed words and seed trigrams.", "In the graphical model, $r$ is the latent variable that represents this classification, and $\pi$ is the distribution over $r$; $\gamma$ is a non-informative prior for $\pi$, and $\beta^l$ is an informative prior for each SD level, set from the seed words.", "For example, we assign a high value to the seed word 'acne' in $\beta^H$, and a low value to 'My name is'.", "This approach is the same as in joint models of topic and sentiment (Jo and Oh, 2011; Kim et al., 2013).", "Inference For posterior inference of SDTM, we use collapsed Gibbs sampling, which integrates out the latent random variables $\omega$, $\pi$, $\theta$, and $\phi$.", "Then we only need to compute $y$, $r$, and $z$ for each tweet.", "We compute the full conditional distribution $p(y_{ct} = j', r_{ct} = l', z_{ct} = k' \mid \mathbf{y}_{-ct}, \mathbf{r}_{-ct}, \mathbf{z}_{-ct}, \mathbf{w}, \mathbf{x})$ for tweet $ct$ as follows: $p(y_{ct} = 0, z_{ct} = k' \mid \mathbf{y}_{-ct}, \mathbf{r}_{-ct}, \mathbf{z}_{-ct}, \mathbf{w}, \mathbf{x}) \propto \frac{\exp(\lambda_0 \cdot x_{ct})}{\sum_{j=0}^{1} \exp(\lambda_j \cdot x_{ct})} \, g(c, t, l', k')$ and $p(y_{ct} = 1, r_{ct} = l', z_{ct} = k' \mid \mathbf{y}_{-ct}, \mathbf{r}_{-ct}, \mathbf{z}_{-ct}, \mathbf{w}, \mathbf{x}) \propto \frac{\exp(\lambda_1 \cdot x_{ct})}{\sum_{j=0}^{1} \exp(\lambda_j \cdot x_{ct})} \, \big(\gamma_{l'} + n^{(-ct)}_{cl'}\big) \, g(c, t, l', k')$, where $\mathbf{z}_{-ct}, \mathbf{r}_{-ct}, \mathbf{y}_{-ct}$ are $\mathbf{z}, \mathbf{r}, \mathbf{y}$ without tweet $ct$, $m_{ctk'(\cdot)}$ is the sum over words $v$ of $m_{ctk'v}$, and $g(c, t, l', k') = \frac{\Gamma\big(\sum_{v=1}^{V} \beta^{l'}_v + n^{l'(-ct)}_{k'v}\big)}{\Gamma\big(\sum_{v=1}^{V} \beta^{l'}_v + n^{l'(-ct)}_{k'v} + m_{ctk'(\cdot)}\big)} \cdot \frac{\alpha_{k'} + n^{l'(-ct)}_{ck'}}{\sum_{k=1}^{K} \big(\alpha_k + n^{l'}_{ck}\big)} \cdot \prod_{v=1}^{V} \frac{\Gamma\big(\beta^{l'}_v + n^{l'(-ct)}_{k'v} + m_{ctk'v}\big)}{\Gamma\big(\beta^{l'}_v + n^{l'(-ct)}_{k'v}\big)}$.", "Data Collection and Annotation To test our self-disclosure topic model, we use a large dataset of conversations consisting of Tweets spanning three years, so that we can analyze the relationship between self-disclosure behavior and conversation frequency and length over time.", "We chose to crawl Twitter because it offers a practical and large source of conversations (Ritter et al., 2010).",
"Others have also analyzed Twitter conversations for natural language and social media research (boyd et al., 2010; Danescu-Niculescu-Mizil et al., 2011), but we collect conversations from the same set of dyads over several months for a unique longitudinal dataset.", "Table 5 (dataset of Twitter conversations): Users: 101,686; Dyads: 61,451; Conversations: 1,956,993; Tweets: 17,178,638. We chose conversations consisting of five or more tweets each, and dyads with twenty or more conversations.", "We also make sure that each conversation is at least five tweets long, and that each dyad has at least twenty conversations.", "Collecting Twitter conversations We define a Twitter conversation as a chain of tweets where two users are consecutively replying to each other's tweets using the Twitter reply button.", "We initialize the set of users by randomly sampling thirteen users who reply to other users in English from the Twitter public streams.", "Then we crawl each user's public tweets, and look at users who are mentioned in those tweets.", "It is a breadth-first search in the network with users as nodes and conversations as edges.", "We run this search for dyads to a depth of four, and filter out users who tweet in a non-English language.", "We use an open-source tool for detecting English tweets.", "To protect users' privacy, we replace Twitter user IDs, usernames, and URLs in tweets with random strings.", "This dataset consists of 101,686 users, 61,451 dyads, 1,956,993 conversations, and 17,178,638 tweets, which were posted between August 2007 and July 2013.", "Table 5 summarizes the dataset.", "Annotating self-disclosure level To measure the accuracy of our model, we randomly sample 301 conversations, each with ten or fewer tweets, and ask three judges, all fluent in English and all graduate students or researchers, to annotate each tweet with the level of self-disclosure.", "Judges first read and discussed the definitions and examples of self-disclosure level shown in (Barak and Gluck-Ofri, 2007), then they worked separately on a Web-based platform.", "As a result of annotation, there are 122 G level conversations, 147 M level, and 32 H level conversations, and inter-rater agreement using Fleiss' kappa (Fleiss, 1971) is 0.68, which is a substantial agreement result (Landis and Koch, 1977).", "Classification of Self-Disclosure Level This section describes experiments and results of SDTM as well as several other methods for classification of self-disclosure level.", "We start with the annotated dataset in section 4.2, in which each tweet is annotated with an SD level.", "We then aggregate all of the tweets of a conversation, and we compute the proportions of tweets in each SD level.", "When the proportion of tweets at M or H level is equal to or greater than 0.2, we take the level with the larger proportion and assign that level to the conversation.", "When the proportions of tweets at M or H level are both less than 0.2, we assign G as the SD level.", "The reason for setting 0.2 as the threshold is that a conversation containing tweets with H or M level of self-disclosure usually starts with a greeting or a general comment, and contains one or more questions or comments before or after the self-disclosure tweet.", "We compare SDTM with the following methods for classifying conversations for SD level: • LDA (Blei et al., 2003): A Bayesian topic model.", "Each conversation is treated as a document.", "Used in previous work (Bak et al., 2012).", "• MedLDA (Zhu et al., 2012): A supervised topic model for document classification.", "Each conversation is treated as a document, and the response variable can be mapped to an SD level.", "• LIWC
(Tausczik and Pennebaker, 2010): Word counts of particular categories.", "Used in previous work (Houghton and Joinson, 2012).", "• Bag of Words + Bigrams + Trigrams (BOW+): Bag-of-words, bigram, and trigram features.", "We exclude features that appear only once or twice.", "• Seed words and trigrams (SEED): Occurrences of seed words/trigrams from SECRET, which are described in section 3.3.", "• SDTM with seed words from annotated Tweets (SDTM−): To compare with SDTM below, which uses seed words from SECRET, this uses seed words from the annotated data described in section 2.4.", "• ASUM (Jo and Oh, 2011): A joint model of sentiments and topics.", "We map each SD level to one sentiment and use the same seed words/trigrams from SECRET as in SDTM below.", "Used in previous work (Bak et al., 2012).", "• First-person pronouns (FirstP): Occurrence of first-person pronouns, as described in section 3.2.", "To identify first-person pronouns, we tagged parts of speech in each tweet with the Twitter POS tagger (Owoputi et al., 2013).", "• First-person pronouns + Seed words/trigrams (FP+SE1): First-person pronouns and seed words/trigrams from SECRET.", "• Two-stage classifier with first-person pronouns + seed words/trigrams (FP+SE2): A two-stage classifier with first-person pronouns and seed words/trigrams from SECRET.", "In the first stage, the classifier identifies G with first-person pronouns.", "Then in the second stage, the classifier uses seed words and trigrams to identify the M and H levels.", "• SDTM: Our model with first-person pronouns and seed words/trigrams from SECRET.", "Table 6 (SD level classification accuracies and F-measures using annotated data; columns: Method, Acc, G F1, M F1, H F1, Avg F1): Acc is accuracy, and G F1 is the F-measure for classifying the G level.", "Avg F1 is the macro-averaged value of G F1, M F1, and H F1.", "SDTM outperforms all other methods compared.", "The difference between SDTM and FirstP is statistically significant (p-value < 0.05 for accuracy, < 0.0001 for Avg F1).", "SEED, LIWC, LDA, and FirstP cannot be used directly for classification, so we use a maximum entropy model with the outputs of each of those models as features.", "BOW+ uses an SVM with a radial basis kernel, which performs better than all other settings tried, including maximum entropy.", "We split the data randomly into 80/20 for train/test.", "We run MedLDA, ASUM, and SDTM 20 times each and compute the average accuracies and F-measures for each level.", "We run LDA and MedLDA with various numbers of topics from 80 to 140, and 120 topics shows the best outputs.", "So we set 120 topics for LDA, MedLDA, and ASUM, and 60, 40, and 40 topics for SDTM's $K^G$, $K^M$, and $K^H$ respectively, which performs best among settings from (40, 40, 40) to (60, 60, 60).", "We assume that a conversation has few topics and self-disclosure levels, so we set $\alpha = \gamma = 0.1$ (Tang et al., 2014).", "To incorporate the seed words and trigrams into ASUM and SDTM, we initialize $\beta^G$, $\beta^M$, and $\beta^H$ differently.", "We assign a high value of 2.0 for each seed word and trigram of that level, a low value of $10^{-6}$ for each word that is a seed word for another level, and a default value of 0.01 for all other words.", "This approach is the same as in previous papers (Jo and Oh, 2011; Kim et al., 2013).", "As Table 6 shows, SDTM performs better than the other methods in accuracy as well as F-measure.", "LDA and MedLDA generally show the lowest performance, which is not surprising given that these models are quite general and not tuned specifically for this type of semi-supervised
classification task.", "BOW+, which uses simple word features, also does not perform well, showing an especially low F-measure for the H level.", "LIWC and SEED perform better than LDA, but they have quite low F-measures for the G and H levels.", "ASUM shows better performance for classifying the H level than the others, confirming the effectiveness of a topic modeling approach to this difficult task, but not as well as SDTM.", "FirstP shows a good F-measure for the G level, but its H level F-measure is quite low, even lower than SEED's.", "Combining first-person pronouns and seed words and trigrams (FP+SE1) performs better than each feature alone, and the two-stage classifier (FP+SE2), which takes a similar approach to SDTM, shows better results.", "Finally, SDTM classifies the G and M levels at a similar accuracy to FirstP, FP+SE1, and FP+SE2, but it significantly improves accuracy for the H level compared to all other methods.", "Relations of Self-Disclosure and Conversation Behaviors In this section, we investigate whether there is a relationship between self-disclosure and conversation behaviors over time.", "Self-disclosure is one way to maintain and improve relationships (Jourard, 1971; Joinson and Paine, 2007).", "Thus, how two people's intimacy changes over time is related to the self-disclosure in their conversations.", "However, it is hard to identify intimacy between users in a large-scale online social network.", "So we choose conversation behaviors such as conversation frequency and length, which can be treated as proxies for measuring intimacy between two people (Emmers-Sommer, 2004; Bak et al., 2012).", "With SDTM, we can automatically classify the SD level of a large number of conversations, so we investigate whether there is a similar relationship between self-disclosure in conversations and subsequent conversation behaviors with the same partner on Twitter.", "For comparing conversation behaviors over time, we divided the conversations into two sets for each dyad.", "For the initial period, we include conversations from the dyad's first conversation to 20 days later.", "And for the subsequent period, we include conversations during the subsequent 10 days.", "We compute proportions of conversations for each SD level for each dyad in the initial and subsequent periods.", "More specifically, we ask the following three questions: 1. If a dyad shows high conversation frequency in a particular time period, would they display higher SD in their subsequent conversations?", "2. If a dyad displays a high SD level in their conversations in a particular time period, would their subsequent conversations be longer?", "3. If a dyad displays a high overall SD level, would their conversations increase in length over time more than those of dyads with a lower overall SD level?", "Experiment Setup We first run SDTM on all of our Twitter conversation data with 150, 120, and 120 topics for $K^G$, $K^M$, and $K^H$ respectively.", "The hyper-parameters are the same as in section 5.", "To handle the large dataset, we employ a distributed algorithm (Newman et al., 2009), and run with 28 threads.", "Table 7 shows some of the topics that were prominent in each SD level by KL-divergence.", "As expected, the G level includes general topics such as food, celebrity, soccer, and IT devices; the M level includes personal communication and birthdays; and finally, the H level includes sickness and profanity.", "We define a new measurement, the SD level score for a dyad in a period, which is a weighted sum over each conversation with SD levels mapped to 1, 2, and 3,
for the levels G, M, and H, respectively.", "Figure 5: Relationship between initial conversation frequency and subsequent SD level. The solid line is the linear regression line, and the coefficient is 0.0020 with p < 0.0001, which shows a significant positive relationship.", "6.2 Does high frequency of conversation lead to more self-disclosure?", "We investigate whether the initial conversation frequency is correlated with the SD level in the subsequent period.", "We run a linear regression with the initial conversation frequency as the independent variable, and the SD level in the subsequent period as the dependent variable.", "The regression coefficient is 0.0020 with a low p-value (p < 0.0001).", "Figure 5 shows the scatter plot.", "We can see that the slope of the regression line is positive.", "Does high self-disclosure lead to longer conversations?", "Now we investigate the effect of the self-disclosure level on conversation length.", "We run a linear regression with the initial SD level score as the independent variable, and the rate of change in conversation length between the initial period and the subsequent period as the dependent variable.", "Conversation length is measured by the number of tweets in a conversation.", "The result of the regression is that the independent variable's coefficient is 0.048 with a low p-value (p < 0.0001).", "Figure 6 shows the scatter plot with the regression line, and we can see that the slope of the regression line is positive.", "Table 7 (example topics prominent at each SD level, with topic IDs and top words): G level: topic 101 (chocolate, butter, good, cake, peanut, milk, sugar, cream), topic 184 (obama, he's, romney, vote, right, president, people, good), topic 176 (league, win, game, season, team, cup, city, arsenal); M level: topic 36 (send, email, i'll, sent, dm, address, know, check), topic 104 (twitter, follow, tumblr, tweet, following, account, fb, followers), topic 82 (going, party, weekend, day, night, dinner, birthday); H level: topic 113 (ass, bitch, fuck, yo, shit, fucking, lmao), topic 33 (better, sick, feel, throat, cold, hope, pain), topic 19 (lips, kisses, love, smiles, softly, hand, eyes).", "Now we investigate how conversation length changes over time for three groups (low, medium, and high) by overall SD level.", "Then we investigate changes in conversation length over time.", "Figure 7 shows the results of this investigation.", "Figure 7: We divide dyads into three groups by SD level score: low, medium, and high. Conversation length noticeably increases over time in the medium and high groups, but only slightly in the low group.", "First, conversations are generally lengthier when the SD level is high.", "This phenomenon is also observed in Figure 6, but here we can see it as a long-term persistent pattern.", "Second, conversation length increases consistently and significantly for the high and medium groups, but for the low SD group, there is not a significant increase of conversation length over time.", "Related Work Prior work on quantitatively analyzing self-disclosure has relied on user surveys (Ledbetter et al., 2011; Trepte and Reinecke, 2013) or human annotation (Barak and Gluck-Ofri, 2007; Courtney Walton and Rice, 2013).", "These methods consume much time and effort, so they are not suitable for large-scale studies.", "In prior work closest to ours, Bak et al. (2012) showed that a topic model can be used to identify self-disclosure, but that work applies a two-step process in which a basic topic model is first applied to find the topics, and then the topics are post-processed for binary classification of self-disclosure.", "We improve upon this work by applying a single unified model of topics and self-disclosure for high accuracy in classifying
the three levels of self-disclosure.", "Subjectivity, which is an aspect of expressing opinions (Pang and Lee, 2008; Wiebe et al., 2004), is related to self-disclosure, but the two are different dimensions of linguistic behavior.", "There are indeed many high self-disclosure tweets that are subjective, but there are also counterexamples in the annotated dataset.", "The tweet \"England manager is Roy Hodgson.\" is low self-disclosure and low subjectivity, \"I have barely any hair left.\" is high self-disclosure but low subjectivity, and \"Senator stop lying!\" is low self-disclosure but high subjectivity.", "Conclusion and Future Work In this paper, we have presented the self-disclosure topic model (SDTM) for discovering topics and classifying SD levels from Twitter conversation data.", "We devised a set of effective seed words and trigrams, mined from a dataset of secrets.", "We also annotated Twitter conversations to make a ground-truth dataset for SD level.", "With the annotated data, we showed that SDTM outperforms previous methods in classification accuracy and F-measure.", "We publicly release the source code of SDTM and the dataset, including the annotated Twitter conversations and SECRET (http://uilab.kaist.ac.kr/research/EMNLP2014).", "We also analyzed the relationship between SD level and conversation behaviors over time.", "We found that there is a positive correlation between the initial SD level and subsequent conversation length.", "Also, dyads show a higher level of SD if they initially display high conversation frequency.", "Finally, dyads with an overall medium or high SD level have longer conversations over time.", "These results support previous results in social psychology research with more robust findings from a large-scale dataset, and show the effectiveness of computationally analyzing SD behavior.", "There are several future directions for this research.", "First, we can improve our modeling for higher accuracy and better interpretability.", "For instance, SDTM only considers first-person pronouns and topics.", "Naturally, there are other linguistic patterns that can be identified by humans but not captured by pronouns and topics.", "Second, the number of topics for each level can vary, so we can explore nonparametric topic models (Teh et al., 2006), which infer the number of topics from the data.", "Third, we can look at the relationship between self-disclosure behavior and general online social network usage beyond conversations.", "We will explore these directions in our future work." ] }
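To make the generative story above concrete, here is a minimal NumPy sketch of forward sampling from SDTM. It is an illustration under toy assumptions, not the paper's implementation: the dimensions, the Dirichlet prior values for the toy vocabulary, and the stand-in probabilities for the pronoun classifier are all invented.

```python
# Minimal forward-sampling sketch of SDTM's generative process (illustrative
# only; dimensions, priors, and the pronoun-classifier stand-in are invented).
import numpy as np

rng = np.random.default_rng(0)

V = 50                              # toy vocabulary size
K = {"G": 3, "M": 2, "H": 2}        # topics per level (paper uses e.g. 60/40/40)
alpha, gamma = 0.1, 0.1             # Dirichlet priors, as in the paper
beta = {l: np.full(V, 0.01) for l in K}   # seed words would get a larger prior

# phi[l][k]: word distribution of topic k at SD level l
phi = {l: rng.dirichlet(beta[l], size=K[l]) for l in K}

def generate_conversation(n_tweets=5, n_words=8):
    # per-conversation topic proportions for each level, and the M/H mixture
    theta = {l: rng.dirichlet(np.full(K[l], alpha)) for l in K}
    pi = rng.dirichlet(np.full(2, gamma))       # proportion over {M, H}
    tweets = []
    for _ in range(n_tweets):
        x = rng.integers(0, 2)                  # toy first-person-pronoun feature
        omega = 0.8 if x else 0.2               # stand-in for MaxEnt(x, lambda)
        y = rng.random() < omega                # y = 0 means G, y = 1 means M/H
        if not y:
            level = "G"
        else:
            level = ("M", "H")[rng.choice(2, p=pi)]
        z = rng.choice(K[level], p=theta[level])              # topic of this tweet
        words = rng.choice(V, size=n_words, p=phi[level][z])  # word ids
        tweets.append((level, int(z), words.tolist()))
    return tweets

print(generate_conversation())
```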
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "2.4", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "5", "6", "6.1", "6.3", "7", "8" ], "paper_header_content": [ "Introduction", "Self-Disclosure", "Self-disclosure (SD) level", "G Level of Self-Disclosure", "M Level of Self-Disclosure", "H Level of Self-Disclosure", "Model", "Classifying G vs M/H levels", "Classifying M vs H levels", "Inference", "Data Collection and Annotation", "Collecting Twitter conversations", "Annotating self-disclosure level", "Classification of Self-Disclosure Level", "Relations of Self-Disclosure and Conversation Behaviors", "Experiment Setup", "Does high self-disclosure lead to longer conversations?", "Related Work", "Conclusion and Future Work" ] }
GEM-SciDuet-train-75#paper-1188#slide-19
Maximum Entropy Classifier
Learned from annotated dataset Works better than others (C4.5, Naive Bayes, SVM with linear, polynomial, and radial basis kernels) Used to identify aspects and opinions in topic models [Zhao2010]
Learned from annotated dataset Works better than others (C4.5, Naive Bayes, SVM with linear, polynomial, and radial basis kernels) Used to identify aspects and opinions in topic models [Zhao2010]
[]
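The slide above names the G-vs-M/H component of the model. As a concrete stand-in, scikit-learn's LogisticRegression fits a maximum entropy model; the sketch below trains it on a single first-person-pronoun count feature. The four example tweets, labels, and the test sentence are invented for illustration.

```python
# Sketch of the G vs. M/H maximum entropy classifier using a first-person-
# pronoun count feature. scikit-learn's LogisticRegression is a MaxEnt model;
# the tiny training set here is invented.
from sklearn.linear_model import LogisticRegression

FIRST_PERSON = {"i", "my", "me"}

def pronoun_features(tweet):
    tokens = tweet.lower().split()
    return [sum(t in FIRST_PERSON for t in tokens)]  # count of I / my / me

tweets = [
    "England manager is Roy Hodgson",       # no disclosure -> G
    "I live in Seoul and my name is Jin",   # personal info -> M/H
    "great game last night",                # G
    "I wish I could tell my family",        # M/H
]
labels = [0, 1, 0, 1]  # 0 = G, 1 = M/H

clf = LogisticRegression().fit([pronoun_features(t) for t in tweets], labels)
print(clf.predict([pronoun_features("I can't tell anyone about my illness")]))
```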
GEM-SciDuet-train-75#paper-1188#slide-20
1188
Self-disclosure topic model for classifying and analyzing Twitter conversations
Self-disclosure, the act of revealing oneself to others, is an important social behavior that strengthens interpersonal relationships and increases social support. Although there are many social science studies of self-disclosure, they are based on manual coding of small datasets and questionnaires. We conduct a computational analysis of self-disclosure with a large dataset of naturally-occurring conversations, a semi-supervised machine learning algorithm, and a computational analysis of the effects of self-disclosure on subsequent conversations. We use a longitudinal dataset of 17 million tweets, all of which occurred in conversations that consist of five or more tweets directly replying to the previous tweet, and from dyads with twenty or more conversations each. We develop the self-disclosure topic model (SDTM), a variant of latent Dirichlet allocation (LDA) for automatically classifying the level of self-disclosure for each tweet. We take the results of SDTM and analyze the effects of self-disclosure on subsequent conversations. Our model significantly outperforms several comparable methods on classifying the level of self-disclosure, and the analysis of the longitudinal data using SDTM uncovers a significant and positive correlation between self-disclosure and conversation frequency and length.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238 ], "paper_content_text": [ "Introduction Self-disclosure is an important and pervasive social behavior.", "People disclose personal information about themselves to improve and maintain * This work was done when JinYeong Bak was a visiting student at Microsoft Research, Beijing, China.", "relationships (Jourard, 1971; Joinson and Paine, 2007) .", "A common instance of self-disclosure is the start of a conversation with an exchange of names and additional self-introductions.", "Another example of self-disclosure, shown in Figure 1c , where the information disclosed about a family member's serious illness, is much more personal than the exchange of names.", "In this paper, we seek to understand this important social behavior using a large-scale Twitter conversation data, automatically classifying the level of self-disclosure using machine learning and correlating the patterns with conversational behaviors which can serve as proxies for measuring intimacy between two conversational partners.", "Twitter conversation data, explained in more detail in section 4.1, enable an extremely large scale study of naturally-occurring self-disclosure behavior, compared to traditional social science studies.", "One challenge of such large scale study, though, remains in the lack of labeled groundtruth data of self-disclosure level.", "That is, naturally-occurring Twitter conversations do not come tagged with the level of self-disclosure in each conversation.", "To overcome that challenge, we propose a semi-supervised machine learning approach using probabilistic topic modeling.", "Our self-disclosure topic model (SDTM) assumes that self-disclosure behavior can be modeled using a combination of simple linguistic features (e.g., pronouns) with automatically discovered semantic themes (i.e., topics).", "For instance, an utterance \"I am finally through with this disastrous relationship\" uses a first-person pronoun and contains a topic about personal relationships.", "In comparison with various other models, SDTM shows the highest accuracy, and the resulting conversation frequency and length patterns on self-disclosure are shown different over time.", "Our contributions to the research community include the following: • We present key features and prior knowledge for identifying self-disclosure level, and show relevance of it with experiment results (Sec.", "2).", "• We present a topic model that explicitly includes the 
level of self-disclosure in a conversation using linguistic features and the latent semantic topics (Sec.", "3).", "• We collect a large dataset of Twitter conversations over three years and annotate a small subset with self-disclosure level (Sec.", "4).", "• We compare the classification accuracy of SDTM with other models and show that it performs the best (Sec.", "5).", "• We correlate the self-disclosure patterns and conversation behaviors to show that there is significant relationship over time (Sec.", "6).", "Self-Disclosure In this section, we look at social science literature for definition of the levels of self-disclosure.", "Using that definition, we devise an approach to automatically identify the levels of self-disclosure in a large corpus of OSN conversations.", "We discuss three approaches, first, using first-person pronoun features, and second, extracting seed words and phrases from the Twitter conversation corpus, and third, extracting seed words and phrases from an external corpus of anonymously posted secrets, and we demonstrate the efficacy of those approaches with an annotated corpus.", "Self-disclosure (SD) level To analyze self-disclosure, researchers categorize self-disclosure language into three levels: G (general) for no disclosure, M for medium disclosure, and H for high disclosure (Vondracek and Von dracek, 1971; Barak and Gluck-Ofri, 2007 G Level of Self-Disclosure An obvious clue of self-disclosure is the use of first-person pronouns.", "For example, phrases such as 'I live' or 'My name is' indicate that the utterance contains personal information.", "In previous research, the simple method of counting first-person pronouns was used to measure the degree of self-disclosure (Joinson, 2001; Barak and Gluck-Ofri, 2007) .", "Consequently, the absence of a first-person pronoun signals that the utterance belongs in the G level of self-disclosure.", "We verify this pattern with a dataset of Tweets annotated with G, M, and H levels.", "We divide the annotated Tweets into two classes, G and M/H.", "Then we compute mutual information of each unigram, bigram, or trigram feature to see which features are most discriminative.", "As Table 1 shows, 18 out of 30 M Level of Self-Disclosure Utterances with M level include two types: 1) information related with past events and future plans, and 2) general information about self (Barak and Gluck-Ofri, 2007) .", "For the former, we add as seed trigrams 'I have been' and 'I will'.", "For the latter, we use seven types of information generally accepted to be personally identifiable information (McCallister, 2010) , as listed in the left column of Table 2 .", "To find the appropriate trigrams for those, we take Twitter conversation data (described in Section 4.1) and look for trigrams that begin with 'I' and 'my' and occur more than 200 times.", "We then check each one to see whether it is related with any of the seven types listed in the table.", "As a result, we find 57 seed trigrams for M level.", "H Level of Self-Disclosure Utterances with H level express secretive wishes or sensitive information that exposes self or someone close (Barak and Gluck-Ofri, 2007) .", "These are generally kept as secrets.", "With this intuition, we crawled 26,523 posts from Six Billion Secrets 1 site where users post secrets anonymously 2 .", "We call this external dataset SECRET.", "Unlike G and M levels, evidence of H level of self-disclosure tends to be topical, such as physical appearance, mental and physical illnesses, and family problems, so we 
take an approach of fitting a topic model driven by seed words.", "A similar approach has been successful in sentiment classification (Jo and Oh, 2011; Kim et al., 2013) .", "A critical component of this approach is the set of seed words with which to drive the discovery of topics that are most indicative of H level selfdisclosure.", "To extract the seed words that express secretive personal information, we compute mutual information (Manning et al., 2008) with SE-CRET and 24,610 randomly selected tweets.", "We select 1,000 words with high mutual information and filter out stop words.", "Table 3 shows some of these words.", "To extract seed trigrams of secretive wishes, we again look for trigrams that start with 'I' or 'my', occur more than 200 times, and select trigrams of wishful thinking, such as 'I want to', and 'I wish I'.", "In total, there are 88 seed words and 8 seed trigrams for H. Since SECRET is quite different from Twitter, we must show that posts in SECRET are semantically similar to the H level Tweets.", "Rather than directly comparing SECRET posts and Tweets, we use the same method of extracting discriminative word features from the annotated H level Tweets (see Section 4.2).", "Table 3 shows the seed words extracted from SECRET as well as the annotated Tweets.", "Because the annotated dataset consists of only 200 conversations, the coverage of the topics seems narrower than the much larger SECRETS, but both datasets show similarities in the topics.", "This, combined with the results of the model with the two sets of seed words (see Section 5 for the results), shows that SECRETS is an effective and simple-to-obtain substitute for an annotated corpus of H level of self-disclosure.", "This section describes our model, the selfdisclosure topic model (SDTM), for classifying self-disclosure level and discovering topics for each self-disclosure level.", "SD level of tweet ct πc SD level proportion of conversation c θ G c ; θ M c ; θ H c Topic proportion of {G; M; H} in con- versation c φ G ; φ M ; φ H Word distribution of {G; M; H} α; γ Dirichlet prior for θ; π β G , β M ; β H Dirichlet prior for φ G ; φ M ; φ H n cl Model In section 2, we discussed different approaches to identifying each level of self-disclosure, based on social science literature, annotated and unannotated Tweets, and an external corpus of secret posts.", "In this section, we describe our self-disclosure topic model, based on the widely used latent Dirichlet allocation (Blei et al., 2003) , which incorporates those approaches.", "Figure 2 illustrates the graphical model of 1.", "For each level l ∈ {G, M, H}: For each topic k ∈ {1, .", ".", ".", ", K l }: Draw φ l k ∼ Dir(β l ) 2.", "For each conversation c ∈ {1, .", ".", ".", ", C}: (a) Draw θ G c ∼ Dir(α) (b) Draw θ M c ∼ Dir(α) (c) Draw θ H c ∼ Dir(α) (d) Draw π c ∼ Dir(γ) (e) For each message t ∈ {1, .", ".", ".", ", T }: i.", "Observe first-person pronouns features x ct ii.", "Draw ω ct ∼ M axEnt(x ct , λ) iii.", "Draw y ct ∼ Bernoulli(ω ct ) iv.", "If y ct = 0 which is G level: A.", "Draw z ct ∼ M ult(θ G c ) B.", "For each word n ∈ {1, .", ".", ".", ", N }: Draw word w ctn ∼ M ult(φ G zct ) Else which can be M or H level: A.", "Draw r ct ∼ M ult(π c ) B.", "Draw z ct ∼ M ult(θ rct c ) C. 
For each word n ∈ {1, .", ".", ".", ", N }: Draw word w ctn ∼ M ult(φ rct zct ) Figure 3: Generative process of SDTM.", "SDTM and how those approaches are embodied in it.", "The first approach based on the first-person pronouns is implemented by the observed variable x ct and the parameters λ from a maximum entropy classifier for G vs. M/H level.", "The approach of seed words and phrases for levels M and H is implemented by the three separate word-topic probability vectors for the three levels of SD: φ l which has a Bayesian informative prior β l where l ∈ {G, M, H}, the three levels of self-disclosure.", "Table 4 lists the notations used in the model and the generative process, and Figure 3 describes the generative process.", "Classifying G vs M/H levels Classifying the SD level for each tweet is done in two parts, and the first part classifies G vs. M/H levels with first-person pronouns (I, my, me).", "In the graphical model, y is the latent variable that represents this classification, and ω is the distribution over y. x is the observation of the firstperson pronoun in the tweets, and λ are the parameters learned from the maximum entropy classifier.", "With the annotated Twitter conversation dataset (described in Section 4.2), we experimented with several classifiers (Decision tree, Naive Bayes) and chose the maximum entropy classifier because it performed the best, similar to other joint topic models (Zhao et al., 2010; Mukherjee et al., 2013) .", "Classifying M vs H levels The second part of the classification, the M and the H level, is driven by informative priors with seed words and seed trigrams.", "In the graphical model, r is the latent variable that represents this classification, and π is the distribution over r. γ is a non-informative prior for π, and β l is an informative prior for each SD level by seed words.", "For example, we assign a high value for the seed word 'acne' for β H , and a low value for 'My name is'.", "This approach is the same as joint models of topic and sentiment (Jo and Oh, 2011; Kim et al., 2013) .", "Inference For posterior inference of SDTM, we use collapsed Gibbs sampling which integrates out latent random variables ω, π, θ, and φ.", "Then we only need to compute y, r and z for each tweet.", "We compute full conditional distribution p(y ct = j , r ct = l , z ct = k |y −ct , r −ct , z −ct , w, x) for tweet ct as follows: p(y ct = 0, z ct = k |y −ct , r −ct , z −ct , w, x) ∝ exp(λ 0 · x ct ) 1 j=0 exp(λ j · x ct ) g(c, t, l , k ), p(y ct = 1, r ct = l , z ct = k |y −ct , r −ct , z −ct , w, x) ∝ exp(λ 1 · x ct ) 1 j=0 exp(λ j · x ct ) (γ l + n (−ct) cl ) g(c, t, l , k ), where z −ct , r −ct , y −ct are z, r, y without tweet ct, m ctk (·) is the marginalized sum over word v of m ctk v and the function g(c, t, l , k ) as follows: g(c, t, l , k ) = Γ( V v=1 β l v + n l −(ct) k v ) Γ( V v=1 β l v + n l −(ct) k v + m ctk (·) ) α k + n l (−ct) ck K k=1 α k + n l ck V v=1 Γ(β l v + n l −(ct) k v + m ctk v ) Γ(β l v + n l −(ct) k v ) .", "Data Collection and Annotation To test our self-disclosure topic model, we use a large dataset of conversations consisting of Tweets over three years such that we can analyze the relationship between self-disclosure behavior and conversation frequency and length over time.", "We chose to crawl Twitter because it offers a practical and large source of conversations (Ritter et al., 2010) .", "Others have also analyzed Twitter conversations for natural language and social media Conv's Tweets 101,686 61,451 1,956,993 17,178,638 
Table 5 : Dataset of Twitter conversations.", "We chose conversations consisting of five or more tweets each.", "We chose dyads with twenty or more conversations.", "Users Dyads research (boyd et al., 2010; Danescu-Niculescu-Mizil et al., 2011) , but we collect conversations from the same set of dyads over several months for a unique longitudinal dataset.", "We also make sure that each conversation is at least five tweets, and that each dyad has at least twenty conversations.", "Collecting Twitter conversations We define a Twitter conversation as a chain of tweets where two users are consecutively replying to each other's tweets using the Twitter reply button.", "We initialize the set of users by randomly sampling thirteen users who reply to other users in English from the Twitter public streams 3 .", "Then we crawl each user's public tweets, and look at users who are mentioned in those tweets.", "It is a breadth-first search in the network defined by users as nodes and edges as conversations.", "We run this search for dyads until the depth of four, and filter out users who tweet in a non-English language.", "We use an open source tool for detecting English tweets 4 .", "To protect users' privacy, we replace Twitter userid, usernames and url in tweets with random strings.", "This dataset consists of 101,686 users, 61,451 dyads, 1,956,993 conversations and 17,178,638 tweets which were posted between August 2007 to July 2013.", "Table 5 summarizes the dataset.", "Annotating self-disclosure level To measure the accuracy of our model, we randomly sample 301 conversations, each with ten or fewer tweets, and ask three judges, fluent in English and graduate students/researchers, to annotate each tweet with the level of self-disclosure.", "Judges first read and discussed the definitions and examples of self-disclosure level shown in (Barak and Gluck-Ofri, 2007) , then they worked separately on a Web-based platform.", "As a result of annotation, there are 122 G level converstaions, 147 M level and 32 H level con- versations, and inter-rater agreement using Fleiss kappa (Fleiss, 1971 ) is 0.68, which is substantial agreement result (Landis and Koch, 1977) .", "Classification of Self-Disclosure Level This section describes experiments and results of SDTM as well as several other methods for classification of self-disclosure level.", "We first start with the annotated dataset in section 4.2 in which each tweet is annotated with SD level.", "We then aggregate all of the tweets of a conversation, and we compute the proportions of tweets in each SD level.", "When the proportion of tweets at M or H level is equal to or greater than 0.2, we take the level of the larger proportion and assign that level to the conversation.", "When the proportions of tweets at M or H level are both less than 0.2, we assign G to the SD level.", "The reason for setting 0.2 as the threshold is that a conversation containing tweets with H or M level of selfdisclosure usually starts with a greeting or a general comment, and contains one or more questions or comments before or after the self-disclosure tweet.", "We compare SDTM with the following methods for classifying conversations for SD level: • LDA (Blei et al., 2003) : A Bayesian topic model.", "Each conversation is treated as a document.", "Used in previous work (Bak et al., 2012) .", "• MedLDA (Zhu et al., 2012) : A supervised topic model for document classification.", "Each conversation is treated as a document and response variable can be mapped to a SD level.", "• LIWC 
(Tausczik and Pennebaker, 2010): Word counts of particular categories 5 .", "Used in previous work (Houghton and Joinson, 2012).", "• Bag of Words + Bigrams + Trigrams (BOW+): A bag of words, bigram and trigram features.", "We exclude features that appear only once or twice.", "• Seed words and trigrams (SEED): Occurrences of seed words/trigrams from SECRET which are described in section 3.3.", "• SDTM with seed words from annotated Tweets (SDTM−): To compare with SDTM below using seed words from SECRET, this uses seed words from the annotated data described in section 2.4.", "• ASUM (Jo and Oh, 2011 ): A joint model of sentiments and topics.", "We map each SD level to one sentiment and use the same seed words/trigrams from SECRET as in SDTM below.", "Used in previous work (Bak et al., 2012) .", "• First-person pronouns (FirstP): Occurrence of first-person pronouns which are described in section 3.2.", "To identify first-person pronouns, we tagged parts of speech in each tweet with the Twitter POS tagger (Owoputi et al., 2013) .", "• First-person pronouns + Seed words/trigrams (FP+SE1): First-person pronouns and seed words/trigrams from SECRET.", "• Two stage classifier with First-person pronouns + Seed words/trigrams (FP+SE2): A Method Acc G F 1 M F 1 H F Table 6 : SD level classification accuracies and Fmeasures using annotated data.", "Acc is accuracy, and G F 1 is F-measure for classifying the G level.", "Avg F 1 is the macroaveraged value of G F 1 , M F 1 and H F 1 .", "SDTM outperforms all other methods compared.", "The difference between SDTM and FirstP is statistically significant (p-value < 0.05 for accuracy, < 0.0001 for Avg F 1 ).", "two stage classifier with first-person pronouns and seed words/trigrams from SE-CRET.", "In the first stage, the classifier identifies G with first-person pronouns.", "Then in the second stage, the classifier uses seed words and trigrams to identify M and H levels.", "• SDTM: Our model with first-person pronouns and seed words/trigrams from SE-CRET.", "SEED, LIWC, LDA and FirstP cannot be used directly for classification, so we use Maximum entropy model with outputs of each of those models as features 6 .", "BOW+ uses SVM with a radial basis kernel which performs better than all other settings tried including maximum entropy.", "We split the data randomly into 80/20 for train/test.", "We run MedLDA, ASUM and SDTM 20 times each and compute the average accuracies and F-measure for each level.", "We run LDA and MedLDA with various number of topics from 80 to 140, and 120 topics shows best outputs.", "So we set 120 topics for LDA, MedLDA and ASUM, 60; 40; 40 topics for SDTM K G , K M and K H respectively which is best perform from 40; 40; 40 to 60; 60; 60 topics.", "We assume that a conversation has few topics and self-disclosure levels, so we set α = γ = 0.1 (Tang et al., 2014) .", "To incorporate the seed words and trigrams into ASUM and SDTM, we initialize β G , β M and β H differently.", "We assign a high value of 2.0 for each seed word and trigram for that level, and a low value of 10 −6 for each word that is a seed word for another level, and a default value of 0.01 for all other words.", "This approach is the same as previous papers (Jo and Oh, 2011; Kim et al., 2013) .", "As Table 6 shows, SDTM performs better than the other methods for accuracy as well as Fmeasure.", "LDA and MedLDA generally show the lowest performance, which is not surprising given these models are quite general and not tuned specifically for this type of semi-supervised 
classification task.", "BOW which is simple word features also does not perform well, showing especially low F-measure for the H level.", "LIWC and SEED perform better than LDA, but these have quite low F-measure for G and H levels.", "ASUM shows better performance for classifying H level than others, confirming the effectiveness of a topic modeling approach to this difficult task, but not as well as SDTM.", "FirstP shows good F-measure for the G level, but the H level F-measure is quite low, even lower than SEED.", "Combining first-person pronouns and seed words and trigrams (FP+SE1) shows better than each feature alone, and the two stage classifier (FP+SE2) which is a similar approach taken in SDTM shows better results.", "Finally, SDTM classifies G and M level at a similar accuracy with FirstP, FP+SE1 and FP+SE2, but it significantly improves accuracy for the H level compared to all other methods.", "Relations of Self-Disclosure and Conversation Behaviors In this section, we investigate whether there is a relationship between self-disclosure and conversation behaviors over time.", "Self-disclosure is one way to maintain and improve relationships (Jourard, 1971; Joinson and Paine, 2007) .", "So two people's intimacy changes over time has relationship with self-disclosure in their conversation.", "However, it is hard to identify intimacy between users in large scale online social network.", "So we choose conversation behaviors such as conversation frequency and length which can be treated as proxies for measuring intimacy between two people (Emmers- Sommer, 2004; Bak et al., 2012) .", "With SDTM, we can automatically classify the SD level of a large number of conversations, so we investigate whether there is a similar relationship between self-disclosure in conversations and subsequent conversation behaviors with the same partner on Twitter.", "For comparing conversation behaviors over time, we divided the conversations into two sets for each dyad.", "For the initial period, we include conversations from the dyad's first conversation to 20 days later.", "And for the subsequent period, we include conversations during the subsequent 10 days.", "We compute proportions of conversation for each SD level for each dyad in the initial and subsequent periods.", "More specifically, we ask the following three questions: 1.", "If a dyad shows high conversation frequency at a particular time period, would they display higher SD in their subsequent conversations?", "2.", "If a dyad displays high SD level in their conversations at a particular time period, would their subsequent conversations be longer?", "3.", "If a dyad displays high overall SD level, would their conversations increase in length over time more than dyads with lower overall SD level?", "Experiment Setup We first run SDTM with all of our Twitter conversation data with 150; 120; 120 topics for SDTM K G , K M and K H respectively.", "The hyper-parameters are the same as in section 5.", "To handle a large dataset, we employ a distributed algorithm (Newman et al., 2009) , and run with 28 threads.", "Table 7 shows some of the topics that were prominent in each SD level by KL-divergence.", "As expected, G level includes general topics such as food, celebrity, soccer and IT devices, M level includes personal communication and birthday, and finally, H level includes sickness and profanity.", "We define a new measurement, SD level score for a dyad in the period, which is a weighted sum of each conversation with SD levels mapped to 1, 2, and 3, 
for the levels G, M, and H, respectively.", "Figure 5 : Relationship between initial conversation frequency and subsequent SD level.", "The solid line is the linear regression line, and the coefficient is 0.0020 with p < 0.0001, which shows a significant positive relationship.", "Subsequent SD level 6.2 Does high frequency of conversation lead to more self-disclosure?", "We investigate whether the initial conversation frequency is correlated with the SD level in the subsequent period.", "We run linear regression with the initial conversation frequency as the independent variable, and SD level in the subsequent period as the dependent variable.", "The regression coefficient is 0.0020 with low pvalue (p < 0.0001).", "Figure 5 shows the scatter plot.", "We can see that the slope of the regression line is positive.", "Does high self-disclosure lead to longer conversations?", "Now we investigate the effect of the selfdisclosure level to conversation length.", "We run linear regression with the intial SD level score as the independent variable, and the rate of change in conversation length between initial period and subsequent period as the dependent variable.", "Conversation length is measured by the number of tweets in a conversation.", "The result of regression is that the independent variable's coefficient is 0.048 with a low p-value (p < 0.0001).", "Figure 6 shows the scatter plot with the regression line, and we can see that the slope of regression line is positive.", "H level 101 184 176 36 104 82 113 33 19 chocolate obama league send twitter going ass better lips butter he's win email follow party bitch sick kisses good romney game i'll tumblr weekend fuck feel love cake vote season sent tweet day yo throat smiles peanut right team dm following night shit cold softly milk president cup address account dinner fucking hope hand sugar people city know fb birthday lmao pain eyes cream good arsenal check followers Now we investigate the conversation length changes over time with three groups, low, medium, and high, by overall SD level.", "Then we investigate changes in conversation length over time.", "Figure 7 shows the results of this investigation.", "First, conversations are generally lengthier when SD level is high.", "This phenomenon is also ob- We divide dyads into three groups by SD level score as low, medium, and high.", "Conversation length noticeably increases over time in the medium and high groups, but only slight in the low group.", "served in figure 6 , but here we can see it as a long-term persistent pattern.", "Second, conversation length increases consistently and significantly for the high and medium groups, but for the low SD group, there is not a significant increase of conversation length over time.", "G level M level Related Work Prior work on quantitatively analyzing selfdisclosure has relied on user surveys (Ledbetter et al., 2011; Trepte and Reinecke, 2013) or human annotation (Barak and Gluck-Ofri, 2007; Courtney Walton and Rice, 2013) .", "These methods consume much time and effort, so they are not suitable for large-scale studies.", "In prior work closest to ours, Bak et al.", "(2012) showed that a topic model can be used to identify self-disclosure, but that work applies a two-step process in which a basic topic model is first applied to find the topics, and then the topics are post-processed for binary classification of self-disclosure.", "We improve upon this work by applying a single unified model of topics and self-disclosure for high accuracy in classifying 
the three levels of self-disclosure.", "Subjectivity which is aspect of expressing opinions (Pang and Lee, 2008; Wiebe et al., 2004) is related with self-disclosure, but they are different dimensions of linguistic behavior.", "Because there indeed are many high self-disclosure tweets that are subjective, but there are also counter examples in annotated dataset.", "The tweet \"England manager is Roy Hodgson.\"", "is low self-disclosure and low subjectivity, \"I have barely any hair left.\"", "is high self-disclosure but low subjectivity, and \"Senator stop lying!\"", "is low self-disclosure but high subjectivity.", "Conclusion and Future Work In this paper, we have presented the self-disclosure topic model (SDTM) for discovering topics and classifying SD levels from Twitter conversation data.", "We devised a set of effective seed words and trigrams, mined from a dataset of secrets.", "We also annotated Twitter conversations to make a ground-truth dataset for SD level.", "With annotated data, we showed that SDTM outperforms previous methods in classification accuracy and Fmeasure.", "We publish the source code of SDTM and the dataset include annotated Twitter conversations and SECRET publicly 7 .", "We also analyzed the relationship between SD level and conversation behaviors over time.", "We found that there is a positive correlation between initial SD level and subsequent conversation length.", "Also, dyads show higher level of SD if they initially display high conversation frequency.", "Finally, dyads with overall medium and high SD level will have longer conversations over time.", "These results support previous results in so-7 http://uilab.kaist.ac.kr/research/ EMNLP2014 cial psychology research with more robust results from a large-scale dataset, and show the effectiveness of computationally analyzing at SD behavior.", "There are several future directions for this research.", "First, we can improve our modeling for higher accuracy and better interpretability.", "For instance, SDTM only considers first-person pronouns and topics.", "Naturally, there are other linguistic patterns that can be identified by humans but not captured by pronouns and topics.", "Second, the number of topics for each level is varied, and so we can explore nonparametric topic models (Teh et al., 2006) which infer the number of topics from the data.", "Third, we can look at the relationship between self-disclosure behavior and general online social network usage beyond conversations.", "We will explore these directions in our future work." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "2.4", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "5", "6", "6.1", "6.3", "7", "8" ], "paper_header_content": [ "Introduction", "Self-Disclosure", "Self-disclosure (SD) level", "G Level of Self-Disclosure", "M Level of Self-Disclosure", "H Level of Self-Disclosure", "Model", "Classifying G vs M/H levels", "Classifying M vs H levels", "Inference", "Data Collection and Annotation", "Collecting Twitter conversations", "Annotating self-disclosure level", "Classification of Self-Disclosure Level", "Relations of Self-Disclosure and Conversation Behaviors", "Experiment Setup", "Does high self-disclosure lead to longer conversations?", "Related Work", "Conclusion and Future Work" ] }
GEM-SciDuet-train-75#paper-1188#slide-20
Seed Words
Seed words are prior knowledge for each level No seed words (symmetric prior) Data-driven approach in Twitter conversation Data-driven approach from external dataset Use Twitter conversation dataset Get frequently occurring trigrams that begin with I and my Name Birthday Location Occupation My name is My birthday is I live in My job is My last name My birthday party I lived in My new job My real name My bday is I live on My high school Use external dataset (Six Billion Secrets) Users write and share their secrets Extract high-ranked word features Example seed words Example of secret posts in Six Billion Secrets Physical appearance Health condition Death chubby addicted dead fat surgery died scar syndrome suicide acne disorder funeral
Seed words are prior knowledge for each level No seed words (symmetric prior) Data-driven approach in Twitter conversation Data-driven approach from external dataset Use Twitter conversation dataset Get frequently occurring trigrams that begin with I and my Name Birthday Location Occupation My name is My birthday is I live in My job is My last name My birthday party I lived in My new job My real name My bday is I live on My high school Use external dataset (Six Billion Secrets) Users write and share their secrets Extract high-ranked word features Example seed words Example of secret posts in Six Billion Secrets Physical appearance Health condition Death chubby addicted dead fat surgery died scar syndrome suicide acne disorder funeral
[]
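The "Seed Words" slide in this row compresses the paper's two data-driven extraction steps. Below is a rough Python sketch of both; secret_posts, random_tweets, and conversation_tweets are hypothetical corpora, sklearn's mutual_info_classif is only a stand-in for the mutual-information ranking the paper cites, and the final manual curation of the ranked lists is not shown.

```python
# Sketch of seed extraction for the M- and H-level priors; assumes three
# hypothetical corpora: secret_posts, random_tweets, conversation_tweets.
from collections import Counter
from sklearn.feature_extraction.text import CountVectorizer, ENGLISH_STOP_WORDS
from sklearn.feature_selection import mutual_info_classif

def h_level_seed_words(secret_posts, random_tweets, top_k=1000):
    """Rank words by mutual information between SECRET posts and random tweets,
    then drop stop words (the paper keeps the top 1,000 before curation)."""
    docs = list(secret_posts) + list(random_tweets)
    labels = [1] * len(secret_posts) + [0] * len(random_tweets)
    vec = CountVectorizer(min_df=5)
    X = vec.fit_transform(docs)
    mi = mutual_info_classif(X, labels, discrete_features=True)
    ranked = sorted(zip(vec.get_feature_names_out(), mi), key=lambda t: -t[1])
    return [w for w, _ in ranked if w not in ENGLISH_STOP_WORDS][:top_k]

def first_person_trigrams(conversation_tweets, min_count=200):
    """Frequent trigrams beginning with 'I' or 'my', candidate M-level seeds
    (H-level wishful-thinking trigrams are picked from the same counts)."""
    counts = Counter()
    for tweet in conversation_tweets:
        toks = tweet.lower().split()
        for i in range(len(toks) - 2):
            if toks[i] in ("i", "my"):
                counts[" ".join(toks[i:i + 3])] += 1
    return [tri for tri, c in counts.items() if c >= min_count]
```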
GEM-SciDuet-train-75#paper-1188#slide-21
1188
Self-disclosure topic model for classifying and analyzing Twitter conversations
GEM-SciDuet-train-75#paper-1188#slide-21
Classifying Performance
Bag of Words + Bigrams + Trigrams features Seed words and trigrams features FirstP and SEED features
Bag of Words + Bigrams + Trigrams features Seed words and trigrams features FirstP and SEED features
[]
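The "Classifying Performance" slide refers to the paper's comparison of baselines (Table 6): accuracy plus per-level and macro-averaged F-measure on a random 80/20 train/test split. A small sketch of that evaluation for the BOW+ baseline (SVM with an RBF kernel over word/bigram/trigram counts, with features occurring only once or twice dropped) follows; vectorizer settings beyond the rare-feature cutoff are assumptions.

```python
# Evaluation sketch for conversation-level SD classification (labels G, M, H):
# 80/20 split, accuracy, per-level F1, and macro-averaged F1.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def evaluate_bow_plus(conversations, labels, seed=0):
    # conversations: one concatenated text string per conversation
    X_tr, X_te, y_tr, y_te = train_test_split(
        conversations, labels, test_size=0.2, random_state=seed)
    model = make_pipeline(
        CountVectorizer(ngram_range=(1, 3), min_df=3),  # drop rare n-grams
        SVC(kernel="rbf"))
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    per_level = f1_score(y_te, pred, labels=["G", "M", "H"], average=None)
    return {"accuracy": accuracy_score(y_te, pred),
            "f1": dict(zip(["G", "M", "H"], per_level)),
            "macro_f1": f1_score(y_te, pred, average="macro")}
```

The same loop applies to the other feature sets on the slide (SEED, FirstP, FP+SE) by swapping the feature extractor; the paper reports averages over 20 runs for the topic-model variants.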
GEM-SciDuet-train-75#paper-1188#slide-23
1188
Self-disclosure topic model for classifying and analyzing Twitter conversations
Self-disclosure, the act of revealing oneself to others, is an important social behavior that strengthens interpersonal relationships and increases social support. Although there are many social science studies of self-disclosure, they are based on manual coding of small datasets and questionnaires. We conduct a computational analysis of self-disclosure with a large dataset of naturally-occurring conversations, a semi-supervised machine learning algorithm, and a computational analysis of the effects of self-disclosure on subsequent conversations. We use a longitudinal dataset of 17 million tweets, all of which occurred in conversations that consist of five or more tweets directly replying to the previous tweet, and from dyads with twenty of more conversations each. We develop self-disclosure topic model (SDTM), a variant of latent Dirichlet allocation (LDA) for automatically classifying the level of self-disclosure for each tweet. We take the results of SDTM and analyze the effects of self-disclosure on subsequent conversations. Our model significantly outperforms several comparable methods on classifying the level of selfdisclosure, and the analysis of the longitudinal data using SDTM uncovers significant and positive correlation between selfdisclosure and conversation frequency and length.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238 ], "paper_content_text": [ "Introduction Self-disclosure is an important and pervasive social behavior.", "People disclose personal information about themselves to improve and maintain * This work was done when JinYeong Bak was a visiting student at Microsoft Research, Beijing, China.", "relationships (Jourard, 1971; Joinson and Paine, 2007) .", "A common instance of self-disclosure is the start of a conversation with an exchange of names and additional self-introductions.", "Another example of self-disclosure, shown in Figure 1c , where the information disclosed about a family member's serious illness, is much more personal than the exchange of names.", "In this paper, we seek to understand this important social behavior using a large-scale Twitter conversation data, automatically classifying the level of self-disclosure using machine learning and correlating the patterns with conversational behaviors which can serve as proxies for measuring intimacy between two conversational partners.", "Twitter conversation data, explained in more detail in section 4.1, enable an extremely large scale study of naturally-occurring self-disclosure behavior, compared to traditional social science studies.", "One challenge of such large scale study, though, remains in the lack of labeled groundtruth data of self-disclosure level.", "That is, naturally-occurring Twitter conversations do not come tagged with the level of self-disclosure in each conversation.", "To overcome that challenge, we propose a semi-supervised machine learning approach using probabilistic topic modeling.", "Our self-disclosure topic model (SDTM) assumes that self-disclosure behavior can be modeled using a combination of simple linguistic features (e.g., pronouns) with automatically discovered semantic themes (i.e., topics).", "For instance, an utterance \"I am finally through with this disastrous relationship\" uses a first-person pronoun and contains a topic about personal relationships.", "In comparison with various other models, SDTM shows the highest accuracy, and the resulting conversation frequency and length patterns on self-disclosure are shown different over time.", "Our contributions to the research community include the following: • We present key features and prior knowledge for identifying self-disclosure level, and show relevance of it with experiment results (Sec.", "2).", "• We present a topic model that explicitly includes the 
level of self-disclosure in a conversation using linguistic features and the latent semantic topics (Sec.", "3).", "• We collect a large dataset of Twitter conversations over three years and annotate a small subset with self-disclosure level (Sec.", "4).", "• We compare the classification accuracy of SDTM with other models and show that it performs the best (Sec.", "5).", "• We correlate the self-disclosure patterns and conversation behaviors to show that there is significant relationship over time (Sec.", "6).", "Self-Disclosure In this section, we look at social science literature for definition of the levels of self-disclosure.", "Using that definition, we devise an approach to automatically identify the levels of self-disclosure in a large corpus of OSN conversations.", "We discuss three approaches, first, using first-person pronoun features, and second, extracting seed words and phrases from the Twitter conversation corpus, and third, extracting seed words and phrases from an external corpus of anonymously posted secrets, and we demonstrate the efficacy of those approaches with an annotated corpus.", "Self-disclosure (SD) level To analyze self-disclosure, researchers categorize self-disclosure language into three levels: G (general) for no disclosure, M for medium disclosure, and H for high disclosure (Vondracek and Von dracek, 1971; Barak and Gluck-Ofri, 2007 G Level of Self-Disclosure An obvious clue of self-disclosure is the use of first-person pronouns.", "For example, phrases such as 'I live' or 'My name is' indicate that the utterance contains personal information.", "In previous research, the simple method of counting first-person pronouns was used to measure the degree of self-disclosure (Joinson, 2001; Barak and Gluck-Ofri, 2007) .", "Consequently, the absence of a first-person pronoun signals that the utterance belongs in the G level of self-disclosure.", "We verify this pattern with a dataset of Tweets annotated with G, M, and H levels.", "We divide the annotated Tweets into two classes, G and M/H.", "Then we compute mutual information of each unigram, bigram, or trigram feature to see which features are most discriminative.", "As Table 1 shows, 18 out of 30 M Level of Self-Disclosure Utterances with M level include two types: 1) information related with past events and future plans, and 2) general information about self (Barak and Gluck-Ofri, 2007) .", "For the former, we add as seed trigrams 'I have been' and 'I will'.", "For the latter, we use seven types of information generally accepted to be personally identifiable information (McCallister, 2010) , as listed in the left column of Table 2 .", "To find the appropriate trigrams for those, we take Twitter conversation data (described in Section 4.1) and look for trigrams that begin with 'I' and 'my' and occur more than 200 times.", "We then check each one to see whether it is related with any of the seven types listed in the table.", "As a result, we find 57 seed trigrams for M level.", "H Level of Self-Disclosure Utterances with H level express secretive wishes or sensitive information that exposes self or someone close (Barak and Gluck-Ofri, 2007) .", "These are generally kept as secrets.", "With this intuition, we crawled 26,523 posts from Six Billion Secrets 1 site where users post secrets anonymously 2 .", "We call this external dataset SECRET.", "Unlike G and M levels, evidence of H level of self-disclosure tends to be topical, such as physical appearance, mental and physical illnesses, and family problems, so we 
take an approach of fitting a topic model driven by seed words.", "A similar approach has been successful in sentiment classification (Jo and Oh, 2011; Kim et al., 2013) .", "A critical component of this approach is the set of seed words with which to drive the discovery of topics that are most indicative of H level selfdisclosure.", "To extract the seed words that express secretive personal information, we compute mutual information (Manning et al., 2008) with SE-CRET and 24,610 randomly selected tweets.", "We select 1,000 words with high mutual information and filter out stop words.", "Table 3 shows some of these words.", "To extract seed trigrams of secretive wishes, we again look for trigrams that start with 'I' or 'my', occur more than 200 times, and select trigrams of wishful thinking, such as 'I want to', and 'I wish I'.", "In total, there are 88 seed words and 8 seed trigrams for H. Since SECRET is quite different from Twitter, we must show that posts in SECRET are semantically similar to the H level Tweets.", "Rather than directly comparing SECRET posts and Tweets, we use the same method of extracting discriminative word features from the annotated H level Tweets (see Section 4.2).", "Table 3 shows the seed words extracted from SECRET as well as the annotated Tweets.", "Because the annotated dataset consists of only 200 conversations, the coverage of the topics seems narrower than the much larger SECRETS, but both datasets show similarities in the topics.", "This, combined with the results of the model with the two sets of seed words (see Section 5 for the results), shows that SECRETS is an effective and simple-to-obtain substitute for an annotated corpus of H level of self-disclosure.", "This section describes our model, the selfdisclosure topic model (SDTM), for classifying self-disclosure level and discovering topics for each self-disclosure level.", "SD level of tweet ct πc SD level proportion of conversation c θ G c ; θ M c ; θ H c Topic proportion of {G; M; H} in con- versation c φ G ; φ M ; φ H Word distribution of {G; M; H} α; γ Dirichlet prior for θ; π β G , β M ; β H Dirichlet prior for φ G ; φ M ; φ H n cl Model In section 2, we discussed different approaches to identifying each level of self-disclosure, based on social science literature, annotated and unannotated Tweets, and an external corpus of secret posts.", "In this section, we describe our self-disclosure topic model, based on the widely used latent Dirichlet allocation (Blei et al., 2003) , which incorporates those approaches.", "Figure 2 illustrates the graphical model of 1.", "For each level l ∈ {G, M, H}: For each topic k ∈ {1, .", ".", ".", ", K l }: Draw φ l k ∼ Dir(β l ) 2.", "For each conversation c ∈ {1, .", ".", ".", ", C}: (a) Draw θ G c ∼ Dir(α) (b) Draw θ M c ∼ Dir(α) (c) Draw θ H c ∼ Dir(α) (d) Draw π c ∼ Dir(γ) (e) For each message t ∈ {1, .", ".", ".", ", T }: i.", "Observe first-person pronouns features x ct ii.", "Draw ω ct ∼ M axEnt(x ct , λ) iii.", "Draw y ct ∼ Bernoulli(ω ct ) iv.", "If y ct = 0 which is G level: A.", "Draw z ct ∼ M ult(θ G c ) B.", "For each word n ∈ {1, .", ".", ".", ", N }: Draw word w ctn ∼ M ult(φ G zct ) Else which can be M or H level: A.", "Draw r ct ∼ M ult(π c ) B.", "Draw z ct ∼ M ult(θ rct c ) C. 
For each word n ∈ {1, .", ".", ".", ", N }: Draw word w ctn ∼ M ult(φ rct zct ) Figure 3: Generative process of SDTM.", "SDTM and how those approaches are embodied in it.", "The first approach based on the first-person pronouns is implemented by the observed variable x ct and the parameters λ from a maximum entropy classifier for G vs. M/H level.", "The approach of seed words and phrases for levels M and H is implemented by the three separate word-topic probability vectors for the three levels of SD: φ l which has a Bayesian informative prior β l where l ∈ {G, M, H}, the three levels of self-disclosure.", "Table 4 lists the notations used in the model and the generative process, and Figure 3 describes the generative process.", "Classifying G vs M/H levels Classifying the SD level for each tweet is done in two parts, and the first part classifies G vs. M/H levels with first-person pronouns (I, my, me).", "In the graphical model, y is the latent variable that represents this classification, and ω is the distribution over y. x is the observation of the firstperson pronoun in the tweets, and λ are the parameters learned from the maximum entropy classifier.", "With the annotated Twitter conversation dataset (described in Section 4.2), we experimented with several classifiers (Decision tree, Naive Bayes) and chose the maximum entropy classifier because it performed the best, similar to other joint topic models (Zhao et al., 2010; Mukherjee et al., 2013) .", "Classifying M vs H levels The second part of the classification, the M and the H level, is driven by informative priors with seed words and seed trigrams.", "In the graphical model, r is the latent variable that represents this classification, and π is the distribution over r. γ is a non-informative prior for π, and β l is an informative prior for each SD level by seed words.", "For example, we assign a high value for the seed word 'acne' for β H , and a low value for 'My name is'.", "This approach is the same as joint models of topic and sentiment (Jo and Oh, 2011; Kim et al., 2013) .", "Inference For posterior inference of SDTM, we use collapsed Gibbs sampling which integrates out latent random variables ω, π, θ, and φ.", "Then we only need to compute y, r and z for each tweet.", "We compute full conditional distribution p(y ct = j , r ct = l , z ct = k |y −ct , r −ct , z −ct , w, x) for tweet ct as follows: p(y ct = 0, z ct = k |y −ct , r −ct , z −ct , w, x) ∝ exp(λ 0 · x ct ) 1 j=0 exp(λ j · x ct ) g(c, t, l , k ), p(y ct = 1, r ct = l , z ct = k |y −ct , r −ct , z −ct , w, x) ∝ exp(λ 1 · x ct ) 1 j=0 exp(λ j · x ct ) (γ l + n (−ct) cl ) g(c, t, l , k ), where z −ct , r −ct , y −ct are z, r, y without tweet ct, m ctk (·) is the marginalized sum over word v of m ctk v and the function g(c, t, l , k ) as follows: g(c, t, l , k ) = Γ( V v=1 β l v + n l −(ct) k v ) Γ( V v=1 β l v + n l −(ct) k v + m ctk (·) ) α k + n l (−ct) ck K k=1 α k + n l ck V v=1 Γ(β l v + n l −(ct) k v + m ctk v ) Γ(β l v + n l −(ct) k v ) .", "Data Collection and Annotation To test our self-disclosure topic model, we use a large dataset of conversations consisting of Tweets over three years such that we can analyze the relationship between self-disclosure behavior and conversation frequency and length over time.", "We chose to crawl Twitter because it offers a practical and large source of conversations (Ritter et al., 2010) .", "Others have also analyzed Twitter conversations for natural language and social media Conv's Tweets 101,686 61,451 1,956,993 17,178,638 
Table 5 : Dataset of Twitter conversations.", "We chose conversations consisting of five or more tweets each.", "We chose dyads with twenty or more conversations.", "Users Dyads research (boyd et al., 2010; Danescu-Niculescu-Mizil et al., 2011) , but we collect conversations from the same set of dyads over several months for a unique longitudinal dataset.", "We also make sure that each conversation is at least five tweets, and that each dyad has at least twenty conversations.", "Collecting Twitter conversations We define a Twitter conversation as a chain of tweets where two users are consecutively replying to each other's tweets using the Twitter reply button.", "We initialize the set of users by randomly sampling thirteen users who reply to other users in English from the Twitter public streams 3 .", "Then we crawl each user's public tweets, and look at users who are mentioned in those tweets.", "It is a breadth-first search in the network defined by users as nodes and edges as conversations.", "We run this search for dyads until the depth of four, and filter out users who tweet in a non-English language.", "We use an open source tool for detecting English tweets 4 .", "To protect users' privacy, we replace Twitter userid, usernames and url in tweets with random strings.", "This dataset consists of 101,686 users, 61,451 dyads, 1,956,993 conversations and 17,178,638 tweets which were posted between August 2007 to July 2013.", "Table 5 summarizes the dataset.", "Annotating self-disclosure level To measure the accuracy of our model, we randomly sample 301 conversations, each with ten or fewer tweets, and ask three judges, fluent in English and graduate students/researchers, to annotate each tweet with the level of self-disclosure.", "Judges first read and discussed the definitions and examples of self-disclosure level shown in (Barak and Gluck-Ofri, 2007) , then they worked separately on a Web-based platform.", "As a result of annotation, there are 122 G level converstaions, 147 M level and 32 H level con- versations, and inter-rater agreement using Fleiss kappa (Fleiss, 1971 ) is 0.68, which is substantial agreement result (Landis and Koch, 1977) .", "Classification of Self-Disclosure Level This section describes experiments and results of SDTM as well as several other methods for classification of self-disclosure level.", "We first start with the annotated dataset in section 4.2 in which each tweet is annotated with SD level.", "We then aggregate all of the tweets of a conversation, and we compute the proportions of tweets in each SD level.", "When the proportion of tweets at M or H level is equal to or greater than 0.2, we take the level of the larger proportion and assign that level to the conversation.", "When the proportions of tweets at M or H level are both less than 0.2, we assign G to the SD level.", "The reason for setting 0.2 as the threshold is that a conversation containing tweets with H or M level of selfdisclosure usually starts with a greeting or a general comment, and contains one or more questions or comments before or after the self-disclosure tweet.", "We compare SDTM with the following methods for classifying conversations for SD level: • LDA (Blei et al., 2003) : A Bayesian topic model.", "Each conversation is treated as a document.", "Used in previous work (Bak et al., 2012) .", "• MedLDA (Zhu et al., 2012) : A supervised topic model for document classification.", "Each conversation is treated as a document and response variable can be mapped to a SD level.", "• LIWC 
(Tausczik and Pennebaker, 2010): Word counts of particular categories 5 .", "Used in previous work (Houghton and Joinson, 2012).", "• Bag of Words + Bigrams + Trigrams (BOW+): A bag of words, bigram and trigram features.", "We exclude features that appear only once or twice.", "• Seed words and trigrams (SEED): Occurrences of seed words/trigrams from SECRET which are described in section 3.3.", "• SDTM with seed words from annotated Tweets (SDTM−): To compare with SDTM below using seed words from SECRET, this uses seed words from the annotated data described in section 2.4.", "• ASUM (Jo and Oh, 2011 ): A joint model of sentiments and topics.", "We map each SD level to one sentiment and use the same seed words/trigrams from SECRET as in SDTM below.", "Used in previous work (Bak et al., 2012) .", "• First-person pronouns (FirstP): Occurrence of first-person pronouns which are described in section 3.2.", "To identify first-person pronouns, we tagged parts of speech in each tweet with the Twitter POS tagger (Owoputi et al., 2013) .", "• First-person pronouns + Seed words/trigrams (FP+SE1): First-person pronouns and seed words/trigrams from SECRET.", "• Two stage classifier with First-person pronouns + Seed words/trigrams (FP+SE2): A Method Acc G F 1 M F 1 H F Table 6 : SD level classification accuracies and Fmeasures using annotated data.", "Acc is accuracy, and G F 1 is F-measure for classifying the G level.", "Avg F 1 is the macroaveraged value of G F 1 , M F 1 and H F 1 .", "SDTM outperforms all other methods compared.", "The difference between SDTM and FirstP is statistically significant (p-value < 0.05 for accuracy, < 0.0001 for Avg F 1 ).", "two stage classifier with first-person pronouns and seed words/trigrams from SE-CRET.", "In the first stage, the classifier identifies G with first-person pronouns.", "Then in the second stage, the classifier uses seed words and trigrams to identify M and H levels.", "• SDTM: Our model with first-person pronouns and seed words/trigrams from SE-CRET.", "SEED, LIWC, LDA and FirstP cannot be used directly for classification, so we use Maximum entropy model with outputs of each of those models as features 6 .", "BOW+ uses SVM with a radial basis kernel which performs better than all other settings tried including maximum entropy.", "We split the data randomly into 80/20 for train/test.", "We run MedLDA, ASUM and SDTM 20 times each and compute the average accuracies and F-measure for each level.", "We run LDA and MedLDA with various number of topics from 80 to 140, and 120 topics shows best outputs.", "So we set 120 topics for LDA, MedLDA and ASUM, 60; 40; 40 topics for SDTM K G , K M and K H respectively which is best perform from 40; 40; 40 to 60; 60; 60 topics.", "We assume that a conversation has few topics and self-disclosure levels, so we set α = γ = 0.1 (Tang et al., 2014) .", "To incorporate the seed words and trigrams into ASUM and SDTM, we initialize β G , β M and β H differently.", "We assign a high value of 2.0 for each seed word and trigram for that level, and a low value of 10 −6 for each word that is a seed word for another level, and a default value of 0.01 for all other words.", "This approach is the same as previous papers (Jo and Oh, 2011; Kim et al., 2013) .", "As Table 6 shows, SDTM performs better than the other methods for accuracy as well as Fmeasure.", "LDA and MedLDA generally show the lowest performance, which is not surprising given these models are quite general and not tuned specifically for this type of semi-supervised 
"[Table 6: SD level classification accuracies and F-measures using annotated data.", "Acc is accuracy, and G F 1 is the F-measure for classifying the G level.", "Avg F 1 is the macro-averaged value of G F 1 , M F 1 and H F 1 .", "SDTM outperforms all other methods compared.", "The difference between SDTM and FirstP is statistically significant (p-value < 0.05 for accuracy, < 0.0001 for Avg F 1 ).]", "As Table 6 shows, SDTM performs better than the other methods for accuracy as well as F-measure.", "LDA and MedLDA generally show the lowest performance, which is not surprising given that these models are quite general and not tuned specifically for this type of semi-supervised classification task.", "BOW+, which uses simple word features, also does not perform well, showing an especially low F-measure for the H level.", "LIWC and SEED perform better than LDA, but these have quite low F-measures for the G and H levels.", "ASUM shows better performance for classifying the H level than the others, confirming the effectiveness of a topic modeling approach to this difficult task, but not as well as SDTM.", "FirstP shows a good F-measure for the G level, but its H level F-measure is quite low, even lower than SEED.", "Combining first-person pronouns and seed words and trigrams (FP+SE1) performs better than each feature alone, and the two stage classifier (FP+SE2), which takes an approach similar to SDTM, shows better results still.", "Finally, SDTM classifies the G and M levels at an accuracy similar to FirstP, FP+SE1 and FP+SE2, but it significantly improves accuracy for the H level compared to all other methods.", "Relations of Self-Disclosure and Conversation Behaviors In this section, we investigate whether there is a relationship between self-disclosure and conversation behaviors over time.", "Self-disclosure is one way to maintain and improve relationships (Jourard, 1971; Joinson and Paine, 2007) .", "Changes in two people's intimacy over time are thus related to the self-disclosure in their conversations.", "However, it is hard to identify intimacy between users in a large-scale online social network.", "So we choose conversation behaviors such as conversation frequency and length, which can be treated as proxies for measuring intimacy between two people (Emmers-Sommer, 2004; Bak et al., 2012) .", "With SDTM, we can automatically classify the SD level of a large number of conversations, so we investigate whether there is a similar relationship between self-disclosure in conversations and subsequent conversation behaviors with the same partner on Twitter.", "For comparing conversation behaviors over time, we divided the conversations into two sets for each dyad.", "For the initial period, we include conversations from the dyad's first conversation to 20 days later.", "And for the subsequent period, we include conversations during the subsequent 10 days.", "We compute the proportion of conversations at each SD level for each dyad in the initial and subsequent periods.", "More specifically, we ask the following three questions: 1.", "If a dyad shows high conversation frequency at a particular time period, would they display higher SD in their subsequent conversations?", "2.", "If a dyad displays a high SD level in their conversations at a particular time period, would their subsequent conversations be longer?", "3.", "If a dyad displays a high overall SD level, would their conversations increase in length over time more than dyads with a lower overall SD level?", "Experiment Setup We first run SDTM on all of our Twitter conversation data with 150; 120; 120 topics for SDTM's K G , K M and K H respectively.", "The hyper-parameters are the same as in section 5.", "To handle a large dataset, we employ a distributed algorithm (Newman et al., 2009) , and run with 28 threads.", "Table 7 shows some of the topics that were prominent in each SD level by KL-divergence.", "As expected, the G level includes general topics such as food, celebrity, soccer and IT devices, the M level includes personal communication and birthday, and finally, the H level includes sickness and profanity.", "We define a new measurement, the SD level score for a dyad in a period, which is a weighted sum of each conversation with SD levels mapped to 1, 2, and 3, for the levels G, M, and H, respectively.",
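Written out, the score is a one-liner. The sketch below is illustrative: the paper defines the weighted sum over a dyad's conversations but does not say whether it is normalized per conversation, so the mean variant here is an assumption.

```python
LEVEL_WEIGHT = {"G": 1, "M": 2, "H": 3}

def sd_level_score(conversation_levels):
    """SD level score for a dyad in one period: weighted sum of its
    conversations with G, M, H mapped to 1, 2, 3 (here averaged)."""
    total = sum(LEVEL_WEIGHT[lv] for lv in conversation_levels)
    return total / len(conversation_levels)

print(sd_level_score(["G", "G", "M", "H"]))  # -> 1.75
```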
"[Figure 5: Relationship between initial conversation frequency and subsequent SD level (y-axis: subsequent SD level).", "The solid line is the linear regression line, and the coefficient is 0.0020 with p < 0.0001, which shows a significant positive relationship.]", "6.2 Does high frequency of conversation lead to more self-disclosure?", "We investigate whether the initial conversation frequency is correlated with the SD level in the subsequent period.", "We run a linear regression with the initial conversation frequency as the independent variable, and the SD level in the subsequent period as the dependent variable.", "The regression coefficient is 0.0020 with a low p-value (p < 0.0001).", "Figure 5 shows the scatter plot.", "We can see that the slope of the regression line is positive.", "Does high self-disclosure lead to longer conversations?", "Now we investigate the effect of the self-disclosure level on conversation length.", "We run a linear regression with the initial SD level score as the independent variable, and the rate of change in conversation length between the initial period and the subsequent period as the dependent variable.", "Conversation length is measured by the number of tweets in a conversation.", "The result of the regression is that the independent variable's coefficient is 0.048 with a low p-value (p < 0.0001).", "Figure 6 shows the scatter plot with the regression line, and we can see that the slope of the regression line is positive.", "[Table 7: Topics prominent in each SD level, shown by top words. G level topics 101, 184, 176: chocolate, butter, good, cake, peanut, milk, sugar, cream; obama, he's, romney, vote, right, president, people, good; league, win, game, season, team, cup, city, arsenal.", "M level topics 36, 104, 82: send, email, i'll, sent, dm, address, know, check; twitter, follow, tumblr, tweet, following, account, fb, followers; going, party, weekend, day, night, dinner, birthday.", "H level topics 113, 33, 19: ass, bitch, fuck, yo, shit, fucking, lmao; better, sick, feel, throat, cold, hope, pain; lips, kisses, love, smiles, softly, hand, eyes.]", "Now we investigate the conversation length changes over time with three groups, low, medium, and high, by overall SD level.", "Then we investigate changes in conversation length over time.", "Figure 7 shows the results of this investigation.", "[Figure 7: We divide dyads into three groups by SD level score as low, medium, and high.", "Conversation length noticeably increases over time in the medium and high groups, but only slightly in the low group.]", "First, conversations are generally lengthier when the SD level is high.", "This phenomenon is also observed in figure 6 , but here we can see it as a long-term persistent pattern.", "Second, conversation length increases consistently and significantly for the high and medium groups, but for the low SD group, there is not a significant increase of conversation length over time.",
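Both fits above are ordinary one-predictor regressions, so they are easy to reproduce once a per-dyad table exists. A sketch with scipy; the arrays are made-up stand-ins for illustration, not the paper's data:

```python
from scipy import stats

# one entry per dyad (illustrative numbers only)
initial_freq = [12, 3, 25, 8, 17]            # conversations, first 20 days
subseq_sd    = [1.4, 1.1, 2.2, 1.3, 1.9]     # SD level score, next 10 days

# Section 6.2: initial conversation frequency -> subsequent SD level
res = stats.linregress(initial_freq, subseq_sd)
print(res.slope, res.pvalue)                 # paper reports 0.0020, p < 0.0001

# Section 6.3: initial SD level score -> change in conversation length
initial_sd    = [1.2, 2.1, 1.0, 1.8, 2.5]
length_change = [0.02, 0.09, -0.01, 0.06, 0.12]
res2 = stats.linregress(initial_sd, length_change)
print(res2.slope, res2.pvalue)               # paper reports 0.048, p < 0.0001
```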
"Related Work Prior work on quantitatively analyzing self-disclosure has relied on user surveys (Ledbetter et al., 2011; Trepte and Reinecke, 2013) or human annotation (Barak and Gluck-Ofri, 2007; Courtney Walton and Rice, 2013) .", "These methods consume much time and effort, so they are not suitable for large-scale studies.", "In prior work closest to ours, Bak et al.", "(2012) showed that a topic model can be used to identify self-disclosure, but that work applies a two-step process in which a basic topic model is first applied to find the topics, and then the topics are post-processed for binary classification of self-disclosure.", "We improve upon this work by applying a single unified model of topics and self-disclosure for high accuracy in classifying the three levels of self-disclosure.", "Subjectivity, the expression of opinions (Pang and Lee, 2008; Wiebe et al., 2004) , is related to self-disclosure, but they are different dimensions of linguistic behavior.", "There are indeed many high self-disclosure tweets that are subjective, but there are also counterexamples in the annotated dataset.", "The tweet \"England manager is Roy Hodgson.\"", "is low self-disclosure and low subjectivity, \"I have barely any hair left.\"", "is high self-disclosure but low subjectivity, and \"Senator stop lying!\"", "is low self-disclosure but high subjectivity.", "Conclusion and Future Work In this paper, we have presented the self-disclosure topic model (SDTM) for discovering topics and classifying SD levels from Twitter conversation data.", "We devised a set of effective seed words and trigrams, mined from a dataset of secrets.", "We also annotated Twitter conversations to make a ground-truth dataset for SD level.", "With the annotated data, we showed that SDTM outperforms previous methods in classification accuracy and F-measure.", "We publish the source code of SDTM and the dataset, including the annotated Twitter conversations and SECRET, publicly 7 (footnote 7: http://uilab.kaist.ac.kr/research/EMNLP2014).", "We also analyzed the relationship between SD level and conversation behaviors over time.", "We found that there is a positive correlation between initial SD level and subsequent conversation length.", "Also, dyads show a higher level of SD if they initially display high conversation frequency.", "Finally, dyads with an overall medium or high SD level will have longer conversations over time.", "These results support previous results in social psychology research with more robust results from a large-scale dataset, and show the effectiveness of computationally analyzing SD behavior.", "There are several future directions for this research.", "First, we can improve our modeling for higher accuracy and better interpretability.", "For instance, SDTM only considers first-person pronouns and topics.", "Naturally, there are other linguistic patterns that can be identified by humans but not captured by pronouns and topics.", "Second, the number of topics for each level is varied, so we can explore nonparametric topic models (Teh et al., 2006) which infer the number of topics from the data.", "Third, we can look at the relationship between self-disclosure behavior and general online social network usage beyond conversations.", "We will explore these directions in our future work." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "2.4", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "5", "6", "6.1", "6.3", "7", "8" ], "paper_header_content": [ "Introduction", "Self-Disclosure", "Self-disclosure (SD) level", "G Level of Self-Disclosure", "M Level of Self-Disclosure", "H Level of Self-Disclosure", "Model", "Classifying G vs M/H levels", "Classifying M vs H levels", "Inference", "Data Collection and Annotation", "Collecting Twitter conversations", "Annotating self-disclosure level", "Classification of Self-Disclosure Level", "Relations of Self-Disclosure and Conversation Behaviors", "Experiment Setup", "Does high self-disclosure lead to longer conversations?", "Related Work", "Conclusion and Future Work" ] }
GEM-SciDuet-train-75#paper-1188#slide-23
Results
High ranked topics in each level (G, M, H levels) Shown by high probability words in each topic: obama, he's, romney, vote, right, president (G) | league, win, game, season, team, cup (G) | send, email, i'll, sent, dm, address (M) | going, party, weekend, day, night, dinner (M) | better, sick, feel, throat, cold, hope (H) | ass, bitch, fuck, yo, shit, fucking (H) Q1) Does high self-disclosure lead to longer conversations? Ans) Positive relation between initial SD level and changes in CL Q2) Is there a difference in CL patterns over time by overall SD level? Ans) high and mid groups increase CL over time, but not the low group; high groups talk more in a conversation than mid & low groups
High ranked topics in each level (G, M, H levels) Shown by high probability words in each topic: obama, he's, romney, vote, right, president (G) | league, win, game, season, team, cup (G) | send, email, i'll, sent, dm, address (M) | going, party, weekend, day, night, dinner (M) | better, sick, feel, throat, cold, hope (H) | ass, bitch, fuck, yo, shit, fucking (H) Q1) Does high self-disclosure lead to longer conversations? Ans) Positive relation between initial SD level and changes in CL Q2) Is there a difference in CL patterns over time by overall SD level? Ans) high and mid groups increase CL over time, but not the low group; high groups talk more in a conversation than mid & low groups
[]
GEM-SciDuet-train-75#paper-1188#slide-24
1188
Self-disclosure topic model for classifying and analyzing Twitter conversations
Self-disclosure, the act of revealing oneself to others, is an important social behavior that strengthens interpersonal relationships and increases social support. Although there are many social science studies of self-disclosure, they are based on manual coding of small datasets and questionnaires. We conduct a computational analysis of self-disclosure with a large dataset of naturally-occurring conversations, a semi-supervised machine learning algorithm, and a computational analysis of the effects of self-disclosure on subsequent conversations. We use a longitudinal dataset of 17 million tweets, all of which occurred in conversations that consist of five or more tweets directly replying to the previous tweet, and from dyads with twenty or more conversations each. We develop the self-disclosure topic model (SDTM), a variant of latent Dirichlet allocation (LDA), for automatically classifying the level of self-disclosure for each tweet. We take the results of SDTM and analyze the effects of self-disclosure on subsequent conversations. Our model significantly outperforms several comparable methods on classifying the level of self-disclosure, and the analysis of the longitudinal data using SDTM uncovers a significant and positive correlation between self-disclosure and conversation frequency and length.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238 ], "paper_content_text": [ "Introduction Self-disclosure is an important and pervasive social behavior.", "People disclose personal information about themselves to improve and maintain * This work was done when JinYeong Bak was a visiting student at Microsoft Research, Beijing, China.", "relationships (Jourard, 1971; Joinson and Paine, 2007) .", "A common instance of self-disclosure is the start of a conversation with an exchange of names and additional self-introductions.", "Another example of self-disclosure, shown in Figure 1c , where the information disclosed about a family member's serious illness, is much more personal than the exchange of names.", "In this paper, we seek to understand this important social behavior using a large-scale Twitter conversation data, automatically classifying the level of self-disclosure using machine learning and correlating the patterns with conversational behaviors which can serve as proxies for measuring intimacy between two conversational partners.", "Twitter conversation data, explained in more detail in section 4.1, enable an extremely large scale study of naturally-occurring self-disclosure behavior, compared to traditional social science studies.", "One challenge of such large scale study, though, remains in the lack of labeled groundtruth data of self-disclosure level.", "That is, naturally-occurring Twitter conversations do not come tagged with the level of self-disclosure in each conversation.", "To overcome that challenge, we propose a semi-supervised machine learning approach using probabilistic topic modeling.", "Our self-disclosure topic model (SDTM) assumes that self-disclosure behavior can be modeled using a combination of simple linguistic features (e.g., pronouns) with automatically discovered semantic themes (i.e., topics).", "For instance, an utterance \"I am finally through with this disastrous relationship\" uses a first-person pronoun and contains a topic about personal relationships.", "In comparison with various other models, SDTM shows the highest accuracy, and the resulting conversation frequency and length patterns on self-disclosure are shown different over time.", "Our contributions to the research community include the following: • We present key features and prior knowledge for identifying self-disclosure level, and show relevance of it with experiment results (Sec.", "2).", "• We present a topic model that explicitly includes the 
level of self-disclosure in a conversation using linguistic features and the latent semantic topics (Sec.", "3).", "• We collect a large dataset of Twitter conversations over three years and annotate a small subset with self-disclosure level (Sec.", "4).", "• We compare the classification accuracy of SDTM with other models and show that it performs the best (Sec.", "5).", "• We correlate the self-disclosure patterns and conversation behaviors to show that there is significant relationship over time (Sec.", "6).", "Self-Disclosure In this section, we look at social science literature for definition of the levels of self-disclosure.", "Using that definition, we devise an approach to automatically identify the levels of self-disclosure in a large corpus of OSN conversations.", "We discuss three approaches, first, using first-person pronoun features, and second, extracting seed words and phrases from the Twitter conversation corpus, and third, extracting seed words and phrases from an external corpus of anonymously posted secrets, and we demonstrate the efficacy of those approaches with an annotated corpus.", "Self-disclosure (SD) level To analyze self-disclosure, researchers categorize self-disclosure language into three levels: G (general) for no disclosure, M for medium disclosure, and H for high disclosure (Vondracek and Von dracek, 1971; Barak and Gluck-Ofri, 2007 G Level of Self-Disclosure An obvious clue of self-disclosure is the use of first-person pronouns.", "For example, phrases such as 'I live' or 'My name is' indicate that the utterance contains personal information.", "In previous research, the simple method of counting first-person pronouns was used to measure the degree of self-disclosure (Joinson, 2001; Barak and Gluck-Ofri, 2007) .", "Consequently, the absence of a first-person pronoun signals that the utterance belongs in the G level of self-disclosure.", "We verify this pattern with a dataset of Tweets annotated with G, M, and H levels.", "We divide the annotated Tweets into two classes, G and M/H.", "Then we compute mutual information of each unigram, bigram, or trigram feature to see which features are most discriminative.", "As Table 1 shows, 18 out of 30 M Level of Self-Disclosure Utterances with M level include two types: 1) information related with past events and future plans, and 2) general information about self (Barak and Gluck-Ofri, 2007) .", "For the former, we add as seed trigrams 'I have been' and 'I will'.", "For the latter, we use seven types of information generally accepted to be personally identifiable information (McCallister, 2010) , as listed in the left column of Table 2 .", "To find the appropriate trigrams for those, we take Twitter conversation data (described in Section 4.1) and look for trigrams that begin with 'I' and 'my' and occur more than 200 times.", "We then check each one to see whether it is related with any of the seven types listed in the table.", "As a result, we find 57 seed trigrams for M level.", "H Level of Self-Disclosure Utterances with H level express secretive wishes or sensitive information that exposes self or someone close (Barak and Gluck-Ofri, 2007) .", "These are generally kept as secrets.", "With this intuition, we crawled 26,523 posts from Six Billion Secrets 1 site where users post secrets anonymously 2 .", "We call this external dataset SECRET.", "Unlike G and M levels, evidence of H level of self-disclosure tends to be topical, such as physical appearance, mental and physical illnesses, and family problems, so we 
take an approach of fitting a topic model driven by seed words.", "A similar approach has been successful in sentiment classification (Jo and Oh, 2011; Kim et al., 2013) .", "A critical component of this approach is the set of seed words with which to drive the discovery of topics that are most indicative of H level self-disclosure.", "To extract the seed words that express secretive personal information, we compute mutual information (Manning et al., 2008) with SECRET and 24,610 randomly selected tweets.", "We select 1,000 words with high mutual information and filter out stop words.", "Table 3 shows some of these words.", "To extract seed trigrams of secretive wishes, we again look for trigrams that start with 'I' or 'my', occur more than 200 times, and select trigrams of wishful thinking, such as 'I want to' and 'I wish I'.", "In total, there are 88 seed words and 8 seed trigrams for H. Since SECRET is quite different from Twitter, we must show that posts in SECRET are semantically similar to the H level Tweets.", "Rather than directly comparing SECRET posts and Tweets, we use the same method of extracting discriminative word features from the annotated H level Tweets (see Section 4.2).", "Table 3 shows the seed words extracted from SECRET as well as the annotated Tweets.", "Because the annotated dataset consists of only 200 conversations, the coverage of the topics seems narrower than the much larger SECRET, but both datasets show similarities in the topics.", "This, combined with the results of the model with the two sets of seed words (see Section 5 for the results), shows that SECRET is an effective and simple-to-obtain substitute for an annotated corpus of the H level of self-disclosure.", "This section describes our model, the self-disclosure topic model (SDTM), for classifying self-disclosure level and discovering topics for each self-disclosure level.", "[Table 4 (notation): SD level of tweet ct; $\pi_c$: SD level proportion of conversation c; $\theta^G_c, \theta^M_c, \theta^H_c$: topic proportions of {G, M, H} in conversation c; $\phi^G, \phi^M, \phi^H$: word distributions of {G, M, H}; $\alpha, \gamma$: Dirichlet priors for $\theta, \pi$; $\beta^G, \beta^M, \beta^H$: Dirichlet priors for $\phi^G, \phi^M, \phi^H$; $n_{cl}$: number of tweets in conversation c assigned level l.]", "Model In section 2, we discussed different approaches to identifying each level of self-disclosure, based on social science literature, annotated and unannotated Tweets, and an external corpus of secret posts.", "In this section, we describe our self-disclosure topic model, based on the widely used latent Dirichlet allocation (Blei et al., 2003) , which incorporates those approaches.", "Figure 2 illustrates the graphical model of SDTM and how those approaches are embodied in it.", "[Figure 3 (generative process of SDTM): 1. For each level $l \in \{G, M, H\}$: for each topic $k \in \{1, \ldots, K_l\}$: draw $\phi^l_k \sim \mathrm{Dir}(\beta^l)$. 2. For each conversation $c \in \{1, \ldots, C\}$: (a) draw $\theta^G_c \sim \mathrm{Dir}(\alpha)$; (b) draw $\theta^M_c \sim \mathrm{Dir}(\alpha)$; (c) draw $\theta^H_c \sim \mathrm{Dir}(\alpha)$; (d) draw $\pi_c \sim \mathrm{Dir}(\gamma)$; (e) for each message $t \in \{1, \ldots, T\}$: i. observe first-person pronoun features $x_{ct}$; ii. draw $\omega_{ct} \sim \mathrm{MaxEnt}(x_{ct}, \lambda)$; iii. draw $y_{ct} \sim \mathrm{Bernoulli}(\omega_{ct})$; iv. if $y_{ct} = 0$ (G level): A. draw $z_{ct} \sim \mathrm{Mult}(\theta^G_c)$; B. for each word $n \in \{1, \ldots, N\}$: draw $w_{ctn} \sim \mathrm{Mult}(\phi^G_{z_{ct}})$; else (M or H level): A. draw $r_{ct} \sim \mathrm{Mult}(\pi_c)$; B. draw $z_{ct} \sim \mathrm{Mult}(\theta^{r_{ct}}_c)$; C. for each word $n \in \{1, \ldots, N\}$: draw $w_{ctn} \sim \mathrm{Mult}(\phi^{r_{ct}}_{z_{ct}})$.]", "The first approach, based on the first-person pronouns, is implemented by the observed variable $x_{ct}$ and the parameters $\lambda$ from a maximum entropy classifier for the G vs. M/H levels.", "The approach of seed words and phrases for levels M and H is implemented by the three separate word-topic probability vectors $\phi^l$, each with a Bayesian informative prior $\beta^l$, where $l \in \{G, M, H\}$ are the three levels of self-disclosure.", "Table 4 lists the notations used in the model and the generative process, and Figure 3 describes the generative process.", "Classifying G vs M/H levels Classifying the SD level for each tweet is done in two parts, and the first part classifies the G vs. M/H levels with first-person pronouns (I, my, me).", "In the graphical model, y is the latent variable that represents this classification, and ω is the distribution over y. x is the observation of the first-person pronouns in the tweets, and λ are the parameters learned from the maximum entropy classifier.", "With the annotated Twitter conversation dataset (described in Section 4.2), we experimented with several classifiers (Decision tree, Naive Bayes) and chose the maximum entropy classifier because it performed the best, similar to other joint topic models (Zhao et al., 2010; Mukherjee et al., 2013) .", "Classifying M vs H levels The second part of the classification, the M and the H level, is driven by informative priors with seed words and seed trigrams.", "In the graphical model, r is the latent variable that represents this classification, and π is the distribution over r. γ is a non-informative prior for π, and β l is an informative prior for each SD level by seed words.", "For example, we assign a high value for the seed word 'acne' for β H , and a low value for 'My name is'.", "This approach is the same as in joint models of topic and sentiment (Jo and Oh, 2011; Kim et al., 2013) .", "Inference For posterior inference of SDTM, we use collapsed Gibbs sampling, which integrates out the latent random variables ω, π, θ, and φ.", "Then we only need to compute y, r and z for each tweet.", "We compute the full conditional distribution $p(y_{ct}=j', r_{ct}=l', z_{ct}=k' \mid y^{-ct}, r^{-ct}, z^{-ct}, w, x)$ for tweet $ct$ as follows: $$p(y_{ct}=0, z_{ct}=k' \mid y^{-ct}, r^{-ct}, z^{-ct}, w, x) \propto \frac{\exp(\lambda_0 \cdot x_{ct})}{\sum_{j=0}^{1} \exp(\lambda_j \cdot x_{ct})}\, g(c, t, l', k'),$$ $$p(y_{ct}=1, r_{ct}=l', z_{ct}=k' \mid y^{-ct}, r^{-ct}, z^{-ct}, w, x) \propto \frac{\exp(\lambda_1 \cdot x_{ct})}{\sum_{j=0}^{1} \exp(\lambda_j \cdot x_{ct})}\, \big(\gamma_{l'} + n^{(-ct)}_{cl'}\big)\, g(c, t, l', k'),$$ where $z^{-ct}$, $r^{-ct}$, $y^{-ct}$ are $z$, $r$, $y$ without tweet $ct$, $m_{ctk'(\cdot)}$ is the marginalized sum over words $v$ of $m_{ctk'v}$, and $$g(c, t, l', k') = \frac{\Gamma\big(\sum_{v=1}^{V} \beta^{l'}_v + n^{l',-(ct)}_{k'v}\big)}{\Gamma\big(\sum_{v=1}^{V} \beta^{l'}_v + n^{l',-(ct)}_{k'v} + m_{ctk'(\cdot)}\big)} \cdot \frac{\alpha_{k'} + n^{l',(-ct)}_{ck'}}{\sum_{k=1}^{K} \big(\alpha_k + n^{l'}_{ck}\big)} \cdot \prod_{v=1}^{V} \frac{\Gamma\big(\beta^{l'}_v + n^{l',-(ct)}_{k'v} + m_{ctk'v}\big)}{\Gamma\big(\beta^{l'}_v + n^{l',-(ct)}_{k'v}\big)}.$$",
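As a concrete reading of the update above, here is a minimal collapsed-Gibbs sketch for a single tweet. It is a sketch under stated assumptions, not the paper's released code: it assumes symmetric scalar α and γ, scores in log space with scipy's gammaln, and omits the count decrement/increment bookkeeping around the draw; all names and data layouts are illustrative.

```python
import numpy as np
from scipy.special import gammaln

def log_g(k, tweet_wc, n_kv, n_ck, alpha, beta):
    """log g(c,t,l,k): score for assigning the whole tweet (word counts
    tweet_wc: {word_id: count}) to topic k of one level l, where n_kv and
    n_ck are that level's topic-word and conversation-topic counts with
    the tweet already removed."""
    m_tot = sum(tweet_wc.values())
    s = gammaln(beta.sum() + n_kv[k].sum()) \
        - gammaln(beta.sum() + n_kv[k].sum() + m_tot)
    s += np.log(alpha + n_ck[k]) - np.log((alpha + n_ck).sum())
    for v, m in tweet_wc.items():
        s += gammaln(beta[v] + n_kv[k, v] + m) - gammaln(beta[v] + n_kv[k, v])
    return s

def sample_assignment(tweet_wc, x_ct, lam, alpha, gamma,
                      n_kv, n_ck, n_cl, beta, rng):
    """One draw of (y, r, z) for a tweet. n_kv[l], n_ck[l] index the
    per-level count arrays; n_cl = [n_cM, n_cH] for this conversation."""
    gate = lam @ x_ct                          # MaxEnt logits for y = 0, 1
    log_gate = gate - np.logaddexp(gate[0], gate[1])
    options, logps = [], []
    for k in range(len(n_ck["G"])):            # y = 0: G-level topics
        options.append((0, "G", k))
        logps.append(log_gate[0]
                     + log_g(k, tweet_wc, n_kv["G"], n_ck["G"],
                             alpha, beta["G"]))
    for i, l in enumerate(("M", "H")):         # y = 1: M- or H-level topics
        for k in range(len(n_ck[l])):
            options.append((1, l, k))
            logps.append(log_gate[1] + np.log(gamma + n_cl[i])
                         + log_g(k, tweet_wc, n_kv[l], n_ck[l],
                                 alpha, beta[l]))
    p = np.exp(np.array(logps) - max(logps))   # normalize in log space
    return options[rng.choice(len(options), p=p / p.sum())]
```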
"Data Collection and Annotation To test our self-disclosure topic model, we use a large dataset of conversations consisting of Tweets over three years, such that we can analyze the relationship between self-disclosure behavior and conversation frequency and length over time.", "We chose to crawl Twitter because it offers a practical and large source of conversations (Ritter et al., 2010) .", "Others have also analyzed Twitter conversations for natural language and social media [Table 5 counts: Users 101,686; Dyads 61,451; Conversations 1,956,993; Tweets 17,178,638] 
Table 5 : Dataset of Twitter conversations.", "We chose conversations consisting of five or more tweets each.", "We chose dyads with twenty or more conversations.", "Users Dyads research (boyd et al., 2010; Danescu-Niculescu-Mizil et al., 2011) , but we collect conversations from the same set of dyads over several months for a unique longitudinal dataset.", "We also make sure that each conversation is at least five tweets, and that each dyad has at least twenty conversations.", "Collecting Twitter conversations We define a Twitter conversation as a chain of tweets where two users are consecutively replying to each other's tweets using the Twitter reply button.", "We initialize the set of users by randomly sampling thirteen users who reply to other users in English from the Twitter public streams 3 .", "Then we crawl each user's public tweets, and look at users who are mentioned in those tweets.", "It is a breadth-first search in the network defined by users as nodes and edges as conversations.", "We run this search for dyads until the depth of four, and filter out users who tweet in a non-English language.", "We use an open source tool for detecting English tweets 4 .", "To protect users' privacy, we replace Twitter userid, usernames and url in tweets with random strings.", "This dataset consists of 101,686 users, 61,451 dyads, 1,956,993 conversations and 17,178,638 tweets which were posted between August 2007 to July 2013.", "Table 5 summarizes the dataset.", "Annotating self-disclosure level To measure the accuracy of our model, we randomly sample 301 conversations, each with ten or fewer tweets, and ask three judges, fluent in English and graduate students/researchers, to annotate each tweet with the level of self-disclosure.", "Judges first read and discussed the definitions and examples of self-disclosure level shown in (Barak and Gluck-Ofri, 2007) , then they worked separately on a Web-based platform.", "As a result of annotation, there are 122 G level converstaions, 147 M level and 32 H level con- versations, and inter-rater agreement using Fleiss kappa (Fleiss, 1971 ) is 0.68, which is substantial agreement result (Landis and Koch, 1977) .", "Classification of Self-Disclosure Level This section describes experiments and results of SDTM as well as several other methods for classification of self-disclosure level.", "We first start with the annotated dataset in section 4.2 in which each tweet is annotated with SD level.", "We then aggregate all of the tweets of a conversation, and we compute the proportions of tweets in each SD level.", "When the proportion of tweets at M or H level is equal to or greater than 0.2, we take the level of the larger proportion and assign that level to the conversation.", "When the proportions of tweets at M or H level are both less than 0.2, we assign G to the SD level.", "The reason for setting 0.2 as the threshold is that a conversation containing tweets with H or M level of selfdisclosure usually starts with a greeting or a general comment, and contains one or more questions or comments before or after the self-disclosure tweet.", "We compare SDTM with the following methods for classifying conversations for SD level: • LDA (Blei et al., 2003) : A Bayesian topic model.", "Each conversation is treated as a document.", "Used in previous work (Bak et al., 2012) .", "• MedLDA (Zhu et al., 2012) : A supervised topic model for document classification.", "Each conversation is treated as a document and response variable can be mapped to a SD level.", "• LIWC 
(Tausczik and Pennebaker, 2010): Word counts of particular categories 5 .", "Used in previous work (Houghton and Joinson, 2012).", "• Bag of Words + Bigrams + Trigrams (BOW+): A bag of words, bigram and trigram features.", "We exclude features that appear only once or twice.", "• Seed words and trigrams (SEED): Occurrences of seed words/trigrams from SECRET which are described in section 3.3.", "• SDTM with seed words from annotated Tweets (SDTM−): To compare with SDTM below using seed words from SECRET, this uses seed words from the annotated data described in section 2.4.", "• ASUM (Jo and Oh, 2011 ): A joint model of sentiments and topics.", "We map each SD level to one sentiment and use the same seed words/trigrams from SECRET as in SDTM below.", "Used in previous work (Bak et al., 2012) .", "• First-person pronouns (FirstP): Occurrence of first-person pronouns which are described in section 3.2.", "To identify first-person pronouns, we tagged parts of speech in each tweet with the Twitter POS tagger (Owoputi et al., 2013) .", "• First-person pronouns + Seed words/trigrams (FP+SE1): First-person pronouns and seed words/trigrams from SECRET.", "• Two stage classifier with First-person pronouns + Seed words/trigrams (FP+SE2): A Method Acc G F 1 M F 1 H F Table 6 : SD level classification accuracies and Fmeasures using annotated data.", "Acc is accuracy, and G F 1 is F-measure for classifying the G level.", "Avg F 1 is the macroaveraged value of G F 1 , M F 1 and H F 1 .", "SDTM outperforms all other methods compared.", "The difference between SDTM and FirstP is statistically significant (p-value < 0.05 for accuracy, < 0.0001 for Avg F 1 ).", "two stage classifier with first-person pronouns and seed words/trigrams from SE-CRET.", "In the first stage, the classifier identifies G with first-person pronouns.", "Then in the second stage, the classifier uses seed words and trigrams to identify M and H levels.", "• SDTM: Our model with first-person pronouns and seed words/trigrams from SE-CRET.", "SEED, LIWC, LDA and FirstP cannot be used directly for classification, so we use Maximum entropy model with outputs of each of those models as features 6 .", "BOW+ uses SVM with a radial basis kernel which performs better than all other settings tried including maximum entropy.", "We split the data randomly into 80/20 for train/test.", "We run MedLDA, ASUM and SDTM 20 times each and compute the average accuracies and F-measure for each level.", "We run LDA and MedLDA with various number of topics from 80 to 140, and 120 topics shows best outputs.", "So we set 120 topics for LDA, MedLDA and ASUM, 60; 40; 40 topics for SDTM K G , K M and K H respectively which is best perform from 40; 40; 40 to 60; 60; 60 topics.", "We assume that a conversation has few topics and self-disclosure levels, so we set α = γ = 0.1 (Tang et al., 2014) .", "To incorporate the seed words and trigrams into ASUM and SDTM, we initialize β G , β M and β H differently.", "We assign a high value of 2.0 for each seed word and trigram for that level, and a low value of 10 −6 for each word that is a seed word for another level, and a default value of 0.01 for all other words.", "This approach is the same as previous papers (Jo and Oh, 2011; Kim et al., 2013) .", "As Table 6 shows, SDTM performs better than the other methods for accuracy as well as Fmeasure.", "LDA and MedLDA generally show the lowest performance, which is not surprising given these models are quite general and not tuned specifically for this type of semi-supervised 
classification task.", "BOW which is simple word features also does not perform well, showing especially low F-measure for the H level.", "LIWC and SEED perform better than LDA, but these have quite low F-measure for G and H levels.", "ASUM shows better performance for classifying H level than others, confirming the effectiveness of a topic modeling approach to this difficult task, but not as well as SDTM.", "FirstP shows good F-measure for the G level, but the H level F-measure is quite low, even lower than SEED.", "Combining first-person pronouns and seed words and trigrams (FP+SE1) shows better than each feature alone, and the two stage classifier (FP+SE2) which is a similar approach taken in SDTM shows better results.", "Finally, SDTM classifies G and M level at a similar accuracy with FirstP, FP+SE1 and FP+SE2, but it significantly improves accuracy for the H level compared to all other methods.", "Relations of Self-Disclosure and Conversation Behaviors In this section, we investigate whether there is a relationship between self-disclosure and conversation behaviors over time.", "Self-disclosure is one way to maintain and improve relationships (Jourard, 1971; Joinson and Paine, 2007) .", "So two people's intimacy changes over time has relationship with self-disclosure in their conversation.", "However, it is hard to identify intimacy between users in large scale online social network.", "So we choose conversation behaviors such as conversation frequency and length which can be treated as proxies for measuring intimacy between two people (Emmers- Sommer, 2004; Bak et al., 2012) .", "With SDTM, we can automatically classify the SD level of a large number of conversations, so we investigate whether there is a similar relationship between self-disclosure in conversations and subsequent conversation behaviors with the same partner on Twitter.", "For comparing conversation behaviors over time, we divided the conversations into two sets for each dyad.", "For the initial period, we include conversations from the dyad's first conversation to 20 days later.", "And for the subsequent period, we include conversations during the subsequent 10 days.", "We compute proportions of conversation for each SD level for each dyad in the initial and subsequent periods.", "More specifically, we ask the following three questions: 1.", "If a dyad shows high conversation frequency at a particular time period, would they display higher SD in their subsequent conversations?", "2.", "If a dyad displays high SD level in their conversations at a particular time period, would their subsequent conversations be longer?", "3.", "If a dyad displays high overall SD level, would their conversations increase in length over time more than dyads with lower overall SD level?", "Experiment Setup We first run SDTM with all of our Twitter conversation data with 150; 120; 120 topics for SDTM K G , K M and K H respectively.", "The hyper-parameters are the same as in section 5.", "To handle a large dataset, we employ a distributed algorithm (Newman et al., 2009) , and run with 28 threads.", "Table 7 shows some of the topics that were prominent in each SD level by KL-divergence.", "As expected, G level includes general topics such as food, celebrity, soccer and IT devices, M level includes personal communication and birthday, and finally, H level includes sickness and profanity.", "We define a new measurement, SD level score for a dyad in the period, which is a weighted sum of each conversation with SD levels mapped to 1, 2, and 3, 
for the levels G, M, and H, respectively.", "[Figure 5: Relationship between initial conversation frequency and subsequent SD level (y-axis: subsequent SD level).", "The solid line is the linear regression line, and the coefficient is 0.0020 with p < 0.0001, which shows a significant positive relationship.]", "6.2 Does high frequency of conversation lead to more self-disclosure?", "We investigate whether the initial conversation frequency is correlated with the SD level in the subsequent period.", "We run a linear regression with the initial conversation frequency as the independent variable, and the SD level in the subsequent period as the dependent variable.", "The regression coefficient is 0.0020 with a low p-value (p < 0.0001).", "Figure 5 shows the scatter plot.", "We can see that the slope of the regression line is positive.", "Does high self-disclosure lead to longer conversations?", "Now we investigate the effect of the self-disclosure level on conversation length.", "We run a linear regression with the initial SD level score as the independent variable, and the rate of change in conversation length between the initial period and the subsequent period as the dependent variable.", "Conversation length is measured by the number of tweets in a conversation.", "The result of the regression is that the independent variable's coefficient is 0.048 with a low p-value (p < 0.0001).", "Figure 6 shows the scatter plot with the regression line, and we can see that the slope of the regression line is positive.", "[Table 7: Topics prominent in each SD level, shown by top words. G level topics 101, 184, 176: chocolate, butter, good, cake, peanut, milk, sugar, cream; obama, he's, romney, vote, right, president, people, good; league, win, game, season, team, cup, city, arsenal.", "M level topics 36, 104, 82: send, email, i'll, sent, dm, address, know, check; twitter, follow, tumblr, tweet, following, account, fb, followers; going, party, weekend, day, night, dinner, birthday.", "H level topics 113, 33, 19: ass, bitch, fuck, yo, shit, fucking, lmao; better, sick, feel, throat, cold, hope, pain; lips, kisses, love, smiles, softly, hand, eyes.]", "Now we investigate the conversation length changes over time with three groups, low, medium, and high, by overall SD level.", "Then we investigate changes in conversation length over time.", "Figure 7 shows the results of this investigation.", "[Figure 7: We divide dyads into three groups by SD level score as low, medium, and high.", "Conversation length noticeably increases over time in the medium and high groups, but only slightly in the low group.]", "First, conversations are generally lengthier when the SD level is high.", "This phenomenon is also observed in figure 6 , but here we can see it as a long-term persistent pattern.", "Second, conversation length increases consistently and significantly for the high and medium groups, but for the low SD group, there is not a significant increase of conversation length over time.",
"Related Work Prior work on quantitatively analyzing self-disclosure has relied on user surveys (Ledbetter et al., 2011; Trepte and Reinecke, 2013) or human annotation (Barak and Gluck-Ofri, 2007; Courtney Walton and Rice, 2013) .", "These methods consume much time and effort, so they are not suitable for large-scale studies.", "In prior work closest to ours, Bak et al.", "(2012) showed that a topic model can be used to identify self-disclosure, but that work applies a two-step process in which a basic topic model is first applied to find the topics, and then the topics are post-processed for binary classification of self-disclosure.", "We improve upon this work by applying a single unified model of topics and self-disclosure for high accuracy in classifying the three levels of self-disclosure.", "Subjectivity, the expression of opinions (Pang and Lee, 2008; Wiebe et al., 2004) , is related to self-disclosure, but they are different dimensions of linguistic behavior.", "There are indeed many high self-disclosure tweets that are subjective, but there are also counterexamples in the annotated dataset.", "The tweet \"England manager is Roy Hodgson.\"", "is low self-disclosure and low subjectivity, \"I have barely any hair left.\"", "is high self-disclosure but low subjectivity, and \"Senator stop lying!\"", "is low self-disclosure but high subjectivity.", "Conclusion and Future Work In this paper, we have presented the self-disclosure topic model (SDTM) for discovering topics and classifying SD levels from Twitter conversation data.", "We devised a set of effective seed words and trigrams, mined from a dataset of secrets.", "We also annotated Twitter conversations to make a ground-truth dataset for SD level.", "With the annotated data, we showed that SDTM outperforms previous methods in classification accuracy and F-measure.", "We publish the source code of SDTM and the dataset, including the annotated Twitter conversations and SECRET, publicly 7 (footnote 7: http://uilab.kaist.ac.kr/research/EMNLP2014).", "We also analyzed the relationship between SD level and conversation behaviors over time.", "We found that there is a positive correlation between initial SD level and subsequent conversation length.", "Also, dyads show a higher level of SD if they initially display high conversation frequency.", "Finally, dyads with an overall medium or high SD level will have longer conversations over time.", "These results support previous results in social psychology research with more robust results from a large-scale dataset, and show the effectiveness of computationally analyzing SD behavior.", "There are several future directions for this research.", "First, we can improve our modeling for higher accuracy and better interpretability.", "For instance, SDTM only considers first-person pronouns and topics.", "Naturally, there are other linguistic patterns that can be identified by humans but not captured by pronouns and topics.", "Second, the number of topics for each level is varied, so we can explore nonparametric topic models (Teh et al., 2006) which infer the number of topics from the data.", "Third, we can look at the relationship between self-disclosure behavior and general online social network usage beyond conversations.", "We will explore these directions in our future work." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "2.4", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "5", "6", "6.1", "6.3", "7", "8" ], "paper_header_content": [ "Introduction", "Self-Disclosure", "Self-disclosure (SD) level", "G Level of Self-Disclosure", "M Level of Self-Disclosure", "H Level of Self-Disclosure", "Model", "Classifying G vs M/H levels", "Classifying M vs H levels", "Inference", "Data Collection and Annotation", "Collecting Twitter conversations", "Annotating self-disclosure level", "Classification of Self-Disclosure Level", "Relations of Self-Disclosure and Conversation Behaviors", "Experiment Setup", "Does high self-disclosure lead to longer conversations?", "Related Work", "Conclusion and Future Work" ] }
GEM-SciDuet-train-75#paper-1188#slide-24
Contributions
Made ground-truth Twitter conversation dataset for SD level Made first annotated Twitter conversations for SD level Share it with researchers Suggested novel method for identifying SD level (SDTM) Our assumptions are reasonable and verified by experiments SDTM performs better than others Showed relations between SD & social dynamics Strategic self-disclosure can strengthen the relationship supported by Twitter conversation dataset and SDTM
Made ground-truth Twitter conversation dataset for SD level Made first annotated Twitter conversations for SD level Share it with researchers Suggested novel method for identifying SD level (SDTM) Our assumptions are reasonable and verified by experiments SDTM performs better than others Showed relations between SD & social dynamics Strategic self-disclosure can strengthen the relationship supported by Twitter conversation dataset and SDTM
[]
GEM-SciDuet-train-75#paper-1188#slide-25
1188
Self-disclosure topic model for classifying and analyzing Twitter conversations
Self-disclosure, the act of revealing oneself to others, is an important social behavior that strengthens interpersonal relationships and increases social support. Although there are many social science studies of self-disclosure, they are based on manual coding of small datasets and questionnaires. We conduct a computational analysis of self-disclosure with a large dataset of naturally-occurring conversations, a semi-supervised machine learning algorithm, and a computational analysis of the effects of self-disclosure on subsequent conversations. We use a longitudinal dataset of 17 million tweets, all of which occurred in conversations that consist of five or more tweets directly replying to the previous tweet, and from dyads with twenty or more conversations each. We develop the self-disclosure topic model (SDTM), a variant of latent Dirichlet allocation (LDA), for automatically classifying the level of self-disclosure for each tweet. We take the results of SDTM and analyze the effects of self-disclosure on subsequent conversations. Our model significantly outperforms several comparable methods on classifying the level of self-disclosure, and the analysis of the longitudinal data using SDTM uncovers a significant and positive correlation between self-disclosure and conversation frequency and length.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238 ], "paper_content_text": [ "Introduction Self-disclosure is an important and pervasive social behavior.", "People disclose personal information about themselves to improve and maintain * This work was done when JinYeong Bak was a visiting student at Microsoft Research, Beijing, China.", "relationships (Jourard, 1971; Joinson and Paine, 2007) .", "A common instance of self-disclosure is the start of a conversation with an exchange of names and additional self-introductions.", "Another example of self-disclosure, shown in Figure 1c , where the information disclosed about a family member's serious illness, is much more personal than the exchange of names.", "In this paper, we seek to understand this important social behavior using a large-scale Twitter conversation data, automatically classifying the level of self-disclosure using machine learning and correlating the patterns with conversational behaviors which can serve as proxies for measuring intimacy between two conversational partners.", "Twitter conversation data, explained in more detail in section 4.1, enable an extremely large scale study of naturally-occurring self-disclosure behavior, compared to traditional social science studies.", "One challenge of such large scale study, though, remains in the lack of labeled groundtruth data of self-disclosure level.", "That is, naturally-occurring Twitter conversations do not come tagged with the level of self-disclosure in each conversation.", "To overcome that challenge, we propose a semi-supervised machine learning approach using probabilistic topic modeling.", "Our self-disclosure topic model (SDTM) assumes that self-disclosure behavior can be modeled using a combination of simple linguistic features (e.g., pronouns) with automatically discovered semantic themes (i.e., topics).", "For instance, an utterance \"I am finally through with this disastrous relationship\" uses a first-person pronoun and contains a topic about personal relationships.", "In comparison with various other models, SDTM shows the highest accuracy, and the resulting conversation frequency and length patterns on self-disclosure are shown different over time.", "Our contributions to the research community include the following: • We present key features and prior knowledge for identifying self-disclosure level, and show relevance of it with experiment results (Sec.", "2).", "• We present a topic model that explicitly includes the 
level of self-disclosure in a conversation using linguistic features and the latent semantic topics (Sec.", "3).", "• We collect a large dataset of Twitter conversations over three years and annotate a small subset with self-disclosure level (Sec.", "4).", "• We compare the classification accuracy of SDTM with other models and show that it performs the best (Sec.", "5).", "• We correlate the self-disclosure patterns and conversation behaviors to show that there is significant relationship over time (Sec.", "6).", "Self-Disclosure In this section, we look at social science literature for definition of the levels of self-disclosure.", "Using that definition, we devise an approach to automatically identify the levels of self-disclosure in a large corpus of OSN conversations.", "We discuss three approaches, first, using first-person pronoun features, and second, extracting seed words and phrases from the Twitter conversation corpus, and third, extracting seed words and phrases from an external corpus of anonymously posted secrets, and we demonstrate the efficacy of those approaches with an annotated corpus.", "Self-disclosure (SD) level To analyze self-disclosure, researchers categorize self-disclosure language into three levels: G (general) for no disclosure, M for medium disclosure, and H for high disclosure (Vondracek and Von dracek, 1971; Barak and Gluck-Ofri, 2007 G Level of Self-Disclosure An obvious clue of self-disclosure is the use of first-person pronouns.", "For example, phrases such as 'I live' or 'My name is' indicate that the utterance contains personal information.", "In previous research, the simple method of counting first-person pronouns was used to measure the degree of self-disclosure (Joinson, 2001; Barak and Gluck-Ofri, 2007) .", "Consequently, the absence of a first-person pronoun signals that the utterance belongs in the G level of self-disclosure.", "We verify this pattern with a dataset of Tweets annotated with G, M, and H levels.", "We divide the annotated Tweets into two classes, G and M/H.", "Then we compute mutual information of each unigram, bigram, or trigram feature to see which features are most discriminative.", "As Table 1 shows, 18 out of 30 M Level of Self-Disclosure Utterances with M level include two types: 1) information related with past events and future plans, and 2) general information about self (Barak and Gluck-Ofri, 2007) .", "For the former, we add as seed trigrams 'I have been' and 'I will'.", "For the latter, we use seven types of information generally accepted to be personally identifiable information (McCallister, 2010) , as listed in the left column of Table 2 .", "To find the appropriate trigrams for those, we take Twitter conversation data (described in Section 4.1) and look for trigrams that begin with 'I' and 'my' and occur more than 200 times.", "We then check each one to see whether it is related with any of the seven types listed in the table.", "As a result, we find 57 seed trigrams for M level.", "H Level of Self-Disclosure Utterances with H level express secretive wishes or sensitive information that exposes self or someone close (Barak and Gluck-Ofri, 2007) .", "These are generally kept as secrets.", "With this intuition, we crawled 26,523 posts from Six Billion Secrets 1 site where users post secrets anonymously 2 .", "We call this external dataset SECRET.", "Unlike G and M levels, evidence of H level of self-disclosure tends to be topical, such as physical appearance, mental and physical illnesses, and family problems, so we 
take an approach of fitting a topic model driven by seed words.", "A similar approach has been successful in sentiment classification (Jo and Oh, 2011; Kim et al., 2013) .", "A critical component of this approach is the set of seed words with which to drive the discovery of topics that are most indicative of H level selfdisclosure.", "To extract the seed words that express secretive personal information, we compute mutual information (Manning et al., 2008) with SE-CRET and 24,610 randomly selected tweets.", "We select 1,000 words with high mutual information and filter out stop words.", "Table 3 shows some of these words.", "To extract seed trigrams of secretive wishes, we again look for trigrams that start with 'I' or 'my', occur more than 200 times, and select trigrams of wishful thinking, such as 'I want to', and 'I wish I'.", "In total, there are 88 seed words and 8 seed trigrams for H. Since SECRET is quite different from Twitter, we must show that posts in SECRET are semantically similar to the H level Tweets.", "Rather than directly comparing SECRET posts and Tweets, we use the same method of extracting discriminative word features from the annotated H level Tweets (see Section 4.2).", "Table 3 shows the seed words extracted from SECRET as well as the annotated Tweets.", "Because the annotated dataset consists of only 200 conversations, the coverage of the topics seems narrower than the much larger SECRETS, but both datasets show similarities in the topics.", "This, combined with the results of the model with the two sets of seed words (see Section 5 for the results), shows that SECRETS is an effective and simple-to-obtain substitute for an annotated corpus of H level of self-disclosure.", "This section describes our model, the selfdisclosure topic model (SDTM), for classifying self-disclosure level and discovering topics for each self-disclosure level.", "SD level of tweet ct πc SD level proportion of conversation c θ G c ; θ M c ; θ H c Topic proportion of {G; M; H} in con- versation c φ G ; φ M ; φ H Word distribution of {G; M; H} α; γ Dirichlet prior for θ; π β G , β M ; β H Dirichlet prior for φ G ; φ M ; φ H n cl Model In section 2, we discussed different approaches to identifying each level of self-disclosure, based on social science literature, annotated and unannotated Tweets, and an external corpus of secret posts.", "In this section, we describe our self-disclosure topic model, based on the widely used latent Dirichlet allocation (Blei et al., 2003) , which incorporates those approaches.", "Figure 2 illustrates the graphical model of 1.", "For each level l ∈ {G, M, H}: For each topic k ∈ {1, .", ".", ".", ", K l }: Draw φ l k ∼ Dir(β l ) 2.", "For each conversation c ∈ {1, .", ".", ".", ", C}: (a) Draw θ G c ∼ Dir(α) (b) Draw θ M c ∼ Dir(α) (c) Draw θ H c ∼ Dir(α) (d) Draw π c ∼ Dir(γ) (e) For each message t ∈ {1, .", ".", ".", ", T }: i.", "Observe first-person pronouns features x ct ii.", "Draw ω ct ∼ M axEnt(x ct , λ) iii.", "Draw y ct ∼ Bernoulli(ω ct ) iv.", "If y ct = 0 which is G level: A.", "Draw z ct ∼ M ult(θ G c ) B.", "For each word n ∈ {1, .", ".", ".", ", N }: Draw word w ctn ∼ M ult(φ G zct ) Else which can be M or H level: A.", "Draw r ct ∼ M ult(π c ) B.", "Draw z ct ∼ M ult(θ rct c ) C. 
For each word n ∈ {1, .", ".", ".", ", N }: Draw word w ctn ∼ M ult(φ rct zct ) Figure 3: Generative process of SDTM.", "SDTM and how those approaches are embodied in it.", "The first approach based on the first-person pronouns is implemented by the observed variable x ct and the parameters λ from a maximum entropy classifier for G vs. M/H level.", "The approach of seed words and phrases for levels M and H is implemented by the three separate word-topic probability vectors for the three levels of SD: φ l which has a Bayesian informative prior β l where l ∈ {G, M, H}, the three levels of self-disclosure.", "Table 4 lists the notations used in the model and the generative process, and Figure 3 describes the generative process.", "Classifying G vs M/H levels Classifying the SD level for each tweet is done in two parts, and the first part classifies G vs. M/H levels with first-person pronouns (I, my, me).", "In the graphical model, y is the latent variable that represents this classification, and ω is the distribution over y. x is the observation of the firstperson pronoun in the tweets, and λ are the parameters learned from the maximum entropy classifier.", "With the annotated Twitter conversation dataset (described in Section 4.2), we experimented with several classifiers (Decision tree, Naive Bayes) and chose the maximum entropy classifier because it performed the best, similar to other joint topic models (Zhao et al., 2010; Mukherjee et al., 2013) .", "Classifying M vs H levels The second part of the classification, the M and the H level, is driven by informative priors with seed words and seed trigrams.", "In the graphical model, r is the latent variable that represents this classification, and π is the distribution over r. γ is a non-informative prior for π, and β l is an informative prior for each SD level by seed words.", "For example, we assign a high value for the seed word 'acne' for β H , and a low value for 'My name is'.", "This approach is the same as joint models of topic and sentiment (Jo and Oh, 2011; Kim et al., 2013) .", "Inference For posterior inference of SDTM, we use collapsed Gibbs sampling which integrates out latent random variables ω, π, θ, and φ.", "Then we only need to compute y, r and z for each tweet.", "We compute full conditional distribution p(y ct = j , r ct = l , z ct = k |y −ct , r −ct , z −ct , w, x) for tweet ct as follows: p(y ct = 0, z ct = k |y −ct , r −ct , z −ct , w, x) ∝ exp(λ 0 · x ct ) 1 j=0 exp(λ j · x ct ) g(c, t, l , k ), p(y ct = 1, r ct = l , z ct = k |y −ct , r −ct , z −ct , w, x) ∝ exp(λ 1 · x ct ) 1 j=0 exp(λ j · x ct ) (γ l + n (−ct) cl ) g(c, t, l , k ), where z −ct , r −ct , y −ct are z, r, y without tweet ct, m ctk (·) is the marginalized sum over word v of m ctk v and the function g(c, t, l , k ) as follows: g(c, t, l , k ) = Γ( V v=1 β l v + n l −(ct) k v ) Γ( V v=1 β l v + n l −(ct) k v + m ctk (·) ) α k + n l (−ct) ck K k=1 α k + n l ck V v=1 Γ(β l v + n l −(ct) k v + m ctk v ) Γ(β l v + n l −(ct) k v ) .", "Data Collection and Annotation To test our self-disclosure topic model, we use a large dataset of conversations consisting of Tweets over three years such that we can analyze the relationship between self-disclosure behavior and conversation frequency and length over time.", "We chose to crawl Twitter because it offers a practical and large source of conversations (Ritter et al., 2010) .", "Others have also analyzed Twitter conversations for natural language and social media Conv's Tweets 101,686 61,451 1,956,993 17,178,638 
Table 5 : Dataset of Twitter conversations.", "We chose conversations consisting of five or more tweets each.", "We chose dyads with twenty or more conversations.", "Users Dyads research (boyd et al., 2010; Danescu-Niculescu-Mizil et al., 2011) , but we collect conversations from the same set of dyads over several months for a unique longitudinal dataset.", "We also make sure that each conversation is at least five tweets, and that each dyad has at least twenty conversations.", "Collecting Twitter conversations We define a Twitter conversation as a chain of tweets where two users are consecutively replying to each other's tweets using the Twitter reply button.", "We initialize the set of users by randomly sampling thirteen users who reply to other users in English from the Twitter public streams 3 .", "Then we crawl each user's public tweets, and look at users who are mentioned in those tweets.", "It is a breadth-first search in the network defined by users as nodes and edges as conversations.", "We run this search for dyads until the depth of four, and filter out users who tweet in a non-English language.", "We use an open source tool for detecting English tweets 4 .", "To protect users' privacy, we replace Twitter userid, usernames and url in tweets with random strings.", "This dataset consists of 101,686 users, 61,451 dyads, 1,956,993 conversations and 17,178,638 tweets which were posted between August 2007 to July 2013.", "Table 5 summarizes the dataset.", "Annotating self-disclosure level To measure the accuracy of our model, we randomly sample 301 conversations, each with ten or fewer tweets, and ask three judges, fluent in English and graduate students/researchers, to annotate each tweet with the level of self-disclosure.", "Judges first read and discussed the definitions and examples of self-disclosure level shown in (Barak and Gluck-Ofri, 2007) , then they worked separately on a Web-based platform.", "As a result of annotation, there are 122 G level converstaions, 147 M level and 32 H level con- versations, and inter-rater agreement using Fleiss kappa (Fleiss, 1971 ) is 0.68, which is substantial agreement result (Landis and Koch, 1977) .", "Classification of Self-Disclosure Level This section describes experiments and results of SDTM as well as several other methods for classification of self-disclosure level.", "We first start with the annotated dataset in section 4.2 in which each tweet is annotated with SD level.", "We then aggregate all of the tweets of a conversation, and we compute the proportions of tweets in each SD level.", "When the proportion of tweets at M or H level is equal to or greater than 0.2, we take the level of the larger proportion and assign that level to the conversation.", "When the proportions of tweets at M or H level are both less than 0.2, we assign G to the SD level.", "The reason for setting 0.2 as the threshold is that a conversation containing tweets with H or M level of selfdisclosure usually starts with a greeting or a general comment, and contains one or more questions or comments before or after the self-disclosure tweet.", "We compare SDTM with the following methods for classifying conversations for SD level: • LDA (Blei et al., 2003) : A Bayesian topic model.", "Each conversation is treated as a document.", "Used in previous work (Bak et al., 2012) .", "• MedLDA (Zhu et al., 2012) : A supervised topic model for document classification.", "Each conversation is treated as a document and response variable can be mapped to a SD level.", "• LIWC 
(Tausczik and Pennebaker, 2010): Word counts of particular categories 5 .", "Used in previous work (Houghton and Joinson, 2012).", "• Bag of Words + Bigrams + Trigrams (BOW+): A bag of words, bigram and trigram features.", "We exclude features that appear only once or twice.", "• Seed words and trigrams (SEED): Occurrences of seed words/trigrams from SECRET which are described in section 3.3.", "• SDTM with seed words from annotated Tweets (SDTM−): To compare with SDTM below using seed words from SECRET, this uses seed words from the annotated data described in section 2.4.", "• ASUM (Jo and Oh, 2011 ): A joint model of sentiments and topics.", "We map each SD level to one sentiment and use the same seed words/trigrams from SECRET as in SDTM below.", "Used in previous work (Bak et al., 2012) .", "• First-person pronouns (FirstP): Occurrence of first-person pronouns which are described in section 3.2.", "To identify first-person pronouns, we tagged parts of speech in each tweet with the Twitter POS tagger (Owoputi et al., 2013) .", "• First-person pronouns + Seed words/trigrams (FP+SE1): First-person pronouns and seed words/trigrams from SECRET.", "• Two stage classifier with First-person pronouns + Seed words/trigrams (FP+SE2): A Method Acc G F 1 M F 1 H F Table 6 : SD level classification accuracies and Fmeasures using annotated data.", "Acc is accuracy, and G F 1 is F-measure for classifying the G level.", "Avg F 1 is the macroaveraged value of G F 1 , M F 1 and H F 1 .", "SDTM outperforms all other methods compared.", "The difference between SDTM and FirstP is statistically significant (p-value < 0.05 for accuracy, < 0.0001 for Avg F 1 ).", "two stage classifier with first-person pronouns and seed words/trigrams from SE-CRET.", "In the first stage, the classifier identifies G with first-person pronouns.", "Then in the second stage, the classifier uses seed words and trigrams to identify M and H levels.", "• SDTM: Our model with first-person pronouns and seed words/trigrams from SE-CRET.", "SEED, LIWC, LDA and FirstP cannot be used directly for classification, so we use Maximum entropy model with outputs of each of those models as features 6 .", "BOW+ uses SVM with a radial basis kernel which performs better than all other settings tried including maximum entropy.", "We split the data randomly into 80/20 for train/test.", "We run MedLDA, ASUM and SDTM 20 times each and compute the average accuracies and F-measure for each level.", "We run LDA and MedLDA with various number of topics from 80 to 140, and 120 topics shows best outputs.", "So we set 120 topics for LDA, MedLDA and ASUM, 60; 40; 40 topics for SDTM K G , K M and K H respectively which is best perform from 40; 40; 40 to 60; 60; 60 topics.", "We assume that a conversation has few topics and self-disclosure levels, so we set α = γ = 0.1 (Tang et al., 2014) .", "To incorporate the seed words and trigrams into ASUM and SDTM, we initialize β G , β M and β H differently.", "We assign a high value of 2.0 for each seed word and trigram for that level, and a low value of 10 −6 for each word that is a seed word for another level, and a default value of 0.01 for all other words.", "This approach is the same as previous papers (Jo and Oh, 2011; Kim et al., 2013) .", "As Table 6 shows, SDTM performs better than the other methods for accuracy as well as Fmeasure.", "LDA and MedLDA generally show the lowest performance, which is not surprising given these models are quite general and not tuned specifically for this type of semi-supervised 
classification task.", "BOW which is simple word features also does not perform well, showing especially low F-measure for the H level.", "LIWC and SEED perform better than LDA, but these have quite low F-measure for G and H levels.", "ASUM shows better performance for classifying H level than others, confirming the effectiveness of a topic modeling approach to this difficult task, but not as well as SDTM.", "FirstP shows good F-measure for the G level, but the H level F-measure is quite low, even lower than SEED.", "Combining first-person pronouns and seed words and trigrams (FP+SE1) shows better than each feature alone, and the two stage classifier (FP+SE2) which is a similar approach taken in SDTM shows better results.", "Finally, SDTM classifies G and M level at a similar accuracy with FirstP, FP+SE1 and FP+SE2, but it significantly improves accuracy for the H level compared to all other methods.", "Relations of Self-Disclosure and Conversation Behaviors In this section, we investigate whether there is a relationship between self-disclosure and conversation behaviors over time.", "Self-disclosure is one way to maintain and improve relationships (Jourard, 1971; Joinson and Paine, 2007) .", "So two people's intimacy changes over time has relationship with self-disclosure in their conversation.", "However, it is hard to identify intimacy between users in large scale online social network.", "So we choose conversation behaviors such as conversation frequency and length which can be treated as proxies for measuring intimacy between two people (Emmers- Sommer, 2004; Bak et al., 2012) .", "With SDTM, we can automatically classify the SD level of a large number of conversations, so we investigate whether there is a similar relationship between self-disclosure in conversations and subsequent conversation behaviors with the same partner on Twitter.", "For comparing conversation behaviors over time, we divided the conversations into two sets for each dyad.", "For the initial period, we include conversations from the dyad's first conversation to 20 days later.", "And for the subsequent period, we include conversations during the subsequent 10 days.", "We compute proportions of conversation for each SD level for each dyad in the initial and subsequent periods.", "More specifically, we ask the following three questions: 1.", "If a dyad shows high conversation frequency at a particular time period, would they display higher SD in their subsequent conversations?", "2.", "If a dyad displays high SD level in their conversations at a particular time period, would their subsequent conversations be longer?", "3.", "If a dyad displays high overall SD level, would their conversations increase in length over time more than dyads with lower overall SD level?", "Experiment Setup We first run SDTM with all of our Twitter conversation data with 150; 120; 120 topics for SDTM K G , K M and K H respectively.", "The hyper-parameters are the same as in section 5.", "To handle a large dataset, we employ a distributed algorithm (Newman et al., 2009) , and run with 28 threads.", "Table 7 shows some of the topics that were prominent in each SD level by KL-divergence.", "As expected, G level includes general topics such as food, celebrity, soccer and IT devices, M level includes personal communication and birthday, and finally, H level includes sickness and profanity.", "We define a new measurement, SD level score for a dyad in the period, which is a weighted sum of each conversation with SD levels mapped to 1, 2, and 3, 
for the levels G, M, and H, respectively.", "Figure 5 : Relationship between initial conversation frequency and subsequent SD level.", "The solid line is the linear regression line, and the coefficient is 0.0020 with p < 0.0001, which shows a significant positive relationship.", "Subsequent SD level 6.2 Does high frequency of conversation lead to more self-disclosure?", "We investigate whether the initial conversation frequency is correlated with the SD level in the subsequent period.", "We run linear regression with the initial conversation frequency as the independent variable, and SD level in the subsequent period as the dependent variable.", "The regression coefficient is 0.0020 with low pvalue (p < 0.0001).", "Figure 5 shows the scatter plot.", "We can see that the slope of the regression line is positive.", "Does high self-disclosure lead to longer conversations?", "Now we investigate the effect of the selfdisclosure level to conversation length.", "We run linear regression with the intial SD level score as the independent variable, and the rate of change in conversation length between initial period and subsequent period as the dependent variable.", "Conversation length is measured by the number of tweets in a conversation.", "The result of regression is that the independent variable's coefficient is 0.048 with a low p-value (p < 0.0001).", "Figure 6 shows the scatter plot with the regression line, and we can see that the slope of regression line is positive.", "H level 101 184 176 36 104 82 113 33 19 chocolate obama league send twitter going ass better lips butter he's win email follow party bitch sick kisses good romney game i'll tumblr weekend fuck feel love cake vote season sent tweet day yo throat smiles peanut right team dm following night shit cold softly milk president cup address account dinner fucking hope hand sugar people city know fb birthday lmao pain eyes cream good arsenal check followers Now we investigate the conversation length changes over time with three groups, low, medium, and high, by overall SD level.", "Then we investigate changes in conversation length over time.", "Figure 7 shows the results of this investigation.", "First, conversations are generally lengthier when SD level is high.", "This phenomenon is also ob- We divide dyads into three groups by SD level score as low, medium, and high.", "Conversation length noticeably increases over time in the medium and high groups, but only slight in the low group.", "served in figure 6 , but here we can see it as a long-term persistent pattern.", "Second, conversation length increases consistently and significantly for the high and medium groups, but for the low SD group, there is not a significant increase of conversation length over time.", "G level M level Related Work Prior work on quantitatively analyzing selfdisclosure has relied on user surveys (Ledbetter et al., 2011; Trepte and Reinecke, 2013) or human annotation (Barak and Gluck-Ofri, 2007; Courtney Walton and Rice, 2013) .", "These methods consume much time and effort, so they are not suitable for large-scale studies.", "In prior work closest to ours, Bak et al.", "(2012) showed that a topic model can be used to identify self-disclosure, but that work applies a two-step process in which a basic topic model is first applied to find the topics, and then the topics are post-processed for binary classification of self-disclosure.", "We improve upon this work by applying a single unified model of topics and self-disclosure for high accuracy in classifying 
the three levels of self-disclosure.", "Subjectivity which is aspect of expressing opinions (Pang and Lee, 2008; Wiebe et al., 2004) is related with self-disclosure, but they are different dimensions of linguistic behavior.", "Because there indeed are many high self-disclosure tweets that are subjective, but there are also counter examples in annotated dataset.", "The tweet \"England manager is Roy Hodgson.\"", "is low self-disclosure and low subjectivity, \"I have barely any hair left.\"", "is high self-disclosure but low subjectivity, and \"Senator stop lying!\"", "is low self-disclosure but high subjectivity.", "Conclusion and Future Work In this paper, we have presented the self-disclosure topic model (SDTM) for discovering topics and classifying SD levels from Twitter conversation data.", "We devised a set of effective seed words and trigrams, mined from a dataset of secrets.", "We also annotated Twitter conversations to make a ground-truth dataset for SD level.", "With annotated data, we showed that SDTM outperforms previous methods in classification accuracy and Fmeasure.", "We publish the source code of SDTM and the dataset include annotated Twitter conversations and SECRET publicly 7 .", "We also analyzed the relationship between SD level and conversation behaviors over time.", "We found that there is a positive correlation between initial SD level and subsequent conversation length.", "Also, dyads show higher level of SD if they initially display high conversation frequency.", "Finally, dyads with overall medium and high SD level will have longer conversations over time.", "These results support previous results in so-7 http://uilab.kaist.ac.kr/research/ EMNLP2014 cial psychology research with more robust results from a large-scale dataset, and show the effectiveness of computationally analyzing at SD behavior.", "There are several future directions for this research.", "First, we can improve our modeling for higher accuracy and better interpretability.", "For instance, SDTM only considers first-person pronouns and topics.", "Naturally, there are other linguistic patterns that can be identified by humans but not captured by pronouns and topics.", "Second, the number of topics for each level is varied, and so we can explore nonparametric topic models (Teh et al., 2006) which infer the number of topics from the data.", "Third, we can look at the relationship between self-disclosure behavior and general online social network usage beyond conversations.", "We will explore these directions in our future work." ] }
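The informative-prior initialization quoted above (2.0 for a level's own seed words and trigrams, 10^-6 for seeds of the other levels, 0.01 otherwise) is straightforward to reproduce. A minimal sketch, assuming toy vocabularies and seed lists, since the paper's full seed sets are not reproduced here:

import numpy as np

def build_beta_priors(vocab, seeds_by_level, high=2.0, low=1e-6, default=0.01):
    # One Dirichlet prior vector over the vocabulary per SD level, following
    # the initialization described for ASUM and SDTM: a level's own seed
    # words get a high pseudo-count, seed words of the other levels get a
    # near-zero pseudo-count, and all remaining words get the default.
    word_index = {w: i for i, w in enumerate(vocab)}
    priors = {}
    for level in seeds_by_level:
        beta = np.full(len(vocab), default)
        for other_level, other_seeds in seeds_by_level.items():
            if other_level == level:
                continue
            for word in other_seeds:
                if word in word_index:
                    beta[word_index[word]] = low
        for word in seeds_by_level[level]:  # own seeds win on any overlap
            if word in word_index:
                beta[word_index[word]] = high
        priors[level] = beta
    return priors

# Toy illustration with hypothetical seed words:
vocab = ["acne", "my_name_is", "party", "pain", "hello"]
seeds = {"G": [], "M": ["my_name_is"], "H": ["acne", "pain"]}
for level, beta in build_beta_priors(vocab, seeds).items():
    print(level, beta)

Seed trigrams can be handled the same way by treating each trigram as a single vocabulary entry.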
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "2.4", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "5", "6", "6.1", "6.3", "7", "8" ], "paper_header_content": [ "Introduction", "Self-Disclosure", "Self-disclosure (SD) level", "G Level of Self-Disclosure", "M Level of Self-Disclosure", "H Level of Self-Disclosure", "Model", "Classifying G vs M/H levels", "Classifying M vs H levels", "Inference", "Data Collection and Annotation", "Collecting Twitter conversations", "Annotating self-disclosure level", "Classification of Self-Disclosure Level", "Relations of Self-Disclosure and Conversation Behaviors", "Experiment Setup", "Does high self-disclosure lead to longer conversations?", "Related Work", "Conclusion and Future Work" ] }
GEM-SciDuet-train-75#paper-1188#slide-25
Future Work
Self-disclosure for a user's general messages. Self-disclosure is related to online social network usage [Trepte2013]. We can predict users' loneliness and give social support, or predict usage patterns and give feedback.
Self-disclosure for a user's general messages. Self-disclosure is related to online social network usage [Trepte2013]. We can predict users' loneliness and give social support, or predict usage patterns and give feedback.
[]
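The "SD level score" of Section 6 maps G, M, and H to 1, 2, and 3 and takes a weighted sum over a dyad's conversations in a period. A plausible reading, with normalization by the number of conversations added here as an assumption since the paper does not spell it out:

import numpy as np

def sd_level_score(conversation_levels):
    # G/M/H mapped to 1/2/3 and averaged over the dyad's conversations in
    # the period; the mapping is the paper's, the averaging is an assumption.
    weights = {"G": 1, "M": 2, "H": 3}
    return sum(weights[l] for l in conversation_levels) / len(conversation_levels)

# Toy regression in the style of Section 6.2 (made-up numbers): initial
# conversation frequency against subsequent SD level score per dyad.
freq = np.array([5.0, 12.0, 20.0, 33.0, 41.0])
score = np.array([1.00, 1.05, 1.10, 1.15, 1.20])
slope, intercept = np.polyfit(freq, score, 1)
print(round(slope, 4))  # a positive slope mirrors the sign of the reported 0.0020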
GEM-SciDuet-train-76#paper-1191#slide-0
1191
Attention Strategies for Multi-Source Sequence-to-Sequence Learning
Modeling attention in neural multi-source sequence-to-sequence learning remains a relatively unexplored area, despite its usefulness in tasks that incorporate multiple source languages or modalities. We propose two novel approaches to combine the outputs of attention mechanisms over each source sequence, flat and hierarchical. We compare the proposed methods with existing techniques and present results of systematic evaluation of those methods on the WMT16 Multimodal Translation and Automatic Post-editing tasks. We show that the proposed methods achieve competitive results on both tasks.
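Before the combination strategies, the single-encoder attention of Section 2 (Equations 1-3) is the common building block; a minimal NumPy sketch with randomly initialized toy parameters, biases omitted as in the paper's notation:

import numpy as np

def attention(s, H, W_a, U_a, v_a):
    # Equations 1-3: energies from the projected decoder state s and encoder
    # states H, a softmax over source positions, and the context vector as
    # the weighted average of the encoder states.
    e = np.tanh(H @ U_a.T + s @ W_a.T) @ v_a     # (T_x,)
    alpha = np.exp(e - e.max())
    alpha /= alpha.sum()
    return alpha @ H, alpha                      # context c_i and weights

rng = np.random.default_rng(0)
H = rng.normal(size=(5, 8))                      # 5 encoder states of dim 8
s = rng.normal(size=4)                           # decoder state of dim 4
W_a = rng.normal(size=(6, 4))                    # shared projection space of dim 6
U_a = rng.normal(size=(6, 8))
v_a = rng.normal(size=6)
c, alpha = attention(s, H, W_a, U_a, v_a)
print(c.shape, round(alpha.sum(), 6))            # (8,) 1.0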
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111 ], "paper_content_text": [ "Introduction Sequence-to-sequence (S2S) learning with attention mechanism recently became the most successful paradigm with state-of-the-art results in machine translation (MT) Sennrich et al., 2016a) , image captioning (Xu et al., 2015; Lu et al., 2016) , text summarization (Rush et al., 2015) and other NLP tasks.", "All of the above applications of S2S learning make use of a single encoder.", "Depending on the modality, it can be either a recurrent neural network (RNN) for textual input data, or a convolutional network for images.", "In this work, we focus on a special case of S2S learning with multiple input sequences of possibly different modalities and a single output-generating recurrent decoder.", "We explore various strategies the decoder can employ to attend to the hidden states of the individual encoders.", "The existing approaches to this problem do not explicitly model different importance of the inputs to the decoder Zoph and Knight, 2016) .", "In multimodal MT (MMT), where an image and its caption are on the input, we might expect the caption to be the primary source of information, whereas the image itself would only play a role in output disambiguation.", "In automatic post-editing (APE), where a sentence in a source language and its automatically generated translation are on the input, we might want to attend to the source text only in case the model decides that there is an error in the translation.", "We propose two interpretable attention strategies that take into account the roles of the individual source sequences explicitly-flat and hierarchical attention combination.", "This paper is organized as follows: In Section 2, we review the attention mechanism in single-source S2S learning.", "Section 3 introduces new attention combination strategies.", "In Section 4, we evaluate the proposed models on the MMT and APE tasks.", "We summarize the related work in Section 5, and conclude in Section 6.", "Attentive S2S Learning The attention mechanism in S2S learning allows an RNN decoder to directly access information about the input each time before it emits a symbol.", "Inspired by content-based addressing in Neural Turing Machines (Graves et al., 2014) , the attention mechanism estimates a probability distribution over the encoder hidden states in each decoding step.", "This distribution is used for computing the context vector-the weighted average of the encoder hidden states-as an additional input to the decoder.", "The standard attention model as described by defines the attention energies e ij , attention distribution α ij , and the con-text vector c i in i-th decoder step as: e ij = v a tanh(W a s i + U a h j ), (1) α ij = exp(e ij ) Tx k=1 exp(e ik ) , (2) c i = Tx j=1 α ij h j .", "(3) The trainable parameters W a and U a are projection matrices that transform the decoder and encoder states s i and h j into a common vector space and v a is a weight vector over the dimensions of this space.", "T x denotes the length of the input sequence.", "For the sake of clarity, bias terms 
(applied every time a vector is linearly projected using a weight matrix) are omitted.", "Recently, Lu et al.", "(2016) introduced sentinel gate, an extension of the attentive RNN decoder with LSTM units (Hochreiter and Schmidhuber, 1997) .", "We adapt the extension for gated recurrent units (GRU) , which we use in our experiments: ψ i = σ(W y y i + W s s i−1 ) (4) where W y and W s are trainable parameters, y i is the embedded decoder input, and s i−1 is the previous decoder state.", "Analogically to Equation 1, we compute a scalar energy term for the sentinel: e ψ i = v a tanh W a s i + U (ψ) a (ψ i s i ) (5) where W a , U (ψ) a are the projection matrices, v a is the weight vector, and ψ i s i is the sentinel vector.", "Note that the sentinel energy term does not depend on any hidden state of any encoder.", "The sentinel vector is projected to the same vector space as the encoder state h j in Equation 1.", "The term e ψ i is added as an extra attention energy term to Equation 2 and the sentinel vector ψ i s i is used as the corresponding vector in the summation in Equation 3.", "This technique should allow the decoder to choose whether to attend to the encoder or to focus on its own state and act more like a language model.", "This can be beneficial if the encoder does not contain much relevant information for the current decoding step.", "Attention Combination In S2S models with multiple encoders, the decoder needs to be able to combine the attention information collected from the encoders.", "A widely adopted technique for combining multiple attention models in a decoder is concatenation of the context vectors c (Zoph and Knight, 2016; .", "As mentioned in Section 1, this setting forces the model to attend to each encoder independently and lets the attention combination to be resolved implicitly in the subsequent network layers.", "(1) i , .", ".", ".", ", c (N ) i In this section, we propose two alternative strategies of combining attentions from multiple encoders.", "We either let the decoder learn the α i distribution jointly over all encoder hidden states (flat attention combination) or factorize the distribution over individual encoders (hierarchical combination).", "Both of the alternatives allow us to explicitly compute distribution over the encoders and thus interpret how much attention is paid to each encoder at every decoding step.", "Flat Attention Combination Flat attention combination projects the hidden states of all encoders into a shared space and then computes an arbitrary distribution over the projections.", "The difference between the concatenation of the context vectors and the flat attention combination is that the α i coefficients are computed jointly for all encoders: α (k) ij = exp(e (k) ij ) N n=1 T (n) x m=1 exp e (n) im (6) where T (n) x is the length of the input sequence of the n-th encoder and e (k) ij is the attention energy of the j-th state of the k-th encoder in the i-th decoding step.", "These attention energies are computed as in Equation 1.", "The parameters v a and W a are shared among the encoders, and U a is different for each encoder and serves as an encoder-specific projection of hidden states into a common vector space.", "The states of the individual encoders occupy different vector spaces and can have a different dimensionality, therefore the context vector cannot be computed as their weighted sum.", "We project 197 them into a single space using linear projections: c i = N k=1 T (k) x j=1 α (k) ij U (k) c h (k) j (7) where U (k) c are 
additional trainable parameters.", "The matrices U (k) c project the hidden states into a common vector space.", "This raises a question whether this space can be the same as the one that is projected into in the energy computation using matrices U (k) a in Equation 1, i.e., whether U (k) c = U (k) a .", "In our experiments, we explore both options.", "We also try both adding and not adding the sentinel α (ψ) i U (ψ) c (ψ i s i ) to the context vec- tor.", "Hierarchical Attention Combination The hierarchical attention combination model computes every context vector independently, similarly to the concatenation approach.", "Instead of concatenation, a second attention mechanism is constructed over the context vectors.", "We divide the computation of the attention distribution into two steps: First, we compute the context vector for each encoder independently using Equation 3.", "Second, we project the context vectors (and optionally the sentinel) into a common space (Equation 8), we compute another distribution over the projected context vectors (Equation 9) and their corresponding weighted average (Equation 10): e (k) i = v b tanh(W b s i + U (k) b c (k) i ), (8) β (k) i = exp(e (k) i ) N n=1 exp(e (n) i ) , (9) c i = N k=1 β (k) i U (k) c c (k) i (10) where c Experiments We evaluate the attention combination strategies presented in Section 3 on the tasks of multimodal translation (Section 4.1) and automatic post-editing (Section 4.2).", "The models were implemented using the Neural Monkey sequence-to-sequence learning toolkit (Helcl and Libovický, 2017) .", "12 In both setups, we process the textual input with bidirectional GRU network with 300 units in the hidden state in each direction and 300 units in embeddings.", "For the attention projection space, we use 500 hidden units.", "We optimize the network to minimize the output cross-entropy using the Adam algorithm (Kingma and Ba, 2014) with learning rate 10 −4 .", "Multimodal Translation The goal of multimodal translation is to generate target-language image captions given both the image and its caption in the source language.", "We train and evaluate the model on the Multi30k dataset .", "It consists of 29,000 training instances (images together with English captions and their German translations), 1,014 validation instances, and 1,000 test instances.", "The results are evaluated using the BLEU (Papineni et al., 2002) and ME-TEOR (Denkowski and Lavie, 2011) .", "In our model, the visual input is processed with a pre-trained VGG 16 network (Simonyan and Zisserman, 2014) without further fine-tuning.", "Atten-tion distribution over the visual input is computed from the last convolutional layer of the network.", "The decoder is an RNN with 500 conditional GRU units in the recurrent layer.", "We use byte-pair encoding (Sennrich et al., 2016b) with a vocabulary of 20,000 subword units shared between the textual encoder and the decoder.", "The results of our experiments in multimodal MT are shown in Table 1 .", "We achieved the best results using the hierarchical attention combination without the sentinel mechanism, which also showed the fastest convergence.", "The flat combination strategy achieves similar results eventually.", "Sharing the projections for energy and context vector computation does not improve over the concatenation baseline and slows the training almost prohibitively.", "Multimodal models were not able to surpass the textual baseline (BLEU 33.0).", "Using the conditional GRU units brought an improvement of about 1.5 BLEU 
points on average, with the exception of the concatenation scenario where the performance dropped by almost 5 BLEU points.", "We hypothesize this is caused by the fact the model has to learn the implicit attention combination on multiple places -once in the output projection and three times inside the conditional GRU unit (Firat and Cho, 2016, Equations 10-12) .", "We thus report the scores of the introduced attention combination techniques trained with conditional GRU units and compare them with the concatenation baseline trained with plain GRU units.", "Automatic MT Post-editing Automatic post-editing is a task of improving an automatically generated translation given the source sentence where the translation system is treated as a black box.", "We used the data from the WMT16 APE Task , which consists of 12,000 training, 2,000 validation, and 1,000 test sentence triplets from the IT domain.", "Each triplet contains an English source sentence, an automatically generated German translation of the source sentence, and a manually post-edited German sentence as a reference.", "In case of this dataset, the MT outputs are almost perfect in and only little effort was required to post-edit the sentences.", "The results are evaluated using the humantargeted error rate (HTER) (Snover et al., 2006) and BLEU score (Papineni et al., 2002) .", "Following Libovický et al.", "(2016) , we encode the target sentence as a sequence of edit operations transforming the MT output into the reference.", "By this technique, we prevent the model from paraphrasing the input sentences.", "The decoder is a GRU network with 300 hidden units.", "Unlike in the MMT setup (Section 4.1), we do not use the conditional GRU because it is prone to overfitting on the small dataset we work with.", "The models were able to slightly, but significantly improve over the baseline -leaving the MT output as is (HTER 24.8 ).", "The differences between the attention combination strategies are not significant.", "Related Work Attempts to use S2S models for APE are relatively rare .", "Niehues et al.", "(2016) concatenate both inputs into one long sequence, which forces the encoder to be able to work with both source and target language.", "Their attention is then similar to our flat combination strategy; however, it can only be used for sequential data.", "The best system from the WMT'16 competition (Junczys-Dowmunt and Grundkiewicz, 2016) trains two separate S2S models, one translating from MT output to post-edited targets and the second one from source sentences to post-edited targets.", "The decoders average their output distributions similarly to decoder ensembling.", "The biggest source of improvement in this state-of-theart posteditor came from additional training data generation, rather than from changes in the network architecture.", "Source: a man sleeping in a green room on a couch .", "Reference: ein Mann schläft in einem grünen Raum auf einem Sofa .", "Output with attention: e i n M a n n s c h l ä f t a u f e i n e m g r ü n e n S o f a i n e i n e m g r ü n e n R a u m .", "(1) (2) (3) (1) source, (2) image, (3) sentinel Figure 2 : Visualization of hierarchical attention in MMT.", "Each column in the diagram corresponds to the weights of the encoders and sentinel.", "Note that the despite the overall low importance of the image encoder, it gets activated for the content words.", "Caglayan et al.", "(2016) used an architecture very similar to ours for multimodal translation.", "They made a strong assumption that the network 
can be trained in such a way that the hidden states of the encoder and the convolutional network occupy the same vector space and thus sum the context vectors from both modalities.", "In this way, their multimodal MT system (BLEU 27.82) remained far below the text-only setup (BLEU 32.50).", "New state-of-the-art results on the Multi30k dataset were achieved very recently by Calixto et al.", "(2017) .", "The best-performing architecture uses the last fully-connected layer of the VGG-19 network (Simonyan and Zisserman, 2014) as decoder initialization and only attends to the text encoder hidden states.", "With a stronger monomodal baseline (BLEU 33.7), their multimodal model achieved a BLEU score of 37.1.", "Similarly to Niehues et al.", "(2016) in the APE task, even further improvement was achieved by synthetically extending the dataset.", "Conclusions We introduced two new strategies of combining attention in a multi-source sequence-to-sequence setup.", "Both methods are based on computing a joint distribution over the hidden states of all encoders.", "We conducted experiments with the proposed strategies on multimodal translation and automatic post-editing tasks, and we showed that the flat and hierarchical attention combinations can be applied to these tasks while maintaining scores competitive with previously used techniques.", "Unlike simple context vector concatenation, the introduced combination strategies can be used with the conditional GRU units in the decoder.", "On top of that, the hierarchical combination strategy exhibits faster learning than the other strategies." ] }
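Flat attention combination (Equations 6-7) amounts to one joint softmax over the states of all encoders, with encoder-specific projections into shared energy and context spaces. A sketch under the same toy-parameter assumptions, biases again omitted:

import numpy as np

def flat_attention(s, encoder_states, W_a, v_a, U_a, U_c):
    # W_a and v_a are shared among encoders; U_a[k] and U_c[k] project the
    # k-th encoder's states into common energy and context spaces.
    energies, projected = [], []
    for k, H in enumerate(encoder_states):                # H: (T_k, d_k)
        energies.append(np.tanh(H @ U_a[k].T + s @ W_a.T) @ v_a)
        projected.append(H @ U_c[k].T)                    # (T_k, d_c)
    e = np.concatenate(energies)
    alpha = np.exp(e - e.max())
    alpha /= alpha.sum()                                  # one softmax over all encoders
    return alpha @ np.vstack(projected), alpha

rng = np.random.default_rng(1)
enc = [rng.normal(size=(3, 7)), rng.normal(size=(2, 5))]  # e.g. text + image states
s = rng.normal(size=4)
W_a = rng.normal(size=(6, 4))
v_a = rng.normal(size=6)
U_a = [rng.normal(size=(6, 7)), rng.normal(size=(6, 5))]
U_c = [rng.normal(size=(8, 7)), rng.normal(size=(8, 5))]
c, alpha = flat_attention(s, enc, W_a, v_a, U_a, U_c)
print(c.shape, alpha.shape)                               # (8,) (5,)

Summing alpha over each encoder's positions yields an explicit, interpretable weight per input source, which the concatenation baseline does not provide.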
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "4", "4.1", "4.2", "5", "6" ], "paper_header_content": [ "Introduction", "Attentive S2S Learning", "Attention Combination", "Flat Attention Combination", "Hierarchical Attention Combination", "Experiments", "Multimodal Translation", "Automatic MT Post-editing", "Related Work", "Conclusions" ] }
GEM-SciDuet-train-76#paper-1191#slide-0
Introduction
Attention over multiple source sequences is relatively unexplored. This work proposes two techniques: flat and hierarchical attention combination. Applied to the tasks of multimodal translation and automatic post-editing. There is no universal method that explicitly models the importance of each input.
Attention over multiple source sequences is relatively unexplored. This work proposes two techniques: flat and hierarchical attention combination. Applied to the tasks of multimodal translation and automatic post-editing. There is no universal method that explicitly models the importance of each input.
[]
GEM-SciDuet-train-76#paper-1191#slide-1
1191
Attention Strategies for Multi-Source Sequence-to-Sequence Learning
Modeling attention in neural multi-source sequence-to-sequence learning remains a relatively unexplored area, despite its usefulness in tasks that incorporate multiple source languages or modalities. We propose two novel approaches to combine the outputs of attention mechanisms over each source sequence, flat and hierarchical. We compare the proposed methods with existing techniques and present results of systematic evaluation of those methods on the WMT16 Multimodal Translation and Automatic Post-editing tasks. We show that the proposed methods achieve competitive results on both tasks.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111 ], "paper_content_text": [ "Introduction Sequence-to-sequence (S2S) learning with attention mechanism recently became the most successful paradigm with state-of-the-art results in machine translation (MT) Sennrich et al., 2016a) , image captioning (Xu et al., 2015; Lu et al., 2016) , text summarization (Rush et al., 2015) and other NLP tasks.", "All of the above applications of S2S learning make use of a single encoder.", "Depending on the modality, it can be either a recurrent neural network (RNN) for textual input data, or a convolutional network for images.", "In this work, we focus on a special case of S2S learning with multiple input sequences of possibly different modalities and a single output-generating recurrent decoder.", "We explore various strategies the decoder can employ to attend to the hidden states of the individual encoders.", "The existing approaches to this problem do not explicitly model different importance of the inputs to the decoder Zoph and Knight, 2016) .", "In multimodal MT (MMT), where an image and its caption are on the input, we might expect the caption to be the primary source of information, whereas the image itself would only play a role in output disambiguation.", "In automatic post-editing (APE), where a sentence in a source language and its automatically generated translation are on the input, we might want to attend to the source text only in case the model decides that there is an error in the translation.", "We propose two interpretable attention strategies that take into account the roles of the individual source sequences explicitly-flat and hierarchical attention combination.", "This paper is organized as follows: In Section 2, we review the attention mechanism in single-source S2S learning.", "Section 3 introduces new attention combination strategies.", "In Section 4, we evaluate the proposed models on the MMT and APE tasks.", "We summarize the related work in Section 5, and conclude in Section 6.", "Attentive S2S Learning The attention mechanism in S2S learning allows an RNN decoder to directly access information about the input each time before it emits a symbol.", "Inspired by content-based addressing in Neural Turing Machines (Graves et al., 2014) , the attention mechanism estimates a probability distribution over the encoder hidden states in each decoding step.", "This distribution is used for computing the context vector-the weighted average of the encoder hidden states-as an additional input to the decoder.", "The standard attention model as described by defines the attention energies e ij , attention distribution α ij , and the con-text vector c i in i-th decoder step as: e ij = v a tanh(W a s i + U a h j ), (1) α ij = exp(e ij ) Tx k=1 exp(e ik ) , (2) c i = Tx j=1 α ij h j .", "(3) The trainable parameters W a and U a are projection matrices that transform the decoder and encoder states s i and h j into a common vector space and v a is a weight vector over the dimensions of this space.", "T x denotes the length of the input sequence.", "For the sake of clarity, bias terms 
(applied every time a vector is linearly projected using a weight matrix) are omitted.", "Recently, Lu et al.", "(2016) introduced sentinel gate, an extension of the attentive RNN decoder with LSTM units (Hochreiter and Schmidhuber, 1997) .", "We adapt the extension for gated recurrent units (GRU) , which we use in our experiments: ψ i = σ(W y y i + W s s i−1 ) (4) where W y and W s are trainable parameters, y i is the embedded decoder input, and s i−1 is the previous decoder state.", "Analogically to Equation 1, we compute a scalar energy term for the sentinel: e ψ i = v a tanh W a s i + U (ψ) a (ψ i s i ) (5) where W a , U (ψ) a are the projection matrices, v a is the weight vector, and ψ i s i is the sentinel vector.", "Note that the sentinel energy term does not depend on any hidden state of any encoder.", "The sentinel vector is projected to the same vector space as the encoder state h j in Equation 1.", "The term e ψ i is added as an extra attention energy term to Equation 2 and the sentinel vector ψ i s i is used as the corresponding vector in the summation in Equation 3.", "This technique should allow the decoder to choose whether to attend to the encoder or to focus on its own state and act more like a language model.", "This can be beneficial if the encoder does not contain much relevant information for the current decoding step.", "Attention Combination In S2S models with multiple encoders, the decoder needs to be able to combine the attention information collected from the encoders.", "A widely adopted technique for combining multiple attention models in a decoder is concatenation of the context vectors c (Zoph and Knight, 2016; .", "As mentioned in Section 1, this setting forces the model to attend to each encoder independently and lets the attention combination to be resolved implicitly in the subsequent network layers.", "(1) i , .", ".", ".", ", c (N ) i In this section, we propose two alternative strategies of combining attentions from multiple encoders.", "We either let the decoder learn the α i distribution jointly over all encoder hidden states (flat attention combination) or factorize the distribution over individual encoders (hierarchical combination).", "Both of the alternatives allow us to explicitly compute distribution over the encoders and thus interpret how much attention is paid to each encoder at every decoding step.", "Flat Attention Combination Flat attention combination projects the hidden states of all encoders into a shared space and then computes an arbitrary distribution over the projections.", "The difference between the concatenation of the context vectors and the flat attention combination is that the α i coefficients are computed jointly for all encoders: α (k) ij = exp(e (k) ij ) N n=1 T (n) x m=1 exp e (n) im (6) where T (n) x is the length of the input sequence of the n-th encoder and e (k) ij is the attention energy of the j-th state of the k-th encoder in the i-th decoding step.", "These attention energies are computed as in Equation 1.", "The parameters v a and W a are shared among the encoders, and U a is different for each encoder and serves as an encoder-specific projection of hidden states into a common vector space.", "The states of the individual encoders occupy different vector spaces and can have a different dimensionality, therefore the context vector cannot be computed as their weighted sum.", "We project 197 them into a single space using linear projections: c i = N k=1 T (k) x j=1 α (k) ij U (k) c h (k) j (7) where U (k) c are 
additional trainable parameters.", "The matrices U (k) c project the hidden states into a common vector space.", "This raises a question whether this space can be the same as the one that is projected into in the energy computation using matrices U (k) a in Equation 1, i.e., whether U (k) c = U (k) a .", "In our experiments, we explore both options.", "We also try both adding and not adding the sentinel α (ψ) i U (ψ) c (ψ i s i ) to the context vec- tor.", "Hierarchical Attention Combination The hierarchical attention combination model computes every context vector independently, similarly to the concatenation approach.", "Instead of concatenation, a second attention mechanism is constructed over the context vectors.", "We divide the computation of the attention distribution into two steps: First, we compute the context vector for each encoder independently using Equation 3.", "Second, we project the context vectors (and optionally the sentinel) into a common space (Equation 8), we compute another distribution over the projected context vectors (Equation 9) and their corresponding weighted average (Equation 10): e (k) i = v b tanh(W b s i + U (k) b c (k) i ), (8) β (k) i = exp(e (k) i ) N n=1 exp(e (n) i ) , (9) c i = N k=1 β (k) i U (k) c c (k) i (10) where c Experiments We evaluate the attention combination strategies presented in Section 3 on the tasks of multimodal translation (Section 4.1) and automatic post-editing (Section 4.2).", "The models were implemented using the Neural Monkey sequence-to-sequence learning toolkit (Helcl and Libovický, 2017) .", "12 In both setups, we process the textual input with bidirectional GRU network with 300 units in the hidden state in each direction and 300 units in embeddings.", "For the attention projection space, we use 500 hidden units.", "We optimize the network to minimize the output cross-entropy using the Adam algorithm (Kingma and Ba, 2014) with learning rate 10 −4 .", "Multimodal Translation The goal of multimodal translation is to generate target-language image captions given both the image and its caption in the source language.", "We train and evaluate the model on the Multi30k dataset .", "It consists of 29,000 training instances (images together with English captions and their German translations), 1,014 validation instances, and 1,000 test instances.", "The results are evaluated using the BLEU (Papineni et al., 2002) and ME-TEOR (Denkowski and Lavie, 2011) .", "In our model, the visual input is processed with a pre-trained VGG 16 network (Simonyan and Zisserman, 2014) without further fine-tuning.", "Atten-tion distribution over the visual input is computed from the last convolutional layer of the network.", "The decoder is an RNN with 500 conditional GRU units in the recurrent layer.", "We use byte-pair encoding (Sennrich et al., 2016b) with a vocabulary of 20,000 subword units shared between the textual encoder and the decoder.", "The results of our experiments in multimodal MT are shown in Table 1 .", "We achieved the best results using the hierarchical attention combination without the sentinel mechanism, which also showed the fastest convergence.", "The flat combination strategy achieves similar results eventually.", "Sharing the projections for energy and context vector computation does not improve over the concatenation baseline and slows the training almost prohibitively.", "Multimodal models were not able to surpass the textual baseline (BLEU 33.0).", "Using the conditional GRU units brought an improvement of about 1.5 BLEU 
points on average, with the exception of the concatenation scenario where the performance dropped by almost 5 BLEU points.", "We hypothesize this is caused by the fact the model has to learn the implicit attention combination on multiple places -once in the output projection and three times inside the conditional GRU unit (Firat and Cho, 2016, Equations 10-12) .", "We thus report the scores of the introduced attention combination techniques trained with conditional GRU units and compare them with the concatenation baseline trained with plain GRU units.", "Automatic MT Post-editing Automatic post-editing is a task of improving an automatically generated translation given the source sentence where the translation system is treated as a black box.", "We used the data from the WMT16 APE Task , which consists of 12,000 training, 2,000 validation, and 1,000 test sentence triplets from the IT domain.", "Each triplet contains an English source sentence, an automatically generated German translation of the source sentence, and a manually post-edited German sentence as a reference.", "In case of this dataset, the MT outputs are almost perfect in and only little effort was required to post-edit the sentences.", "The results are evaluated using the humantargeted error rate (HTER) (Snover et al., 2006) and BLEU score (Papineni et al., 2002) .", "Following Libovický et al.", "(2016) , we encode the target sentence as a sequence of edit operations transforming the MT output into the reference.", "By this technique, we prevent the model from paraphrasing the input sentences.", "The decoder is a GRU network with 300 hidden units.", "Unlike in the MMT setup (Section 4.1), we do not use the conditional GRU because it is prone to overfitting on the small dataset we work with.", "The models were able to slightly, but significantly improve over the baseline -leaving the MT output as is (HTER 24.8 ).", "The differences between the attention combination strategies are not significant.", "Related Work Attempts to use S2S models for APE are relatively rare .", "Niehues et al.", "(2016) concatenate both inputs into one long sequence, which forces the encoder to be able to work with both source and target language.", "Their attention is then similar to our flat combination strategy; however, it can only be used for sequential data.", "The best system from the WMT'16 competition (Junczys-Dowmunt and Grundkiewicz, 2016) trains two separate S2S models, one translating from MT output to post-edited targets and the second one from source sentences to post-edited targets.", "The decoders average their output distributions similarly to decoder ensembling.", "The biggest source of improvement in this state-of-theart posteditor came from additional training data generation, rather than from changes in the network architecture.", "Source: a man sleeping in a green room on a couch .", "Reference: ein Mann schläft in einem grünen Raum auf einem Sofa .", "Output with attention: e i n M a n n s c h l ä f t a u f e i n e m g r ü n e n S o f a i n e i n e m g r ü n e n R a u m .", "(1) (2) (3) (1) source, (2) image, (3) sentinel Figure 2 : Visualization of hierarchical attention in MMT.", "Each column in the diagram corresponds to the weights of the encoders and sentinel.", "Note that the despite the overall low importance of the image encoder, it gets activated for the content words.", "Caglayan et al.", "(2016) used an architecture very similar to ours for multimodal translation.", "They made a strong assumption that the network 
can be trained in such a way that the hidden states of the encoder and the convolutional network occupy the same vector space and thus sum the context vectors from both modalities.", "In this way, their multimodal MT system (BLEU 27.82) remained far below the text-only setup (BLEU 32.50).", "New state-of-the-art results on the Multi30k dataset were achieved very recently by Calixto et al.", "(2017) .", "The best-performing architecture uses the last fully-connected layer of the VGG-19 network (Simonyan and Zisserman, 2014) as decoder initialization and only attends to the text encoder hidden states.", "With a stronger monomodal baseline (BLEU 33.7), their multimodal model achieved a BLEU score of 37.1.", "Similarly to Niehues et al.", "(2016) in the APE task, even further improvement was achieved by synthetically extending the dataset.", "Conclusions We introduced two new strategies of combining attention in a multi-source sequence-to-sequence setup.", "Both methods are based on computing a joint distribution over the hidden states of all encoders.", "We conducted experiments with the proposed strategies on multimodal translation and automatic post-editing tasks, and we showed that the flat and hierarchical attention combinations can be applied to these tasks while maintaining scores competitive with previously used techniques.", "Unlike simple context vector concatenation, the introduced combination strategies can be used with the conditional GRU units in the decoder.", "On top of that, the hierarchical combination strategy exhibits faster learning than the other strategies." ] }
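The sentinel extension (Equations 4-5) lets the decoder's own gated state compete in the softmax with the encoder states. A sketch, assuming for simplicity that decoder and encoder states share a dimension so the sentinel vector can enter the weighted sum of Equation 3 directly:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_with_sentinel(s, y, s_prev, H, W_a, U_a, v_a, W_y, W_s, U_psi):
    # Sentinel gate psi (Eq. 4) from the embedded input y and the previous
    # decoder state, plus an extra energy term for psi * s (Eq. 5) that is
    # appended to the encoder energies before the softmax.
    psi = sigmoid(W_y @ y + W_s @ s_prev)
    sentinel = psi * s                               # element-wise gate
    e_enc = np.tanh(H @ U_a.T + s @ W_a.T) @ v_a
    e_sen = v_a @ np.tanh(W_a @ s + U_psi @ sentinel)
    e = np.append(e_enc, e_sen)
    alpha = np.exp(e - e.max())
    alpha /= alpha.sum()
    # the sentinel joins the summation alongside the encoder states
    return alpha @ np.vstack([H, sentinel]), alpha

rng = np.random.default_rng(3)
d = 6                                                # shared state dim (assumption)
H = rng.normal(size=(4, d))
s, s_prev, y = rng.normal(size=d), rng.normal(size=d), rng.normal(size=d)
W_a, U_a, U_psi = (rng.normal(size=(5, d)) for _ in range(3))
v_a = rng.normal(size=5)
W_y, W_s = rng.normal(size=(d, d)), rng.normal(size=(d, d))
c, alpha = attention_with_sentinel(s, y, s_prev, H, W_a, U_a, v_a, W_y, W_s, U_psi)
print(c.shape, alpha.shape)                          # (6,) (5,)

When the sentinel energy dominates, the decoder effectively ignores the encoders and behaves more like a language model, as intended.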
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "4", "4.1", "4.2", "5", "6" ], "paper_header_content": [ "Introduction", "Attentive S2S Learning", "Attention Combination", "Flat Attention Combination", "Hierarchical Attention Combination", "Experiments", "Multimodal Translation", "Automatic MT Post-editing", "Related Work", "Conclusions" ] }
GEM-SciDuet-train-76#paper-1191#slide-1
Multi-Source Sequence-to-Sequence Learning
Any number of input sequences with possibly different modalities. Figure 1: Multimodal translation example. Multimodal translation, automatic post-editing, multi-source machine translation, ...
Any number of input sequences with possibly different modalities. Figure 1: Multimodal translation example. Multimodal translation, automatic post-editing, multi-source machine translation, ...
[]
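The APE setup encodes the post-edited target as edit operations over the MT output, so the model corrects rather than paraphrases. The exact operation inventory of Libovický et al. (2016) is not given here, so the sketch below uses a generic keep/substitute/insert/delete encoding derived with difflib as an approximation:

import difflib

def edit_ops(mt_tokens, ref_tokens):
    # Encode the reference as operations transforming the MT output into it;
    # keep/sub/ins/del is a generic stand-in for the scheme used in the
    # paper's APE decoder.
    ops = []
    for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(
            a=mt_tokens, b=ref_tokens).get_opcodes():
        if tag == "equal":
            ops += [("keep", t) for t in mt_tokens[i1:i2]]
        elif tag == "replace":
            ops += [("sub", t) for t in ref_tokens[j1:j2]]
        elif tag == "insert":
            ops += [("ins", t) for t in ref_tokens[j1:j2]]
        else:  # delete
            ops += [("del", t) for t in mt_tokens[i1:i2]]
    return ops

print(edit_ops("das ist eine Test".split(), "das ist ein Test".split()))
# [('keep', 'das'), ('keep', 'ist'), ('sub', 'ein'), ('keep', 'Test')]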