gem_id | paper_id | paper_title | paper_abstract | paper_content | paper_headers | slide_id | slide_title | slide_content_text | target | references
---|---|---|---|---|---|---|---|---|---|---|
GEM-SciDuet-train-47#paper-1071#slide-7
|
1071
|
Bringing replication and reproduction together with generalisability in NLP: Three reproduction studies for Target Dependent Sentiment Analysis
|
Lack of repeatability and generalisability are two significant threats to continuing scientific development in Natural Language Processing. Language models and learning methods are so complex that scientific conference papers no longer contain enough space for the technical depth required for replication or reproduction. Taking Target Dependent Sentiment Analysis as a case study, we show how recent work in the field has not consistently released code, or described settings for learning methods in enough detail, and lacks comparability and generalisability in train, test or validation data. To investigate generalisability and to enable state of the art comparative evaluations, we carry out the first reproduction studies of three groups of complementary methods and perform the first large-scale mass evaluation on six different English datasets. Reflecting on our experiences, we recommend that future replication or reproduction experiments should always consider a variety of datasets alongside documenting and releasing their methods and published code in order to minimise the barriers to both repeatability and generalisability. We have released our code with a model zoo on GitHub with Jupyter Notebooks to aid understanding and full documentation, and we recommend that others do the same with their papers at submission time through an anonymised GitHub account.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205
],
"paper_content_text": [
"Introduction Repeatable (replicable and/or reproducible 1 ) experimentation is a core tenet of the scientific endeavour.",
"In Natural Language Processing (NLP) research as in other areas, this requires three crucial components: (a) published methods described in sufficient detail (b) a working code base and (c) open dataset(s) to permit training, testing and validation to be reproduced and generalised.",
"In the cognate sub-discipline of corpus linguistics, releasing textual datasets has been a defining feature of the community for many years, enabling multiple comparative experiments to be conducted on a stable basis since the core underlying corpora are community resources.",
"In NLP, with methods becoming increasingly complex with the use of machine learning and deep learning approaches, it is often difficult to describe all settings and configurations in enough detail without releasing code.",
"The work described in this paper emerged from recent efforts at our research centre to reimplement others' work across a number of topics (e.g.",
"text reuse, identity resolution and sentiment analysis) where previously published methods were not easily repeatable because of missing or broken code or dependencies, and/or where methods were not sufficiently well described to enable reproduction.",
"We focus on one sub-area of sentiment analysis to illustrate the extent of these problems, along with our initial recommendations and contributions to address the issues.",
"The area of Target Dependent Sentiment Analysis (TDSA) and NLP in general has been growing rapidly in the last few years due to new neural network methods that require no feature engineering.",
"However it is difficult to keep track of the state of the art as new models are tested on different datasets, thus preventing true comparative evaluations.",
"This is best shown by table 1 where many approaches are evaluated on the SemEval dataset (Pontiki et al., 2014) but not all.",
"This work is licenced under a Creative Commons Attribution 4.0 International Licence. Licence details: http://creativecommons.org/licenses/by/4.0/",
"1 We follow the definitions in Antske Fokkens' guest blog post \"replication (obtaining the same results using the same experiment) as well as reproduction (reach the same conclusion through different means)\" from http://coling2018.org/slowly-growing-offspring-zigglebottom-anno-2017-guest-post/",
"Datasets can vary by domain (e.g.",
"product), type (social media, review), or medium (written or spoken), and to date there has been no comparative evaluation of methods from these multiple classes.",
"Our primary and secondary contributions, therefore, are to carry out the first study that reports results across all three different dataset classes, and to release an open source code framework implementing three complementary groups of TDSA methods.",
"In terms of reproducibility via code release, recent TDSA papers have generally been very good with regards to publishing code alongside their papers (Mitchell et al., 2013; Zhang et al., 2016; Liu and Zhang, 2017; Wang et al., 2017) but other papers have not released code (Wang et al., 2016; Tay et al., 2017) .",
"In some cases, the code was initially made available, then removed, and is now back online (Tang et al., 2016a) .",
"Unfortunately, in some cases even when code has been published, different results have been obtained relative to the original paper.",
"This can be seen when Chen et al.",
"(2017) used the code and embeddings in Tang et al.",
"(2016b) they observe different results.",
"Similarly, when others (Tay et al., 2017; Chen et al., 2017) attempt to replicate the experiments of Tang et al.",
"(2016a) they also produce different results to the original authors.",
"Our observations within this one sub-field motivate the need to investigate further and understand how such problems can be avoided in the future.",
"In some cases, when code has been released, it is difficult to use which could explain why the results were not reproduced.",
"Of course, we would not expect researchers to produce industrial strength code, or provide continuing free ongoing support for multiple years after publication, but the situation is clearly problematic for the development of the new field in general.",
"In this paper, we therefore reproduce three papers chosen as they employ widely differing methods: Neural Pooling (NP) , NP with dependency parsing (Wang et al., 2017) , and RNN (Tang et al., 2016a) , as well as having been applied largely to different datasets.",
"At the end of the paper, we reflect on bringing together elements of repeatability and generalisability which we find are crucial to NLP and data science based disciplines more widely to enable others to make use of the science created.",
"Related work Reproducibility and replicability have long been key elements of the scientific method, but have been gaining renewed prominence recently across a number of disciplines with attention being given to a 'reproducibility crisis'.",
"For example, in pharmaceutical research, as little as 20-25% of papers were found to be replicable (Prinz et al., 2011) .",
"The problem has also been recognised in computer science in general (Collberg and Proebsting, 2016) .",
"Reproducibility and replicability have been researched for some time in Information Retrieval (IR), since the Grid@CLEF pilot track (Ferro and Harman, 2009).",
"The aim was to create a 'grid of points' where a point defined the performance of a particular IR system using certain pre-processing techniques on a defined dataset.",
"Louridas and Gousios (2012) looked at reproducibility in Software Engineering after trying to replicate another author's results and concluded with a list of requirements for papers to be reproducible: (a) All data related to the paper, (b) All code required to reproduce the paper and (c) Documentation for the code and data.",
"Fokkens et al.",
"(2013) looked at reproducibility in WordNet similarity and Named Entity Recognition finding five key aspects that cause experimental variation and therefore need to be clearly stated: (a) pre-processing, (b) experimental setup, (c) versioning, (d) system output, (e) system variation.",
"In Twitter sentiment analysis, Sygkounas et al.",
"(2016) stated the need for using the same library versions and datasets when replicating work.",
"Different methods of releasing datasets and code have been suggested.",
"Ferro and Harman (2009) defined a framework (CIRCO) that enforces a pre-processing pipeline where data can be extracted at each stage therefore facilitating a validation step.",
"They stated a mechanism for storing results, dataset and pre-processed data 2 .",
"Louridas and Gousios (2012) suggested the use of a virtual machine alongside papers to bundle the data and code together, while most state the advantages of releasing source code (Fokkens et al., 2013; Potthast et al., 2016; Sygkounas et al., 2016) .",
"The act of reproducing or replicating results is not just for validating research but to also show how it can be improved.",
"Ferro and Silvello (2016) followed up their initial research and were able to analyse which pre-processing techniques were important on a French monolingual dataset and how the different techniques affected each other given an IR system.",
"Fokkens et al.",
"(2013) showed how changes in the five key aspects affected results.",
"The closest related work to our reproducibility study is that of Marrese-Taylor and Matsuo (2017), in which they replicate three different syntax-based aspect extraction methods.",
"They found that parameter tuning was very important; however, using different pre-processing pipelines such as Stanford's CoreNLP did not have a consistent effect on the results.",
"They found that the methods stated in the original papers are not detailed enough to replicate the study as evidenced by their large results differential.",
"Dashtipour et al.",
"(2016) undertook a replication study in sentiment prediction, however this was at the document level and on different datasets and languages to the originals.",
"In other areas of (aspect-based) sentiment analysis, releasing code for published systems has not been a high priority, e.g.",
"in SemEval 2016 task 5 (Pontiki et al., 2016) only 1 out of 21 papers released their source code.",
"In IR, specific reproducible research tracks have been created 3 and we are pleased to see the same happening at COLING 2018 4 .",
"Turning now to the focus of our investigations, Target Dependent sentiment analysis (TDSA) research (Nasukawa and Yi, 2003) arose as an extension to the coarse grained analysis of document level sentiment analysis (Pang et al., 2002; Turney, 2002) .",
"Since its inception, papers have applied different methods such as feature based (Kiritchenko et al., 2014) , Recursive Neural Networks (RecNN) (Dong et al., 2014) , Recurrent Neural Networks (RNN) (Tang et al., 2016a) , attention applied to RNN (Wang et al., 2016; Chen et al., 2017; Tay et al., 2017) , Neural Pooling (NP) Wang et al., 2017) , RNN combined with NP (Zhang et al., 2016) , and attention based neural networks (Tang et al., 2016b) .",
"Others have tackled TDSA as a joint task with target extraction, thus treating it as a sequence labelling problem.",
"Mitchell et al.",
"(2013) carried out this task using Conditional Random Fields (CRF), and this work was then extended using a neural CRF .",
"Both approaches found that combining the two tasks did not improve results compared to treating the two tasks separately, apart from when considering POS and NEG, where the joint task performs better.",
"Finally, created an attention RNN for this task which was evaluated on two very different datasets containing written and spoken (video-based) reviews where the domain adaptation between the two shows some promise.",
"Overall, within the field of sentiment analysis there are other granularities such as sentence level (Socher et al., 2013) , topic (Augenstein et al., 2018) , and aspect (Wang et al., 2016; Tay et al., 2017) .",
"Aspect-level sentiment analysis relates to identifying the sentiment of (potentially multiple) topics in the same text although this can be seen as a similar task to TDSA.",
"However the clear distinction between aspect and TDSA is that TDSA requires the target to be mentioned in the text itself while aspect-level employs a conceptual category with potentially multiple related instantiations in the text.",
"Tang et al.",
"(2016a) created a Target Dependent LSTM (TDLSTM) which encompassed two LSTMs either side of the target word, then improved the model by concatenating the target vector to the input embeddings to create a Target Connected LSTM (TCLSTM).",
"Adding attention has become very popular recently.",
"Tang et al.",
"(2016b) showed the speed and accuracy improvements of using multiple attention layers only over LSTM based methods, however they found that it could not model complex sentences e.g.",
"negations.",
"Liu and Zhang (2017) showed that adding attention to a Bi-directional LSTM (BLSTM) improves the results as it takes the importance of each word into account with respect to the target.",
"Chen et al.",
"(2017) also combined a BLSTM and attention, however they used multiple attention layers and combined the results using a Gated Recurrent Unit (GRU) which they called Recurrent Attention on Memory (RAM), and they found this method to allow models to better understand more complex sentiment for each comparison.",
"used neural pooling features e.g.",
"max, min, etc of the word embeddings of the left and right context of the target word, the target itself, and the whole Tweet.",
"They inputted the features into a linear SVM, and showed the importance of using the left and right context for the first time.",
"They found in their study that using a combination of Word2Vec embeddings and sentiment embeddings performed best alongside using sentiment lexicons to filter the embedding space.",
"Other studies have adopted more linguistic approaches.",
"Wang et al.",
"(2017) extended the work of by using the dependency linked words from the target.",
"Dong et al.",
"(2014) used the dependency tree to create a Recursive Neural Network (RecNN) inspired by Socher et al.",
"(2013) but compared to Socher et al.",
"(2013) they also utilised the dependency tags to create an Adaptive RecNN (ARecNN).",
"Critically, the methods reported above have not been applied to the same datasets, therefore a true comparative evaluation between the different methods is somewhat difficult.",
"This has serious implications for generalisability of methods.",
"We correct that limitation in our study.",
"There are two papers taking a similar approach to our work in terms of generalisability although they do not combine them with the reproduction issues that we highlight.",
"First, Chen et al.",
"(2017) compared results across Se-mEval's laptop and restaurant reviews in English (Pontiki et al., 2014) , a Twitter dataset (Dong et al., 2014) and their own Chinese news comments dataset.",
"They did perform a comparison across different languages, domains, corpora types, and different methods; SVM with features (Kiritchenko et al., 2014) , Rec-NN (Dong et al., 2014) , TDLSTM (Tang et al., 2016a) , Memory Neural Network (MNet) (Tang et al., 2016b) and their own attention method.",
"However, the Chinese dataset was not released, and the methods were not compared across all datasets.",
"By contrast, we compare all methods across all datasets, using techniques that are not just from the Recurrent Neural Network (RNN) family.",
"A second paper, by Barnes et al.",
"(2017) compares seven approaches to (document and sentence level) sentiment analysis on six benchmark datasets, but does not systematically explore reproduction issues as we do in our paper.",
"Datasets used in our experiments We are evaluating our models over six different English datasets deliberately chosen to represent a range of domains, types and mediums.",
"As highlighted above, previous papers tend to only carry out evaluations on one or two datasets which limits the generalisability of their results.",
"In this paper, we do not consider the quality or inter-annotator agreement levels of these datasets but it has been noted that some datasets may have issues here.",
"For example, Pavlopoulos and Androutsopoulos (2014) point out that the Hu and Liu (2004) dataset does not state their inter-annotator agreement scores nor do they have aspect terms that express neutral opinion.",
"We only use a subset of the English datasets available.",
"We do this for two reasons.",
"First, the time it takes to write parsers and run the models.",
"Second, we only used datasets that contain three distinct sentiments (Wilson (2008) only has two).",
"From the datasets we have used, we have only had issue with parsing Wang et al.",
"(2017) where the annotations for the first set of the data contains the target span but the second set does not.",
"This makes it impossible to use the second set of annotations, forcing us to use only a subset of the dataset.",
"As an example of this: \"Got rid of bureaucrats 'and we put that money, into 9000 more doctors and nurses'... to turn the doctors into bureaucrats#BattleForNumber10\"; in that Tweet, 'bureaucrats' was annotated as negative, but it is not stated whether it was the first or second instance of 'bureaucrats', since the annotation does not use target spans.",
"As we can see from table 2, generally the social media datasets (Twitter and YouTube) contain more targets per sentence with the exception of Dong et al.",
"(2014) and Mitchell et al.",
"(2013) .",
"The only dataset that has a small difference between the number of unique sentiments per sentence is the Wang et al.",
"(2017) Reproduction studies In the following subsections, we present the three different methods that we are reproducing and how their results differ from the original analysis.",
"In all of the experiments below, we lower case all text and tokenise using Twokenizer (Gimpel et al., 2011) .",
"This was done as the datasets originate from Twitter and this pre-processing method was to some extent stated in and assumed to be used across the others as they do not explicitly state how they pre-process in the papers.",
"Reproduction of Vo and Zhang (2015) Vo and Zhang (2015) created the first NP method for TDSA.",
"It takes the word vectors of the left, right, target word, and full tweet/sentence/text contexts and performs max, min, average, standard deviation, and product pooling over these contexts to create a feature vector as input to the Support Vector Machine (SVM). For each of the experiments below we used the following configurations unless otherwise stated: we performed 5 fold stratified cross validation, features were scaled using Max Min scaling before being input into the SVM, and we used the respective C-Values for the SVM stated in the paper for each of the models.",
"One major difficulty with the description of the method in the paper and its re-implementation is handling the issue of the same target appearing multiple times, as originally raised by Wang et al.",
"(2017) .",
"As the method requires context with regards to the target word, if there is more than one appearance of the target word then the method does not specify which to use.",
"We therefore took the approach of Wang et al.",
"(2017) and found all of the features for each appearance and performed median pooling over features.",
"This change could explain the subtle differences between the results we report and those of the original paper.",
"used three different sentiment lexicons: MPQA 5 (Wilson et al., 2005) , NRC 6 (Mohammad and Turney, 2010) , and HL 7 (Hu and Liu, 2004) .",
"We found a small difference in word counts between their reported statistics for the MPQA lexicons and those we performed ourselves, as can be seen in the bold numbers in table 3.",
"Originally, we assumed that a word can only occur in one sentiment class within the same lexicon, and this resulted in differing counts for all lexicons.",
"This distinction is not clearly documented in the paper or code.",
"However, our assumption turned out to be incorrect, giving a further illustration of why detailed descriptions and documentation of all decisions is important.",
"We ran the same experiment to show the effectiveness of sentiment lexicons; the results can be seen in table 4.",
"We can clearly see that there are some differences, not just in the accuracy scores but also in the rank of the sentiment lexicons.",
"We found that using just HL was best and that MPQA does help performance compared to the Target-dep baseline, which differs from the original findings.",
"Since we found that using just HL performed best, the rest of the results will apply the Target-dep+ method using HL, and using HL & MPQA to show the effect of using the lexicons that we and the original authors, respectively, found best.",
"The original authors tested their methods using three different word vectors: 1.",
"Word2Vec trained by on 5 million Tweets containing emoticons (W2V), 2.",
"Sentiment Specific Word Embedding (SSWE) from , and 3.",
"W2V and SSWE combined.",
"Neither of these word embeddings is available from the original authors, as the embeddings were never released and the link to the embeddings no longer works 8 .",
"However, the embeddings were released through Wang et al.",
"(2017) code base 9 following a request for the code from the original authors.",
"Figure 1 shows the results of the different word embeddings across the different methods.",
"The main finding we see is that SSWE by themselves are not as informative as W2V vectors, which differs from the original findings.",
"However we agree that combining the two vectors is beneficial and that the rank of methods is the same in our observations.",
"Sentiment Lexicons Word Counts Scaling and Final Model comparison We test all of the methods on the test data set of Dong et al.",
"(2014) and show the difference between the original and reproduced models in figure 2.",
"Finally, we show the effect of scaling using Max Min and not scaling the data.",
"As stated before, we have been using Max Min scaling on the NP features; however, scaling was not mentioned in the original paper.",
"The library they were using, LibLinear (Fan et al., 2008), suggests in its practical guide (Hsu et al., 2003) scaling each feature to [0, 1], but this was not re-iterated in the original paper.",
"We are using scikit-learn's (Pedregosa et al., 2011) LinearSVC which is a wrapper of LibLinear, hence making it appropriate to use here.",
"As can be seen in figure 2, not scaling can affect the results by around one-third.",
"Reproduction of Wang et al.",
"(2017) Wang et al.",
"(2017) extended the NP work of and instead of using the full tweet/sentence/text contexts they used the full dependency graph of the target word.",
"Thus, they created three different methods: 1.",
"TDParse- uses only the full dependency graph context, 2.",
"TDParse the features of TDParse- and the left and right contexts, and 3.",
"TDParse+ the features of TDParse and LS and RS contexts.",
"The experiments are performed on the Dong et al.",
"(2014) and Wang et al.",
"(2017) Twitter datasets where we train and test on the previously specified train and test splits.",
"We also scale our features using Max Min scaling before inputting into the SVM.",
"We used all three sentiment lexicons as in the original paper, and we found the C-Value by performing 5 fold stratified cross validation on the training datasets.",
"The results of these experiments can be seen in figure 3 10 .",
"As found with the replication results, scaling is very important but is typically overlooked in reporting.",
"8 http://ir.hit.edu.cn/˜dytang/ 9 https://github.com/bluemonk482/tdparse 10 For the Election Twitter dataset, TDParse+ results were never reported in the original paper.",
"Tang et al.",
"(2016a) was the first to use LSTMs specifically for TDSA.",
"They created three different models: 1.",
"LSTM a standard LSTM that runs over the length of the sentence and takes no target information into account, 2.",
"TDLSTM runs two LSTMs, one over the left and the other over the right context of the target word and concatenates the output of the two, and 3.",
"TCLSTM the same as the TDLSTM method but each input word vector is concatenated with the vector of the target word.",
"All of the methods' outputs are fed into a softmax activation function.",
"The experiments are performed on the Dong et al.",
"(2014) dataset where we train and test on the specified splits.",
"For the LSTMs we initialised the weights using the uniform distribution U(-0.003, 0.003), used Stochastic Gradient Descent (SGD) with a learning rate of 0.01 and cross entropy loss, padded and truncated sequences to the length of the maximum sequence in the training dataset as stated in the original paper, and we did not \"set the clipping threshold of softmax layer as 200\" (Tang et al., 2016a) as we were unsure what this meant.",
"With regards to the number of epochs trained, we used early stopping with a patience of 10 and allowed 300 epochs.",
"Within their experiments they used SSWE and Glove Twitter vectors 11 (Pennington et al., 2014) .",
"As the paper being reproduced does not define the number of epochs they trained for, we use early stopping.",
"Thus, for early stopping, we need to split the training data into train and validation sets to know when to stop.",
"As it has been shown by Reimers and Gurevych (2017) that the random seed statistically significantly changes the results of experiments we ran each model over each word embedding thirty times, using a different seed value but keeping the same stratified train and validation split, and reported the results on the same test data as the original paper.",
"As can be seen in Figure 4, the initial seed value makes a large difference, more so for the smaller embeddings.",
"In table 5, we show the difference between our mean and maximum result and the original result for each model using the 200 dimension Glove Twitter vectors.",
"Even though the mean result is quite different from the original, the maximum is much closer.",
"Our results generally agree with their results on the ranking of the word vectors and the embeddings.",
"Overall, we were able to reproduce the results of all three papers.",
"However for the neural network/deep learning approach of Tang et al.",
"(2016a) we agree with Reimers and Gurevych (2017) that reporting multiple runs of the system over different seed values is required as the single performance scores can be misleading, which could explain why previous papers obtained different results to the original for the TDLSTM method (Chen et al., 2017; Tay et al., 2017) .",
"Mass Evaluation For all of the methods we pre-processed the text by lower casing and tokenising using Twokenizer (Gimpel et al., 2011) , and we used all three sentiment lexicons where applicable.",
"We found the best word vectors from SSWE and the common crawl 42B 300 dimension Glove vectors by five fold stratified cross validation for the NP methods and the highest accuracy on the validation set for the LSTM methods.",
"We chose these word vectors as they have very different sizes (50 and 300), also they have been shown to perform well in different text types; SSWE for social media (Tang et al., 2016a) and Glove for reviews (Chen et al., 2017) .",
"To make the experiments quicker and computationally less expensive, we filtered out all words from the word vectors that did not appear in the train and test datasets; this is equivalent, with respect to word coverage, to using all words.",
"Finally we only reported results for the LSTM methods with one seed value and not multiple due to time constraints.",
"The results of the methods using the best found word vectors on the test sets can be seen in table 6.",
"We find that the TDParse methods generally perform best, but they only clearly outperform the other non-dependency-parser methods on the YouTuBean dataset.",
"We hypothesise that this is due to the dataset containing, on average, deeper constituency trees, which could be interpreted as more complex sentences on average.",
"This could be due to it being from the spoken medium compared to the rest of the datasets which are written.",
"We also find that using a sentiment lexicon is almost always beneficial, but only by a small amount.",
"Within the LSTM based methods the TDLSTM method generally performs the best indicating that the extra target information that the TCLSTM method contains is not needed, but we believe this needs further analysis.",
"We can conclude that the simpler NP models perform well across domain, type and medium and that even without language specific tools and lexicons they are competitive to the more complex LSTM based methods.",
"Discussion and conclusion The fast-developing subfield of TDSA has so far lacked a large-scale comparative mass evaluation of approaches using different models and datasets.",
"In this paper, we address this generalisability limitation and perform the first direct comparison and reproduction of three different approaches for TDSA.",
"While carrying out these reproductions, we have noted and described above the many emerging issues in previous research related to incomplete descriptions of methods and settings, patchy release of code, and lack of comparative evaluations.",
"This is natural in a developing field, but it is crucial for ongoing development within NLP in general that improved repeatability practices are adopted.",
"The practices adopted in our case studies are to reproduce the methods in open source code, adopt only open data, provide format conversion tools to ingest the different data formats, and describe and document all settings via the code and Jupyter Notebooks (released initially in anonymous form at submission time) 12 .",
"We therefore argue that papers should not consider repeatability (replication or reproduction) or generalisability alone, but these two key tenets of scientific practice should be brought together.",
"In future work, we aim to extend our reproduction framework further, and extend the comparative evaluation to languages other than English.",
"This will necessitate changes in the framework since we expect that dependency parsers and sentiment lexicons will be unavailable for specific languages.",
"Also we will explore through error analysis in which situations different neural network architectures perform best."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"4.1.3",
"4.2",
"5",
"6"
],
"paper_header_content": [
"Introduction",
"Related work",
"Datasets used in our experiments",
"Reproduction studies",
"Reproduction of Vo and Zhang (2015)",
"Scaling and Final Model comparison",
"Reproduction of Wang et al. (2017)",
"Mass Evaluation",
"Discussion and conclusion"
]
}
|
GEM-SciDuet-train-47#paper-1071#slide-7
|
Vo et al 2015 Reproduction Result
|
Scaling features is important - 15-25% difference
|
Scaling features is important - 15-25% difference
|
[] |
GEM-SciDuet-train-47#paper-1071#slide-8
|
1071
|
Bringing replication and reproduction together with generalisability in NLP: Three reproduction studies for Target Dependent Sentiment Analysis
|
Lack of repeatability and generalisability are two significant threats to continuing scientific development in Natural Language Processing. Language models and learning methods are so complex that scientific conference papers no longer contain enough space for the technical depth required for replication or reproduction. Taking Target Dependent Sentiment Analysis as a case study, we show how recent work in the field has not consistently released code, or described settings for learning methods in enough detail, and lacks comparability and generalisability in train, test or validation data. To investigate generalisability and to enable state of the art comparative evaluations, we carry out the first reproduction studies of three groups of complementary methods and perform the first large-scale mass evaluation on six different English datasets. Reflecting on our experiences, we recommend that future replication or reproduction experiments should always consider a variety of datasets alongside documenting and releasing their methods and published code in order to minimise the barriers to both repeatability and generalisability. We have released our code with a model zoo on GitHub with Jupyter Notebooks to aid understanding and full documentation, and we recommend that others do the same with their papers at submission time through an anonymised GitHub account.
|
{
"paper_content_id": [
0, 1, 2, ..., 205
],
"paper_content_text": [
"Introduction Repeatable (replicable and/or reproducible 1 ) experimentation is a core tenet of the scientific endeavour.",
"In Natural Language Processing (NLP) research as in other areas, this requires three crucial components: (a) published methods described in sufficient detail (b) a working code base and (c) open dataset(s) to permit training, testing and validation to be reproduced and generalised.",
"In the cognate sub-discipline of corpus linguistics, releasing textual datasets has been a defining feature of the community for many years, enabling multiple comparative experiments to be conducted on a stable basis since the core underlying corpora are community resources.",
"In NLP, with methods becoming increasingly complex with the use of machine learning and deep learning approaches, it is often difficult to describe all settings and configurations in enough detail without releasing code.",
"The work described in this paper emerged from recent efforts at our research centre to reimplement others' work across a number of topics (e.g.",
"text reuse, identity resolution and sentiment analysis) where previously published methods were not easily repeatable because of missing or broken code or dependencies, and/or where methods were not sufficiently well described to enable reproduction.",
"We focus on one sub-area of sentiment analysis to illustrate the extent of these problems, along with our initial recommendations and contributions to address the issues.",
"The area of Target Dependent Sentiment Analysis (TDSA), like NLP in general, has been growing rapidly in the last few years due to new neural network methods that require no feature engineering.",
"However it is difficult to keep track of the state of the art as new models are tested on different datasets, thus preventing true comparative evaluations.",
"This is best shown by table 1, where many approaches are evaluated on the SemEval dataset (Pontiki et al., 2014) but not all.",
"We follow the definitions in Antske Fokkens' guest blog post: \"replication (obtaining the same results using the same experiment) as well as reproduction (reach the same conclusion through different means)\" from http://coling2018.org/slowly-growing-offspring-zigglebottom-anno-2017-guest-post/",
"Datasets can vary by domain (e.g.",
"product), type (social media, review), or medium (written or spoken), and to date there has been no comparative evaluation of methods from these multiple classes.",
"Our primary and secondary contributions therefore, are to carry out the first study that reports results across all three different dataset classes, and to release a open source code framework implementing three complementary groups of TDSA methods.",
"In terms of reproducibility via code release, recent TDSA papers have generally been very good with regards to publishing code alongside their papers (Mitchell et al., 2013; Zhang et al., 2016; Liu and Zhang, 2017; Wang et al., 2017) but other papers have not released code (Wang et al., 2016; Tay et al., 2017) .",
"In some cases, the code was initially made available, then removed, and is now back online (Tang et al., 2016a) .",
"Unfortunately, in some cases even when code has been published, different results have been obtained relative to the original paper.",
"This can be seen when Chen et al.",
"(2017) used the code and embeddings in Tang et al.",
"(2016b) they observe different results.",
"Similarly, when others (Tay et al., 2017; Chen et al., 2017) attempt to replicate the experiments of Tang et al.",
"(2016a) they also produce different results to the original authors.",
"Our observations within this one sub-field motivates the need to investigate further and understand how such problems can be avoided in the future.",
"In some cases, when code has been released, it is difficult to use which could explain why the results were not reproduced.",
"Of course, we would not expect researchers to produce industrial strength code, or provide continuing free ongoing support for multiple years after publication, but the situation is clearly problematic for the development of the new field in general.",
"In this paper, we therefore reproduce three papers chosen as they employ widely differing methods: Neural Pooling (NP) , NP with dependency parsing (Wang et al., 2017) , and RNN (Tang et al., 2016a) , as well as having been applied largely to different datasets.",
"At the end of the paper, we reflect on bringing together elements of repeatability and generalisability which we find are crucial to NLP and data science based disciplines more widely to enable others to make use of the science created.",
"Related work Reproducibility and replicability have long been key elements of the scientific method, but have been gaining renewed prominence recently across a number of disciplines with attention being given to a 'reproducibility crisis'.",
"For example, in pharmaceutical research, as little as 20-25% of papers were found to be replicable (Prinz et al., 2011) .",
"The problem has also been recognised in computer science in general (Collberg and Proebsting, 2016) .",
"Reproducibility and replicability have been researched for sometime in Information Retrieval (IR) since the Grid@CLEF pilot track (Ferro and Harman, 2009 ).",
"The aim was to create a 'grid of points' where a point defined the performance of a particular IR system using certain pre-processing techniques on a defined dataset.",
"Louridas and Gousios (2012) looked at reproducibility in Software Engineering after trying to replicate another authors results and concluded with a list of requirements for papers to be reproducible: (a) All data related to the paper, (b) All code required to reproduce the paper and (c) Documentation for the code and data.",
"Fokkens et al.",
"(2013) looked at reproducibility in WordNet similarity and Named Entity Recognition finding five key aspects that cause experimental variation and therefore need to be clearly stated: (a) pre-processing, (b) experimental setup, (c) versioning, (d) system output, (e) system variation.",
"In Twitter sentiment analysis, Sygkounas et al.",
"(2016) stated the need for using the same library versions and datasets when replicating work.",
"Different methods of releasing datasets and code have been suggested.",
"Ferro and Harman (2009) defined a framework (CIRCO) that enforces a pre-processing pipeline where data can be extracted at each stage therefore facilitating a validation step.",
"They stated a mechanism for storing results, dataset and pre-processed data 2 .",
"Louridas and Gousios (2012) suggested the use of a virtual machine alongside papers to bundle the data and code together, while most state the advantages of releasing source code (Fokkens et al., 2013; Potthast et al., 2016; Sygkounas et al., 2016) .",
"The act of reproducing or replicating results is not just for validating research but to also show how it can be improved.",
"Ferro and Silvello (2016) followed up their initial research and were able to analyse which pre-processing techniques were important on a French monolingual dataset and how the different techniques affected each other given an IR system.",
"Fokkens et al.",
"(2013) showed how changes in the five key aspects affected results.",
"The closest related work to our reproducibility study is that of Marrese-Taylor and Matsuo (2017), in which they replicate three different syntactic based aspect extraction methods.",
"They found that parameter tuning was very important however using different pre-processing pipelines such as Stanford's CoreNLP did not have a consistent effect on the results.",
"They found that the methods stated in the original papers are not detailed enough to replicate the study as evidenced by their large results differential.",
"Dashtipour et al.",
"(2016) undertook a replication study in sentiment prediction, however this was at the document level and on different datasets and languages to the originals.",
"In other areas of (aspectbased) sentiment analysis, releasing code for published systems has not been a high priority, e.g.",
"in SemEval 2016 task 5 (Pontiki et al., 2016) only 1 out of 21 papers released their source code.",
"In IR, specific reproducible research tracks have been created 3 and we are pleased to see the same happening at COLING 2018 4 .",
"Turning now to the focus of our investigations, Target Dependent sentiment analysis (TDSA) research (Nasukawa and Yi, 2003) arose as an extension to the coarse grained analysis of document level sentiment analysis (Pang et al., 2002; Turney, 2002) .",
"Since its inception, papers have applied different methods such as feature based (Kiritchenko et al., 2014) , Recursive Neural Networks (RecNN) (Dong et al., 2014) , Recurrent Neural Networks (RNN) (Tang et al., 2016a) , attention applied to RNN (Wang et al., 2016; Chen et al., 2017; Tay et al., 2017) , Neural Pooling (NP) Wang et al., 2017) , RNN combined with NP (Zhang et al., 2016) , and attention based neural networks (Tang et al., 2016b) .",
"Others have tackled TDSA as a joint task with target extraction, thus treating it as a sequence labelling problem.",
"Mitchell et al.",
"(2013) carried out this task using Conditional Random Fields (CRF), and this work was then extended using a neural CRF .",
"Both approaches found that combining the two tasks did not improve results compared to treating the two tasks separately, except for POS and NEG, where the joint task performs better.",
"Finally, created an attention RNN for this task which was evaluated on two very different datasets containing written and spoken (video-based) reviews where the domain adaptation between the two shows some promise.",
"Overall, within the field of sentiment analysis there are other granularities such as sentence level (Socher et al., 2013) , topic (Augenstein et al., 2018) , and aspect (Wang et al., 2016; Tay et al., 2017) .",
"Aspect-level sentiment analysis relates to identifying the sentiment of (potentially multiple) topics in the same text although this can be seen as a similar task to TDSA.",
"However the clear distinction between aspect and TDSA is that TDSA requires the target to be mentioned in the text itself while aspect-level employs a conceptual category with potentially multiple related instantiations in the text.",
"Tang et al.",
"(2016a) created a Target Dependent LSTM (TDLSTM) which encompassed two LSTMs either side of the target word, then improved the model by concatenating the target vector to the input embeddings to create a Target Connected LSTM (TCLSTM).",
"Adding attention has become very popular recently.",
"Tang et al.",
"(2016b) showed the speed and accuracy improvements of using multiple attention layers only over LSTM based methods, however they found that it could not model complex sentences e.g.",
"negations.",
"Liu and Zhang (2017) showed that adding attention to a Bi-directional LSTM (BLSTM) improves the results as it takes the importance of each word into account with respect to the target.",
"Chen et al.",
"(2017) also combined a BLSTM and attention, however they used multiple attention layers and combined the results using a Gated Recurrent Unit (GRU) which they called Recurrent Attention on Memory (RAM), and they found this method to allow models to better understand more complex sentiment for each comparison.",
"Vo and Zhang (2015) used neural pooling features, e.g.",
"max, min, etc of the word embeddings of the left and right context of the target word, the target itself, and the whole Tweet.",
"They inputted the features into a linear SVM, and showed the importance of using the left and right context for the first time.",
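As a concrete illustration of the neural-pooling features described above, here is a minimal sketch (our own code and naming, not the authors' implementation) that pools word vectors over the left context, the target, the right context, and the full text, and concatenates the pooled statistics into one feature vector. The fallback of reusing the target vector when a context is empty is our own assumption.

```python
def pool(vectors):
    """Apply max, min, mean, std and product pooling element-wise
    over a list of word vectors (each a list of floats)."""
    dims = len(vectors[0])
    cols = [[v[d] for v in vectors] for d in range(dims)]
    feats = []
    for col in cols:
        mean = sum(col) / len(col)
        var = sum((x - mean) ** 2 for x in col) / len(col)
        prod = 1.0
        for x in col:
            prod *= x
        feats.extend([max(col), min(col), mean, var ** 0.5, prod])
    return feats

def target_features(tokens, target_idx, embed):
    """Concatenate pooled features for the left, target, right and
    full-text contexts; these would then be fed to a linear SVM."""
    vecs = [embed[t] for t in tokens]
    left = vecs[:target_idx] or [vecs[target_idx]]    # fallback: our assumption
    right = vecs[target_idx + 1:] or [vecs[target_idx]]
    return pool(left) + pool([vecs[target_idx]]) + pool(right) + pool(vecs)
```

With 2-dimensional toy embeddings each context yields 10 pooled values, so the full feature vector has 40 entries.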
"They found in their study that using a combination of Word2Vec embeddings and sentiment embeddings performed best alongside using sentiment lexicons to filter the embedding space.",
"Other studies have adopted more linguistic approaches.",
"Wang et al.",
"(2017) extended the work of Vo and Zhang (2015) by using the dependency linked words from the target.",
"Dong et al.",
"(2014) used the dependency tree to create a Recursive Neural Network (RecNN) inspired by Socher et al.",
"(2013) but compared to Socher et al.",
"(2013) they also utilised the dependency tags to create an Adaptive RecNN (ARecNN).",
"Critically, the methods reported above have not been applied to the same datasets, therefore a true comparative evaluation between the different methods is somewhat difficult.",
"This has serious implications for generalisability of methods.",
"We correct that limitation in our study.",
"There are two papers taking a similar approach to our work in terms of generalisability although they do not combine them with the reproduction issues that we highlight.",
"First, Chen et al.",
"(2017) compared results across Se-mEval's laptop and restaurant reviews in English (Pontiki et al., 2014) , a Twitter dataset (Dong et al., 2014) and their own Chinese news comments dataset.",
"They did perform a comparison across different languages, domains, corpora types, and different methods; SVM with features (Kiritchenko et al., 2014) , Rec-NN (Dong et al., 2014) , TDLSTM (Tang et al., 2016a) , Memory Neural Network (MNet) (Tang et al., 2016b) and their own attention method.",
"However, the Chinese dataset was not released, and the methods were not compared across all datasets.",
"By contrast, we compare all methods across all datasets, using techniques that are not just from the Recurrent Neural Network (RNN) family.",
"A second paper, by Barnes et al.",
"(2017) compares seven approaches to (document and sentence level) sentiment analysis on six benchmark datasets, but does not systematically explore reproduction issues as we do in our paper.",
"Datasets used in our experiments We are evaluating our models over six different English datasets deliberately chosen to represent a range of domains, types and mediums.",
"As highlighted above, previous papers tend to only carry out evaluations on one or two datasets which limits the generalisability of their results.",
"In this paper, we do not consider the quality or inter-annotator agreement levels of these datasets but it has been noted that some datasets may have issues here.",
"For example, Pavlopoulos and Androutsopoulos (2014) point out that the Hu and Liu (2004) dataset does not state their inter-annotator agreement scores nor do they have aspect terms that express neutral opinion.",
"We only use a subset of the available English datasets, for two reasons.",
"First, the time it takes to write parsers and run the models.",
"Second, we only used datasets that contain three distinct sentiments (Wilson (2008) only has two).",
"From the datasets we have used, we have only had issue with parsing Wang et al.",
"(2017) where the annotations for the first set of the data contains the target span but the second set does not.",
"This makes it impossible to use the second set of annotations, forcing us to use only a subset of the dataset.",
"As an example of this: \"Got rid of bureaucrats 'and we put that money, into 9000 more doctors and nurses'... to turn the doctors into bureaucrats#BattleForNumber10\"; in that Tweet 'bureaucrats' was annotated as negative, but it is not stated whether it was the first or second instance of 'bureaucrats', since the annotation does not use target spans.",
"As we can see from table 2, generally the social media datasets (Twitter and YouTube) contain more targets per sentence with the exception of Dong et al.",
"(2014) and Mitchell et al.",
"(2013) .",
"The only dataset that has a small difference between the number of unique sentiments per sentence is the Wang et al.",
"(2017) Reproduction studies In the following subsections, we present the three different methods that we are reproducing and how their results differ from the original analysis.",
"In all of the experiments below, we lower case all text and tokenise using Twokenizer (Gimpel et al., 2011) .",
"This was done as the datasets originate from Twitter; this pre-processing method was to some extent stated in one of the original papers and assumed to be used across the others, as they do not explicitly state how they pre-process.",
"Reproduction of Vo and Zhang (2015) Vo and Zhang (2015) created the first NP method for TDSA.",
"It takes the word vectors of the left, right, target word, and full tweet/sentence/text contexts and performs max, min, average, standard deviation, and product pooling over these contexts to create a feature vector as input to the Support Vector Machine (SVM). For each of the experiments below we used the following configurations unless otherwise stated: we performed 5 fold stratified cross validation, features are scaled using Max Min scaling before inputting into the SVM, and we used the respective C-Values for the SVM stated in the paper for each of the models.",
"One major difficulty with the description of the method in the paper, and with re-implementing it, is handling a target that appears multiple times, an issue originally raised by Wang et al.",
"(2017) .",
"As the method requires context with regards to the target word, if there is more than one appearance of the target word then the method does not specify which to use.",
"We therefore took the approach of Wang et al.",
"(2017) and found all of the features for each appearance and performed median pooling over features.",
"This change could explain the subtle differences between the results we report and those of the original paper.",
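The median-pooling workaround described above can be sketched as follows (our own function name; an illustration of taking the element-wise median over the feature vectors computed for each appearance of the target):

```python
def median_pool(feature_vectors):
    """Element-wise median over a list of equal-length feature vectors,
    one vector per appearance of the target in the text."""
    n = len(feature_vectors)
    out = []
    for col in zip(*feature_vectors):
        s = sorted(col)
        mid = n // 2
        out.append(s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2)
    return out
```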
"Vo and Zhang (2015) used three different sentiment lexicons: MPQA 5 (Wilson et al., 2005) , NRC 6 (Mohammad and Turney, 2010) , and HL 7 (Hu and Liu, 2004) .",
"We found a small difference in word counts between their reported statistics for the MPQA lexicons and those we performed ourselves, as can be seen in the bold numbers in table 3.",
"Originally, we assumed that a word can only occur in one sentiment class within the same lexicon, and this resulted in differing counts for all lexicons.",
"This distinction is not clearly documented in the paper or code.",
"However, our assumption turned out to be incorrect, giving a further illustration of why detailed descriptions and documentation of all decisions is important.",
"We ran the same experiment as Vo and Zhang (2015) to show the effectiveness of sentiment lexicons; the results can be seen in table 4.",
"We can clearly see there are some differences, not just in the accuracy scores but also in the rank of the sentiment lexicons.",
"We found that using just HL was best and that MPQA does help performance compared to the Target-dep baseline, which differs from the original findings.",
"Since we found that using just HL performed best, the rest of the results will apply the Target-dep+ method using HL, and using HL & MPQA to show the effect of using the lexicon that both we and the original authors found best.",
"The original authors tested their methods using three different word vectors: 1.",
"Word2Vec vectors trained on 5 million Tweets containing emoticons (W2V), 2.",
"Sentiment Specific Word Embedding (SSWE) from , and 3.",
"W2V and SSWE combined.",
"Neither of these word embeddings is available from the original authors, as the embeddings were never released and the link to the embeddings no longer works 8 .",
"However, the embeddings were released through Wang et al.",
"(2017) code base 9 after the code was requested from the original authors.",
"Figure 1 shows the results of the different word embeddings across the different methods.",
"The main finding we see is that SSWE by themselves are not as informative as W2V vectors, which differs from the original findings.",
"However we agree that combining the two vectors is beneficial and that the rank of methods is the same in our observations.",
"Sentiment Lexicons Word Counts Scaling and Final Model comparison We test all of the methods on the test data set of Dong et al.",
"(2014) and show the difference between the original and reproduced models in figure 2.",
"Finally, we show the effect of scaling using Max Min versus not scaling the data.",
"As stated before, we have been using Max Min scaling on the NP features; however, the original paper did not mention scaling.",
"The library the original authors were using, LibLinear (Fan et al., 2008) , suggests in its practical guide (Hsu et al., 2003) scaling each feature to [0, 1], but this was not re-iterated in the paper.",
"We are using scikit-learn's (Pedregosa et al., 2011) LinearSVC which is a wrapper of LibLinear, hence making it appropriate to use here.",
"As can be seen in figure 2, not scaling can affect the results by around one-third.",
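The Max-Min scaling step whose effect is shown above can be sketched as a standalone illustration (in our code we use scikit-learn's MinMaxScaler; this minimal version only makes the [0, 1] rescaling explicit, and the names are our own):

```python
def fit_min_max(rows):
    """Learn per-feature minima and maxima from the training rows."""
    mins = [min(col) for col in zip(*rows)]
    maxs = [max(col) for col in zip(*rows)]
    return mins, maxs

def transform(rows, mins, maxs):
    """Rescale each feature to [0, 1] using the fitted statistics;
    constant features are mapped to 0.0."""
    scaled = []
    for row in rows:
        scaled.append([
            (x - lo) / (hi - lo) if hi > lo else 0.0
            for x, lo, hi in zip(row, mins, maxs)
        ])
    return scaled
```

The statistics are fitted on the training features only and then applied to the test features, so unseen test values can fall outside [0, 1].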
"Reproduction of Wang et al.",
"(2017) Wang et al.",
"(2017) extended the NP work of Vo and Zhang (2015) and, instead of using the full tweet/sentence/text contexts, used the full dependency graph of the target word.",
"Thus, they created three different methods: 1.",
"TDParseuses only the full dependency graph context, 2.",
"TDParse the feature of TDParseand the left and right contexts, and 3.",
"TDParse+ the features of TDParse and LS and RS contexts.",
"The experiments are performed on the Dong et al.",
"(2014) and Wang et al.",
"(2017) Twitter datasets where we train and test on the previously specified train and test splits.",
"We also scale our features using Max Min scaling before inputting into the SVM.",
"We used all three sentiment lexicons as in the original paper, and we found the C-Value by performing 5 fold stratified cross validation on the training datasets.",
"The results of these experiments can be seen in figure 3 10 .",
"As found in our reproduction of Vo and Zhang (2015), scaling is very important but is typically overlooked when reporting.",
"8 http://ir.hit.edu.cn/˜dytang/ 9 https://github.com/bluemonk482/tdparse 10 For the Election Twitter dataset, TDParse+ results were never reported in the original paper.",
"Tang et al.",
"(2016a) was the first to use LSTMs specifically for TDSA.",
"They created three different models: 1.",
"LSTM a standard LSTM that runs over the length of the sentence and takes no target information into account, 2.",
"TDLSTM runs two LSTMs, one over the left and the other over the right context of the target word and concatenates the output of the two, and 3.",
"TCLSTM same as the TDLSTM method but each input word vector is concatenated with vector of the target word.",
"All of the methods' outputs are fed into a softmax activation function.",
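The left/right context handling of TDLSTM can be sketched as follows (a simplified illustration under our reading of the method: the left LSTM reads the sentence up to and including the target, while the right LSTM reads from the target to the end and is fed in reverse; the function name is our own):

```python
def tdlstm_contexts(tokens, t_start, t_end):
    """Split a tokenised sentence into the two sequences consumed by
    TDLSTM's left and right LSTMs; both sequences include the target."""
    left = tokens[:t_end + 1]                 # left context + target
    right = list(reversed(tokens[t_start:]))  # right context + target, reversed
    return left, right
```

The final hidden states of the two LSTMs would then be concatenated before the softmax; TCLSTM additionally concatenates the target vector to each input word vector.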
"The experiments are performed on the Dong et al.",
"(2014) dataset where we train and test on the specified splits.",
"For the LSTMs we initialised the weights using the uniform distribution U(-0.003, 0.003), used Stochastic Gradient Descent (SGD) with a learning rate of 0.01 and cross entropy loss, padded and truncated sequences to the length of the maximum sequence in the training dataset as stated in the original paper, and we did not \"set the clipping threshold of softmax layer as 200\" (Tang et al., 2016a) as we were unsure what this meant.",
"With regards to the number of epochs trained, we used early stopping with a patience of 10 and allowed 300 epochs.",
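The early-stopping rule used above can be sketched as a small helper (our own code; `val_scores` stands in for the per-epoch validation scores, capped at 300 epochs in our experiments):

```python
def early_stopping(val_scores, patience=10):
    """Return the epoch index at which training stops: either after
    `patience` epochs without improvement, or at the last epoch."""
    best, since_best = float("-inf"), 0
    for epoch, score in enumerate(val_scores):
        if score > best:
            best, since_best = score, 0
        else:
            since_best += 1
            if since_best >= patience:
                return epoch  # patience exhausted
    return len(val_scores) - 1
```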
"Within their experiments they used SSWE and Glove Twitter vectors 11 (Pennington et al., 2014) .",
"As the paper being reproduced does not define the number of epochs they trained for, we use early stopping.",
"Thus, for early stopping we need to split the training data into train and validation sets to know when to stop.",
"As it has been shown by Reimers and Gurevych (2017) that the random seed statistically significantly changes the results of experiments we ran each model over each word embedding thirty times, using a different seed value but keeping the same stratified train and validation split, and reported the results on the same test data as the original paper.",
"As can be seen in Figure 4 , the initial seed value makes a large difference more so for the smaller embeddings.",
"In table 5, we show the difference between our mean and maximum result and the original result for each model using the 200 dimension Glove Twitter vectors.",
"Even though the mean result is quite different from the original the maximum is much closer.",
"Our results generally agree with their results on the ranking of the word vectors and the embeddings.",
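The multi-seed protocol can be sketched as a small harness (hypothetical code; `train_and_eval` stands in for any seed-sensitive training run that returns a test score):

```python
import statistics

def multi_seed_scores(train_and_eval, seeds):
    """Run the same experiment once per seed and summarise the scores,
    so that mean/spread rather than a single run is reported."""
    scores = [train_and_eval(seed) for seed in seeds]
    return {
        "mean": statistics.mean(scores),
        "stdev": statistics.stdev(scores),
        "max": max(scores),
    }
```

Reporting the mean and spread alongside the maximum makes clear how much of a single published score may be due to a lucky seed.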
"Overall, we were able to reproduce the results of all three papers.",
"However for the neural network/deep learning approach of Tang et al.",
"(2016a) we agree with Reimers and Gurevych (2017) that reporting multiple runs of the system over different seed values is required as the single performance scores can be misleading, which could explain why previous papers obtained different results to the original for the TDLSTM method (Chen et al., 2017; Tay et al., 2017) .",
"Mass Evaluation For all of the methods we pre-processed the text by lower casing and tokenising using Twokenizer (Gimpel et al., 2011) , and we used all three sentiment lexicons where applicable.",
"We found the best word vectors from SSWE and the common crawl 42B 300 dimension Glove vectors by five fold stratified cross validation for the NP methods and the highest accuracy on the validation set for the LSTM methods.",
"We chose these word vectors as they have very different sizes (50 and 300), also they have been shown to perform well in different text types; SSWE for social media (Tang et al., 2016a) and Glove for reviews (Chen et al., 2017) .",
"To make the experiments quicker and computationally less expensive, we filtered out all words from the word vectors that did not appear in the train and test datasets, and this is equivalent with respect to word coverage as using all words.",
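The vocabulary-filtering step just described can be sketched as (our own naming; `datasets` is assumed to be a collection of datasets, each a list of tokenised texts):

```python
def filter_embeddings(embeddings, datasets):
    """Keep only vectors for words that occur in any of the datasets;
    word coverage is unchanged, but the embedding table shrinks."""
    vocab = {tok for data in datasets for text in data for tok in text}
    return {w: v for w, v in embeddings.items() if w in vocab}
```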
"Finally we only reported results for the LSTM methods with one seed value and not multiple due to time constraints.",
"The results of the methods using the best found word vectors on the test sets can be seen in table 6.",
"We find that the TDParse methods generally perform best but only clearly outperform the other non-dependency parser methods on the YouTuBean dataset.",
"We hypothesise that this is due to the dataset containing, on average, a deeper constituency tree depth which could be seen as on average more complex sentences.",
"This could be due to it being from the spoken medium compared to the rest of the datasets which are written.",
"Also that using a sentiment lexicon is almost always beneficial, but only by a small amount.",
"Within the LSTM based methods the TDLSTM method generally performs the best indicating that the extra target information that the TCLSTM method contains is not needed, but we believe this needs further analysis.",
"We can conclude that the simpler NP models perform well across domain, type and medium and that even without language specific tools and lexicons they are competitive to the more complex LSTM based methods.",
"Discussion and conclusion The fast developing subfield of TDSA has so far lacked a large-scale comparative mass evaluation of approaches using different models and datasets.",
"In this paper, we address this generalisability limitation and perform the first direct comparison and reproduction of three different approaches for TDSA.",
"While carrying out these reproductions, we have noted and described many emerging issues in previous research, related to incomplete descriptions of methods and settings, patchy release of code, and a lack of comparative evaluations.",
"This is natural in a developing field, but it is crucial for ongoing development within NLP in general that improved repeatability practices are adopted.",
"The practices adopted in our case studies are to reproduce the methods in open source code, adopt only open data, provide format conversion tools to ingest the different data formats, and describe and document all settings via the code and Jupyter Notebooks (released initially in anonymous form at submission time) 12 .",
"We therefore argue that papers should not consider repeatability (replication or reproduction) or generalisability alone, but these two key tenets of scientific practice should be brought together.",
"In future work, we aim to extend our reproduction framework further, and extend the comparative evaluation to languages other than English.",
"This will necessitate changes in the framework since we expect that dependency parsers and sentiment lexicons will be unavailable for specific languages.",
"Also we will explore through error analysis in which situations different neural network architectures perform best."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"4.1.3",
"4.2",
"5",
"6"
],
"paper_header_content": [
"Introduction",
"Related work",
"Datasets used in our experiments",
"Reproduction studies",
"Reproduction of Vo and Zhang (2015)",
"Scaling and Final Model comparison",
"Reproduction of Wang et al. (2017)",
"Mass Evaluation",
"Discussion and conclusion"
]
}
|
GEM-SciDuet-train-47#paper-1071#slide-8
|
Tang et al 2016b Method
|
hr1 hl+1 LSTM LSTM LSTM LSTM LSTM LSTM LSTM LSTM h1 hl1 hl hl+1 hr2 hl+2 hr1 hr hr+1 hn
Left Context Target Context Target Context Right Context
|
hr1 hl+1 LSTM LSTM LSTM LSTM LSTM LSTM LSTM LSTM h1 hl1 hl hl+1 hr2 hl+2 hr1 hr hr+1 hn
Left Context Target Context Target Context Right Context
|
[] |
GEM-SciDuet-train-47#paper-1071#slide-9
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205
],
"paper_content_text": [
"Introduction Repeatable (replicable and/or reproducible 1 ) experimentation is a core tenet of the scientific endeavour.",
"In Natural Language Processing (NLP) research as in other areas, this requires three crucial components: (a) published methods described in sufficient detail (b) a working code base and (c) open dataset(s) to permit training, testing and validation to be reproduced and generalised.",
"In the cognate sub-discipline of corpus linguistics, releasing textual datasets has been a defining feature of the community for many years, enabling multiple comparative experiments to be conducted on a stable basis since the core underlying corpora are community resources.",
"In NLP, with methods becoming increasingly complex with the use of machine learning and deep learning approaches, it is often difficult to describe all settings and configurations in enough detail without releasing code.",
"The work described in this paper emerged from recent efforts at our research centre to reimplement others' work across a number of topics (e.g.",
"text reuse, identity resolution and sentiment analysis) where previously published methods were not easily repeatable because of missing or broken code or dependencies, and/or where methods were not sufficiently well described to enable reproduction.",
"We focus on one sub-area of sentiment analysis to illustrate the extent of these problems, along with our initial recommendations and contributions to address the issues.",
"The area of Target Dependent Sentiment Analysis (TDSA) and NLP in general has been growing rapidly in the last few years due to new neural network methods that require no feature engineering.",
"However it is difficult to keep track of the state of the art as new models are tested on different datasets, thus preventing true comparative evaluations.",
"This is best shown by table 1 where many approaches are evaluated on the SemEval dataset (Pontiki et al., 2014) but not all.",
"This work is licenced under a Creative Commons Attribution 4.0 International Licence. Licence details: http://creativecommons.org/licenses/by/4.0/",
"1 We follow the definitions in Antske Fokkens' guest blog post \"replication (obtaining the same results using the same experiment) as well as reproduction (reach the same conclusion through different means)\" from http://coling2018.org/slowly-growing-offspring-zigglebottom-anno-2017-guest-post/",
"Datasets can vary by domain (e.g.",
"product), type (social media, review), or medium (written or spoken), and to date there has been no comparative evaluation of methods from these multiple classes.",
"Our primary and secondary contributions therefore, are to carry out the first study that reports results across all three different dataset classes, and to release a open source code framework implementing three complementary groups of TDSA methods.",
"In terms of reproducibility via code release, recent TDSA papers have generally been very good with regards to publishing code alongside their papers (Mitchell et al., 2013; Zhang et al., 2016; Liu and Zhang, 2017; Wang et al., 2017) but other papers have not released code (Wang et al., 2016; Tay et al., 2017) .",
"In some cases, the code was initially made available, then removed, and is now back online (Tang et al., 2016a) .",
"Unfortunately, in some cases even when code has been published, different results have been obtained relative to the original paper.",
"This can be seen when Chen et al.",
"(2017) used the code and embeddings in Tang et al.",
"(2016b) they observe different results.",
"Similarly, when others (Tay et al., 2017; Chen et al., 2017) attempt to replicate the experiments of Tang et al.",
"(2016a) they also produce different results to the original authors.",
"Our observations within this one sub-field motivates the need to investigate further and understand how such problems can be avoided in the future.",
"In some cases, when code has been released, it is difficult to use which could explain why the results were not reproduced.",
"Of course, we would not expect researchers to produce industrial strength code, or provide continuing free ongoing support for multiple years after publication, but the situation is clearly problematic for the development of the new field in general.",
"In this paper, we therefore reproduce three papers chosen as they employ widely differing methods: Neural Pooling (NP) , NP with dependency parsing (Wang et al., 2017) , and RNN (Tang et al., 2016a) , as well as having been applied largely to different datasets.",
"At the end of the paper, we reflect on bringing together elements of repeatability and generalisability which we find are crucial to NLP and data science based disciplines more widely to enable others to make use of the science created.",
"Related work Reproducibility and replicability have long been key elements of the scientific method, but have been gaining renewed prominence recently across a number of disciplines with attention being given to a 'reproducibility crisis'.",
"For example, in pharmaceutical research, as little as 20-25% of papers were found to be replicable (Prinz et al., 2011) .",
"The problem has also been recognised in computer science in general (Collberg and Proebsting, 2016) .",
"Reproducibility and replicability have been researched for some time in Information Retrieval (IR) since the Grid@CLEF pilot track (Ferro and Harman, 2009).",
"The aim was to create a 'grid of points' where a point defined the performance of a particular IR system using certain pre-processing techniques on a defined dataset.",
"Louridas and Gousios (2012) looked at reproducibility in Software Engineering after trying to replicate another author's results and concluded with a list of requirements for papers to be reproducible: (a) All data related to the paper, (b) All code required to reproduce the paper and (c) Documentation for the code and data.",
"Fokkens et al.",
"(2013) looked at reproducibility in WordNet similarity and Named Entity Recognition finding five key aspects that cause experimental variation and therefore need to be clearly stated: (a) pre-processing, (b) experimental setup, (c) versioning, (d) system output, (e) system variation.",
"In Twitter sentiment analysis, Sygkounas et al.",
"(2016) stated the need for using the same library versions and datasets when replicating work.",
"Different methods of releasing datasets and code have been suggested.",
"Ferro and Harman (2009) defined a framework (CIRCO) that enforces a pre-processing pipeline where data can be extracted at each stage therefore facilitating a validation step.",
"They stated a mechanism for storing results, dataset and pre-processed data 2 .",
"Louridas and Gousios (2012) suggested the use of a virtual machine alongside papers to bundle the data and code together, while most state the advantages of releasing source code (Fokkens et al., 2013; Potthast et al., 2016; Sygkounas et al., 2016) .",
"The act of reproducing or replicating results is not just for validating research but to also show how it can be improved.",
"Ferro and Silvello (2016) followed up their initial research and were able to analyse which pre-processing techniques were important on a French monolingual dataset and how the different techniques affected each other given an IR system.",
"Fokkens et al.",
"(2013) showed how changes in the five key aspects affected results.",
"The closest related work to our reproducibility study is that of Marrese-Taylor and Matsuo (2017), in which they replicate three different syntax-based aspect extraction methods.",
"They found that parameter tuning was very important however using different pre-processing pipelines such as Stanford's CoreNLP did not have a consistent effect on the results.",
"They found that the methods stated in the original papers are not detailed enough to replicate the study as evidenced by their large results differential.",
"Dashtipour et al.",
"(2016) undertook a replication study in sentiment prediction, however this was at the document level and on different datasets and languages to the originals.",
"In other areas of (aspectbased) sentiment analysis, releasing code for published systems has not been a high priority, e.g.",
"in SemEval 2016 task 5 (Pontiki et al., 2016) only 1 out of 21 papers released their source code.",
"In IR, specific reproducible research tracks have been created 3 and we are pleased to see the same happening at COLING 2018 4 .",
"Turning now to the focus of our investigations, Target Dependent sentiment analysis (TDSA) research (Nasukawa and Yi, 2003) arose as an extension to the coarse grained analysis of document level sentiment analysis (Pang et al., 2002; Turney, 2002) .",
"Since its inception, papers have applied different methods such as feature based (Kiritchenko et al., 2014) , Recursive Neural Networks (RecNN) (Dong et al., 2014) , Recurrent Neural Networks (RNN) (Tang et al., 2016a) , attention applied to RNN (Wang et al., 2016; Chen et al., 2017; Tay et al., 2017) , Neural Pooling (NP) (Vo and Zhang, 2015; Wang et al., 2017) , RNN combined with NP (Zhang et al., 2016) , and attention based neural networks (Tang et al., 2016b) .",
"Others have tackled TDSA as a joint task with target extraction, thus treating it as a sequence labelling problem.",
"Mitchell et al.",
"(2013) carried out this task using Conditional Random Fields (CRF), and this work was then extended using a neural CRF.",
"Both approaches found that combining the two tasks did not improve results compared to treating the two tasks separately, apart from the POS and NEG labels, where the joint task performs better.",
"Finally, created an attention RNN for this task which was evaluated on two very different datasets containing written and spoken (video-based) reviews where the domain adaptation between the two shows some promise.",
"Overall, within the field of sentiment analysis there are other granularities such as sentence level (Socher et al., 2013) , topic (Augenstein et al., 2018) , and aspect (Wang et al., 2016; Tay et al., 2017) .",
"Aspect-level sentiment analysis relates to identifying the sentiment of (potentially multiple) topics in the same text although this can be seen as a similar task to TDSA.",
"However the clear distinction between aspect and TDSA is that TDSA requires the target to be mentioned in the text itself while aspect-level employs a conceptual category with potentially multiple related instantiations in the text.",
"Tang et al.",
"(2016a) created a Target Dependent LSTM (TDLSTM) which encompassed two LSTMs either side of the target word, then improved the model by concatenating the target vector to the input embeddings to create a Target Connected LSTM (TCLSTM).",
"Adding attention has become very popular recently.",
"Tang et al.",
"(2016b) showed the speed and accuracy improvements of using multiple attention layers only over LSTM based methods, however they found that it could not model complex sentences e.g.",
"negations.",
"Liu and Zhang (2017) showed that adding attention to a Bi-directional LSTM (BLSTM) improves the results as it takes the importance of each word into account with respect to the target.",
"Chen et al.",
"(2017) also combined a BLSTM and attention, however they used multiple attention layers and combined the results using a Gated Recurrent Unit (GRU) which they called Recurrent Attention on Memory (RAM), and they found this method to allow models to better understand more complex sentiment for each comparison.",
"Vo and Zhang (2015) used neural pooling features e.g.",
"max, min, etc of the word embeddings of the left and right context of the target word, the target itself, and the whole Tweet.",
"They inputted the features into a linear SVM, and showed the importance of using the left and right context for the first time.",
"They found in their study that using a combination of Word2Vec embeddings and sentiment embeddings performed best alongside using sentiment lexicons to filter the embedding space.",
"Other studies have adopted more linguistic approaches.",
"Wang et al.",
"(2017) extended the work of Vo and Zhang (2015) by using the dependency linked words from the target.",
"Dong et al.",
"(2014) used the dependency tree to create a Recursive Neural Network (RecNN) inspired by Socher et al.",
"(2013) but compared to Socher et al.",
"(2013) they also utilised the dependency tags to create an Adaptive RecNN (ARecNN).",
"Critically, the methods reported above have not been applied to the same datasets, therefore a true comparative evaluation between the different methods is somewhat difficult.",
"This has serious implications for generalisability of methods.",
"We correct that limitation in our study.",
"There are two papers taking a similar approach to our work in terms of generalisability although they do not combine them with the reproduction issues that we highlight.",
"First, Chen et al.",
"(2017) compared results across Se-mEval's laptop and restaurant reviews in English (Pontiki et al., 2014) , a Twitter dataset (Dong et al., 2014) and their own Chinese news comments dataset.",
"They did perform a comparison across different languages, domains, corpora types, and different methods; SVM with features (Kiritchenko et al., 2014) , Rec-NN (Dong et al., 2014) , TDLSTM (Tang et al., 2016a) , Memory Neural Network (MNet) (Tang et al., 2016b) and their own attention method.",
"However, the Chinese dataset was not released, and the methods were not compared across all datasets.",
"By contrast, we compare all methods across all datasets, using techniques that are not just from the Recurrent Neural Network (RNN) family.",
"A second paper, by Barnes et al.",
"(2017) compares seven approaches to (document and sentence level) sentiment analysis on six benchmark datasets, but does not systematically explore reproduction issues as we do in our paper.",
"Datasets used in our experiments We are evaluating our models over six different English datasets deliberately chosen to represent a range of domains, types and mediums.",
"As highlighted above, previous papers tend to only carry out evaluations on one or two datasets which limits the generalisability of their results.",
"In this paper, we do not consider the quality or inter-annotator agreement levels of these datasets but it has been noted that some datasets may have issues here.",
"For example, Pavlopoulos and Androutsopoulos (2014) point out that the Hu and Liu (2004) dataset does not state their inter-annotator agreement scores nor do they have aspect terms that express neutral opinion.",
"We only use a subset of the English datasets available.",
"This is for two reasons.",
"First, the time it takes to write parsers and run the models.",
"Second, we only used datasets that contain three distinct sentiments (Wilson (2008) only has two).",
"From the datasets we have used, we have only had an issue with parsing Wang et al.",
"(2017) where the annotations for the first set of the data contain the target span but the second set does not.",
"This makes it impossible to use the second set of annotations and forces us to use only a subset of the dataset.",
"As an example of this: \"Got rid of bureaucrats 'and we put that money, into 9000 more doctors and nurses'... to turn the doctors into bureaucrats#BattleForNumber10\"; in that Tweet 'bureaucrats' was annotated as negative but it does not state if it was the first or second instance of 'bureaucrats' since it does not use target spans.",
"As we can see from table 2, generally the social media datasets (Twitter and YouTube) contain more targets per sentence with the exception of Dong et al.",
"(2014) and Mitchell et al.",
"(2013) .",
"The only dataset that has a small difference between the number of unique sentiments per sentence is the Wang et al.",
"(2017) dataset. Reproduction studies In the following subsections, we present the three different methods that we are reproducing and how their results differ from the original analysis.",
"In all of the experiments below, we lower case all text and tokenise using Twokenizer (Gimpel et al., 2011) .",
"This was done as the datasets originate from Twitter and this pre-processing method was to some extent stated in Vo and Zhang (2015) and assumed to be used across the others, as they do not explicitly state how they pre-process in the papers.",
"Reproduction of Vo and Zhang (2015) Vo and Zhang (2015) created the first NP method for TDSA.",
"It takes the word vectors of the left, right, target word, and full tweet/sentence/text contexts and performs max, min, average, standard deviation, and product pooling over these contexts to create a feature vector as input to the Support Vector Machine (SVM). For each of the experiments below we used the following configurations unless otherwise stated: we performed 5 fold stratified cross validation, scaled features using Max Min scaling before inputting them into the SVM, and used the respective C-Values for the SVM stated in the paper for each of the models.",
"One major difficulty with the description of the method in the paper, and with its re-implementation, is handling the issue of the same target appearing multiple times, as originally raised by Wang et al.",
"(2017) .",
"As the method requires context with regards to the target word, if there is more than one appearance of the target word then the method does not specify which to use.",
"We therefore took the approach of Wang et al.",
"(2017) and found all of the features for each appearance and performed median pooling over features.",
"This change could explain the subtle differences between the results we report and those of the original paper.",
"Vo and Zhang (2015) used three different sentiment lexicons: MPQA 5 (Wilson et al., 2005) , NRC 6 (Mohammad and Turney, 2010) , and HL 7 (Hu and Liu, 2004) .",
"We found a small difference in word counts between their reported statistics for the MPQA lexicons and those we performed ourselves, as can be seen in the bold numbers in table 3.",
"Originally, we assumed that a word can only occur in one sentiment class within the same lexicon, and this resulted in differing counts for all lexicons.",
"This distinction is not clearly documented in the paper or code.",
"However, our assumption turned out to be incorrect, giving a further illustration of why detailed descriptions and documentation of all decisions is important.",
"We ran the same experiment as Vo and Zhang (2015) to show the effectiveness of sentiment lexicons; the results can be seen in table 4.",
"We can clearly see there are some differences, not just in the accuracy scores but also in the rank of the sentiment lexicons.",
"We found just using HL was best and that MPQA does help performance compared to the Target-dep baseline, which differs from the findings of Vo and Zhang (2015).",
"Since we found that using just HL performed best, the rest of the results will apply the Target-dep+ method using HL and using HL & MPQA to show the effect of using the lexicons that both we and Vo and Zhang (2015) found best.",
"The original authors tested their methods using three different word vectors: 1.",
"Word2Vec trained by the original authors on 5 million Tweets containing emoticons (W2V), 2.",
"Sentiment Specific Word Embedding (SSWE) from Tang et al. (2014), and 3.",
"W2V and SSWE combined.",
"Neither of these word embeddings is available from the original authors, as they never released the embeddings and the link to the embeddings no longer works 8 .",
"However, the embeddings were released through Wang et al.",
"(2017) code base 9 , following a request for the code from the original authors.",
"Figure 1 shows the results of the different word embeddings across the different methods.",
"The main finding we see is that SSWE by themselves are not as informative as W2V vectors, which is different to the findings of Vo and Zhang (2015).",
"However we agree that combining the two vectors is beneficial and that the rank of methods is the same in our observations.",
"Scaling and Final Model comparison We test all of the methods on the test data set of Dong et al.",
"(2014) and show the difference between the original and reproduced models in figure 2.",
"Finally, we show the effect of scaling using Max Min and not scaling the data.",
"As stated before, we have been using Max Min scaling on the NP features; however, the original authors did not mention scaling in their paper.",
"The library they were using, LibLinear (Fan et al., 2008) , suggests in its practical guide (Hsu et al., 2003) to scale each feature to [0, 1], but this was not re-iterated by the original authors.",
"We are using scikit-learn's (Pedregosa et al., 2011) LinearSVC which is a wrapper of LibLinear, hence making it appropriate to use here.",
"As can be seen in figure 2, not scaling can affect the results by around one-third.",
"Reproduction of Wang et al.",
"(2017) Wang et al.",
"(2017) extended the NP work of Vo and Zhang (2015) and, instead of using the full tweet/sentence/text contexts, they used the full dependency graph of the target word.",
"Thus, they created three different methods: 1.",
"TDParse- uses only the full dependency graph context, 2.",
"TDParse the features of TDParse- and the left and right contexts, and 3.",
"TDParse+ the features of TDParse and LS and RS contexts.",
"The experiments are performed on the Dong et al.",
"(2014) and Wang et al.",
"(2017) Twitter datasets where we train and test on the previously specified train and test splits.",
"We also scale our features using Max Min scaling before inputting into the SVM.",
"We used all three sentiment lexicons as in the original paper, and we found the C-Value by performing 5 fold stratified cross validation on the training datasets.",
"The results of these experiments can be seen in figure 3 10 .",
"As found in the replication of Vo and Zhang (2015), scaling is very important but is typically overlooked when reporting.",
"8 http://ir.hit.edu.cn/˜dytang/ 9 https://github.com/bluemonk482/tdparse 10 For the Election Twitter dataset the TDParse+ results were never reported in the original paper.",
"Tang et al.",
"(2016a) was the first to use LSTMs specifically for TDSA.",
"They created three different models: 1.",
"LSTM a standard LSTM that runs over the length of the sentence and takes no target information into account, 2.",
"TDLSTM runs two LSTMs, one over the left and the other over the right context of the target word and concatenates the output of the two, and 3.",
"TCLSTM same as the TDLSTM method but each input word vector is concatenated with the vector of the target word.",
"All of the methods outputs are fed into a softmax activation function.",
"The experiments are performed on the Dong et al.",
"(2014) dataset where we train and test on the specified splits.",
"For the LSTMs we initialised the weights using a uniform distribution U(-0.003, 0.003), used Stochastic Gradient Descent (SGD) with a learning rate of 0.01, used cross entropy loss, padded and truncated sequences to the length of the maximum sequence in the training dataset as stated in the original paper, and we did not \"set the clipping threshold of softmax layer as 200\" (Tang et al., 2016a) as we were unsure what this meant.",
"With regards to the number of epochs trained, we used early stopping with a patience of 10 and allowed 300 epochs.",
"Within their experiments they used SSWE and Glove Twitter vectors 11 (Pennington et al., 2014) .",
"As the paper being reproduced does not define the number of epochs they trained for, we use early stopping.",
"Thus, for early stopping, we need to split the training data into train and validation sets to know when to stop.",
"As it has been shown by Reimers and Gurevych (2017) that the random seed statistically significantly changes the results of experiments we ran each model over each word embedding thirty times, using a different seed value but keeping the same stratified train and validation split, and reported the results on the same test data as the original paper.",
"As can be seen in Figure 4 , the initial seed value makes a large difference more so for the smaller embeddings.",
"In table 5, we show the difference between our mean and maximum result and the original result for each model using the 200 dimension Glove Twitter vectors.",
"Even though the mean result is quite different from the original the maximum is much closer.",
"Our results generally agree with their results on the ranking of the word vectors and the embeddings.",
"Overall, we were able to reproduce the results of all three papers.",
"However for the neural network/deep learning approach of Tang et al.",
"(2016a) we agree with Reimers and Gurevych (2017) that reporting multiple runs of the system over different seed values is required as the single performance scores can be misleading, which could explain why previous papers obtained different results to the original for the TDLSTM method (Chen et al., 2017; Tay et al., 2017) .",
"Mass Evaluation For all of the methods we pre-processed the text by lower casing and tokenising using Twokenizer (Gimpel et al., 2011) , and we used all three sentiment lexicons where applicable.",
"We found the best word vectors from SSWE and the common crawl 42B 300 dimension Glove vectors by five fold stratified cross validation for the NP methods and the highest accuracy on the validation set for the LSTM methods.",
"We chose these word vectors as they have very different sizes (50 and 300), also they have been shown to perform well in different text types; SSWE for social media (Tang et al., 2016a) and Glove for reviews (Chen et al., 2017) .",
"To make the experiments quicker and computationally less expensive, we filtered out all words from the word vectors that did not appear in the train and test datasets, and this is equivalent with respect to word coverage as using all words.",
"Finally we only reported results for the LSTM methods with one seed value and not multiple due to time constraints.",
"The results of the methods using the best found word vectors on the test sets can be seen in table 6.",
"We find that the TDParse methods generally perform best but only clearly outperform the other non-dependency-parser methods on the YouTuBean dataset.",
"We hypothesise that this is due to the dataset containing, on average, a deeper constituency tree depth which could be seen as on average more complex sentences.",
"This could be due to it being from the spoken medium compared to the rest of the datasets which are written.",
"Also that using a sentiment lexicon is almost always beneficial, but only by a small amount.",
"Within the LSTM based methods the TDLSTM method generally performs the best indicating that the extra target information that the TCLSTM method contains is not needed, but we believe this needs further analysis.",
"We can conclude that the simpler NP models perform well across domain, type and medium and that even without language specific tools and lexicons they are competitive to the more complex LSTM based methods.",
"Discussion and conclusion The fast-developing subfield of TDSA has so far lacked a large-scale comparative mass evaluation of approaches using different models and datasets.",
"In this paper, we address this generalisability limitation and perform the first direct comparison and reproduction of three different approaches for TDSA.",
"While carrying out these reproductions, we have noted and described above, the many emerging issues in previous research related to incomplete descriptions of methods and settings, patchy release of code, and lack of comparative evaluations.",
"This is natural in a developing field, but it is crucial for ongoing development within NLP in general that improved repeatability practices are adopted.",
"The practices adopted in our case studies are to reproduce the methods in open source code, adopt only open data, provide format conversion tools to ingest the different data formats, and describe and document all settings via the code and Jupyter Notebooks (released initially in anonymous form at submission time) 12 .",
"We therefore argue that papers should not consider repeatability (replication or reproduction) or generalisability alone, but these two key tenets of scientific practice should be brought together.",
"In future work, we aim to extend our reproduction framework further, and extend the comparative evaluation to languages other than English.",
"This will necessitate changes in the framework since we expect that dependency parsers and sentiment lexicons will be unavailable for specific languages.",
"Also we will explore through error analysis in which situations different neural network architectures perform best."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"4.1.3",
"4.2",
"5",
"6"
],
"paper_header_content": [
"Introduction",
"Related work",
"Datasets used in our experiments",
"Reproduction studies",
"Reproduction of Vo and Zhang (2015)",
"Scaling and Final Model comparison",
"Reproduction of Wang et al. (2017)",
"Mass Evaluation",
"Discussion and conclusion"
]
}
|
GEM-SciDuet-train-47#paper-1071#slide-9
|
Tang et al 2016b Reproduction Result
|
Methods | O (original result) | R (Max) (reproduced, maximum) | R (Mean) (reproduced, mean)
Repeating experiments with different seed values is important.
|
Methods | O (original result) | R (Max) (reproduced, maximum) | R (Mean) (reproduced, mean)
Repeating experiments with different seed values is important.
|
[] |
GEM-SciDuet-train-47#paper-1071#slide-10
|
1071
|
Bringing replication and reproduction together with generalisability in NLP: Three reproduction studies for Target Dependent Sentiment Analysis
|
Lack of repeatability and generalisability are two significant threats to continuing scientific development in Natural Language Processing. Language models and learning methods are so complex that scientific conference papers no longer contain enough space for the technical depth required for replication or reproduction. Taking Target Dependent Sentiment Analysis as a case study, we show how recent work in the field has not consistently released code, or described settings for learning methods in enough detail, and lacks comparability and generalisability in train, test or validation data. To investigate generalisability and to enable state of the art comparative evaluations, we carry out the first reproduction studies of three groups of complementary methods and perform the first large-scale mass evaluation on six different English datasets. Reflecting on our experiences, we recommend that future replication or reproduction experiments should always consider a variety of datasets alongside documenting and releasing their methods and published code in order to minimise the barriers to both repeatability and generalisability. We have released our code with a model zoo on GitHub with Jupyter Notebooks to aid understanding and full documentation, and we recommend that others do the same with their papers at submission time through an anonymised GitHub account.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205
],
"paper_content_text": [
"Introduction Repeatable (replicable and/or reproducible 1 ) experimentation is a core tenet of the scientific endeavour.",
"In Natural Language Processing (NLP) research as in other areas, this requires three crucial components: (a) published methods described in sufficient detail (b) a working code base and (c) open dataset(s) to permit training, testing and validation to be reproduced and generalised.",
"In the cognate sub-discipline of corpus linguistics, releasing textual datasets has been a defining feature of the community for many years, enabling multiple comparative experiments to be conducted on a stable basis since the core underlying corpora are community resources.",
"In NLP, with methods becoming increasingly complex with the use of machine learning and deep learning approaches, it is often difficult to describe all settings and configurations in enough detail without releasing code.",
"The work described in this paper emerged from recent efforts at our research centre to reimplement other's work across a number of topics (e.g.",
"text reuse, identity resolution and sentiment analysis) where previously published methods were not easily repeatable because of missing or broken code or dependencies, and/or where methods were not sufficiently well described to enable reproduction.",
"We focus on one sub-area of sentiment analysis to illustrate the extent of these problems, along with our initial recommendations and contributions to address the issues.",
"The area of Target Dependent Sentiment Analysis (TDSA) and NLP in general has been growing rapidly in the last few years due to new neural network methods that require no feature engineering.",
"However it is difficult to keep track of the state of the art as new models are tested on different datasets, thus preventing true comparative evaluations.",
"This is best shown by table 1 where many approaches This work is licenced under a Creative Commons Attribution 4.0 International Licence.",
"Licence details: http:// creativecommons.org/licenses/by/4.0/ 1 We follow the definitions in Antske Fokkens' guest blog post \"replication (obtaining the same results using the same experiment) as well as reproduction (reach the same conclusion through different means)\" from http://coling2018.",
"org/slowly-growing-offspring-zigglebottom-anno-2017-guest-post/ are evaluated on the SemEval dataset (Pontiki et al., 2014) but not all.",
"Datasets can vary by domain (e.g.",
"product), type (social media, review), or medium (written or spoken), and to date there has been no comparative evaluation of methods from these multiple classes.",
"Our primary and secondary contributions therefore, are to carry out the first study that reports results across all three different dataset classes, and to release a open source code framework implementing three complementary groups of TDSA methods.",
"In terms of reproducibility via code release, recent TDSA papers have generally been very good with regards to publishing code alongside their papers (Mitchell et al., 2013; Zhang et al., 2016; Liu and Zhang, 2017; Wang et al., 2017) but other papers have not released code (Wang et al., 2016; Tay et al., 2017) .",
"In some cases, the code was initially made available, then removed, and is now back online (Tang et al., 2016a) .",
"Unfortunately, in some cases even when code has been published, different results have been obtained relative to the original paper.",
"This can be seen when Chen et al.",
"(2017) used the code and embeddings in Tang et al.",
"(2016b) they observe different results.",
"Similarly, when others (Tay et al., 2017; Chen et al., 2017) attempt to replicate the experiments of Tang et al.",
"(2016a) they also produce different results to the original authors.",
"Our observations within this one sub-field motivates the need to investigate further and understand how such problems can be avoided in the future.",
"In some cases, when code has been released, it is difficult to use which could explain why the results were not reproduced.",
"Of course, we would not expect researchers to produce industrial strength code, or provide continuing free ongoing support for multiple years after publication, but the situation is clearly problematic for the development of the new field in general.",
"In this paper, we therefore reproduce three papers chosen as they employ widely differing methods: Neural Pooling (NP) , NP with dependency parsing (Wang et al., 2017) , and RNN (Tang et al., 2016a) , as well as having been applied largely to different datasets.",
"At the end of the paper, we reflect on bringing together elements of repeatability and generalisability which we find are crucial to NLP and data science based disciplines more widely to enable others to make use of the science created.",
"Related work Reproducibility and replicability have long been key elements of the scientific method, but have been gaining renewed prominence recently across a number of disciplines with attention being given to a 'reproducibility crisis'.",
"For example, in pharmaceutical research, as little as 20-25% of papers were found to be replicable (Prinz et al., 2011) .",
"The problem has also been recognised in computer science in general (Collberg and Proebsting, 2016) .",
"Reproducibility and replicability have been researched for sometime in Information Retrieval (IR) since the Grid@CLEF pilot track (Ferro and Harman, 2009 ).",
"The aim was to create a 'grid of points' where a point defined the performance of a particular IR system using certain pre-processing techniques on a defined dataset.",
"Louridas and Gousios (2012) looked at reproducibility in Software Engineering after trying to replicate another authors results and concluded with a list of requirements for papers to be reproducible: (a) All data related to the paper, (b) All code required to reproduce the paper and (c) Documentation for the code and data.",
"Fokkens et al.",
"(2013) looked at reproducibility in WordNet similarity and Named Entity Recognition finding five key aspects that cause experimental variation and therefore need to be clearly stated: (a) pre-processing, (b) experimental setup, (c) versioning, (d) system output, (e) system variation.",
"In Twitter sentiment analysis, Sygkounas et al.",
"(2016) stated the need for using the same library versions and datasets when replicating work.",
"Different methods of releasing datasets and code have been suggested.",
"Ferro and Harman (2009) defined a framework (CIRCO) that enforces a pre-processing pipeline where data can be extracted at each stage therefore facilitating a validation step.",
"They stated a mechanism for storing results, dataset and pre-processed data 2 .",
"Louridas and Gousios (2012) suggested the use of a virtual machine alongside papers to bundle the data and code together, while most state the advantages of releasing source code (Fokkens et al., 2013; Potthast et al., 2016; Sygkounas et al., 2016) .",
"The act of reproducing or replicating results is not just for validating research but to also show how it can be improved.",
"Ferro and Silvello (2016) followed up their initial research and were able to analyse which pre-processing techniques were important on a French monolingual dataset and how the different techniques affected each other given an IR system.",
"Fokkens et al.",
"(2013) showed how changes in the five key aspects affected results.",
"The closest related work to our reproducibility study is that of Marrese-Taylor and Matsuo (2017) which they replicate three different syntactic based aspect extraction methods.",
"They found that parameter tuning was very important however using different pre-processing pipelines such as Stanford's CoreNLP did not have a consistent effect on the results.",
"They found that the methods stated in the original papers are not detailed enough to replicate the study as evidenced by their large results differential.",
"Dashtipour et al.",
"(2016) undertook a replication study in sentiment prediction, however this was at the document level and on different datasets and languages to the originals.",
"In other areas of (aspectbased) sentiment analysis, releasing code for published systems has not been a high priority, e.g.",
"in SemEval 2016 task 5 (Pontiki et al., 2016) only 1 out of 21 papers released their source code.",
"In IR, specific reproducible research tracks have been created 3 and we are pleased to see the same happening at COLING 2018 4 .",
"Turning now to the focus of our investigations, Target Dependent sentiment analysis (TDSA) research (Nasukawa and Yi, 2003) arose as an extension to the coarse grained analysis of document level sentiment analysis (Pang et al., 2002; Turney, 2002) .",
"Since its inception, papers have applied different methods such as feature based (Kiritchenko et al., 2014) , Recursive Neural Networks (RecNN) (Dong et al., 2014) , Recurrent Neural Networks (RNN) (Tang et al., 2016a) , attention applied to RNN (Wang et al., 2016; Chen et al., 2017; Tay et al., 2017) , Neural Pooling (NP) Wang et al., 2017) , RNN combined with NP (Zhang et al., 2016) , and attention based neural networks (Tang et al., 2016b) .",
"Others have tackled TDSA as a joint task with target extraction, thus treating it as a sequence labelling problem.",
"Mitchell et al.",
"(2013) carried out this task using Conditional Random Fields (CRF), and this work was then extended using a neural CRF .",
"Both approaches found that combining the two tasks did not improve results compared to treating the two tasks separately, apart from when considering POS and NEG when the joint task performs better.",
"Finally, created an attention RNN for this task which was evaluated on two very different datasets containing written and spoken (video-based) reviews where the domain adaptation between the two shows some promise.",
"Overall, within the field of sentiment analysis there are other granularities such as sentence level (Socher et al., 2013) , topic (Augenstein et al., 2018) , and aspect (Wang et al., 2016; Tay et al., 2017) .",
"Aspect-level sentiment analysis relates to identifying the sentiment of (potentially multiple) topics in the same text although this can be seen as a similar task to TDSA.",
"However the clear distinction between aspect and TDSA is that TDSA requires the target to be mentioned in the text itself while aspect-level employs a conceptual category with potentially multiple related instantiations in the text.",
"Tang et al.",
"(2016a) created a Target Dependent LSTM (TDLSTM) which encompassed two LSTMs either side of the target word, then improved the model by concatenating the target vector to the input embeddings to create a Target Connected LSTM (TCLSTM).",
"Adding attention has become very popular recently.",
"Tang et al.",
"(2016b) showed the speed and accuracy improvements of using multiple attention layers only over LSTM based methods, however they found that it could not model complex sentences e.g.",
"negations.",
"Liu and Zhang (2017) showed that adding attention to a Bi-directional LSTM (BLSTM) improves the results as it takes the importance of each word into account with respect to the target.",
"Chen et al.",
"(2017) also combined a BLSTM and attention, however they used multiple attention layers and combined the results using a Gated Recurrent Unit (GRU) which they called Recurrent Attention on Memory (RAM), and they found this method to allow models to better understand more complex sentiment for each comparison.",
"used neural pooling features e.g.",
"max, min, etc of the word embeddings of the left and right context of the target word, the target itself, and the whole Tweet.",
"They inputted the features into a linear SVM, and showed the importance of using the left and right context for the first time.",
"They found in their study that using a combination of Word2Vec embeddings and sentiment embeddings performed best alongside using sentiment lexicons to filter the embedding space.",
"Other studies have adopted more linguistic approaches.",
"Wang et al.",
"(2017) extended the work of by using the dependency linked words from the target.",
"Dong et al.",
"(2014) used the dependency tree to create a Recursive Neural Network (RecNN) inspired by Socher et al.",
"(2013) but compared to Socher et al.",
"(2013) they also utilised the dependency tags to create an Adaptive RecNN (ARecNN).",
"Critically, the methods reported above have not been applied to the same datasets, therefore a true comparative evaluation between the different methods is somewhat difficult.",
"This has serious implications for generalisability of methods.",
"We correct that limitation in our study.",
"There are two papers taking a similar approach to our work in terms of generalisability although they do not combine them with the reproduction issues that we highlight.",
"First, Chen et al.",
"(2017) compared results across Se-mEval's laptop and restaurant reviews in English (Pontiki et al., 2014) , a Twitter dataset (Dong et al., 2014) and their own Chinese news comments dataset.",
"They did perform a comparison across different languages, domains, corpora types, and different methods; SVM with features (Kiritchenko et al., 2014) , Rec-NN (Dong et al., 2014) , TDLSTM (Tang et al., 2016a) , Memory Neural Network (MNet) (Tang et al., 2016b) and their own attention method.",
"However, the Chinese dataset was not released, and the methods were not compared across all datasets.",
"By contrast, we compare all methods across all datasets, using techniques that are not just from the Recurrent Neural Network (RNN) family.",
"A second paper, by Barnes et al.",
"(2017) compares seven approaches to (document and sentence level) sentiment analysis on six benchmark datasets, but does not systematically explore reproduction issues as we do in our paper.",
"Datasets used in our experiments We are evaluating our models over six different English datasets deliberately chosen to represent a range of domains, types and mediums.",
"As highlighted above, previous papers tend to only carry out evaluations on one or two datasets which limits the generalisability of their results.",
"In this paper, we do not consider the quality or inter-annotator agreement levels of these datasets but it has been noted that some datasets may have issues here.",
"For example, Pavlopoulos and Androutsopoulos (2014) point out that the Hu and Liu (2004) dataset does not state their inter-annotator agreement scores nor do they have aspect terms that express neutral opinion.",
"We only use a subset of the English datasets available.",
"For two reasons.",
"First, the time it takes to write parsers and run the models.",
"Second, we only used datasets that contain three distinct sentiments (Wilson (2008) only has two).",
"From the datasets we have used, we have only had issue with parsing Wang et al.",
"(2017) where the annotations for the first set of the data contains the target span but the second set does not.",
"Thus making it impossible to use the second set of annotation and forcing us to only use a subset of the dataset.",
"An as example of this: \"Got rid of bureaucrats 'and we put that money, into 9000 more doctors and nurses'... to turn the doctors into bureaucrats#BattleForNumber10\" in that Tweet 'bureaucrats' was annotated as negative but it does not state if it was the first or second instance of 'bureaucrats' since it does not use target spans.",
"As we can see from table 2, generally the social media datasets (Twitter and YouTube) contain more targets per sentence with the exception of Dong et al.",
"(2014) and Mitchell et al.",
"(2013) .",
"The only dataset that has a small difference between the number of unique sentiments per sentence is the Wang et al.",
"(2017) Reproduction studies In the following subsections, we present the three different methods that we are reproducing and how their results differ from the original analysis.",
"In all of the experiments below, we lower case all text and tokenise using Twokenizer (Gimpel et al., 2011) .",
"This was done as the datasets originate from Twitter and this pre-processing method was to some extent stated in and assumed to be used across the others as they do not explicitly state how they pre-process in the papers.",
"Reproduction of Vo and Zhang (2015) Vo and Zhang (2015) created the first NP method for TDSA.",
"It takes the word vectors of the left, right, target word, and full tweet/sentence/text contexts and performs max, min, average, standard deviation, and product pooling over these contexts to create a feature vector as input to the Support Vector Machine For each of the experiments below we used the following configurations unless otherwise stated: we performed 5 fold stratified cross validation, features are scaled using Max Min scaling before inputting into the SVM, and used the respective C-Values for the SVM stated in the paper for each of the models.",
"One major difficulty with the description of the method in the paper and re-implementation is handling the same target multiple appearances issue as originally raised by Wang et al.",
"(2017) .",
"As the method requires context with regards to the target word, if there is more than one appearance of the target word then the method does not specify which to use.",
"We therefore took the approach of Wang et al.",
"(2017) and found all of the features for each appearance and performed median pooling over features.",
"This change could explain the subtle differences between the results we report and those of the original paper.",
"used three different sentiment lexicons: MPQA 5 (Wilson et al., 2005) , NRC 6 (Mohammad and Turney, 2010) , and HL 7 (Hu and Liu, 2004) .",
"We found a small difference in word counts between their reported statistics for the MPQA lexicons and those we performed ourselves, as can be seen in the bold numbers in table 3.",
"Originally, we assumed that a word can only occur in one sentiment class within the same lexicon, and this resulted in differing counts for all lexicons.",
"This distinction is not clearly documented in the paper or code.",
"However, our assumption turned out to be incorrect, giving a further illustration of why detailed descriptions and documentation of all decisions is important.",
"We ran the same experiment as to show the effectiveness of sentiment lexicons the results can be seen in table 4.",
"We can clearly see there are some difference not just with the accuracy scores but the rank of the sentiment lexicons.",
"We found just using HL was best and MPQA does help performance compared to the Target-dep baseline which differs to findings.",
"Since we found that using just HL performed best, the rest of the results will apply the Target-dep+ method using HL and using HL & MPQA to show the affect of using the lexicon that both we and found best.",
"The original authors tested their methods using three different word vectors: 1.",
"Word2Vec trained by on 5 million Tweets containing emoticons (W2V), 2.",
"Sentiment Specific Word Embedding (SSWE) from , and 3.",
"W2V and SSWE combined.",
"Neither of these word embeddings are available from the original authors as never released the embeddings and the link to embeddings no longer works 8 .",
"However, the embeddings were released through Wang et al.",
"(2017) code base 9 following requesting of the code from .",
"Figure 1 shows the results of the different word embeddings across the different methods.",
"The main finding we see is that SSWE by themselves are not as informative as W2V vectors which is different to the findings of .",
"However we agree that combining the two vectors is beneficial and that the rank of methods is the same in our observations.",
"Sentiment Lexicons Word Counts Scaling and Final Model comparison We test all of the methods on the test data set of Dong et al.",
"(2014) and show the difference between the original and reproduced models in figure 2.",
"Finally, we show the effect of scaling using Max Min and not scaling the data.",
"As stated before, we have been using Max Min scaling on the NP features, however did not mention scaling in their paper.",
"The library they were using, LibLinear (Fan et al., 2008) , suggests in its practical guide (Hsu et al., 2003) to scale each feature to [0, 1] but this was not re-iterated by .",
"We are using scikit-learn's (Pedregosa et al., 2011) LinearSVC which is a wrapper of LibLinear, hence making it appropriate to use here.",
"As can be seen in figure 2, not scaling can affect the results by around one-third.",
"Reproduction of Wang et al.",
"(2017) Wang et al.",
"(2017) extended the NP work of and instead of using the full tweet/sentence/text contexts they used the full dependency graph of the target word.",
"Thus, they created three different methods: 1.",
"TDParseuses only the full dependency graph context, 2.",
"TDParse the feature of TDParseand the left and right contexts, and 3.",
"TDParse+ the features of TDParse and LS and RS contexts.",
"The experiments are performed on the Dong et al.",
"(2014) and Wang et al.",
"(2017) Twitter datasets where we train and test on the previously specified train and test splits.",
"We also scale our features using Max Min scaling before inputting into the SVM.",
"We used all three sentiment lexicons as in the original paper, and we found the C-Value by performing 5 fold stratified cross validation on the training datasets.",
"The results of these experiments can be seen in figure 3 10 .",
"As found with the results of replication, scaling is very important but is typically overlooked when reporting.",
"8 http://ir.hit.edu.cn/˜dytang/ 9 https://github.com/bluemonk482/tdparse 10 For the Election Twitter dataset TDParse+ result were never reported in the original paper.",
"Tang et al.",
"(2016a) was the first to use LSTMs specifically for TDSA.",
"They created three different models: 1.",
"LSTM a standard LSTM that runs over the length of the sentence and takes no target information into account, 2.",
"TDLSTM runs two LSTMs, one over the left and the other over the right context of the target word and concatenates the output of the two, and 3.",
"TCLSTM same as the TDLSTM method but each input word vector is concatenated with vector of the target word.",
"All of the methods outputs are fed into a softmax activation function.",
"The experiments are performed on the Dong et al.",
"(2014) dataset where we train and test on the specified splits.",
"For the LSTMs we initialised the weights using uniform distribution U(0.003, 0.003), used Stochastic Gradient Descent (SGD) a learning rate of 0.01, cross entropy loss, padded and truncated sequence to the length of the maximum sequence in the training dataset as stated in the original paper, and we did not \"set the clipping threshold of softmax layer as 200\" (Tang et al., 2016a) as we were unsure what this meant.",
"With regards to the number of epochs trained, we used early stopping with a patience of 10 and allowed 300 epochs.",
"Within their experiments they used SSWE and Glove Twitter vectors 11 (Pennington et al., 2014) .",
"As the paper being reproduced does not define the number of epochs they trained for, we use early stopping.",
"Thus for early stopping we require to split the training data into train and validation sets to know when to stop.",
"As it has been shown by Reimers and Gurevych (2017) that the random seed statistically significantly changes the results of experiments we ran each model over each word embedding thirty times, using a different seed value but keeping the same stratified train and validation split, and reported the results on the same test data as the original paper.",
"As can be seen in Figure 4 , the initial seed value makes a large difference more so for the smaller embeddings.",
"In table 5, we show the difference between our mean and maximum result and the original result for each model using the 200 dimension Glove Twitter vectors.",
"Even though the mean result is quite different from the original the maximum is much closer.",
"Our results generally agree with their results on the ranking of the word vectors and the embeddings.",
"Overall, we were able to reproduce the results of all three papers.",
"However for the neural network/deep learning approach of Tang et al.",
"(2016a) we agree with Reimers and Gurevych (2017) that reporting multiple runs of the system over different seed values is required as the single performance scores can be misleading, which could explain why previous papers obtained different results to the original for the TDLSTM method (Chen et al., 2017; Tay et al., 2017) .",
"Mass Evaluation For all of the methods we pre-processed the text by lower casing and tokenising using Twokenizer (Gimpel et al., 2011) , and we used all three sentiment lexicons where applicable.",
"We found the best word vectors from SSWE and the common crawl 42B 300 dimension Glove vectors by five fold stratified cross validation for the NP methods and the highest accuracy on the validation set for the LSTM methods.",
"We chose these word vectors as they have very different sizes (50 and 300), also they have been shown to perform well in different text types; SSWE for social media (Tang et al., 2016a) and Glove for reviews (Chen et al., 2017) .",
"To make the experiments quicker and computationally less expensive, we filtered out all words from the word vectors that did not appear in the train and test datasets, and this is equivalent with respect to word coverage as using all words.",
"Finally we only reported results for the LSTM methods with one seed value and not multiple due to time constraints.",
"The results of the methods using the best found word vectors on the test sets can be seen in table 6.",
"We find that the TDParse methods generally perform best but only clearly outperforms the other nondependency parser methods on the YouTuBean dataset.",
"We hypothesise that this is due to the dataset containing, on average, a deeper constituency tree depth which could be seen as on average more complex sentences.",
"This could be due to it being from the spoken medium compared to the rest of the datasets which are written.",
"Also that using a sentiment lexicon is almost always beneficial, but only by a small amount.",
"Within the LSTM based methods the TDLSTM method generally performs the best indicating that the extra target information that the TCLSTM method contains is not needed, but we believe this needs further analysis.",
"We can conclude that the simpler NP models perform well across domain, type and medium and that even without language specific tools and lexicons they are competitive to the more complex LSTM based methods.",
"Dataset Target-Dep F1 Discussion and conclusion The fast developing subfield of TDSA has so far lacked a large-scale comparative mass evaluation of approaches using different models and datasets.",
"In this paper, we address this generalisability limitation and perform the first direct comparison and reproduction of three different approaches for TDSA.",
"While carrying out these reproductions, we have noted and described above, the many emerging issues in previous research related to incomplete descriptions of methods and settings, patchy release of code, and lack of comparative evaluations.",
"This is natural in a developing field, but it is crucial for ongoing development within NLP in general that improved repeatability practices are adopted.",
"The practices adopted in our case studies are to reproduce the methods in open source code, adopt only open data, provide format conversion tools to ingest the different data formats, and describe and document all settings via the code and Jupyter Notebooks (released initially in anonymous form at submission time) 12 .",
"We therefore argue that papers should not consider repeatability (replication or reproduction) or generalisability alone, but these two key tenets of scientific practice should be brought together.",
"In future work, we aim to extend our reproduction framework further, and extend the comparative evaluation to languages other than English.",
"This will necessitate changes in the framework since we expect that dependency parsers and sentiment lexicons will be unavailable for specific languages.",
"Also we will explore through error analysis in which situations different neural network architectures perform best."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"4.1.3",
"4.2",
"5",
"6"
],
"paper_header_content": [
"Introduction",
"Related work",
"Datasets used in our experiments",
"Reproduction studies",
"Reproduction of Vo and Zhang (2015)",
"Scaling and Final Model comparison",
"Reproduction of Wang et al. (2017)",
"Mass Evaluation",
"Discussion and conclusion"
]
}
|
GEM-SciDuet-train-47#paper-1071#slide-10
|
Mass Evaluation Datasets
|
Dataset Domain Type Size Medium ATS
SemEval 14 L L RE W
SemEval 14 R R RE W
Mitchel G S W
Dong Twitter G S W
Election Twitter P S W
YouTuBean MP RE/S SP
L=Laptop, R=Restaurant, G=General, P=Politics, MP=Mobile Phones,
RE=Review, S=Social Media, W=Written, SP=Spoken, ATS=Average
|
Dataset Domain Type Size Medium ATS
SemEval 14 L L RE W
SemEval 14 R R RE W
Mitchel G S W
Dong Twitter G S W
Election Twitter P S W
YouTuBean MP RE/S SP
L=Laptop, R=Restaurant, G=General, P=Politics, MP=Mobile Phones,
RE=Review, S=Social Media, W=Written, SP=Spoken, ATS=Average
|
[] |
GEM-SciDuet-train-47#paper-1071#slide-12
|
1071
|
Bringing replication and reproduction together with generalisability in NLP: Three reproduction studies for Target Dependent Sentiment Analysis
|
Lack of repeatability and generalisability are two significant threats to continuing scientific development in Natural Language Processing. Language models and learning methods are so complex that scientific conference papers no longer contain enough space for the technical depth required for replication or reproduction. Taking Target Dependent Sentiment Analysis as a case study, we show how recent work in the field has not consistently released code, or described settings for learning methods in enough detail, and lacks comparability and generalisability in train, test or validation data. To investigate generalisability and to enable state of the art comparative evaluations, we carry out the first reproduction studies of three groups of complementary methods and perform the first large-scale mass evaluation on six different English datasets. Reflecting on our experiences, we recommend that future replication or reproduction experiments should always consider a variety of datasets alongside documenting and releasing their methods and published code in order to minimise the barriers to both repeatability and generalisability. We have released our code with a model zoo on GitHub with Jupyter Notebooks to aid understanding and full documentation, and we recommend that others do the same with their papers at submission time through an anonymised GitHub account.
|
{
"paper_content_id": [
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39,
40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59,
60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79,
80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99,
100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119,
120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139,
140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159,
160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179,
180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199,
200, 201, 202, 203, 204, 205
],
"paper_content_text": [
"Introduction Repeatable (replicable and/or reproducible 1 ) experimentation is a core tenet of the scientific endeavour.",
"In Natural Language Processing (NLP) research as in other areas, this requires three crucial components: (a) published methods described in sufficient detail (b) a working code base and (c) open dataset(s) to permit training, testing and validation to be reproduced and generalised.",
"In the cognate sub-discipline of corpus linguistics, releasing textual datasets has been a defining feature of the community for many years, enabling multiple comparative experiments to be conducted on a stable basis since the core underlying corpora are community resources.",
"In NLP, with methods becoming increasingly complex with the use of machine learning and deep learning approaches, it is often difficult to describe all settings and configurations in enough detail without releasing code.",
"The work described in this paper emerged from recent efforts at our research centre to reimplement others' work across a number of topics (e.g.",
"text reuse, identity resolution and sentiment analysis) where previously published methods were not easily repeatable because of missing or broken code or dependencies, and/or where methods were not sufficiently well described to enable reproduction.",
"We focus on one sub-area of sentiment analysis to illustrate the extent of these problems, along with our initial recommendations and contributions to address the issues.",
"The area of Target Dependent Sentiment Analysis (TDSA) and NLP in general has been growing rapidly in the last few years due to new neural network methods that require no feature engineering.",
"However it is difficult to keep track of the state of the art as new models are tested on different datasets, thus preventing true comparative evaluations.",
"This is best shown by table 1 where many approaches are evaluated on the SemEval dataset (Pontiki et al., 2014) but not all.",
"This work is licenced under a Creative Commons Attribution 4.0 International Licence (licence details: http://creativecommons.org/licenses/by/4.0/).",
"We follow the definitions in Antske Fokkens' guest blog post \"replication (obtaining the same results using the same experiment) as well as reproduction (reach the same conclusion through different means)\" from http://coling2018.org/slowly-growing-offspring-zigglebottom-anno-2017-guest-post/",
"Datasets can vary by domain (e.g.",
"product), type (social media, review), or medium (written or spoken), and to date there has been no comparative evaluation of methods from these multiple classes.",
"Our primary and secondary contributions therefore, are to carry out the first study that reports results across all three different dataset classes, and to release a open source code framework implementing three complementary groups of TDSA methods.",
"In terms of reproducibility via code release, recent TDSA papers have generally been very good with regards to publishing code alongside their papers (Mitchell et al., 2013; Zhang et al., 2016; Liu and Zhang, 2017; Wang et al., 2017) but other papers have not released code (Wang et al., 2016; Tay et al., 2017) .",
"In some cases, the code was initially made available, then removed, and is now back online (Tang et al., 2016a) .",
"Unfortunately, in some cases even when code has been published, different results have been obtained relative to the original paper.",
"This can be seen when Chen et al.",
"(2017) used the code and embeddings in Tang et al.",
"(2016b): they observed different results.",
"Similarly, when others (Tay et al., 2017; Chen et al., 2017) attempt to replicate the experiments of Tang et al.",
"(2016a) they also produce different results to the original authors.",
"Our observations within this one sub-field motivates the need to investigate further and understand how such problems can be avoided in the future.",
"In some cases, when code has been released, it is difficult to use, which could explain why the results were not reproduced.",
"Of course, we would not expect researchers to produce industrial strength code, or provide continuing free ongoing support for multiple years after publication, but the situation is clearly problematic for the development of the new field in general.",
"In this paper, we therefore reproduce three papers chosen as they employ widely differing methods: Neural Pooling (NP) (Vo and Zhang, 2015), NP with dependency parsing (Wang et al., 2017), and RNN (Tang et al., 2016a), as well as having been applied largely to different datasets.",
"At the end of the paper, we reflect on bringing together elements of repeatability and generalisability which we find are crucial to NLP and data science based disciplines more widely to enable others to make use of the science created.",
"Related work Reproducibility and replicability have long been key elements of the scientific method, but have been gaining renewed prominence recently across a number of disciplines with attention being given to a 'reproducibility crisis'.",
"For example, in pharmaceutical research, as little as 20-25% of papers were found to be replicable (Prinz et al., 2011) .",
"The problem has also been recognised in computer science in general (Collberg and Proebsting, 2016) .",
"Reproducibility and replicability have been researched for some time in Information Retrieval (IR), since the Grid@CLEF pilot track (Ferro and Harman, 2009).",
"The aim was to create a 'grid of points' where a point defined the performance of a particular IR system using certain pre-processing techniques on a defined dataset.",
"Louridas and Gousios (2012) looked at reproducibility in Software Engineering after trying to replicate another authors results and concluded with a list of requirements for papers to be reproducible: (a) All data related to the paper, (b) All code required to reproduce the paper and (c) Documentation for the code and data.",
"Fokkens et al.",
"(2013) looked at reproducibility in WordNet similarity and Named Entity Recognition finding five key aspects that cause experimental variation and therefore need to be clearly stated: (a) pre-processing, (b) experimental setup, (c) versioning, (d) system output, (e) system variation.",
"In Twitter sentiment analysis, Sygkounas et al.",
"(2016) stated the need for using the same library versions and datasets when replicating work.",
"Different methods of releasing datasets and code have been suggested.",
"Ferro and Harman (2009) defined a framework (CIRCO) that enforces a pre-processing pipeline where data can be extracted at each stage therefore facilitating a validation step.",
"They stated a mechanism for storing results, dataset and pre-processed data 2 .",
"Louridas and Gousios (2012) suggested the use of a virtual machine alongside papers to bundle the data and code together, while most state the advantages of releasing source code (Fokkens et al., 2013; Potthast et al., 2016; Sygkounas et al., 2016) .",
"The act of reproducing or replicating results is not just for validating research but to also show how it can be improved.",
"Ferro and Silvello (2016) followed up their initial research and were able to analyse which pre-processing techniques were important on a French monolingual dataset and how the different techniques affected each other given an IR system.",
"Fokkens et al.",
"(2013) showed how changes in the five key aspects affected results.",
"The closest related work to our reproducibility study is that of Marrese-Taylor and Matsuo (2017), in which they replicate three different syntactic-based aspect extraction methods.",
"They found that parameter tuning was very important; however, using different pre-processing pipelines such as Stanford's CoreNLP did not have a consistent effect on the results.",
"They found that the methods stated in the original papers are not detailed enough to replicate the study, as evidenced by the large differences in their results.",
"Dashtipour et al.",
"(2016) undertook a replication study in sentiment prediction, however this was at the document level and on different datasets and languages to the originals.",
"In other areas of (aspect-based) sentiment analysis, releasing code for published systems has not been a high priority, e.g.",
"in SemEval 2016 task 5 (Pontiki et al., 2016) only 1 out of 21 papers released their source code.",
"In IR, specific reproducible research tracks have been created 3 and we are pleased to see the same happening at COLING 2018 4 .",
"Turning now to the focus of our investigations, Target Dependent sentiment analysis (TDSA) research (Nasukawa and Yi, 2003) arose as an extension to the coarse grained analysis of document level sentiment analysis (Pang et al., 2002; Turney, 2002) .",
"Since its inception, papers have applied different methods such as feature-based (Kiritchenko et al., 2014), Recursive Neural Networks (RecNN) (Dong et al., 2014), Recurrent Neural Networks (RNN) (Tang et al., 2016a), attention applied to RNN (Wang et al., 2016; Chen et al., 2017; Tay et al., 2017), Neural Pooling (NP) (Vo and Zhang, 2015; Wang et al., 2017), RNN combined with NP (Zhang et al., 2016), and attention-based neural networks (Tang et al., 2016b).",
"Others have tackled TDSA as a joint task with target extraction, thus treating it as a sequence labelling problem.",
"Mitchell et al.",
"(2013) carried out this task using Conditional Random Fields (CRF), and this work was then extended using a neural CRF.",
"Both approaches found that combining the two tasks did not improve results compared to treating the two tasks separately, apart from when considering only the POS and NEG labels, where the joint task performs better.",
"Finally, an attention RNN was created for this task and evaluated on two very different datasets containing written and spoken (video-based) reviews, where the domain adaptation between the two shows some promise.",
"Overall, within the field of sentiment analysis there are other granularities such as sentence level (Socher et al., 2013) , topic (Augenstein et al., 2018) , and aspect (Wang et al., 2016; Tay et al., 2017) .",
"Aspect-level sentiment analysis relates to identifying the sentiment of (potentially multiple) topics in the same text although this can be seen as a similar task to TDSA.",
"However the clear distinction between aspect and TDSA is that TDSA requires the target to be mentioned in the text itself while aspect-level employs a conceptual category with potentially multiple related instantiations in the text.",
"Tang et al.",
"(2016a) created a Target Dependent LSTM (TDLSTM) which encompassed two LSTMs either side of the target word, then improved the model by concatenating the target vector to the input embeddings to create a Target Connected LSTM (TCLSTM).",
"Adding attention has become very popular recently.",
"Tang et al.",
"(2016b) showed the speed and accuracy improvements of using multiple attention layers only over LSTM based methods, however they found that it could not model complex sentences e.g.",
"negations.",
"Liu and Zhang (2017) showed that adding attention to a Bi-directional LSTM (BLSTM) improves the results as it takes the importance of each word into account with respect to the target.",
"Chen et al.",
"(2017) also combined a BLSTM and attention, however they used multiple attention layers and combined the results using a Gated Recurrent Unit (GRU) which they called Recurrent Attention on Memory (RAM), and they found this method to allow models to better understand more complex sentiment for each comparison.",
"Vo and Zhang (2015) used neural pooling features, e.g.",
"max, min, etc., of the word embeddings of the left and right contexts of the target word, the target itself, and the whole Tweet.",
"They inputted the features into a linear SVM, and showed the importance of using the left and right context for the first time.",
"They found in their study that using a combination of Word2Vec embeddings and sentiment embeddings performed best alongside using sentiment lexicons to filter the embedding space.",
"Other studies have adopted more linguistic approaches.",
"Wang et al.",
"(2017) extended the work of Vo and Zhang (2015) by using the dependency-linked words from the target.",
"Dong et al.",
"(2014) used the dependency tree to create a Recursive Neural Network (RecNN) inspired by Socher et al.",
"(2013) but compared to Socher et al.",
"(2013) they also utilised the dependency tags to create an Adaptive RecNN (ARecNN).",
"Critically, the methods reported above have not been applied to the same datasets, therefore a true comparative evaluation between the different methods is somewhat difficult.",
"This has serious implications for generalisability of methods.",
"We correct that limitation in our study.",
"There are two papers taking a similar approach to our work in terms of generalisability although they do not combine them with the reproduction issues that we highlight.",
"First, Chen et al.",
"(2017) compared results across Se-mEval's laptop and restaurant reviews in English (Pontiki et al., 2014) , a Twitter dataset (Dong et al., 2014) and their own Chinese news comments dataset.",
"They did perform a comparison across different languages, domains, corpora types, and different methods; SVM with features (Kiritchenko et al., 2014) , Rec-NN (Dong et al., 2014) , TDLSTM (Tang et al., 2016a) , Memory Neural Network (MNet) (Tang et al., 2016b) and their own attention method.",
"However, the Chinese dataset was not released, and the methods were not compared across all datasets.",
"By contrast, we compare all methods across all datasets, using techniques that are not just from the Recurrent Neural Network (RNN) family.",
"A second paper, by Barnes et al.",
"(2017) compares seven approaches to (document and sentence level) sentiment analysis on six benchmark datasets, but does not systematically explore reproduction issues as we do in our paper.",
"Datasets used in our experiments We are evaluating our models over six different English datasets deliberately chosen to represent a range of domains, types and mediums.",
"As highlighted above, previous papers tend to only carry out evaluations on one or two datasets which limits the generalisability of their results.",
"In this paper, we do not consider the quality or inter-annotator agreement levels of these datasets but it has been noted that some datasets may have issues here.",
"For example, Pavlopoulos and Androutsopoulos (2014) point out that the Hu and Liu (2004) dataset does not state their inter-annotator agreement scores nor do they have aspect terms that express neutral opinion.",
"We only use a subset of the available English datasets.",
"This is for two reasons.",
"First, the time it takes to write parsers and run the models.",
"Second, we only used datasets that contain three distinct sentiments (Wilson (2008) only has two).",
"Of the datasets we have used, we only had issues parsing Wang et al.",
"(2017), where the annotations for the first set of the data contain the target span but the second set does not.",
"This makes it impossible to use the second set of annotations, forcing us to use only a subset of the dataset.",
"As an example of this: \"Got rid of bureaucrats 'and we put that money, into 9000 more doctors and nurses'... to turn the doctors into bureaucrats#BattleForNumber10\"; in that Tweet, 'bureaucrats' was annotated as negative, but it does not state whether it was the first or second instance of 'bureaucrats', since target spans are not used.",
"As we can see from table 2, generally the social media datasets (Twitter and YouTube) contain more targets per sentence with the exception of Dong et al.",
"(2014) and Mitchell et al.",
"(2013) .",
"The only dataset that has a small difference between the number of unique sentiments per sentence is the Wang et al.",
"(2017) dataset. Reproduction studies In the following subsections, we present the three different methods that we are reproducing and how their results differ from the original analysis.",
"In all of the experiments below, we lower case all text and tokenise using Twokenizer (Gimpel et al., 2011) .",
"This was done as the datasets originate from Twitter, and this pre-processing method was to some extent stated in Vo and Zhang (2015) and assumed to be used across the others, as they do not explicitly state how they pre-process in the papers.",
"Reproduction of Vo and Zhang (2015) Vo and Zhang (2015) created the first NP method for TDSA.",
"It takes the word vectors of the left, right, target word, and full tweet/sentence/text contexts and performs max, min, average, standard deviation, and product pooling over these contexts to create a feature vector as input to a Support Vector Machine (SVM). For each of the experiments below we used the following configurations unless otherwise stated: we performed 5-fold stratified cross validation, features are scaled using Max-Min scaling before being input to the SVM, and we used the respective C-values for the SVM stated in the paper for each of the models.",
"One major difficulty with the description of the method in the paper, and with its re-implementation, is handling the issue of the same target appearing multiple times, as originally raised by Wang et al.",
"(2017) .",
"As the method requires context with regards to the target word, if there is more than one appearance of the target word then the method does not specify which to use.",
"We therefore took the approach of Wang et al.",
"(2017) and found all of the features for each appearance and performed median pooling over features.",
"This change could explain the subtle differences between the results we report and those of the original paper.",
"Vo and Zhang (2015) used three different sentiment lexicons: MPQA (Wilson et al., 2005), NRC (Mohammad and Turney, 2010), and HL (Hu and Liu, 2004).",
"We found a small difference in word counts between their reported statistics for the MPQA lexicons and those we performed ourselves, as can be seen in the bold numbers in table 3.",
"Originally, we assumed that a word can only occur in one sentiment class within the same lexicon, and this resulted in differing counts for all lexicons.",
"This distinction is not clearly documented in the paper or code.",
"However, our assumption turned out to be incorrect, giving a further illustration of why detailed descriptions and documentation of all decisions is important.",
"We ran the same experiment as Vo and Zhang (2015) to show the effectiveness of sentiment lexicons; the results can be seen in table 4.",
"We can clearly see there are some differences, not just in the accuracy scores but also in the rank of the sentiment lexicons.",
"We found that using just HL was best and that MPQA does help performance compared to the Target-dep baseline, which differs from the findings of Vo and Zhang (2015).",
"Since we found that using just HL performed best, the rest of the results will apply the Target-dep+ method using HL, and using HL & MPQA, to show the effect of using the lexicons that we and Vo and Zhang (2015) respectively found best.",
"The original authors tested their methods using three different word vectors: 1.",
"Word2Vec embeddings trained by Vo and Zhang (2015) on 5 million Tweets containing emoticons (W2V), 2.",
"Sentiment Specific Word Embeddings (SSWE) from Tang et al. (2014), and 3.",
"W2V and SSWE combined.",
"None of these word embeddings are available from the original authors, as Vo and Zhang (2015) never released their embeddings and the link to the SSWE embeddings no longer works 8 .",
"However, the embeddings were released through Wang et al.",
"(2017) code base 9 , following a request for the code from Vo and Zhang (2015).",
"Figure 1 shows the results of the different word embeddings across the different methods.",
"The main finding we see is that SSWE by themselves are not as informative as W2V vectors, which differs from the findings of Vo and Zhang (2015).",
"However we agree that combining the two vectors is beneficial and that the rank of methods is the same in our observations.",
"Scaling and Final Model comparison We test all of the methods on the test data set of Dong et al.",
"(2014) and show the difference between the original and reproduced models in figure 2.",
"Finally, we show the effect of scaling using Max Min and not scaling the data.",
"As stated before, we have been using Max-Min scaling on the NP features; however, Vo and Zhang (2015) did not mention scaling in their paper.",
"The library they were using, LibLinear (Fan et al., 2008), suggests in its practical guide (Hsu et al., 2003) to scale each feature to [0, 1], but this was not re-iterated by Vo and Zhang (2015).",
"We are using scikit-learn's (Pedregosa et al., 2011) LinearSVC which is a wrapper of LibLinear, hence making it appropriate to use here.",
"As can be seen in figure 2, not scaling can affect the results by around one-third.",
"Reproduction of Wang et al.",
"(2017) Wang et al.",
"(2017) extended the NP work of Vo and Zhang (2015); instead of using the full tweet/sentence/text contexts, they used the full dependency graph of the target word.",
"Thus, they created three different methods: 1.",
"TDParse- uses only the full dependency graph context, 2.",
"TDParse the features of TDParse- and the left and right contexts, and 3.",
"TDParse+ the features of TDParse and the LS and RS contexts.",
"The experiments are performed on the Dong et al.",
"(2014) and Wang et al.",
"(2017) Twitter datasets where we train and test on the previously specified train and test splits.",
"We also scale our features using Max Min scaling before inputting into the SVM.",
"We used all three sentiment lexicons as in the original paper, and we found the C-Value by performing 5 fold stratified cross validation on the training datasets.",
"The results of these experiments can be seen in figure 3 10 .",
"As found with the replication results above, scaling is very important but is typically overlooked when reporting.",
"8 http://ir.hit.edu.cn/˜dytang/ 9 https://github.com/bluemonk482/tdparse 10 For the Election Twitter dataset, TDParse+ results were never reported in the original paper.",
"Tang et al.",
"(2016a) was the first to use LSTMs specifically for TDSA.",
"They created three different models: 1.",
"LSTM a standard LSTM that runs over the length of the sentence and takes no target information into account, 2.",
"TDLSTM runs two LSTMs, one over the left and the other over the right context of the target word and concatenates the output of the two, and 3.",
"TCLSTM same as the TDLSTM method but each input word vector is concatenated with vector of the target word.",
"All of the methods outputs are fed into a softmax activation function.",
"The experiments are performed on the Dong et al.",
"(2014) dataset where we train and test on the specified splits.",
"For the LSTMs we initialised the weights using the uniform distribution U(-0.003, 0.003), used Stochastic Gradient Descent (SGD) with a learning rate of 0.01 and cross entropy loss, padded and truncated sequences to the length of the maximum sequence in the training dataset as stated in the original paper, and we did not \"set the clipping threshold of softmax layer as 200\" (Tang et al., 2016a) as we were unsure what this meant.",
"With regards to the number of epochs trained, we used early stopping with a patience of 10 and allowed 300 epochs.",
"Within their experiments they used SSWE and Glove Twitter vectors 11 (Pennington et al., 2014) .",
"As the paper being reproduced does not define the number of epochs they trained for, we use early stopping.",
"Thus, for early stopping we need to split the training data into train and validation sets to know when to stop.",
"As it has been shown by Reimers and Gurevych (2017) that the random seed statistically significantly changes the results of experiments we ran each model over each word embedding thirty times, using a different seed value but keeping the same stratified train and validation split, and reported the results on the same test data as the original paper.",
"As can be seen in Figure 4, the initial seed value makes a large difference, more so for the smaller embeddings.",
"In table 5, we show the difference between our mean and maximum result and the original result for each model using the 200 dimension Glove Twitter vectors.",
"Even though the mean result is quite different from the original the maximum is much closer.",
"Our results generally agree with their results on the ranking of the word vectors and the embeddings.",
"Overall, we were able to reproduce the results of all three papers.",
"However for the neural network/deep learning approach of Tang et al.",
"(2016a) we agree with Reimers and Gurevych (2017) that reporting multiple runs of the system over different seed values is required as the single performance scores can be misleading, which could explain why previous papers obtained different results to the original for the TDLSTM method (Chen et al., 2017; Tay et al., 2017) .",
"Mass Evaluation For all of the methods we pre-processed the text by lower casing and tokenising using Twokenizer (Gimpel et al., 2011) , and we used all three sentiment lexicons where applicable.",
"We found the best word vectors from SSWE and the common crawl 42B 300 dimension Glove vectors by five fold stratified cross validation for the NP methods and the highest accuracy on the validation set for the LSTM methods.",
"We chose these word vectors as they have very different sizes (50 and 300), also they have been shown to perform well in different text types; SSWE for social media (Tang et al., 2016a) and Glove for reviews (Chen et al., 2017) .",
"To make the experiments quicker and computationally less expensive, we filtered out all words from the word vectors that did not appear in the train and test datasets; this is equivalent, with respect to word coverage, to using all words.",
"Finally we only reported results for the LSTM methods with one seed value and not multiple due to time constraints.",
"The results of the methods using the best found word vectors on the test sets can be seen in table 6.",
"We find that the TDParse methods generally perform best, but they only clearly outperform the other non-dependency-parser methods on the YouTuBean dataset.",
"We hypothesise that this is due to the dataset containing, on average, a deeper constituency tree depth which could be seen as on average more complex sentences.",
"This could be due to it being from the spoken medium compared to the rest of the datasets which are written.",
"We also find that using a sentiment lexicon is almost always beneficial, but only by a small amount.",
"Within the LSTM based methods the TDLSTM method generally performs the best indicating that the extra target information that the TCLSTM method contains is not needed, but we believe this needs further analysis.",
"We can conclude that the simpler NP models perform well across domain, type and medium and that even without language specific tools and lexicons they are competitive to the more complex LSTM based methods.",
"Discussion and conclusion The fast-developing subfield of TDSA has so far lacked a large-scale comparative mass evaluation of approaches using different models and datasets.",
"In this paper, we address this generalisability limitation and perform the first direct comparison and reproduction of three different approaches for TDSA.",
"While carrying out these reproductions, we have noted and described above, the many emerging issues in previous research related to incomplete descriptions of methods and settings, patchy release of code, and lack of comparative evaluations.",
"This is natural in a developing field, but it is crucial for ongoing development within NLP in general that improved repeatability practices are adopted.",
"The practices adopted in our case studies are to reproduce the methods in open source code, adopt only open data, provide format conversion tools to ingest the different data formats, and describe and document all settings via the code and Jupyter Notebooks (released initially in anonymous form at submission time) 12 .",
"We therefore argue that papers should not consider repeatability (replication or reproduction) or generalisability alone, but these two key tenets of scientific practice should be brought together.",
"In future work, we aim to extend our reproduction framework further, and extend the comparative evaluation to languages other than English.",
"This will necessitate changes in the framework since we expect that dependency parsers and sentiment lexicons will be unavailable for specific languages.",
"Also we will explore through error analysis in which situations different neural network architectures perform best."
]
}
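The neural pooling features reproduced in the paper content above (Section 4.1: max, min, average, standard deviation and product pooling over the left, right, target and full-text contexts, with element-wise median pooling over multiple appearances of the same target, following Wang et al. (2017)) can be sketched as below. This is an illustrative sketch, not the authors' released code; the fallback to the target's own vectors when a left or right context is empty is our assumption.

```python
from statistics import mean, pstdev, median

def pool(vectors):
    """Concatenate max, min, average, standard deviation and product
    pooling over a list of word vectors (lists of floats)."""
    dims = range(len(vectors[0]))
    feats = []
    feats += [max(v[d] for v in vectors) for d in dims]       # max pooling
    feats += [min(v[d] for v in vectors) for d in dims]       # min pooling
    feats += [mean(v[d] for v in vectors) for d in dims]      # average pooling
    feats += [pstdev([v[d] for v in vectors]) for d in dims]  # std pooling
    prod = [1.0] * len(vectors[0])
    for v in vectors:
        prod = [p * x for p, x in zip(prod, v)]
    feats += prod                                             # product pooling
    return feats

def target_features(tokens, embeddings, target_spans):
    """Pool each context (left, right, target, full text) for every
    appearance of the target, then take the element-wise median over
    the per-appearance feature vectors."""
    vecs = [embeddings[t] for t in tokens]
    per_appearance = []
    for start, end in target_spans:
        left = vecs[:start] or vecs[start:end]   # fallback: assumption
        right = vecs[end:] or vecs[start:end]
        per_appearance.append(
            pool(left) + pool(right) + pool(vecs[start:end]) + pool(vecs))
    return [median(col) for col in zip(*per_appearance)]
```

With d-dimensional embeddings this yields a 4 x 5 x d feature vector per target, which the paper then feeds to a linear SVM.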
|
{
"paper_header_number": ["1", "2", "3", "4", "4.1", "4.1.3", "4.2", "5", "6"],
"paper_header_content": [
"Introduction",
"Related work",
"Datasets used in our experiments",
"Reproduction studies",
"Reproduction of Vo and Zhang (2015)",
"Scaling and Final Model comparison",
"Reproduction of Wang et al. (2017)",
"Mass Evaluation",
"Discussion and conclusion"
]
}
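The "Scaling and Final Model comparison" section above finds that omitting Max-Min scaling can change results by around one-third, following the LibLinear practical guide's advice to scale each feature to [0, 1]. A minimal dependency-free sketch, equivalent in spirit to scikit-learn's MinMaxScaler (fit on the training data only, then applied to any split):

```python
def fit_min_max(train_rows):
    """Learn per-feature minima and maxima from the training data only,
    so no information leaks from the test set."""
    cols = list(zip(*train_rows))
    return [min(c) for c in cols], [max(c) for c in cols]

def apply_min_max(rows, mins, maxs):
    """Scale every feature to [0, 1]; constant features map to 0.0.
    Test values outside the training range fall outside [0, 1]."""
    return [[(x - lo) / (hi - lo) if hi > lo else 0.0
             for x, lo, hi in zip(row, mins, maxs)]
            for row in rows]
```

Fitting on train and transforming both splits mirrors the setup assumed in the reproduction.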
|
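The TDLSTM/TCLSTM input construction described in the Tang et al. (2016a) reproduction can be sketched as follows. The LSTMs themselves are omitted; the inclusion of the target words in both contexts follows the usual TDLSTM formulation, and averaging a multi-word target into a single vector for TCLSTM is our assumption.

```python
def td_contexts(tokens, start, end):
    """TDLSTM inputs: one LSTM reads the left context up to and
    including the target; a second reads the target plus the right
    context in reverse. Their outputs are later concatenated and fed
    to a softmax layer (not shown)."""
    left = tokens[:end]           # left context + target words
    right = tokens[start:][::-1]  # target + right context, reversed
    return left, right

def tclstm_inputs(vectors, start, end):
    """TCLSTM: concatenate the (averaged) target vector onto every
    input word vector before it reaches the LSTMs."""
    target = [sum(dim) / (end - start) for dim in zip(*vectors[start:end])]
    return [v + target for v in vectors]
```

For a plain (non-target-dependent) LSTM baseline, the whole token sequence is encoded once with no use of `start`/`end`.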
GEM-SciDuet-train-47#paper-1071#slide-12
|
Contributions
|
Generalisability: First to report results across across three different
dataset properties: 1. Domain, 2. Type, 3. Medium.
Reproduction: Open source TDSA framework with three different
Code, documentation, Jupyter notebook examples, and model zoo:
|
Generalisability: First to report results across across three different
dataset properties: 1. Domain, 2. Type, 3. Medium.
Reproduction: Open source TDSA framework with three different
Code, documentation, Jupyter notebook examples, and model zoo:
|
[] |
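The record above describes training the LSTM reproductions with early stopping: a patience of 10 epochs and a cap of 300 epochs, selecting on a held-out validation split. A generic sketch of that loop, where `epoch_step` and `val_score` are hypothetical placeholders for the real training and validation-accuracy code:

```python
def train_with_early_stopping(epoch_step, val_score, patience=10, max_epochs=300):
    """Train until validation accuracy has not improved for `patience`
    consecutive epochs, capped at `max_epochs`. Returns the best
    validation score and the epoch at which it was reached."""
    best, best_epoch, waited = float("-inf"), 0, 0
    for epoch in range(1, max_epochs + 1):
        epoch_step()         # one pass over the training data
        score = val_score()  # accuracy on the held-out validation set
        if score > best:
            best, best_epoch, waited = score, epoch, 0
        else:
            waited += 1
            if waited >= patience:
                break
    return best, best_epoch
```

As the paper notes (citing Reimers and Gurevych, 2017), this whole loop should be repeated over many random seeds, reporting the distribution of scores rather than a single run.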
GEM-SciDuet-train-48#paper-1077#slide-0
|
1077
|
Unsupervised Learning and Modeling of Knowledge and Intent for Spoken Dialogue Systems
|
Spoken dialogue systems (SDS) are rapidly appearing in various smart devices (smartphone, smart-TV, in-car navigating system, etc). The key role in a successful SDS is a spoken language understanding (SLU) component, which parses user utterances into semantic concepts in order to understand users' intentions. However, such semantic concepts and their structure are manually created by experts, and the annotation process results in extremely high cost and poor scalability in system development. Therefore, the dissertation focuses on improving SDS generalization and scalability by automatically inferring domain knowledge and learning structures from unlabeled conversations through a matrix factorization (MF) technique. With the automatically acquired semantic concepts and structures, we further investigate whether such information can be utilized to effectively understand user utterances and then show the feasibility of reducing human effort during SDS development.
|
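The abstract above describes inferring domain knowledge by completing a partially-missing matrix with matrix factorization (MF). As a generic illustration of that idea (not the dissertation's actual model), the sketch below completes a partially observed matrix by stochastic gradient descent on low-rank factors; the rank, learning rate and regularisation values are arbitrary choices for the example.

```python
import random

def mf_complete(matrix, rank=2, steps=2000, lr=0.05, reg=0.001, seed=0):
    """Fill the None entries of `matrix` by fitting low-rank factors
    U, V to the observed entries, then returning the dense product."""
    rng = random.Random(seed)
    n, m = len(matrix), len(matrix[0])
    U = [[rng.uniform(-0.1, 0.1) for _ in range(rank)] for _ in range(n)]
    V = [[rng.uniform(-0.1, 0.1) for _ in range(rank)] for _ in range(m)]
    observed = [(i, j, matrix[i][j]) for i in range(n)
                for j in range(m) if matrix[i][j] is not None]
    for _ in range(steps):
        for i, j, x in observed:
            pred = sum(U[i][k] * V[j][k] for k in range(rank))
            err = x - pred
            for k in range(rank):  # L2-regularised SGD update
                u, v = U[i][k], V[j][k]
                U[i][k] += lr * (err * v - reg * u)
                V[j][k] += lr * (err * u - reg * v)
    return [[sum(U[i][k] * V[j][k] for k in range(rank)) for j in range(m)]
            for i in range(n)]
```

Observed cells are reconstructed closely, while the missing cells are inferred from the learned low-rank structure, mirroring the "dark circles are observed facts, shaded circles are inferred facts" description in the paper content.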
{
"paper_content_id": [
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39,
40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59,
60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79,
80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99,
100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113
],
"paper_content_text": [
"Introduction Various smart devices (e.g.",
"smartphone, smart-TV, in-car navigating system) are incorporating spoken language interfaces, a.k.a.",
"spoken dialogue systems (SDS), in order to help users finish tasks more efficiently.",
"The key role in a successful SDS is a spoken language understanding (SLU) component; in order to capture the language variation from dialogue participants, the SLU component must create a mapping between the natural language inputs and semantic representations that correspond to users' intentions.",
"The semantic representation must include \"concepts' and a \"structure\": concepts are the domain-specific topics, and the structure describes the relations between concepts and conveys intentions.",
"However, most prior work focused on learning the mapping between utterances and semantic representations, where such knowledge still remains predefined.",
"The need of annotations results in extremely high cost and poor scalability in system development.",
"Therefore, current technology usually limits conversational interactions to a few narrow predefined domains/topics.",
"With the increasing conversational interactions, this dissertation focuses on improving generalization and scalability of building SDSs with little human effort.",
"In order to achieve the goal, two questions need to be addressed: 1) Given unlabelled conversations, how can a system automatically induce and organize the domain-specific concepts?",
"2) With the automatically acquired knowledge, how can a system understand user utterances and intents?",
"To tackle the above problems, we propose to acquire the domain knowledge that captures human's semantics, intents, and behaviors.",
"Then based on the acquired knowledge, we build an SLU component to understand users and to offer better interactions in dialogues.",
"The dissertation shows the feasibility of building a dialogue learning system that is able to understand how particular domains work based on unlabeled conversations.",
"As a result, an initial SDS can be automatically built according to the learned knowledge, and its performance can be quickly improved by interacting with users for practical usage, presenting the potential of reducing human effort for SDS development.",
"Our MF method completes a partially-missing matrix for semantic decoding/behavior prediction.",
"Dark circles are observed facts, shaded circles are inferred facts.",
"The ontology induction maps observed feature patterns to semantic concepts.",
"The feature relation model constructs correlations between observed feature patterns.",
"The concept relation model learns the high-level semantic correlations for inferring hidden semantic slots or predicting subsequent behaviors.",
"Reasoning with matrix factorization incorporates these models jointly, and produces a coherent and domain-specific SLU model.",
"the intent detection problem in SLU, showing that results obtained from the unsupervised training process align well with the performance of traditional supervised learning.",
"Following their success of unsupervised SLU, recent studies have also obtained interesting results on the tasks of relation detection (Hakkani-Tür et al., 2013; Chen et al., 2014a) , entity extraction (Wang et al., 2014) , and extending domain coverage (El-Kahky et al., 2014; Chen and Rudnicky, 2014) .",
"However, most studies above do not explicitly learn latent factor representations from the data-while we hypothesize that the better robustness can be achieved by explicitly modeling the measurement errors (usually produced by automatic speech recognizers (ASR)) using latent variable models and taking additional local and global semantic constraints into account.",
"Latent Variable Modeling in SLU Early studies on latent variable modeling in speech included the classic hidden Markov model for statistical speech recognition (Jelinek, 1997) .",
"Recently, were the first to study the intent detection problem using query logs and a discrete Bayesian latent variable model.",
"In the field of dialogue modeling, the partially observable Markov decision process (POMDP) (Young et al., 2013) model is a popular technique for dialogue management, reducing the cost of handcrafted dialogue managers while producing robustness against speech recognition errors.",
"More recently, used a semi-supervised LDA model to show improvement on the slot filling task.",
"Also, Zhai and Williams (2014) proposed an unsupervised model for connecting words with latent states in HMMs using topic models, obtaining interesting qualitative and quantitative results.",
"However, for unsupervised SLU, it is not obvious how to incorporate additional information in the HMMs.",
"With increasing works about learn-ing the feature matrices for language representations (Mikolov et al., 2013) , matrix factorization (MF) has become very popular for both implicit and explicit feedback (Rendle et al., 2009; Chen et al., 2015a) .",
"This thesis proposal is the first to propose a framework about unsupervised SLU modeling, which is able to simultaneously consider various local and global knowledge automatically learned from unlabelled data using a matrix factorization (MF) technique.",
"The Proposed Work The proposed framework is shown in Figure 1(a) , where there are two main parts, one is knowledge acquisition and another is SLU modeling by MF.",
"The first part is to acquire the domain knowledge that is useful for building the domain-specific dialogue systems, which addresses the question about how to induce and organize the semantic concepts (the first problem).",
"Here we propose ontology induction and structure learning procedures.",
"The ontology induction refers to the semantic concept induction (yellow block) and the structure learning refers to relation models (blue and pink blocks) in Figure 1 (a).",
"The details are described in Section 4.",
"The second part is to self-train an SLU component using the acquired knowledge for the domainspecific SDS, and this part answers to the question about how to utilize the obtained information in SDSs to understand user utterances and intents.",
"There are two aspects regarding to SLU modeling, semantic decoding and behavior prediction.",
"The semantic decoding is to parse the input utterances into semantic forms for better understanding, and the behavior prediction is to predict the subsequent user behaviors for providing better system interactions.",
"This dissertation plans to apply MF techniques to unsupervised SLU modeling, including both semantic decoding and behavior prediction.",
"In the proposed model, we first build a feature matrix to represent training utterances, where each row refers to an utterance and each column refers to an observed feature pattern or a learned semantic concept (either a slot or a behavior).",
"terance implies the meaning facet food.",
"The MF approach is able to learn the latent feature vectors for utterances and semantic concepts, inferring implicit semantics to improve the decoding process-namely, by filling the matrix with probabilities (lower part of the matrix in Figure 1(b) ).",
"The feature model is built on the observed feature patterns and the learned concepts, where the concepts are obtained from the knowledge acquisition process Chen et al., 2015b) .",
"Section 5.1 explains the detail of the feature model.",
"In order to consider the additional structure information, we propose a relation propagation model based on the learned structure, which includes a feature relation model (blue block) and a concept relation model (pink block) described in Section 5.2.",
"Finally we train an SLU model by learning latent feature vectors for utterances and slots/behaviors through MF techniques.",
"Combining with a relation propagation model, the trained SLU model is able to estimate the probability that each concept occurs in the testing utterance, and how likely each concept is domain-specific simultaneously.",
"In other words, the SLU model is able to transform testing utterances into domainspecific semantic representations or predicted behaviors without human involvement.",
"Knowledge Acquisition Given unlabeled conversations and available knowledge resources, we plan to extract organized knowledge that can be used for domain-specific SDSs.",
"The ontology induction and structure learning are proposed to automate an ontology building process.",
"2014b) proposed to automatically induce semantic slots for SDSs by framesemantic parsing, where all ASR-decoded utter- ances are parsed using SEMAFOR 1 , a state-ofthe-art frame-semantic parser (Das et al., 2010; Das et al., 2013) , and then all frames from parsed results are extracted as slot candidates (Dinarelli et al., 2009) .",
"For example, Figure 2 shows an example of an ASR-decoded text output parsed by SEMAFOR.",
"There are three frames (capability, expensiveness, and locale by use) in the utterance, which we consider as slot candidates.",
"Ontology Induction Since SEMAFOR was trained on FrameNet annotation, which has a more generic framesemantic context, not all the frames from the parsing results can be used as the actual slots in the domain-specific dialogue systems.",
"For instance, in Figure 2 , \"expensiveness\" and \"locale by use\" frames are essentially the key slots for the purpose of understanding in the restaurant query domain, whereas the \"capability\" frame does not convey particularly valuable information for the domain-specific SDS.",
"In order to fix this issue, Chen et al.",
"(2014b) proved that integrating continuous-valued word embeddings with a probabilistic frame-semantic parser is able to identify key semantic slots in an unsupervised fashion, reducing the cost of designing task-oriented SDSs.",
"Structure Learning A key challenge of designing a coherent semantic ontology for SLU is to consider the structure and relations between semantic concepts.",
"In practice, however, it is difficult for domain experts and professional annotators to define a coherent slot set, while considering various lexical, syntactic, and semantic dependencies.",
"The previous work exploited the typed syntactic dependency theory for unsupervised induction and organization of semantic slots in SDSs (Chen et al., 2015b) .",
"More specifically, two knowledge graphs, a slot-based semantic knowledge graph and a word-based lexical knowledge graph, are automatically constructed.",
"To jointly consider the word-to-word, word-to-slot, and slot-to-slot relations, we use a random walk inference algorithm to combine these two knowledge graphs, guided by dependency grammars.",
"Figure 3 is a simplified example of the automatically built semantic knowledge graph corresponding to the restaurant domain.",
"The experiments showed that considering inter-slot relations is crucial for generating a more coherent and compete slot set, resulting in a better SLU model, while enhancing the interpretability of semantic slots.",
"SLU Modeling by Matrix Factorization For two aspects of SLU modeling: semantic decoding and behavior prediction, we plan to apply MF to both tasks by treating learned concepts as semantic slots and human behaviors respectively.",
"Considering the benefits brought by MF techniques, including 1) modeling the noisy data, 2) modeling hidden information, and 3) modeling the dependency between observations, the dissertation applies an MF approach to SLU modeling for SDSs.",
"In our model, we use U to denote the set of input utterances, F as the set of observed feature patterns, and S as the set of semantic concepts we would like to predict (slots or human behaviors).",
"The pair of an utterance u ∈ U and a feature/concept x ∈ {F +S}, u, x , is a fact.",
"The input to our model is a set of observed facts O, and the observed facts for a given utterance is denoted by { u, x ∈ O}.",
"The goal of our model is to estimate, for a given utterance u and a given feature pattern/concept x, the probability, p(M u,x = 1), where M u,x is a binary random variable that is true if and only if x is the feature pattern/domainspecific concept in the utterance u.",
"We introduce a series of exponential family models that estimate the probability using a natural parameter θ u,x and the logistic sigmoid function: p(M u,x = 1 | θ u,x ) = σ(θ u,x ) (1) = 1 1 + exp (−θ u,x ) .",
"We construct a matrix M |U |×(|F |+|S|) as observed facts for MF by integrating a feature model and a relation propagation model below.",
"Feature Model First, we build a binary feature pattern matrix F f based on the observations, where each row refers to an utterance and each column refers to a feature pattern (a word or a phrase).",
"In other words, F f carries the basic word/phrase vector for each utterance, which is illustrated as the left part of the matrix in Figure 1(b) .",
"Then we build a binary matrix F s based on the induced semantic concepts from Section 4.1, which also denotes the slot/behavior features for all utterances (right part of the matrix in Figure 1(b) ).",
"For building the feature model M F , we concatenate two matrices and obtain M F = [ F f F s ], which refers to the upper part of the matrix in Figure 1(b) for training utterances.",
"Relation Propagation Model It is shown that the structure of semantic concepts helps decide domain-specific slots and further improves the SLU performance (Chen et al., 2015b) .",
"With the learned structure from Section 4.2, we can model the relations between semantic concepts, such as inter-slot and inter-behavior relations.",
"Also, the relations between feature patterns can be modeled in the similar way.",
"We construct two knowledge graphs to model the structure: • Feature knowledge graph is built as G f = V f , E f f , where V f = {f i ∈ F } and E f f = {e ij | f i , f j ∈ V f }.",
"• Semantic concept knowledge graph is built as G s = V s , E ss , where V s = {s i ∈ S} and E ss = {e ij | s i , s j ∈ V s }.",
"The structured graph can model the relation between the connected node pair (x i , x j ) as r(x i , x j ).",
"Here we compute two matrices R s = [r(s i , s j )] |S|×|S| and R f = [r(f i , f j )] |F |×|F | to represent concept relations and feature relations respectively.",
"With the built relation models, we combine them as a relation propagation matrix M R 2 : M R = R f 0 0 R s .",
"(2) The goal of this matrix is to propagate scores between nodes according to different types of relations in the constructed knowledge graphs (Chen and Metze, 2012) .",
"Integrated Model With a feature model M F and a relation propagation model M R , we integrate them into a single matrix.",
"M = M F · (M R + I) (3) = F f R f + F f 0 0 F s R s + F s , where M is final matrix and I is the identity matrix in order to remain the original values.",
"The matrix M is similar to M F , but some weights are enhanced through the relation propagation model.",
"The feature relations are built by F f R f , which is the matrix with internal weight propagation on the feature knowledge graph (the blue arrow in Figure 1(b) ).",
"Similarly, F s R s models the semantic concept correlations, and can be treated as the matrix with internal weight propagation on the semantic concept knowledge graph (the pink arrow in Figure 1(b) ).",
"The propagation model can be treated as running a random walk algorithm on the graphs.",
"By integrating with the relation propagation model, the relations can be propagated via the knowledge graphs, and the hidden information may be modeled based on the assumption that mutual relations usually help inference (Chen et al., 2015b) .",
"Hence, the structure information can be automatically involved in the matrix.",
"In conclusion, for each utterance, the integrated model not only predicts the probabilities that semantic concepts occur but also considers whether they are domain-specific.",
"Model Learning The proposed model is parameterized through weights and latent component vectors, where the parameters are estimated by maximizing the log likelihood of observed data (Collins et al., 2001) .",
"θ * = arg max θ u∈U p(θ | M u ) (4) = arg max θ u∈U p(M u | θ)p(θ) = arg max θ u∈U ln p(M u | θ) − λ θ , where M u is the vector corresponding to the utterance u from M u,x in (1), because we assume that each utterance is independent of others.",
"To avoid treating unobserved facts as designed negative facts, we consider our positive-only data as implicit feedback.",
"Bayesian Personalized Ranking (BPR) is an optimization criterion that learns from implicit feedback for MF, which uses a variant of the ranking: giving observed true facts higher scores than unobserved (true or false) facts (Rendle et al., 2009 ).",
"Riedel et al.",
"(2013) also showed that BPR learns the implicit relations and improves a relation extraction task.",
"To estimate the parameters in (4), we create a dataset of ranked pairs from M in (3): for each utterance u and each observed fact f + = u, x + , where M u,x ≥ δ, we choose each semantic concept x − such that f − = u, x − , where M u,x < δ, which refers to the semantic concept we have not observed in utterance u.",
"That is, we construct the observed data O from M .",
"Then for each pair of facts f + and f − , we want to model p(f + ) > p(f − ) and hence θ f + > θ f − according to (1).",
"BPR maximizes the summation of each ranked pair, where the objective is u∈U ln p(M u | θ) = f + ∈O f − ∈O ln σ(θ f + − θ f − ).",
"(5) The BPR objective is an approximation to the per utterance AUC (area under the ROC curve), which directly correlates to what we want to achieve -well-ranked semantic concepts per utterance, which denotes the better estimation of semantic slots or human behaviors.",
"To maximize the objective in (5), we employ a stochastic gradient descent (SGD) algorithm (Rendle et al., 2009) .",
"For each randomly sampled observed fact u, x + , we sample an unobserved fact u, x − , which results in |O| fact pairs f − , f + .",
"For each pair, we perform an SGD update using the gradient of the corresponding objective function for matrix factorization (Gantner et al., 2011) .",
"Conclusion and Future Work This thesis proposal proposes an unsupervised SLU approach by automating the dialogue learning process on speech conversations.",
"The preliminary results show that for the automatic speech recognition (ASR) transcripts (word error rate is about 37%), the acquired knowledge can be successfully applied to SLU modeling through MF techniques, guiding the direction of the methodology.",
"The main planed tasks include: • Semantic concept identification • Semantic concept annotation • SLU modeling by matrix factorization In this thesis proposal, ongoing work and future plans have been presented towards an automatically built domain-specific SDS.",
"With increasing semantic resources, such as Google's Knowledge Graph and Microsoft Satori, the dissertation shows the feasibility that utilizing available knowledge improves the generalization and the scalability of dialogue system development for practical usage."
]
}
|
{
"paper_header_number": [
"1",
"3",
"4",
"4.1",
"4.2",
"5",
"5.1",
"5.2",
"5.3",
"5.4",
"6"
],
"paper_header_content": [
"Introduction",
"The Proposed Work",
"Knowledge Acquisition",
"Ontology Induction",
"Structure Learning",
"SLU Modeling by Matrix Factorization",
"Feature Model",
"Relation Propagation Model",
"Integrated Model",
"Model Learning",
"Conclusion and Future Work"
]
}
|
GEM-SciDuet-train-48#paper-1077#slide-0
|
A popular robot baymax
|
Baymax is capable of maintaining a good spoken dialogue system and learning new knowledge for better understanding and interacting with people.
Big Hero 6 -- Video content owned and licensed by Disney Entertainment, Marvel Entertainment, LLC, etc
|
Baymax is capable of maintaining a good spoken dialogue system and learning new knowledge for better understanding and interacting with people.
Big Hero 6 -- Video content owned and licensed by Disney Entertainment, Marvel Entertainment, LLC, etc
|
[] |
GEM-SciDuet-train-48#paper-1077#slide-1
|
1077
|
Unsupervised Learning and Modeling of Knowledge and Intent for Spoken Dialogue Systems
|
Spoken dialogue systems (SDS) are rapidly appearing in various smart devices (smartphone, smart-TV, in-car navigating system, etc). The key role in a successful SDS is a spoken language understanding (SLU) component, which parses user utterances into semantic concepts in order to understand users' intentions. However, such semantic concepts and their structure are manually created by experts, and the annotation process results in extremely high cost and poor scalability in system development. Therefore, the dissertation focuses on improving SDS generalization and scalability by automatically inferring domain knowledge and learning structures from unlabeled conversations through a matrix factorization (MF) technique. With the automatically acquired semantic concepts and structures, we further investigate whether such information can be utilized to effectively understand user utterances and then show the feasibility of reducing human effort during SDS development.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113
],
"paper_content_text": [
"Introduction Various smart devices (e.g.",
"smartphone, smart-TV, in-car navigating system) are incorporating spoken language interfaces, a.k.a.",
"spoken dialogue systems (SDS), in order to help users finish tasks more efficiently.",
"The key role in a successful SDS is a spoken language understanding (SLU) component; in order to capture the language variation from dialogue participants, the SLU component must create a mapping between the natural language inputs and semantic representations that correspond to users' intentions.",
"The semantic representation must include \"concepts' and a \"structure\": concepts are the domain-specific topics, and the structure describes the relations between concepts and conveys intentions.",
"However, most prior work focused on learning the mapping between utterances and semantic representations, where such knowledge still remains predefined.",
"The need of annotations results in extremely high cost and poor scalability in system development.",
"Therefore, current technology usually limits conversational interactions to a few narrow predefined domains/topics.",
"With the increasing conversational interactions, this dissertation focuses on improving generalization and scalability of building SDSs with little human effort.",
"In order to achieve the goal, two questions need to be addressed: 1) Given unlabelled conversations, how can a system automatically induce and organize the domain-specific concepts?",
"2) With the automatically acquired knowledge, how can a system understand user utterances and intents?",
"To tackle the above problems, we propose to acquire the domain knowledge that captures human's semantics, intents, and behaviors.",
"Then based on the acquired knowledge, we build an SLU component to understand users and to offer better interactions in dialogues.",
"The dissertation shows the feasibility of building a dialogue learning system that is able to understand how particular domains work based on unlabeled conversations.",
"As a result, an initial SDS can be automatically built according to the learned knowledge, and its performance can be quickly improved by interacting with users for practical usage, presenting the potential of reducing human effort for SDS development.",
"Our MF method completes a partially-missing matrix for semantic decoding/behavior prediction.",
"Dark circles are observed facts, shaded circles are inferred facts.",
"The ontology induction maps observed feature patterns to semantic concepts.",
"The feature relation model constructs correlations between observed feature patterns.",
"The concept relation model learns the high-level semantic correlations for inferring hidden semantic slots or predicting subsequent behaviors.",
"Reasoning with matrix factorization incorporates these models jointly, and produces a coherent and domain-specific SLU model.",
"the intent detection problem in SLU, showing that results obtained from the unsupervised training process align well with the performance of traditional supervised learning.",
"Following their success of unsupervised SLU, recent studies have also obtained interesting results on the tasks of relation detection (Hakkani-Tür et al., 2013; Chen et al., 2014a) , entity extraction (Wang et al., 2014) , and extending domain coverage (El-Kahky et al., 2014; Chen and Rudnicky, 2014) .",
"However, most studies above do not explicitly learn latent factor representations from the data-while we hypothesize that the better robustness can be achieved by explicitly modeling the measurement errors (usually produced by automatic speech recognizers (ASR)) using latent variable models and taking additional local and global semantic constraints into account.",
"Latent Variable Modeling in SLU Early studies on latent variable modeling in speech included the classic hidden Markov model for statistical speech recognition (Jelinek, 1997) .",
"Recently, were the first to study the intent detection problem using query logs and a discrete Bayesian latent variable model.",
"In the field of dialogue modeling, the partially observable Markov decision process (POMDP) (Young et al., 2013) model is a popular technique for dialogue management, reducing the cost of handcrafted dialogue managers while producing robustness against speech recognition errors.",
"More recently, used a semi-supervised LDA model to show improvement on the slot filling task.",
"Also, Zhai and Williams (2014) proposed an unsupervised model for connecting words with latent states in HMMs using topic models, obtaining interesting qualitative and quantitative results.",
"However, for unsupervised SLU, it is not obvious how to incorporate additional information in the HMMs.",
"With increasing works about learn-ing the feature matrices for language representations (Mikolov et al., 2013) , matrix factorization (MF) has become very popular for both implicit and explicit feedback (Rendle et al., 2009; Chen et al., 2015a) .",
"This thesis proposal is the first to propose a framework about unsupervised SLU modeling, which is able to simultaneously consider various local and global knowledge automatically learned from unlabelled data using a matrix factorization (MF) technique.",
"The Proposed Work The proposed framework is shown in Figure 1(a) , where there are two main parts, one is knowledge acquisition and another is SLU modeling by MF.",
"The first part is to acquire the domain knowledge that is useful for building the domain-specific dialogue systems, which addresses the question about how to induce and organize the semantic concepts (the first problem).",
"Here we propose ontology induction and structure learning procedures.",
"The ontology induction refers to the semantic concept induction (yellow block) and the structure learning refers to relation models (blue and pink blocks) in Figure 1 (a).",
"The details are described in Section 4.",
"The second part is to self-train an SLU component using the acquired knowledge for the domainspecific SDS, and this part answers to the question about how to utilize the obtained information in SDSs to understand user utterances and intents.",
"There are two aspects regarding to SLU modeling, semantic decoding and behavior prediction.",
"The semantic decoding is to parse the input utterances into semantic forms for better understanding, and the behavior prediction is to predict the subsequent user behaviors for providing better system interactions.",
"This dissertation plans to apply MF techniques to unsupervised SLU modeling, including both semantic decoding and behavior prediction.",
"In the proposed model, we first build a feature matrix to represent training utterances, where each row refers to an utterance and each column refers to an observed feature pattern or a learned semantic concept (either a slot or a behavior).",
"terance implies the meaning facet food.",
"The MF approach is able to learn the latent feature vectors for utterances and semantic concepts, inferring implicit semantics to improve the decoding process-namely, by filling the matrix with probabilities (lower part of the matrix in Figure 1(b) ).",
"The feature model is built on the observed feature patterns and the learned concepts, where the concepts are obtained from the knowledge acquisition process Chen et al., 2015b) .",
"Section 5.1 explains the detail of the feature model.",
"In order to consider the additional structure information, we propose a relation propagation model based on the learned structure, which includes a feature relation model (blue block) and a concept relation model (pink block) described in Section 5.2.",
"Finally we train an SLU model by learning latent feature vectors for utterances and slots/behaviors through MF techniques.",
"Combining with a relation propagation model, the trained SLU model is able to estimate the probability that each concept occurs in the testing utterance, and how likely each concept is domain-specific simultaneously.",
"In other words, the SLU model is able to transform testing utterances into domainspecific semantic representations or predicted behaviors without human involvement.",
"Knowledge Acquisition Given unlabeled conversations and available knowledge resources, we plan to extract organized knowledge that can be used for domain-specific SDSs.",
"The ontology induction and structure learning are proposed to automate an ontology building process.",
"2014b) proposed to automatically induce semantic slots for SDSs by framesemantic parsing, where all ASR-decoded utter- ances are parsed using SEMAFOR 1 , a state-ofthe-art frame-semantic parser (Das et al., 2010; Das et al., 2013) , and then all frames from parsed results are extracted as slot candidates (Dinarelli et al., 2009) .",
"For example, Figure 2 shows an example of an ASR-decoded text output parsed by SEMAFOR.",
"There are three frames (capability, expensiveness, and locale by use) in the utterance, which we consider as slot candidates.",
"Ontology Induction Since SEMAFOR was trained on FrameNet annotation, which has a more generic framesemantic context, not all the frames from the parsing results can be used as the actual slots in the domain-specific dialogue systems.",
"For instance, in Figure 2 , \"expensiveness\" and \"locale by use\" frames are essentially the key slots for the purpose of understanding in the restaurant query domain, whereas the \"capability\" frame does not convey particularly valuable information for the domain-specific SDS.",
"In order to fix this issue, Chen et al.",
"(2014b) proved that integrating continuous-valued word embeddings with a probabilistic frame-semantic parser is able to identify key semantic slots in an unsupervised fashion, reducing the cost of designing task-oriented SDSs.",
"Structure Learning A key challenge of designing a coherent semantic ontology for SLU is to consider the structure and relations between semantic concepts.",
"In practice, however, it is difficult for domain experts and professional annotators to define a coherent slot set, while considering various lexical, syntactic, and semantic dependencies.",
"The previous work exploited the typed syntactic dependency theory for unsupervised induction and organization of semantic slots in SDSs (Chen et al., 2015b) .",
"More specifically, two knowledge graphs, a slot-based semantic knowledge graph and a word-based lexical knowledge graph, are automatically constructed.",
"To jointly consider the word-to-word, word-to-slot, and slot-to-slot relations, we use a random walk inference algorithm to combine these two knowledge graphs, guided by dependency grammars.",
"Figure 3 is a simplified example of the automatically built semantic knowledge graph corresponding to the restaurant domain.",
"The experiments showed that considering inter-slot relations is crucial for generating a more coherent and compete slot set, resulting in a better SLU model, while enhancing the interpretability of semantic slots.",
"SLU Modeling by Matrix Factorization For two aspects of SLU modeling: semantic decoding and behavior prediction, we plan to apply MF to both tasks by treating learned concepts as semantic slots and human behaviors respectively.",
"Considering the benefits brought by MF techniques, including 1) modeling the noisy data, 2) modeling hidden information, and 3) modeling the dependency between observations, the dissertation applies an MF approach to SLU modeling for SDSs.",
"In our model, we use U to denote the set of input utterances, F as the set of observed feature patterns, and S as the set of semantic concepts we would like to predict (slots or human behaviors).",
"The pair of an utterance u ∈ U and a feature/concept x ∈ {F + S}, denoted ⟨u, x⟩, is a fact.",
"The input to our model is a set of observed facts O, and the observed facts for a given utterance are denoted by {⟨u, x⟩ ∈ O}.",
"The goal of our model is to estimate, for a given utterance u and a given feature pattern/concept x, the probability p(M_{u,x} = 1), where M_{u,x} is a binary random variable that is true if and only if x is a feature pattern/domain-specific concept in the utterance u.",
"We introduce a series of exponential family models that estimate the probability using a natural parameter θ_{u,x} and the logistic sigmoid function: p(M_{u,x} = 1 | θ_{u,x}) = σ(θ_{u,x}) = 1 / (1 + exp(−θ_{u,x})). (1)",
"We construct a matrix M of size |U| × (|F| + |S|) as observed facts for MF by integrating a feature model and a relation propagation model below.",
"Feature Model First, we build a binary feature pattern matrix F_f based on the observations, where each row refers to an utterance and each column refers to a feature pattern (a word or a phrase).",
"In other words, F_f carries the basic word/phrase vector for each utterance, which is illustrated as the left part of the matrix in Figure 1(b).",
"Then we build a binary matrix F_s based on the induced semantic concepts from Section 4.1, which also denotes the slot/behavior features for all utterances (right part of the matrix in Figure 1(b)).",
"For building the feature model M_F, we concatenate the two matrices and obtain M_F = [F_f F_s], which refers to the upper part of the matrix in Figure 1(b) for training utterances.",
"Relation Propagation Model It is shown that the structure of semantic concepts helps decide domain-specific slots and further improves the SLU performance (Chen et al., 2015b) .",
"With the learned structure from Section 4.2, we can model the relations between semantic concepts, such as inter-slot and inter-behavior relations.",
"Also, the relations between feature patterns can be modeled in a similar way.",
"We construct two knowledge graphs to model the structure: • Feature knowledge graph G_f = ⟨V_f, E_ff⟩, where V_f = {f_i ∈ F} and E_ff = {e_ij | f_i, f_j ∈ V_f}.",
"• Semantic concept knowledge graph G_s = ⟨V_s, E_ss⟩, where V_s = {s_i ∈ S} and E_ss = {e_ij | s_i, s_j ∈ V_s}.",
"The structured graph can model the relation between a connected node pair (x_i, x_j) as r(x_i, x_j).",
"Here we compute two matrices R_s = [r(s_i, s_j)]_{|S| × |S|} and R_f = [r(f_i, f_j)]_{|F| × |F|} to represent concept relations and feature relations respectively.",
"With the built relation models, we combine them into a block-diagonal relation propagation matrix M_R = [R_f, 0; 0, R_s]. (2)",
"The goal of this matrix is to propagate scores between nodes according to different types of relations in the constructed knowledge graphs (Chen and Metze, 2012).",
"Integrated Model With a feature model M_F and a relation propagation model M_R, we integrate them into a single matrix.",
"M = M_F · (M_R + I) = [F_f R_f + F_f, F_s R_s + F_s], (3) where M is the final matrix and I is the identity matrix, included so that the original values are retained.",
"The matrix M is similar to M_F, but some weights are enhanced through the relation propagation model.",
"The feature relations are built by F_f R_f, which is the matrix with internal weight propagation on the feature knowledge graph (the blue arrow in Figure 1(b)).",
"Similarly, F_s R_s models the semantic concept correlations, and can be treated as the matrix with internal weight propagation on the semantic concept knowledge graph (the pink arrow in Figure 1(b)).",
"The propagation model can be treated as running a random walk algorithm on the graphs.",
"By integrating with the relation propagation model, the relations can be propagated via the knowledge graphs, and the hidden information may be modeled based on the assumption that mutual relations usually help inference (Chen et al., 2015b) .",
"Hence, the structure information can be automatically involved in the matrix.",
"In conclusion, for each utterance, the integrated model not only predicts the probabilities that semantic concepts occur but also considers whether they are domain-specific.",
"Model Learning The proposed model is parameterized through weights and latent component vectors, where the parameters are estimated by maximizing the log likelihood of observed data (Collins et al., 2001) .",
"θ* = arg max_θ Π_{u∈U} p(θ | M_u) = arg max_θ Π_{u∈U} p(M_u | θ) p(θ) = arg max_θ Σ_{u∈U} ln p(M_u | θ) − λ_θ, (4) where M_u is the vector corresponding to the utterance u from M_{u,x} in (1), because we assume that each utterance is independent of the others.",
"To avoid treating unobserved facts as designed negative facts, we consider our positive-only data as implicit feedback.",
"Bayesian Personalized Ranking (BPR) is an optimization criterion that learns from implicit feedback for MF, which uses a variant of the ranking: giving observed true facts higher scores than unobserved (true or false) facts (Rendle et al., 2009 ).",
"Riedel et al.",
"(2013) also showed that BPR learns the implicit relations and improves a relation extraction task.",
"To estimate the parameters in (4), we create a dataset of ranked pairs from M in (3): for each utterance u and each observed fact f+ = ⟨u, x+⟩ with M_{u,x+} ≥ δ, we choose each semantic concept x− such that f− = ⟨u, x−⟩ with M_{u,x−} < δ, which refers to a semantic concept we have not observed in utterance u.",
"That is, we construct the observed data O from M .",
"Then for each pair of facts f+ and f−, we want to model p(f+) > p(f−) and hence θ_{f+} > θ_{f−} according to (1).",
"BPR maximizes the summation over ranked pairs, where the objective is Σ_{u∈U} ln p(M_u | θ) = Σ_{f+ ∈ O} Σ_{f− ∉ O} ln σ(θ_{f+} − θ_{f−}). (5)",
"The BPR objective is an approximation to the per-utterance AUC (area under the ROC curve), which directly correlates with what we want to achieve: well-ranked semantic concepts per utterance, i.e., a better estimation of semantic slots or human behaviors.",
"To maximize the objective in (5), we employ a stochastic gradient descent (SGD) algorithm (Rendle et al., 2009) .",
"For each randomly sampled observed fact ⟨u, x+⟩, we sample an unobserved fact ⟨u, x−⟩, which results in |O| fact pairs ⟨f−, f+⟩.",
"For each pair, we perform an SGD update using the gradient of the corresponding objective function for matrix factorization (Gantner et al., 2011) .",
"Conclusion and Future Work This thesis proposal proposes an unsupervised SLU approach by automating the dialogue learning process on speech conversations.",
"The preliminary results show that for the automatic speech recognition (ASR) transcripts (word error rate is about 37%), the acquired knowledge can be successfully applied to SLU modeling through MF techniques, guiding the direction of the methodology.",
"The main planned tasks include: • Semantic concept identification • Semantic concept annotation • SLU modeling by matrix factorization In this thesis proposal, ongoing work and future plans have been presented towards an automatically built domain-specific SDS.",
"With increasing semantic resources, such as Google's Knowledge Graph and Microsoft Satori, the dissertation shows the feasibility that utilizing available knowledge improves the generalization and the scalability of dialogue system development for practical usage."
]
}
|
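The integrated model in the paper content above (M = M_F · (M_R + I), Eqs. 2–3) reduces, because M_R is block-diagonal, to the concatenation [F_f R_f + F_f | F_s R_s + F_s]. A minimal pure-Python sketch of that computation; the toy matrices below are illustrative stand-ins, not values from the paper's data:

```python
def matmul(A, B):
    """Naive matrix multiplication for small dense matrices."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def integrated_model(F_f, F_s, R_f, R_s):
    """M = M_F . (M_R + I), with M_R = diag(R_f, R_s) (Eqs. 2-3).
    Because M_R is block-diagonal, the product splits into
    [F_f R_f + F_f | F_s R_s + F_s]."""
    left = [[x + y for x, y in zip(row_p, row_f)]
            for row_p, row_f in zip(matmul(F_f, R_f), F_f)]
    right = [[x + y for x, y in zip(row_p, row_s)]
             for row_p, row_s in zip(matmul(F_s, R_s), F_s)]
    return [l + r for l, r in zip(left, right)]

# Toy example: 2 utterances, 2 feature patterns, 2 semantic concepts.
F_f = [[1, 0], [0, 1]]          # observed word/phrase patterns
F_s = [[1, 0], [0, 0]]          # induced slot/behavior features
R_f = [[0, 0.5], [0.5, 0]]      # feature-feature relation weights
R_s = [[0, 0.3], [0.3, 0]]      # concept-concept relation weights
M = integrated_model(F_f, F_s, R_f, R_s)
```

Adding I keeps each utterance row's observed entries intact, while the R blocks let weight flow to related features and concepts, which is the "hidden information" propagation the paper describes.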
{
"paper_header_number": [
"1",
"3",
"4",
"4.1",
"4.2",
"5",
"5.1",
"5.2",
"5.3",
"5.4",
"6"
],
"paper_header_content": [
"Introduction",
"The Proposed Work",
"Knowledge Acquisition",
"Ontology Induction",
"Structure Learning",
"SLU Modeling by Matrix Factorization",
"Feature Model",
"Relation Propagation Model",
"Integrated Model",
"Model Learning",
"Conclusion and Future Work"
]
}
|
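The BPR training procedure in Section 5.4 of the record above (Eqs. 4–5) can be illustrated with a small SGD loop over ranked pairs. The factorization θ_{u,x} = ⟨p_u, q_x⟩, the learning rate, and the regularization constant below are illustrative assumptions, not values reported in the paper:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def bpr_sgd_step(P, Q, u, x_pos, x_neg, lr=0.05, reg=0.01):
    """One BPR update (Rendle et al., 2009) on a ranked pair:
    push score(u, x_pos) above score(u, x_neg)."""
    score = lambda x: sum(p * q for p, q in zip(P[u], Q[x]))
    g = 1.0 - sigmoid(score(x_pos) - score(x_neg))  # d ln sigma / d theta
    for k in range(len(P[u])):
        pu, qp, qn = P[u][k], Q[x_pos][k], Q[x_neg][k]
        P[u][k]     += lr * (g * (qp - qn) - reg * pu)
        Q[x_pos][k] += lr * (g * pu       - reg * qp)
        Q[x_neg][k] += lr * (-g * pu      - reg * qn)

# Toy run: 1 utterance, 2 semantic concepts, 2 latent dimensions.
random.seed(0)
P = [[random.uniform(-0.1, 0.1) for _ in range(2)]]
Q = [[random.uniform(-0.1, 0.1) for _ in range(2)] for _ in range(2)]
for _ in range(200):
    bpr_sgd_step(P, Q, u=0, x_pos=0, x_neg=1)
score = lambda x: sum(p * q for p, q in zip(P[0], Q[x]))
```

After training, the observed fact (concept 0) should be ranked above the unobserved one (concept 1) for the utterance, which is exactly the per-utterance ranking the BPR objective approximates.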
GEM-SciDuet-train-48#paper-1077#slide-1
|
Spoken dialogue system (SDS)
|
Spoken dialogue systems are the intelligent agents that are able to help users finish tasks more efficiently via speech interactions.
Spoken dialogue systems are being incorporated into various devices
(smart-phones, smart TVs, in-car navigating system, etc).
Apple's Siri, Microsoft's Cortana, Microsoft's XBOX Kinect, Amazon's Echo, Samsung's SMART TV, Google Now
https://www.apple.com/ios/siri/ http://www.windowsphone.com/en-us/how-to/wp8/cortana/meet-cortana http://www.xbox.com/en-US/ http://www.amazon.com/oc/echo/ http://www.samsung.com/us/experience/smart-tv/ https://www.google.com/landing/now/
|
Spoken dialogue systems are the intelligent agents that are able to help users finish tasks more efficiently via speech interactions.
Spoken dialogue systems are being incorporated into various devices
(smart-phones, smart TVs, in-car navigating system, etc).
Apple's Siri, Microsoft's Cortana, Microsoft's XBOX Kinect, Amazon's Echo, Samsung's SMART TV, Google Now
https://www.apple.com/ios/siri/ http://www.windowsphone.com/en-us/how-to/wp8/cortana/meet-cortana http://www.xbox.com/en-US/ http://www.amazon.com/oc/echo/ http://www.samsung.com/us/experience/smart-tv/ https://www.google.com/landing/now/
|
[] |
GEM-SciDuet-train-48#paper-1077#slide-2
|
1077
|
Unsupervised Learning and Modeling of Knowledge and Intent for Spoken Dialogue Systems
|
Spoken dialogue systems (SDS) are rapidly appearing in various smart devices (smartphone, smart-TV, in-car navigating system, etc). The key role in a successful SDS is a spoken language understanding (SLU) component, which parses user utterances into semantic concepts in order to understand users' intentions. However, such semantic concepts and their structure are manually created by experts, and the annotation process results in extremely high cost and poor scalability in system development. Therefore, the dissertation focuses on improving SDS generalization and scalability by automatically inferring domain knowledge and learning structures from unlabeled conversations through a matrix factorization (MF) technique. With the automatically acquired semantic concepts and structures, we further investigate whether such information can be utilized to effectively understand user utterances and then show the feasibility of reducing human effort during SDS development.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113
],
"paper_content_text": [
"Introduction Various smart devices (e.g.",
"smartphone, smart-TV, in-car navigating system) are incorporating spoken language interfaces, a.k.a.",
"spoken dialogue systems (SDS), in order to help users finish tasks more efficiently.",
"The key role in a successful SDS is a spoken language understanding (SLU) component; in order to capture the language variation from dialogue participants, the SLU component must create a mapping between the natural language inputs and semantic representations that correspond to users' intentions.",
"The semantic representation must include \"concepts\" and a \"structure\": concepts are the domain-specific topics, and the structure describes the relations between concepts and conveys intentions.",
"However, most prior work focused on learning the mapping between utterances and semantic representations, where such knowledge still remains predefined.",
"The need for annotations results in extremely high cost and poor scalability in system development.",
"Therefore, current technology usually limits conversational interactions to a few narrow predefined domains/topics.",
"With the increasing conversational interactions, this dissertation focuses on improving generalization and scalability of building SDSs with little human effort.",
"In order to achieve the goal, two questions need to be addressed: 1) Given unlabelled conversations, how can a system automatically induce and organize the domain-specific concepts?",
"2) With the automatically acquired knowledge, how can a system understand user utterances and intents?",
"To tackle the above problems, we propose to acquire the domain knowledge that captures human's semantics, intents, and behaviors.",
"Then based on the acquired knowledge, we build an SLU component to understand users and to offer better interactions in dialogues.",
"The dissertation shows the feasibility of building a dialogue learning system that is able to understand how particular domains work based on unlabeled conversations.",
"As a result, an initial SDS can be automatically built according to the learned knowledge, and its performance can be quickly improved by interacting with users for practical usage, presenting the potential of reducing human effort for SDS development.",
"Our MF method completes a partially-missing matrix for semantic decoding/behavior prediction.",
"Dark circles are observed facts, shaded circles are inferred facts.",
"The ontology induction maps observed feature patterns to semantic concepts.",
"The feature relation model constructs correlations between observed feature patterns.",
"The concept relation model learns the high-level semantic correlations for inferring hidden semantic slots or predicting subsequent behaviors.",
"Reasoning with matrix factorization incorporates these models jointly, and produces a coherent and domain-specific SLU model.",
"the intent detection problem in SLU, showing that results obtained from the unsupervised training process align well with the performance of traditional supervised learning.",
"Following their success of unsupervised SLU, recent studies have also obtained interesting results on the tasks of relation detection (Hakkani-Tür et al., 2013; Chen et al., 2014a) , entity extraction (Wang et al., 2014) , and extending domain coverage (El-Kahky et al., 2014; Chen and Rudnicky, 2014) .",
"However, most studies above do not explicitly learn latent factor representations from the data, while we hypothesize that better robustness can be achieved by explicitly modeling the measurement errors (usually produced by automatic speech recognizers (ASR)) using latent variable models and taking additional local and global semantic constraints into account.",
"Latent Variable Modeling in SLU Early studies on latent variable modeling in speech included the classic hidden Markov model for statistical speech recognition (Jelinek, 1997) .",
"Recently, were the first to study the intent detection problem using query logs and a discrete Bayesian latent variable model.",
"In the field of dialogue modeling, the partially observable Markov decision process (POMDP) (Young et al., 2013) model is a popular technique for dialogue management, reducing the cost of handcrafted dialogue managers while producing robustness against speech recognition errors.",
"More recently, used a semi-supervised LDA model to show improvement on the slot filling task.",
"Also, Zhai and Williams (2014) proposed an unsupervised model for connecting words with latent states in HMMs using topic models, obtaining interesting qualitative and quantitative results.",
"However, for unsupervised SLU, it is not obvious how to incorporate additional information in the HMMs.",
"With increasing work on learning feature matrices for language representations (Mikolov et al., 2013), matrix factorization (MF) has become very popular for both implicit and explicit feedback (Rendle et al., 2009; Chen et al., 2015a).",
"This thesis proposal is the first to propose a framework about unsupervised SLU modeling, which is able to simultaneously consider various local and global knowledge automatically learned from unlabelled data using a matrix factorization (MF) technique.",
"The Proposed Work The proposed framework is shown in Figure 1(a) , where there are two main parts, one is knowledge acquisition and another is SLU modeling by MF.",
"The first part is to acquire the domain knowledge that is useful for building the domain-specific dialogue systems, which addresses the question about how to induce and organize the semantic concepts (the first problem).",
"Here we propose ontology induction and structure learning procedures.",
"The ontology induction refers to the semantic concept induction (yellow block) and the structure learning refers to relation models (blue and pink blocks) in Figure 1 (a).",
"The details are described in Section 4.",
"The second part is to self-train an SLU component using the acquired knowledge for the domainspecific SDS, and this part answers to the question about how to utilize the obtained information in SDSs to understand user utterances and intents.",
"There are two aspects regarding to SLU modeling, semantic decoding and behavior prediction.",
"The semantic decoding is to parse the input utterances into semantic forms for better understanding, and the behavior prediction is to predict the subsequent user behaviors for providing better system interactions.",
"This dissertation plans to apply MF techniques to unsupervised SLU modeling, including both semantic decoding and behavior prediction.",
"In the proposed model, we first build a feature matrix to represent training utterances, where each row refers to an utterance and each column refers to an observed feature pattern or a learned semantic concept (either a slot or a behavior).",
"For example, a word pattern observed in the utterance implies a meaning facet such as food.",
"The MF approach is able to learn the latent feature vectors for utterances and semantic concepts, inferring implicit semantics to improve the decoding process-namely, by filling the matrix with probabilities (lower part of the matrix in Figure 1(b) ).",
"The feature model is built on the observed feature patterns and the learned concepts, where the concepts are obtained from the knowledge acquisition process (Chen et al., 2015b).",
"Section 5.1 explains the detail of the feature model.",
"In order to consider the additional structure information, we propose a relation propagation model based on the learned structure, which includes a feature relation model (blue block) and a concept relation model (pink block) described in Section 5.2.",
"Finally we train an SLU model by learning latent feature vectors for utterances and slots/behaviors through MF techniques.",
"Combining with a relation propagation model, the trained SLU model is able to estimate the probability that each concept occurs in the testing utterance, and how likely each concept is domain-specific simultaneously.",
"In other words, the SLU model is able to transform testing utterances into domainspecific semantic representations or predicted behaviors without human involvement.",
"Knowledge Acquisition Given unlabeled conversations and available knowledge resources, we plan to extract organized knowledge that can be used for domain-specific SDSs.",
"The ontology induction and structure learning are proposed to automate an ontology building process.",
"Chen et al. (2014b) proposed to automatically induce semantic slots for SDSs by frame-semantic parsing, where all ASR-decoded utterances are parsed using SEMAFOR, a state-of-the-art frame-semantic parser (Das et al., 2010; Das et al., 2013), and then all frames from the parsed results are extracted as slot candidates (Dinarelli et al., 2009).",
"For example, Figure 2 shows an example of an ASR-decoded text output parsed by SEMAFOR.",
"There are three frames (capability, expensiveness, and locale by use) in the utterance, which we consider as slot candidates.",
"Ontology Induction Since SEMAFOR was trained on FrameNet annotation, which has a more generic frame-semantic context, not all the frames from the parsing results can be used as the actual slots in the domain-specific dialogue systems.",
"For instance, in Figure 2 , \"expensiveness\" and \"locale by use\" frames are essentially the key slots for the purpose of understanding in the restaurant query domain, whereas the \"capability\" frame does not convey particularly valuable information for the domain-specific SDS.",
"In order to fix this issue, Chen et al.",
"(2014b) proved that integrating continuous-valued word embeddings with a probabilistic frame-semantic parser is able to identify key semantic slots in an unsupervised fashion, reducing the cost of designing task-oriented SDSs.",
"Structure Learning A key challenge of designing a coherent semantic ontology for SLU is to consider the structure and relations between semantic concepts.",
"In practice, however, it is difficult for domain experts and professional annotators to define a coherent slot set, while considering various lexical, syntactic, and semantic dependencies.",
"The previous work exploited the typed syntactic dependency theory for unsupervised induction and organization of semantic slots in SDSs (Chen et al., 2015b) .",
"More specifically, two knowledge graphs, a slot-based semantic knowledge graph and a word-based lexical knowledge graph, are automatically constructed.",
"To jointly consider the word-to-word, word-to-slot, and slot-to-slot relations, we use a random walk inference algorithm to combine these two knowledge graphs, guided by dependency grammars.",
"Figure 3 is a simplified example of the automatically built semantic knowledge graph corresponding to the restaurant domain.",
"The experiments showed that considering inter-slot relations is crucial for generating a more coherent and complete slot set, resulting in a better SLU model, while enhancing the interpretability of semantic slots.",
"SLU Modeling by Matrix Factorization For two aspects of SLU modeling: semantic decoding and behavior prediction, we plan to apply MF to both tasks by treating learned concepts as semantic slots and human behaviors respectively.",
"Considering the benefits brought by MF techniques, including 1) modeling the noisy data, 2) modeling hidden information, and 3) modeling the dependency between observations, the dissertation applies an MF approach to SLU modeling for SDSs.",
"In our model, we use U to denote the set of input utterances, F as the set of observed feature patterns, and S as the set of semantic concepts we would like to predict (slots or human behaviors).",
"The pair of an utterance u ∈ U and a feature/concept x ∈ {F + S}, denoted ⟨u, x⟩, is a fact.",
"The input to our model is a set of observed facts O, and the observed facts for a given utterance are denoted by {⟨u, x⟩ ∈ O}.",
"The goal of our model is to estimate, for a given utterance u and a given feature pattern/concept x, the probability p(M_{u,x} = 1), where M_{u,x} is a binary random variable that is true if and only if x is a feature pattern/domain-specific concept in the utterance u.",
"We introduce a series of exponential family models that estimate the probability using a natural parameter θ_{u,x} and the logistic sigmoid function: p(M_{u,x} = 1 | θ_{u,x}) = σ(θ_{u,x}) = 1 / (1 + exp(−θ_{u,x})). (1)",
"We construct a matrix M of size |U| × (|F| + |S|) as observed facts for MF by integrating a feature model and a relation propagation model below.",
"Feature Model First, we build a binary feature pattern matrix F_f based on the observations, where each row refers to an utterance and each column refers to a feature pattern (a word or a phrase).",
"In other words, F_f carries the basic word/phrase vector for each utterance, which is illustrated as the left part of the matrix in Figure 1(b).",
"Then we build a binary matrix F_s based on the induced semantic concepts from Section 4.1, which also denotes the slot/behavior features for all utterances (right part of the matrix in Figure 1(b)).",
"For building the feature model M_F, we concatenate the two matrices and obtain M_F = [F_f F_s], which refers to the upper part of the matrix in Figure 1(b) for training utterances.",
"Relation Propagation Model It is shown that the structure of semantic concepts helps decide domain-specific slots and further improves the SLU performance (Chen et al., 2015b) .",
"With the learned structure from Section 4.2, we can model the relations between semantic concepts, such as inter-slot and inter-behavior relations.",
"Also, the relations between feature patterns can be modeled in a similar way.",
"We construct two knowledge graphs to model the structure: • Feature knowledge graph G_f = ⟨V_f, E_ff⟩, where V_f = {f_i ∈ F} and E_ff = {e_ij | f_i, f_j ∈ V_f}.",
"• Semantic concept knowledge graph G_s = ⟨V_s, E_ss⟩, where V_s = {s_i ∈ S} and E_ss = {e_ij | s_i, s_j ∈ V_s}.",
"The structured graph can model the relation between a connected node pair (x_i, x_j) as r(x_i, x_j).",
"Here we compute two matrices R_s = [r(s_i, s_j)]_{|S| × |S|} and R_f = [r(f_i, f_j)]_{|F| × |F|} to represent concept relations and feature relations respectively.",
"With the built relation models, we combine them into a block-diagonal relation propagation matrix M_R = [R_f, 0; 0, R_s]. (2)",
"The goal of this matrix is to propagate scores between nodes according to different types of relations in the constructed knowledge graphs (Chen and Metze, 2012).",
"Integrated Model With a feature model M_F and a relation propagation model M_R, we integrate them into a single matrix.",
"M = M_F · (M_R + I) = [F_f R_f + F_f, F_s R_s + F_s], (3) where M is the final matrix and I is the identity matrix, included so that the original values are retained.",
"The matrix M is similar to M_F, but some weights are enhanced through the relation propagation model.",
"The feature relations are built by F_f R_f, which is the matrix with internal weight propagation on the feature knowledge graph (the blue arrow in Figure 1(b)).",
"Similarly, F_s R_s models the semantic concept correlations, and can be treated as the matrix with internal weight propagation on the semantic concept knowledge graph (the pink arrow in Figure 1(b)).",
"The propagation model can be treated as running a random walk algorithm on the graphs.",
"By integrating with the relation propagation model, the relations can be propagated via the knowledge graphs, and the hidden information may be modeled based on the assumption that mutual relations usually help inference (Chen et al., 2015b) .",
"Hence, the structure information can be automatically involved in the matrix.",
"In conclusion, for each utterance, the integrated model not only predicts the probabilities that semantic concepts occur but also considers whether they are domain-specific.",
"Model Learning The proposed model is parameterized through weights and latent component vectors, where the parameters are estimated by maximizing the log likelihood of observed data (Collins et al., 2001) .",
"θ* = arg max_θ Π_{u∈U} p(θ | M_u) = arg max_θ Π_{u∈U} p(M_u | θ) p(θ) = arg max_θ Σ_{u∈U} ln p(M_u | θ) − λ_θ, (4) where M_u is the vector corresponding to the utterance u from M_{u,x} in (1), because we assume that each utterance is independent of the others.",
"To avoid treating unobserved facts as designed negative facts, we consider our positive-only data as implicit feedback.",
"Bayesian Personalized Ranking (BPR) is an optimization criterion that learns from implicit feedback for MF, which uses a variant of the ranking: giving observed true facts higher scores than unobserved (true or false) facts (Rendle et al., 2009 ).",
"Riedel et al.",
"(2013) also showed that BPR learns the implicit relations and improves a relation extraction task.",
"To estimate the parameters in (4), we create a dataset of ranked pairs from M in (3): for each utterance u and each observed fact f+ = ⟨u, x+⟩ with M_{u,x+} ≥ δ, we choose each semantic concept x− such that f− = ⟨u, x−⟩ with M_{u,x−} < δ, which refers to a semantic concept we have not observed in utterance u.",
"That is, we construct the observed data O from M .",
"Then for each pair of facts f+ and f−, we want to model p(f+) > p(f−) and hence θ_{f+} > θ_{f−} according to (1).",
"BPR maximizes the summation over ranked pairs, where the objective is Σ_{u∈U} ln p(M_u | θ) = Σ_{f+ ∈ O} Σ_{f− ∉ O} ln σ(θ_{f+} − θ_{f−}). (5)",
"The BPR objective is an approximation to the per-utterance AUC (area under the ROC curve), which directly correlates with what we want to achieve: well-ranked semantic concepts per utterance, i.e., a better estimation of semantic slots or human behaviors.",
"To maximize the objective in (5), we employ a stochastic gradient descent (SGD) algorithm (Rendle et al., 2009) .",
"For each randomly sampled observed fact ⟨u, x+⟩, we sample an unobserved fact ⟨u, x−⟩, which results in |O| fact pairs ⟨f−, f+⟩.",
"For each pair, we perform an SGD update using the gradient of the corresponding objective function for matrix factorization (Gantner et al., 2011) .",
"Conclusion and Future Work This thesis proposal proposes an unsupervised SLU approach by automating the dialogue learning process on speech conversations.",
"The preliminary results show that for the automatic speech recognition (ASR) transcripts (word error rate is about 37%), the acquired knowledge can be successfully applied to SLU modeling through MF techniques, guiding the direction of the methodology.",
"The main planned tasks include: • Semantic concept identification • Semantic concept annotation • SLU modeling by matrix factorization In this thesis proposal, ongoing work and future plans have been presented towards an automatically built domain-specific SDS.",
"With increasing semantic resources, such as Google's Knowledge Graph and Microsoft Satori, the dissertation shows the feasibility that utilizing available knowledge improves the generalization and the scalability of dialogue system development for practical usage."
]
}
|
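The relation matrices R_f and R_s in the record above are built from the edges of the two automatically constructed knowledge graphs. A hedged sketch of turning a weighted edge list into such a matrix; the slot names and weights are hypothetical, loosely echoing the restaurant-domain example in the paper:

```python
def relation_matrix(nodes, edges):
    """Build R = [r(x_i, x_j)] from weighted edges of a knowledge
    graph. Node names and weights here are illustrative only."""
    idx = {n: i for i, n in enumerate(nodes)}
    R = [[0.0] * len(nodes) for _ in nodes]
    for a, b, w in edges:
        R[idx[a]][idx[b]] = w
        R[idx[b]][idx[a]] = w  # treat relations as symmetric
    return R

# Hypothetical restaurant-domain slots and inter-slot relation weights.
slots = ["expensiveness", "locale_by_use", "food"]
edges = [("expensiveness", "locale_by_use", 0.7),
         ("locale_by_use", "food", 0.4)]
R_s = relation_matrix(slots, edges)
```

The same helper would serve for R_f over feature patterns; stacking the two results block-diagonally gives the relation propagation matrix M_R of Eq. 2.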
{
"paper_header_number": [
"1",
"3",
"4",
"4.1",
"4.2",
"5",
"5.1",
"5.2",
"5.3",
"5.4",
"6"
],
"paper_header_content": [
"Introduction",
"The Proposed Work",
"Knowledge Acquisition",
"Ontology Induction",
"Structure Learning",
"SLU Modeling by Matrix Factorization",
"Feature Model",
"Relation Propagation Model",
"Integrated Model",
"Model Learning",
"Conclusion and Future Work"
]
}
|
GEM-SciDuet-train-48#paper-1077#slide-2
|
Large smart device population
|
The number of global smartphone users will surpass 2 billion in 2016.
As of 2012, there are 1.1 billion automobiles on the earth.
The more natural and convenient input of the devices evolves towards speech
|
The number of global smartphone users will surpass 2 billion in 2016.
As of 2012, there are 1.1 billion automobiles on the earth.
The more natural and convenient input of the devices evolves towards speech
|
[] |
GEM-SciDuet-train-48#paper-1077#slide-3
|
1077
|
Unsupervised Learning and Modeling of Knowledge and Intent for Spoken Dialogue Systems
|
Spoken dialogue systems (SDS) are rapidly appearing in various smart devices (smartphone, smart-TV, in-car navigating system, etc). The key role in a successful SDS is a spoken language understanding (SLU) component, which parses user utterances into semantic concepts in order to understand users' intentions. However, such semantic concepts and their structure are manually created by experts, and the annotation process results in extremely high cost and poor scalability in system development. Therefore, the dissertation focuses on improving SDS generalization and scalability by automatically inferring domain knowledge and learning structures from unlabeled conversations through a matrix factorization (MF) technique. With the automatically acquired semantic concepts and structures, we further investigate whether such information can be utilized to effectively understand user utterances and then show the feasibility of reducing human effort during SDS development.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113
],
"paper_content_text": [
"Introduction Various smart devices (e.g.",
"smartphone, smart-TV, in-car navigating system) are incorporating spoken language interfaces, a.k.a.",
"spoken dialogue systems (SDS), in order to help users finish tasks more efficiently.",
"The key role in a successful SDS is a spoken language understanding (SLU) component; in order to capture the language variation from dialogue participants, the SLU component must create a mapping between the natural language inputs and semantic representations that correspond to users' intentions.",
"The semantic representation must include \"concepts\" and a \"structure\": concepts are the domain-specific topics, and the structure describes the relations between concepts and conveys intentions.",
"However, most prior work focused on learning the mapping between utterances and semantic representations, where such knowledge still remains predefined.",
"The need for annotations results in extremely high cost and poor scalability in system development.",
"Therefore, current technology usually limits conversational interactions to a few narrow predefined domains/topics.",
"With the increasing conversational interactions, this dissertation focuses on improving generalization and scalability of building SDSs with little human effort.",
"In order to achieve the goal, two questions need to be addressed: 1) Given unlabelled conversations, how can a system automatically induce and organize the domain-specific concepts?",
"2) With the automatically acquired knowledge, how can a system understand user utterances and intents?",
"To tackle the above problems, we propose to acquire the domain knowledge that captures human's semantics, intents, and behaviors.",
"Then based on the acquired knowledge, we build an SLU component to understand users and to offer better interactions in dialogues.",
"The dissertation shows the feasibility of building a dialogue learning system that is able to understand how particular domains work based on unlabeled conversations.",
"As a result, an initial SDS can be automatically built according to the learned knowledge, and its performance can be quickly improved by interacting with users for practical usage, presenting the potential of reducing human effort for SDS development.",
"Our MF method completes a partially-missing matrix for semantic decoding/behavior prediction.",
"Dark circles are observed facts, shaded circles are inferred facts.",
"The ontology induction maps observed feature patterns to semantic concepts.",
"The feature relation model constructs correlations between observed feature patterns.",
"The concept relation model learns the high-level semantic correlations for inferring hidden semantic slots or predicting subsequent behaviors.",
"Reasoning with matrix factorization incorporates these models jointly, and produces a coherent and domain-specific SLU model.",
"the intent detection problem in SLU, showing that results obtained from the unsupervised training process align well with the performance of traditional supervised learning.",
"Following this success in unsupervised SLU, recent studies have also obtained interesting results on the tasks of relation detection (Hakkani-Tür et al., 2013; Chen et al., 2014a) , entity extraction (Wang et al., 2014) , and extending domain coverage (El-Kahky et al., 2014; Chen and Rudnicky, 2014) .",
"However, most studies above do not explicitly learn latent factor representations from the data, while we hypothesize that better robustness can be achieved by explicitly modeling the measurement errors (usually produced by automatic speech recognizers (ASR)) using latent variable models and taking additional local and global semantic constraints into account.",
"Latent Variable Modeling in SLU Early studies on latent variable modeling in speech included the classic hidden Markov model for statistical speech recognition (Jelinek, 1997) .",
"Recently, were the first to study the intent detection problem using query logs and a discrete Bayesian latent variable model.",
"In the field of dialogue modeling, the partially observable Markov decision process (POMDP) (Young et al., 2013) model is a popular technique for dialogue management, reducing the cost of handcrafted dialogue managers while producing robustness against speech recognition errors.",
"More recently, used a semi-supervised LDA model to show improvement on the slot filling task.",
"Also, Zhai and Williams (2014) proposed an unsupervised model for connecting words with latent states in HMMs using topic models, obtaining interesting qualitative and quantitative results.",
"However, for unsupervised SLU, it is not obvious how to incorporate additional information in the HMMs.",
"With increasing work on learning feature matrices for language representations (Mikolov et al., 2013) , matrix factorization (MF) has become very popular for both implicit and explicit feedback (Rendle et al., 2009; Chen et al., 2015a) .",
"This thesis proposal is the first to propose a framework for unsupervised SLU modeling, which is able to simultaneously consider various local and global knowledge automatically learned from unlabelled data using a matrix factorization (MF) technique.",
"The Proposed Work The proposed framework is shown in Figure 1(a) , which contains two main parts: knowledge acquisition and SLU modeling by MF.",
"The first part is to acquire the domain knowledge that is useful for building the domain-specific dialogue systems, which addresses the question about how to induce and organize the semantic concepts (the first problem).",
"Here we propose ontology induction and structure learning procedures.",
"The ontology induction refers to the semantic concept induction (yellow block) and the structure learning refers to relation models (blue and pink blocks) in Figure 1 (a).",
"The details are described in Section 4.",
"The second part is to self-train an SLU component using the acquired knowledge for the domain-specific SDS; this part answers the question of how to utilize the obtained information in SDSs to understand user utterances and intents.",
"There are two aspects of SLU modeling: semantic decoding and behavior prediction.",
"The semantic decoding is to parse the input utterances into semantic forms for better understanding, and the behavior prediction is to predict the subsequent user behaviors for providing better system interactions.",
"This dissertation plans to apply MF techniques to unsupervised SLU modeling, including both semantic decoding and behavior prediction.",
"In the proposed model, we first build a feature matrix to represent training utterances, where each row refers to an utterance and each column refers to an observed feature pattern or a learned semantic concept (either a slot or a behavior).",
"terance implies the meaning facet food.",
"The MF approach is able to learn the latent feature vectors for utterances and semantic concepts, inferring implicit semantics to improve the decoding process-namely, by filling the matrix with probabilities (lower part of the matrix in Figure 1(b) ).",
"The feature model is built on the observed feature patterns and the learned concepts, where the concepts are obtained from the knowledge acquisition process (Chen et al., 2015b) .",
"Section 5.1 explains the detail of the feature model.",
"In order to consider the additional structure information, we propose a relation propagation model based on the learned structure, which includes a feature relation model (blue block) and a concept relation model (pink block) described in Section 5.2.",
"Finally we train an SLU model by learning latent feature vectors for utterances and slots/behaviors through MF techniques.",
"Combining with a relation propagation model, the trained SLU model is able to estimate the probability that each concept occurs in the testing utterance, and how likely each concept is domain-specific simultaneously.",
"In other words, the SLU model is able to transform testing utterances into domain-specific semantic representations or predicted behaviors without human involvement.",
"Knowledge Acquisition Given unlabeled conversations and available knowledge resources, we plan to extract organized knowledge that can be used for domain-specific SDSs.",
"The ontology induction and structure learning are proposed to automate an ontology building process.",
"Chen et al. (2014b) proposed to automatically induce semantic slots for SDSs by frame-semantic parsing, where all ASR-decoded utterances are parsed using SEMAFOR, a state-of-the-art frame-semantic parser (Das et al., 2010; Das et al., 2013) , and then all frames from parsed results are extracted as slot candidates (Dinarelli et al., 2009) .",
"For example, Figure 2 shows an example of an ASR-decoded text output parsed by SEMAFOR.",
"There are three frames (capability, expensiveness, and locale by use) in the utterance, which we consider as slot candidates.",
"Ontology Induction Since SEMAFOR was trained on FrameNet annotation, which has a more generic frame-semantic context, not all the frames from the parsing results can be used as the actual slots in the domain-specific dialogue systems.",
"For instance, in Figure 2 , \"expensiveness\" and \"locale by use\" frames are essentially the key slots for the purpose of understanding in the restaurant query domain, whereas the \"capability\" frame does not convey particularly valuable information for the domain-specific SDS.",
"In order to fix this issue, Chen et al.",
"(2014b) proved that integrating continuous-valued word embeddings with a probabilistic frame-semantic parser is able to identify key semantic slots in an unsupervised fashion, reducing the cost of designing task-oriented SDSs.",
"Structure Learning A key challenge of designing a coherent semantic ontology for SLU is to consider the structure and relations between semantic concepts.",
"In practice, however, it is difficult for domain experts and professional annotators to define a coherent slot set, while considering various lexical, syntactic, and semantic dependencies.",
"The previous work exploited the typed syntactic dependency theory for unsupervised induction and organization of semantic slots in SDSs (Chen et al., 2015b) .",
"More specifically, two knowledge graphs, a slot-based semantic knowledge graph and a word-based lexical knowledge graph, are automatically constructed.",
"To jointly consider the word-to-word, word-to-slot, and slot-to-slot relations, we use a random walk inference algorithm to combine these two knowledge graphs, guided by dependency grammars.",
"Figure 3 is a simplified example of the automatically built semantic knowledge graph corresponding to the restaurant domain.",
"The experiments showed that considering inter-slot relations is crucial for generating a more coherent and complete slot set, resulting in a better SLU model, while enhancing the interpretability of semantic slots.",
"SLU Modeling by Matrix Factorization For two aspects of SLU modeling: semantic decoding and behavior prediction, we plan to apply MF to both tasks by treating learned concepts as semantic slots and human behaviors respectively.",
"Considering the benefits brought by MF techniques, including 1) modeling the noisy data, 2) modeling hidden information, and 3) modeling the dependency between observations, the dissertation applies an MF approach to SLU modeling for SDSs.",
"In our model, we use U to denote the set of input utterances, F as the set of observed feature patterns, and S as the set of semantic concepts we would like to predict (slots or human behaviors).",
"The pair of an utterance u ∈ U and a feature/concept x ∈ {F + S}, ⟨u, x⟩, is a fact.",
"The input to our model is a set of observed facts O, and the observed facts for a given utterance are denoted by {⟨u, x⟩ ∈ O}.",
"The goal of our model is to estimate, for a given utterance u and a given feature pattern/concept x, the probability, p(M u,x = 1), where M u,x is a binary random variable that is true if and only if x is the feature pattern/domain-specific concept in the utterance u.",
"We introduce a series of exponential family models that estimate the probability using a natural parameter θ u,x and the logistic sigmoid function: p(M u,x = 1 | θ u,x ) = σ(θ u,x ) = 1 / (1 + exp(−θ u,x )). (1)",
"We construct a matrix M |U |×(|F |+|S|) as observed facts for MF by integrating a feature model and a relation propagation model below.",
"Feature Model First, we build a binary feature pattern matrix F f based on the observations, where each row refers to an utterance and each column refers to a feature pattern (a word or a phrase).",
"In other words, F f carries the basic word/phrase vector for each utterance, which is illustrated as the left part of the matrix in Figure 1(b) .",
"Then we build a binary matrix F s based on the induced semantic concepts from Section 4.1, which also denotes the slot/behavior features for all utterances (right part of the matrix in Figure 1(b) ).",
"For building the feature model M F , we concatenate two matrices and obtain M F = [ F f F s ], which refers to the upper part of the matrix in Figure 1(b) for training utterances.",
"Relation Propagation Model It is shown that the structure of semantic concepts helps decide domain-specific slots and further improves the SLU performance (Chen et al., 2015b) .",
"With the learned structure from Section 4.2, we can model the relations between semantic concepts, such as inter-slot and inter-behavior relations.",
"Also, the relations between feature patterns can be modeled in the similar way.",
"We construct two knowledge graphs to model the structure: • Feature knowledge graph is built as G f = ⟨V f , E ff ⟩, where V f = {f i ∈ F } and E ff = {e ij | f i , f j ∈ V f }.",
"• Semantic concept knowledge graph is built as G s = ⟨V s , E ss ⟩, where V s = {s i ∈ S} and E ss = {e ij | s i , s j ∈ V s }.",
"The structured graph can model the relation between the connected node pair (x i , x j ) as r(x i , x j ).",
"Here we compute two matrices R s = [r(s i , s j )] |S|×|S| and R f = [r(f i , f j )] |F |×|F | to represent concept relations and feature relations respectively.",
"With the built relation models, we combine them as a relation propagation matrix M R : M R = [ R f 0 ; 0 R s ]. (2)",
"The goal of this matrix is to propagate scores between nodes according to different types of relations in the constructed knowledge graphs (Chen and Metze, 2012) .",
"Integrated Model With a feature model M F and a relation propagation model M R , we integrate them into a single matrix.",
"M = M F · (M R + I) = [ F f R f + F f , F s R s + F s ], (3) where M is the final matrix and I is the identity matrix, included in order to retain the original values.",
"The matrix M is similar to M F , but some weights are enhanced through the relation propagation model.",
"The feature relations are built by F f R f , which is the matrix with internal weight propagation on the feature knowledge graph (the blue arrow in Figure 1(b) ).",
"Similarly, F s R s models the semantic concept correlations, and can be treated as the matrix with internal weight propagation on the semantic concept knowledge graph (the pink arrow in Figure 1(b) ).",
"The propagation model can be treated as running a random walk algorithm on the graphs.",
"By integrating with the relation propagation model, the relations can be propagated via the knowledge graphs, and the hidden information may be modeled based on the assumption that mutual relations usually help inference (Chen et al., 2015b) .",
"Hence, the structure information can be automatically involved in the matrix.",
"In conclusion, for each utterance, the integrated model not only predicts the probabilities that semantic concepts occur but also considers whether they are domain-specific.",
"Model Learning The proposed model is parameterized through weights and latent component vectors, where the parameters are estimated by maximizing the log likelihood of observed data (Collins et al., 2001) .",
"θ* = arg max θ ∏ u∈U p(θ | M u ) = arg max θ ∏ u∈U p(M u | θ) p(θ) = arg max θ ∑ u∈U ln p(M u | θ) − λ θ , (4) where M u is the vector corresponding to the utterance u from M u,x in (1), because we assume that each utterance is independent of others.",
"To avoid treating unobserved facts as designed negative facts, we consider our positive-only data as implicit feedback.",
"Bayesian Personalized Ranking (BPR) is an optimization criterion that learns from implicit feedback for MF, which uses a variant of the ranking: giving observed true facts higher scores than unobserved (true or false) facts (Rendle et al., 2009 ).",
"Riedel et al.",
"(2013) also showed that BPR learns the implicit relations and improves a relation extraction task.",
"To estimate the parameters in (4), we create a dataset of ranked pairs from M in (3): for each utterance u and each observed fact f + = ⟨u, x +⟩, where M u,x ≥ δ, we choose each semantic concept x − such that f − = ⟨u, x −⟩, where M u,x < δ, which refers to the semantic concept we have not observed in utterance u.",
"That is, we construct the observed data O from M .",
"Then for each pair of facts f + and f − , we want to model p(f + ) > p(f − ) and hence θ f + > θ f − according to (1).",
"BPR maximizes the summation of each ranked pair, where the objective is ∑ u∈U ln p(M u | θ) = ∑ f + ∈O ∑ f − ∉O ln σ(θ f + − θ f − ). (5)",
"The BPR objective is an approximation to the per utterance AUC (area under the ROC curve), which directly correlates to what we want to achieve: well-ranked semantic concepts per utterance, which denotes the better estimation of semantic slots or human behaviors.",
"To maximize the objective in (5), we employ a stochastic gradient descent (SGD) algorithm (Rendle et al., 2009) .",
"For each randomly sampled observed fact ⟨u, x +⟩, we sample an unobserved fact ⟨u, x −⟩, which results in |O| fact pairs f − , f + .",
"For each pair, we perform an SGD update using the gradient of the corresponding objective function for matrix factorization (Gantner et al., 2011) .",
"Conclusion and Future Work This thesis proposal proposes an unsupervised SLU approach by automating the dialogue learning process on speech conversations.",
"The preliminary results show that for the automatic speech recognition (ASR) transcripts (word error rate is about 37%), the acquired knowledge can be successfully applied to SLU modeling through MF techniques, guiding the direction of the methodology.",
"The main planned tasks include: • Semantic concept identification • Semantic concept annotation • SLU modeling by matrix factorization In this thesis proposal, ongoing work and future plans have been presented towards an automatically built domain-specific SDS.",
"With increasing semantic resources, such as Google's Knowledge Graph and Microsoft Satori, the dissertation shows the feasibility that utilizing available knowledge improves the generalization and the scalability of dialogue system development for practical usage."
]
}
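The SLU model described in the paper_content above (feature model M_F = [F_f F_s], relation propagation M = M_F(M_R + I), then BPR-style SGD over latent utterance/concept vectors as in eq. (5)) can be sketched as follows. This is a minimal illustrative sketch assuming NumPy; all dimensions, thresholds (δ), learning rates, and variable names are assumptions for demonstration, not the authors' released code.

```python
import numpy as np

rng = np.random.default_rng(0)
n_utt, n_feat, n_con, k = 6, 5, 3, 4          # utterances, features, concepts, latent dim

# Feature model M_F = [F_f F_s]: binary word/phrase and concept indicators per utterance.
F_f = (rng.random((n_utt, n_feat)) > 0.5).astype(float)
F_s = (rng.random((n_utt, n_con)) > 0.7).astype(float)
M_F = np.hstack([F_f, F_s])

# Relation propagation model M_R = blockdiag(R_f, R_s), eq. (2).
R_f = rng.random((n_feat, n_feat)) * 0.1
R_s = rng.random((n_con, n_con)) * 0.1
M_R = np.block([[R_f, np.zeros((n_feat, n_con))],
                [np.zeros((n_con, n_feat)), R_s]])

# Integrated model M = M_F (M_R + I), eq. (3): relation-enhanced observed facts.
M = M_F @ (M_R + np.eye(n_feat + n_con))

# Latent vectors; p(M_ux = 1) = sigmoid(theta_ux) with theta_ux = U[u] . V[x], eq. (1).
U = rng.normal(scale=0.1, size=(n_utt, k))
V = rng.normal(scale=0.1, size=(n_feat + n_con, k))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

delta, lr, lam = 0.5, 0.05, 0.01              # threshold, step size, L2 weight (assumed)
for _ in range(2000):                         # BPR-style SGD on ranked pairs, eq. (5)
    u = rng.integers(n_utt)
    pos = np.flatnonzero(M[u] >= delta)       # observed facts f+
    neg = np.flatnonzero(M[u] < delta)        # unobserved facts f-
    if pos.size == 0 or neg.size == 0:
        continue
    i, j = rng.choice(pos), rng.choice(neg)
    x_uij = U[u] @ (V[i] - V[j])              # theta_f+ - theta_f-
    g = sigmoid(-x_uij)                       # gradient factor of ln sigmoid(x_uij)
    U[u] += lr * (g * (V[i] - V[j]) - lam * U[u])
    V[i] += lr * (g * U[u] - lam * V[i])
    V[j] += lr * (-g * U[u] - lam * V[j])

scores = sigmoid(U @ V.T)                     # completed utterance-by-concept matrix
print(scores.shape)
```

After training, each row of `scores` ranks feature patterns and semantic concepts for one utterance, mirroring the matrix completion in Figure 1(b).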
|
{
"paper_header_number": [
"1",
"3",
"4",
"4.1",
"4.2",
"5",
"5.1",
"5.2",
"5.3",
"5.4",
"6"
],
"paper_header_content": [
"Introduction",
"The Proposed Work",
"Knowledge Acquisition",
"Ontology Induction",
"Structure Learning",
"SLU Modeling by Matrix Factorization",
"Feature Model",
"Relation Propagation Model",
"Integrated Model",
"Model Learning",
"Conclusion and Future Work"
]
}
|
GEM-SciDuet-train-48#paper-1077#slide-3
|
Challenges for sds
|
An SDS in a new domain requires
A hand-crafted domain ontology
Utterances labeled with semantic representations
An SLU component for mapping utterances into semantic representations
With increasing spoken interactions, building domain ontologies and annotating utterances cost a lot so that the data does not scale up.
The goal is to enable an SDS to automatically learn this knowledge so that open domain requests can be handled.
|
An SDS in a new domain requires
A hand-crafted domain ontology
Utterances labeled with semantic representations
An SLU component for mapping utterances into semantic representations
With increasing spoken interactions, building domain ontologies and annotating utterances cost a lot so that the data does not scale up.
The goal is to enable an SDS to automatically learn this knowledge so that open domain requests can be handled.
|
[] |
GEM-SciDuet-train-48#paper-1077#slide-4
|
1077
|
Unsupervised Learning and Modeling of Knowledge and Intent for Spoken Dialogue Systems
|
Spoken dialogue systems (SDS) are rapidly appearing in various smart devices (smartphone, smart-TV, in-car navigating system, etc). The key role in a successful SDS is a spoken language understanding (SLU) component, which parses user utterances into semantic concepts in order to understand users' intentions. However, such semantic concepts and their structure are manually created by experts, and the annotation process results in extremely high cost and poor scalability in system development. Therefore, the dissertation focuses on improving SDS generalization and scalability by automatically inferring domain knowledge and learning structures from unlabeled conversations through a matrix factorization (MF) technique. With the automatically acquired semantic concepts and structures, we further investigate whether such information can be utilized to effectively understand user utterances and then show the feasibility of reducing human effort during SDS development.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113
],
"paper_content_text": [
"Introduction Various smart devices (e.g.",
"smartphone, smart-TV, in-car navigating system) are incorporating spoken language interfaces, a.k.a.",
"spoken dialogue systems (SDS), in order to help users finish tasks more efficiently.",
"The key role in a successful SDS is a spoken language understanding (SLU) component; in order to capture the language variation from dialogue participants, the SLU component must create a mapping between the natural language inputs and semantic representations that correspond to users' intentions.",
"The semantic representation must include \"concepts\" and a \"structure\": concepts are the domain-specific topics, and the structure describes the relations between concepts and conveys intentions.",
"However, most prior work focused on learning the mapping between utterances and semantic representations, where such knowledge still remains predefined.",
"The need for annotations results in extremely high cost and poor scalability in system development.",
"Therefore, current technology usually limits conversational interactions to a few narrow predefined domains/topics.",
"With the increasing conversational interactions, this dissertation focuses on improving generalization and scalability of building SDSs with little human effort.",
"In order to achieve the goal, two questions need to be addressed: 1) Given unlabelled conversations, how can a system automatically induce and organize the domain-specific concepts?",
"2) With the automatically acquired knowledge, how can a system understand user utterances and intents?",
"To tackle the above problems, we propose to acquire the domain knowledge that captures human's semantics, intents, and behaviors.",
"Then based on the acquired knowledge, we build an SLU component to understand users and to offer better interactions in dialogues.",
"The dissertation shows the feasibility of building a dialogue learning system that is able to understand how particular domains work based on unlabeled conversations.",
"As a result, an initial SDS can be automatically built according to the learned knowledge, and its performance can be quickly improved by interacting with users for practical usage, presenting the potential of reducing human effort for SDS development.",
"Our MF method completes a partially-missing matrix for semantic decoding/behavior prediction.",
"Dark circles are observed facts, shaded circles are inferred facts.",
"The ontology induction maps observed feature patterns to semantic concepts.",
"The feature relation model constructs correlations between observed feature patterns.",
"The concept relation model learns the high-level semantic correlations for inferring hidden semantic slots or predicting subsequent behaviors.",
"Reasoning with matrix factorization incorporates these models jointly, and produces a coherent and domain-specific SLU model.",
"the intent detection problem in SLU, showing that results obtained from the unsupervised training process align well with the performance of traditional supervised learning.",
"Following this success in unsupervised SLU, recent studies have also obtained interesting results on the tasks of relation detection (Hakkani-Tür et al., 2013; Chen et al., 2014a) , entity extraction (Wang et al., 2014) , and extending domain coverage (El-Kahky et al., 2014; Chen and Rudnicky, 2014) .",
"However, most studies above do not explicitly learn latent factor representations from the data, while we hypothesize that better robustness can be achieved by explicitly modeling the measurement errors (usually produced by automatic speech recognizers (ASR)) using latent variable models and taking additional local and global semantic constraints into account.",
"Latent Variable Modeling in SLU Early studies on latent variable modeling in speech included the classic hidden Markov model for statistical speech recognition (Jelinek, 1997) .",
"Recently, were the first to study the intent detection problem using query logs and a discrete Bayesian latent variable model.",
"In the field of dialogue modeling, the partially observable Markov decision process (POMDP) (Young et al., 2013) model is a popular technique for dialogue management, reducing the cost of handcrafted dialogue managers while producing robustness against speech recognition errors.",
"More recently, used a semi-supervised LDA model to show improvement on the slot filling task.",
"Also, Zhai and Williams (2014) proposed an unsupervised model for connecting words with latent states in HMMs using topic models, obtaining interesting qualitative and quantitative results.",
"However, for unsupervised SLU, it is not obvious how to incorporate additional information in the HMMs.",
"With increasing work on learning feature matrices for language representations (Mikolov et al., 2013) , matrix factorization (MF) has become very popular for both implicit and explicit feedback (Rendle et al., 2009; Chen et al., 2015a) .",
"This thesis proposal is the first to propose a framework for unsupervised SLU modeling, which is able to simultaneously consider various local and global knowledge automatically learned from unlabelled data using a matrix factorization (MF) technique.",
"The Proposed Work The proposed framework is shown in Figure 1(a) , which contains two main parts: knowledge acquisition and SLU modeling by MF.",
"The first part is to acquire the domain knowledge that is useful for building the domain-specific dialogue systems, which addresses the question about how to induce and organize the semantic concepts (the first problem).",
"Here we propose ontology induction and structure learning procedures.",
"The ontology induction refers to the semantic concept induction (yellow block) and the structure learning refers to relation models (blue and pink blocks) in Figure 1 (a).",
"The details are described in Section 4.",
"The second part is to self-train an SLU component using the acquired knowledge for the domain-specific SDS; this part answers the question of how to utilize the obtained information in SDSs to understand user utterances and intents.",
"There are two aspects of SLU modeling: semantic decoding and behavior prediction.",
"The semantic decoding is to parse the input utterances into semantic forms for better understanding, and the behavior prediction is to predict the subsequent user behaviors for providing better system interactions.",
"This dissertation plans to apply MF techniques to unsupervised SLU modeling, including both semantic decoding and behavior prediction.",
"In the proposed model, we first build a feature matrix to represent training utterances, where each row refers to an utterance and each column refers to an observed feature pattern or a learned semantic concept (either a slot or a behavior).",
"terance implies the meaning facet food.",
"The MF approach is able to learn the latent feature vectors for utterances and semantic concepts, inferring implicit semantics to improve the decoding process-namely, by filling the matrix with probabilities (lower part of the matrix in Figure 1(b) ).",
"The feature model is built on the observed feature patterns and the learned concepts, where the concepts are obtained from the knowledge acquisition process (Chen et al., 2015b) .",
"Section 5.1 explains the detail of the feature model.",
"In order to consider the additional structure information, we propose a relation propagation model based on the learned structure, which includes a feature relation model (blue block) and a concept relation model (pink block) described in Section 5.2.",
"Finally we train an SLU model by learning latent feature vectors for utterances and slots/behaviors through MF techniques.",
"Combining with a relation propagation model, the trained SLU model is able to estimate the probability that each concept occurs in the testing utterance, and how likely each concept is domain-specific simultaneously.",
"In other words, the SLU model is able to transform testing utterances into domain-specific semantic representations or predicted behaviors without human involvement.",
"Knowledge Acquisition Given unlabeled conversations and available knowledge resources, we plan to extract organized knowledge that can be used for domain-specific SDSs.",
"The ontology induction and structure learning are proposed to automate an ontology building process.",
"Chen et al. (2014b) proposed to automatically induce semantic slots for SDSs by frame-semantic parsing, where all ASR-decoded utterances are parsed using SEMAFOR, a state-of-the-art frame-semantic parser (Das et al., 2010; Das et al., 2013) , and then all frames from parsed results are extracted as slot candidates (Dinarelli et al., 2009) .",
"For example, Figure 2 shows an example of an ASR-decoded text output parsed by SEMAFOR.",
"There are three frames (capability, expensiveness, and locale by use) in the utterance, which we consider as slot candidates.",
"Ontology Induction Since SEMAFOR was trained on FrameNet annotation, which has a more generic frame-semantic context, not all the frames from the parsing results can be used as the actual slots in the domain-specific dialogue systems.",
"For instance, in Figure 2 , \"expensiveness\" and \"locale by use\" frames are essentially the key slots for the purpose of understanding in the restaurant query domain, whereas the \"capability\" frame does not convey particularly valuable information for the domain-specific SDS.",
"In order to fix this issue, Chen et al.",
"(2014b) proved that integrating continuous-valued word embeddings with a probabilistic frame-semantic parser is able to identify key semantic slots in an unsupervised fashion, reducing the cost of designing task-oriented SDSs.",
"Structure Learning A key challenge of designing a coherent semantic ontology for SLU is to consider the structure and relations between semantic concepts.",
"In practice, however, it is difficult for domain experts and professional annotators to define a coherent slot set, while considering various lexical, syntactic, and semantic dependencies.",
"The previous work exploited the typed syntactic dependency theory for unsupervised induction and organization of semantic slots in SDSs (Chen et al., 2015b) .",
"More specifically, two knowledge graphs, a slot-based semantic knowledge graph and a word-based lexical knowledge graph, are automatically constructed.",
"To jointly consider the word-to-word, word-to-slot, and slot-to-slot relations, we use a random walk inference algorithm to combine these two knowledge graphs, guided by dependency grammars.",
"Figure 3 is a simplified example of the automatically built semantic knowledge graph corresponding to the restaurant domain.",
"The experiments showed that considering inter-slot relations is crucial for generating a more coherent and compete slot set, resulting in a better SLU model, while enhancing the interpretability of semantic slots.",
"SLU Modeling by Matrix Factorization For two aspects of SLU modeling: semantic decoding and behavior prediction, we plan to apply MF to both tasks by treating learned concepts as semantic slots and human behaviors respectively.",
"Considering the benefits brought by MF techniques, including 1) modeling the noisy data, 2) modeling hidden information, and 3) modeling the dependency between observations, the dissertation applies an MF approach to SLU modeling for SDSs.",
"In our model, we use U to denote the set of input utterances, F as the set of observed feature patterns, and S as the set of semantic concepts we would like to predict (slots or human behaviors).",
"The pair of an utterance u ∈ U and a feature/concept x ∈ {F +S}, u, x , is a fact.",
"The input to our model is a set of observed facts O, and the observed facts for a given utterance is denoted by { u, x ∈ O}.",
"The goal of our model is to estimate, for a given utterance u and a given feature pattern/concept x, the probability, p(M u,x = 1), where M u,x is a binary random variable that is true if and only if x is the feature pattern/domainspecific concept in the utterance u.",
"We introduce a series of exponential family models that estimate the probability using a natural parameter θ u,x and the logistic sigmoid function: p(M u,x = 1 | θ u,x ) = σ(θ u,x ) (1) = 1 1 + exp (−θ u,x ) .",
"We construct a matrix M |U |×(|F |+|S|) as observed facts for MF by integrating a feature model and a relation propagation model below.",
"Feature Model First, we build a binary feature pattern matrix F f based on the observations, where each row refers to an utterance and each column refers to a feature pattern (a word or a phrase).",
"In other words, F f carries the basic word/phrase vector for each utterance, which is illustrated as the left part of the matrix in Figure 1(b) .",
"Then we build a binary matrix F s based on the induced semantic concepts from Section 4.1, which also denotes the slot/behavior features for all utterances (right part of the matrix in Figure 1(b) ).",
"For building the feature model M F , we concatenate two matrices and obtain M F = [ F f F s ], which refers to the upper part of the matrix in Figure 1(b) for training utterances.",
"Relation Propagation Model It is shown that the structure of semantic concepts helps decide domain-specific slots and further improves the SLU performance (Chen et al., 2015b) .",
"With the learned structure from Section 4.2, we can model the relations between semantic concepts, such as inter-slot and inter-behavior relations.",
"Also, the relations between feature patterns can be modeled in the similar way.",
"We construct two knowledge graphs to model the structure: • Feature knowledge graph is built as G f = V f , E f f , where V f = {f i ∈ F } and E f f = {e ij | f i , f j ∈ V f }.",
"• Semantic concept knowledge graph is built as G s = V s , E ss , where V s = {s i ∈ S} and E ss = {e ij | s i , s j ∈ V s }.",
"The structured graph can model the relation between the connected node pair (x i , x j ) as r(x i , x j ).",
"Here we compute two matrices R s = [r(s i , s j )] |S|×|S| and R f = [r(f i , f j )] |F |×|F | to represent concept relations and feature relations respectively.",
"With the built relation models, we combine them as a relation propagation matrix M R 2 : M R = R f 0 0 R s .",
"(2) The goal of this matrix is to propagate scores between nodes according to different types of relations in the constructed knowledge graphs (Chen and Metze, 2012) .",
"Integrated Model With a feature model M F and a relation propagation model M R , we integrate them into a single matrix.",
"M = M F · (M R + I) (3) = F f R f + F f 0 0 F s R s + F s , where M is final matrix and I is the identity matrix in order to remain the original values.",
"The matrix M is similar to M F , but some weights are enhanced through the relation propagation model.",
"The feature relations are built by F f R f , which is the matrix with internal weight propagation on the feature knowledge graph (the blue arrow in Figure 1(b) ).",
"Similarly, F s R s models the semantic concept correlations, and can be treated as the matrix with internal weight propagation on the semantic concept knowledge graph (the pink arrow in Figure 1(b) ).",
"The propagation model can be treated as running a random walk algorithm on the graphs.",
"By integrating with the relation propagation model, the relations can be propagated via the knowledge graphs, and the hidden information may be modeled based on the assumption that mutual relations usually help inference (Chen et al., 2015b) .",
"Hence, the structure information can be automatically involved in the matrix.",
"In conclusion, for each utterance, the integrated model not only predicts the probabilities that semantic concepts occur but also considers whether they are domain-specific.",
"Model Learning The proposed model is parameterized through weights and latent component vectors, where the parameters are estimated by maximizing the log likelihood of observed data (Collins et al., 2001) .",
"θ * = arg max θ u∈U p(θ | M u ) (4) = arg max θ u∈U p(M u | θ)p(θ) = arg max θ u∈U ln p(M u | θ) − λ θ , where M u is the vector corresponding to the utterance u from M u,x in (1), because we assume that each utterance is independent of others.",
"To avoid treating unobserved facts as designed negative facts, we consider our positive-only data as implicit feedback.",
"Bayesian Personalized Ranking (BPR) is an optimization criterion that learns from implicit feedback for MF, which uses a variant of the ranking: giving observed true facts higher scores than unobserved (true or false) facts (Rendle et al., 2009 ).",
"Riedel et al.",
"(2013) also showed that BPR learns the implicit relations and improves a relation extraction task.",
"To estimate the parameters in (4), we create a dataset of ranked pairs from M in (3): for each utterance u and each observed fact f + = u, x + , where M u,x ≥ δ, we choose each semantic concept x − such that f − = u, x − , where M u,x < δ, which refers to the semantic concept we have not observed in utterance u.",
"That is, we construct the observed data O from M .",
"Then for each pair of facts f + and f − , we want to model p(f + ) > p(f − ) and hence θ f + > θ f − according to (1).",
"BPR maximizes the summation of each ranked pair, where the objective is u∈U ln p(M u | θ) = f + ∈O f − ∈O ln σ(θ f + − θ f − ).",
"(5) The BPR objective is an approximation to the per utterance AUC (area under the ROC curve), which directly correlates to what we want to achieve -well-ranked semantic concepts per utterance, which denotes the better estimation of semantic slots or human behaviors.",
"To maximize the objective in (5), we employ a stochastic gradient descent (SGD) algorithm (Rendle et al., 2009) .",
"For each randomly sampled observed fact u, x + , we sample an unobserved fact u, x − , which results in |O| fact pairs f − , f + .",
"For each pair, we perform an SGD update using the gradient of the corresponding objective function for matrix factorization (Gantner et al., 2011) .",
"Conclusion and Future Work This thesis proposal proposes an unsupervised SLU approach by automating the dialogue learning process on speech conversations.",
"The preliminary results show that for the automatic speech recognition (ASR) transcripts (word error rate is about 37%), the acquired knowledge can be successfully applied to SLU modeling through MF techniques, guiding the direction of the methodology.",
"The main planed tasks include: • Semantic concept identification • Semantic concept annotation • SLU modeling by matrix factorization In this thesis proposal, ongoing work and future plans have been presented towards an automatically built domain-specific SDS.",
"With increasing semantic resources, such as Google's Knowledge Graph and Microsoft Satori, the dissertation shows the feasibility that utilizing available knowledge improves the generalization and the scalability of dialogue system development for practical usage."
]
}
|
{
"paper_header_number": [
"1",
"3",
"4",
"4.1",
"4.2",
"5",
"5.1",
"5.2",
"5.3",
"5.4",
"6"
],
"paper_header_content": [
"Introduction",
"The Proposed Work",
"Knowledge Acquisition",
"Ontology Induction",
"Structure Learning",
"SLU Modeling by Matrix Factorization",
"Feature Model",
"Relation Propagation Model",
"Integrated Model",
"Model Learning",
"Conclusion and Future Work"
]
}
|
GEM-SciDuet-train-48#paper-1077#slide-4
|
Interaction example
|
find an inexpensive eating place for taiwanese food
Inexpensive Taiwanese eating places include Din Tai
Fung, etc. What do you want to choose? I can help you go there.
Q: How does a dialogue system process this request? Intelligent Agent
|
find an inexpensive eating place for taiwanese food
Inexpensive Taiwanese eating places include Din Tai
Fung, etc. What do you want to choose? I can help you go there.
Q: How does a dialogue system process this request? Intelligent Agent
|
[] |
GEM-SciDuet-train-48#paper-1077#slide-5
|
1077
|
Unsupervised Learning and Modeling of Knowledge and Intent for Spoken Dialogue Systems
|
Spoken dialogue systems (SDS) are rapidly appearing in various smart devices (smartphone, smart-TV, in-car navigating system, etc). The key role in a successful SDS is a spoken language understanding (SLU) component, which parses user utterances into semantic concepts in order to understand users' intentions. However, such semantic concepts and their structure are manually created by experts, and the annotation process results in extremely high cost and poor scalability in system development. Therefore, the dissertation focuses on improving SDS generalization and scalability by automatically inferring domain knowledge and learning structures from unlabeled conversations through a matrix factorization (MF) technique. With the automatically acquired semantic concepts and structures, we further investigate whether such information can be utilized to effectively understand user utterances and then show the feasibility of reducing human effort during SDS development.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113
],
"paper_content_text": [
"Introduction Various smart devices (e.g.",
"smartphone, smart-TV, in-car navigating system) are incorporating spoken language interfaces, a.k.a.",
"spoken dialogue systems (SDS), in order to help users finish tasks more efficiently.",
"The key role in a successful SDS is a spoken language understanding (SLU) component; in order to capture the language variation from dialogue participants, the SLU component must create a mapping between the natural language inputs and semantic representations that correspond to users' intentions.",
"The semantic representation must include \"concepts' and a \"structure\": concepts are the domain-specific topics, and the structure describes the relations between concepts and conveys intentions.",
"However, most prior work focused on learning the mapping between utterances and semantic representations, where such knowledge still remains predefined.",
"The need of annotations results in extremely high cost and poor scalability in system development.",
"Therefore, current technology usually limits conversational interactions to a few narrow predefined domains/topics.",
"With the increasing conversational interactions, this dissertation focuses on improving generalization and scalability of building SDSs with little human effort.",
"In order to achieve the goal, two questions need to be addressed: 1) Given unlabelled conversations, how can a system automatically induce and organize the domain-specific concepts?",
"2) With the automatically acquired knowledge, how can a system understand user utterances and intents?",
"To tackle the above problems, we propose to acquire the domain knowledge that captures human's semantics, intents, and behaviors.",
"Then based on the acquired knowledge, we build an SLU component to understand users and to offer better interactions in dialogues.",
"The dissertation shows the feasibility of building a dialogue learning system that is able to understand how particular domains work based on unlabeled conversations.",
"As a result, an initial SDS can be automatically built according to the learned knowledge, and its performance can be quickly improved by interacting with users for practical usage, presenting the potential of reducing human effort for SDS development.",
"Our MF method completes a partially-missing matrix for semantic decoding/behavior prediction.",
"Dark circles are observed facts, shaded circles are inferred facts.",
"The ontology induction maps observed feature patterns to semantic concepts.",
"The feature relation model constructs correlations between observed feature patterns.",
"The concept relation model learns the high-level semantic correlations for inferring hidden semantic slots or predicting subsequent behaviors.",
"Reasoning with matrix factorization incorporates these models jointly, and produces a coherent and domain-specific SLU model.",
"the intent detection problem in SLU, showing that results obtained from the unsupervised training process align well with the performance of traditional supervised learning.",
"Following their success of unsupervised SLU, recent studies have also obtained interesting results on the tasks of relation detection (Hakkani-Tür et al., 2013; Chen et al., 2014a) , entity extraction (Wang et al., 2014) , and extending domain coverage (El-Kahky et al., 2014; Chen and Rudnicky, 2014) .",
"However, most studies above do not explicitly learn latent factor representations from the data-while we hypothesize that the better robustness can be achieved by explicitly modeling the measurement errors (usually produced by automatic speech recognizers (ASR)) using latent variable models and taking additional local and global semantic constraints into account.",
"Latent Variable Modeling in SLU Early studies on latent variable modeling in speech included the classic hidden Markov model for statistical speech recognition (Jelinek, 1997) .",
"Recently, were the first to study the intent detection problem using query logs and a discrete Bayesian latent variable model.",
"In the field of dialogue modeling, the partially observable Markov decision process (POMDP) (Young et al., 2013) model is a popular technique for dialogue management, reducing the cost of handcrafted dialogue managers while producing robustness against speech recognition errors.",
"More recently, used a semi-supervised LDA model to show improvement on the slot filling task.",
"Also, Zhai and Williams (2014) proposed an unsupervised model for connecting words with latent states in HMMs using topic models, obtaining interesting qualitative and quantitative results.",
"However, for unsupervised SLU, it is not obvious how to incorporate additional information in the HMMs.",
"With increasing works about learn-ing the feature matrices for language representations (Mikolov et al., 2013) , matrix factorization (MF) has become very popular for both implicit and explicit feedback (Rendle et al., 2009; Chen et al., 2015a) .",
"This thesis proposal is the first to propose a framework about unsupervised SLU modeling, which is able to simultaneously consider various local and global knowledge automatically learned from unlabelled data using a matrix factorization (MF) technique.",
"The Proposed Work The proposed framework is shown in Figure 1(a) , where there are two main parts, one is knowledge acquisition and another is SLU modeling by MF.",
"The first part is to acquire the domain knowledge that is useful for building the domain-specific dialogue systems, which addresses the question about how to induce and organize the semantic concepts (the first problem).",
"Here we propose ontology induction and structure learning procedures.",
"The ontology induction refers to the semantic concept induction (yellow block) and the structure learning refers to relation models (blue and pink blocks) in Figure 1 (a).",
"The details are described in Section 4.",
"The second part is to self-train an SLU component using the acquired knowledge for the domainspecific SDS, and this part answers to the question about how to utilize the obtained information in SDSs to understand user utterances and intents.",
"There are two aspects regarding to SLU modeling, semantic decoding and behavior prediction.",
"The semantic decoding is to parse the input utterances into semantic forms for better understanding, and the behavior prediction is to predict the subsequent user behaviors for providing better system interactions.",
"This dissertation plans to apply MF techniques to unsupervised SLU modeling, including both semantic decoding and behavior prediction.",
"In the proposed model, we first build a feature matrix to represent training utterances, where each row refers to an utterance and each column refers to an observed feature pattern or a learned semantic concept (either a slot or a behavior).",
"terance implies the meaning facet food.",
"The MF approach is able to learn the latent feature vectors for utterances and semantic concepts, inferring implicit semantics to improve the decoding process-namely, by filling the matrix with probabilities (lower part of the matrix in Figure 1(b) ).",
"The feature model is built on the observed feature patterns and the learned concepts, where the concepts are obtained from the knowledge acquisition process Chen et al., 2015b) .",
"Section 5.1 explains the detail of the feature model.",
"In order to consider the additional structure information, we propose a relation propagation model based on the learned structure, which includes a feature relation model (blue block) and a concept relation model (pink block) described in Section 5.2.",
"Finally we train an SLU model by learning latent feature vectors for utterances and slots/behaviors through MF techniques.",
"Combining with a relation propagation model, the trained SLU model is able to estimate the probability that each concept occurs in the testing utterance, and how likely each concept is domain-specific simultaneously.",
"In other words, the SLU model is able to transform testing utterances into domainspecific semantic representations or predicted behaviors without human involvement.",
"Knowledge Acquisition Given unlabeled conversations and available knowledge resources, we plan to extract organized knowledge that can be used for domain-specific SDSs.",
"The ontology induction and structure learning are proposed to automate an ontology building process.",
"2014b) proposed to automatically induce semantic slots for SDSs by framesemantic parsing, where all ASR-decoded utter- ances are parsed using SEMAFOR 1 , a state-ofthe-art frame-semantic parser (Das et al., 2010; Das et al., 2013) , and then all frames from parsed results are extracted as slot candidates (Dinarelli et al., 2009) .",
"For example, Figure 2 shows an example of an ASR-decoded text output parsed by SEMAFOR.",
"There are three frames (capability, expensiveness, and locale by use) in the utterance, which we consider as slot candidates.",
"Ontology Induction Since SEMAFOR was trained on FrameNet annotation, which has a more generic framesemantic context, not all the frames from the parsing results can be used as the actual slots in the domain-specific dialogue systems.",
"For instance, in Figure 2 , \"expensiveness\" and \"locale by use\" frames are essentially the key slots for the purpose of understanding in the restaurant query domain, whereas the \"capability\" frame does not convey particularly valuable information for the domain-specific SDS.",
"In order to fix this issue, Chen et al.",
"(2014b) proved that integrating continuous-valued word embeddings with a probabilistic frame-semantic parser is able to identify key semantic slots in an unsupervised fashion, reducing the cost of designing task-oriented SDSs.",
"Structure Learning A key challenge of designing a coherent semantic ontology for SLU is to consider the structure and relations between semantic concepts.",
"In practice, however, it is difficult for domain experts and professional annotators to define a coherent slot set, while considering various lexical, syntactic, and semantic dependencies.",
"The previous work exploited the typed syntactic dependency theory for unsupervised induction and organization of semantic slots in SDSs (Chen et al., 2015b) .",
"More specifically, two knowledge graphs, a slot-based semantic knowledge graph and a word-based lexical knowledge graph, are automatically constructed.",
"To jointly consider the word-to-word, word-to-slot, and slot-to-slot relations, we use a random walk inference algorithm to combine these two knowledge graphs, guided by dependency grammars.",
"Figure 3 is a simplified example of the automatically built semantic knowledge graph corresponding to the restaurant domain.",
"The experiments showed that considering inter-slot relations is crucial for generating a more coherent and compete slot set, resulting in a better SLU model, while enhancing the interpretability of semantic slots.",
"SLU Modeling by Matrix Factorization For two aspects of SLU modeling: semantic decoding and behavior prediction, we plan to apply MF to both tasks by treating learned concepts as semantic slots and human behaviors respectively.",
"Considering the benefits brought by MF techniques, including 1) modeling the noisy data, 2) modeling hidden information, and 3) modeling the dependency between observations, the dissertation applies an MF approach to SLU modeling for SDSs.",
"In our model, we use U to denote the set of input utterances, F as the set of observed feature patterns, and S as the set of semantic concepts we would like to predict (slots or human behaviors).",
"The pair of an utterance u ∈ U and a feature/concept x ∈ {F +S}, u, x , is a fact.",
"The input to our model is a set of observed facts O, and the observed facts for a given utterance is denoted by { u, x ∈ O}.",
"The goal of our model is to estimate, for a given utterance u and a given feature pattern/concept x, the probability, p(M u,x = 1), where M u,x is a binary random variable that is true if and only if x is the feature pattern/domainspecific concept in the utterance u.",
"We introduce a series of exponential family models that estimate the probability using a natural parameter θ u,x and the logistic sigmoid function: p(M u,x = 1 | θ u,x ) = σ(θ u,x ) (1) = 1 1 + exp (−θ u,x ) .",
"We construct a matrix M |U |×(|F |+|S|) as observed facts for MF by integrating a feature model and a relation propagation model below.",
"Feature Model First, we build a binary feature pattern matrix F f based on the observations, where each row refers to an utterance and each column refers to a feature pattern (a word or a phrase).",
"In other words, F f carries the basic word/phrase vector for each utterance, which is illustrated as the left part of the matrix in Figure 1(b) .",
"Then we build a binary matrix F s based on the induced semantic concepts from Section 4.1, which also denotes the slot/behavior features for all utterances (right part of the matrix in Figure 1(b) ).",
"For building the feature model M F , we concatenate two matrices and obtain M F = [ F f F s ], which refers to the upper part of the matrix in Figure 1(b) for training utterances.",
"Relation Propagation Model It is shown that the structure of semantic concepts helps decide domain-specific slots and further improves the SLU performance (Chen et al., 2015b) .",
"With the learned structure from Section 4.2, we can model the relations between semantic concepts, such as inter-slot and inter-behavior relations.",
"Also, the relations between feature patterns can be modeled in the similar way.",
"We construct two knowledge graphs to model the structure: • Feature knowledge graph is built as G f = V f , E f f , where V f = {f i ∈ F } and E f f = {e ij | f i , f j ∈ V f }.",
"• Semantic concept knowledge graph is built as G s = V s , E ss , where V s = {s i ∈ S} and E ss = {e ij | s i , s j ∈ V s }.",
"The structured graph can model the relation between the connected node pair (x i , x j ) as r(x i , x j ).",
"Here we compute two matrices R s = [r(s i , s j )] |S|×|S| and R f = [r(f i , f j )] |F |×|F | to represent concept relations and feature relations respectively.",
"With the built relation models, we combine them as a relation propagation matrix M R 2 : M R = R f 0 0 R s .",
"(2) The goal of this matrix is to propagate scores between nodes according to different types of relations in the constructed knowledge graphs (Chen and Metze, 2012) .",
"Integrated Model With a feature model M F and a relation propagation model M R , we integrate them into a single matrix.",
"M = M F · (M R + I) (3) = F f R f + F f 0 0 F s R s + F s , where M is final matrix and I is the identity matrix in order to remain the original values.",
"The matrix M is similar to M F , but some weights are enhanced through the relation propagation model.",
"The feature relations are built by F f R f , which is the matrix with internal weight propagation on the feature knowledge graph (the blue arrow in Figure 1(b) ).",
"Similarly, F s R s models the semantic concept correlations, and can be treated as the matrix with internal weight propagation on the semantic concept knowledge graph (the pink arrow in Figure 1(b) ).",
"The propagation model can be treated as running a random walk algorithm on the graphs.",
"By integrating with the relation propagation model, the relations can be propagated via the knowledge graphs, and the hidden information may be modeled based on the assumption that mutual relations usually help inference (Chen et al., 2015b) .",
"Hence, the structure information can be automatically involved in the matrix.",
"In conclusion, for each utterance, the integrated model not only predicts the probabilities that semantic concepts occur but also considers whether they are domain-specific.",
"Model Learning The proposed model is parameterized through weights and latent component vectors, where the parameters are estimated by maximizing the log likelihood of observed data (Collins et al., 2001) .",
"θ * = arg max θ u∈U p(θ | M u ) (4) = arg max θ u∈U p(M u | θ)p(θ) = arg max θ u∈U ln p(M u | θ) − λ θ , where M u is the vector corresponding to the utterance u from M u,x in (1), because we assume that each utterance is independent of others.",
"To avoid treating unobserved facts as designed negative facts, we consider our positive-only data as implicit feedback.",
"Bayesian Personalized Ranking (BPR) is an optimization criterion that learns from implicit feedback for MF, which uses a variant of the ranking: giving observed true facts higher scores than unobserved (true or false) facts (Rendle et al., 2009 ).",
"Riedel et al.",
"(2013) also showed that BPR learns the implicit relations and improves a relation extraction task.",
"To estimate the parameters in (4), we create a dataset of ranked pairs from M in (3): for each utterance u and each observed fact f + = u, x + , where M u,x ≥ δ, we choose each semantic concept x − such that f − = u, x − , where M u,x < δ, which refers to the semantic concept we have not observed in utterance u.",
"That is, we construct the observed data O from M .",
"Then for each pair of facts f + and f − , we want to model p(f + ) > p(f − ) and hence θ f + > θ f − according to (1).",
"BPR maximizes the summation of each ranked pair, where the objective is u∈U ln p(M u | θ) = f + ∈O f − ∈O ln σ(θ f + − θ f − ).",
"(5) The BPR objective is an approximation to the per utterance AUC (area under the ROC curve), which directly correlates to what we want to achieve -well-ranked semantic concepts per utterance, which denotes the better estimation of semantic slots or human behaviors.",
"To maximize the objective in (5), we employ a stochastic gradient descent (SGD) algorithm (Rendle et al., 2009) .",
"For each randomly sampled observed fact u, x + , we sample an unobserved fact u, x − , which results in |O| fact pairs f − , f + .",
"For each pair, we perform an SGD update using the gradient of the corresponding objective function for matrix factorization (Gantner et al., 2011) .",
"Conclusion and Future Work This thesis proposal proposes an unsupervised SLU approach by automating the dialogue learning process on speech conversations.",
"The preliminary results show that for the automatic speech recognition (ASR) transcripts (word error rate is about 37%), the acquired knowledge can be successfully applied to SLU modeling through MF techniques, guiding the direction of the methodology.",
"The main planed tasks include: • Semantic concept identification • Semantic concept annotation • SLU modeling by matrix factorization In this thesis proposal, ongoing work and future plans have been presented towards an automatically built domain-specific SDS.",
"With increasing semantic resources, such as Google's Knowledge Graph and Microsoft Satori, the dissertation shows the feasibility that utilizing available knowledge improves the generalization and the scalability of dialogue system development for practical usage."
]
}
|
{
"paper_header_number": [
"1",
"3",
"4",
"4.1",
"4.2",
"5",
"5.1",
"5.2",
"5.3",
"5.4",
"6"
],
"paper_header_content": [
"Introduction",
"The Proposed Work",
"Knowledge Acquisition",
"Ontology Induction",
"Structure Learning",
"SLU Modeling by Matrix Factorization",
"Feature Model",
"Relation Propagation Model",
"Integrated Model",
"Model Learning",
"Conclusion and Future Work"
]
}
|
GEM-SciDuet-train-48#paper-1077#slide-5
|
Sds process
|
find an inexpensive eating place for taiwanese food
seeking=find target=eating place price=inexpensive food=taiwanese food Intelligent Agent Organized Domain Knowledge
Ontology Induction (semantic slot)
Structure Learning (inter-slot relation) PREP_FOR seeking
SPOKEN LANGUAGE UNDERSTANDING (SLU)
SELECT restaurant { restaurant.price=inexpensive restaurant.food=taiwanese food
Intelligent Agent Inexpensive Taiwanese eating places include Din Tai Fung, Boiling Point, etc. What do you want to choose? I can help you go there. (navigation)
|
find an inexpensive eating place for taiwanese food
seeking=find target=eating place price=inexpensive food=taiwanese food Intelligent Agent Organized Domain Knowledge
Ontology Induction (semantic slot)
Structure Learning (inter-slot relation) PREP_FOR seeking
SPOKEN LANGUAGE UNDERSTANDING (SLU)
SELECT restaurant { restaurant.price=inexpensive restaurant.food=taiwanese food
Intelligent Agent Inexpensive Taiwanese eating places include Din Tai Fung, Boiling Point, etc. What do you want to choose? I can help you go there. (navigation)
|
[] |
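Aside (not part of the dataset): the model-learning passage in the record above describes sampling an unobserved fact for each observed fact and performing a BPR-style pairwise SGD update for matrix factorization (Rendle et al., 2009). A minimal sketch of that update rule follows; the function name, learning rate, regularizer, and threshold are illustrative assumptions, not values from the source.

```python
import numpy as np

def bpr_sgd_epoch(U, V, M, delta=0.5, lr=0.1, reg=0.01, seed=0):
    """One epoch of BPR-style pairwise SGD for matrix factorization.

    U: utterance latent vectors (|U| x k); V: feature/concept latent vectors.
    Entries of M at or above delta are observed facts; for each observed fact
    (u, x+) we sample an unobserved fact (u, x-) for the same utterance and
    take a gradient step on ln sigma(theta_{u,x+} - theta_{u,x-}).
    """
    rng = np.random.default_rng(seed)
    for u, xp in zip(*np.nonzero(M >= delta)):
        negatives = np.nonzero(M[u] < delta)[0]
        if negatives.size == 0:
            continue
        xn = rng.choice(negatives)
        # gradient of ln sigma(U[u] . (V[xp] - V[xn])) with L2 regularization
        g = 1.0 / (1.0 + np.exp(U[u] @ (V[xp] - V[xn])))
        U[u] += lr * (g * (V[xp] - V[xn]) - reg * U[u])
        V[xp] += lr * (g * U[u] - reg * V[xp])
        V[xn] += lr * (-g * U[u] - reg * V[xn])
    return U, V
```

After enough epochs, observed facts should score higher than unobserved ones per utterance, which is the ranking property the BPR objective optimizes.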
GEM-SciDuet-train-48#paper-1077#slide-6
|
1077
|
Unsupervised Learning and Modeling of Knowledge and Intent for Spoken Dialogue Systems
|
Spoken dialogue systems (SDS) are rapidly appearing in various smart devices (smartphone, smart-TV, in-car navigating system, etc). The key role in a successful SDS is a spoken language understanding (SLU) component, which parses user utterances into semantic concepts in order to understand users' intentions. However, such semantic concepts and their structure are manually created by experts, and the annotation process results in extremely high cost and poor scalability in system development. Therefore, the dissertation focuses on improving SDS generalization and scalability by automatically inferring domain knowledge and learning structures from unlabeled conversations through a matrix factorization (MF) technique. With the automatically acquired semantic concepts and structures, we further investigate whether such information can be utilized to effectively understand user utterances and then show the feasibility of reducing human effort during SDS development.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113
],
"paper_content_text": [
"Introduction Various smart devices (e.g.",
"smartphone, smart-TV, in-car navigating system) are incorporating spoken language interfaces, a.k.a.",
"spoken dialogue systems (SDS), in order to help users finish tasks more efficiently.",
"The key role in a successful SDS is a spoken language understanding (SLU) component; in order to capture the language variation from dialogue participants, the SLU component must create a mapping between the natural language inputs and semantic representations that correspond to users' intentions.",
"The semantic representation must include \"concepts' and a \"structure\": concepts are the domain-specific topics, and the structure describes the relations between concepts and conveys intentions.",
"However, most prior work focused on learning the mapping between utterances and semantic representations, where such knowledge still remains predefined.",
"The need of annotations results in extremely high cost and poor scalability in system development.",
"Therefore, current technology usually limits conversational interactions to a few narrow predefined domains/topics.",
"With the increasing conversational interactions, this dissertation focuses on improving generalization and scalability of building SDSs with little human effort.",
"In order to achieve the goal, two questions need to be addressed: 1) Given unlabelled conversations, how can a system automatically induce and organize the domain-specific concepts?",
"2) With the automatically acquired knowledge, how can a system understand user utterances and intents?",
"To tackle the above problems, we propose to acquire the domain knowledge that captures human's semantics, intents, and behaviors.",
"Then based on the acquired knowledge, we build an SLU component to understand users and to offer better interactions in dialogues.",
"The dissertation shows the feasibility of building a dialogue learning system that is able to understand how particular domains work based on unlabeled conversations.",
"As a result, an initial SDS can be automatically built according to the learned knowledge, and its performance can be quickly improved by interacting with users for practical usage, presenting the potential of reducing human effort for SDS development.",
"Our MF method completes a partially-missing matrix for semantic decoding/behavior prediction.",
"Dark circles are observed facts, shaded circles are inferred facts.",
"The ontology induction maps observed feature patterns to semantic concepts.",
"The feature relation model constructs correlations between observed feature patterns.",
"The concept relation model learns the high-level semantic correlations for inferring hidden semantic slots or predicting subsequent behaviors.",
"Reasoning with matrix factorization incorporates these models jointly, and produces a coherent and domain-specific SLU model.",
"the intent detection problem in SLU, showing that results obtained from the unsupervised training process align well with the performance of traditional supervised learning.",
"Following their success of unsupervised SLU, recent studies have also obtained interesting results on the tasks of relation detection (Hakkani-Tür et al., 2013; Chen et al., 2014a) , entity extraction (Wang et al., 2014) , and extending domain coverage (El-Kahky et al., 2014; Chen and Rudnicky, 2014) .",
"However, most studies above do not explicitly learn latent factor representations from the data-while we hypothesize that the better robustness can be achieved by explicitly modeling the measurement errors (usually produced by automatic speech recognizers (ASR)) using latent variable models and taking additional local and global semantic constraints into account.",
"Latent Variable Modeling in SLU Early studies on latent variable modeling in speech included the classic hidden Markov model for statistical speech recognition (Jelinek, 1997) .",
"Recently, were the first to study the intent detection problem using query logs and a discrete Bayesian latent variable model.",
"In the field of dialogue modeling, the partially observable Markov decision process (POMDP) (Young et al., 2013) model is a popular technique for dialogue management, reducing the cost of handcrafted dialogue managers while producing robustness against speech recognition errors.",
"More recently, used a semi-supervised LDA model to show improvement on the slot filling task.",
"Also, Zhai and Williams (2014) proposed an unsupervised model for connecting words with latent states in HMMs using topic models, obtaining interesting qualitative and quantitative results.",
"However, for unsupervised SLU, it is not obvious how to incorporate additional information in the HMMs.",
"With increasing works about learn-ing the feature matrices for language representations (Mikolov et al., 2013) , matrix factorization (MF) has become very popular for both implicit and explicit feedback (Rendle et al., 2009; Chen et al., 2015a) .",
"This thesis proposal is the first to propose a framework about unsupervised SLU modeling, which is able to simultaneously consider various local and global knowledge automatically learned from unlabelled data using a matrix factorization (MF) technique.",
"The Proposed Work The proposed framework is shown in Figure 1(a) , where there are two main parts, one is knowledge acquisition and another is SLU modeling by MF.",
"The first part is to acquire the domain knowledge that is useful for building the domain-specific dialogue systems, which addresses the question about how to induce and organize the semantic concepts (the first problem).",
"Here we propose ontology induction and structure learning procedures.",
"The ontology induction refers to the semantic concept induction (yellow block) and the structure learning refers to relation models (blue and pink blocks) in Figure 1 (a).",
"The details are described in Section 4.",
"The second part is to self-train an SLU component using the acquired knowledge for the domainspecific SDS, and this part answers to the question about how to utilize the obtained information in SDSs to understand user utterances and intents.",
"There are two aspects regarding to SLU modeling, semantic decoding and behavior prediction.",
"The semantic decoding is to parse the input utterances into semantic forms for better understanding, and the behavior prediction is to predict the subsequent user behaviors for providing better system interactions.",
"This dissertation plans to apply MF techniques to unsupervised SLU modeling, including both semantic decoding and behavior prediction.",
"In the proposed model, we first build a feature matrix to represent training utterances, where each row refers to an utterance and each column refers to an observed feature pattern or a learned semantic concept (either a slot or a behavior).",
"terance implies the meaning facet food.",
"The MF approach is able to learn the latent feature vectors for utterances and semantic concepts, inferring implicit semantics to improve the decoding process-namely, by filling the matrix with probabilities (lower part of the matrix in Figure 1(b) ).",
"The feature model is built on the observed feature patterns and the learned concepts, where the concepts are obtained from the knowledge acquisition process Chen et al., 2015b) .",
"Section 5.1 explains the detail of the feature model.",
"In order to consider the additional structure information, we propose a relation propagation model based on the learned structure, which includes a feature relation model (blue block) and a concept relation model (pink block) described in Section 5.2.",
"Finally we train an SLU model by learning latent feature vectors for utterances and slots/behaviors through MF techniques.",
"Combining with a relation propagation model, the trained SLU model is able to estimate the probability that each concept occurs in the testing utterance, and how likely each concept is domain-specific simultaneously.",
"In other words, the SLU model is able to transform testing utterances into domainspecific semantic representations or predicted behaviors without human involvement.",
"Knowledge Acquisition Given unlabeled conversations and available knowledge resources, we plan to extract organized knowledge that can be used for domain-specific SDSs.",
"The ontology induction and structure learning are proposed to automate an ontology building process.",
"2014b) proposed to automatically induce semantic slots for SDSs by framesemantic parsing, where all ASR-decoded utter- ances are parsed using SEMAFOR 1 , a state-ofthe-art frame-semantic parser (Das et al., 2010; Das et al., 2013) , and then all frames from parsed results are extracted as slot candidates (Dinarelli et al., 2009) .",
"For example, Figure 2 shows an example of an ASR-decoded text output parsed by SEMAFOR.",
"There are three frames (capability, expensiveness, and locale by use) in the utterance, which we consider as slot candidates.",
"Ontology Induction Since SEMAFOR was trained on FrameNet annotation, which has a more generic framesemantic context, not all the frames from the parsing results can be used as the actual slots in the domain-specific dialogue systems.",
"For instance, in Figure 2 , \"expensiveness\" and \"locale by use\" frames are essentially the key slots for the purpose of understanding in the restaurant query domain, whereas the \"capability\" frame does not convey particularly valuable information for the domain-specific SDS.",
"In order to fix this issue, Chen et al.",
"(2014b) proved that integrating continuous-valued word embeddings with a probabilistic frame-semantic parser is able to identify key semantic slots in an unsupervised fashion, reducing the cost of designing task-oriented SDSs.",
"Structure Learning A key challenge of designing a coherent semantic ontology for SLU is to consider the structure and relations between semantic concepts.",
"In practice, however, it is difficult for domain experts and professional annotators to define a coherent slot set, while considering various lexical, syntactic, and semantic dependencies.",
"The previous work exploited the typed syntactic dependency theory for unsupervised induction and organization of semantic slots in SDSs (Chen et al., 2015b) .",
"More specifically, two knowledge graphs, a slot-based semantic knowledge graph and a word-based lexical knowledge graph, are automatically constructed.",
"To jointly consider the word-to-word, word-to-slot, and slot-to-slot relations, we use a random walk inference algorithm to combine these two knowledge graphs, guided by dependency grammars.",
"Figure 3 is a simplified example of the automatically built semantic knowledge graph corresponding to the restaurant domain.",
"The experiments showed that considering inter-slot relations is crucial for generating a more coherent and compete slot set, resulting in a better SLU model, while enhancing the interpretability of semantic slots.",
"SLU Modeling by Matrix Factorization For two aspects of SLU modeling: semantic decoding and behavior prediction, we plan to apply MF to both tasks by treating learned concepts as semantic slots and human behaviors respectively.",
"Considering the benefits brought by MF techniques, including 1) modeling the noisy data, 2) modeling hidden information, and 3) modeling the dependency between observations, the dissertation applies an MF approach to SLU modeling for SDSs.",
"In our model, we use U to denote the set of input utterances, F as the set of observed feature patterns, and S as the set of semantic concepts we would like to predict (slots or human behaviors).",
"The pair of an utterance u ∈ U and a feature/concept x ∈ {F +S}, u, x , is a fact.",
"The input to our model is a set of observed facts O, and the observed facts for a given utterance is denoted by { u, x ∈ O}.",
"The goal of our model is to estimate, for a given utterance u and a given feature pattern/concept x, the probability, p(M u,x = 1), where M u,x is a binary random variable that is true if and only if x is the feature pattern/domainspecific concept in the utterance u.",
"We introduce a series of exponential family models that estimate the probability using a natural parameter θ u,x and the logistic sigmoid function: p(M u,x = 1 | θ u,x ) = σ(θ u,x ) (1) = 1 1 + exp (−θ u,x ) .",
"We construct a matrix M |U |×(|F |+|S|) as observed facts for MF by integrating a feature model and a relation propagation model below.",
"Feature Model First, we build a binary feature pattern matrix F f based on the observations, where each row refers to an utterance and each column refers to a feature pattern (a word or a phrase).",
"In other words, F f carries the basic word/phrase vector for each utterance, which is illustrated as the left part of the matrix in Figure 1(b) .",
"Then we build a binary matrix F s based on the induced semantic concepts from Section 4.1, which also denotes the slot/behavior features for all utterances (right part of the matrix in Figure 1(b) ).",
"For building the feature model M F , we concatenate two matrices and obtain M F = [ F f F s ], which refers to the upper part of the matrix in Figure 1(b) for training utterances.",
"Relation Propagation Model It is shown that the structure of semantic concepts helps decide domain-specific slots and further improves the SLU performance (Chen et al., 2015b) .",
"With the learned structure from Section 4.2, we can model the relations between semantic concepts, such as inter-slot and inter-behavior relations.",
"Also, the relations between feature patterns can be modeled in the similar way.",
"We construct two knowledge graphs to model the structure: • Feature knowledge graph is built as G f = V f , E f f , where V f = {f i ∈ F } and E f f = {e ij | f i , f j ∈ V f }.",
"• Semantic concept knowledge graph is built as G s = V s , E ss , where V s = {s i ∈ S} and E ss = {e ij | s i , s j ∈ V s }.",
"The structured graph can model the relation between the connected node pair (x i , x j ) as r(x i , x j ).",
"Here we compute two matrices R s = [r(s i , s j )] |S|×|S| and R f = [r(f i , f j )] |F |×|F | to represent concept relations and feature relations respectively.",
"With the built relation models, we combine them as a relation propagation matrix M R 2 : M R = R f 0 0 R s .",
"(2) The goal of this matrix is to propagate scores between nodes according to different types of relations in the constructed knowledge graphs (Chen and Metze, 2012) .",
"Integrated Model With a feature model M F and a relation propagation model M R , we integrate them into a single matrix.",
"M = M F · (M R + I) (3) = F f R f + F f 0 0 F s R s + F s , where M is final matrix and I is the identity matrix in order to remain the original values.",
"The matrix M is similar to M F , but some weights are enhanced through the relation propagation model.",
"The feature relations are built by F f R f , which is the matrix with internal weight propagation on the feature knowledge graph (the blue arrow in Figure 1(b) ).",
"Similarly, F s R s models the semantic concept correlations, and can be treated as the matrix with internal weight propagation on the semantic concept knowledge graph (the pink arrow in Figure 1(b) ).",
"The propagation model can be treated as running a random walk algorithm on the graphs.",
"By integrating with the relation propagation model, the relations can be propagated via the knowledge graphs, and the hidden information may be modeled based on the assumption that mutual relations usually help inference (Chen et al., 2015b) .",
"Hence, the structure information can be automatically involved in the matrix.",
"In conclusion, for each utterance, the integrated model not only predicts the probabilities that semantic concepts occur but also considers whether they are domain-specific.",
"Model Learning The proposed model is parameterized through weights and latent component vectors, where the parameters are estimated by maximizing the log likelihood of observed data (Collins et al., 2001) .",
"θ * = arg max θ u∈U p(θ | M u ) (4) = arg max θ u∈U p(M u | θ)p(θ) = arg max θ u∈U ln p(M u | θ) − λ θ , where M u is the vector corresponding to the utterance u from M u,x in (1), because we assume that each utterance is independent of others.",
"To avoid treating unobserved facts as designed negative facts, we consider our positive-only data as implicit feedback.",
"Bayesian Personalized Ranking (BPR) is an optimization criterion that learns from implicit feedback for MF, which uses a variant of the ranking: giving observed true facts higher scores than unobserved (true or false) facts (Rendle et al., 2009 ).",
"Riedel et al.",
"(2013) also showed that BPR learns the implicit relations and improves a relation extraction task.",
"To estimate the parameters in (4), we create a dataset of ranked pairs from M in (3): for each utterance u and each observed fact f + = u, x + , where M u,x ≥ δ, we choose each semantic concept x − such that f − = u, x − , where M u,x < δ, which refers to the semantic concept we have not observed in utterance u.",
"That is, we construct the observed data O from M .",
"Then for each pair of facts f + and f − , we want to model p(f + ) > p(f − ) and hence θ f + > θ f − according to (1).",
"BPR maximizes the summation of each ranked pair, where the objective is u∈U ln p(M u | θ) = f + ∈O f − ∈O ln σ(θ f + − θ f − ).",
"(5) The BPR objective is an approximation to the per utterance AUC (area under the ROC curve), which directly correlates to what we want to achieve -well-ranked semantic concepts per utterance, which denotes the better estimation of semantic slots or human behaviors.",
"To maximize the objective in (5), we employ a stochastic gradient descent (SGD) algorithm (Rendle et al., 2009) .",
"For each randomly sampled observed fact u, x + , we sample an unobserved fact u, x − , which results in |O| fact pairs f − , f + .",
"For each pair, we perform an SGD update using the gradient of the corresponding objective function for matrix factorization (Gantner et al., 2011) .",
"Conclusion and Future Work This thesis proposal proposes an unsupervised SLU approach by automating the dialogue learning process on speech conversations.",
"The preliminary results show that for the automatic speech recognition (ASR) transcripts (word error rate is about 37%), the acquired knowledge can be successfully applied to SLU modeling through MF techniques, guiding the direction of the methodology.",
"The main planed tasks include: • Semantic concept identification • Semantic concept annotation • SLU modeling by matrix factorization In this thesis proposal, ongoing work and future plans have been presented towards an automatically built domain-specific SDS.",
"With increasing semantic resources, such as Google's Knowledge Graph and Microsoft Satori, the dissertation shows the feasibility that utilizing available knowledge improves the generalization and the scalability of dialogue system development for practical usage."
]
}
|
{
"paper_header_number": [
"1",
"3",
"4",
"4.1",
"4.2",
"5",
"5.1",
"5.2",
"5.3",
"5.4",
"6"
],
"paper_header_content": [
"Introduction",
"The Proposed Work",
"Knowledge Acquisition",
"Ontology Induction",
"Structure Learning",
"SLU Modeling by Matrix Factorization",
"Feature Model",
"Relation Propagation Model",
"Integrated Model",
"Model Learning",
"Conclusion and Future Work"
]
}
|
GEM-SciDuet-train-48#paper-1077#slide-6
|
Goals
|
find an inexpensive eating place for taiwanese food
1. Ontology Induction (semantic slot)
3. Spoken Language Understanding price food AMOD
SELECT restaurant restaurant.price=inexpensive restaurant.food=taiwanese food
4. Behavior Prediction 2. Structure Learning
Knowledge Acquisition SLU Modeling
|
find an inexpensive eating place for taiwanese food
1. Ontology Induction (semantic slot)
3. Spoken Language Understanding price food AMOD
SELECT restaurant restaurant.price=inexpensive restaurant.food=taiwanese food
4. Behavior Prediction 2. Structure Learning
Knowledge Acquisition SLU Modeling
|
[] |
GEM-SciDuet-train-48#paper-1077#slide-7
|
1077
|
Unsupervised Learning and Modeling of Knowledge and Intent for Spoken Dialogue Systems
|
Spoken dialogue systems (SDS) are rapidly appearing in various smart devices (smartphone, smart-TV, in-car navigating system, etc). The key role in a successful SDS is a spoken language understanding (SLU) component, which parses user utterances into semantic concepts in order to understand users' intentions. However, such semantic concepts and their structure are manually created by experts, and the annotation process results in extremely high cost and poor scalability in system development. Therefore, the dissertation focuses on improving SDS generalization and scalability by automatically inferring domain knowledge and learning structures from unlabeled conversations through a matrix factorization (MF) technique. With the automatically acquired semantic concepts and structures, we further investigate whether such information can be utilized to effectively understand user utterances and then show the feasibility of reducing human effort during SDS development.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113
],
"paper_content_text": [
"Introduction Various smart devices (e.g.",
"smartphone, smart-TV, in-car navigating system) are incorporating spoken language interfaces, a.k.a.",
"spoken dialogue systems (SDS), in order to help users finish tasks more efficiently.",
"The key role in a successful SDS is a spoken language understanding (SLU) component; in order to capture the language variation from dialogue participants, the SLU component must create a mapping between the natural language inputs and semantic representations that correspond to users' intentions.",
"The semantic representation must include \"concepts' and a \"structure\": concepts are the domain-specific topics, and the structure describes the relations between concepts and conveys intentions.",
"However, most prior work focused on learning the mapping between utterances and semantic representations, where such knowledge still remains predefined.",
"The need of annotations results in extremely high cost and poor scalability in system development.",
"Therefore, current technology usually limits conversational interactions to a few narrow predefined domains/topics.",
"With the increasing conversational interactions, this dissertation focuses on improving generalization and scalability of building SDSs with little human effort.",
"In order to achieve the goal, two questions need to be addressed: 1) Given unlabelled conversations, how can a system automatically induce and organize the domain-specific concepts?",
"2) With the automatically acquired knowledge, how can a system understand user utterances and intents?",
"To tackle the above problems, we propose to acquire the domain knowledge that captures human's semantics, intents, and behaviors.",
"Then based on the acquired knowledge, we build an SLU component to understand users and to offer better interactions in dialogues.",
"The dissertation shows the feasibility of building a dialogue learning system that is able to understand how particular domains work based on unlabeled conversations.",
"As a result, an initial SDS can be automatically built according to the learned knowledge, and its performance can be quickly improved by interacting with users for practical usage, presenting the potential of reducing human effort for SDS development.",
"Our MF method completes a partially-missing matrix for semantic decoding/behavior prediction.",
"Dark circles are observed facts, shaded circles are inferred facts.",
"The ontology induction maps observed feature patterns to semantic concepts.",
"The feature relation model constructs correlations between observed feature patterns.",
"The concept relation model learns the high-level semantic correlations for inferring hidden semantic slots or predicting subsequent behaviors.",
"Reasoning with matrix factorization incorporates these models jointly, and produces a coherent and domain-specific SLU model.",
"the intent detection problem in SLU, showing that results obtained from the unsupervised training process align well with the performance of traditional supervised learning.",
"Following their success of unsupervised SLU, recent studies have also obtained interesting results on the tasks of relation detection (Hakkani-Tür et al., 2013; Chen et al., 2014a) , entity extraction (Wang et al., 2014) , and extending domain coverage (El-Kahky et al., 2014; Chen and Rudnicky, 2014) .",
"However, most studies above do not explicitly learn latent factor representations from the data-while we hypothesize that the better robustness can be achieved by explicitly modeling the measurement errors (usually produced by automatic speech recognizers (ASR)) using latent variable models and taking additional local and global semantic constraints into account.",
"Latent Variable Modeling in SLU Early studies on latent variable modeling in speech included the classic hidden Markov model for statistical speech recognition (Jelinek, 1997) .",
"Recently, were the first to study the intent detection problem using query logs and a discrete Bayesian latent variable model.",
"In the field of dialogue modeling, the partially observable Markov decision process (POMDP) (Young et al., 2013) model is a popular technique for dialogue management, reducing the cost of handcrafted dialogue managers while producing robustness against speech recognition errors.",
"More recently, used a semi-supervised LDA model to show improvement on the slot filling task.",
"Also, Zhai and Williams (2014) proposed an unsupervised model for connecting words with latent states in HMMs using topic models, obtaining interesting qualitative and quantitative results.",
"However, for unsupervised SLU, it is not obvious how to incorporate additional information in the HMMs.",
"With increasing works about learn-ing the feature matrices for language representations (Mikolov et al., 2013) , matrix factorization (MF) has become very popular for both implicit and explicit feedback (Rendle et al., 2009; Chen et al., 2015a) .",
"This thesis proposal is the first to propose a framework about unsupervised SLU modeling, which is able to simultaneously consider various local and global knowledge automatically learned from unlabelled data using a matrix factorization (MF) technique.",
"The Proposed Work The proposed framework is shown in Figure 1(a) , where there are two main parts, one is knowledge acquisition and another is SLU modeling by MF.",
"The first part is to acquire the domain knowledge that is useful for building the domain-specific dialogue systems, which addresses the question about how to induce and organize the semantic concepts (the first problem).",
"Here we propose ontology induction and structure learning procedures.",
"The ontology induction refers to the semantic concept induction (yellow block) and the structure learning refers to relation models (blue and pink blocks) in Figure 1 (a).",
"The details are described in Section 4.",
"The second part is to self-train an SLU component using the acquired knowledge for the domainspecific SDS, and this part answers to the question about how to utilize the obtained information in SDSs to understand user utterances and intents.",
"There are two aspects regarding to SLU modeling, semantic decoding and behavior prediction.",
"The semantic decoding is to parse the input utterances into semantic forms for better understanding, and the behavior prediction is to predict the subsequent user behaviors for providing better system interactions.",
"This dissertation plans to apply MF techniques to unsupervised SLU modeling, including both semantic decoding and behavior prediction.",
"In the proposed model, we first build a feature matrix to represent training utterances, where each row refers to an utterance and each column refers to an observed feature pattern or a learned semantic concept (either a slot or a behavior).",
"terance implies the meaning facet food.",
"The MF approach is able to learn the latent feature vectors for utterances and semantic concepts, inferring implicit semantics to improve the decoding process-namely, by filling the matrix with probabilities (lower part of the matrix in Figure 1(b) ).",
"The feature model is built on the observed feature patterns and the learned concepts, where the concepts are obtained from the knowledge acquisition process Chen et al., 2015b) .",
"Section 5.1 explains the detail of the feature model.",
"In order to consider the additional structure information, we propose a relation propagation model based on the learned structure, which includes a feature relation model (blue block) and a concept relation model (pink block) described in Section 5.2.",
"Finally we train an SLU model by learning latent feature vectors for utterances and slots/behaviors through MF techniques.",
"Combining with a relation propagation model, the trained SLU model is able to estimate the probability that each concept occurs in the testing utterance, and how likely each concept is domain-specific simultaneously.",
"In other words, the SLU model is able to transform testing utterances into domainspecific semantic representations or predicted behaviors without human involvement.",
"Knowledge Acquisition Given unlabeled conversations and available knowledge resources, we plan to extract organized knowledge that can be used for domain-specific SDSs.",
"The ontology induction and structure learning are proposed to automate an ontology building process.",
"2014b) proposed to automatically induce semantic slots for SDSs by framesemantic parsing, where all ASR-decoded utter- ances are parsed using SEMAFOR 1 , a state-ofthe-art frame-semantic parser (Das et al., 2010; Das et al., 2013) , and then all frames from parsed results are extracted as slot candidates (Dinarelli et al., 2009) .",
"For example, Figure 2 shows an example of an ASR-decoded text output parsed by SEMAFOR.",
"There are three frames (capability, expensiveness, and locale by use) in the utterance, which we consider as slot candidates.",
"Ontology Induction Since SEMAFOR was trained on FrameNet annotation, which has a more generic framesemantic context, not all the frames from the parsing results can be used as the actual slots in the domain-specific dialogue systems.",
"For instance, in Figure 2 , \"expensiveness\" and \"locale by use\" frames are essentially the key slots for the purpose of understanding in the restaurant query domain, whereas the \"capability\" frame does not convey particularly valuable information for the domain-specific SDS.",
"In order to fix this issue, Chen et al.",
"(2014b) proved that integrating continuous-valued word embeddings with a probabilistic frame-semantic parser is able to identify key semantic slots in an unsupervised fashion, reducing the cost of designing task-oriented SDSs.",
"Structure Learning A key challenge of designing a coherent semantic ontology for SLU is to consider the structure and relations between semantic concepts.",
"In practice, however, it is difficult for domain experts and professional annotators to define a coherent slot set, while considering various lexical, syntactic, and semantic dependencies.",
"The previous work exploited the typed syntactic dependency theory for unsupervised induction and organization of semantic slots in SDSs (Chen et al., 2015b) .",
"More specifically, two knowledge graphs, a slot-based semantic knowledge graph and a word-based lexical knowledge graph, are automatically constructed.",
"To jointly consider the word-to-word, word-to-slot, and slot-to-slot relations, we use a random walk inference algorithm to combine these two knowledge graphs, guided by dependency grammars.",
"Figure 3 is a simplified example of the automatically built semantic knowledge graph corresponding to the restaurant domain.",
"The experiments showed that considering inter-slot relations is crucial for generating a more coherent and compete slot set, resulting in a better SLU model, while enhancing the interpretability of semantic slots.",
"SLU Modeling by Matrix Factorization For two aspects of SLU modeling: semantic decoding and behavior prediction, we plan to apply MF to both tasks by treating learned concepts as semantic slots and human behaviors respectively.",
"Considering the benefits brought by MF techniques, including 1) modeling the noisy data, 2) modeling hidden information, and 3) modeling the dependency between observations, the dissertation applies an MF approach to SLU modeling for SDSs.",
"In our model, we use U to denote the set of input utterances, F as the set of observed feature patterns, and S as the set of semantic concepts we would like to predict (slots or human behaviors).",
"The pair of an utterance u ∈ U and a feature/concept x ∈ {F +S}, u, x , is a fact.",
"The input to our model is a set of observed facts O, and the observed facts for a given utterance is denoted by { u, x ∈ O}.",
"The goal of our model is to estimate, for a given utterance u and a given feature pattern/concept x, the probability, p(M u,x = 1), where M u,x is a binary random variable that is true if and only if x is the feature pattern/domainspecific concept in the utterance u.",
"We introduce a series of exponential family models that estimate the probability using a natural parameter θ u,x and the logistic sigmoid function: p(M u,x = 1 | θ u,x ) = σ(θ u,x ) (1) = 1 1 + exp (−θ u,x ) .",
"We construct a matrix M |U |×(|F |+|S|) as observed facts for MF by integrating a feature model and a relation propagation model below.",
"Feature Model First, we build a binary feature pattern matrix F f based on the observations, where each row refers to an utterance and each column refers to a feature pattern (a word or a phrase).",
"In other words, F f carries the basic word/phrase vector for each utterance, which is illustrated as the left part of the matrix in Figure 1(b) .",
"Then we build a binary matrix F s based on the induced semantic concepts from Section 4.1, which also denotes the slot/behavior features for all utterances (right part of the matrix in Figure 1(b) ).",
"For building the feature model M F , we concatenate two matrices and obtain M F = [ F f F s ], which refers to the upper part of the matrix in Figure 1(b) for training utterances.",
"Relation Propagation Model It is shown that the structure of semantic concepts helps decide domain-specific slots and further improves the SLU performance (Chen et al., 2015b) .",
"With the learned structure from Section 4.2, we can model the relations between semantic concepts, such as inter-slot and inter-behavior relations.",
"Also, the relations between feature patterns can be modeled in the similar way.",
"We construct two knowledge graphs to model the structure: • Feature knowledge graph is built as G f = V f , E f f , where V f = {f i ∈ F } and E f f = {e ij | f i , f j ∈ V f }.",
"• Semantic concept knowledge graph is built as G s = V s , E ss , where V s = {s i ∈ S} and E ss = {e ij | s i , s j ∈ V s }.",
"The structured graph can model the relation between the connected node pair (x i , x j ) as r(x i , x j ).",
"Here we compute two matrices R s = [r(s i , s j )] |S|×|S| and R f = [r(f i , f j )] |F |×|F | to represent concept relations and feature relations respectively.",
"With the built relation models, we combine them as a relation propagation matrix M R 2 : M R = R f 0 0 R s .",
"(2) The goal of this matrix is to propagate scores between nodes according to different types of relations in the constructed knowledge graphs (Chen and Metze, 2012) .",
"Integrated Model With a feature model M F and a relation propagation model M R , we integrate them into a single matrix.",
"M = M F · (M R + I) (3) = F f R f + F f 0 0 F s R s + F s , where M is final matrix and I is the identity matrix in order to remain the original values.",
"The matrix M is similar to M F , but some weights are enhanced through the relation propagation model.",
"The feature relations are built by F f R f , which is the matrix with internal weight propagation on the feature knowledge graph (the blue arrow in Figure 1(b) ).",
"Similarly, F s R s models the semantic concept correlations, and can be treated as the matrix with internal weight propagation on the semantic concept knowledge graph (the pink arrow in Figure 1(b) ).",
"The propagation model can be treated as running a random walk algorithm on the graphs.",
"By integrating with the relation propagation model, the relations can be propagated via the knowledge graphs, and the hidden information may be modeled based on the assumption that mutual relations usually help inference (Chen et al., 2015b) .",
"Hence, the structure information can be automatically involved in the matrix.",
"In conclusion, for each utterance, the integrated model not only predicts the probabilities that semantic concepts occur but also considers whether they are domain-specific.",
"Model Learning The proposed model is parameterized through weights and latent component vectors, where the parameters are estimated by maximizing the log likelihood of observed data (Collins et al., 2001) .",
"θ * = arg max θ u∈U p(θ | M u ) (4) = arg max θ u∈U p(M u | θ)p(θ) = arg max θ u∈U ln p(M u | θ) − λ θ , where M u is the vector corresponding to the utterance u from M u,x in (1), because we assume that each utterance is independent of others.",
"To avoid treating unobserved facts as designed negative facts, we consider our positive-only data as implicit feedback.",
"Bayesian Personalized Ranking (BPR) is an optimization criterion that learns from implicit feedback for MF, which uses a variant of the ranking: giving observed true facts higher scores than unobserved (true or false) facts (Rendle et al., 2009 ).",
"Riedel et al.",
"(2013) also showed that BPR learns the implicit relations and improves a relation extraction task.",
"To estimate the parameters in (4), we create a dataset of ranked pairs from M in (3): for each utterance u and each observed fact f + = u, x + , where M u,x ≥ δ, we choose each semantic concept x − such that f − = u, x − , where M u,x < δ, which refers to the semantic concept we have not observed in utterance u.",
"That is, we construct the observed data O from M .",
"Then for each pair of facts f + and f − , we want to model p(f + ) > p(f − ) and hence θ f + > θ f − according to (1).",
"BPR maximizes the summation of each ranked pair, where the objective is u∈U ln p(M u | θ) = f + ∈O f − ∈O ln σ(θ f + − θ f − ).",
"(5) The BPR objective is an approximation to the per utterance AUC (area under the ROC curve), which directly correlates to what we want to achieve -well-ranked semantic concepts per utterance, which denotes the better estimation of semantic slots or human behaviors.",
"To maximize the objective in (5), we employ a stochastic gradient descent (SGD) algorithm (Rendle et al., 2009) .",
"For each randomly sampled observed fact u, x + , we sample an unobserved fact u, x − , which results in |O| fact pairs f − , f + .",
"For each pair, we perform an SGD update using the gradient of the corresponding objective function for matrix factorization (Gantner et al., 2011) .",
"Conclusion and Future Work This thesis proposal proposes an unsupervised SLU approach by automating the dialogue learning process on speech conversations.",
"The preliminary results show that for the automatic speech recognition (ASR) transcripts (word error rate is about 37%), the acquired knowledge can be successfully applied to SLU modeling through MF techniques, guiding the direction of the methodology.",
"The main planed tasks include: • Semantic concept identification • Semantic concept annotation • SLU modeling by matrix factorization In this thesis proposal, ongoing work and future plans have been presented towards an automatically built domain-specific SDS.",
"With increasing semantic resources, such as Google's Knowledge Graph and Microsoft Satori, the dissertation shows the feasibility that utilizing available knowledge improves the generalization and the scalability of dialogue system development for practical usage."
]
}
|
{
"paper_header_number": [
"1",
"3",
"4",
"4.1",
"4.2",
"5",
"5.1",
"5.2",
"5.3",
"5.4",
"6"
],
"paper_header_content": [
"Introduction",
"The Proposed Work",
"Knowledge Acquisition",
"Ontology Induction",
"Structure Learning",
"SLU Modeling by Matrix Factorization",
"Feature Model",
"Relation Propagation Model",
"Integrated Model",
"Model Learning",
"Conclusion and Future Work"
]
}
|
GEM-SciDuet-train-48#paper-1077#slide-7
|
Spoken language understanding
|
Output: the domain-specific semantic concepts included in each utterance
SLU Modeling by Matrix Factorization
can I have a cheap restaurant
Frame-Semantic Parsing Fw Fs Rw
Unlabeled Collection Word Relation Model Rs SLU Model
Structure Lexical KG Feature Model Knowledge Graph Propagation Model Learning
Slot Relation Model Semantic Semantic KG KG target=restaurant price=cheap
Y.-N. Chen et al., "Matrix Factorization with Knowledge Graph Propagation for Unsupervised Spoken Language Understanding," in Proc. of ACL-IJCNLP, 2015.
|
Output: the domain-specific semantic concepts included in each utterance
SLU Modeling by Matrix Factorization
can I have a cheap restaurant
Frame-Semantic Parsing Fw Fs Rw
Unlabeled Collection Word Relation Model Rs SLU Model
Structure Lexical KG Feature Model Knowledge Graph Propagation Model Learning
Slot Relation Model Semantic Semantic KG KG target=restaurant price=cheap
Y.-N. Chen et al., "Matrix Factorization with Knowledge Graph Propagation for Unsupervised Spoken Language Understanding," in Proc. of ACL-IJCNLP, 2015.
|
[] |
GEM-SciDuet-train-48#paper-1077#slide-8
|
1077
|
Unsupervised Learning and Modeling of Knowledge and Intent for Spoken Dialogue Systems
|
Spoken dialogue systems (SDS) are rapidly appearing in various smart devices (smartphone, smart-TV, in-car navigating system, etc). The key role in a successful SDS is a spoken language understanding (SLU) component, which parses user utterances into semantic concepts in order to understand users' intentions. However, such semantic concepts and their structure are manually created by experts, and the annotation process results in extremely high cost and poor scalability in system development. Therefore, the dissertation focuses on improving SDS generalization and scalability by automatically inferring domain knowledge and learning structures from unlabeled conversations through a matrix factorization (MF) technique. With the automatically acquired semantic concepts and structures, we further investigate whether such information can be utilized to effectively understand user utterances and then show the feasibility of reducing human effort during SDS development.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113
],
"paper_content_text": [
"Introduction Various smart devices (e.g.",
"smartphone, smart-TV, in-car navigating system) are incorporating spoken language interfaces, a.k.a.",
"spoken dialogue systems (SDS), in order to help users finish tasks more efficiently.",
"The key role in a successful SDS is a spoken language understanding (SLU) component; in order to capture the language variation from dialogue participants, the SLU component must create a mapping between the natural language inputs and semantic representations that correspond to users' intentions.",
"The semantic representation must include \"concepts' and a \"structure\": concepts are the domain-specific topics, and the structure describes the relations between concepts and conveys intentions.",
"However, most prior work focused on learning the mapping between utterances and semantic representations, where such knowledge still remains predefined.",
"The need of annotations results in extremely high cost and poor scalability in system development.",
"Therefore, current technology usually limits conversational interactions to a few narrow predefined domains/topics.",
"With the increasing conversational interactions, this dissertation focuses on improving generalization and scalability of building SDSs with little human effort.",
"In order to achieve the goal, two questions need to be addressed: 1) Given unlabelled conversations, how can a system automatically induce and organize the domain-specific concepts?",
"2) With the automatically acquired knowledge, how can a system understand user utterances and intents?",
"To tackle the above problems, we propose to acquire the domain knowledge that captures human's semantics, intents, and behaviors.",
"Then based on the acquired knowledge, we build an SLU component to understand users and to offer better interactions in dialogues.",
"The dissertation shows the feasibility of building a dialogue learning system that is able to understand how particular domains work based on unlabeled conversations.",
"As a result, an initial SDS can be automatically built according to the learned knowledge, and its performance can be quickly improved by interacting with users for practical usage, presenting the potential of reducing human effort for SDS development.",
"Our MF method completes a partially-missing matrix for semantic decoding/behavior prediction.",
"Dark circles are observed facts, shaded circles are inferred facts.",
"The ontology induction maps observed feature patterns to semantic concepts.",
"The feature relation model constructs correlations between observed feature patterns.",
"The concept relation model learns the high-level semantic correlations for inferring hidden semantic slots or predicting subsequent behaviors.",
"Reasoning with matrix factorization incorporates these models jointly, and produces a coherent and domain-specific SLU model.",
"the intent detection problem in SLU, showing that results obtained from the unsupervised training process align well with the performance of traditional supervised learning.",
"Following their success of unsupervised SLU, recent studies have also obtained interesting results on the tasks of relation detection (Hakkani-Tür et al., 2013; Chen et al., 2014a) , entity extraction (Wang et al., 2014) , and extending domain coverage (El-Kahky et al., 2014; Chen and Rudnicky, 2014) .",
"However, most studies above do not explicitly learn latent factor representations from the data-while we hypothesize that the better robustness can be achieved by explicitly modeling the measurement errors (usually produced by automatic speech recognizers (ASR)) using latent variable models and taking additional local and global semantic constraints into account.",
"Latent Variable Modeling in SLU Early studies on latent variable modeling in speech included the classic hidden Markov model for statistical speech recognition (Jelinek, 1997) .",
"Recently, were the first to study the intent detection problem using query logs and a discrete Bayesian latent variable model.",
"In the field of dialogue modeling, the partially observable Markov decision process (POMDP) (Young et al., 2013) model is a popular technique for dialogue management, reducing the cost of handcrafted dialogue managers while producing robustness against speech recognition errors.",
"More recently, used a semi-supervised LDA model to show improvement on the slot filling task.",
"Also, Zhai and Williams (2014) proposed an unsupervised model for connecting words with latent states in HMMs using topic models, obtaining interesting qualitative and quantitative results.",
"However, for unsupervised SLU, it is not obvious how to incorporate additional information in the HMMs.",
"With increasing works about learn-ing the feature matrices for language representations (Mikolov et al., 2013) , matrix factorization (MF) has become very popular for both implicit and explicit feedback (Rendle et al., 2009; Chen et al., 2015a) .",
"This thesis proposal is the first to propose a framework about unsupervised SLU modeling, which is able to simultaneously consider various local and global knowledge automatically learned from unlabelled data using a matrix factorization (MF) technique.",
"The Proposed Work The proposed framework is shown in Figure 1(a) , where there are two main parts, one is knowledge acquisition and another is SLU modeling by MF.",
"The first part is to acquire the domain knowledge that is useful for building the domain-specific dialogue systems, which addresses the question about how to induce and organize the semantic concepts (the first problem).",
"Here we propose ontology induction and structure learning procedures.",
"The ontology induction refers to the semantic concept induction (yellow block) and the structure learning refers to relation models (blue and pink blocks) in Figure 1 (a).",
"The details are described in Section 4.",
"The second part is to self-train an SLU component using the acquired knowledge for the domainspecific SDS, and this part answers to the question about how to utilize the obtained information in SDSs to understand user utterances and intents.",
"There are two aspects regarding to SLU modeling, semantic decoding and behavior prediction.",
"The semantic decoding is to parse the input utterances into semantic forms for better understanding, and the behavior prediction is to predict the subsequent user behaviors for providing better system interactions.",
"This dissertation plans to apply MF techniques to unsupervised SLU modeling, including both semantic decoding and behavior prediction.",
"In the proposed model, we first build a feature matrix to represent training utterances, where each row refers to an utterance and each column refers to an observed feature pattern or a learned semantic concept (either a slot or a behavior).",
"terance implies the meaning facet food.",
"The MF approach is able to learn the latent feature vectors for utterances and semantic concepts, inferring implicit semantics to improve the decoding process-namely, by filling the matrix with probabilities (lower part of the matrix in Figure 1(b) ).",
"The feature model is built on the observed feature patterns and the learned concepts, where the concepts are obtained from the knowledge acquisition process Chen et al., 2015b) .",
"Section 5.1 explains the detail of the feature model.",
"In order to consider the additional structure information, we propose a relation propagation model based on the learned structure, which includes a feature relation model (blue block) and a concept relation model (pink block) described in Section 5.2.",
"Finally we train an SLU model by learning latent feature vectors for utterances and slots/behaviors through MF techniques.",
"Combining with a relation propagation model, the trained SLU model is able to estimate the probability that each concept occurs in the testing utterance, and how likely each concept is domain-specific simultaneously.",
"In other words, the SLU model is able to transform testing utterances into domainspecific semantic representations or predicted behaviors without human involvement.",
"Knowledge Acquisition Given unlabeled conversations and available knowledge resources, we plan to extract organized knowledge that can be used for domain-specific SDSs.",
"The ontology induction and structure learning are proposed to automate an ontology building process.",
"2014b) proposed to automatically induce semantic slots for SDSs by framesemantic parsing, where all ASR-decoded utter- ances are parsed using SEMAFOR 1 , a state-ofthe-art frame-semantic parser (Das et al., 2010; Das et al., 2013) , and then all frames from parsed results are extracted as slot candidates (Dinarelli et al., 2009) .",
"For example, Figure 2 shows an example of an ASR-decoded text output parsed by SEMAFOR.",
"There are three frames (capability, expensiveness, and locale by use) in the utterance, which we consider as slot candidates.",
"Ontology Induction Since SEMAFOR was trained on FrameNet annotation, which has a more generic framesemantic context, not all the frames from the parsing results can be used as the actual slots in the domain-specific dialogue systems.",
"For instance, in Figure 2 , \"expensiveness\" and \"locale by use\" frames are essentially the key slots for the purpose of understanding in the restaurant query domain, whereas the \"capability\" frame does not convey particularly valuable information for the domain-specific SDS.",
"In order to fix this issue, Chen et al.",
"(2014b) proved that integrating continuous-valued word embeddings with a probabilistic frame-semantic parser is able to identify key semantic slots in an unsupervised fashion, reducing the cost of designing task-oriented SDSs.",
"Structure Learning A key challenge of designing a coherent semantic ontology for SLU is to consider the structure and relations between semantic concepts.",
"In practice, however, it is difficult for domain experts and professional annotators to define a coherent slot set, while considering various lexical, syntactic, and semantic dependencies.",
"The previous work exploited the typed syntactic dependency theory for unsupervised induction and organization of semantic slots in SDSs (Chen et al., 2015b) .",
"More specifically, two knowledge graphs, a slot-based semantic knowledge graph and a word-based lexical knowledge graph, are automatically constructed.",
"To jointly consider the word-to-word, word-to-slot, and slot-to-slot relations, we use a random walk inference algorithm to combine these two knowledge graphs, guided by dependency grammars.",
"Figure 3 is a simplified example of the automatically built semantic knowledge graph corresponding to the restaurant domain.",
"The experiments showed that considering inter-slot relations is crucial for generating a more coherent and compete slot set, resulting in a better SLU model, while enhancing the interpretability of semantic slots.",
"SLU Modeling by Matrix Factorization For two aspects of SLU modeling: semantic decoding and behavior prediction, we plan to apply MF to both tasks by treating learned concepts as semantic slots and human behaviors respectively.",
"Considering the benefits brought by MF techniques, including 1) modeling the noisy data, 2) modeling hidden information, and 3) modeling the dependency between observations, the dissertation applies an MF approach to SLU modeling for SDSs.",
"In our model, we use U to denote the set of input utterances, F as the set of observed feature patterns, and S as the set of semantic concepts we would like to predict (slots or human behaviors).",
"The pair of an utterance u ∈ U and a feature/concept x ∈ {F +S}, u, x , is a fact.",
"The input to our model is a set of observed facts O, and the observed facts for a given utterance is denoted by { u, x ∈ O}.",
"The goal of our model is to estimate, for a given utterance u and a given feature pattern/concept x, the probability, p(M u,x = 1), where M u,x is a binary random variable that is true if and only if x is the feature pattern/domainspecific concept in the utterance u.",
"We introduce a series of exponential family models that estimate the probability using a natural parameter θ u,x and the logistic sigmoid function: p(M u,x = 1 | θ u,x ) = σ(θ u,x ) (1) = 1 1 + exp (−θ u,x ) .",
"We construct a matrix M |U |×(|F |+|S|) as observed facts for MF by integrating a feature model and a relation propagation model below.",
"Feature Model First, we build a binary feature pattern matrix F f based on the observations, where each row refers to an utterance and each column refers to a feature pattern (a word or a phrase).",
"In other words, F f carries the basic word/phrase vector for each utterance, which is illustrated as the left part of the matrix in Figure 1(b) .",
"Then we build a binary matrix F s based on the induced semantic concepts from Section 4.1, which also denotes the slot/behavior features for all utterances (right part of the matrix in Figure 1(b) ).",
"For building the feature model M F , we concatenate two matrices and obtain M F = [ F f F s ], which refers to the upper part of the matrix in Figure 1(b) for training utterances.",
"Relation Propagation Model It is shown that the structure of semantic concepts helps decide domain-specific slots and further improves the SLU performance (Chen et al., 2015b) .",
"With the learned structure from Section 4.2, we can model the relations between semantic concepts, such as inter-slot and inter-behavior relations.",
"Also, the relations between feature patterns can be modeled in the similar way.",
"We construct two knowledge graphs to model the structure: • Feature knowledge graph is built as G f = V f , E f f , where V f = {f i ∈ F } and E f f = {e ij | f i , f j ∈ V f }.",
"• Semantic concept knowledge graph is built as G s = V s , E ss , where V s = {s i ∈ S} and E ss = {e ij | s i , s j ∈ V s }.",
"The structured graph can model the relation between the connected node pair (x i , x j ) as r(x i , x j ).",
"Here we compute two matrices R s = [r(s i , s j )] |S|×|S| and R f = [r(f i , f j )] |F |×|F | to represent concept relations and feature relations respectively.",
"With the built relation models, we combine them as a relation propagation matrix M R 2 : M R = R f 0 0 R s .",
"(2) The goal of this matrix is to propagate scores between nodes according to different types of relations in the constructed knowledge graphs (Chen and Metze, 2012) .",
"Integrated Model With a feature model M F and a relation propagation model M R , we integrate them into a single matrix.",
"M = M F · (M R + I) (3) = F f R f + F f 0 0 F s R s + F s , where M is final matrix and I is the identity matrix in order to remain the original values.",
"The matrix M is similar to M F , but some weights are enhanced through the relation propagation model.",
"The feature relations are built by F f R f , which is the matrix with internal weight propagation on the feature knowledge graph (the blue arrow in Figure 1(b) ).",
"Similarly, F s R s models the semantic concept correlations, and can be treated as the matrix with internal weight propagation on the semantic concept knowledge graph (the pink arrow in Figure 1(b) ).",
"The propagation model can be treated as running a random walk algorithm on the graphs.",
"By integrating with the relation propagation model, the relations can be propagated via the knowledge graphs, and the hidden information may be modeled based on the assumption that mutual relations usually help inference (Chen et al., 2015b) .",
"Hence, the structure information can be automatically involved in the matrix.",
"In conclusion, for each utterance, the integrated model not only predicts the probabilities that semantic concepts occur but also considers whether they are domain-specific.",
"Model Learning The proposed model is parameterized through weights and latent component vectors, where the parameters are estimated by maximizing the log likelihood of observed data (Collins et al., 2001) .",
"θ * = arg max θ u∈U p(θ | M u ) (4) = arg max θ u∈U p(M u | θ)p(θ) = arg max θ u∈U ln p(M u | θ) − λ θ , where M u is the vector corresponding to the utterance u from M u,x in (1), because we assume that each utterance is independent of others.",
"To avoid treating unobserved facts as designed negative facts, we consider our positive-only data as implicit feedback.",
"Bayesian Personalized Ranking (BPR) is an optimization criterion that learns from implicit feedback for MF, which uses a variant of the ranking: giving observed true facts higher scores than unobserved (true or false) facts (Rendle et al., 2009 ).",
"Riedel et al.",
"(2013) also showed that BPR learns the implicit relations and improves a relation extraction task.",
"To estimate the parameters in (4), we create a dataset of ranked pairs from M in (3): for each utterance u and each observed fact f + = u, x + , where M u,x ≥ δ, we choose each semantic concept x − such that f − = u, x − , where M u,x < δ, which refers to the semantic concept we have not observed in utterance u.",
"That is, we construct the observed data O from M .",
"Then for each pair of facts f + and f − , we want to model p(f + ) > p(f − ) and hence θ f + > θ f − according to (1).",
"BPR maximizes the summation of each ranked pair, where the objective is u∈U ln p(M u | θ) = f + ∈O f − ∈O ln σ(θ f + − θ f − ).",
"(5) The BPR objective is an approximation to the per utterance AUC (area under the ROC curve), which directly correlates to what we want to achieve -well-ranked semantic concepts per utterance, which denotes the better estimation of semantic slots or human behaviors.",
"To maximize the objective in (5), we employ a stochastic gradient descent (SGD) algorithm (Rendle et al., 2009) .",
"For each randomly sampled observed fact u, x + , we sample an unobserved fact u, x − , which results in |O| fact pairs f − , f + .",
"For each pair, we perform an SGD update using the gradient of the corresponding objective function for matrix factorization (Gantner et al., 2011) .",
"Conclusion and Future Work This thesis proposal proposes an unsupervised SLU approach by automating the dialogue learning process on speech conversations.",
"The preliminary results show that for the automatic speech recognition (ASR) transcripts (word error rate is about 37%), the acquired knowledge can be successfully applied to SLU modeling through MF techniques, guiding the direction of the methodology.",
"The main planed tasks include: • Semantic concept identification • Semantic concept annotation • SLU modeling by matrix factorization In this thesis proposal, ongoing work and future plans have been presented towards an automatically built domain-specific SDS.",
"With increasing semantic resources, such as Google's Knowledge Graph and Microsoft Satori, the dissertation shows the feasibility that utilizing available knowledge improves the generalization and the scalability of dialogue system development for practical usage."
]
}
|
{
"paper_header_number": [
"1",
"3",
"4",
"4.1",
"4.2",
"5",
"5.1",
"5.2",
"5.3",
"5.4",
"6"
],
"paper_header_content": [
"Introduction",
"The Proposed Work",
"Knowledge Acquisition",
"Ontology Induction",
"Structure Learning",
"SLU Modeling by Matrix Factorization",
"Feature Model",
"Relation Propagation Model",
"Integrated Model",
"Model Learning",
"Conclusion and Future Work"
]
}
|
GEM-SciDuet-train-48#paper-1077#slide-8
|
Probabilistic frame semantic parsing
|
a linguistically semantic resource, based on the frame-semantics theory
words/phrases can be represented as frames
low fat milk: milk evokes the food frame;
low fat fills the descriptor frame element
a state-of-the-art frame-semantics parser, trained on manually annotated FrameNet sentences
Baker et al., "The berkeley framenet project," in Proc. of International Conference on Computational linguistics, 1998. Das et al., " Frame-semantic parsing," in Proc. of Computational Linguistics, 2014.
|
a linguistically semantic resource, based on the frame-semantics theory
words/phrases can be represented as frames
low fat milk: milk evokes the food frame;
low fat fills the descriptor frame element
a state-of-the-art frame-semantics parser, trained on manually annotated FrameNet sentences
Baker et al., "The berkeley framenet project," in Proc. of International Conference on Computational linguistics, 1998. Das et al., " Frame-semantic parsing," in Proc. of Computational Linguistics, 2014.
|
[] |
GEM-SciDuet-train-48#paper-1077#slide-9
|
1077
|
Unsupervised Learning and Modeling of Knowledge and Intent for Spoken Dialogue Systems
|
Spoken dialogue systems (SDS) are rapidly appearing in various smart devices (smartphone, smart-TV, in-car navigating system, etc). The key role in a successful SDS is a spoken language understanding (SLU) component, which parses user utterances into semantic concepts in order to understand users' intentions. However, such semantic concepts and their structure are manually created by experts, and the annotation process results in extremely high cost and poor scalability in system development. Therefore, the dissertation focuses on improving SDS generalization and scalability by automatically inferring domain knowledge and learning structures from unlabeled conversations through a matrix factorization (MF) technique. With the automatically acquired semantic concepts and structures, we further investigate whether such information can be utilized to effectively understand user utterances and then show the feasibility of reducing human effort during SDS development.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113
],
"paper_content_text": [
"Introduction Various smart devices (e.g.",
"smartphone, smart-TV, in-car navigating system) are incorporating spoken language interfaces, a.k.a.",
"spoken dialogue systems (SDS), in order to help users finish tasks more efficiently.",
"The key role in a successful SDS is a spoken language understanding (SLU) component; in order to capture the language variation from dialogue participants, the SLU component must create a mapping between the natural language inputs and semantic representations that correspond to users' intentions.",
"The semantic representation must include \"concepts' and a \"structure\": concepts are the domain-specific topics, and the structure describes the relations between concepts and conveys intentions.",
"However, most prior work focused on learning the mapping between utterances and semantic representations, where such knowledge still remains predefined.",
"The need of annotations results in extremely high cost and poor scalability in system development.",
"Therefore, current technology usually limits conversational interactions to a few narrow predefined domains/topics.",
"With the increasing conversational interactions, this dissertation focuses on improving generalization and scalability of building SDSs with little human effort.",
"In order to achieve the goal, two questions need to be addressed: 1) Given unlabelled conversations, how can a system automatically induce and organize the domain-specific concepts?",
"2) With the automatically acquired knowledge, how can a system understand user utterances and intents?",
"To tackle the above problems, we propose to acquire the domain knowledge that captures human's semantics, intents, and behaviors.",
"Then based on the acquired knowledge, we build an SLU component to understand users and to offer better interactions in dialogues.",
"The dissertation shows the feasibility of building a dialogue learning system that is able to understand how particular domains work based on unlabeled conversations.",
"As a result, an initial SDS can be automatically built according to the learned knowledge, and its performance can be quickly improved by interacting with users for practical usage, presenting the potential of reducing human effort for SDS development.",
"Our MF method completes a partially-missing matrix for semantic decoding/behavior prediction.",
"Dark circles are observed facts, shaded circles are inferred facts.",
"The ontology induction maps observed feature patterns to semantic concepts.",
"The feature relation model constructs correlations between observed feature patterns.",
"The concept relation model learns the high-level semantic correlations for inferring hidden semantic slots or predicting subsequent behaviors.",
"Reasoning with matrix factorization incorporates these models jointly, and produces a coherent and domain-specific SLU model.",
"the intent detection problem in SLU, showing that results obtained from the unsupervised training process align well with the performance of traditional supervised learning.",
"Following their success of unsupervised SLU, recent studies have also obtained interesting results on the tasks of relation detection (Hakkani-Tür et al., 2013; Chen et al., 2014a) , entity extraction (Wang et al., 2014) , and extending domain coverage (El-Kahky et al., 2014; Chen and Rudnicky, 2014) .",
"However, most studies above do not explicitly learn latent factor representations from the data-while we hypothesize that the better robustness can be achieved by explicitly modeling the measurement errors (usually produced by automatic speech recognizers (ASR)) using latent variable models and taking additional local and global semantic constraints into account.",
"Latent Variable Modeling in SLU Early studies on latent variable modeling in speech included the classic hidden Markov model for statistical speech recognition (Jelinek, 1997) .",
"Recently, were the first to study the intent detection problem using query logs and a discrete Bayesian latent variable model.",
"In the field of dialogue modeling, the partially observable Markov decision process (POMDP) (Young et al., 2013) model is a popular technique for dialogue management, reducing the cost of handcrafted dialogue managers while producing robustness against speech recognition errors.",
"More recently, used a semi-supervised LDA model to show improvement on the slot filling task.",
"Also, Zhai and Williams (2014) proposed an unsupervised model for connecting words with latent states in HMMs using topic models, obtaining interesting qualitative and quantitative results.",
"However, for unsupervised SLU, it is not obvious how to incorporate additional information in the HMMs.",
"With increasing works about learn-ing the feature matrices for language representations (Mikolov et al., 2013) , matrix factorization (MF) has become very popular for both implicit and explicit feedback (Rendle et al., 2009; Chen et al., 2015a) .",
"This thesis proposal is the first to propose a framework about unsupervised SLU modeling, which is able to simultaneously consider various local and global knowledge automatically learned from unlabelled data using a matrix factorization (MF) technique.",
"The Proposed Work The proposed framework is shown in Figure 1(a) , where there are two main parts, one is knowledge acquisition and another is SLU modeling by MF.",
"The first part is to acquire the domain knowledge that is useful for building the domain-specific dialogue systems, which addresses the question about how to induce and organize the semantic concepts (the first problem).",
"Here we propose ontology induction and structure learning procedures.",
"The ontology induction refers to the semantic concept induction (yellow block) and the structure learning refers to relation models (blue and pink blocks) in Figure 1 (a).",
"The details are described in Section 4.",
"The second part is to self-train an SLU component using the acquired knowledge for the domainspecific SDS, and this part answers to the question about how to utilize the obtained information in SDSs to understand user utterances and intents.",
"There are two aspects regarding to SLU modeling, semantic decoding and behavior prediction.",
"The semantic decoding is to parse the input utterances into semantic forms for better understanding, and the behavior prediction is to predict the subsequent user behaviors for providing better system interactions.",
"This dissertation plans to apply MF techniques to unsupervised SLU modeling, including both semantic decoding and behavior prediction.",
"In the proposed model, we first build a feature matrix to represent training utterances, where each row refers to an utterance and each column refers to an observed feature pattern or a learned semantic concept (either a slot or a behavior).",
"terance implies the meaning facet food.",
"The MF approach is able to learn the latent feature vectors for utterances and semantic concepts, inferring implicit semantics to improve the decoding process-namely, by filling the matrix with probabilities (lower part of the matrix in Figure 1(b) ).",
"The feature model is built on the observed feature patterns and the learned concepts, where the concepts are obtained from the knowledge acquisition process Chen et al., 2015b) .",
"Section 5.1 explains the detail of the feature model.",
"In order to consider the additional structure information, we propose a relation propagation model based on the learned structure, which includes a feature relation model (blue block) and a concept relation model (pink block) described in Section 5.2.",
"Finally we train an SLU model by learning latent feature vectors for utterances and slots/behaviors through MF techniques.",
"Combining with a relation propagation model, the trained SLU model is able to estimate the probability that each concept occurs in the testing utterance, and how likely each concept is domain-specific simultaneously.",
"In other words, the SLU model is able to transform testing utterances into domainspecific semantic representations or predicted behaviors without human involvement.",
"Knowledge Acquisition Given unlabeled conversations and available knowledge resources, we plan to extract organized knowledge that can be used for domain-specific SDSs.",
"The ontology induction and structure learning are proposed to automate an ontology building process.",
"2014b) proposed to automatically induce semantic slots for SDSs by framesemantic parsing, where all ASR-decoded utter- ances are parsed using SEMAFOR 1 , a state-ofthe-art frame-semantic parser (Das et al., 2010; Das et al., 2013) , and then all frames from parsed results are extracted as slot candidates (Dinarelli et al., 2009) .",
"For example, Figure 2 shows an example of an ASR-decoded text output parsed by SEMAFOR.",
"There are three frames (capability, expensiveness, and locale by use) in the utterance, which we consider as slot candidates.",
"Ontology Induction Since SEMAFOR was trained on FrameNet annotation, which has a more generic framesemantic context, not all the frames from the parsing results can be used as the actual slots in the domain-specific dialogue systems.",
"For instance, in Figure 2 , \"expensiveness\" and \"locale by use\" frames are essentially the key slots for the purpose of understanding in the restaurant query domain, whereas the \"capability\" frame does not convey particularly valuable information for the domain-specific SDS.",
"In order to fix this issue, Chen et al.",
"(2014b) proved that integrating continuous-valued word embeddings with a probabilistic frame-semantic parser is able to identify key semantic slots in an unsupervised fashion, reducing the cost of designing task-oriented SDSs.",
"Structure Learning A key challenge of designing a coherent semantic ontology for SLU is to consider the structure and relations between semantic concepts.",
"In practice, however, it is difficult for domain experts and professional annotators to define a coherent slot set, while considering various lexical, syntactic, and semantic dependencies.",
"The previous work exploited the typed syntactic dependency theory for unsupervised induction and organization of semantic slots in SDSs (Chen et al., 2015b) .",
"More specifically, two knowledge graphs, a slot-based semantic knowledge graph and a word-based lexical knowledge graph, are automatically constructed.",
"To jointly consider the word-to-word, word-to-slot, and slot-to-slot relations, we use a random walk inference algorithm to combine these two knowledge graphs, guided by dependency grammars.",
"Figure 3 is a simplified example of the automatically built semantic knowledge graph corresponding to the restaurant domain.",
"The experiments showed that considering inter-slot relations is crucial for generating a more coherent and compete slot set, resulting in a better SLU model, while enhancing the interpretability of semantic slots.",
"SLU Modeling by Matrix Factorization For two aspects of SLU modeling: semantic decoding and behavior prediction, we plan to apply MF to both tasks by treating learned concepts as semantic slots and human behaviors respectively.",
"Considering the benefits brought by MF techniques, including 1) modeling the noisy data, 2) modeling hidden information, and 3) modeling the dependency between observations, the dissertation applies an MF approach to SLU modeling for SDSs.",
"In our model, we use U to denote the set of input utterances, F as the set of observed feature patterns, and S as the set of semantic concepts we would like to predict (slots or human behaviors).",
"The pair of an utterance u ∈ U and a feature/concept x ∈ {F +S}, u, x , is a fact.",
"The input to our model is a set of observed facts O, and the observed facts for a given utterance is denoted by { u, x ∈ O}.",
"The goal of our model is to estimate, for a given utterance u and a given feature pattern/concept x, the probability, p(M u,x = 1), where M u,x is a binary random variable that is true if and only if x is the feature pattern/domainspecific concept in the utterance u.",
"We introduce a series of exponential family models that estimate the probability using a natural parameter θ u,x and the logistic sigmoid function: p(M u,x = 1 | θ u,x ) = σ(θ u,x ) (1) = 1 1 + exp (−θ u,x ) .",
"We construct a matrix M |U |×(|F |+|S|) as observed facts for MF by integrating a feature model and a relation propagation model below.",
"Feature Model First, we build a binary feature pattern matrix F f based on the observations, where each row refers to an utterance and each column refers to a feature pattern (a word or a phrase).",
"In other words, F f carries the basic word/phrase vector for each utterance, which is illustrated as the left part of the matrix in Figure 1(b) .",
"Then we build a binary matrix F s based on the induced semantic concepts from Section 4.1, which also denotes the slot/behavior features for all utterances (right part of the matrix in Figure 1(b) ).",
"For building the feature model M F , we concatenate two matrices and obtain M F = [ F f F s ], which refers to the upper part of the matrix in Figure 1(b) for training utterances.",
"Relation Propagation Model It is shown that the structure of semantic concepts helps decide domain-specific slots and further improves the SLU performance (Chen et al., 2015b) .",
"With the learned structure from Section 4.2, we can model the relations between semantic concepts, such as inter-slot and inter-behavior relations.",
"Also, the relations between feature patterns can be modeled in the similar way.",
"We construct two knowledge graphs to model the structure: • Feature knowledge graph is built as G f = V f , E f f , where V f = {f i ∈ F } and E f f = {e ij | f i , f j ∈ V f }.",
"• Semantic concept knowledge graph is built as G s = V s , E ss , where V s = {s i ∈ S} and E ss = {e ij | s i , s j ∈ V s }.",
"The structured graph can model the relation between the connected node pair (x i , x j ) as r(x i , x j ).",
"Here we compute two matrices R s = [r(s i , s j )] |S|×|S| and R f = [r(f i , f j )] |F |×|F | to represent concept relations and feature relations respectively.",
"With the built relation models, we combine them as a relation propagation matrix M R 2 : M R = R f 0 0 R s .",
"(2) The goal of this matrix is to propagate scores between nodes according to different types of relations in the constructed knowledge graphs (Chen and Metze, 2012) .",
"Integrated Model With a feature model M F and a relation propagation model M R , we integrate them into a single matrix.",
"M = M F · (M R + I) (3) = F f R f + F f 0 0 F s R s + F s , where M is final matrix and I is the identity matrix in order to remain the original values.",
"The matrix M is similar to M F , but some weights are enhanced through the relation propagation model.",
"The feature relations are built by F f R f , which is the matrix with internal weight propagation on the feature knowledge graph (the blue arrow in Figure 1(b) ).",
"Similarly, F s R s models the semantic concept correlations, and can be treated as the matrix with internal weight propagation on the semantic concept knowledge graph (the pink arrow in Figure 1(b) ).",
"The propagation model can be treated as running a random walk algorithm on the graphs.",
"By integrating with the relation propagation model, the relations can be propagated via the knowledge graphs, and the hidden information may be modeled based on the assumption that mutual relations usually help inference (Chen et al., 2015b) .",
"Hence, the structure information can be automatically involved in the matrix.",
"In conclusion, for each utterance, the integrated model not only predicts the probabilities that semantic concepts occur but also considers whether they are domain-specific.",
"Model Learning The proposed model is parameterized through weights and latent component vectors, where the parameters are estimated by maximizing the log likelihood of observed data (Collins et al., 2001) .",
"θ * = arg max θ u∈U p(θ | M u ) (4) = arg max θ u∈U p(M u | θ)p(θ) = arg max θ u∈U ln p(M u | θ) − λ θ , where M u is the vector corresponding to the utterance u from M u,x in (1), because we assume that each utterance is independent of others.",
"To avoid treating unobserved facts as designed negative facts, we consider our positive-only data as implicit feedback.",
"Bayesian Personalized Ranking (BPR) is an optimization criterion that learns from implicit feedback for MF, which uses a variant of the ranking: giving observed true facts higher scores than unobserved (true or false) facts (Rendle et al., 2009 ).",
"Riedel et al.",
"(2013) also showed that BPR learns the implicit relations and improves a relation extraction task.",
"To estimate the parameters in (4), we create a dataset of ranked pairs from M in (3): for each utterance u and each observed fact f + = u, x + , where M u,x ≥ δ, we choose each semantic concept x − such that f − = u, x − , where M u,x < δ, which refers to the semantic concept we have not observed in utterance u.",
"That is, we construct the observed data O from M .",
"Then for each pair of facts f + and f − , we want to model p(f + ) > p(f − ) and hence θ f + > θ f − according to (1).",
"BPR maximizes the summation of each ranked pair, where the objective is u∈U ln p(M u | θ) = f + ∈O f − ∈O ln σ(θ f + − θ f − ).",
"(5) The BPR objective is an approximation to the per utterance AUC (area under the ROC curve), which directly correlates to what we want to achieve -well-ranked semantic concepts per utterance, which denotes the better estimation of semantic slots or human behaviors.",
"To maximize the objective in (5), we employ a stochastic gradient descent (SGD) algorithm (Rendle et al., 2009) .",
"For each randomly sampled observed fact u, x + , we sample an unobserved fact u, x − , which results in |O| fact pairs f − , f + .",
"For each pair, we perform an SGD update using the gradient of the corresponding objective function for matrix factorization (Gantner et al., 2011) .",
"Conclusion and Future Work This thesis proposal proposes an unsupervised SLU approach by automating the dialogue learning process on speech conversations.",
"The preliminary results show that for the automatic speech recognition (ASR) transcripts (word error rate is about 37%), the acquired knowledge can be successfully applied to SLU modeling through MF techniques, guiding the direction of the methodology.",
"The main planed tasks include: • Semantic concept identification • Semantic concept annotation • SLU modeling by matrix factorization In this thesis proposal, ongoing work and future plans have been presented towards an automatically built domain-specific SDS.",
"With increasing semantic resources, such as Google's Knowledge Graph and Microsoft Satori, the dissertation shows the feasibility that utilizing available knowledge improves the generalization and the scalability of dialogue system development for practical usage."
]
}
|
{
"paper_header_number": [
"1",
"3",
"4",
"4.1",
"4.2",
"5",
"5.1",
"5.2",
"5.3",
"5.4",
"6"
],
"paper_header_content": [
"Introduction",
"The Proposed Work",
"Knowledge Acquisition",
"Ontology Induction",
"Structure Learning",
"SLU Modeling by Matrix Factorization",
"Feature Model",
"Relation Propagation Model",
"Integrated Model",
"Model Learning",
"Conclusion and Future Work"
]
}
|
GEM-SciDuet-train-48#paper-1077#slide-9
|
Frame semantic parsing for utterances
|
can i have a cheap restaurant Good!
FT LU: can FE LU: i
FT LU: cheap Frame: locale by use
FT: Frame Target; FE: Frame Element; LU: Lexical Unit
1st Issue: adapting generic frames to domain-specific settings for SDSs
|
can i have a cheap restaurant Good!
FT LU: can FE LU: i
FT LU: cheap Frame: locale by use
FT: Frame Target; FE: Frame Element; LU: Lexical Unit
1st Issue: adapting generic frames to domain-specific settings for SDSs
|
[] |
GEM-SciDuet-train-48#paper-1077#slide-10
|
1077
|
Unsupervised Learning and Modeling of Knowledge and Intent for Spoken Dialogue Systems
|
Spoken dialogue systems (SDS) are rapidly appearing in various smart devices (smartphone, smart-TV, in-car navigating system, etc). The key role in a successful SDS is a spoken language understanding (SLU) component, which parses user utterances into semantic concepts in order to understand users' intentions. However, such semantic concepts and their structure are manually created by experts, and the annotation process results in extremely high cost and poor scalability in system development. Therefore, the dissertation focuses on improving SDS generalization and scalability by automatically inferring domain knowledge and learning structures from unlabeled conversations through a matrix factorization (MF) technique. With the automatically acquired semantic concepts and structures, we further investigate whether such information can be utilized to effectively understand user utterances and then show the feasibility of reducing human effort during SDS development.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113
],
"paper_content_text": [
"Introduction Various smart devices (e.g.",
"smartphone, smart-TV, in-car navigating system) are incorporating spoken language interfaces, a.k.a.",
"spoken dialogue systems (SDS), in order to help users finish tasks more efficiently.",
"The key role in a successful SDS is a spoken language understanding (SLU) component; in order to capture the language variation from dialogue participants, the SLU component must create a mapping between the natural language inputs and semantic representations that correspond to users' intentions.",
"The semantic representation must include \"concepts' and a \"structure\": concepts are the domain-specific topics, and the structure describes the relations between concepts and conveys intentions.",
"However, most prior work focused on learning the mapping between utterances and semantic representations, where such knowledge still remains predefined.",
"The need of annotations results in extremely high cost and poor scalability in system development.",
"Therefore, current technology usually limits conversational interactions to a few narrow predefined domains/topics.",
"With the increasing conversational interactions, this dissertation focuses on improving generalization and scalability of building SDSs with little human effort.",
"In order to achieve the goal, two questions need to be addressed: 1) Given unlabelled conversations, how can a system automatically induce and organize the domain-specific concepts?",
"2) With the automatically acquired knowledge, how can a system understand user utterances and intents?",
"To tackle the above problems, we propose to acquire the domain knowledge that captures human's semantics, intents, and behaviors.",
"Then based on the acquired knowledge, we build an SLU component to understand users and to offer better interactions in dialogues.",
"The dissertation shows the feasibility of building a dialogue learning system that is able to understand how particular domains work based on unlabeled conversations.",
"As a result, an initial SDS can be automatically built according to the learned knowledge, and its performance can be quickly improved by interacting with users for practical usage, presenting the potential of reducing human effort for SDS development.",
"Our MF method completes a partially-missing matrix for semantic decoding/behavior prediction.",
"Dark circles are observed facts, shaded circles are inferred facts.",
"The ontology induction maps observed feature patterns to semantic concepts.",
"The feature relation model constructs correlations between observed feature patterns.",
"The concept relation model learns the high-level semantic correlations for inferring hidden semantic slots or predicting subsequent behaviors.",
"Reasoning with matrix factorization incorporates these models jointly, and produces a coherent and domain-specific SLU model.",
"the intent detection problem in SLU, showing that results obtained from the unsupervised training process align well with the performance of traditional supervised learning.",
"Following their success of unsupervised SLU, recent studies have also obtained interesting results on the tasks of relation detection (Hakkani-Tür et al., 2013; Chen et al., 2014a) , entity extraction (Wang et al., 2014) , and extending domain coverage (El-Kahky et al., 2014; Chen and Rudnicky, 2014) .",
"However, most studies above do not explicitly learn latent factor representations from the data-while we hypothesize that the better robustness can be achieved by explicitly modeling the measurement errors (usually produced by automatic speech recognizers (ASR)) using latent variable models and taking additional local and global semantic constraints into account.",
"Latent Variable Modeling in SLU Early studies on latent variable modeling in speech included the classic hidden Markov model for statistical speech recognition (Jelinek, 1997) .",
"Recently, were the first to study the intent detection problem using query logs and a discrete Bayesian latent variable model.",
"In the field of dialogue modeling, the partially observable Markov decision process (POMDP) (Young et al., 2013) model is a popular technique for dialogue management, reducing the cost of handcrafted dialogue managers while producing robustness against speech recognition errors.",
"More recently, used a semi-supervised LDA model to show improvement on the slot filling task.",
"Also, Zhai and Williams (2014) proposed an unsupervised model for connecting words with latent states in HMMs using topic models, obtaining interesting qualitative and quantitative results.",
"However, for unsupervised SLU, it is not obvious how to incorporate additional information in the HMMs.",
"With increasing works about learn-ing the feature matrices for language representations (Mikolov et al., 2013) , matrix factorization (MF) has become very popular for both implicit and explicit feedback (Rendle et al., 2009; Chen et al., 2015a) .",
"This thesis proposal is the first to propose a framework about unsupervised SLU modeling, which is able to simultaneously consider various local and global knowledge automatically learned from unlabelled data using a matrix factorization (MF) technique.",
"The Proposed Work The proposed framework is shown in Figure 1(a) , where there are two main parts, one is knowledge acquisition and another is SLU modeling by MF.",
"The first part is to acquire the domain knowledge that is useful for building the domain-specific dialogue systems, which addresses the question about how to induce and organize the semantic concepts (the first problem).",
"Here we propose ontology induction and structure learning procedures.",
"The ontology induction refers to the semantic concept induction (yellow block) and the structure learning refers to relation models (blue and pink blocks) in Figure 1 (a).",
"The details are described in Section 4.",
"The second part is to self-train an SLU component using the acquired knowledge for the domain-specific SDS, and this part answers the question of how to utilize the obtained information in SDSs to understand user utterances and intents.",
"There are two aspects of SLU modeling: semantic decoding and behavior prediction.",
"The semantic decoding is to parse the input utterances into semantic forms for better understanding, and the behavior prediction is to predict the subsequent user behaviors for providing better system interactions.",
"This dissertation plans to apply MF techniques to unsupervised SLU modeling, including both semantic decoding and behavior prediction.",
"In the proposed model, we first build a feature matrix to represent training utterances, where each row refers to an utterance and each column refers to an observed feature pattern or a learned semantic concept (either a slot or a behavior).",
"For example, an utterance mentioning chinese food implies the meaning facet food.",
"The MF approach is able to learn the latent feature vectors for utterances and semantic concepts, inferring implicit semantics to improve the decoding process-namely, by filling the matrix with probabilities (lower part of the matrix in Figure 1(b) ).",
"The feature model is built on the observed feature patterns and the learned concepts, where the concepts are obtained from the knowledge acquisition process (Chen et al., 2015b).",
"Section 5.1 explains the detail of the feature model.",
"In order to consider the additional structure information, we propose a relation propagation model based on the learned structure, which includes a feature relation model (blue block) and a concept relation model (pink block) described in Section 5.2.",
"Finally we train an SLU model by learning latent feature vectors for utterances and slots/behaviors through MF techniques.",
"Combining with a relation propagation model, the trained SLU model is able to estimate the probability that each concept occurs in the testing utterance, and how likely each concept is domain-specific simultaneously.",
"In other words, the SLU model is able to transform testing utterances into domainspecific semantic representations or predicted behaviors without human involvement.",
"Knowledge Acquisition Given unlabeled conversations and available knowledge resources, we plan to extract organized knowledge that can be used for domain-specific SDSs.",
"The ontology induction and structure learning are proposed to automate an ontology building process.",
"Chen et al. (2014b) proposed to automatically induce semantic slots for SDSs by frame-semantic parsing, where all ASR-decoded utterances are parsed using SEMAFOR, a state-of-the-art frame-semantic parser (Das et al., 2010; Das et al., 2013), and then all frames from parsed results are extracted as slot candidates (Dinarelli et al., 2009).",
"For example, Figure 2 shows an example of an ASR-decoded text output parsed by SEMAFOR.",
"There are three frames (capability, expensiveness, and locale by use) in the utterance, which we consider as slot candidates.",
"Ontology Induction Since SEMAFOR was trained on FrameNet annotation, which has a more generic frame-semantic context, not all the frames from the parsing results can be used as the actual slots in the domain-specific dialogue systems.",
"For instance, in Figure 2 , \"expensiveness\" and \"locale by use\" frames are essentially the key slots for the purpose of understanding in the restaurant query domain, whereas the \"capability\" frame does not convey particularly valuable information for the domain-specific SDS.",
"In order to fix this issue, Chen et al.",
"(2014b) proved that integrating continuous-valued word embeddings with a probabilistic frame-semantic parser is able to identify key semantic slots in an unsupervised fashion, reducing the cost of designing task-oriented SDSs.",
"Structure Learning A key challenge of designing a coherent semantic ontology for SLU is to consider the structure and relations between semantic concepts.",
"In practice, however, it is difficult for domain experts and professional annotators to define a coherent slot set, while considering various lexical, syntactic, and semantic dependencies.",
"The previous work exploited the typed syntactic dependency theory for unsupervised induction and organization of semantic slots in SDSs (Chen et al., 2015b) .",
"More specifically, two knowledge graphs, a slot-based semantic knowledge graph and a word-based lexical knowledge graph, are automatically constructed.",
"To jointly consider the word-to-word, word-to-slot, and slot-to-slot relations, we use a random walk inference algorithm to combine these two knowledge graphs, guided by dependency grammars.",
"Figure 3 is a simplified example of the automatically built semantic knowledge graph corresponding to the restaurant domain.",
"The experiments showed that considering inter-slot relations is crucial for generating a more coherent and complete slot set, resulting in a better SLU model, while enhancing the interpretability of semantic slots.",
"SLU Modeling by Matrix Factorization For two aspects of SLU modeling: semantic decoding and behavior prediction, we plan to apply MF to both tasks by treating learned concepts as semantic slots and human behaviors respectively.",
"Considering the benefits brought by MF techniques, including 1) modeling the noisy data, 2) modeling hidden information, and 3) modeling the dependency between observations, the dissertation applies an MF approach to SLU modeling for SDSs.",
"In our model, we use U to denote the set of input utterances, F as the set of observed feature patterns, and S as the set of semantic concepts we would like to predict (slots or human behaviors).",
"The pair of an utterance u ∈ U and a feature/concept x ∈ {F + S}, ⟨u, x⟩, is a fact.",
"The input to our model is a set of observed facts O, and the observed facts for a given utterance are denoted by {⟨u, x⟩ ∈ O}.",
"The goal of our model is to estimate, for a given utterance u and a given feature pattern/concept x, the probability p(M_{u,x} = 1), where M_{u,x} is a binary random variable that is true if and only if x is the feature pattern/domain-specific concept in the utterance u.",
"We introduce a series of exponential family models that estimate the probability using a natural parameter θ_{u,x} and the logistic sigmoid function: p(M_{u,x} = 1 | θ_{u,x}) = σ(θ_{u,x}) = 1 / (1 + exp(−θ_{u,x})). (1)",
"We construct a matrix M of size |U| × (|F| + |S|) as observed facts for MF by integrating a feature model and a relation propagation model below.",
"Feature Model First, we build a binary feature pattern matrix F f based on the observations, where each row refers to an utterance and each column refers to a feature pattern (a word or a phrase).",
"In other words, F f carries the basic word/phrase vector for each utterance, which is illustrated as the left part of the matrix in Figure 1(b) .",
"Then we build a binary matrix F s based on the induced semantic concepts from Section 4.1, which also denotes the slot/behavior features for all utterances (right part of the matrix in Figure 1(b) ).",
"For building the feature model M_F, we concatenate the two matrices and obtain M_F = [F_f F_s], which refers to the upper part of the matrix in Figure 1(b) for training utterances.",
"Relation Propagation Model It is shown that the structure of semantic concepts helps decide domain-specific slots and further improves the SLU performance (Chen et al., 2015b) .",
"With the learned structure from Section 4.2, we can model the relations between semantic concepts, such as inter-slot and inter-behavior relations.",
"Also, the relations between feature patterns can be modeled in a similar way.",
"We construct two knowledge graphs to model the structure: • Feature knowledge graph: G_f = ⟨V_f, E_ff⟩, where V_f = {f_i ∈ F} and E_ff = {e_ij | f_i, f_j ∈ V_f}.",
"• Semantic concept knowledge graph: G_s = ⟨V_s, E_ss⟩, where V_s = {s_i ∈ S} and E_ss = {e_ij | s_i, s_j ∈ V_s}.",
"The structured graph can model the relation between a connected node pair (x_i, x_j) as r(x_i, x_j).",
"Here we compute two matrices R_s = [r(s_i, s_j)]_{|S|×|S|} and R_f = [r(f_i, f_j)]_{|F|×|F|} to represent concept relations and feature relations respectively.",
"With the built relation models, we combine them into a relation propagation matrix M_R: M_R = [R_f 0; 0 R_s]. (2)",
"The goal of this matrix is to propagate scores between nodes according to different types of relations in the constructed knowledge graphs (Chen and Metze, 2012).",
"Integrated Model With a feature model M_F and a relation propagation model M_R, we integrate them into a single matrix.",
"M = M_F · (M_R + I) = [F_f R_f + F_f, F_s R_s + F_s], (3) where M is the final matrix and I is the identity matrix in order to retain the original values.",
"The matrix M is similar to M F , but some weights are enhanced through the relation propagation model.",
"The feature relations are built by F f R f , which is the matrix with internal weight propagation on the feature knowledge graph (the blue arrow in Figure 1(b) ).",
"Similarly, F s R s models the semantic concept correlations, and can be treated as the matrix with internal weight propagation on the semantic concept knowledge graph (the pink arrow in Figure 1(b) ).",
"The propagation model can be treated as running a random walk algorithm on the graphs.",
"By integrating with the relation propagation model, the relations can be propagated via the knowledge graphs, and the hidden information may be modeled based on the assumption that mutual relations usually help inference (Chen et al., 2015b) .",
"Hence, the structure information can be automatically involved in the matrix.",
"In conclusion, for each utterance, the integrated model not only predicts the probabilities that semantic concepts occur but also considers whether they are domain-specific.",
"Model Learning The proposed model is parameterized through weights and latent component vectors, where the parameters are estimated by maximizing the log likelihood of observed data (Collins et al., 2001) .",
"θ* = arg max_θ ∏_{u∈U} p(θ | M_u) = arg max_θ ∏_{u∈U} p(M_u | θ) p(θ) = arg max_θ ∑_{u∈U} ln p(M_u | θ) − λ_θ, (4) where M_u is the vector corresponding to the utterance u from M_{u,x} in (1), because we assume that each utterance is independent of others.",
"To avoid treating unobserved facts as designed negative facts, we consider our positive-only data as implicit feedback.",
"Bayesian Personalized Ranking (BPR) is an optimization criterion that learns from implicit feedback for MF, which uses a variant of the ranking: giving observed true facts higher scores than unobserved (true or false) facts (Rendle et al., 2009 ).",
"Riedel et al.",
"(2013) also showed that BPR learns the implicit relations and improves a relation extraction task.",
"To estimate the parameters in (4), we create a dataset of ranked pairs from M in (3): for each utterance u and each observed fact f⁺ = ⟨u, x⁺⟩, where M_{u,x⁺} ≥ δ, we choose each semantic concept x⁻ such that f⁻ = ⟨u, x⁻⟩, where M_{u,x⁻} < δ, which refers to a semantic concept we have not observed in utterance u.",
"That is, we construct the observed data O from M.",
"Then for each pair of facts f⁺ and f⁻, we want to model p(f⁺) > p(f⁻) and hence θ_{f⁺} > θ_{f⁻} according to (1).",
"BPR maximizes the summation over ranked pairs, where the objective is ∑_{u∈U} ln p(M_u | θ) = ∑_{f⁺∈O} ∑_{f⁻∉O} ln σ(θ_{f⁺} − θ_{f⁻}). (5)",
"The BPR objective is an approximation to the per-utterance AUC (area under the ROC curve), which directly correlates to what we want to achieve: well-ranked semantic concepts per utterance, which denotes a better estimation of semantic slots or human behaviors.",
"To maximize the objective in (5), we employ a stochastic gradient descent (SGD) algorithm (Rendle et al., 2009) .",
"For each randomly sampled observed fact ⟨u, x⁺⟩, we sample an unobserved fact ⟨u, x⁻⟩, which results in |O| fact pairs ⟨f⁻, f⁺⟩.",
"For each pair, we perform an SGD update using the gradient of the corresponding objective function for matrix factorization (Gantner et al., 2011) .",
"Conclusion and Future Work This thesis proposal proposes an unsupervised SLU approach by automating the dialogue learning process on speech conversations.",
"The preliminary results show that for the automatic speech recognition (ASR) transcripts (word error rate is about 37%), the acquired knowledge can be successfully applied to SLU modeling through MF techniques, guiding the direction of the methodology.",
"The main planned tasks include: • Semantic concept identification • Semantic concept annotation • SLU modeling by matrix factorization. In this thesis proposal, ongoing work and future plans have been presented towards an automatically built domain-specific SDS.",
"With increasing semantic resources, such as Google's Knowledge Graph and Microsoft Satori, the dissertation shows the feasibility that utilizing available knowledge improves the generalization and the scalability of dialogue system development for practical usage."
]
}
|
{
"paper_header_number": [
"1",
"3",
"4",
"4.1",
"4.2",
"5",
"5.1",
"5.2",
"5.3",
"5.4",
"6"
],
"paper_header_content": [
"Introduction",
"The Proposed Work",
"Knowledge Acquisition",
"Ontology Induction",
"Structure Learning",
"SLU Modeling by Matrix Factorization",
"Feature Model",
"Relation Propagation Model",
"Integrated Model",
"Model Learning",
"Conclusion and Future Work"
]
}
|
GEM-SciDuet-train-48#paper-1077#slide-10
|
Knowledge graph propagation model
|
1ST ISSUE: HOW TO ADAPT GENERIC SLOTS TO DOMAIN-SPECIFIC SETTING?
Assumption: The domain-specific words/slots have more dependency on each other.
Word Observation Slot Candidate i like cheap food restaurant expensiveness food locale_by_use capability Utterance 1 i would like a cheap restaurant Train
Utterance 2 find a restaurant with chinese food
Test Utterance show me a list of cheap restaurants Test
slot relation matrix Word Relation Model Slot Induction Slot Relation Model
Relation matrices allow each node to propagate scores to its neighbors in the knowledge graph, so that domain-specific words/slots have higher scores after matrix multiplication.
|
1ST ISSUE: HOW TO ADAPT GENERIC SLOTS TO DOMAIN-SPECIFIC SETTING?
Assumption: The domain-specific words/slots have more dependency on each other.
Word Observation Slot Candidate i like cheap food restaurant expensiveness food locale_by_use capability Utterance 1 i would like a cheap restaurant Train
Utterance 2 find a restaurant with chinese food
Test Utterance show me a list of cheap restaurants Test
slot relation matrix Word Relation Model Slot Induction Slot Relation Model
Relation matrices allow each node to propagate scores to its neighbors in the knowledge graph, so that domain-specific words/slots have higher scores after matrix multiplication.
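The propagation step on this slide can be illustrated with toy numbers; the vocabulary, binary encodings, and 0.5 relation weights below are made-up assumptions for illustration, not values from the paper:

```python
import numpy as np

# Toy sketch of the slide's propagation step: M = M_F . (M_R + I).
# Columns: words [cheap, restaurant, food], then slots [expensiveness, locale_by_use].
F_f = np.array([[1., 1., 0.],   # "i would like a cheap restaurant"
                [0., 1., 1.]])  # "find a restaurant with chinese food"
F_s = np.array([[1., 1.],       # induced slots for utterance 1
                [0., 1.]])      # induced slots for utterance 2

# Relation matrices: nonzero weights link words/slots that depend on each
# other in the knowledge graph (the 0.5 values are illustrative).
R_f = np.array([[0., .5, 0.],
                [.5, 0., .5],
                [0., .5, 0.]])
R_s = np.array([[0., .5],
                [.5, 0.]])

M_F = np.hstack([F_f, F_s])                # word/slot observations
M_R = np.block([[R_f, np.zeros((3, 2))],   # block-diagonal relation
                [np.zeros((2, 3)), R_s]])  # propagation matrix

# Each column's score absorbs its graph neighbors' scores, so
# domain-specific words/slots end up with higher values.
M = M_F @ (M_R + np.eye(5))
```

After the multiplication, "restaurant" in utterance 1 rises from 1.0 to 1.5, and "food", unobserved in utterance 1, receives a nonzero score through its neighbor "restaurant", matching the slide's claim that relation matrices let nodes propagate scores to their neighbors.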
|
[] |
GEM-SciDuet-train-48#paper-1077#slide-11
|
1077
|
Unsupervised Learning and Modeling of Knowledge and Intent for Spoken Dialogue Systems
|
Spoken dialogue systems (SDS) are rapidly appearing in various smart devices (smartphone, smart-TV, in-car navigating system, etc). The key role in a successful SDS is a spoken language understanding (SLU) component, which parses user utterances into semantic concepts in order to understand users' intentions. However, such semantic concepts and their structure are manually created by experts, and the annotation process results in extremely high cost and poor scalability in system development. Therefore, the dissertation focuses on improving SDS generalization and scalability by automatically inferring domain knowledge and learning structures from unlabeled conversations through a matrix factorization (MF) technique. With the automatically acquired semantic concepts and structures, we further investigate whether such information can be utilized to effectively understand user utterances and then show the feasibility of reducing human effort during SDS development.
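The matrix factorization technique mentioned in the abstract can be sketched minimally. The toy matrix, rank, and learning rate below are illustrative assumptions; the sketch only shows how latent factors fill in a partially-missing utterance-by-concept matrix:

```python
import numpy as np

# Toy sketch of MF for SLU: rows are utterances, columns are semantic
# concepts; nan marks an unobserved fact to be inferred.
M = np.array([[1., 0., np.nan],
              [1., 0., 1.]])
observed = ~np.isnan(M)

k = 1                         # latent dimensionality (toy choice)
U = np.full((2, k), 0.1)      # utterance latent factors
V = np.full((3, k), 0.1)      # concept latent factors

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr = 0.5
for _ in range(2000):
    P = sigmoid(U @ V.T)                 # p(M_ux = 1) for every cell
    G = np.where(observed, M - P, 0.0)   # log-likelihood gradient, observed cells only
    U, V = U + lr * G @ V, V + lr * G.T @ U

P = sigmoid(U @ V.T)
# Row 0 resembles row 1 on the observed columns, so the missing cell
# P[0, 2] is inferred to be likely true.
```

This mirrors the dissertation's idea at a cartoon scale: observed facts constrain the latent vectors, and the low-rank structure generalizes those constraints to the unobserved cells.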
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113
],
"paper_content_text": [
"Introduction Various smart devices (e.g.",
"smartphone, smart-TV, in-car navigating system) are incorporating spoken language interfaces, a.k.a.",
"spoken dialogue systems (SDS), in order to help users finish tasks more efficiently.",
"The key role in a successful SDS is a spoken language understanding (SLU) component; in order to capture the language variation from dialogue participants, the SLU component must create a mapping between the natural language inputs and semantic representations that correspond to users' intentions.",
"The semantic representation must include \"concepts' and a \"structure\": concepts are the domain-specific topics, and the structure describes the relations between concepts and conveys intentions.",
"However, most prior work focused on learning the mapping between utterances and semantic representations, where such knowledge still remains predefined.",
"The need of annotations results in extremely high cost and poor scalability in system development.",
"Therefore, current technology usually limits conversational interactions to a few narrow predefined domains/topics.",
"With the increasing conversational interactions, this dissertation focuses on improving generalization and scalability of building SDSs with little human effort.",
"In order to achieve the goal, two questions need to be addressed: 1) Given unlabelled conversations, how can a system automatically induce and organize the domain-specific concepts?",
"2) With the automatically acquired knowledge, how can a system understand user utterances and intents?",
"To tackle the above problems, we propose to acquire the domain knowledge that captures human's semantics, intents, and behaviors.",
"Then based on the acquired knowledge, we build an SLU component to understand users and to offer better interactions in dialogues.",
"The dissertation shows the feasibility of building a dialogue learning system that is able to understand how particular domains work based on unlabeled conversations.",
"As a result, an initial SDS can be automatically built according to the learned knowledge, and its performance can be quickly improved by interacting with users for practical usage, presenting the potential of reducing human effort for SDS development.",
"Our MF method completes a partially-missing matrix for semantic decoding/behavior prediction.",
"Dark circles are observed facts, shaded circles are inferred facts.",
"The ontology induction maps observed feature patterns to semantic concepts.",
"The feature relation model constructs correlations between observed feature patterns.",
"The concept relation model learns the high-level semantic correlations for inferring hidden semantic slots or predicting subsequent behaviors.",
"Reasoning with matrix factorization incorporates these models jointly, and produces a coherent and domain-specific SLU model.",
"Prior work studied the intent detection problem in SLU, showing that results obtained from the unsupervised training process align well with the performance of traditional supervised learning.",
"Following this success of unsupervised SLU, recent studies have also obtained interesting results on the tasks of relation detection (Hakkani-Tür et al., 2013; Chen et al., 2014a), entity extraction (Wang et al., 2014), and extending domain coverage (El-Kahky et al., 2014; Chen and Rudnicky, 2014).",
"However, most studies above do not explicitly learn latent factor representations from the data-while we hypothesize that the better robustness can be achieved by explicitly modeling the measurement errors (usually produced by automatic speech recognizers (ASR)) using latent variable models and taking additional local and global semantic constraints into account.",
"Latent Variable Modeling in SLU Early studies on latent variable modeling in speech included the classic hidden Markov model for statistical speech recognition (Jelinek, 1997) .",
"Recently, the intent detection problem was first studied using query logs and a discrete Bayesian latent variable model.",
"In the field of dialogue modeling, the partially observable Markov decision process (POMDP) (Young et al., 2013) model is a popular technique for dialogue management, reducing the cost of handcrafted dialogue managers while producing robustness against speech recognition errors.",
"More recently, a semi-supervised LDA model was used to show improvement on the slot filling task.",
"Also, Zhai and Williams (2014) proposed an unsupervised model for connecting words with latent states in HMMs using topic models, obtaining interesting qualitative and quantitative results.",
"However, for unsupervised SLU, it is not obvious how to incorporate additional information in the HMMs.",
"With increasing work on learning feature matrices for language representations (Mikolov et al., 2013), matrix factorization (MF) has become very popular for both implicit and explicit feedback (Rendle et al., 2009; Chen et al., 2015a).",
"This thesis proposal is the first to propose a framework for unsupervised SLU modeling, which is able to simultaneously consider various local and global knowledge automatically learned from unlabelled data using a matrix factorization (MF) technique.",
"The Proposed Work The proposed framework is shown in Figure 1(a) , where there are two main parts, one is knowledge acquisition and another is SLU modeling by MF.",
"The first part is to acquire the domain knowledge that is useful for building the domain-specific dialogue systems, which addresses the question about how to induce and organize the semantic concepts (the first problem).",
"Here we propose ontology induction and structure learning procedures.",
"The ontology induction refers to the semantic concept induction (yellow block) and the structure learning refers to relation models (blue and pink blocks) in Figure 1 (a).",
"The details are described in Section 4.",
"The second part is to self-train an SLU component using the acquired knowledge for the domain-specific SDS, and this part answers the question of how to utilize the obtained information in SDSs to understand user utterances and intents.",
"There are two aspects of SLU modeling: semantic decoding and behavior prediction.",
"The semantic decoding is to parse the input utterances into semantic forms for better understanding, and the behavior prediction is to predict the subsequent user behaviors for providing better system interactions.",
"This dissertation plans to apply MF techniques to unsupervised SLU modeling, including both semantic decoding and behavior prediction.",
"In the proposed model, we first build a feature matrix to represent training utterances, where each row refers to an utterance and each column refers to an observed feature pattern or a learned semantic concept (either a slot or a behavior).",
"For example, an utterance mentioning chinese food implies the meaning facet food.",
"The MF approach is able to learn the latent feature vectors for utterances and semantic concepts, inferring implicit semantics to improve the decoding process-namely, by filling the matrix with probabilities (lower part of the matrix in Figure 1(b) ).",
"The feature model is built on the observed feature patterns and the learned concepts, where the concepts are obtained from the knowledge acquisition process (Chen et al., 2015b).",
"Section 5.1 explains the detail of the feature model.",
"In order to consider the additional structure information, we propose a relation propagation model based on the learned structure, which includes a feature relation model (blue block) and a concept relation model (pink block) described in Section 5.2.",
"Finally we train an SLU model by learning latent feature vectors for utterances and slots/behaviors through MF techniques.",
"Combining with a relation propagation model, the trained SLU model is able to estimate the probability that each concept occurs in the testing utterance, and how likely each concept is domain-specific simultaneously.",
"In other words, the SLU model is able to transform testing utterances into domainspecific semantic representations or predicted behaviors without human involvement.",
"Knowledge Acquisition Given unlabeled conversations and available knowledge resources, we plan to extract organized knowledge that can be used for domain-specific SDSs.",
"The ontology induction and structure learning are proposed to automate an ontology building process.",
"Chen et al. (2014b) proposed to automatically induce semantic slots for SDSs by frame-semantic parsing, where all ASR-decoded utterances are parsed using SEMAFOR, a state-of-the-art frame-semantic parser (Das et al., 2010; Das et al., 2013), and then all frames from parsed results are extracted as slot candidates (Dinarelli et al., 2009).",
"For example, Figure 2 shows an example of an ASR-decoded text output parsed by SEMAFOR.",
"There are three frames (capability, expensiveness, and locale by use) in the utterance, which we consider as slot candidates.",
"Ontology Induction Since SEMAFOR was trained on FrameNet annotation, which has a more generic frame-semantic context, not all the frames from the parsing results can be used as the actual slots in the domain-specific dialogue systems.",
"For instance, in Figure 2 , \"expensiveness\" and \"locale by use\" frames are essentially the key slots for the purpose of understanding in the restaurant query domain, whereas the \"capability\" frame does not convey particularly valuable information for the domain-specific SDS.",
"In order to fix this issue, Chen et al.",
"(2014b) proved that integrating continuous-valued word embeddings with a probabilistic frame-semantic parser is able to identify key semantic slots in an unsupervised fashion, reducing the cost of designing task-oriented SDSs.",
"Structure Learning A key challenge of designing a coherent semantic ontology for SLU is to consider the structure and relations between semantic concepts.",
"In practice, however, it is difficult for domain experts and professional annotators to define a coherent slot set, while considering various lexical, syntactic, and semantic dependencies.",
"The previous work exploited the typed syntactic dependency theory for unsupervised induction and organization of semantic slots in SDSs (Chen et al., 2015b) .",
"More specifically, two knowledge graphs, a slot-based semantic knowledge graph and a word-based lexical knowledge graph, are automatically constructed.",
"To jointly consider the word-to-word, word-to-slot, and slot-to-slot relations, we use a random walk inference algorithm to combine these two knowledge graphs, guided by dependency grammars.",
"Figure 3 is a simplified example of the automatically built semantic knowledge graph corresponding to the restaurant domain.",
"The experiments showed that considering inter-slot relations is crucial for generating a more coherent and complete slot set, resulting in a better SLU model, while enhancing the interpretability of semantic slots.",
"SLU Modeling by Matrix Factorization For two aspects of SLU modeling: semantic decoding and behavior prediction, we plan to apply MF to both tasks by treating learned concepts as semantic slots and human behaviors respectively.",
"Considering the benefits brought by MF techniques, including 1) modeling the noisy data, 2) modeling hidden information, and 3) modeling the dependency between observations, the dissertation applies an MF approach to SLU modeling for SDSs.",
"In our model, we use U to denote the set of input utterances, F as the set of observed feature patterns, and S as the set of semantic concepts we would like to predict (slots or human behaviors).",
"The pair of an utterance u ∈ U and a feature/concept x ∈ {F + S}, ⟨u, x⟩, is a fact.",
"The input to our model is a set of observed facts O, and the observed facts for a given utterance are denoted by {⟨u, x⟩ ∈ O}.",
"The goal of our model is to estimate, for a given utterance u and a given feature pattern/concept x, the probability p(M_{u,x} = 1), where M_{u,x} is a binary random variable that is true if and only if x is the feature pattern/domain-specific concept in the utterance u.",
"We introduce a series of exponential family models that estimate the probability using a natural parameter θ_{u,x} and the logistic sigmoid function: p(M_{u,x} = 1 | θ_{u,x}) = σ(θ_{u,x}) = 1 / (1 + exp(−θ_{u,x})). (1)",
"We construct a matrix M of size |U| × (|F| + |S|) as observed facts for MF by integrating a feature model and a relation propagation model below.",
"Feature Model First, we build a binary feature pattern matrix F f based on the observations, where each row refers to an utterance and each column refers to a feature pattern (a word or a phrase).",
"In other words, F f carries the basic word/phrase vector for each utterance, which is illustrated as the left part of the matrix in Figure 1(b) .",
"Then we build a binary matrix F s based on the induced semantic concepts from Section 4.1, which also denotes the slot/behavior features for all utterances (right part of the matrix in Figure 1(b) ).",
"For building the feature model M_F, we concatenate the two matrices and obtain M_F = [F_f F_s], which refers to the upper part of the matrix in Figure 1(b) for training utterances.",
"Relation Propagation Model It is shown that the structure of semantic concepts helps decide domain-specific slots and further improves the SLU performance (Chen et al., 2015b) .",
"With the learned structure from Section 4.2, we can model the relations between semantic concepts, such as inter-slot and inter-behavior relations.",
"Also, the relations between feature patterns can be modeled in a similar way.",
"We construct two knowledge graphs to model the structure: • Feature knowledge graph: G_f = ⟨V_f, E_ff⟩, where V_f = {f_i ∈ F} and E_ff = {e_ij | f_i, f_j ∈ V_f}.",
"• Semantic concept knowledge graph: G_s = ⟨V_s, E_ss⟩, where V_s = {s_i ∈ S} and E_ss = {e_ij | s_i, s_j ∈ V_s}.",
"The structured graph can model the relation between a connected node pair (x_i, x_j) as r(x_i, x_j).",
"Here we compute two matrices R_s = [r(s_i, s_j)]_{|S|×|S|} and R_f = [r(f_i, f_j)]_{|F|×|F|} to represent concept relations and feature relations respectively.",
"With the built relation models, we combine them into a relation propagation matrix M_R: M_R = [R_f 0; 0 R_s]. (2)",
"The goal of this matrix is to propagate scores between nodes according to different types of relations in the constructed knowledge graphs (Chen and Metze, 2012).",
"Integrated Model With a feature model M_F and a relation propagation model M_R, we integrate them into a single matrix.",
"M = M_F · (M_R + I) = [F_f R_f + F_f, F_s R_s + F_s], (3) where M is the final matrix and I is the identity matrix in order to retain the original values.",
"The matrix M is similar to M F , but some weights are enhanced through the relation propagation model.",
"The feature relations are built by F f R f , which is the matrix with internal weight propagation on the feature knowledge graph (the blue arrow in Figure 1(b) ).",
"Similarly, F s R s models the semantic concept correlations, and can be treated as the matrix with internal weight propagation on the semantic concept knowledge graph (the pink arrow in Figure 1(b) ).",
"The propagation model can be treated as running a random walk algorithm on the graphs.",
"By integrating with the relation propagation model, the relations can be propagated via the knowledge graphs, and the hidden information may be modeled based on the assumption that mutual relations usually help inference (Chen et al., 2015b) .",
"Hence, the structure information can be automatically involved in the matrix.",
"In conclusion, for each utterance, the integrated model not only predicts the probabilities that semantic concepts occur but also considers whether they are domain-specific.",
"Model Learning The proposed model is parameterized through weights and latent component vectors, where the parameters are estimated by maximizing the log likelihood of observed data (Collins et al., 2001) .",
"θ * = arg max θ u∈U p(θ | M u ) (4) = arg max θ u∈U p(M u | θ)p(θ) = arg max θ u∈U ln p(M u | θ) − λ θ , where M u is the vector corresponding to the utterance u from M u,x in (1), because we assume that each utterance is independent of others.",
"To avoid treating unobserved facts as designed negative facts, we consider our positive-only data as implicit feedback.",
"Bayesian Personalized Ranking (BPR) is an optimization criterion that learns from implicit feedback for MF, which uses a variant of the ranking: giving observed true facts higher scores than unobserved (true or false) facts (Rendle et al., 2009 ).",
"Riedel et al.",
"(2013) also showed that BPR learns the implicit relations and improves a relation extraction task.",
"To estimate the parameters in (4), we create a dataset of ranked pairs from M in (3): for each utterance u and each observed fact f + = u, x + , where M u,x ≥ δ, we choose each semantic concept x − such that f − = u, x − , where M u,x < δ, which refers to the semantic concept we have not observed in utterance u.",
"That is, we construct the observed data O from M .",
"Then for each pair of facts f + and f − , we want to model p(f + ) > p(f − ) and hence θ f + > θ f − according to (1).",
"BPR maximizes the summation of each ranked pair, where the objective is u∈U ln p(M u | θ) = f + ∈O f − ∈O ln σ(θ f + − θ f − ).",
"(5) The BPR objective is an approximation to the per utterance AUC (area under the ROC curve), which directly correlates to what we want to achieve -well-ranked semantic concepts per utterance, which denotes the better estimation of semantic slots or human behaviors.",
"To maximize the objective in (5), we employ a stochastic gradient descent (SGD) algorithm (Rendle et al., 2009) .",
"For each randomly sampled observed fact u, x + , we sample an unobserved fact u, x − , which results in |O| fact pairs f − , f + .",
"For each pair, we perform an SGD update using the gradient of the corresponding objective function for matrix factorization (Gantner et al., 2011) .",
"Conclusion and Future Work This thesis proposal proposes an unsupervised SLU approach by automating the dialogue learning process on speech conversations.",
"The preliminary results show that for the automatic speech recognition (ASR) transcripts (word error rate is about 37%), the acquired knowledge can be successfully applied to SLU modeling through MF techniques, guiding the direction of the methodology.",
"The main planed tasks include: • Semantic concept identification • Semantic concept annotation • SLU modeling by matrix factorization In this thesis proposal, ongoing work and future plans have been presented towards an automatically built domain-specific SDS.",
"With increasing semantic resources, such as Google's Knowledge Graph and Microsoft Satori, the dissertation shows the feasibility that utilizing available knowledge improves the generalization and the scalability of dialogue system development for practical usage."
]
}
|
{
"paper_header_number": [
"1",
"3",
"4",
"4.1",
"4.2",
"5",
"5.1",
"5.2",
"5.3",
"5.4",
"6"
],
"paper_header_content": [
"Introduction",
"The Proposed Work",
"Knowledge Acquisition",
"Ontology Induction",
"Structure Learning",
"SLU Modeling by Matrix Factorization",
"Feature Model",
"Relation Propagation Model",
"Integrated Model",
"Model Learning",
"Conclusion and Future Work"
]
}
|
GEM-SciDuet-train-48#paper-1077#slide-11
|
Knowledge graph construction
|
Syntactic dependency parsing on utterances
nsubj dobj det amod
can i have a cheap restaurant
Slot-based semantic knowledge graph capability s locale_by_use expensiveness
can w Word-based lexical knowledge graph
The edge between a node pair is weighted as relation importance to propagate the scores via a relation matrix
How to decide the weights to represent relation importance?
|
Syntactic dependency parsing on utterances
nsubj dobj det amod
can i have a cheap restaurant
Slot-based semantic knowledge graph capability s locale_by_use expensiveness
can w Word-based lexical knowledge graph
The edge between a node pair is weighted as relation importance to propagate the scores via a relation matrix
How to decide the weights to represent relation importance?
|
[] |
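The integrated model described in the record above (Eq. (3), M = M_F · (M_R + I)) can be illustrated with a minimal NumPy sketch. All sizes and relation weights below are random stand-ins chosen for illustration, not values from this dataset:

```python
import numpy as np

rng = np.random.default_rng(0)
n_utt, n_feat, n_slot = 4, 5, 3  # toy sizes (illustrative assumptions)

# Binary feature model M_F = [F_f F_s]: observed word/phrase patterns (F_f)
# and induced semantic concepts (F_s) per utterance.
F_f = rng.integers(0, 2, size=(n_utt, n_feat)).astype(float)
F_s = rng.integers(0, 2, size=(n_utt, n_slot)).astype(float)
M_F = np.hstack([F_f, F_s])

# Relation matrices over features and over concepts (symmetric toy weights).
R_f = rng.random((n_feat, n_feat)); R_f = (R_f + R_f.T) / 2
R_s = rng.random((n_slot, n_slot)); R_s = (R_s + R_s.T) / 2

# Block-diagonal relation propagation matrix M_R = [R_f, 0; 0, R_s].
M_R = np.block([[R_f, np.zeros((n_feat, n_slot))],
                [np.zeros((n_slot, n_feat)), R_s]])

# Integrated model M = M_F (M_R + I): the identity keeps the original
# weights while M_R adds scores propagated along the relation graphs.
M = M_F @ (M_R + np.eye(n_feat + n_slot))

# Consistency with Eq. (3): each column block factors as F R + F.
assert np.allclose(M[:, :n_feat], F_f @ R_f + F_f)
assert np.allclose(M[:, n_feat:], F_s @ R_s + F_s)
```

Because M_R is block-diagonal, propagation never mixes feature columns with concept columns, matching the two separate knowledge graphs in the paper content.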
GEM-SciDuet-train-48#paper-1077#slide-12
|
1077
|
Unsupervised Learning and Modeling of Knowledge and Intent for Spoken Dialogue Systems
|
Spoken dialogue systems (SDS) are rapidly appearing in various smart devices (smartphone, smart-TV, in-car navigating system, etc). The key role in a successful SDS is a spoken language understanding (SLU) component, which parses user utterances into semantic concepts in order to understand users' intentions. However, such semantic concepts and their structure are manually created by experts, and the annotation process results in extremely high cost and poor scalability in system development. Therefore, the dissertation focuses on improving SDS generalization and scalability by automatically inferring domain knowledge and learning structures from unlabeled conversations through a matrix factorization (MF) technique. With the automatically acquired semantic concepts and structures, we further investigate whether such information can be utilized to effectively understand user utterances and then show the feasibility of reducing human effort during SDS development.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113
],
"paper_content_text": [
"Introduction Various smart devices (e.g.",
"smartphone, smart-TV, in-car navigating system) are incorporating spoken language interfaces, a.k.a.",
"spoken dialogue systems (SDS), in order to help users finish tasks more efficiently.",
"The key role in a successful SDS is a spoken language understanding (SLU) component; in order to capture the language variation from dialogue participants, the SLU component must create a mapping between the natural language inputs and semantic representations that correspond to users' intentions.",
"The semantic representation must include \"concepts' and a \"structure\": concepts are the domain-specific topics, and the structure describes the relations between concepts and conveys intentions.",
"However, most prior work focused on learning the mapping between utterances and semantic representations, where such knowledge still remains predefined.",
"The need of annotations results in extremely high cost and poor scalability in system development.",
"Therefore, current technology usually limits conversational interactions to a few narrow predefined domains/topics.",
"With the increasing conversational interactions, this dissertation focuses on improving generalization and scalability of building SDSs with little human effort.",
"In order to achieve the goal, two questions need to be addressed: 1) Given unlabelled conversations, how can a system automatically induce and organize the domain-specific concepts?",
"2) With the automatically acquired knowledge, how can a system understand user utterances and intents?",
"To tackle the above problems, we propose to acquire the domain knowledge that captures human's semantics, intents, and behaviors.",
"Then based on the acquired knowledge, we build an SLU component to understand users and to offer better interactions in dialogues.",
"The dissertation shows the feasibility of building a dialogue learning system that is able to understand how particular domains work based on unlabeled conversations.",
"As a result, an initial SDS can be automatically built according to the learned knowledge, and its performance can be quickly improved by interacting with users for practical usage, presenting the potential of reducing human effort for SDS development.",
"Our MF method completes a partially-missing matrix for semantic decoding/behavior prediction.",
"Dark circles are observed facts, shaded circles are inferred facts.",
"The ontology induction maps observed feature patterns to semantic concepts.",
"The feature relation model constructs correlations between observed feature patterns.",
"The concept relation model learns the high-level semantic correlations for inferring hidden semantic slots or predicting subsequent behaviors.",
"Reasoning with matrix factorization incorporates these models jointly, and produces a coherent and domain-specific SLU model.",
"the intent detection problem in SLU, showing that results obtained from the unsupervised training process align well with the performance of traditional supervised learning.",
"Following their success of unsupervised SLU, recent studies have also obtained interesting results on the tasks of relation detection (Hakkani-Tür et al., 2013; Chen et al., 2014a) , entity extraction (Wang et al., 2014) , and extending domain coverage (El-Kahky et al., 2014; Chen and Rudnicky, 2014) .",
"However, most studies above do not explicitly learn latent factor representations from the data-while we hypothesize that the better robustness can be achieved by explicitly modeling the measurement errors (usually produced by automatic speech recognizers (ASR)) using latent variable models and taking additional local and global semantic constraints into account.",
"Latent Variable Modeling in SLU Early studies on latent variable modeling in speech included the classic hidden Markov model for statistical speech recognition (Jelinek, 1997) .",
"Recently, were the first to study the intent detection problem using query logs and a discrete Bayesian latent variable model.",
"In the field of dialogue modeling, the partially observable Markov decision process (POMDP) (Young et al., 2013) model is a popular technique for dialogue management, reducing the cost of handcrafted dialogue managers while producing robustness against speech recognition errors.",
"More recently, used a semi-supervised LDA model to show improvement on the slot filling task.",
"Also, Zhai and Williams (2014) proposed an unsupervised model for connecting words with latent states in HMMs using topic models, obtaining interesting qualitative and quantitative results.",
"However, for unsupervised SLU, it is not obvious how to incorporate additional information in the HMMs.",
"With increasing works about learn-ing the feature matrices for language representations (Mikolov et al., 2013) , matrix factorization (MF) has become very popular for both implicit and explicit feedback (Rendle et al., 2009; Chen et al., 2015a) .",
"This thesis proposal is the first to propose a framework about unsupervised SLU modeling, which is able to simultaneously consider various local and global knowledge automatically learned from unlabelled data using a matrix factorization (MF) technique.",
"The Proposed Work The proposed framework is shown in Figure 1(a) , where there are two main parts, one is knowledge acquisition and another is SLU modeling by MF.",
"The first part is to acquire the domain knowledge that is useful for building the domain-specific dialogue systems, which addresses the question about how to induce and organize the semantic concepts (the first problem).",
"Here we propose ontology induction and structure learning procedures.",
"The ontology induction refers to the semantic concept induction (yellow block) and the structure learning refers to relation models (blue and pink blocks) in Figure 1 (a).",
"The details are described in Section 4.",
"The second part is to self-train an SLU component using the acquired knowledge for the domainspecific SDS, and this part answers to the question about how to utilize the obtained information in SDSs to understand user utterances and intents.",
"There are two aspects regarding to SLU modeling, semantic decoding and behavior prediction.",
"The semantic decoding is to parse the input utterances into semantic forms for better understanding, and the behavior prediction is to predict the subsequent user behaviors for providing better system interactions.",
"This dissertation plans to apply MF techniques to unsupervised SLU modeling, including both semantic decoding and behavior prediction.",
"In the proposed model, we first build a feature matrix to represent training utterances, where each row refers to an utterance and each column refers to an observed feature pattern or a learned semantic concept (either a slot or a behavior).",
"terance implies the meaning facet food.",
"The MF approach is able to learn the latent feature vectors for utterances and semantic concepts, inferring implicit semantics to improve the decoding process-namely, by filling the matrix with probabilities (lower part of the matrix in Figure 1(b) ).",
"The feature model is built on the observed feature patterns and the learned concepts, where the concepts are obtained from the knowledge acquisition process Chen et al., 2015b) .",
"Section 5.1 explains the detail of the feature model.",
"In order to consider the additional structure information, we propose a relation propagation model based on the learned structure, which includes a feature relation model (blue block) and a concept relation model (pink block) described in Section 5.2.",
"Finally we train an SLU model by learning latent feature vectors for utterances and slots/behaviors through MF techniques.",
"Combining with a relation propagation model, the trained SLU model is able to estimate the probability that each concept occurs in the testing utterance, and how likely each concept is domain-specific simultaneously.",
"In other words, the SLU model is able to transform testing utterances into domainspecific semantic representations or predicted behaviors without human involvement.",
"Knowledge Acquisition Given unlabeled conversations and available knowledge resources, we plan to extract organized knowledge that can be used for domain-specific SDSs.",
"The ontology induction and structure learning are proposed to automate an ontology building process.",
"2014b) proposed to automatically induce semantic slots for SDSs by framesemantic parsing, where all ASR-decoded utter- ances are parsed using SEMAFOR 1 , a state-ofthe-art frame-semantic parser (Das et al., 2010; Das et al., 2013) , and then all frames from parsed results are extracted as slot candidates (Dinarelli et al., 2009) .",
"For example, Figure 2 shows an example of an ASR-decoded text output parsed by SEMAFOR.",
"There are three frames (capability, expensiveness, and locale by use) in the utterance, which we consider as slot candidates.",
"Ontology Induction Since SEMAFOR was trained on FrameNet annotation, which has a more generic framesemantic context, not all the frames from the parsing results can be used as the actual slots in the domain-specific dialogue systems.",
"For instance, in Figure 2 , \"expensiveness\" and \"locale by use\" frames are essentially the key slots for the purpose of understanding in the restaurant query domain, whereas the \"capability\" frame does not convey particularly valuable information for the domain-specific SDS.",
"In order to fix this issue, Chen et al.",
"(2014b) proved that integrating continuous-valued word embeddings with a probabilistic frame-semantic parser is able to identify key semantic slots in an unsupervised fashion, reducing the cost of designing task-oriented SDSs.",
"Structure Learning A key challenge of designing a coherent semantic ontology for SLU is to consider the structure and relations between semantic concepts.",
"In practice, however, it is difficult for domain experts and professional annotators to define a coherent slot set, while considering various lexical, syntactic, and semantic dependencies.",
"The previous work exploited the typed syntactic dependency theory for unsupervised induction and organization of semantic slots in SDSs (Chen et al., 2015b) .",
"More specifically, two knowledge graphs, a slot-based semantic knowledge graph and a word-based lexical knowledge graph, are automatically constructed.",
"To jointly consider the word-to-word, word-to-slot, and slot-to-slot relations, we use a random walk inference algorithm to combine these two knowledge graphs, guided by dependency grammars.",
"Figure 3 is a simplified example of the automatically built semantic knowledge graph corresponding to the restaurant domain.",
"The experiments showed that considering inter-slot relations is crucial for generating a more coherent and compete slot set, resulting in a better SLU model, while enhancing the interpretability of semantic slots.",
"SLU Modeling by Matrix Factorization For two aspects of SLU modeling: semantic decoding and behavior prediction, we plan to apply MF to both tasks by treating learned concepts as semantic slots and human behaviors respectively.",
"Considering the benefits brought by MF techniques, including 1) modeling the noisy data, 2) modeling hidden information, and 3) modeling the dependency between observations, the dissertation applies an MF approach to SLU modeling for SDSs.",
"In our model, we use U to denote the set of input utterances, F as the set of observed feature patterns, and S as the set of semantic concepts we would like to predict (slots or human behaviors).",
"The pair of an utterance u ∈ U and a feature/concept x ∈ {F +S}, u, x , is a fact.",
"The input to our model is a set of observed facts O, and the observed facts for a given utterance is denoted by { u, x ∈ O}.",
"The goal of our model is to estimate, for a given utterance u and a given feature pattern/concept x, the probability, p(M u,x = 1), where M u,x is a binary random variable that is true if and only if x is the feature pattern/domainspecific concept in the utterance u.",
"We introduce a series of exponential family models that estimate the probability using a natural parameter θ u,x and the logistic sigmoid function: p(M u,x = 1 | θ u,x ) = σ(θ u,x ) (1) = 1 1 + exp (−θ u,x ) .",
"We construct a matrix M |U |×(|F |+|S|) as observed facts for MF by integrating a feature model and a relation propagation model below.",
"Feature Model First, we build a binary feature pattern matrix F f based on the observations, where each row refers to an utterance and each column refers to a feature pattern (a word or a phrase).",
"In other words, F f carries the basic word/phrase vector for each utterance, which is illustrated as the left part of the matrix in Figure 1(b) .",
"Then we build a binary matrix F s based on the induced semantic concepts from Section 4.1, which also denotes the slot/behavior features for all utterances (right part of the matrix in Figure 1(b) ).",
"For building the feature model M F , we concatenate two matrices and obtain M F = [ F f F s ], which refers to the upper part of the matrix in Figure 1(b) for training utterances.",
"Relation Propagation Model It is shown that the structure of semantic concepts helps decide domain-specific slots and further improves the SLU performance (Chen et al., 2015b) .",
"With the learned structure from Section 4.2, we can model the relations between semantic concepts, such as inter-slot and inter-behavior relations.",
"Also, the relations between feature patterns can be modeled in the similar way.",
"We construct two knowledge graphs to model the structure: • Feature knowledge graph is built as G f = V f , E f f , where V f = {f i ∈ F } and E f f = {e ij | f i , f j ∈ V f }.",
"• Semantic concept knowledge graph is built as G s = V s , E ss , where V s = {s i ∈ S} and E ss = {e ij | s i , s j ∈ V s }.",
"The structured graph can model the relation between the connected node pair (x i , x j ) as r(x i , x j ).",
"Here we compute two matrices R s = [r(s i , s j )] |S|×|S| and R f = [r(f i , f j )] |F |×|F | to represent concept relations and feature relations respectively.",
"With the built relation models, we combine them as a relation propagation matrix M R 2 : M R = R f 0 0 R s .",
"(2) The goal of this matrix is to propagate scores between nodes according to different types of relations in the constructed knowledge graphs (Chen and Metze, 2012) .",
"Integrated Model With a feature model M F and a relation propagation model M R , we integrate them into a single matrix.",
"M = M F · (M R + I) (3) = F f R f + F f 0 0 F s R s + F s , where M is final matrix and I is the identity matrix in order to remain the original values.",
"The matrix M is similar to M F , but some weights are enhanced through the relation propagation model.",
"The feature relations are built by F f R f , which is the matrix with internal weight propagation on the feature knowledge graph (the blue arrow in Figure 1(b) ).",
"Similarly, F s R s models the semantic concept correlations, and can be treated as the matrix with internal weight propagation on the semantic concept knowledge graph (the pink arrow in Figure 1(b) ).",
"The propagation model can be treated as running a random walk algorithm on the graphs.",
"By integrating with the relation propagation model, the relations can be propagated via the knowledge graphs, and the hidden information may be modeled based on the assumption that mutual relations usually help inference (Chen et al., 2015b) .",
"Hence, the structure information can be automatically involved in the matrix.",
"In conclusion, for each utterance, the integrated model not only predicts the probabilities that semantic concepts occur but also considers whether they are domain-specific.",
"Model Learning The proposed model is parameterized through weights and latent component vectors, where the parameters are estimated by maximizing the log likelihood of observed data (Collins et al., 2001) .",
"θ * = arg max θ u∈U p(θ | M u ) (4) = arg max θ u∈U p(M u | θ)p(θ) = arg max θ u∈U ln p(M u | θ) − λ θ , where M u is the vector corresponding to the utterance u from M u,x in (1), because we assume that each utterance is independent of others.",
"To avoid treating unobserved facts as designed negative facts, we consider our positive-only data as implicit feedback.",
"Bayesian Personalized Ranking (BPR) is an optimization criterion that learns from implicit feedback for MF, which uses a variant of the ranking: giving observed true facts higher scores than unobserved (true or false) facts (Rendle et al., 2009 ).",
"Riedel et al.",
"(2013) also showed that BPR learns the implicit relations and improves a relation extraction task.",
"To estimate the parameters in (4), we create a dataset of ranked pairs from M in (3): for each utterance u and each observed fact f + = u, x + , where M u,x ≥ δ, we choose each semantic concept x − such that f − = u, x − , where M u,x < δ, which refers to the semantic concept we have not observed in utterance u.",
"That is, we construct the observed data O from M .",
"Then for each pair of facts f + and f − , we want to model p(f + ) > p(f − ) and hence θ f + > θ f − according to (1).",
"BPR maximizes the summation of each ranked pair, where the objective is u∈U ln p(M u | θ) = f + ∈O f − ∈O ln σ(θ f + − θ f − ).",
"(5) The BPR objective is an approximation to the per utterance AUC (area under the ROC curve), which directly correlates to what we want to achieve -well-ranked semantic concepts per utterance, which denotes the better estimation of semantic slots or human behaviors.",
"To maximize the objective in (5), we employ a stochastic gradient descent (SGD) algorithm (Rendle et al., 2009) .",
"For each randomly sampled observed fact u, x + , we sample an unobserved fact u, x − , which results in |O| fact pairs f − , f + .",
"For each pair, we perform an SGD update using the gradient of the corresponding objective function for matrix factorization (Gantner et al., 2011) .",
"Conclusion and Future Work This thesis proposal proposes an unsupervised SLU approach by automating the dialogue learning process on speech conversations.",
"The preliminary results show that for the automatic speech recognition (ASR) transcripts (word error rate is about 37%), the acquired knowledge can be successfully applied to SLU modeling through MF techniques, guiding the direction of the methodology.",
"The main planed tasks include: • Semantic concept identification • Semantic concept annotation • SLU modeling by matrix factorization In this thesis proposal, ongoing work and future plans have been presented towards an automatically built domain-specific SDS.",
"With increasing semantic resources, such as Google's Knowledge Graph and Microsoft Satori, the dissertation shows the feasibility that utilizing available knowledge improves the generalization and the scalability of dialogue system development for practical usage."
]
}
|
{
"paper_header_number": [
"1",
"3",
"4",
"4.1",
"4.2",
"5",
"5.1",
"5.2",
"5.3",
"5.4",
"6"
],
"paper_header_content": [
"Introduction",
"The Proposed Work",
"Knowledge Acquisition",
"Ontology Induction",
"Structure Learning",
"SLU Modeling by Matrix Factorization",
"Feature Model",
"Relation Propagation Model",
"Integrated Model",
"Model Learning",
"Conclusion and Future Work"
]
}
|
GEM-SciDuet-train-48#paper-1077#slide-12
|
Weight measurement by embeddings
|
can = have =
nsubj dobj det amod
can i have a cheap restaurant
capability have a expensiveness locale_by_use
Levy and Goldberg, " Dependency-Based Word Embeddings," in Proc. of ACL, 2014.
Compute edge weights to represent relation importance
Slot-to-slot semantic relation : similarity between slot embeddings
Slot-to-slot dependency relation : dependency score between slot embeddings
Word-to-word semantic relation : similarity between word embeddings
: dependency score between word Word-to-word dependency relation embeddings
Y.-N. Chen et al., Jointly Modeling Inter-Slot Relations by Random Walk on Knowledge Graphs for Unsupervised Spoken Language Understanding," in Proc. of NAACL, 2015.
|
can = have =
nsubj dobj det amod
can i have a cheap restaurant
capability have a expensiveness locale_by_use
Levy and Goldberg, " Dependency-Based Word Embeddings," in Proc. of ACL, 2014.
Compute edge weights to represent relation importance
Slot-to-slot semantic relation : similarity between slot embeddings
Slot-to-slot dependency relation : dependency score between slot embeddings
Word-to-word semantic relation : similarity between word embeddings
: dependency score between word Word-to-word dependency relation embeddings
Y.-N. Chen et al., Jointly Modeling Inter-Slot Relations by Random Walk on Knowledge Graphs for Unsupervised Spoken Language Understanding," in Proc. of NAACL, 2015.
|
[] |
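The BPR learning step in the paper content above (Eq. (5), optimized with stochastic gradient descent on sampled fact pairs) can be sketched in a few lines. The latent dimensionality, learning rate, regularizer, and the toy sizes below are illustrative assumptions; the factorization θ_{u,x} = U[u] · V[x] is the standard MF parameterization, not code from the paper:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bpr_sgd_step(U, V, u, x_pos, x_neg, lr=0.05, reg=0.01):
    """One BPR SGD update: push theta(u, x_pos) above theta(u, x_neg),
    where theta_{u,x} = U[u] . V[x] is the factorized natural parameter."""
    u_vec = U[u].copy()
    diff = u_vec @ V[x_pos] - u_vec @ V[x_neg]
    g = sigmoid(-diff)  # gradient weight of the -ln sigma(diff) loss
    U[u] += lr * (g * (V[x_pos] - V[x_neg]) - reg * u_vec)
    V[x_pos] += lr * (g * u_vec - reg * V[x_pos])
    V[x_neg] += lr * (-g * u_vec - reg * V[x_neg])

rng = np.random.default_rng(1)
U = rng.normal(scale=0.1, size=(3, 8))  # 3 utterances, latent dim k=8 (assumed)
V = rng.normal(scale=0.1, size=(5, 8))  # 5 feature patterns / concepts (assumed)

before = U[0] @ (V[2] - V[4])  # theta(f+) - theta(f-) before training
for _ in range(200):
    bpr_sgd_step(U, V, u=0, x_pos=2, x_neg=4)
after = U[0] @ (V[2] - V[4])
assert after > before  # the observed fact is now ranked above the unobserved one
```

Each step maximizes ln σ(θ_{f+} − θ_{f−}) for one sampled pair, which is why the margin between the observed and unobserved fact grows with training.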
GEM-SciDuet-train-48#paper-1077#slide-13
|
1077
|
Unsupervised Learning and Modeling of Knowledge and Intent for Spoken Dialogue Systems
|
Spoken dialogue systems (SDS) are rapidly appearing in various smart devices (smartphone, smart-TV, in-car navigating system, etc). The key role in a successful SDS is a spoken language understanding (SLU) component, which parses user utterances into semantic concepts in order to understand users' intentions. However, such semantic concepts and their structure are manually created by experts, and the annotation process results in extremely high cost and poor scalability in system development. Therefore, the dissertation focuses on improving SDS generalization and scalability by automatically inferring domain knowledge and learning structures from unlabeled conversations through a matrix factorization (MF) technique. With the automatically acquired semantic concepts and structures, we further investigate whether such information can be utilized to effectively understand user utterances and then show the feasibility of reducing human effort during SDS development.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113
],
"paper_content_text": [
"Introduction Various smart devices (e.g.",
"smartphone, smart-TV, in-car navigating system) are incorporating spoken language interfaces, a.k.a.",
"spoken dialogue systems (SDS), in order to help users finish tasks more efficiently.",
"The key role in a successful SDS is a spoken language understanding (SLU) component; in order to capture the language variation from dialogue participants, the SLU component must create a mapping between the natural language inputs and semantic representations that correspond to users' intentions.",
"The semantic representation must include \"concepts' and a \"structure\": concepts are the domain-specific topics, and the structure describes the relations between concepts and conveys intentions.",
"However, most prior work focused on learning the mapping between utterances and semantic representations, where such knowledge still remains predefined.",
"The need of annotations results in extremely high cost and poor scalability in system development.",
"Therefore, current technology usually limits conversational interactions to a few narrow predefined domains/topics.",
"With the increasing conversational interactions, this dissertation focuses on improving generalization and scalability of building SDSs with little human effort.",
"In order to achieve the goal, two questions need to be addressed: 1) Given unlabelled conversations, how can a system automatically induce and organize the domain-specific concepts?",
"2) With the automatically acquired knowledge, how can a system understand user utterances and intents?",
"To tackle the above problems, we propose to acquire the domain knowledge that captures human's semantics, intents, and behaviors.",
"Then based on the acquired knowledge, we build an SLU component to understand users and to offer better interactions in dialogues.",
"The dissertation shows the feasibility of building a dialogue learning system that is able to understand how particular domains work based on unlabeled conversations.",
"As a result, an initial SDS can be automatically built according to the learned knowledge, and its performance can be quickly improved by interacting with users for practical usage, presenting the potential of reducing human effort for SDS development.",
"Our MF method completes a partially-missing matrix for semantic decoding/behavior prediction.",
"Dark circles are observed facts, shaded circles are inferred facts.",
"The ontology induction maps observed feature patterns to semantic concepts.",
"The feature relation model constructs correlations between observed feature patterns.",
"The concept relation model learns the high-level semantic correlations for inferring hidden semantic slots or predicting subsequent behaviors.",
"Reasoning with matrix factorization incorporates these models jointly, and produces a coherent and domain-specific SLU model.",
"the intent detection problem in SLU, showing that results obtained from the unsupervised training process align well with the performance of traditional supervised learning.",
"Following their success of unsupervised SLU, recent studies have also obtained interesting results on the tasks of relation detection (Hakkani-Tür et al., 2013; Chen et al., 2014a) , entity extraction (Wang et al., 2014) , and extending domain coverage (El-Kahky et al., 2014; Chen and Rudnicky, 2014) .",
"However, most studies above do not explicitly learn latent factor representations from the data-while we hypothesize that the better robustness can be achieved by explicitly modeling the measurement errors (usually produced by automatic speech recognizers (ASR)) using latent variable models and taking additional local and global semantic constraints into account.",
"Latent Variable Modeling in SLU Early studies on latent variable modeling in speech included the classic hidden Markov model for statistical speech recognition (Jelinek, 1997) .",
"Recently, were the first to study the intent detection problem using query logs and a discrete Bayesian latent variable model.",
"In the field of dialogue modeling, the partially observable Markov decision process (POMDP) (Young et al., 2013) model is a popular technique for dialogue management, reducing the cost of handcrafted dialogue managers while producing robustness against speech recognition errors.",
"More recently, used a semi-supervised LDA model to show improvement on the slot filling task.",
"Also, Zhai and Williams (2014) proposed an unsupervised model for connecting words with latent states in HMMs using topic models, obtaining interesting qualitative and quantitative results.",
"However, for unsupervised SLU, it is not obvious how to incorporate additional information in the HMMs.",
"With increasing works about learn-ing the feature matrices for language representations (Mikolov et al., 2013) , matrix factorization (MF) has become very popular for both implicit and explicit feedback (Rendle et al., 2009; Chen et al., 2015a) .",
"This thesis proposal is the first to propose a framework about unsupervised SLU modeling, which is able to simultaneously consider various local and global knowledge automatically learned from unlabelled data using a matrix factorization (MF) technique.",
"The Proposed Work The proposed framework is shown in Figure 1(a) , where there are two main parts, one is knowledge acquisition and another is SLU modeling by MF.",
"The first part is to acquire the domain knowledge that is useful for building the domain-specific dialogue systems, which addresses the question about how to induce and organize the semantic concepts (the first problem).",
"Here we propose ontology induction and structure learning procedures.",
"The ontology induction refers to the semantic concept induction (yellow block) and the structure learning refers to relation models (blue and pink blocks) in Figure 1 (a).",
"The details are described in Section 4.",
"The second part is to self-train an SLU component using the acquired knowledge for the domainspecific SDS, and this part answers to the question about how to utilize the obtained information in SDSs to understand user utterances and intents.",
"There are two aspects regarding to SLU modeling, semantic decoding and behavior prediction.",
"The semantic decoding is to parse the input utterances into semantic forms for better understanding, and the behavior prediction is to predict the subsequent user behaviors for providing better system interactions.",
"This dissertation plans to apply MF techniques to unsupervised SLU modeling, including both semantic decoding and behavior prediction.",
"In the proposed model, we first build a feature matrix to represent training utterances, where each row refers to an utterance and each column refers to an observed feature pattern or a learned semantic concept (either a slot or a behavior).",
"terance implies the meaning facet food.",
"The MF approach is able to learn the latent feature vectors for utterances and semantic concepts, inferring implicit semantics to improve the decoding process-namely, by filling the matrix with probabilities (lower part of the matrix in Figure 1(b) ).",
"The feature model is built on the observed feature patterns and the learned concepts, where the concepts are obtained from the knowledge acquisition process Chen et al., 2015b) .",
"Section 5.1 explains the detail of the feature model.",
"In order to consider the additional structure information, we propose a relation propagation model based on the learned structure, which includes a feature relation model (blue block) and a concept relation model (pink block) described in Section 5.2.",
"Finally we train an SLU model by learning latent feature vectors for utterances and slots/behaviors through MF techniques.",
"Combining with a relation propagation model, the trained SLU model is able to estimate the probability that each concept occurs in the testing utterance, and how likely each concept is domain-specific simultaneously.",
"In other words, the SLU model is able to transform testing utterances into domainspecific semantic representations or predicted behaviors without human involvement.",
"Knowledge Acquisition Given unlabeled conversations and available knowledge resources, we plan to extract organized knowledge that can be used for domain-specific SDSs.",
"The ontology induction and structure learning are proposed to automate an ontology building process.",
"2014b) proposed to automatically induce semantic slots for SDSs by framesemantic parsing, where all ASR-decoded utter- ances are parsed using SEMAFOR 1 , a state-ofthe-art frame-semantic parser (Das et al., 2010; Das et al., 2013) , and then all frames from parsed results are extracted as slot candidates (Dinarelli et al., 2009) .",
"For example, Figure 2 shows an example of an ASR-decoded text output parsed by SEMAFOR.",
"There are three frames (capability, expensiveness, and locale by use) in the utterance, which we consider as slot candidates.",
"Ontology Induction Since SEMAFOR was trained on FrameNet annotation, which has a more generic framesemantic context, not all the frames from the parsing results can be used as the actual slots in the domain-specific dialogue systems.",
"For instance, in Figure 2 , \"expensiveness\" and \"locale by use\" frames are essentially the key slots for the purpose of understanding in the restaurant query domain, whereas the \"capability\" frame does not convey particularly valuable information for the domain-specific SDS.",
"In order to fix this issue, Chen et al.",
"(2014b) proved that integrating continuous-valued word embeddings with a probabilistic frame-semantic parser is able to identify key semantic slots in an unsupervised fashion, reducing the cost of designing task-oriented SDSs.",
"Structure Learning A key challenge of designing a coherent semantic ontology for SLU is to consider the structure and relations between semantic concepts.",
"In practice, however, it is difficult for domain experts and professional annotators to define a coherent slot set, while considering various lexical, syntactic, and semantic dependencies.",
"The previous work exploited the typed syntactic dependency theory for unsupervised induction and organization of semantic slots in SDSs (Chen et al., 2015b) .",
"More specifically, two knowledge graphs, a slot-based semantic knowledge graph and a word-based lexical knowledge graph, are automatically constructed.",
"To jointly consider the word-to-word, word-to-slot, and slot-to-slot relations, we use a random walk inference algorithm to combine these two knowledge graphs, guided by dependency grammars.",
"Figure 3 is a simplified example of the automatically built semantic knowledge graph corresponding to the restaurant domain.",
"The experiments showed that considering inter-slot relations is crucial for generating a more coherent and compete slot set, resulting in a better SLU model, while enhancing the interpretability of semantic slots.",
"SLU Modeling by Matrix Factorization For two aspects of SLU modeling: semantic decoding and behavior prediction, we plan to apply MF to both tasks by treating learned concepts as semantic slots and human behaviors respectively.",
"Considering the benefits brought by MF techniques, including 1) modeling the noisy data, 2) modeling hidden information, and 3) modeling the dependency between observations, the dissertation applies an MF approach to SLU modeling for SDSs.",
"In our model, we use U to denote the set of input utterances, F as the set of observed feature patterns, and S as the set of semantic concepts we would like to predict (slots or human behaviors).",
"The pair of an utterance u ∈ U and a feature/concept x ∈ {F +S}, u, x , is a fact.",
"The input to our model is a set of observed facts O, and the observed facts for a given utterance is denoted by { u, x ∈ O}.",
"The goal of our model is to estimate, for a given utterance u and a given feature pattern/concept x, the probability, p(M u,x = 1), where M u,x is a binary random variable that is true if and only if x is the feature pattern/domainspecific concept in the utterance u.",
"We introduce a series of exponential family models that estimate the probability using a natural parameter θ u,x and the logistic sigmoid function: p(M u,x = 1 | θ u,x ) = σ(θ u,x ) (1) = 1 1 + exp (−θ u,x ) .",
"We construct a matrix M |U |×(|F |+|S|) as observed facts for MF by integrating a feature model and a relation propagation model below.",
"Feature Model First, we build a binary feature pattern matrix F f based on the observations, where each row refers to an utterance and each column refers to a feature pattern (a word or a phrase).",
"In other words, F f carries the basic word/phrase vector for each utterance, which is illustrated as the left part of the matrix in Figure 1(b) .",
"Then we build a binary matrix F s based on the induced semantic concepts from Section 4.1, which also denotes the slot/behavior features for all utterances (right part of the matrix in Figure 1(b) ).",
"For building the feature model M F , we concatenate two matrices and obtain M F = [ F f F s ], which refers to the upper part of the matrix in Figure 1(b) for training utterances.",
"Relation Propagation Model It is shown that the structure of semantic concepts helps decide domain-specific slots and further improves the SLU performance (Chen et al., 2015b) .",
"With the learned structure from Section 4.2, we can model the relations between semantic concepts, such as inter-slot and inter-behavior relations.",
"Also, the relations between feature patterns can be modeled in the similar way.",
"We construct two knowledge graphs to model the structure: • Feature knowledge graph is built as G f = V f , E f f , where V f = {f i ∈ F } and E f f = {e ij | f i , f j ∈ V f }.",
"• Semantic concept knowledge graph is built as G s = V s , E ss , where V s = {s i ∈ S} and E ss = {e ij | s i , s j ∈ V s }.",
"The structured graph can model the relation between the connected node pair (x i , x j ) as r(x i , x j ).",
"Here we compute two matrices R s = [r(s i , s j )] |S|×|S| and R f = [r(f i , f j )] |F |×|F | to represent concept relations and feature relations respectively.",
"With the built relation models, we combine them as a relation propagation matrix M R 2 : M R = R f 0 0 R s .",
"(2) The goal of this matrix is to propagate scores between nodes according to different types of relations in the constructed knowledge graphs (Chen and Metze, 2012) .",
"Integrated Model With a feature model M F and a relation propagation model M R , we integrate them into a single matrix.",
"M = M F · (M R + I) (3) = F f R f + F f 0 0 F s R s + F s , where M is final matrix and I is the identity matrix in order to remain the original values.",
"The matrix M is similar to M F , but some weights are enhanced through the relation propagation model.",
"The feature relations are built by F f R f , which is the matrix with internal weight propagation on the feature knowledge graph (the blue arrow in Figure 1(b) ).",
"Similarly, F s R s models the semantic concept correlations, and can be treated as the matrix with internal weight propagation on the semantic concept knowledge graph (the pink arrow in Figure 1(b) ).",
"The propagation model can be treated as running a random walk algorithm on the graphs.",
"By integrating with the relation propagation model, the relations can be propagated via the knowledge graphs, and the hidden information may be modeled based on the assumption that mutual relations usually help inference (Chen et al., 2015b) .",
"Hence, the structure information can be automatically involved in the matrix.",
"In conclusion, for each utterance, the integrated model not only predicts the probabilities that semantic concepts occur but also considers whether they are domain-specific.",
"Model Learning The proposed model is parameterized through weights and latent component vectors, where the parameters are estimated by maximizing the log likelihood of observed data (Collins et al., 2001) .",
"θ * = arg max θ u∈U p(θ | M u ) (4) = arg max θ u∈U p(M u | θ)p(θ) = arg max θ u∈U ln p(M u | θ) − λ θ , where M u is the vector corresponding to the utterance u from M u,x in (1), because we assume that each utterance is independent of others.",
"To avoid treating unobserved facts as designed negative facts, we consider our positive-only data as implicit feedback.",
"Bayesian Personalized Ranking (BPR) is an optimization criterion that learns from implicit feedback for MF, which uses a variant of the ranking: giving observed true facts higher scores than unobserved (true or false) facts (Rendle et al., 2009 ).",
"Riedel et al.",
"(2013) also showed that BPR learns the implicit relations and improves a relation extraction task.",
"To estimate the parameters in (4), we create a dataset of ranked pairs from M in (3): for each utterance u and each observed fact f + = u, x + , where M u,x ≥ δ, we choose each semantic concept x − such that f − = u, x − , where M u,x < δ, which refers to the semantic concept we have not observed in utterance u.",
"That is, we construct the observed data O from M .",
"Then for each pair of facts f + and f − , we want to model p(f + ) > p(f − ) and hence θ f + > θ f − according to (1).",
"BPR maximizes the summation of each ranked pair, where the objective is u∈U ln p(M u | θ) = f + ∈O f − ∈O ln σ(θ f + − θ f − ).",
"(5) The BPR objective is an approximation to the per utterance AUC (area under the ROC curve), which directly correlates to what we want to achieve -well-ranked semantic concepts per utterance, which denotes the better estimation of semantic slots or human behaviors.",
"To maximize the objective in (5), we employ a stochastic gradient descent (SGD) algorithm (Rendle et al., 2009) .",
"For each randomly sampled observed fact u, x + , we sample an unobserved fact u, x − , which results in |O| fact pairs f − , f + .",
"For each pair, we perform an SGD update using the gradient of the corresponding objective function for matrix factorization (Gantner et al., 2011) .",
"Conclusion and Future Work This thesis proposal proposes an unsupervised SLU approach by automating the dialogue learning process on speech conversations.",
"The preliminary results show that for the automatic speech recognition (ASR) transcripts (word error rate is about 37%), the acquired knowledge can be successfully applied to SLU modeling through MF techniques, guiding the direction of the methodology.",
"The main planed tasks include: • Semantic concept identification • Semantic concept annotation • SLU modeling by matrix factorization In this thesis proposal, ongoing work and future plans have been presented towards an automatically built domain-specific SDS.",
"With increasing semantic resources, such as Google's Knowledge Graph and Microsoft Satori, the dissertation shows the feasibility that utilizing available knowledge improves the generalization and the scalability of dialogue system development for practical usage."
]
}
|
{
"paper_header_number": [
"1",
"3",
"4",
"4.1",
"4.2",
"5",
"5.1",
"5.2",
"5.3",
"5.4",
"6"
],
"paper_header_content": [
"Introduction",
"The Proposed Work",
"Knowledge Acquisition",
"Ontology Induction",
"Structure Learning",
"SLU Modeling by Matrix Factorization",
"Feature Model",
"Relation Propagation Model",
"Integrated Model",
"Model Learning",
"Conclusion and Future Work"
]
}
|
GEM-SciDuet-train-48#paper-1077#slide-13
|
Knowledge graph propagation model
|
Word Observation Slot Candidate
cheap food restaurant expensiveness food locale_by_use
Word Relation Model Slot Induction Slot Relation Model
|
Word Observation Slot Candidate
cheap food restaurant expensiveness food locale_by_use
Word Relation Model Slot Induction Slot Relation Model
|
[] |
GEM-SciDuet-train-48#paper-1077#slide-14
|
1077
|
Unsupervised Learning and Modeling of Knowledge and Intent for Spoken Dialogue Systems
|
Spoken dialogue systems (SDS) are rapidly appearing in various smart devices (smartphone, smart-TV, in-car navigating system, etc). The key role in a successful SDS is a spoken language understanding (SLU) component, which parses user utterances into semantic concepts in order to understand users' intentions. However, such semantic concepts and their structure are manually created by experts, and the annotation process results in extremely high cost and poor scalability in system development. Therefore, the dissertation focuses on improving SDS generalization and scalability by automatically inferring domain knowledge and learning structures from unlabeled conversations through a matrix factorization (MF) technique. With the automatically acquired semantic concepts and structures, we further investigate whether such information can be utilized to effectively understand user utterances and then show the feasibility of reducing human effort during SDS development.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113
],
"paper_content_text": [
"Introduction Various smart devices (e.g.",
"smartphone, smart-TV, in-car navigating system) are incorporating spoken language interfaces, a.k.a.",
"spoken dialogue systems (SDS), in order to help users finish tasks more efficiently.",
"The key role in a successful SDS is a spoken language understanding (SLU) component; in order to capture the language variation from dialogue participants, the SLU component must create a mapping between the natural language inputs and semantic representations that correspond to users' intentions.",
"The semantic representation must include \"concepts' and a \"structure\": concepts are the domain-specific topics, and the structure describes the relations between concepts and conveys intentions.",
"However, most prior work focused on learning the mapping between utterances and semantic representations, where such knowledge still remains predefined.",
"The need of annotations results in extremely high cost and poor scalability in system development.",
"Therefore, current technology usually limits conversational interactions to a few narrow predefined domains/topics.",
"With the increasing conversational interactions, this dissertation focuses on improving generalization and scalability of building SDSs with little human effort.",
"In order to achieve the goal, two questions need to be addressed: 1) Given unlabelled conversations, how can a system automatically induce and organize the domain-specific concepts?",
"2) With the automatically acquired knowledge, how can a system understand user utterances and intents?",
"To tackle the above problems, we propose to acquire the domain knowledge that captures human's semantics, intents, and behaviors.",
"Then based on the acquired knowledge, we build an SLU component to understand users and to offer better interactions in dialogues.",
"The dissertation shows the feasibility of building a dialogue learning system that is able to understand how particular domains work based on unlabeled conversations.",
"As a result, an initial SDS can be automatically built according to the learned knowledge, and its performance can be quickly improved by interacting with users for practical usage, presenting the potential of reducing human effort for SDS development.",
"Our MF method completes a partially-missing matrix for semantic decoding/behavior prediction.",
"Dark circles are observed facts, shaded circles are inferred facts.",
"The ontology induction maps observed feature patterns to semantic concepts.",
"The feature relation model constructs correlations between observed feature patterns.",
"The concept relation model learns the high-level semantic correlations for inferring hidden semantic slots or predicting subsequent behaviors.",
"Reasoning with matrix factorization incorporates these models jointly, and produces a coherent and domain-specific SLU model.",
"the intent detection problem in SLU, showing that results obtained from the unsupervised training process align well with the performance of traditional supervised learning.",
"Following their success of unsupervised SLU, recent studies have also obtained interesting results on the tasks of relation detection (Hakkani-Tür et al., 2013; Chen et al., 2014a) , entity extraction (Wang et al., 2014) , and extending domain coverage (El-Kahky et al., 2014; Chen and Rudnicky, 2014) .",
"However, most studies above do not explicitly learn latent factor representations from the data-while we hypothesize that the better robustness can be achieved by explicitly modeling the measurement errors (usually produced by automatic speech recognizers (ASR)) using latent variable models and taking additional local and global semantic constraints into account.",
"Latent Variable Modeling in SLU Early studies on latent variable modeling in speech included the classic hidden Markov model for statistical speech recognition (Jelinek, 1997) .",
"Recently, were the first to study the intent detection problem using query logs and a discrete Bayesian latent variable model.",
"In the field of dialogue modeling, the partially observable Markov decision process (POMDP) (Young et al., 2013) model is a popular technique for dialogue management, reducing the cost of handcrafted dialogue managers while producing robustness against speech recognition errors.",
"More recently, used a semi-supervised LDA model to show improvement on the slot filling task.",
"Also, Zhai and Williams (2014) proposed an unsupervised model for connecting words with latent states in HMMs using topic models, obtaining interesting qualitative and quantitative results.",
"However, for unsupervised SLU, it is not obvious how to incorporate additional information in the HMMs.",
"With increasing works about learn-ing the feature matrices for language representations (Mikolov et al., 2013) , matrix factorization (MF) has become very popular for both implicit and explicit feedback (Rendle et al., 2009; Chen et al., 2015a) .",
"This thesis proposal is the first to propose a framework about unsupervised SLU modeling, which is able to simultaneously consider various local and global knowledge automatically learned from unlabelled data using a matrix factorization (MF) technique.",
"The Proposed Work The proposed framework is shown in Figure 1(a) , where there are two main parts, one is knowledge acquisition and another is SLU modeling by MF.",
"The first part is to acquire the domain knowledge that is useful for building the domain-specific dialogue systems, which addresses the question about how to induce and organize the semantic concepts (the first problem).",
"Here we propose ontology induction and structure learning procedures.",
"The ontology induction refers to the semantic concept induction (yellow block) and the structure learning refers to relation models (blue and pink blocks) in Figure 1 (a).",
"The details are described in Section 4.",
"The second part is to self-train an SLU component using the acquired knowledge for the domainspecific SDS, and this part answers to the question about how to utilize the obtained information in SDSs to understand user utterances and intents.",
"There are two aspects regarding to SLU modeling, semantic decoding and behavior prediction.",
"The semantic decoding is to parse the input utterances into semantic forms for better understanding, and the behavior prediction is to predict the subsequent user behaviors for providing better system interactions.",
"This dissertation plans to apply MF techniques to unsupervised SLU modeling, including both semantic decoding and behavior prediction.",
"In the proposed model, we first build a feature matrix to represent training utterances, where each row refers to an utterance and each column refers to an observed feature pattern or a learned semantic concept (either a slot or a behavior).",
"terance implies the meaning facet food.",
"The MF approach is able to learn the latent feature vectors for utterances and semantic concepts, inferring implicit semantics to improve the decoding process-namely, by filling the matrix with probabilities (lower part of the matrix in Figure 1(b) ).",
"The feature model is built on the observed feature patterns and the learned concepts, where the concepts are obtained from the knowledge acquisition process Chen et al., 2015b) .",
"Section 5.1 explains the detail of the feature model.",
"In order to consider the additional structure information, we propose a relation propagation model based on the learned structure, which includes a feature relation model (blue block) and a concept relation model (pink block) described in Section 5.2.",
"Finally we train an SLU model by learning latent feature vectors for utterances and slots/behaviors through MF techniques.",
"Combining with a relation propagation model, the trained SLU model is able to estimate the probability that each concept occurs in the testing utterance, and how likely each concept is domain-specific simultaneously.",
"In other words, the SLU model is able to transform testing utterances into domainspecific semantic representations or predicted behaviors without human involvement.",
"Knowledge Acquisition Given unlabeled conversations and available knowledge resources, we plan to extract organized knowledge that can be used for domain-specific SDSs.",
"The ontology induction and structure learning are proposed to automate an ontology building process.",
"2014b) proposed to automatically induce semantic slots for SDSs by framesemantic parsing, where all ASR-decoded utter- ances are parsed using SEMAFOR 1 , a state-ofthe-art frame-semantic parser (Das et al., 2010; Das et al., 2013) , and then all frames from parsed results are extracted as slot candidates (Dinarelli et al., 2009) .",
"For example, Figure 2 shows an example of an ASR-decoded text output parsed by SEMAFOR.",
"There are three frames (capability, expensiveness, and locale by use) in the utterance, which we consider as slot candidates.",
"Ontology Induction Since SEMAFOR was trained on FrameNet annotation, which has a more generic framesemantic context, not all the frames from the parsing results can be used as the actual slots in the domain-specific dialogue systems.",
"For instance, in Figure 2 , \"expensiveness\" and \"locale by use\" frames are essentially the key slots for the purpose of understanding in the restaurant query domain, whereas the \"capability\" frame does not convey particularly valuable information for the domain-specific SDS.",
"In order to fix this issue, Chen et al.",
"(2014b) proved that integrating continuous-valued word embeddings with a probabilistic frame-semantic parser is able to identify key semantic slots in an unsupervised fashion, reducing the cost of designing task-oriented SDSs.",
"Structure Learning A key challenge of designing a coherent semantic ontology for SLU is to consider the structure and relations between semantic concepts.",
"In practice, however, it is difficult for domain experts and professional annotators to define a coherent slot set, while considering various lexical, syntactic, and semantic dependencies.",
"The previous work exploited the typed syntactic dependency theory for unsupervised induction and organization of semantic slots in SDSs (Chen et al., 2015b) .",
"More specifically, two knowledge graphs, a slot-based semantic knowledge graph and a word-based lexical knowledge graph, are automatically constructed.",
"To jointly consider the word-to-word, word-to-slot, and slot-to-slot relations, we use a random walk inference algorithm to combine these two knowledge graphs, guided by dependency grammars.",
"Figure 3 is a simplified example of the automatically built semantic knowledge graph corresponding to the restaurant domain.",
"The experiments showed that considering inter-slot relations is crucial for generating a more coherent and compete slot set, resulting in a better SLU model, while enhancing the interpretability of semantic slots.",
"SLU Modeling by Matrix Factorization For two aspects of SLU modeling: semantic decoding and behavior prediction, we plan to apply MF to both tasks by treating learned concepts as semantic slots and human behaviors respectively.",
"Considering the benefits brought by MF techniques, including 1) modeling the noisy data, 2) modeling hidden information, and 3) modeling the dependency between observations, the dissertation applies an MF approach to SLU modeling for SDSs.",
"In our model, we use U to denote the set of input utterances, F as the set of observed feature patterns, and S as the set of semantic concepts we would like to predict (slots or human behaviors).",
"The pair of an utterance u ∈ U and a feature/concept x ∈ {F +S}, u, x , is a fact.",
"The input to our model is a set of observed facts O, and the observed facts for a given utterance is denoted by { u, x ∈ O}.",
"The goal of our model is to estimate, for a given utterance u and a given feature pattern/concept x, the probability, p(M u,x = 1), where M u,x is a binary random variable that is true if and only if x is the feature pattern/domainspecific concept in the utterance u.",
"We introduce a series of exponential family models that estimate the probability using a natural parameter θ u,x and the logistic sigmoid function: p(M u,x = 1 | θ u,x ) = σ(θ u,x ) (1) = 1 1 + exp (−θ u,x ) .",
"We construct a matrix M |U |×(|F |+|S|) as observed facts for MF by integrating a feature model and a relation propagation model below.",
"Feature Model First, we build a binary feature pattern matrix F f based on the observations, where each row refers to an utterance and each column refers to a feature pattern (a word or a phrase).",
"In other words, F f carries the basic word/phrase vector for each utterance, which is illustrated as the left part of the matrix in Figure 1(b) .",
"Then we build a binary matrix F s based on the induced semantic concepts from Section 4.1, which also denotes the slot/behavior features for all utterances (right part of the matrix in Figure 1(b) ).",
"For building the feature model M F , we concatenate two matrices and obtain M F = [ F f F s ], which refers to the upper part of the matrix in Figure 1(b) for training utterances.",
"Relation Propagation Model It is shown that the structure of semantic concepts helps decide domain-specific slots and further improves the SLU performance (Chen et al., 2015b) .",
"With the learned structure from Section 4.2, we can model the relations between semantic concepts, such as inter-slot and inter-behavior relations.",
"Also, the relations between feature patterns can be modeled in the similar way.",
"We construct two knowledge graphs to model the structure: • Feature knowledge graph is built as G f = V f , E f f , where V f = {f i ∈ F } and E f f = {e ij | f i , f j ∈ V f }.",
"• Semantic concept knowledge graph is built as G s = V s , E ss , where V s = {s i ∈ S} and E ss = {e ij | s i , s j ∈ V s }.",
"The structured graph can model the relation between a connected node pair (x_i, x_j) as r(x_i, x_j).",
"Here we compute two matrices R_s = [r(s_i, s_j)]_{|S|×|S|} and R_f = [r(f_i, f_j)]_{|F|×|F|} to represent concept relations and feature relations respectively.",
"With the built relation models, we combine them as a block-diagonal relation propagation matrix M_R: M_R = [R_f, 0; 0, R_s]. (2)",
"The goal of this matrix is to propagate scores between nodes according to different types of relations in the constructed knowledge graphs (Chen and Metze, 2012).",
"Integrated Model With a feature model M F and a relation propagation model M R , we integrate them into a single matrix.",
"M = M_F · (M_R + I) = [F_f R_f + F_f, F_s R_s + F_s], (3) where M is the final matrix and I is the identity matrix, included to retain the original values.",
"The matrix M is similar to M F , but some weights are enhanced through the relation propagation model.",
"The feature relations are built by F f R f , which is the matrix with internal weight propagation on the feature knowledge graph (the blue arrow in Figure 1(b) ).",
"Similarly, F s R s models the semantic concept correlations, and can be treated as the matrix with internal weight propagation on the semantic concept knowledge graph (the pink arrow in Figure 1(b) ).",
"The propagation model can be treated as running a random walk algorithm on the graphs.",
"By integrating with the relation propagation model, the relations can be propagated via the knowledge graphs, and the hidden information may be modeled based on the assumption that mutual relations usually help inference (Chen et al., 2015b) .",
"Hence, the structure information can be automatically involved in the matrix.",
"In conclusion, for each utterance, the integrated model not only predicts the probabilities that semantic concepts occur but also considers whether they are domain-specific.",
"Model Learning The proposed model is parameterized through weights and latent component vectors, where the parameters are estimated by maximizing the log likelihood of observed data (Collins et al., 2001) .",
"θ* = arg max_θ ∏_{u∈U} p(θ | M_u) = arg max_θ ∏_{u∈U} p(M_u | θ) p(θ) = arg max_θ Σ_{u∈U} ln p(M_u | θ) − λ‖θ‖, (4) where M_u is the vector corresponding to the utterance u from M_{u,x} in (1), because we assume that each utterance is independent of others.",
"To avoid treating unobserved facts as designed negative facts, we consider our positive-only data as implicit feedback.",
"Bayesian Personalized Ranking (BPR) is an optimization criterion that learns from implicit feedback for MF, which uses a variant of the ranking: giving observed true facts higher scores than unobserved (true or false) facts (Rendle et al., 2009 ).",
"Riedel et al.",
"(2013) also showed that BPR learns the implicit relations and improves a relation extraction task.",
"To estimate the parameters in (4), we create a dataset of ranked pairs from M in (3): for each utterance u and each observed fact f⁺ = ⟨u, x⁺⟩, where M_{u,x} ≥ δ, we choose each semantic concept x⁻ such that f⁻ = ⟨u, x⁻⟩, where M_{u,x} < δ, which refers to a semantic concept we have not observed in utterance u.",
"That is, we construct the observed data O from M .",
"Then for each pair of facts f⁺ and f⁻, we want to model p(f⁺) > p(f⁻) and hence θ_{f⁺} > θ_{f⁻} according to (1).",
"BPR maximizes the summation over each ranked pair, where the objective is Σ_{u∈U} ln p(M_u | θ) = Σ_{f⁺∈O} Σ_{f⁻∉O} ln σ(θ_{f⁺} − θ_{f⁻}). (5)",
"The BPR objective is an approximation to the per-utterance AUC (area under the ROC curve), which directly correlates to what we want to achieve: well-ranked semantic concepts per utterance, which denotes a better estimation of semantic slots or human behaviors.",
"To maximize the objective in (5), we employ a stochastic gradient descent (SGD) algorithm (Rendle et al., 2009) .",
"For each randomly sampled observed fact ⟨u, x⁺⟩, we sample an unobserved fact ⟨u, x⁻⟩, which results in |O| fact pairs ⟨f⁻, f⁺⟩.",
"For each pair, we perform an SGD update using the gradient of the corresponding objective function for matrix factorization (Gantner et al., 2011) .",
"Conclusion and Future Work This thesis proposal proposes an unsupervised SLU approach by automating the dialogue learning process on speech conversations.",
"The preliminary results show that for the automatic speech recognition (ASR) transcripts (word error rate is about 37%), the acquired knowledge can be successfully applied to SLU modeling through MF techniques, guiding the direction of the methodology.",
"The main planned tasks include: • Semantic concept identification • Semantic concept annotation • SLU modeling by matrix factorization In this thesis proposal, ongoing work and future plans have been presented towards an automatically built domain-specific SDS.",
"With increasing semantic resources, such as Google's Knowledge Graph and Microsoft Satori, the dissertation shows that utilizing available knowledge improves the generalization and the scalability of dialogue system development for practical usage."
]
}
|
{
"paper_header_number": [
"1",
"3",
"4",
"4.1",
"4.2",
"5",
"5.1",
"5.2",
"5.3",
"5.4",
"6"
],
"paper_header_content": [
"Introduction",
"The Proposed Work",
"Knowledge Acquisition",
"Ontology Induction",
"Structure Learning",
"SLU Modeling by Matrix Factorization",
"Feature Model",
"Relation Propagation Model",
"Integrated Model",
"Model Learning",
"Conclusion and Future Work"
]
}
|
GEM-SciDuet-train-48#paper-1077#slide-14
|
Feature model
|
Word Observation Slot Candidate cheap food restaurant Utterance 1 expensiveness food locale_by_use
i would like a cheap restaurant
find a restaurant with chinese food
Test Utterance hidden semantics
show me a list of cheap restaurants Test
Slot Induction 2nd Issue: unobserved hidden semantics may benefit understanding
|
Word Observation Slot Candidate cheap food restaurant Utterance 1 expensiveness food locale_by_use
i would like a cheap restaurant
find a restaurant with chinese food
Test Utterance hidden semantics
show me a list of cheap restaurants Test
Slot Induction 2nd Issue: unobserved hidden semantics may benefit understanding
|
[] |
GEM-SciDuet-train-48#paper-1077#slide-15
|
1077
|
Unsupervised Learning and Modeling of Knowledge and Intent for Spoken Dialogue Systems
|
Spoken dialogue systems (SDS) are rapidly appearing in various smart devices (smartphone, smart-TV, in-car navigating system, etc). The key role in a successful SDS is a spoken language understanding (SLU) component, which parses user utterances into semantic concepts in order to understand users' intentions. However, such semantic concepts and their structure are manually created by experts, and the annotation process results in extremely high cost and poor scalability in system development. Therefore, the dissertation focuses on improving SDS generalization and scalability by automatically inferring domain knowledge and learning structures from unlabeled conversations through a matrix factorization (MF) technique. With the automatically acquired semantic concepts and structures, we further investigate whether such information can be utilized to effectively understand user utterances and then show the feasibility of reducing human effort during SDS development.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113
],
"paper_content_text": [
"Introduction Various smart devices (e.g.",
"smartphone, smart-TV, in-car navigating system) are incorporating spoken language interfaces, a.k.a.",
"spoken dialogue systems (SDS), in order to help users finish tasks more efficiently.",
"The key role in a successful SDS is a spoken language understanding (SLU) component; in order to capture the language variation from dialogue participants, the SLU component must create a mapping between the natural language inputs and semantic representations that correspond to users' intentions.",
"The semantic representation must include \"concepts\" and a \"structure\": concepts are the domain-specific topics, and the structure describes the relations between concepts and conveys intentions.",
"However, most prior work focused on learning the mapping between utterances and semantic representations, where such knowledge still remains predefined.",
"The need for annotations results in extremely high cost and poor scalability in system development.",
"Therefore, current technology usually limits conversational interactions to a few narrow predefined domains/topics.",
"With the increasing conversational interactions, this dissertation focuses on improving generalization and scalability of building SDSs with little human effort.",
"In order to achieve the goal, two questions need to be addressed: 1) Given unlabelled conversations, how can a system automatically induce and organize the domain-specific concepts?",
"2) With the automatically acquired knowledge, how can a system understand user utterances and intents?",
"To tackle the above problems, we propose to acquire the domain knowledge that captures human's semantics, intents, and behaviors.",
"Then based on the acquired knowledge, we build an SLU component to understand users and to offer better interactions in dialogues.",
"The dissertation shows the feasibility of building a dialogue learning system that is able to understand how particular domains work based on unlabeled conversations.",
"As a result, an initial SDS can be automatically built according to the learned knowledge, and its performance can be quickly improved by interacting with users for practical usage, presenting the potential of reducing human effort for SDS development.",
"Our MF method completes a partially-missing matrix for semantic decoding/behavior prediction.",
"Dark circles are observed facts, shaded circles are inferred facts.",
"The ontology induction maps observed feature patterns to semantic concepts.",
"The feature relation model constructs correlations between observed feature patterns.",
"The concept relation model learns the high-level semantic correlations for inferring hidden semantic slots or predicting subsequent behaviors.",
"Reasoning with matrix factorization incorporates these models jointly, and produces a coherent and domain-specific SLU model.",
"the intent detection problem in SLU, showing that results obtained from the unsupervised training process align well with the performance of traditional supervised learning.",
"Following their success of unsupervised SLU, recent studies have also obtained interesting results on the tasks of relation detection (Hakkani-Tür et al., 2013; Chen et al., 2014a) , entity extraction (Wang et al., 2014) , and extending domain coverage (El-Kahky et al., 2014; Chen and Rudnicky, 2014) .",
"However, most studies above do not explicitly learn latent factor representations from the data, while we hypothesize that better robustness can be achieved by explicitly modeling the measurement errors (usually produced by automatic speech recognizers (ASR)) using latent variable models and taking additional local and global semantic constraints into account.",
"Latent Variable Modeling in SLU Early studies on latent variable modeling in speech included the classic hidden Markov model for statistical speech recognition (Jelinek, 1997) .",
"Recently, were the first to study the intent detection problem using query logs and a discrete Bayesian latent variable model.",
"In the field of dialogue modeling, the partially observable Markov decision process (POMDP) (Young et al., 2013) model is a popular technique for dialogue management, reducing the cost of handcrafted dialogue managers while producing robustness against speech recognition errors.",
"More recently, used a semi-supervised LDA model to show improvement on the slot filling task.",
"Also, Zhai and Williams (2014) proposed an unsupervised model for connecting words with latent states in HMMs using topic models, obtaining interesting qualitative and quantitative results.",
"However, for unsupervised SLU, it is not obvious how to incorporate additional information in the HMMs.",
"With increasing work on learning feature matrices for language representations (Mikolov et al., 2013), matrix factorization (MF) has become very popular for both implicit and explicit feedback (Rendle et al., 2009; Chen et al., 2015a).",
"This thesis proposal is the first to propose a framework about unsupervised SLU modeling, which is able to simultaneously consider various local and global knowledge automatically learned from unlabelled data using a matrix factorization (MF) technique.",
"The Proposed Work The proposed framework is shown in Figure 1(a), where there are two main parts: one is knowledge acquisition and the other is SLU modeling by MF.",
"The first part is to acquire the domain knowledge that is useful for building the domain-specific dialogue systems, which addresses the question about how to induce and organize the semantic concepts (the first problem).",
"Here we propose ontology induction and structure learning procedures.",
"The ontology induction refers to the semantic concept induction (yellow block) and the structure learning refers to relation models (blue and pink blocks) in Figure 1 (a).",
"The details are described in Section 4.",
"The second part is to self-train an SLU component using the acquired knowledge for the domainspecific SDS, and this part answers to the question about how to utilize the obtained information in SDSs to understand user utterances and intents.",
"There are two aspects regarding to SLU modeling, semantic decoding and behavior prediction.",
"The semantic decoding is to parse the input utterances into semantic forms for better understanding, and the behavior prediction is to predict the subsequent user behaviors for providing better system interactions.",
"This dissertation plans to apply MF techniques to unsupervised SLU modeling, including both semantic decoding and behavior prediction.",
"In the proposed model, we first build a feature matrix to represent training utterances, where each row refers to an utterance and each column refers to an observed feature pattern or a learned semantic concept (either a slot or a behavior).",
"For example, the feature pattern \"food\" observed in an utterance implies the meaning facet food.",
"The MF approach is able to learn the latent feature vectors for utterances and semantic concepts, inferring implicit semantics to improve the decoding process-namely, by filling the matrix with probabilities (lower part of the matrix in Figure 1(b) ).",
"The feature model is built on the observed feature patterns and the learned concepts, where the concepts are obtained from the knowledge acquisition process (Chen et al., 2015b).",
"Section 5.1 explains the detail of the feature model.",
"In order to consider the additional structure information, we propose a relation propagation model based on the learned structure, which includes a feature relation model (blue block) and a concept relation model (pink block) described in Section 5.2.",
"Finally we train an SLU model by learning latent feature vectors for utterances and slots/behaviors through MF techniques.",
"Combining with a relation propagation model, the trained SLU model is able to estimate the probability that each concept occurs in the testing utterance, and how likely each concept is domain-specific simultaneously.",
"In other words, the SLU model is able to transform testing utterances into domain-specific semantic representations or predicted behaviors without human involvement.",
"Knowledge Acquisition Given unlabeled conversations and available knowledge resources, we plan to extract organized knowledge that can be used for domain-specific SDSs.",
"The ontology induction and structure learning are proposed to automate an ontology building process.",
"Chen et al. (2014b) proposed to automatically induce semantic slots for SDSs by frame-semantic parsing, where all ASR-decoded utterances are parsed using SEMAFOR, a state-of-the-art frame-semantic parser (Das et al., 2010; Das et al., 2013), and then all frames from parsed results are extracted as slot candidates (Dinarelli et al., 2009).",
"For example, Figure 2 shows an example of an ASR-decoded text output parsed by SEMAFOR.",
"There are three frames (capability, expensiveness, and locale by use) in the utterance, which we consider as slot candidates.",
"Ontology Induction Since SEMAFOR was trained on FrameNet annotation, which has a more generic frame-semantic context, not all the frames from the parsing results can be used as the actual slots in the domain-specific dialogue systems.",
"For instance, in Figure 2 , \"expensiveness\" and \"locale by use\" frames are essentially the key slots for the purpose of understanding in the restaurant query domain, whereas the \"capability\" frame does not convey particularly valuable information for the domain-specific SDS.",
"In order to fix this issue, Chen et al.",
"(2014b) proved that integrating continuous-valued word embeddings with a probabilistic frame-semantic parser is able to identify key semantic slots in an unsupervised fashion, reducing the cost of designing task-oriented SDSs.",
"Structure Learning A key challenge of designing a coherent semantic ontology for SLU is to consider the structure and relations between semantic concepts.",
"In practice, however, it is difficult for domain experts and professional annotators to define a coherent slot set, while considering various lexical, syntactic, and semantic dependencies.",
"The previous work exploited the typed syntactic dependency theory for unsupervised induction and organization of semantic slots in SDSs (Chen et al., 2015b) .",
"More specifically, two knowledge graphs, a slot-based semantic knowledge graph and a word-based lexical knowledge graph, are automatically constructed.",
"To jointly consider the word-to-word, word-to-slot, and slot-to-slot relations, we use a random walk inference algorithm to combine these two knowledge graphs, guided by dependency grammars.",
"Figure 3 is a simplified example of the automatically built semantic knowledge graph corresponding to the restaurant domain.",
"The experiments showed that considering inter-slot relations is crucial for generating a more coherent and complete slot set, resulting in a better SLU model, while enhancing the interpretability of semantic slots.",
"SLU Modeling by Matrix Factorization For two aspects of SLU modeling: semantic decoding and behavior prediction, we plan to apply MF to both tasks by treating learned concepts as semantic slots and human behaviors respectively.",
"Considering the benefits brought by MF techniques, including 1) modeling the noisy data, 2) modeling hidden information, and 3) modeling the dependency between observations, the dissertation applies an MF approach to SLU modeling for SDSs.",
"In our model, we use U to denote the set of input utterances, F as the set of observed feature patterns, and S as the set of semantic concepts we would like to predict (slots or human behaviors).",
"The pair of an utterance u ∈ U and a feature/concept x ∈ {F + S}, ⟨u, x⟩, is a fact.",
"The input to our model is a set of observed facts O, and the observed facts for a given utterance are denoted by {⟨u, x⟩ ∈ O}.",
"The goal of our model is to estimate, for a given utterance u and a given feature pattern/concept x, the probability p(M_{u,x} = 1), where M_{u,x} is a binary random variable that is true if and only if x is the feature pattern/domain-specific concept in the utterance u.",
"We introduce a series of exponential family models that estimate the probability using a natural parameter θ_{u,x} and the logistic sigmoid function: p(M_{u,x} = 1 | θ_{u,x}) = σ(θ_{u,x}) = 1 / (1 + exp(−θ_{u,x})). (1)",
"We construct a matrix M of size |U| × (|F| + |S|) as observed facts for MF by integrating a feature model and a relation propagation model below.",
"Feature Model First, we build a binary feature pattern matrix F f based on the observations, where each row refers to an utterance and each column refers to a feature pattern (a word or a phrase).",
"In other words, F f carries the basic word/phrase vector for each utterance, which is illustrated as the left part of the matrix in Figure 1(b) .",
"Then we build a binary matrix F s based on the induced semantic concepts from Section 4.1, which also denotes the slot/behavior features for all utterances (right part of the matrix in Figure 1(b) ).",
"For building the feature model M_F, we concatenate the two matrices and obtain M_F = [F_f, F_s], which refers to the upper part of the matrix in Figure 1(b) for training utterances.",
"Relation Propagation Model It is shown that the structure of semantic concepts helps decide domain-specific slots and further improves the SLU performance (Chen et al., 2015b) .",
"With the learned structure from Section 4.2, we can model the relations between semantic concepts, such as inter-slot and inter-behavior relations.",
"Also, the relations between feature patterns can be modeled in a similar way.",
"We construct two knowledge graphs to model the structure: • Feature knowledge graph is built as G_f = ⟨V_f, E_ff⟩, where V_f = {f_i ∈ F} and E_ff = {e_ij | f_i, f_j ∈ V_f}.",
"• Semantic concept knowledge graph is built as G_s = ⟨V_s, E_ss⟩, where V_s = {s_i ∈ S} and E_ss = {e_ij | s_i, s_j ∈ V_s}.",
"The structured graph can model the relation between a connected node pair (x_i, x_j) as r(x_i, x_j).",
"Here we compute two matrices R_s = [r(s_i, s_j)]_{|S|×|S|} and R_f = [r(f_i, f_j)]_{|F|×|F|} to represent concept relations and feature relations respectively.",
"With the built relation models, we combine them as a block-diagonal relation propagation matrix M_R: M_R = [R_f, 0; 0, R_s]. (2)",
"The goal of this matrix is to propagate scores between nodes according to different types of relations in the constructed knowledge graphs (Chen and Metze, 2012).",
"Integrated Model With a feature model M F and a relation propagation model M R , we integrate them into a single matrix.",
"M = M_F · (M_R + I) = [F_f R_f + F_f, F_s R_s + F_s], (3) where M is the final matrix and I is the identity matrix, included to retain the original values.",
"The matrix M is similar to M F , but some weights are enhanced through the relation propagation model.",
"The feature relations are built by F f R f , which is the matrix with internal weight propagation on the feature knowledge graph (the blue arrow in Figure 1(b) ).",
"Similarly, F s R s models the semantic concept correlations, and can be treated as the matrix with internal weight propagation on the semantic concept knowledge graph (the pink arrow in Figure 1(b) ).",
"The propagation model can be treated as running a random walk algorithm on the graphs.",
"By integrating with the relation propagation model, the relations can be propagated via the knowledge graphs, and the hidden information may be modeled based on the assumption that mutual relations usually help inference (Chen et al., 2015b) .",
"Hence, the structure information can be automatically involved in the matrix.",
"In conclusion, for each utterance, the integrated model not only predicts the probabilities that semantic concepts occur but also considers whether they are domain-specific.",
"Model Learning The proposed model is parameterized through weights and latent component vectors, where the parameters are estimated by maximizing the log likelihood of observed data (Collins et al., 2001) .",
"θ* = arg max_θ ∏_{u∈U} p(θ | M_u) = arg max_θ ∏_{u∈U} p(M_u | θ) p(θ) = arg max_θ Σ_{u∈U} ln p(M_u | θ) − λ‖θ‖, (4) where M_u is the vector corresponding to the utterance u from M_{u,x} in (1), because we assume that each utterance is independent of others.",
"To avoid treating unobserved facts as designed negative facts, we consider our positive-only data as implicit feedback.",
"Bayesian Personalized Ranking (BPR) is an optimization criterion that learns from implicit feedback for MF, which uses a variant of the ranking: giving observed true facts higher scores than unobserved (true or false) facts (Rendle et al., 2009 ).",
"Riedel et al.",
"(2013) also showed that BPR learns the implicit relations and improves a relation extraction task.",
"To estimate the parameters in (4), we create a dataset of ranked pairs from M in (3): for each utterance u and each observed fact f⁺ = ⟨u, x⁺⟩, where M_{u,x} ≥ δ, we choose each semantic concept x⁻ such that f⁻ = ⟨u, x⁻⟩, where M_{u,x} < δ, which refers to a semantic concept we have not observed in utterance u.",
"That is, we construct the observed data O from M .",
"Then for each pair of facts f⁺ and f⁻, we want to model p(f⁺) > p(f⁻) and hence θ_{f⁺} > θ_{f⁻} according to (1).",
"BPR maximizes the summation over each ranked pair, where the objective is Σ_{u∈U} ln p(M_u | θ) = Σ_{f⁺∈O} Σ_{f⁻∉O} ln σ(θ_{f⁺} − θ_{f⁻}). (5)",
"The BPR objective is an approximation to the per-utterance AUC (area under the ROC curve), which directly correlates to what we want to achieve: well-ranked semantic concepts per utterance, which denotes a better estimation of semantic slots or human behaviors.",
"To maximize the objective in (5), we employ a stochastic gradient descent (SGD) algorithm (Rendle et al., 2009) .",
"For each randomly sampled observed fact ⟨u, x⁺⟩, we sample an unobserved fact ⟨u, x⁻⟩, which results in |O| fact pairs ⟨f⁻, f⁺⟩.",
"For each pair, we perform an SGD update using the gradient of the corresponding objective function for matrix factorization (Gantner et al., 2011) .",
"Conclusion and Future Work This thesis proposal proposes an unsupervised SLU approach by automating the dialogue learning process on speech conversations.",
"The preliminary results show that for the automatic speech recognition (ASR) transcripts (word error rate is about 37%), the acquired knowledge can be successfully applied to SLU modeling through MF techniques, guiding the direction of the methodology.",
"The main planned tasks include: • Semantic concept identification • Semantic concept annotation • SLU modeling by matrix factorization In this thesis proposal, ongoing work and future plans have been presented towards an automatically built domain-specific SDS.",
"With increasing semantic resources, such as Google's Knowledge Graph and Microsoft Satori, the dissertation shows that utilizing available knowledge improves the generalization and the scalability of dialogue system development for practical usage."
]
}
|
{
"paper_header_number": [
"1",
"3",
"4",
"4.1",
"4.2",
"5",
"5.1",
"5.2",
"5.3",
"5.4",
"6"
],
"paper_header_content": [
"Introduction",
"The Proposed Work",
"Knowledge Acquisition",
"Ontology Induction",
"Structure Learning",
"SLU Modeling by Matrix Factorization",
"Feature Model",
"Relation Propagation Model",
"Integrated Model",
"Model Learning",
"Conclusion and Future Work"
]
}
|
GEM-SciDuet-train-48#paper-1077#slide-15
|
Matrix factorization (MF)
|
2ND ISSUE: HOW TO LEARN IMPLICIT SEMANTICS?
cheap food restaurant expensiveness food locale_by_use
Word Relation Model Slot Induction Slot Relation Model
Reasoning with Matrix Factorization
MF method completes a partially-missing matrix based on a low-rank latent semantics assumption.
The decomposed matrices represent low-rank latent semantics for utterances and words/slots respectively
The product of two matrices fills the probability of hidden semantics
Word Observation Slot Candidate
|
2ND ISSUE: HOW TO LEARN IMPLICIT SEMANTICS?
cheap food restaurant expensiveness food locale_by_use
Word Relation Model Slot Induction Slot Relation Model
Reasoning with Matrix Factorization
MF method completes a partially-missing matrix based on a low-rank latent semantics assumption.
The decomposed matrices represent low-rank latent semantics for utterances and words/slots respectively
The product of two matrices fills the probability of hidden semantics
Word Observation Slot Candidate
|
[] |
GEM-SciDuet-train-48#paper-1077#slide-16
|
1077
|
Unsupervised Learning and Modeling of Knowledge and Intent for Spoken Dialogue Systems
|
Spoken dialogue systems (SDS) are rapidly appearing in various smart devices (smartphone, smart-TV, in-car navigating system, etc). The key role in a successful SDS is a spoken language understanding (SLU) component, which parses user utterances into semantic concepts in order to understand users' intentions. However, such semantic concepts and their structure are manually created by experts, and the annotation process results in extremely high cost and poor scalability in system development. Therefore, the dissertation focuses on improving SDS generalization and scalability by automatically inferring domain knowledge and learning structures from unlabeled conversations through a matrix factorization (MF) technique. With the automatically acquired semantic concepts and structures, we further investigate whether such information can be utilized to effectively understand user utterances and then show the feasibility of reducing human effort during SDS development.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113
],
"paper_content_text": [
"Introduction Various smart devices (e.g.",
"smartphone, smart-TV, in-car navigating system) are incorporating spoken language interfaces, a.k.a.",
"spoken dialogue systems (SDS), in order to help users finish tasks more efficiently.",
"The key role in a successful SDS is a spoken language understanding (SLU) component; in order to capture the language variation from dialogue participants, the SLU component must create a mapping between the natural language inputs and semantic representations that correspond to users' intentions.",
"The semantic representation must include \"concepts\" and a \"structure\": concepts are the domain-specific topics, and the structure describes the relations between concepts and conveys intentions.",
"However, most prior work focused on learning the mapping between utterances and semantic representations, where such knowledge still remains predefined.",
"The need for annotations results in extremely high cost and poor scalability in system development.",
"Therefore, current technology usually limits conversational interactions to a few narrow predefined domains/topics.",
"With the increasing conversational interactions, this dissertation focuses on improving generalization and scalability of building SDSs with little human effort.",
"In order to achieve the goal, two questions need to be addressed: 1) Given unlabelled conversations, how can a system automatically induce and organize the domain-specific concepts?",
"2) With the automatically acquired knowledge, how can a system understand user utterances and intents?",
"To tackle the above problems, we propose to acquire the domain knowledge that captures human's semantics, intents, and behaviors.",
"Then based on the acquired knowledge, we build an SLU component to understand users and to offer better interactions in dialogues.",
"The dissertation shows the feasibility of building a dialogue learning system that is able to understand how particular domains work based on unlabeled conversations.",
"As a result, an initial SDS can be automatically built according to the learned knowledge, and its performance can be quickly improved by interacting with users for practical usage, presenting the potential of reducing human effort for SDS development.",
"Our MF method completes a partially-missing matrix for semantic decoding/behavior prediction.",
"Dark circles are observed facts, shaded circles are inferred facts.",
"The ontology induction maps observed feature patterns to semantic concepts.",
"The feature relation model constructs correlations between observed feature patterns.",
"The concept relation model learns the high-level semantic correlations for inferring hidden semantic slots or predicting subsequent behaviors.",
"Reasoning with matrix factorization incorporates these models jointly, and produces a coherent and domain-specific SLU model.",
"the intent detection problem in SLU, showing that results obtained from the unsupervised training process align well with the performance of traditional supervised learning.",
"Following this success in unsupervised SLU, recent studies have also obtained interesting results on the tasks of relation detection (Hakkani-Tür et al., 2013; Chen et al., 2014a), entity extraction (Wang et al., 2014), and extending domain coverage (El-Kahky et al., 2014; Chen and Rudnicky, 2014).",
"However, most of the studies above do not explicitly learn latent factor representations from the data, while we hypothesize that better robustness can be achieved by explicitly modeling the measurement errors (usually produced by automatic speech recognizers (ASR)) with latent variable models and by taking additional local and global semantic constraints into account.",
"Latent Variable Modeling in SLU Early studies on latent variable modeling in speech included the classic hidden Markov model for statistical speech recognition (Jelinek, 1997) .",
"Recently, were the first to study the intent detection problem using query logs and a discrete Bayesian latent variable model.",
"In the field of dialogue modeling, the partially observable Markov decision process (POMDP) (Young et al., 2013) model is a popular technique for dialogue management, reducing the cost of handcrafted dialogue managers while producing robustness against speech recognition errors.",
"More recently, used a semi-supervised LDA model to show improvement on the slot filling task.",
"Also, Zhai and Williams (2014) proposed an unsupervised model for connecting words with latent states in HMMs using topic models, obtaining interesting qualitative and quantitative results.",
"However, for unsupervised SLU, it is not obvious how to incorporate additional information in the HMMs.",
"With increasing work on learning feature matrices for language representations (Mikolov et al., 2013), matrix factorization (MF) has become very popular for both implicit and explicit feedback (Rendle et al., 2009; Chen et al., 2015a).",
"This thesis proposal is the first to propose a framework about unsupervised SLU modeling, which is able to simultaneously consider various local and global knowledge automatically learned from unlabelled data using a matrix factorization (MF) technique.",
"The Proposed Work The proposed framework is shown in Figure 1(a) , where there are two main parts, one is knowledge acquisition and another is SLU modeling by MF.",
"The first part is to acquire the domain knowledge that is useful for building the domain-specific dialogue systems, which addresses the question about how to induce and organize the semantic concepts (the first problem).",
"Here we propose ontology induction and structure learning procedures.",
"The ontology induction refers to the semantic concept induction (yellow block) and the structure learning refers to relation models (blue and pink blocks) in Figure 1 (a).",
"The details are described in Section 4.",
"The second part is to self-train an SLU component using the acquired knowledge for the domain-specific SDS; this part answers the question about how to utilize the obtained information in SDSs to understand user utterances and intents.",
"There are two aspects of SLU modeling: semantic decoding and behavior prediction.",
"Semantic decoding parses the input utterances into semantic forms for better understanding, and behavior prediction predicts the subsequent user behaviors for providing better system interactions.",
"This dissertation plans to apply MF techniques to unsupervised SLU modeling, including both semantic decoding and behavior prediction.",
"In the proposed model, we first build a feature matrix to represent training utterances, where each row refers to an utterance and each column refers to an observed feature pattern or a learned semantic concept (either a slot or a behavior).",
"The MF approach is able to learn the latent feature vectors for utterances and semantic concepts, inferring implicit semantics to improve the decoding process; namely, it fills the matrix with probabilities (lower part of the matrix in Figure 1(b)).",
"The feature model is built on the observed feature patterns and the learned concepts, where the concepts are obtained from the knowledge acquisition process (Chen et al., 2015b).",
"Section 5.1 explains the detail of the feature model.",
"In order to consider the additional structure information, we propose a relation propagation model based on the learned structure, which includes a feature relation model (blue block) and a concept relation model (pink block) described in Section 5.2.",
"Finally we train an SLU model by learning latent feature vectors for utterances and slots/behaviors through MF techniques.",
"Combining with a relation propagation model, the trained SLU model is able to estimate the probability that each concept occurs in the testing utterance, and how likely each concept is domain-specific simultaneously.",
"In other words, the SLU model is able to transform testing utterances into domain-specific semantic representations or predicted behaviors without human involvement.",
"Knowledge Acquisition Given unlabeled conversations and available knowledge resources, we plan to extract organized knowledge that can be used for domain-specific SDSs.",
"The ontology induction and structure learning are proposed to automate an ontology building process.",
"Chen et al. (2014b) proposed to automatically induce semantic slots for SDSs by frame-semantic parsing, where all ASR-decoded utterances are parsed using SEMAFOR, a state-of-the-art frame-semantic parser (Das et al., 2010; Das et al., 2013), and then all frames from the parsed results are extracted as slot candidates (Dinarelli et al., 2009).",
"For example, Figure 2 shows an example of an ASR-decoded text output parsed by SEMAFOR.",
"There are three frames (capability, expensiveness, and locale by use) in the utterance, which we consider as slot candidates.",
"Ontology Induction Since SEMAFOR was trained on FrameNet annotation, which has a more generic frame-semantic context, not all the frames from the parsing results can be used as the actual slots in domain-specific dialogue systems.",
"For instance, in Figure 2 , \"expensiveness\" and \"locale by use\" frames are essentially the key slots for the purpose of understanding in the restaurant query domain, whereas the \"capability\" frame does not convey particularly valuable information for the domain-specific SDS.",
"In order to fix this issue, Chen et al.",
"(2014b) proved that integrating continuous-valued word embeddings with a probabilistic frame-semantic parser is able to identify key semantic slots in an unsupervised fashion, reducing the cost of designing task-oriented SDSs.",
"Structure Learning A key challenge of designing a coherent semantic ontology for SLU is to consider the structure and relations between semantic concepts.",
"In practice, however, it is difficult for domain experts and professional annotators to define a coherent slot set, while considering various lexical, syntactic, and semantic dependencies.",
"The previous work exploited the typed syntactic dependency theory for unsupervised induction and organization of semantic slots in SDSs (Chen et al., 2015b) .",
"More specifically, two knowledge graphs, a slot-based semantic knowledge graph and a word-based lexical knowledge graph, are automatically constructed.",
"To jointly consider the word-to-word, word-to-slot, and slot-to-slot relations, we use a random walk inference algorithm to combine these two knowledge graphs, guided by dependency grammars.",
"Figure 3 is a simplified example of the automatically built semantic knowledge graph corresponding to the restaurant domain.",
"The experiments showed that considering inter-slot relations is crucial for generating a more coherent and complete slot set, resulting in a better SLU model, while enhancing the interpretability of semantic slots.",
"SLU Modeling by Matrix Factorization For two aspects of SLU modeling: semantic decoding and behavior prediction, we plan to apply MF to both tasks by treating learned concepts as semantic slots and human behaviors respectively.",
"Considering the benefits brought by MF techniques, including 1) modeling the noisy data, 2) modeling hidden information, and 3) modeling the dependency between observations, the dissertation applies an MF approach to SLU modeling for SDSs.",
"In our model, we use U to denote the set of input utterances, F as the set of observed feature patterns, and S as the set of semantic concepts we would like to predict (slots or human behaviors).",
"The pair of an utterance u ∈ U and a feature/concept x ∈ {F + S}, ⟨u, x⟩, is a fact.",
"The input to our model is a set of observed facts O, and the observed facts for a given utterance are denoted by {⟨u, x⟩ ∈ O}.",
"The goal of our model is to estimate, for a given utterance u and a given feature pattern/concept x, the probability p(M_{u,x} = 1), where M_{u,x} is a binary random variable that is true if and only if x is a feature pattern/domain-specific concept in the utterance u.",
"We introduce a series of exponential family models that estimate the probability using a natural parameter θ_{u,x} and the logistic sigmoid function: p(M_{u,x} = 1 | θ_{u,x}) = σ(θ_{u,x}) = 1 / (1 + exp(−θ_{u,x})). (1)",
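As an illustrative sketch (not the authors' code), the per-cell probability in (1) can be computed as follows; the latent vectors and their inner product as θ_{u,x} are assumptions about the factorized form:

```python
import numpy as np

def cell_probability(theta_ux):
    """Logistic sigmoid from Eq. (1): p(M_ux = 1 | theta_ux) = 1 / (1 + exp(-theta_ux))."""
    return 1.0 / (1.0 + np.exp(-theta_ux))

# In a matrix factorization model, theta_ux is typically the inner product of
# latent vectors for the utterance u and the feature/concept x (illustrative values).
u_vec = np.array([0.5, -0.2, 1.0])
x_vec = np.array([0.3, 0.8, -0.1])
p = cell_probability(u_vec @ x_vec)
assert 0.0 < p < 1.0
```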
"We construct a matrix M of size |U| × (|F| + |S|) as observed facts for MF by integrating a feature model and a relation propagation model below.",
"Feature Model First, we build a binary feature pattern matrix F_f based on the observations, where each row refers to an utterance and each column refers to a feature pattern (a word or a phrase).",
"In other words, F_f carries the basic word/phrase vector for each utterance, which is illustrated as the left part of the matrix in Figure 1(b).",
"Then we build a binary matrix F_s based on the induced semantic concepts from Section 4.1, which also denotes the slot/behavior features for all utterances (right part of the matrix in Figure 1(b)).",
"For building the feature model M_F, we concatenate the two matrices and obtain M_F = [F_f  F_s], which refers to the upper part of the matrix in Figure 1(b) for training utterances.",
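A minimal sketch of the feature model with toy binary matrices; the utterances, feature patterns, and concepts here are invented for illustration only:

```python
import numpy as np

# 3 toy utterances, 4 observed word/phrase patterns (F_f), 2 induced concepts (F_s).
F_f = np.array([[1, 0, 1, 0],
                [0, 1, 0, 0],
                [1, 1, 0, 1]])
F_s = np.array([[1, 0],
                [0, 1],
                [1, 1]])

# Feature model: column-wise concatenation M_F = [F_f  F_s],
# the upper part of the matrix in Figure 1(b).
M_F = np.hstack([F_f, F_s])
assert M_F.shape == (3, 4 + 2)
```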
"Relation Propagation Model It is shown that the structure of semantic concepts helps decide domain-specific slots and further improves the SLU performance (Chen et al., 2015b) .",
"With the learned structure from Section 4.2, we can model the relations between semantic concepts, such as inter-slot and inter-behavior relations.",
"Also, the relations between feature patterns can be modeled in a similar way.",
"We construct two knowledge graphs to model the structure: • Feature knowledge graph: G_f = ⟨V_f, E_{ff}⟩, where V_f = {f_i ∈ F} and E_{ff} = {e_{ij} | f_i, f_j ∈ V_f}.",
"• Semantic concept knowledge graph: G_s = ⟨V_s, E_{ss}⟩, where V_s = {s_i ∈ S} and E_{ss} = {e_{ij} | s_i, s_j ∈ V_s}.",
"The structured graph can model the relation between a connected node pair (x_i, x_j) as r(x_i, x_j).",
"Here we compute two matrices, R_s = [r(s_i, s_j)]_{|S|×|S|} and R_f = [r(f_i, f_j)]_{|F|×|F|}, to represent concept relations and feature relations, respectively.",
"With the built relation models, we combine them into a relation propagation matrix M_R: M_R = [R_f 0; 0 R_s]. (2)",
"The goal of this matrix is to propagate scores between nodes according to different types of relations in the constructed knowledge graphs (Chen and Metze, 2012).",
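The block-diagonal structure of (2) can be sketched as follows; the relation scores below are invented placeholders (in practice they would come from the learned knowledge graphs):

```python
import numpy as np

def relation_propagation_matrix(R_f, R_s):
    """Assemble the block-diagonal M_R = [[R_f, 0], [0, R_s]] from Eq. (2)."""
    nf, ns = R_f.shape[0], R_s.shape[0]
    M_R = np.zeros((nf + ns, nf + ns))
    M_R[:nf, :nf] = R_f
    M_R[nf:, nf:] = R_s
    return M_R

R_f = np.array([[0.0, 0.5],
                [0.5, 0.0]])   # placeholder feature-feature relation scores
R_s = np.array([[0.0, 0.2],
                [0.2, 0.0]])   # placeholder concept-concept relation scores
M_R = relation_propagation_matrix(R_f, R_s)
assert M_R.shape == (4, 4)
assert M_R[0, 2] == 0.0  # off-diagonal blocks are zero: no cross-type propagation
```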
"Integrated Model With a feature model M_F and a relation propagation model M_R, we integrate them into a single matrix.",
"M = M_F · (M_R + I) = [F_f R_f + F_f  F_s R_s + F_s], (3) where M is the final matrix and I is the identity matrix, included so that the original values are retained.",
"The matrix M is similar to M_F, but some weights are enhanced through the relation propagation model.",
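A toy numerical check of (3), confirming that M = M_F · (M_R + I) equals the column-wise concatenation [F_f R_f + F_f, F_s R_s + F_s] when M_R is block-diagonal; all values are invented for illustration:

```python
import numpy as np

F_f = np.array([[1.0, 0.0],
                [0.0, 1.0],
                [1.0, 1.0]])            # toy word/phrase features
F_s = np.array([[1.0], [0.0], [1.0]])   # toy concept features
R_f = np.array([[0.0, 0.4],
                [0.4, 0.0]])            # toy feature relations
R_s = np.array([[0.0]])                 # toy concept relations (single concept)

M_F = np.hstack([F_f, F_s])
M_R = np.zeros((3, 3))
M_R[:2, :2] = R_f
M_R[2:, 2:] = R_s

# Eq. (3): relation propagation plus the identity to keep the original values.
M = M_F @ (M_R + np.eye(3))
expected = np.hstack([F_f @ R_f + F_f, F_s @ R_s + F_s])
assert np.allclose(M, expected)
```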
"The feature relations are built by F_f R_f, which is the matrix with internal weight propagation on the feature knowledge graph (the blue arrow in Figure 1(b)).",
"Similarly, F_s R_s models the semantic concept correlations, and can be treated as the matrix with internal weight propagation on the semantic concept knowledge graph (the pink arrow in Figure 1(b)).",
"The propagation model can be treated as running a random walk algorithm on the graphs.",
"By integrating with the relation propagation model, the relations can be propagated via the knowledge graphs, and the hidden information may be modeled based on the assumption that mutual relations usually help inference (Chen et al., 2015b) .",
"Hence, the structure information can be automatically involved in the matrix.",
"In conclusion, for each utterance, the integrated model not only predicts the probabilities that semantic concepts occur but also considers whether they are domain-specific.",
"Model Learning The proposed model is parameterized through weights and latent component vectors, where the parameters are estimated by maximizing the log likelihood of observed data (Collins et al., 2001) .",
"θ* = arg max_θ ∏_{u∈U} p(θ | M_u) = arg max_θ ∏_{u∈U} p(M_u | θ) p(θ) = arg max_θ Σ_{u∈U} ln p(M_u | θ) − λ_θ, (4) where M_u is the vector corresponding to the utterance u from M_{u,x} in (1), because we assume that each utterance is independent of others.",
"To avoid treating unobserved facts as designed negative facts, we consider our positive-only data as implicit feedback.",
"Bayesian Personalized Ranking (BPR) is an optimization criterion that learns from implicit feedback for MF, which uses a variant of the ranking: giving observed true facts higher scores than unobserved (true or false) facts (Rendle et al., 2009 ).",
"Riedel et al.",
"(2013) also showed that BPR learns the implicit relations and improves a relation extraction task.",
"To estimate the parameters in (4), we create a dataset of ranked pairs from M in (3): for each utterance u and each observed fact f⁺ = ⟨u, x⁺⟩, where M_{u,x⁺} ≥ δ, we choose each semantic concept x⁻ such that f⁻ = ⟨u, x⁻⟩, where M_{u,x⁻} < δ, which refers to a semantic concept we have not observed in utterance u.",
"That is, we construct the observed data O from M .",
"Then, for each pair of facts f⁺ and f⁻, we want to model p(f⁺) > p(f⁻) and hence θ_{f⁺} > θ_{f⁻} according to (1).",
"BPR maximizes the summation over ranked pairs, where the objective is Σ_{u∈U} ln p(M_u | θ) = Σ_{f⁺∈O} Σ_{f⁻∉O} ln σ(θ_{f⁺} − θ_{f⁻}). (5)",
"The BPR objective is an approximation to the per-utterance AUC (area under the ROC curve), which directly correlates with what we want to achieve: well-ranked semantic concepts per utterance, which denotes a better estimation of semantic slots or human behaviors.",
"To maximize the objective in (5), we employ a stochastic gradient descent (SGD) algorithm (Rendle et al., 2009) .",
"For each randomly sampled observed fact ⟨u, x⁺⟩, we sample an unobserved fact ⟨u, x⁻⟩, which results in |O| fact pairs (f⁻, f⁺).",
"For each pair, we perform an SGD update using the gradient of the corresponding objective function for matrix factorization (Gantner et al., 2011) .",
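One SGD update for the BPR objective in (5) can be sketched as follows; this is a sketch under assumed latent-vector parameterization (θ as inner products), learning rate, and regularization, not the authors' implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bpr_sgd_step(u, v_pos, v_neg, lr=0.05, reg=0.01):
    """One stochastic ascent step on ln sigma(theta_f+ - theta_f-) for a sampled
    (observed, unobserved) fact pair; hyperparameters lr/reg are illustrative."""
    x_uij = u @ v_pos - u @ v_neg      # score margin between the two facts
    g = 1.0 - sigmoid(x_uij)           # derivative of ln sigma at x_uij
    u_new = u + lr * (g * (v_pos - v_neg) - reg * u)
    v_pos_new = v_pos + lr * (g * u - reg * v_pos)
    v_neg_new = v_neg + lr * (-g * u - reg * v_neg)
    return u_new, v_pos_new, v_neg_new

# Repeated updates push the observed fact's score above the unobserved one's.
u = np.array([0.1, 0.1, 0.1])
v_pos = np.array([0.2, 0.0, 0.1])
v_neg = np.array([0.0, 0.2, 0.1])
margin_before = u @ v_pos - u @ v_neg
for _ in range(50):
    u, v_pos, v_neg = bpr_sgd_step(u, v_pos, v_neg)
assert u @ v_pos - u @ v_neg > margin_before
```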
"Conclusion and Future Work This thesis proposal proposes an unsupervised SLU approach by automating the dialogue learning process on speech conversations.",
"The preliminary results show that for the automatic speech recognition (ASR) transcripts (word error rate is about 37%), the acquired knowledge can be successfully applied to SLU modeling through MF techniques, guiding the direction of the methodology.",
"The main planned tasks include: 1) semantic concept identification, 2) semantic concept annotation, and 3) SLU modeling by matrix factorization. In this thesis proposal, ongoing work and future plans have been presented towards an automatically built domain-specific SDS.",
"With increasing semantic resources, such as Google's Knowledge Graph and Microsoft Satori, the dissertation shows the feasibility that utilizing available knowledge improves the generalization and the scalability of dialogue system development for practical usage."
]
}
|
{
"paper_header_number": [
"1",
"3",
"4",
"4.1",
"4.2",
"5",
"5.1",
"5.2",
"5.3",
"5.4",
"6"
],
"paper_header_content": [
"Introduction",
"The Proposed Work",
"Knowledge Acquisition",
"Ontology Induction",
"Structure Learning",
"SLU Modeling by Matrix Factorization",
"Feature Model",
"Relation Propagation Model",
"Integrated Model",
"Model Learning",
"Conclusion and Future Work"
]
}
|
GEM-SciDuet-train-48#paper-1077#slide-16
|
Bayesian personalized ranking for MF
|
not treat unobserved facts as negative samples (true or false)
give observed facts higher scores than unobserved facts
The objective is to learn a set of well-ranked semantic slots per utterance.
|
not treat unobserved facts as negative samples (true or false)
give observed facts higher scores than unobserved facts
The objective is to learn a set of well-ranked semantic slots per utterance.
|
[] |
GEM-SciDuet-train-48#paper-1077#slide-17
|
1077
|
Unsupervised Learning and Modeling of Knowledge and Intent for Spoken Dialogue Systems
|
Spoken dialogue systems (SDS) are rapidly appearing in various smart devices (smartphone, smart-TV, in-car navigating system, etc). The key role in a successful SDS is a spoken language understanding (SLU) component, which parses user utterances into semantic concepts in order to understand users' intentions. However, such semantic concepts and their structure are manually created by experts, and the annotation process results in extremely high cost and poor scalability in system development. Therefore, the dissertation focuses on improving SDS generalization and scalability by automatically inferring domain knowledge and learning structures from unlabeled conversations through a matrix factorization (MF) technique. With the automatically acquired semantic concepts and structures, we further investigate whether such information can be utilized to effectively understand user utterances and then show the feasibility of reducing human effort during SDS development.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113
],
"paper_content_text": [
"Introduction Various smart devices (e.g. smartphone, smart-TV, in-car navigating system) are incorporating spoken language interfaces, a.k.a. spoken dialogue systems (SDS), in order to help users finish tasks more efficiently.",
"The key role in a successful SDS is a spoken language understanding (SLU) component; in order to capture the language variation from dialogue participants, the SLU component must create a mapping between the natural language inputs and semantic representations that correspond to users' intentions.",
"The semantic representation must include \"concepts\" and a \"structure\": concepts are the domain-specific topics, and the structure describes the relations between concepts and conveys intentions.",
"However, most prior work focused on learning the mapping between utterances and semantic representations, where such knowledge still remains predefined.",
"The need for annotations results in extremely high cost and poor scalability in system development.",
"Therefore, current technology usually limits conversational interactions to a few narrow predefined domains/topics.",
"With the increasing conversational interactions, this dissertation focuses on improving generalization and scalability of building SDSs with little human effort.",
"In order to achieve the goal, two questions need to be addressed: 1) Given unlabelled conversations, how can a system automatically induce and organize the domain-specific concepts?",
"2) With the automatically acquired knowledge, how can a system understand user utterances and intents?",
"To tackle the above problems, we propose to acquire the domain knowledge that captures human's semantics, intents, and behaviors.",
"Then based on the acquired knowledge, we build an SLU component to understand users and to offer better interactions in dialogues.",
"The dissertation shows the feasibility of building a dialogue learning system that is able to understand how particular domains work based on unlabeled conversations.",
"As a result, an initial SDS can be automatically built according to the learned knowledge, and its performance can be quickly improved by interacting with users for practical usage, presenting the potential of reducing human effort for SDS development.",
"Our MF method completes a partially-missing matrix for semantic decoding/behavior prediction.",
"Dark circles are observed facts, shaded circles are inferred facts.",
"The ontology induction maps observed feature patterns to semantic concepts.",
"The feature relation model constructs correlations between observed feature patterns.",
"The concept relation model learns the high-level semantic correlations for inferring hidden semantic slots or predicting subsequent behaviors.",
"Reasoning with matrix factorization incorporates these models jointly, and produces a coherent and domain-specific SLU model.",
"the intent detection problem in SLU, showing that results obtained from the unsupervised training process align well with the performance of traditional supervised learning.",
"Following this success in unsupervised SLU, recent studies have also obtained interesting results on the tasks of relation detection (Hakkani-Tür et al., 2013; Chen et al., 2014a), entity extraction (Wang et al., 2014), and extending domain coverage (El-Kahky et al., 2014; Chen and Rudnicky, 2014).",
"However, most of the studies above do not explicitly learn latent factor representations from the data, while we hypothesize that better robustness can be achieved by explicitly modeling the measurement errors (usually produced by automatic speech recognizers (ASR)) with latent variable models and by taking additional local and global semantic constraints into account.",
"Latent Variable Modeling in SLU Early studies on latent variable modeling in speech included the classic hidden Markov model for statistical speech recognition (Jelinek, 1997) .",
"Recently, were the first to study the intent detection problem using query logs and a discrete Bayesian latent variable model.",
"In the field of dialogue modeling, the partially observable Markov decision process (POMDP) (Young et al., 2013) model is a popular technique for dialogue management, reducing the cost of handcrafted dialogue managers while producing robustness against speech recognition errors.",
"More recently, used a semi-supervised LDA model to show improvement on the slot filling task.",
"Also, Zhai and Williams (2014) proposed an unsupervised model for connecting words with latent states in HMMs using topic models, obtaining interesting qualitative and quantitative results.",
"However, for unsupervised SLU, it is not obvious how to incorporate additional information in the HMMs.",
"With increasing work on learning feature matrices for language representations (Mikolov et al., 2013), matrix factorization (MF) has become very popular for both implicit and explicit feedback (Rendle et al., 2009; Chen et al., 2015a).",
"This thesis proposal is the first to propose a framework about unsupervised SLU modeling, which is able to simultaneously consider various local and global knowledge automatically learned from unlabelled data using a matrix factorization (MF) technique.",
"The Proposed Work The proposed framework is shown in Figure 1(a) , where there are two main parts, one is knowledge acquisition and another is SLU modeling by MF.",
"The first part is to acquire the domain knowledge that is useful for building the domain-specific dialogue systems, which addresses the question about how to induce and organize the semantic concepts (the first problem).",
"Here we propose ontology induction and structure learning procedures.",
"The ontology induction refers to the semantic concept induction (yellow block) and the structure learning refers to relation models (blue and pink blocks) in Figure 1 (a).",
"The details are described in Section 4.",
"The second part is to self-train an SLU component using the acquired knowledge for the domain-specific SDS; this part answers the question about how to utilize the obtained information in SDSs to understand user utterances and intents.",
"There are two aspects of SLU modeling: semantic decoding and behavior prediction.",
"Semantic decoding parses the input utterances into semantic forms for better understanding, and behavior prediction predicts the subsequent user behaviors for providing better system interactions.",
"This dissertation plans to apply MF techniques to unsupervised SLU modeling, including both semantic decoding and behavior prediction.",
"In the proposed model, we first build a feature matrix to represent training utterances, where each row refers to an utterance and each column refers to an observed feature pattern or a learned semantic concept (either a slot or a behavior).",
"The MF approach is able to learn the latent feature vectors for utterances and semantic concepts, inferring implicit semantics to improve the decoding process; namely, it fills the matrix with probabilities (lower part of the matrix in Figure 1(b)).",
"The feature model is built on the observed feature patterns and the learned concepts, where the concepts are obtained from the knowledge acquisition process (Chen et al., 2015b).",
"Section 5.1 explains the detail of the feature model.",
"In order to consider the additional structure information, we propose a relation propagation model based on the learned structure, which includes a feature relation model (blue block) and a concept relation model (pink block) described in Section 5.2.",
"Finally we train an SLU model by learning latent feature vectors for utterances and slots/behaviors through MF techniques.",
"Combining with a relation propagation model, the trained SLU model is able to estimate the probability that each concept occurs in the testing utterance, and how likely each concept is domain-specific simultaneously.",
"In other words, the SLU model is able to transform testing utterances into domain-specific semantic representations or predicted behaviors without human involvement.",
"Knowledge Acquisition Given unlabeled conversations and available knowledge resources, we plan to extract organized knowledge that can be used for domain-specific SDSs.",
"The ontology induction and structure learning are proposed to automate an ontology building process.",
"Chen et al. (2014b) proposed to automatically induce semantic slots for SDSs by frame-semantic parsing, where all ASR-decoded utterances are parsed using SEMAFOR, a state-of-the-art frame-semantic parser (Das et al., 2010; Das et al., 2013), and then all frames from the parsed results are extracted as slot candidates (Dinarelli et al., 2009).",
"For example, Figure 2 shows an example of an ASR-decoded text output parsed by SEMAFOR.",
"There are three frames (capability, expensiveness, and locale by use) in the utterance, which we consider as slot candidates.",
"Ontology Induction Since SEMAFOR was trained on FrameNet annotation, which has a more generic frame-semantic context, not all the frames from the parsing results can be used as the actual slots in domain-specific dialogue systems.",
"For instance, in Figure 2 , \"expensiveness\" and \"locale by use\" frames are essentially the key slots for the purpose of understanding in the restaurant query domain, whereas the \"capability\" frame does not convey particularly valuable information for the domain-specific SDS.",
"In order to fix this issue, Chen et al.",
"(2014b) proved that integrating continuous-valued word embeddings with a probabilistic frame-semantic parser is able to identify key semantic slots in an unsupervised fashion, reducing the cost of designing task-oriented SDSs.",
"Structure Learning A key challenge of designing a coherent semantic ontology for SLU is to consider the structure and relations between semantic concepts.",
"In practice, however, it is difficult for domain experts and professional annotators to define a coherent slot set, while considering various lexical, syntactic, and semantic dependencies.",
"The previous work exploited the typed syntactic dependency theory for unsupervised induction and organization of semantic slots in SDSs (Chen et al., 2015b) .",
"More specifically, two knowledge graphs, a slot-based semantic knowledge graph and a word-based lexical knowledge graph, are automatically constructed.",
"To jointly consider the word-to-word, word-to-slot, and slot-to-slot relations, we use a random walk inference algorithm to combine these two knowledge graphs, guided by dependency grammars.",
"Figure 3 is a simplified example of the automatically built semantic knowledge graph corresponding to the restaurant domain.",
"The experiments showed that considering inter-slot relations is crucial for generating a more coherent and complete slot set, resulting in a better SLU model, while enhancing the interpretability of semantic slots.",
"SLU Modeling by Matrix Factorization For two aspects of SLU modeling: semantic decoding and behavior prediction, we plan to apply MF to both tasks by treating learned concepts as semantic slots and human behaviors respectively.",
"Considering the benefits brought by MF techniques, including 1) modeling the noisy data, 2) modeling hidden information, and 3) modeling the dependency between observations, the dissertation applies an MF approach to SLU modeling for SDSs.",
"In our model, we use U to denote the set of input utterances, F as the set of observed feature patterns, and S as the set of semantic concepts we would like to predict (slots or human behaviors).",
"The pair of an utterance u ∈ U and a feature/concept x ∈ {F + S}, ⟨u, x⟩, is a fact.",
"The input to our model is a set of observed facts O, and the observed facts for a given utterance are denoted by {⟨u, x⟩ ∈ O}.",
"The goal of our model is to estimate, for a given utterance u and a given feature pattern/concept x, the probability p(M_{u,x} = 1), where M_{u,x} is a binary random variable that is true if and only if x is a feature pattern/domain-specific concept in the utterance u.",
"We introduce a series of exponential family models that estimate the probability using a natural parameter θ_{u,x} and the logistic sigmoid function: p(M_{u,x} = 1 | θ_{u,x}) = σ(θ_{u,x}) = 1 / (1 + exp(−θ_{u,x})). (1)",
"We construct a matrix M of size |U| × (|F| + |S|) as observed facts for MF by integrating a feature model and a relation propagation model below.",
"Feature Model First, we build a binary feature pattern matrix F_f based on the observations, where each row refers to an utterance and each column refers to a feature pattern (a word or a phrase).",
"In other words, F_f carries the basic word/phrase vector for each utterance, which is illustrated as the left part of the matrix in Figure 1(b).",
"Then we build a binary matrix F s based on the induced semantic concepts from Section 4.1, which also denotes the slot/behavior features for all utterances (right part of the matrix in Figure 1(b) ).",
"For building the feature model M F , we concatenate two matrices and obtain M F = [ F f F s ], which refers to the upper part of the matrix in Figure 1(b) for training utterances.",
"Relation Propagation Model It is shown that the structure of semantic concepts helps decide domain-specific slots and further improves the SLU performance (Chen et al., 2015b) .",
"With the learned structure from Section 4.2, we can model the relations between semantic concepts, such as inter-slot and inter-behavior relations.",
"Also, the relations between feature patterns can be modeled in the similar way.",
"We construct two knowledge graphs to model the structure: • Feature knowledge graph is built as G_f = ⟨V_f, E_ff⟩, where V_f = {f_i ∈ F} and E_ff = {e_ij | f_i, f_j ∈ V_f}.",
"• Semantic concept knowledge graph is built as G_s = ⟨V_s, E_ss⟩, where V_s = {s_i ∈ S} and E_ss = {e_ij | s_i, s_j ∈ V_s}.",
"The structured graph can model the relation between the connected node pair (x i , x j ) as r(x i , x j ).",
"Here we compute two matrices R_s = [r(s_i, s_j)]_{|S|×|S|} and R_f = [r(f_i, f_j)]_{|F|×|F|} to represent concept relations and feature relations respectively.",
"With the built relation models, we combine them as a relation propagation matrix M_R: M_R = [R_f, 0; 0, R_s]. (2)",
"The goal of this matrix is to propagate scores between nodes according to different types of relations in the constructed knowledge graphs (Chen and Metze, 2012).",
"Integrated Model With a feature model M F and a relation propagation model M R , we integrate them into a single matrix.",
"M = M_F · (M_R + I) = [F_f R_f + F_f, F_s R_s + F_s], (3) where M is the final matrix and I is the identity matrix, included so that the original values are retained.",
"The matrix M is similar to M F , but some weights are enhanced through the relation propagation model.",
"The feature relations are built by F f R f , which is the matrix with internal weight propagation on the feature knowledge graph (the blue arrow in Figure 1(b) ).",
"Similarly, F s R s models the semantic concept correlations, and can be treated as the matrix with internal weight propagation on the semantic concept knowledge graph (the pink arrow in Figure 1(b) ).",
"The propagation model can be treated as running a random walk algorithm on the graphs.",
"By integrating with the relation propagation model, the relations can be propagated via the knowledge graphs, and the hidden information may be modeled based on the assumption that mutual relations usually help inference (Chen et al., 2015b) .",
"Hence, the structure information can be automatically involved in the matrix.",
"In conclusion, for each utterance, the integrated model not only predicts the probabilities that semantic concepts occur but also considers whether they are domain-specific.",
"Model Learning The proposed model is parameterized through weights and latent component vectors, where the parameters are estimated by maximizing the log likelihood of observed data (Collins et al., 2001) .",
"θ* = arg max_θ ∏_{u∈U} p(θ | M_u) = arg max_θ ∏_{u∈U} p(M_u | θ) p(θ) = arg max_θ Σ_{u∈U} ln p(M_u | θ) − λ_θ, (4) where M_u is the vector corresponding to the utterance u from M_{u,x} in (1), because we assume that each utterance is independent of others.",
"To avoid treating unobserved facts as designed negative facts, we consider our positive-only data as implicit feedback.",
"Bayesian Personalized Ranking (BPR) is an optimization criterion that learns from implicit feedback for MF, which uses a variant of the ranking: giving observed true facts higher scores than unobserved (true or false) facts (Rendle et al., 2009 ).",
"Riedel et al.",
"(2013) also showed that BPR learns the implicit relations and improves a relation extraction task.",
"To estimate the parameters in (4), we create a dataset of ranked pairs from M in (3): for each utterance u and each observed fact f+ = ⟨u, x+⟩, where M_{u,x+} ≥ δ, we choose each semantic concept x− such that f− = ⟨u, x−⟩, where M_{u,x−} < δ, which refers to a semantic concept we have not observed in utterance u.",
"That is, we construct the observed data O from M.",
"Then for each pair of facts f+ and f−, we want to model p(f+) > p(f−) and hence θ_f+ > θ_f− according to (1).",
"BPR maximizes the summation over each ranked pair, where the objective is Σ_{u∈U} ln p(M_u | θ) = Σ_{f+∈O} Σ_{f−∉O} ln σ(θ_f+ − θ_f−). (5)",
"The BPR objective is an approximation to the per-utterance AUC (area under the ROC curve), which directly correlates with what we want to achieve: well-ranked semantic concepts per utterance, i.e., a better estimation of semantic slots or human behaviors.",
"To maximize the objective in (5), we employ a stochastic gradient descent (SGD) algorithm (Rendle et al., 2009) .",
"For each randomly sampled observed fact ⟨u, x+⟩, we sample an unobserved fact ⟨u, x−⟩, which results in |O| fact pairs ⟨f−, f+⟩.",
"For each pair, we perform an SGD update using the gradient of the corresponding objective function for matrix factorization (Gantner et al., 2011) .",
"Conclusion and Future Work This thesis proposal presents an unsupervised SLU approach by automating the dialogue learning process on speech conversations.",
"The preliminary results show that for the automatic speech recognition (ASR) transcripts (word error rate is about 37%), the acquired knowledge can be successfully applied to SLU modeling through MF techniques, guiding the direction of the methodology.",
"The main planned tasks include: • Semantic concept identification • Semantic concept annotation • SLU modeling by matrix factorization In this thesis proposal, ongoing work and future plans have been presented towards an automatically built domain-specific SDS.",
"With increasing semantic resources, such as Google's Knowledge Graph and Microsoft Satori, the dissertation shows the feasibility that utilizing available knowledge improves the generalization and the scalability of dialogue system development for practical usage."
]
}
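The BPR objective with SGD described in the record above (equations (1), (4), and (5)) can be sketched in a few lines of Python. Everything here is invented for illustration: the latent dimension K, learning rate, regularizer, and the toy observed facts O; this is a minimal sketch of the technique, not the authors' implementation.

```python
import math
import random

def sigmoid(t):
    # logistic sigmoid from eq. (1)
    return 1.0 / (1.0 + math.exp(-t))

# Hypothetical toy setup: latent vectors P (utterances) and Q (concepts);
# O holds observed (utterance, concept) facts, all other pairs are unobserved.
K = 8                                  # latent dimension (assumed)
n_utts, n_concepts = 5, 6
random.seed(0)
P = [[random.gauss(0, 0.1) for _ in range(K)] for _ in range(n_utts)]
Q = [[random.gauss(0, 0.1) for _ in range(K)] for _ in range(n_concepts)]
O = {(0, 1), (0, 3), (2, 2), (4, 5)}

def theta(u, x):
    # natural parameter θ_{u,x} as a dot product of latent vectors
    return sum(pu * qx for pu, qx in zip(P[u], Q[x]))

lr, lam = 0.05, 0.01
for _ in range(1000):
    u, x_pos = random.choice(sorted(O))      # observed fact f+
    x_neg = random.randrange(n_concepts)     # sample an unobserved fact f-
    while (u, x_neg) in O:
        x_neg = random.randrange(n_concepts)
    # gradient scale of ln σ(θ_f+ − θ_f−) from eq. (5)
    g = 1.0 - sigmoid(theta(u, x_pos) - theta(u, x_neg))
    for k in range(K):
        pu = P[u][k]
        P[u][k]     += lr * (g * (Q[x_pos][k] - Q[x_neg][k]) - lam * pu)
        Q[x_pos][k] += lr * ( g * pu - lam * Q[x_pos][k])
        Q[x_neg][k] += lr * (-g * pu - lam * Q[x_neg][k])
```

After training, observed facts should on average score above unobserved ones, which is exactly the per-utterance ranking the BPR objective optimizes.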
|
{
"paper_header_number": [
"1",
"3",
"4",
"4.1",
"4.2",
"5",
"5.1",
"5.2",
"5.3",
"5.4",
"6"
],
"paper_header_content": [
"Introduction",
"The Proposed Work",
"Knowledge Acquisition",
"Ontology Induction",
"Structure Learning",
"SLU Modeling by Matrix Factorization",
"Feature Model",
"Relation Propagation Model",
"Integrated Model",
"Model Learning",
"Conclusion and Future Work"
]
}
|
GEM-SciDuet-train-48#paper-1077#slide-17
|
Experimental setup
|
Cambridge University SLU corpus [Henderson, 2012]
Restaurant recommendation in an in-car setting in Cambridge
dialogue slot: addr, area, food, name, phone, postcode, price range, task, type
The mapping table between induced and reference slots
Henderson et al., "Discriminative spoken language understanding using word confusion networks," in Proc. of SLT, 2012.
|
Cambridge University SLU corpus [Henderson, 2012]
Restaurant recommendation in an in-car setting in Cambridge
dialogue slot: addr, area, food, name, phone, postcode, price range, task, type
The mapping table between induced and reference slots
Henderson et al., "Discriminative spoken language understanding using word confusion networks," in Proc. of SLT, 2012.
|
[] |
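The feature model, block-diagonal relation propagation matrix, and integrated matrix M = M_F · (M_R + I) from equations (2)–(3) in the paper text above can be checked with a small NumPy sketch. All dimensions and values below are made up for illustration.

```python
import numpy as np

# Hypothetical toy dimensions: |U| utterances, |F| feature patterns, |S| concepts.
U, F, S = 4, 3, 2

rng = np.random.default_rng(0)
F_f = (rng.random((U, F)) > 0.5).astype(float)   # binary feature-pattern matrix
F_s = (rng.random((U, S)) > 0.5).astype(float)   # binary induced-concept matrix
M_F = np.hstack([F_f, F_s])                      # feature model M_F = [F_f F_s]

R_f = rng.random((F, F))                         # feature relation matrix
R_s = rng.random((S, S))                         # concept relation matrix

# Block-diagonal relation propagation matrix M_R, eq. (2)
M_R = np.block([[R_f, np.zeros((F, S))],
                [np.zeros((S, F)), R_s]])

# Integrated model, eq. (3): original values kept via the identity matrix
M = M_F @ (M_R + np.eye(F + S))
```

Multiplying out confirms that the first |F| columns equal F_f R_f + F_f and the last |S| columns equal F_s R_s + F_s, i.e., the feature weights enhanced by relation propagation.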
GEM-SciDuet-train-48#paper-1077#slide-18
|
1077
|
Unsupervised Learning and Modeling of Knowledge and Intent for Spoken Dialogue Systems
|
Spoken dialogue systems (SDS) are rapidly appearing in various smart devices (smartphone, smart-TV, in-car navigating system, etc). The key role in a successful SDS is a spoken language understanding (SLU) component, which parses user utterances into semantic concepts in order to understand users' intentions. However, such semantic concepts and their structure are manually created by experts, and the annotation process results in extremely high cost and poor scalability in system development. Therefore, the dissertation focuses on improving SDS generalization and scalability by automatically inferring domain knowledge and learning structures from unlabeled conversations through a matrix factorization (MF) technique. With the automatically acquired semantic concepts and structures, we further investigate whether such information can be utilized to effectively understand user utterances and then show the feasibility of reducing human effort during SDS development.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113
],
"paper_content_text": [
"Introduction Various smart devices (e.g.",
"smartphone, smart-TV, in-car navigating system) are incorporating spoken language interfaces, a.k.a.",
"spoken dialogue systems (SDS), in order to help users finish tasks more efficiently.",
"The key role in a successful SDS is a spoken language understanding (SLU) component; in order to capture the language variation from dialogue participants, the SLU component must create a mapping between the natural language inputs and semantic representations that correspond to users' intentions.",
"The semantic representation must include \"concepts\" and a \"structure\": concepts are the domain-specific topics, and the structure describes the relations between concepts and conveys intentions.",
"However, most prior work focused on learning the mapping between utterances and semantic representations, where such knowledge still remains predefined.",
"The need for annotations results in extremely high cost and poor scalability in system development.",
"Therefore, current technology usually limits conversational interactions to a few narrow predefined domains/topics.",
"With the increasing conversational interactions, this dissertation focuses on improving generalization and scalability of building SDSs with little human effort.",
"In order to achieve the goal, two questions need to be addressed: 1) Given unlabelled conversations, how can a system automatically induce and organize the domain-specific concepts?",
"2) With the automatically acquired knowledge, how can a system understand user utterances and intents?",
"To tackle the above problems, we propose to acquire the domain knowledge that captures human's semantics, intents, and behaviors.",
"Then based on the acquired knowledge, we build an SLU component to understand users and to offer better interactions in dialogues.",
"The dissertation shows the feasibility of building a dialogue learning system that is able to understand how particular domains work based on unlabeled conversations.",
"As a result, an initial SDS can be automatically built according to the learned knowledge, and its performance can be quickly improved by interacting with users for practical usage, presenting the potential of reducing human effort for SDS development.",
"Our MF method completes a partially-missing matrix for semantic decoding/behavior prediction.",
"Dark circles are observed facts, shaded circles are inferred facts.",
"The ontology induction maps observed feature patterns to semantic concepts.",
"The feature relation model constructs correlations between observed feature patterns.",
"The concept relation model learns the high-level semantic correlations for inferring hidden semantic slots or predicting subsequent behaviors.",
"Reasoning with matrix factorization incorporates these models jointly, and produces a coherent and domain-specific SLU model.",
"the intent detection problem in SLU, showing that results obtained from the unsupervised training process align well with the performance of traditional supervised learning.",
"Following their success of unsupervised SLU, recent studies have also obtained interesting results on the tasks of relation detection (Hakkani-Tür et al., 2013; Chen et al., 2014a) , entity extraction (Wang et al., 2014) , and extending domain coverage (El-Kahky et al., 2014; Chen and Rudnicky, 2014) .",
"However, most studies above do not explicitly learn latent factor representations from the data, while we hypothesize that better robustness can be achieved by explicitly modeling the measurement errors (usually produced by automatic speech recognizers (ASR)) using latent variable models and taking additional local and global semantic constraints into account.",
"Latent Variable Modeling in SLU Early studies on latent variable modeling in speech included the classic hidden Markov model for statistical speech recognition (Jelinek, 1997) .",
"Recently, were the first to study the intent detection problem using query logs and a discrete Bayesian latent variable model.",
"In the field of dialogue modeling, the partially observable Markov decision process (POMDP) (Young et al., 2013) model is a popular technique for dialogue management, reducing the cost of handcrafted dialogue managers while producing robustness against speech recognition errors.",
"More recently, used a semi-supervised LDA model to show improvement on the slot filling task.",
"Also, Zhai and Williams (2014) proposed an unsupervised model for connecting words with latent states in HMMs using topic models, obtaining interesting qualitative and quantitative results.",
"However, for unsupervised SLU, it is not obvious how to incorporate additional information in the HMMs.",
"With increasing work on learning feature matrices for language representations (Mikolov et al., 2013), matrix factorization (MF) has become very popular for both implicit and explicit feedback (Rendle et al., 2009; Chen et al., 2015a).",
"This thesis proposal is the first to propose a framework for unsupervised SLU modeling, which is able to simultaneously consider various local and global knowledge automatically learned from unlabelled data using a matrix factorization (MF) technique.",
"The Proposed Work The proposed framework is shown in Figure 1(a) , where there are two main parts, one is knowledge acquisition and another is SLU modeling by MF.",
"The first part is to acquire the domain knowledge that is useful for building the domain-specific dialogue systems, which addresses the question about how to induce and organize the semantic concepts (the first problem).",
"Here we propose ontology induction and structure learning procedures.",
"The ontology induction refers to the semantic concept induction (yellow block) and the structure learning refers to relation models (blue and pink blocks) in Figure 1 (a).",
"The details are described in Section 4.",
"The second part is to self-train an SLU component using the acquired knowledge for the domainspecific SDS, and this part answers the question about how to utilize the obtained information in SDSs to understand user utterances and intents.",
"There are two aspects regarding to SLU modeling, semantic decoding and behavior prediction.",
"The semantic decoding is to parse the input utterances into semantic forms for better understanding, and the behavior prediction is to predict the subsequent user behaviors for providing better system interactions.",
"This dissertation plans to apply MF techniques to unsupervised SLU modeling, including both semantic decoding and behavior prediction.",
"In the proposed model, we first build a feature matrix to represent training utterances, where each row refers to an utterance and each column refers to an observed feature pattern or a learned semantic concept (either a slot or a behavior).",
"For example, an utterance may imply the meaning facet food.",
"The MF approach is able to learn the latent feature vectors for utterances and semantic concepts, inferring implicit semantics to improve the decoding process-namely, by filling the matrix with probabilities (lower part of the matrix in Figure 1(b) ).",
"The feature model is built on the observed feature patterns and the learned concepts, where the concepts are obtained from the knowledge acquisition process Chen et al., 2015b) .",
"Section 5.1 explains the detail of the feature model.",
"In order to consider the additional structure information, we propose a relation propagation model based on the learned structure, which includes a feature relation model (blue block) and a concept relation model (pink block) described in Section 5.2.",
"Finally we train an SLU model by learning latent feature vectors for utterances and slots/behaviors through MF techniques.",
"Combining with a relation propagation model, the trained SLU model is able to estimate the probability that each concept occurs in the testing utterance, and how likely each concept is domain-specific simultaneously.",
"In other words, the SLU model is able to transform testing utterances into domainspecific semantic representations or predicted behaviors without human involvement.",
"Knowledge Acquisition Given unlabeled conversations and available knowledge resources, we plan to extract organized knowledge that can be used for domain-specific SDSs.",
"The ontology induction and structure learning are proposed to automate an ontology building process.",
"Chen et al. (2014b) proposed to automatically induce semantic slots for SDSs by frame-semantic parsing, where all ASR-decoded utterances are parsed using SEMAFOR, a state-of-the-art frame-semantic parser (Das et al., 2010; Das et al., 2013), and then all frames from parsed results are extracted as slot candidates (Dinarelli et al., 2009).",
"For example, Figure 2 shows an example of an ASR-decoded text output parsed by SEMAFOR.",
"There are three frames (capability, expensiveness, and locale by use) in the utterance, which we consider as slot candidates.",
"Ontology Induction Since SEMAFOR was trained on FrameNet annotation, which has a more generic frame-semantic context, not all the frames from the parsing results can be used as the actual slots in the domain-specific dialogue systems.",
"For instance, in Figure 2 , \"expensiveness\" and \"locale by use\" frames are essentially the key slots for the purpose of understanding in the restaurant query domain, whereas the \"capability\" frame does not convey particularly valuable information for the domain-specific SDS.",
"In order to fix this issue, Chen et al.",
"(2014b) proved that integrating continuous-valued word embeddings with a probabilistic frame-semantic parser is able to identify key semantic slots in an unsupervised fashion, reducing the cost of designing task-oriented SDSs.",
"Structure Learning A key challenge of designing a coherent semantic ontology for SLU is to consider the structure and relations between semantic concepts.",
"In practice, however, it is difficult for domain experts and professional annotators to define a coherent slot set, while considering various lexical, syntactic, and semantic dependencies.",
"The previous work exploited the typed syntactic dependency theory for unsupervised induction and organization of semantic slots in SDSs (Chen et al., 2015b) .",
"More specifically, two knowledge graphs, a slot-based semantic knowledge graph and a word-based lexical knowledge graph, are automatically constructed.",
"To jointly consider the word-to-word, word-to-slot, and slot-to-slot relations, we use a random walk inference algorithm to combine these two knowledge graphs, guided by dependency grammars.",
"Figure 3 is a simplified example of the automatically built semantic knowledge graph corresponding to the restaurant domain.",
"The experiments showed that considering inter-slot relations is crucial for generating a more coherent and complete slot set, resulting in a better SLU model, while enhancing the interpretability of semantic slots.",
"SLU Modeling by Matrix Factorization For two aspects of SLU modeling: semantic decoding and behavior prediction, we plan to apply MF to both tasks by treating learned concepts as semantic slots and human behaviors respectively.",
"Considering the benefits brought by MF techniques, including 1) modeling the noisy data, 2) modeling hidden information, and 3) modeling the dependency between observations, the dissertation applies an MF approach to SLU modeling for SDSs.",
"In our model, we use U to denote the set of input utterances, F as the set of observed feature patterns, and S as the set of semantic concepts we would like to predict (slots or human behaviors).",
"The pair of an utterance u ∈ U and a feature/concept x ∈ {F + S}, ⟨u, x⟩, is a fact.",
"The input to our model is a set of observed facts O, and the observed facts for a given utterance are denoted by {⟨u, x⟩ ∈ O}.",
"The goal of our model is to estimate, for a given utterance u and a given feature pattern/concept x, the probability p(M_{u,x} = 1), where M_{u,x} is a binary random variable that is true if and only if x is the feature pattern/domain-specific concept in the utterance u.",
"We introduce a series of exponential family models that estimate the probability using a natural parameter θ_{u,x} and the logistic sigmoid function: p(M_{u,x} = 1 | θ_{u,x}) = σ(θ_{u,x}) = 1 / (1 + exp(−θ_{u,x})). (1)",
"We construct a matrix M_{|U|×(|F|+|S|)} as observed facts for MF by integrating a feature model and a relation propagation model below.",
"Feature Model First, we build a binary feature pattern matrix F f based on the observations, where each row refers to an utterance and each column refers to a feature pattern (a word or a phrase).",
"In other words, F f carries the basic word/phrase vector for each utterance, which is illustrated as the left part of the matrix in Figure 1(b) .",
"Then we build a binary matrix F s based on the induced semantic concepts from Section 4.1, which also denotes the slot/behavior features for all utterances (right part of the matrix in Figure 1(b) ).",
"For building the feature model M F , we concatenate two matrices and obtain M F = [ F f F s ], which refers to the upper part of the matrix in Figure 1(b) for training utterances.",
"Relation Propagation Model It is shown that the structure of semantic concepts helps decide domain-specific slots and further improves the SLU performance (Chen et al., 2015b) .",
"With the learned structure from Section 4.2, we can model the relations between semantic concepts, such as inter-slot and inter-behavior relations.",
"Also, the relations between feature patterns can be modeled in the similar way.",
"We construct two knowledge graphs to model the structure: • Feature knowledge graph is built as G_f = ⟨V_f, E_ff⟩, where V_f = {f_i ∈ F} and E_ff = {e_ij | f_i, f_j ∈ V_f}.",
"• Semantic concept knowledge graph is built as G_s = ⟨V_s, E_ss⟩, where V_s = {s_i ∈ S} and E_ss = {e_ij | s_i, s_j ∈ V_s}.",
"The structured graph can model the relation between the connected node pair (x i , x j ) as r(x i , x j ).",
"Here we compute two matrices R_s = [r(s_i, s_j)]_{|S|×|S|} and R_f = [r(f_i, f_j)]_{|F|×|F|} to represent concept relations and feature relations respectively.",
"With the built relation models, we combine them as a relation propagation matrix M_R: M_R = [R_f, 0; 0, R_s]. (2)",
"The goal of this matrix is to propagate scores between nodes according to different types of relations in the constructed knowledge graphs (Chen and Metze, 2012).",
"Integrated Model With a feature model M F and a relation propagation model M R , we integrate them into a single matrix.",
"M = M_F · (M_R + I) = [F_f R_f + F_f, F_s R_s + F_s], (3) where M is the final matrix and I is the identity matrix, included so that the original values are retained.",
"The matrix M is similar to M F , but some weights are enhanced through the relation propagation model.",
"The feature relations are built by F f R f , which is the matrix with internal weight propagation on the feature knowledge graph (the blue arrow in Figure 1(b) ).",
"Similarly, F s R s models the semantic concept correlations, and can be treated as the matrix with internal weight propagation on the semantic concept knowledge graph (the pink arrow in Figure 1(b) ).",
"The propagation model can be treated as running a random walk algorithm on the graphs.",
"By integrating with the relation propagation model, the relations can be propagated via the knowledge graphs, and the hidden information may be modeled based on the assumption that mutual relations usually help inference (Chen et al., 2015b) .",
"Hence, the structure information can be automatically involved in the matrix.",
"In conclusion, for each utterance, the integrated model not only predicts the probabilities that semantic concepts occur but also considers whether they are domain-specific.",
"Model Learning The proposed model is parameterized through weights and latent component vectors, where the parameters are estimated by maximizing the log likelihood of observed data (Collins et al., 2001) .",
"θ* = arg max_θ ∏_{u∈U} p(θ | M_u) = arg max_θ ∏_{u∈U} p(M_u | θ) p(θ) = arg max_θ Σ_{u∈U} ln p(M_u | θ) − λ_θ, (4) where M_u is the vector corresponding to the utterance u from M_{u,x} in (1), because we assume that each utterance is independent of others.",
"To avoid treating unobserved facts as designed negative facts, we consider our positive-only data as implicit feedback.",
"Bayesian Personalized Ranking (BPR) is an optimization criterion that learns from implicit feedback for MF, which uses a variant of the ranking: giving observed true facts higher scores than unobserved (true or false) facts (Rendle et al., 2009 ).",
"Riedel et al.",
"(2013) also showed that BPR learns the implicit relations and improves a relation extraction task.",
"To estimate the parameters in (4), we create a dataset of ranked pairs from M in (3): for each utterance u and each observed fact f+ = ⟨u, x+⟩, where M_{u,x+} ≥ δ, we choose each semantic concept x− such that f− = ⟨u, x−⟩, where M_{u,x−} < δ, which refers to a semantic concept we have not observed in utterance u.",
"That is, we construct the observed data O from M.",
"Then for each pair of facts f+ and f−, we want to model p(f+) > p(f−) and hence θ_f+ > θ_f− according to (1).",
"BPR maximizes the summation over each ranked pair, where the objective is Σ_{u∈U} ln p(M_u | θ) = Σ_{f+∈O} Σ_{f−∉O} ln σ(θ_f+ − θ_f−). (5)",
"The BPR objective is an approximation to the per-utterance AUC (area under the ROC curve), which directly correlates with what we want to achieve: well-ranked semantic concepts per utterance, i.e., a better estimation of semantic slots or human behaviors.",
"To maximize the objective in (5), we employ a stochastic gradient descent (SGD) algorithm (Rendle et al., 2009) .",
"For each randomly sampled observed fact ⟨u, x+⟩, we sample an unobserved fact ⟨u, x−⟩, which results in |O| fact pairs ⟨f−, f+⟩.",
"For each pair, we perform an SGD update using the gradient of the corresponding objective function for matrix factorization (Gantner et al., 2011) .",
"Conclusion and Future Work This thesis proposal presents an unsupervised SLU approach by automating the dialogue learning process on speech conversations.",
"The preliminary results show that for the automatic speech recognition (ASR) transcripts (word error rate is about 37%), the acquired knowledge can be successfully applied to SLU modeling through MF techniques, guiding the direction of the methodology.",
"The main planned tasks include: • Semantic concept identification • Semantic concept annotation • SLU modeling by matrix factorization In this thesis proposal, ongoing work and future plans have been presented towards an automatically built domain-specific SDS.",
"With increasing semantic resources, such as Google's Knowledge Graph and Microsoft Satori, the dissertation shows the feasibility that utilizing available knowledge improves the generalization and the scalability of dialogue system development for practical usage."
]
}
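The paper text above notes that the propagation model "can be treated as running a random walk algorithm on the graphs." A minimal sketch of such score propagation on a tiny slot knowledge graph follows; the graph, restart weight alpha, and slot names are all invented for illustration and are not from the paper.

```python
import numpy as np

# Hypothetical 3-node slot knowledge graph (fully connected triangle).
slots = ["food", "area", "price range"]
A = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])        # adjacency between slots
W = A / A.sum(axis=1, keepdims=True)   # row-stochastic transition matrix

alpha = 0.9                            # probability of following an edge (assumed)
r = np.array([1.0, 0.0, 0.0])          # restart vector: only "food" observed
s = r.copy()

# Random-walk-with-restart iteration: scores spread to related slots.
for _ in range(50):
    s = alpha * (s @ W) + (1 - alpha) * r
```

After convergence the observed slot keeps the highest score while its neighbors receive propagated mass, mirroring how mutual relations help infer hidden slots.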
|
{
"paper_header_number": [
"1",
"3",
"4",
"4.1",
"4.2",
"5",
"5.1",
"5.2",
"5.3",
"5.4",
"6"
],
"paper_header_content": [
"Introduction",
"The Proposed Work",
"Knowledge Acquisition",
"Ontology Induction",
"Structure Learning",
"SLU Modeling by Matrix Factorization",
"Feature Model",
"Relation Propagation Model",
"Integrated Model",
"Model Learning",
"Conclusion and Future Work"
]
}
|
GEM-SciDuet-train-48#paper-1077#slide-18
|
Experiment 1 quality of semantics estimation
|
Metric: Mean Average Precision (MAP) of all estimated slot probabilities for each utterance
ASR Manual Approach w/o w/ Explicit w/o w/ Explicit
Support Vector Machine Explicit Multinomial Logistic Regression
Random Modeling Implicit Semantics
MF Feature Model + Knowledge Graph Propagation
The MF approach effectively models hidden semantics to improve SLU.
Adding a knowledge graph propagation model further improves performance.
|
Metric: Mean Average Precision (MAP) of all estimated slot probabilities for each utterance
ASR Manual Approach w/o w/ Explicit w/o w/ Explicit
Support Vector Machine Explicit Multinomial Logistic Regression
Random Modeling Implicit Semantics
MF Feature Model + Knowledge Graph Propagation
The MF approach effectively models hidden semantics to improve SLU.
Adding a knowledge graph propagation model further improves performance.
|
[] |
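The slide above reports Mean Average Precision (MAP) over the estimated slot probabilities of each utterance. A small sketch of that metric is below; the utterances, scores, and gold slots are made up for illustration.

```python
def average_precision(ranked, gold):
    # AP over one ranked slot list against the gold slot set
    hits, total = 0, 0.0
    for i, slot in enumerate(ranked, start=1):
        if slot in gold:
            hits += 1
            total += hits / i
    return total / len(gold) if gold else 0.0

def mean_average_precision(utterances):
    # utterances: list of (slot -> probability dict, gold slot set)
    aps = []
    for scores, gold in utterances:
        ranked = sorted(scores, key=scores.get, reverse=True)
        aps.append(average_precision(ranked, gold))
    return sum(aps) / len(aps)

# Invented example: two utterances with estimated slot probabilities.
utts = [
    ({"food": 0.9, "area": 0.7, "phone": 0.1}, {"food", "area"}),
    ({"price range": 0.4, "addr": 0.8}, {"price range"}),
]
```

Here the first utterance ranks both gold slots on top (AP = 1.0) while the second ranks its gold slot second (AP = 0.5), giving MAP = 0.75.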
GEM-SciDuet-train-48#paper-1077#slide-19
|
1077
|
Unsupervised Learning and Modeling of Knowledge and Intent for Spoken Dialogue Systems
|
Spoken dialogue systems (SDS) are rapidly appearing in various smart devices (smartphone, smart-TV, in-car navigating system, etc). The key role in a successful SDS is a spoken language understanding (SLU) component, which parses user utterances into semantic concepts in order to understand users' intentions. However, such semantic concepts and their structure are manually created by experts, and the annotation process results in extremely high cost and poor scalability in system development. Therefore, the dissertation focuses on improving SDS generalization and scalability by automatically inferring domain knowledge and learning structures from unlabeled conversations through a matrix factorization (MF) technique. With the automatically acquired semantic concepts and structures, we further investigate whether such information can be utilized to effectively understand user utterances and then show the feasibility of reducing human effort during SDS development.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113
],
"paper_content_text": [
"Introduction Various smart devices (e.g.",
"smartphone, smart-TV, in-car navigating system) are incorporating spoken language interfaces, a.k.a.",
"spoken dialogue systems (SDS), in order to help users finish tasks more efficiently.",
"The key role in a successful SDS is a spoken language understanding (SLU) component; in order to capture the language variation from dialogue participants, the SLU component must create a mapping between the natural language inputs and semantic representations that correspond to users' intentions.",
"The semantic representation must include \"concepts\" and a \"structure\": concepts are the domain-specific topics, and the structure describes the relations between concepts and conveys intentions.",
"However, most prior work focused on learning the mapping between utterances and semantic representations, where such knowledge still remains predefined.",
"The need for annotations results in extremely high cost and poor scalability in system development.",
"Therefore, current technology usually limits conversational interactions to a few narrow predefined domains/topics.",
"With the increasing conversational interactions, this dissertation focuses on improving generalization and scalability of building SDSs with little human effort.",
"In order to achieve the goal, two questions need to be addressed: 1) Given unlabelled conversations, how can a system automatically induce and organize the domain-specific concepts?",
"2) With the automatically acquired knowledge, how can a system understand user utterances and intents?",
"To tackle the above problems, we propose to acquire the domain knowledge that captures human's semantics, intents, and behaviors.",
"Then based on the acquired knowledge, we build an SLU component to understand users and to offer better interactions in dialogues.",
"The dissertation shows the feasibility of building a dialogue learning system that is able to understand how particular domains work based on unlabeled conversations.",
"As a result, an initial SDS can be automatically built according to the learned knowledge, and its performance can be quickly improved by interacting with users for practical usage, presenting the potential of reducing human effort for SDS development.",
"Our MF method completes a partially-missing matrix for semantic decoding/behavior prediction.",
"Dark circles are observed facts, shaded circles are inferred facts.",
"The ontology induction maps observed feature patterns to semantic concepts.",
"The feature relation model constructs correlations between observed feature patterns.",
"The concept relation model learns the high-level semantic correlations for inferring hidden semantic slots or predicting subsequent behaviors.",
"Reasoning with matrix factorization incorporates these models jointly, and produces a coherent and domain-specific SLU model.",
"the intent detection problem in SLU, showing that results obtained from the unsupervised training process align well with the performance of traditional supervised learning.",
"Following their success of unsupervised SLU, recent studies have also obtained interesting results on the tasks of relation detection (Hakkani-Tür et al., 2013; Chen et al., 2014a) , entity extraction (Wang et al., 2014) , and extending domain coverage (El-Kahky et al., 2014; Chen and Rudnicky, 2014) .",
"However, most studies above do not explicitly learn latent factor representations from the data, while we hypothesize that better robustness can be achieved by explicitly modeling the measurement errors (usually produced by automatic speech recognizers (ASR)) using latent variable models and taking additional local and global semantic constraints into account.",
"Latent Variable Modeling in SLU Early studies on latent variable modeling in speech included the classic hidden Markov model for statistical speech recognition (Jelinek, 1997) .",
"Recently, were the first to study the intent detection problem using query logs and a discrete Bayesian latent variable model.",
"In the field of dialogue modeling, the partially observable Markov decision process (POMDP) (Young et al., 2013) model is a popular technique for dialogue management, reducing the cost of handcrafted dialogue managers while producing robustness against speech recognition errors.",
"More recently, used a semi-supervised LDA model to show improvement on the slot filling task.",
"Also, Zhai and Williams (2014) proposed an unsupervised model for connecting words with latent states in HMMs using topic models, obtaining interesting qualitative and quantitative results.",
"However, for unsupervised SLU, it is not obvious how to incorporate additional information in the HMMs.",
"With increasing work on learning feature matrices for language representations (Mikolov et al., 2013), matrix factorization (MF) has become very popular for both implicit and explicit feedback (Rendle et al., 2009; Chen et al., 2015a).",
"This thesis proposal is the first to propose a framework for unsupervised SLU modeling, which is able to simultaneously consider various local and global knowledge automatically learned from unlabelled data using a matrix factorization (MF) technique.",
"The Proposed Work The proposed framework is shown in Figure 1(a) , where there are two main parts, one is knowledge acquisition and another is SLU modeling by MF.",
"The first part is to acquire the domain knowledge that is useful for building the domain-specific dialogue systems, which addresses the question about how to induce and organize the semantic concepts (the first problem).",
"Here we propose ontology induction and structure learning procedures.",
"The ontology induction refers to the semantic concept induction (yellow block) and the structure learning refers to relation models (blue and pink blocks) in Figure 1 (a).",
"The details are described in Section 4.",
"The second part is to self-train an SLU component using the acquired knowledge for the domain-specific SDS, and this part answers the question of how to utilize the obtained information in SDSs to understand user utterances and intents.",
"There are two aspects of SLU modeling: semantic decoding and behavior prediction.",
"Semantic decoding parses the input utterances into semantic forms for better understanding, and behavior prediction predicts the subsequent user behaviors in order to provide better system interactions.",
"This dissertation plans to apply MF techniques to unsupervised SLU modeling, including both semantic decoding and behavior prediction.",
"In the proposed model, we first build a feature matrix to represent training utterances, where each row refers to an utterance and each column refers to an observed feature pattern or a learned semantic concept (either a slot or a behavior).",
"terance implies the meaning facet food.",
"The MF approach is able to learn the latent feature vectors for utterances and semantic concepts, inferring implicit semantics to improve the decoding process-namely, by filling the matrix with probabilities (lower part of the matrix in Figure 1(b) ).",
"The feature model is built on the observed feature patterns and the learned concepts, where the concepts are obtained from the knowledge acquisition process (Chen et al., 2015b).",
"Section 5.1 explains the detail of the feature model.",
"In order to consider the additional structure information, we propose a relation propagation model based on the learned structure, which includes a feature relation model (blue block) and a concept relation model (pink block) described in Section 5.2.",
"Finally we train an SLU model by learning latent feature vectors for utterances and slots/behaviors through MF techniques.",
"Combining with a relation propagation model, the trained SLU model is able to estimate the probability that each concept occurs in the testing utterance, and how likely each concept is domain-specific simultaneously.",
"In other words, the SLU model is able to transform testing utterances into domainspecific semantic representations or predicted behaviors without human involvement.",
"Knowledge Acquisition Given unlabeled conversations and available knowledge resources, we plan to extract organized knowledge that can be used for domain-specific SDSs.",
"The ontology induction and structure learning are proposed to automate an ontology building process.",
"Chen et al. (2014b) proposed to automatically induce semantic slots for SDSs by frame-semantic parsing, where all ASR-decoded utterances are parsed using SEMAFOR, a state-of-the-art frame-semantic parser (Das et al., 2010; Das et al., 2013), and then all frames from parsed results are extracted as slot candidates (Dinarelli et al., 2009).",
"For example, Figure 2 shows an example of an ASR-decoded text output parsed by SEMAFOR.",
"There are three frames (capability, expensiveness, and locale by use) in the utterance, which we consider as slot candidates.",
"Ontology Induction Since SEMAFOR was trained on FrameNet annotation, which has a more generic frame-semantic context, not all the frames from the parsing results can be used as the actual slots in the domain-specific dialogue systems.",
"For instance, in Figure 2 , \"expensiveness\" and \"locale by use\" frames are essentially the key slots for the purpose of understanding in the restaurant query domain, whereas the \"capability\" frame does not convey particularly valuable information for the domain-specific SDS.",
"In order to fix this issue, Chen et al.",
"(2014b) proved that integrating continuous-valued word embeddings with a probabilistic frame-semantic parser is able to identify key semantic slots in an unsupervised fashion, reducing the cost of designing task-oriented SDSs.",
"Structure Learning A key challenge of designing a coherent semantic ontology for SLU is to consider the structure and relations between semantic concepts.",
"In practice, however, it is difficult for domain experts and professional annotators to define a coherent slot set, while considering various lexical, syntactic, and semantic dependencies.",
"The previous work exploited the typed syntactic dependency theory for unsupervised induction and organization of semantic slots in SDSs (Chen et al., 2015b) .",
"More specifically, two knowledge graphs, a slot-based semantic knowledge graph and a word-based lexical knowledge graph, are automatically constructed.",
"To jointly consider the word-to-word, word-to-slot, and slot-to-slot relations, we use a random walk inference algorithm to combine these two knowledge graphs, guided by dependency grammars.",
"Figure 3 is a simplified example of the automatically built semantic knowledge graph corresponding to the restaurant domain.",
"The experiments showed that considering inter-slot relations is crucial for generating a more coherent and complete slot set, resulting in a better SLU model, while enhancing the interpretability of semantic slots.",
"SLU Modeling by Matrix Factorization For two aspects of SLU modeling: semantic decoding and behavior prediction, we plan to apply MF to both tasks by treating learned concepts as semantic slots and human behaviors respectively.",
"Considering the benefits brought by MF techniques, including 1) modeling the noisy data, 2) modeling hidden information, and 3) modeling the dependency between observations, the dissertation applies an MF approach to SLU modeling for SDSs.",
"In our model, we use U to denote the set of input utterances, F as the set of observed feature patterns, and S as the set of semantic concepts we would like to predict (slots or human behaviors).",
"The pair of an utterance u ∈ U and a feature/concept x ∈ {F + S}, ⟨u, x⟩, is a fact.",
"The input to our model is a set of observed facts O, and the observed facts for a given utterance are denoted by {⟨u, x⟩ ∈ O}.",
"The goal of our model is to estimate, for a given utterance u and a given feature pattern/concept x, the probability p(M_{u,x} = 1), where M_{u,x} is a binary random variable that is true if and only if x is the feature pattern/domain-specific concept in the utterance u.",
"We introduce a series of exponential family models that estimate the probability using a natural parameter θ_{u,x} and the logistic sigmoid function: p(M_{u,x} = 1 | θ_{u,x}) = σ(θ_{u,x}) = 1 / (1 + exp(−θ_{u,x})). (1)",
"We construct a matrix M of size |U| × (|F| + |S|) as observed facts for MF by integrating a feature model and a relation propagation model below.",
"Feature Model First, we build a binary feature pattern matrix F_f based on the observations, where each row refers to an utterance and each column refers to a feature pattern (a word or a phrase).",
"In other words, F_f carries the basic word/phrase vector for each utterance, which is illustrated as the left part of the matrix in Figure 1(b).",
"Then we build a binary matrix F_s based on the induced semantic concepts from Section 4.1, which also denotes the slot/behavior features for all utterances (right part of the matrix in Figure 1(b)).",
"For building the feature model M_F, we concatenate the two matrices and obtain M_F = [F_f F_s], which refers to the upper part of the matrix in Figure 1(b) for training utterances.",
"Relation Propagation Model It is shown that the structure of semantic concepts helps decide domain-specific slots and further improves the SLU performance (Chen et al., 2015b) .",
"With the learned structure from Section 4.2, we can model the relations between semantic concepts, such as inter-slot and inter-behavior relations.",
"Also, the relations between feature patterns can be modeled in the similar way.",
"We construct two knowledge graphs to model the structure: • Feature knowledge graph is built as G_f = ⟨V_f, E_ff⟩, where V_f = {f_i ∈ F} and E_ff = {e_ij | f_i, f_j ∈ V_f}.",
"• Semantic concept knowledge graph is built as G_s = ⟨V_s, E_ss⟩, where V_s = {s_i ∈ S} and E_ss = {e_ij | s_i, s_j ∈ V_s}.",
"The structured graph can model the relation between the connected node pair (x_i, x_j) as r(x_i, x_j).",
"Here we compute two matrices R_s = [r(s_i, s_j)]_{|S|×|S|} and R_f = [r(f_i, f_j)]_{|F|×|F|} to represent concept relations and feature relations respectively.",
"With the built relation models, we combine them as a relation propagation matrix M_R = [R_f, 0; 0, R_s]. (2)",
"The goal of this matrix is to propagate scores between nodes according to different types of relations in the constructed knowledge graphs (Chen and Metze, 2012).",
"Integrated Model With a feature model M_F and a relation propagation model M_R, we integrate them into a single matrix.",
"M = M_F · (M_R + I) = [F_f R_f + F_f, F_s R_s + F_s], (3) where M is the final matrix and I is the identity matrix in order to retain the original values.",
"The matrix M is similar to M_F, but some weights are enhanced through the relation propagation model.",
"The feature relations are built by F_f R_f, which is the matrix with internal weight propagation on the feature knowledge graph (the blue arrow in Figure 1(b)).",
"Similarly, F_s R_s models the semantic concept correlations, and can be treated as the matrix with internal weight propagation on the semantic concept knowledge graph (the pink arrow in Figure 1(b)).",
"The propagation model can be treated as running a random walk algorithm on the graphs.",
"By integrating with the relation propagation model, the relations can be propagated via the knowledge graphs, and the hidden information may be modeled based on the assumption that mutual relations usually help inference (Chen et al., 2015b) .",
"Hence, the structure information can be automatically involved in the matrix.",
"In conclusion, for each utterance, the integrated model not only predicts the probabilities that semantic concepts occur but also considers whether they are domain-specific.",
"Model Learning The proposed model is parameterized through weights and latent component vectors, where the parameters are estimated by maximizing the log likelihood of observed data (Collins et al., 2001) .",
"θ* = arg max_θ ∏_{u∈U} p(θ | M_u) = arg max_θ ∏_{u∈U} p(M_u | θ) p(θ) = arg max_θ ∑_{u∈U} ln p(M_u | θ) − λ_θ, (4) where M_u is the vector corresponding to the utterance u from M_{u,x} in (1), because we assume that each utterance is independent of others.",
"To avoid treating unobserved facts as designed negative facts, we consider our positive-only data as implicit feedback.",
"Bayesian Personalized Ranking (BPR) is an optimization criterion that learns from implicit feedback for MF, which uses a variant of the ranking: giving observed true facts higher scores than unobserved (true or false) facts (Rendle et al., 2009 ).",
"Riedel et al.",
"(2013) also showed that BPR learns the implicit relations and improves a relation extraction task.",
"To estimate the parameters in (4), we create a dataset of ranked pairs from M in (3): for each utterance u and each observed fact f⁺ = ⟨u, x⁺⟩, where M_{u,x⁺} ≥ δ, we choose each semantic concept x⁻ such that f⁻ = ⟨u, x⁻⟩, where M_{u,x⁻} < δ, which refers to the semantic concept we have not observed in utterance u.",
"That is, we construct the observed data O from M .",
"Then for each pair of facts f⁺ and f⁻, we want to model p(f⁺) > p(f⁻) and hence θ_{f⁺} > θ_{f⁻} according to (1).",
"BPR maximizes the summation over each ranked pair, where the objective is ∑_{u∈U} ln p(M_u | θ) = ∑_{f⁺∈O} ∑_{f⁻∉O} ln σ(θ_{f⁺} − θ_{f⁻}). (5)",
"The BPR objective is an approximation to the per-utterance AUC (area under the ROC curve), which directly correlates with what we want to achieve: well-ranked semantic concepts per utterance, which denotes a better estimation of semantic slots or human behaviors.",
"To maximize the objective in (5), we employ a stochastic gradient descent (SGD) algorithm (Rendle et al., 2009) .",
"For each randomly sampled observed fact ⟨u, x⁺⟩, we sample an unobserved fact ⟨u, x⁻⟩, which results in |O| fact pairs ⟨f⁺, f⁻⟩.",
"For each pair, we perform an SGD update using the gradient of the corresponding objective function for matrix factorization (Gantner et al., 2011) .",
"Conclusion and Future Work This thesis proposal proposes an unsupervised SLU approach by automating the dialogue learning process on speech conversations.",
"The preliminary results show that for the automatic speech recognition (ASR) transcripts (word error rate is about 37%), the acquired knowledge can be successfully applied to SLU modeling through MF techniques, guiding the direction of the methodology.",
"The main planned tasks include: • Semantic concept identification • Semantic concept annotation • SLU modeling by matrix factorization. In this thesis proposal, ongoing work and future plans have been presented towards an automatically built domain-specific SDS.",
"With increasing semantic resources, such as Google's Knowledge Graph and Microsoft Satori, the dissertation shows the feasibility that utilizing available knowledge improves the generalization and the scalability of dialogue system development for practical usage."
]
}
|
{
"paper_header_number": [
"1",
"3",
"4",
"4.1",
"4.2",
"5",
"5.1",
"5.2",
"5.3",
"5.4",
"6"
],
"paper_header_content": [
"Introduction",
"The Proposed Work",
"Knowledge Acquisition",
"Ontology Induction",
"Structure Learning",
"SLU Modeling by Matrix Factorization",
"Feature Model",
"Relation Propagation Model",
"Integrated Model",
"Model Learning",
"Conclusion and Future Work"
]
}
|
GEM-SciDuet-train-48#paper-1077#slide-19
|
Experiment 2 effectiveness of relations
|
All types of relations are useful to infer hidden semantics.
Combining different relations further improves the performance.
|
All types of relations are useful to infer hidden semantics.
Combining different relations further improves the performance.
|
[] |
GEM-SciDuet-train-48#paper-1077#slide-20
|
1077
|
Unsupervised Learning and Modeling of Knowledge and Intent for Spoken Dialogue Systems
|
Spoken dialogue systems (SDS) are rapidly appearing in various smart devices (smartphone, smart-TV, in-car navigating system, etc). The key role in a successful SDS is a spoken language understanding (SLU) component, which parses user utterances into semantic concepts in order to understand users' intentions. However, such semantic concepts and their structure are manually created by experts, and the annotation process results in extremely high cost and poor scalability in system development. Therefore, the dissertation focuses on improving SDS generalization and scalability by automatically inferring domain knowledge and learning structures from unlabeled conversations through a matrix factorization (MF) technique. With the automatically acquired semantic concepts and structures, we further investigate whether such information can be utilized to effectively understand user utterances and then show the feasibility of reducing human effort during SDS development.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113
],
"paper_content_text": [
"Introduction Various smart devices (e.g.",
"smartphone, smart-TV, in-car navigating system) are incorporating spoken language interfaces, a.k.a.",
"spoken dialogue systems (SDS), in order to help users finish tasks more efficiently.",
"The key role in a successful SDS is a spoken language understanding (SLU) component; in order to capture the language variation from dialogue participants, the SLU component must create a mapping between the natural language inputs and semantic representations that correspond to users' intentions.",
"The semantic representation must include \"concepts\" and a \"structure\": concepts are the domain-specific topics, and the structure describes the relations between concepts and conveys intentions.",
"However, most prior work focused on learning the mapping between utterances and semantic representations, where such knowledge still remains predefined.",
"The need for annotations results in extremely high cost and poor scalability in system development.",
"Therefore, current technology usually limits conversational interactions to a few narrow predefined domains/topics.",
"With the increasing conversational interactions, this dissertation focuses on improving generalization and scalability of building SDSs with little human effort.",
"In order to achieve the goal, two questions need to be addressed: 1) Given unlabelled conversations, how can a system automatically induce and organize the domain-specific concepts?",
"2) With the automatically acquired knowledge, how can a system understand user utterances and intents?",
"To tackle the above problems, we propose to acquire the domain knowledge that captures human's semantics, intents, and behaviors.",
"Then based on the acquired knowledge, we build an SLU component to understand users and to offer better interactions in dialogues.",
"The dissertation shows the feasibility of building a dialogue learning system that is able to understand how particular domains work based on unlabeled conversations.",
"As a result, an initial SDS can be automatically built according to the learned knowledge, and its performance can be quickly improved by interacting with users for practical usage, presenting the potential of reducing human effort for SDS development.",
"Our MF method completes a partially-missing matrix for semantic decoding/behavior prediction.",
"Dark circles are observed facts, shaded circles are inferred facts.",
"The ontology induction maps observed feature patterns to semantic concepts.",
"The feature relation model constructs correlations between observed feature patterns.",
"The concept relation model learns the high-level semantic correlations for inferring hidden semantic slots or predicting subsequent behaviors.",
"Reasoning with matrix factorization incorporates these models jointly, and produces a coherent and domain-specific SLU model.",
"the intent detection problem in SLU, showing that results obtained from the unsupervised training process align well with the performance of traditional supervised learning.",
"Following their success of unsupervised SLU, recent studies have also obtained interesting results on the tasks of relation detection (Hakkani-Tür et al., 2013; Chen et al., 2014a) , entity extraction (Wang et al., 2014) , and extending domain coverage (El-Kahky et al., 2014; Chen and Rudnicky, 2014) .",
"However, most studies above do not explicitly learn latent factor representations from the data, while we hypothesize that better robustness can be achieved by explicitly modeling the measurement errors (usually produced by automatic speech recognizers (ASR)) using latent variable models and taking additional local and global semantic constraints into account.",
"Latent Variable Modeling in SLU Early studies on latent variable modeling in speech included the classic hidden Markov model for statistical speech recognition (Jelinek, 1997) .",
"Recently, were the first to study the intent detection problem using query logs and a discrete Bayesian latent variable model.",
"In the field of dialogue modeling, the partially observable Markov decision process (POMDP) (Young et al., 2013) model is a popular technique for dialogue management, reducing the cost of handcrafted dialogue managers while producing robustness against speech recognition errors.",
"More recently, used a semi-supervised LDA model to show improvement on the slot filling task.",
"Also, Zhai and Williams (2014) proposed an unsupervised model for connecting words with latent states in HMMs using topic models, obtaining interesting qualitative and quantitative results.",
"However, for unsupervised SLU, it is not obvious how to incorporate additional information in the HMMs.",
"With increasing work on learning feature matrices for language representations (Mikolov et al., 2013), matrix factorization (MF) has become very popular for both implicit and explicit feedback (Rendle et al., 2009; Chen et al., 2015a).",
"This thesis proposal is the first to propose a framework for unsupervised SLU modeling, which is able to simultaneously consider various local and global knowledge automatically learned from unlabelled data using a matrix factorization (MF) technique.",
"The Proposed Work The proposed framework is shown in Figure 1(a) , where there are two main parts, one is knowledge acquisition and another is SLU modeling by MF.",
"The first part is to acquire the domain knowledge that is useful for building the domain-specific dialogue systems, which addresses the question about how to induce and organize the semantic concepts (the first problem).",
"Here we propose ontology induction and structure learning procedures.",
"The ontology induction refers to the semantic concept induction (yellow block) and the structure learning refers to relation models (blue and pink blocks) in Figure 1 (a).",
"The details are described in Section 4.",
"The second part is to self-train an SLU component using the acquired knowledge for the domain-specific SDS, and this part answers the question of how to utilize the obtained information in SDSs to understand user utterances and intents.",
"There are two aspects of SLU modeling: semantic decoding and behavior prediction.",
"Semantic decoding parses the input utterances into semantic forms for better understanding, and behavior prediction predicts the subsequent user behaviors in order to provide better system interactions.",
"This dissertation plans to apply MF techniques to unsupervised SLU modeling, including both semantic decoding and behavior prediction.",
"In the proposed model, we first build a feature matrix to represent training utterances, where each row refers to an utterance and each column refers to an observed feature pattern or a learned semantic concept (either a slot or a behavior).",
"terance implies the meaning facet food.",
"The MF approach is able to learn the latent feature vectors for utterances and semantic concepts, inferring implicit semantics to improve the decoding process-namely, by filling the matrix with probabilities (lower part of the matrix in Figure 1(b) ).",
"The feature model is built on the observed feature patterns and the learned concepts, where the concepts are obtained from the knowledge acquisition process (Chen et al., 2015b).",
"Section 5.1 explains the detail of the feature model.",
"In order to consider the additional structure information, we propose a relation propagation model based on the learned structure, which includes a feature relation model (blue block) and a concept relation model (pink block) described in Section 5.2.",
"Finally we train an SLU model by learning latent feature vectors for utterances and slots/behaviors through MF techniques.",
"Combining with a relation propagation model, the trained SLU model is able to estimate the probability that each concept occurs in the testing utterance, and how likely each concept is domain-specific simultaneously.",
"In other words, the SLU model is able to transform testing utterances into domainspecific semantic representations or predicted behaviors without human involvement.",
"Knowledge Acquisition Given unlabeled conversations and available knowledge resources, we plan to extract organized knowledge that can be used for domain-specific SDSs.",
"The ontology induction and structure learning are proposed to automate an ontology building process.",
"Chen et al. (2014b) proposed to automatically induce semantic slots for SDSs by frame-semantic parsing, where all ASR-decoded utterances are parsed using SEMAFOR, a state-of-the-art frame-semantic parser (Das et al., 2010; Das et al., 2013), and then all frames from parsed results are extracted as slot candidates (Dinarelli et al., 2009).",
"For example, Figure 2 shows an example of an ASR-decoded text output parsed by SEMAFOR.",
"There are three frames (capability, expensiveness, and locale by use) in the utterance, which we consider as slot candidates.",
"Ontology Induction Since SEMAFOR was trained on FrameNet annotation, which has a more generic frame-semantic context, not all the frames from the parsing results can be used as the actual slots in the domain-specific dialogue systems.",
"For instance, in Figure 2 , \"expensiveness\" and \"locale by use\" frames are essentially the key slots for the purpose of understanding in the restaurant query domain, whereas the \"capability\" frame does not convey particularly valuable information for the domain-specific SDS.",
"In order to fix this issue, Chen et al.",
"(2014b) proved that integrating continuous-valued word embeddings with a probabilistic frame-semantic parser is able to identify key semantic slots in an unsupervised fashion, reducing the cost of designing task-oriented SDSs.",
"Structure Learning A key challenge of designing a coherent semantic ontology for SLU is to consider the structure and relations between semantic concepts.",
"In practice, however, it is difficult for domain experts and professional annotators to define a coherent slot set, while considering various lexical, syntactic, and semantic dependencies.",
"The previous work exploited the typed syntactic dependency theory for unsupervised induction and organization of semantic slots in SDSs (Chen et al., 2015b) .",
"More specifically, two knowledge graphs, a slot-based semantic knowledge graph and a word-based lexical knowledge graph, are automatically constructed.",
"To jointly consider the word-to-word, word-to-slot, and slot-to-slot relations, we use a random walk inference algorithm to combine these two knowledge graphs, guided by dependency grammars.",
"Figure 3 is a simplified example of the automatically built semantic knowledge graph corresponding to the restaurant domain.",
"The experiments showed that considering inter-slot relations is crucial for generating a more coherent and complete slot set, resulting in a better SLU model, while enhancing the interpretability of semantic slots.",
"SLU Modeling by Matrix Factorization For two aspects of SLU modeling: semantic decoding and behavior prediction, we plan to apply MF to both tasks by treating learned concepts as semantic slots and human behaviors respectively.",
"Considering the benefits brought by MF techniques, including 1) modeling the noisy data, 2) modeling hidden information, and 3) modeling the dependency between observations, the dissertation applies an MF approach to SLU modeling for SDSs.",
"In our model, we use U to denote the set of input utterances, F as the set of observed feature patterns, and S as the set of semantic concepts we would like to predict (slots or human behaviors).",
"The pair of an utterance u ∈ U and a feature/concept x ∈ {F + S}, ⟨u, x⟩, is a fact.",
"The input to our model is a set of observed facts O, and the observed facts for a given utterance is denoted by {⟨u, x⟩ ∈ O}.",
"The goal of our model is to estimate, for a given utterance u and a given feature pattern/concept x, the probability, p(M u,x = 1), where M u,x is a binary random variable that is true if and only if x is the feature pattern/domainspecific concept in the utterance u.",
"We introduce a series of exponential family models that estimate the probability using a natural parameter θ_{u,x} and the logistic sigmoid function: p(M_{u,x} = 1 | θ_{u,x}) = σ(θ_{u,x}) = 1 / (1 + exp(−θ_{u,x})). (1)",
"We construct a matrix M of dimension |U| × (|F| + |S|) as observed facts for MF by integrating a feature model and a relation propagation model below.",
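To make Eq. (1) concrete, a minimal numpy sketch follows. It assumes the standard MF parameterization of the natural parameter as an inner product of latent utterance and feature/concept vectors (the text does not fix the parameterization at this point, so the latent vectors and toy sizes are assumptions):

```python
import numpy as np

def sigmoid(theta):
    # Logistic link of Eq. (1): p(M_ux = 1 | theta_ux) = 1 / (1 + exp(-theta_ux))
    return 1.0 / (1.0 + np.exp(-theta))

rng = np.random.default_rng(0)
n_utt, n_cols, k = 4, 6, 3                       # |U|, |F|+|S|, latent dim (toy sizes)
U_lat = rng.normal(scale=0.1, size=(n_utt, k))   # latent utterance vectors (assumed)
X_lat = rng.normal(scale=0.1, size=(n_cols, k))  # latent feature/concept vectors (assumed)

theta = U_lat @ X_lat.T   # natural parameters theta_ux for every (u, x) pair
P = sigmoid(theta)        # p(M_ux = 1): a |U| x (|F|+|S|) matrix of probabilities
```

Each entry of P is the model's estimate that feature/concept x holds in utterance u.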
"Feature Model First, we build a binary feature pattern matrix F f based on the observations, where each row refers to an utterance and each column refers to a feature pattern (a word or a phrase).",
"In other words, F f carries the basic word/phrase vector for each utterance, which is illustrated as the left part of the matrix in Figure 1(b) .",
"Then we build a binary matrix F s based on the induced semantic concepts from Section 4.1, which also denotes the slot/behavior features for all utterances (right part of the matrix in Figure 1(b) ).",
"For building the feature model M F , we concatenate two matrices and obtain M F = [ F f F s ], which refers to the upper part of the matrix in Figure 1(b) for training utterances.",
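The concatenation M_F = [F_f F_s] can be sketched with toy matrices (all values invented for illustration):

```python
import numpy as np

# Toy binary observations: 3 utterances, 4 word/phrase patterns, 2 induced concepts.
F_f = np.array([[1, 0, 1, 0],
                [0, 1, 0, 0],
                [1, 1, 0, 1]])   # word/phrase features (left part of Fig. 1(b))
F_s = np.array([[1, 0],
                [0, 1],
                [1, 1]])         # induced slot/behavior features (right part of Fig. 1(b))

M_F = np.hstack([F_f, F_s])      # feature model M_F = [F_f  F_s]
```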
"Relation Propagation Model It is shown that the structure of semantic concepts helps decide domain-specific slots and further improves the SLU performance (Chen et al., 2015b) .",
"With the learned structure from Section 4.2, we can model the relations between semantic concepts, such as inter-slot and inter-behavior relations.",
"Also, the relations between feature patterns can be modeled in the similar way.",
"We construct two knowledge graphs to model the structure: • Feature knowledge graph is built as G_f = ⟨V_f, E_ff⟩, where V_f = {f_i ∈ F} and E_ff = {e_ij | f_i, f_j ∈ V_f}.",
"• Semantic concept knowledge graph is built as G_s = ⟨V_s, E_ss⟩, where V_s = {s_i ∈ S} and E_ss = {e_ij | s_i, s_j ∈ V_s}.",
"The structured graph can model the relation between the connected node pair (x i , x j ) as r(x i , x j ).",
"Here we compute two matrices R_s = [r(s_i, s_j)]_{|S|×|S|} and R_f = [r(f_i, f_j)]_{|F|×|F|} to represent concept relations and feature relations respectively.",
"With the built relation models, we combine them as a relation propagation matrix M_R = [ R_f 0 ; 0 R_s ]. (2)",
"The goal of this matrix is to propagate scores between nodes according to different types of relations in the constructed knowledge graphs (Chen and Metze, 2012) .",
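The block-diagonal structure of Eq. (2) can be sketched directly (relation scores are invented toy values):

```python
import numpy as np

# Toy symmetric relation scores: R_f over 4 feature patterns, R_s over 2 concepts.
R_f = np.array([[0.0, 0.5, 0.0, 0.0],
                [0.5, 0.0, 0.2, 0.0],
                [0.0, 0.2, 0.0, 0.3],
                [0.0, 0.0, 0.3, 0.0]])
R_s = np.array([[0.0, 0.7],
                [0.7, 0.0]])

# Eq. (2): block-diagonal relation propagation matrix M_R = [[R_f, 0], [0, R_s]].
M_R = np.block([[R_f, np.zeros((4, 2))],
                [np.zeros((2, 4)), R_s]])
```

The zero off-diagonal blocks mean that propagation never mixes feature nodes with concept nodes; each graph propagates scores only among its own nodes.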
"Integrated Model With a feature model M F and a relation propagation model M R , we integrate them into a single matrix.",
"M = M_F · (M_R + I) = [ F_f R_f + F_f 0 ; 0 F_s R_s + F_s ], (3) where M is the final matrix and I is the identity matrix, included to preserve the original values.",
"The matrix M is similar to M F , but some weights are enhanced through the relation propagation model.",
"The feature relations are built by F f R f , which is the matrix with internal weight propagation on the feature knowledge graph (the blue arrow in Figure 1(b) ).",
"Similarly, F s R s models the semantic concept correlations, and can be treated as the matrix with internal weight propagation on the semantic concept knowledge graph (the pink arrow in Figure 1(b) ).",
"The propagation model can be treated as running a random walk algorithm on the graphs.",
"By integrating with the relation propagation model, the relations can be propagated via the knowledge graphs, and the hidden information may be modeled based on the assumption that mutual relations usually help inference (Chen et al., 2015b) .",
"Hence, the structure information can be automatically involved in the matrix.",
"In conclusion, for each utterance, the integrated model not only predicts the probabilities that semantic concepts occur but also considers whether they are domain-specific.",
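Eq. (3) itself is a one-liner; a sketch with toy matrices of the shapes used above (values invented, and M_R here is a generic dense matrix rather than the block-diagonal one of Eq. (2)):

```python
import numpy as np

rng = np.random.default_rng(1)
M_F = rng.integers(0, 2, size=(3, 6)).astype(float)  # toy feature model, |U| = 3
M_R = rng.uniform(0.0, 0.5, size=(6, 6))             # toy relation propagation matrix

# Eq. (3): M = M_F . (M_R + I). Adding the identity keeps the original observed
# weights, while M_F @ M_R propagates scores along the (toy) graph relations.
M = M_F @ (M_R + np.eye(6))
```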
"Model Learning The proposed model is parameterized through weights and latent component vectors, where the parameters are estimated by maximizing the log likelihood of observed data (Collins et al., 2001) .",
"θ* = arg max_θ ∏_{u∈U} p(θ | M_u) = arg max_θ ∏_{u∈U} p(M_u | θ) p(θ) = arg max_θ ∑_{u∈U} ln p(M_u | θ) − λ_θ, (4) where M_u is the vector corresponding to the utterance u from M_{u,x} in (1), because we assume that each utterance is independent of others.",
"To avoid treating unobserved facts as designed negative facts, we consider our positive-only data as implicit feedback.",
"Bayesian Personalized Ranking (BPR) is an optimization criterion that learns from implicit feedback for MF, which uses a variant of the ranking: giving observed true facts higher scores than unobserved (true or false) facts (Rendle et al., 2009 ).",
"Riedel et al.",
"(2013) also showed that BPR learns the implicit relations and improves a relation extraction task.",
"To estimate the parameters in (4), we create a dataset of ranked pairs from M in (3): for each utterance u and each observed fact f⁺ = ⟨u, x⁺⟩, where M_{u,x⁺} ≥ δ, we choose each semantic concept x⁻ such that f⁻ = ⟨u, x⁻⟩, where M_{u,x⁻} < δ, which refers to a semantic concept we have not observed in utterance u.",
"That is, we construct the observed data O from M .",
"Then for each pair of facts f + and f − , we want to model p(f + ) > p(f − ) and hence θ f + > θ f − according to (1).",
"BPR maximizes the summation over ranked pairs, where the objective is ∑_{u∈U} ln p(M_u | θ) = ∑_{f⁺∈O} ∑_{f⁻∉O} ln σ(θ_{f⁺} − θ_{f⁻}). (5)",
"The BPR objective is an approximation to the per-utterance AUC (area under the ROC curve), which directly correlates with what we want to achieve: well-ranked semantic concepts per utterance, denoting a better estimation of semantic slots or human behaviors.",
"To maximize the objective in (5), we employ a stochastic gradient descent (SGD) algorithm (Rendle et al., 2009) .",
"For each randomly sampled observed fact ⟨u, x⁺⟩, we sample an unobserved fact ⟨u, x⁻⟩, which results in |O| fact pairs ⟨f⁻, f⁺⟩.",
"For each pair, we perform an SGD update using the gradient of the corresponding objective function for matrix factorization (Gantner et al., 2011) .",
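A minimal sketch of one such BPR SGD update, again under the assumed inner-product parameterization θ_{u,x} = U_lat[u]·X_lat[x]; the learning rate, regularization strength, and toy sizes are all assumptions, not values from the text:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bpr_sgd_step(U_lat, X_lat, u, x_pos, x_neg, lr=0.05, reg=0.01):
    """One SGD ascent step on ln sigma(theta_f+ - theta_f-) for a sampled pair,
    with theta_ux = U_lat[u] . X_lat[x] and L2 regularization (rates assumed)."""
    u_old = U_lat[u].copy()
    g = 1.0 - sigmoid(u_old @ (X_lat[x_pos] - X_lat[x_neg]))   # gradient scale
    U_lat[u]     += lr * (g * (X_lat[x_pos] - X_lat[x_neg]) - reg * u_old)
    X_lat[x_pos] += lr * ( g * u_old - reg * X_lat[x_pos])
    X_lat[x_neg] += lr * (-g * u_old - reg * X_lat[x_neg])

rng = np.random.default_rng(0)
U_lat = rng.normal(scale=0.1, size=(3, 4))
X_lat = rng.normal(scale=0.1, size=(6, 4))

before = U_lat[0] @ (X_lat[2] - X_lat[5])   # theta_f+ - theta_f- before training
for _ in range(50):                          # repeatedly update one sampled pair
    bpr_sgd_step(U_lat, X_lat, u=0, x_pos=2, x_neg=5)
after = U_lat[0] @ (X_lat[2] - X_lat[5])    # the margin should have grown
```

Each step pushes the observed fact above the unobserved one, which is exactly the ranking behavior Eq. (5) rewards.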
"Conclusion and Future Work This thesis proposal proposes an unsupervised SLU approach by automating the dialogue learning process on speech conversations.",
"The preliminary results show that for the automatic speech recognition (ASR) transcripts (word error rate is about 37%), the acquired knowledge can be successfully applied to SLU modeling through MF techniques, guiding the direction of the methodology.",
"The main planned tasks include: • Semantic concept identification • Semantic concept annotation • SLU modeling by matrix factorization. In this thesis proposal, ongoing work and future plans have been presented towards an automatically built domain-specific SDS.",
"With increasing semantic resources, such as Google's Knowledge Graph and Microsoft Satori, the dissertation shows the feasibility that utilizing available knowledge improves the generalization and the scalability of dialogue system development for practical usage."
]
}
|
{
"paper_header_number": [
"1",
"3",
"4",
"4.1",
"4.2",
"5",
"5.1",
"5.2",
"5.3",
"5.4",
"6"
],
"paper_header_content": [
"Introduction",
"The Proposed Work",
"Knowledge Acquisition",
"Ontology Induction",
"Structure Learning",
"SLU Modeling by Matrix Factorization",
"Feature Model",
"Relation Propagation Model",
"Integrated Model",
"Model Learning",
"Conclusion and Future Work"
]
}
|
GEM-SciDuet-train-48#paper-1077#slide-20
|
Conclusions
|
Ontology induction and knowledge graph construction enable systems to automatically acquire open domain knowledge.
MF for SLU provides a principled model that is able to
unify the automatically acquired knowledge
adapt to a domain-specific setting
and then allows systems to consider implicit semantics for better understanding.
The work shows the feasibility and the potential of improving generalization, maintenance, efficiency, and scalability of SDSs.
The proposed unsupervised SLU achieves 43% of MAP on ASR-transcribed conversations.
|
Ontology induction and knowledge graph construction enable systems to automatically acquire open domain knowledge.
MF for SLU provides a principled model that is able to
unify the automatically acquired knowledge
adapt to a domain-specific setting
and then allows systems to consider implicit semantics for better understanding.
The work shows the feasibility and the potential of improving generalization, maintenance, efficiency, and scalability of SDSs.
The proposed unsupervised SLU achieves 43% of MAP on ASR-transcribed conversations.
|
[] |
GEM-SciDuet-train-49#paper-1078#slide-0
|
1078
|
Graph Methods for Multilingual FrameNets
|
This paper introduces a new, graph-based view of the data of the FrameNet project, which we hope will make it easier to understand the mixture of semantic and syntactic information contained in FrameNet annotation. We show how English FrameNet and other Frame Semantic resources can be represented as sets of interconnected graphs of frames, frame elements, semantic types, and annotated instances of them in text. We display examples of the new graphical representation based on the annotations, which combine Frame Semantics and Construction Grammar, thus capturing most of the syntax and semantics of each sentence. We consider how graph theory could help researchers to make better use of FrameNet data for tasks such as automatic Frame Semantic role labeling, paraphrasing, and translation. Finally, we describe the development of FrameNet-like lexical resources for other languages in the current Multilingual FrameNet project, which seeks to discover cross-lingual alignments, both in the lexicon (for frames and lexical units within frames) and across parallel or comparable texts. We conclude with an example showing graphically the semantic and syntactic similarities and differences between parallel sentences in English and Japanese. We will release software for displaying such graphs from the current data releases.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89
],
"paper_content_text": [
"Overview In this paper, we provide a new graph-based display of FrameNet annotation, which we hope will make the complex data model of FrameNet more accessible to a variety of users.",
"We begin with a brief introduction to the Frame Semantics and the FrameNet project and their underlying graph structures.",
"Section 3 illustrates how annotation maps words in sentences to nodes in FrameNet, showing the structure of a sentence in the new graph representation.",
"Sect.",
"4 discusses how the graph representation could help NLP developers, particularly w.r.t.",
"automatic semantic role labeling.",
"In Sect.",
"5, we introduce the Multilingual FrameNet project, and what comparisons of frame structures across languages might reveal by way of another example sentence in the new format, then discuss our conclusions and acknowledge support for our work.",
"Frame Semantics and English FrameNet The FrameNet Project Baker, 2010, Ruppenhofer et al., 2016] at the International Computer Science Institute (ICSI) is an ongoing project to produce a lexicon of English that is both human-and machine-readable, based on the theory of Frame Semantics developed by Charles Fillmore and colleagues [Fillmore, 1997] and supported by annotating corpus examples of the lexical items.",
"Although FrameNet (FN) is a lexical resource, it is organized not around words, but rather the roughly 1,200 semantic frames [Fillmore, 1976] : characterizations of events, relations, states and entities which are the conceptual basis for understanding the word senses, called lexical units (LUs).",
"Frames are distinguished by the set of roles involved, known as frame elements (FEs).",
"Defining individual lexical units relative to semantic frames provides a crucial level of generalization for their meaning and use.",
"Much of the information in FN is derived from the more than 200,000 manually annotated corpus sentences; annotators not only mark the target word which evokes the frame, but also those phrases which are syntactically related to the target word and express its frame elements.",
"FN covers roughly 13,500 LUs, and provides very rich syntagmatic information about the combinatorial possibilities of each LU.",
"Each frame averages about 10 frame elements, and the same frame can be evoked by words (or multiword expressions) of any part of speech.",
"FrameNet frames are connected by eight types of relations, including full inheritance (IS-A relation) in which all core FEs are inherited, weaker forms of inheritance (called Using and Perspective on), and relations between statives, inchoatives, and causatives.",
"Most frames are linked in a single large lattice (analyzed in Valverde-Albacete [2008] ).",
"The full graph is difficult to render, but can be browsed at https://framenet.icsi.berkeley.",
"edu/fndrupal/FrameGrapher FrameNet also has a small hierarchy of semantic types which can be marked on Frames, FEs and LUs; a portion is shown in Fig.",
"1 .",
"Many of the semantic types in FrameNet are similar to nodes in widely used ontologies, but they are limited to those which are linguistically important; for example, most agent FEs (not only those called \"Agent\", but all those descended from the AGENT FE in the high-level frame Intentionally act) have the semantic type SENTIENT (Non-sentient actants receive the FE CAUSE).",
"(Footnote 1: Matching FE semantic types to fillers is complicated by phenomena such as metonymy (The White house announced today ...) and personification (She still runs good, but eventually she'll need new tires.), not fully addressed in FN.)",
"Some semantic types add information which cross-cuts the frame hierarchy; e.g., POSITIVE JUDGEMENT and NEGATIVE JUDGEMENT are used to separate those LUs in the frames Judgement, Judgement communication and Judgement direct address that have positive affect from those with negative affect.",
"Frame Semantic and Construction Grammar representation of sentence meaning The development of Frame Semantics has gone hand in hand with the development of Construction Grammar, by Fillmore and a wide range of colleagues (Michaelis [2010] , Feldman et al.",
"[2010] ).",
"FrameNet annotators not only mark which spans of the corpus sentences instantiate which Frame Elements, but also the phrase type (PT) of the constituent that covers that span 2 and the grammatical function (GF, a.k.a.",
"grammatical relation) between that constituent and the target instance of the lexical unit as a coextensive set of spans on three annotation \"layers\".",
"Additional information is added on other \"layers\" indicating the presence of copulas and other support verbs, the antecedents of relative clauses, etc.",
"This syntactic information, based on Construction Grammar, can be combined with the FE labels to form a joint syntactico-semantic representation of much of the meaning of a sentence.",
"In graph terms, the annotation process creates a mapping between the string of characters in the sentence and (1) nodes representing frame elements in the frame hierarchy and (2) nodes representing parts of constructions in the Construction Grammar hierarchy.",
"We illustrate this with an example sentence extracted from a TED talk entitled \"Do schools kill creativity?\"",
"by Ken Robinson 3 : The thing they were good at in school wasn't valued, or was actually stigmatized.",
"The graph representation derived from FrameNet annotation is shown in Fig.",
"2 .",
"In this figure, the nodes of the graph are syntactico-semantic entities (solid borders) or semantic entities (dotted borders) and the words of the sentence are the terminal nodes of the graph (in boxes).",
"Each edge specifies the relationship between nodes, solid black for syntactico-semantic relations.",
"Though not shown in this graph, each frame instance is also linked to the frame hierarchy graph (Sec.",
"2).",
"The edges descending from the frames semantically represent the relations described by Frame Elements in the same hierarchy.",
"The dotted lines pointing to dotted nodes are links into the semantic type hierarchy (Sec.",
"2.1).",
"The syntactic features of the non-terminal nodes are summarized by Phrase Type (PT) labels (S, N, NP, V, VP, PP, etc.",
"with their conventional meanings) and part-ofspeech (not shown).",
"Other features on the edges are syntactico-semantic categories: T (target, the word(s) that evokes the frame), RelC (relative clause), Ant.",
"(antecedent of relative clause), Head (syntactic and semantic head), Sem H (semantic head), and Supp (support, a syntactic head).",
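A tiny, purely illustrative Python sketch of how such typed edges might be stored and traversed; the node names follow the example sentence, but the specific FE label ("Evaluee") and the exact edge set are assumptions, not taken from the released annotation:

```python
# Hypothetical mini-encoding of part of the Fig. 2 graph as (source, target, label)
# triples, with edge labels drawn from the categories listed above.
edges = [
    ("S", "Judgement[1]", "Sem H"),        # semantic head of the sentence
    ("Judgement[1]", "valued", "T"),       # target word evoking the frame
    ("Judgement[1]", "NP[3]", "Evaluee"),  # frame element edge (FE name assumed)
    ("NP[3]", "thing", "Head"),            # syntactic and semantic head of the NP
]

def semantic_head(node):
    """Follow Head / Sem H edges downward until no such edge remains."""
    for src, dst, label in edges:
        if src == node and label in ("Head", "Sem H"):
            return semantic_head(dst)
    return node
```

Starting from "S", this walk stops at the frame instance Judgement[1], since the T and FE edges leaving it are not head edges; an application can then drill into the frame's own structure as needed.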
"Applications of FrameNet data as a graph The ability to separate syntactic and semantic dependency is potentially of use in many tasks involving FrameNet data, including automatic semantic role labeling (ASRL), inferencing, language generation, and cross-linguistic comparison.",
"Because of the clear representation of syntactic and semantic dependency in the graph (displayed in Fig.",
"2 by vertical position, arrow direction, and non-local edges), many tasks should be able to use the graph even without special processing for the subtypes of edges, e.g.",
"for relative clauses as seen under NP[3].",
"To find out the overall meaning of this sentence, one can start from the \"S\" node and follow the edges marked \"Head\" or \"Sem H\" to the two instances of the Judgement frame.",
"From there, the application can drill further down as needed, into the frame hierarchy, the semantic type hierarchy, or the fillers of the frame roles.",
"One task in particular that could use the full power of such graphs is automatic semantic role labeling (ASRL).",
"The high cost of expert semantic annotation has spurred interest in building ASRL systems.",
"Much of this has been based on the Prop-Bank [Palmer et al., 2005] style of annotation, but work on Frame Semantic role labelers has continued, with increasing success (Das et al.",
"[2014] , Roth and Lapata [2015] ). (Figure 3: Frame Semantic Annotation of Equivalent Japanese Sentence.) These systems generally reflect the effort those researchers have made to understand the FrameNet data in depth, including dependencies between semantic roles within a frame, propagation of semantic types across frames, and dependencies between syntax and semantics in a specific sentence.",
"When Frame Element annotation is treated simply as independent tags for machine learning (even if syntactic information is imported from other sources), the learning algorithms are starved of the information needed to make smarter generalizations about the large proportion of the syntactic information about each lexical unit that is predictable from other lexical units in the frame, other related frames, or structures of the language as a whole, such as passivization and relative clause structure.",
"The current distribution format of the FrameNet data does not make this clear.",
"Since FrameNet data is basically discrete and categorial, treating it as an interlocking set of graphs should enable better use of all the information, explicit and implicit, in FrameNet.",
"Multilingual FrameNet The development of the FrameNet resource at ICSI has inspired the creation of a number of Frame Semantics-based projects for other languages: efforts on Spanish, German, Japanese, Chinese, Swedish, Brazilian Portuguese, and French have all received substantial funding, primarily from their national or provincial governments.",
"The basic research question is: to what extent are the semantic frames universal and to what extent are they language-specific?",
"Even if equivalent frames exist in two languages, how much of the frame structure will be preserved in translation?",
"If a different frame is used, is it a near neighbor via frame relations in one or both of the languages?",
"These questions have also been discussed by, e.g.",
"Boas [2009], Čulo [2013], and Čulo and de Melo [2012].",
"The sentence in Fig.",
"2 is part of an experiment in annotation of parallel texts; TED talks were chosen because translations are freely available in all of these languages.",
"The TED talk translations are done by volunteers, so they may not be of professional quality, but this is a common situation on the web today, which NLP research has to deal with.",
"In general the TED talk translations tend to be fairly \"literal\", so we would expect that the frames would be very similar across languages.",
"However, frame differences occur even here.",
"E.g, in the graph of the Japanese translation of this sentence (shown in Fig.",
"3) , the first conjunct has the Judgement frame like the English, but the second instance of Judgement in English is translated by the frame Labeling in Japanese.",
"Here the agent of the labeling is the school, presumably metonymic for either the faculty, the students, or both.",
"Thus, the graph representation of the FrameNet data helps to make clear which parts of the sentences to compare across languages.",
"We hope that ultimately such comparisons will lead to graph-based MT systems that can transfer meaning at a deeper level.",
"One of the goals of the Multilingual FrameNet project is to quantify the patterns of frame occurrence across varied languages.",
"The new annotation of parallel texts has just begun, so the number of instances of frames is still small, but we can report some suggestive results based on comparing the annotation of verbs of motion in two texts.",
"One is the TED talk, where we have annotation for English and Brazilian Portuguese; the other is a chapter of the Sherlock Holmes story \"The Hound of the Baskervilles\", translated by professional translators, where we compare annotation in English and Spanish.",
"(We had done some annotation previously on these texts in English, Spanish, Japanese and German, but not Portuguese.)",
"Table 1 (Frame similarity and difference across parallel texts), with columns Name, Lang, Same, Partial, Diff., Tot.: TED, EN-PT: 38, 4, 22, 64; Hound, EN-ES: 33, 3, 23, 59.",
"Table 1 gives the counts for instances of verbs of motion in two texts, showing cases where the aligned verbs are the same or different across languages.",
"We had hypothesized that the professional, literary translations of the \"Hound\" text would have more cross-linguistic differences, while the volunteer translations of the TED talks would be more often frame-preserving.",
"The counts shown here conform to that expectation, but the differences are not conclusive.",
"Conclusion FrameNet data is extremely rich, but not usually presented in a form that is easy for use in NLP.",
"There are clear advantages to viewing the FrameNet annotation data as a graph that separates out entities (nodes) from relations (edges) and clarifies which information is semantic, syntactic, or both.",
"The semantic information can be cleanly integrated with FrameNet's already elaborate graph of frames and semantic types, while generalizations over syntactic information should enable improved use of FrameNet annotation in ASRL training and cross-linguistic comparison.",
"Acknowledgements"
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"5",
"6"
],
"paper_header_content": [
"Overview",
"Frame Semantics and English FrameNet",
"Frame Semantic and Construction",
"Applications of FrameNet data as a graph",
"Multilingual FrameNet",
"Conclusion"
]
}
|
GEM-SciDuet-train-49#paper-1078#slide-0
|
The Multilingual FrameNet Project
|
The FrameNet lexical database as a set of graphs FN annotation Sentences Conclusions References
Organize and align existing FrameNet-like projects in 8-10 languages
Provide a multilingual language resource to NLP research, language teachers, etc.
Improve access to and understanding of FrameNet data from all languages (both lexicon and annotated texts)
What data structures are appropriate for the new resource?
How universal are semantic frames? What are implications for MT, cross-linguistic IE & IR, etc.?
How can graph methods help us achieve these goals? We hope to receive suggestions from the TextGraph community
|
The FrameNet lexical database as a set of graphs FN annotation Sentences Conclusions References
Organize and align existing FrameNet-like projects in 8-10 languages
Provide a multilingual language resource to NLP research, language teachers, etc.
Improve access to and understanding of FrameNet data from all languages (both lexicon and annotated texts)
What data structures are appropriate for the new resource?
How universal are semantic frames? What are implications for MT, cross-linguistic IE & IR, etc.?
How can graph methods help us achieve these goals? We hope to receive suggestions from the TextGraph community
|
[] |
GEM-SciDuet-train-49#paper-1078#slide-1
|
1078
|
Graph Methods for Multilingual FrameNets
|
This paper introduces a new, graph-based view of the data of the FrameNet project, which we hope will make it easier to understand the mixture of semantic and syntactic information contained in FrameNet annotation. We show how English FrameNet and other Frame Semantic resources can be represented as sets of interconnected graphs of frames, frame elements, semantic types, and annotated instances of them in text. We display examples of the new graphical representation based on the annotations, which combine Frame Semantics and Construction Grammar, thus capturing most of the syntax and semantics of each sentence. We consider how graph theory could help researchers to make better use of FrameNet data for tasks such as automatic Frame Semantic role labeling, paraphrasing, and translation. Finally, we describe the development of FrameNet-like lexical resources for other languages in the current Multilingual FrameNet project, which seeks to discover cross-lingual alignments, both in the lexicon (for frames and lexical units within frames) and across parallel or comparable texts. We conclude with an example showing graphically the semantic and syntactic similarities and differences between parallel sentences in English and Japanese. We will release software for displaying such graphs from the current data releases.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89
],
"paper_content_text": [
"Overview In this paper, we provide a new graph-based display of FrameNet annotation, which we hope will make the complex data model of FrameNet more accessible to a variety of users.",
"We begin with a brief introduction to the Frame Semantics and the FrameNet project and their underlying graph structures.",
"Section 3 illustrates how annotation maps words in sentences to nodes in FrameNet, showing the structure of a sentence in the new graph representation.",
"Sect.",
"4 discusses how the graph representation could help NLP developers, particularly w.r.t.",
"automatic semantic role labeling.",
"In Sect.",
"5, we introduce the Multilingual FrameNet project, and what comparisons of frame structures across languages might reveal by way of another example sentence in the new format, then discuss our conclusions and acknowledge support for our work.",
"Frame Semantics and English FrameNet The FrameNet Project Baker, 2010, Ruppenhofer et al., 2016] at the International Computer Science Institute (ICSI) is an ongoing project to produce a lexicon of English that is both human-and machine-readable, based on the theory of Frame Semantics developed by Charles Fillmore and colleagues [Fillmore, 1997] and supported by annotating corpus examples of the lexical items.",
"Although FrameNet (FN) is a lexical resource, it is organized not around words, but rather the roughly 1,200 semantic frames [Fillmore, 1976] : characterizations of events, relations, states and entities which are the conceptual basis for understanding the word senses, called lexical units (LUs).",
"Frames are distinguished by the set of roles involved, known as frame elements (FEs).",
"Defining individual lexical units relative to semantic frames provides a crucial level of generalization for their meaning and use.",
"Much of the information in FN is derived from the more than 200,000 manually annotated corpus sentences; annotators not only mark the target word which evokes the frame, but also those phrases which are syntactically related to the target word and express its frame elements.",
"FN covers roughly 13,500 LUs, and provides very rich syntagmatic information about the combinatorial possibilities of each LU.",
"Each frame averages about 10 frame elements, and the same frame can be evoked by words (or multiword expressions) of any part of speech.",
"FrameNet frames are connected by eight types of relations, including full inheritance (IS-A relation) in which all core FEs are inherited, weaker forms of inheritance (called Using and Perspective on), and relations between statives, inchoatives, and causatives.",
"Most frames are linked in a single large lattice (analyzed in Valverde-Albacete [2008] ).",
"The full graph is difficult to render, but can be browsed at https://framenet.icsi.berkeley.",
"edu/fndrupal/FrameGrapher FrameNet also has a small hierarchy of semantic types which can be marked on Frames, FEs and LUs; a portion is shown in Fig.",
"1 .",
"Many of the semantic types in FrameNet are similar to nodes in widely used ontologies, but they are limited to those which are linguistically important; for example, most agent FEs (not only those called \"Agent\", but all those descended from the AGENT FE in the high-level frame Intentionally act) have the semantic type SENTIENT (Non-sentient actants receive the FE CAUSE).",
"(Footnote 1: Matching FE semantic types to fillers is complicated by phenomena such as metonymy (The White house announced today ...) and personification (She still runs good, but eventually she'll need new tires.), not fully addressed in FN.)",
"Some semantic types add information which cross-cuts the frame hierarchy; e.g., POSITIVE JUDGEMENT and NEGATIVE JUDGEMENT are used to separate those LUs in the frames Judgement, Judgement communication and Judgement direct address that have positive affect from those with negative affect.",
"Frame Semantic and Construction Grammar representation of sentence meaning The development of Frame Semantics has gone hand in hand with the development of Construction Grammar, by Fillmore and a wide range of colleagues (Michaelis [2010] , Feldman et al.",
"[2010] ).",
"FrameNet annotators not only mark which spans of the corpus sentences instantiate which Frame Elements, but also the phrase type (PT) of the constituent that covers that span 2 and the grammatical function (GF, a.k.a.",
"grammatical relation) between that constituent and the target instance of the lexical unit as a coextensive set of spans on three annotation \"layers\".",
"Additional information is added on other \"layers\" indicating the presence of copulas and other support verbs, the antecedents of relative clauses, etc.",
"This syntactic information, based on Construction Grammar, can be combined with the FE labels to form a joint syntactico-semantic representation of much of the meaning of a sentence.",
"In graph terms, the annotation process creates a mapping between the string of characters in the sentence and (1) nodes representing frame elements in the frame hierarchy and (2) nodes representing parts of constructions in the Construction Grammar hierarchy.",
"We illustrate this with an example sentence extracted from a TED talk entitled \"Do schools kill creativity?\"",
"by Ken Robinson 3 : The thing they were good at in school wasn't valued, or was actually stigmatized.",
"The graph representation derived from FrameNet annotation is shown in Fig.",
"2 .",
"4 In this figure, the nodes of the graph are syntactico-semantic entities (solid borders) or semantic entities (dotted borders) and the words of the sentence are the terminal nodes of the graph (in boxes).",
"Each edge specifies the relationship between nodes, solid black for syntactico-semantic Though not shown in this graph, each frame instance is also linked to the frame hierarchy graph (Sec.",
"2).",
"The edges descending from the frames semantically represent the relations described by Frame Elements in the same hierarchy.",
"The dotted lines pointing to dotted nodes are links into the semantic type hierarchy (Sec.",
"2.1).",
"The syntactic features of the non-terminal nodes are summarized by Phrase Type (PT) labels (S, N, NP, V, VP, PP, etc.",
"with their conventional meanings) and part-ofspeech (not shown).",
"Other features on the edges are syntactico-semantic categories: T (target, the word(s) that evokes the frame), RelC (relative clause), Ant.",
"(antecedent of relative clause), Head (syntactic and semantic head), Sem H (semantic head), and Supp (support, a syntactic head).",
"Applications of FrameNet data as a graph The ability to separate syntactic and semantic dependency is potentially of use in many tasks involving FrameNet data, including automatic semantic role labeling (ASRL), inferencing, language generation, and cross-linguistic comparison.",
"Because of the clear representation of syntactic and semantic dependency in the graph (displayed in Fig.",
"2 by vertical position, arrow direction, and non-local edges), many tasks should be able to use the graph even without special processing for the subtypes of edges, e.g.",
"for relative clauses as seen under NP[3].",
"To find out the overall meaning of this sentence, one can start from the \"S\" node and follow the edges marked \"Head\" or \"Sem H\" to the two instances of the Judgement frame.",
"From there, the application can drill further down as needed, into the frame hierarchy, the semantic type hierarchy, or the fillers of the frame roles.",
"One task in particular that could use the full power of such graphs is automatic semantic role labeling (ASRL).",
"The high cost of expert semantic annotation has spurred interest in building ASRL systems.",
"Much of this has been based on the Prop-Bank [Palmer et al., 2005] style of annotation, but work on Frame Semantic role labelers has continued, with increasing success (Das et al.",
"[2014] , Roth and Lapata [2015] Figure 3 : Frame Semantic Annotation of Equivalent Japanese Sentence generally reflect the effort those researchers have made to understand the FrameNet data in depth, including dependencies between semantic roles within a frame, propagation of semantic types across frames, and dependencies between syntax and semantics in a specific sentence.",
"When Frame Element annotation is treated simply as independent tags for machine learning (even if syntactic information is imported from other sources), the learning algorithms are starved of the information needed to make smarter generalizations about the large proportion of the syntactic information about each lexical unit that is predictable from other lexical units in the frame, other related frames, or structures of the language as a whole, such as passivization and relative clause structure.",
"The current distribution format of the FrameNet data does not make this clear.",
"Since FrameNet data is basically discrete and categorial, treating it as an interlocking set of graphs should enable better use of all the information, explicit and implicit, in FrameNet.",
"Multilingual FrameNet The development of the FrameNet resource at ICSI has inspired the creation of a number of Frame Semantics-based projects for other languages: efforts on Spanish, German, Japanese, Chinese, Swedish, Brazilian Portuguese, and French have all received substantial funding, primarily from their national or provincial governments.",
"The basic research question is: to what extent are the semantic frames universal and to what extent are they language-specific?",
"Even if equivalent frames exist in two languages, how much of the frame structure will be preserved in translation?",
"If a different frame is used, is it a near neighbor via frame relations in one or both of the languages?",
"These questions have also been discussed by, e.g.",
"Boas [2009],Čulo [2013] , andČulo and de Melo [2012] .",
"The sentence in Fig.",
"2 is part of an experiment in annotation of parallel texts; TED talks were chosen because translations are freely available in all of these languages.",
"The TED talk translations are done by volunteers, so they may not be of professional quality, but this is a common situation on the web today, which NLP research has to deal with.",
"In general the TED talk translations tend to be fairly \"literal\", so we would expect that the frames would be very similar across languages.",
"However, frame differences occur even here.",
"E.g, in the graph of the Japanese translation of this sentence (shown in Fig.",
"3) , the first conjunct has the Judgement frame like the English, but the second instance of Judgement in English is translated by the frame Labeling in Japanese.",
"Here the agent of the labeling is the school, pre-sumably metonymic for either the faculty, the students, or both.",
"5 Thus, the graph representation of the FrameNet data helps to make clear which parts of the sentences to compare across languages.",
"We hope that ultimately such comparisons will lead to graph-based MT systems that can transfer meaning at a deeper level.",
"One of the goals of the Multilingual FrameNet project is to quantify the patterns of frame occurrence across varied languages.",
"The new annotation of parallel texts has just begun, so we the number of instances of frames is still small, but we can report some suggestive results based on comparing the annotation of verbs of motion in two texts.",
"One is the TED talk, where we have annotation for English and Brazilian Portuguese; the other is a chapter of the Sherlock Holmes story \"The Hound of the Baskervilles\", translated by professional translators, where we compare annotation in English and Spanish.",
"(We some annotation previously on these texts in English, Spanish, Japanese and German, but not Portuguese.)",
"Name Lang Same Partial Diff.",
"Tot.",
"TED EN-PT 38 4 22 64 Hound EN-ES 33 3 23 59 Table 1 : Frame similarity and difference across parallel texts Table 1 gives the counts for instances of verbs of motion in two texts, showing cases where the aligned verbs are the same or different across languages.",
"We had hypothesized that the professional, literary translations of the \"Hound\" text would have more cross-linguistic differences, while that the volunteer translations of the TED talks would be more often frame-preserving.",
"The counts shown here conform to that expectation, but the differences are not conclusive.",
"Conclusion FrameNet data is extremely rich, but not usually presented in a form that is easy for use in NLP.",
"There are clear advantages to viewing the FrameNet annotation data as a graph that separates out entities (nodes) from relations (edges) and clarifies which information is semantic, syntactic, or both.",
"The semantic information can be cleanly integrated with FrameNet's already elaborate graph of frames and semantic types, while generalizations over syntactic information should enable improved use of FrameNet annotation in ASRL training and cross-linguistic comparison.",
"Acknowledgements"
]
}
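The traversal described in the paper text above ("start from the S node and follow the edges marked Head or Sem H to the two instances of the Judgement frame") can be sketched as a plain adjacency map. This is an illustrative reconstruction of part of the Fig. 2 graph, not the official FrameNet release format; the node names, edge labels, and the `semantic_heads` helper are all assumptions made for the sketch.

```python
# Hypothetical fragment of the Fig. 2 sentence graph: node -> [(child, edge_label)].
# Node names ("Judgement[1]" etc.) are illustrative only.
EDGES = {
    "S": [("Judgement[1]", "Head"), ("Judgement[2]", "Sem H")],
    "Judgement[1]": [("NP[3]", "Evaluee"), ("valued", "T")],
    "Judgement[2]": [("NP[3]", "Evaluee"), ("stigmatized", "T")],
}

def semantic_heads(graph, start="S"):
    """Follow only Head / Sem H edges out of a node, i.e. descend toward
    the frame instances that carry the sentence's core meaning."""
    return [child for child, label in graph.get(start, [])
            if label in ("Head", "Sem H")]

frames = semantic_heads(EDGES)  # the two Judgement frame instances
```

From each frame-instance node an application could then "drill further down" along the FE-labeled edges (Evaluee, T) exactly as the text describes, without special handling for edge subtypes.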
|
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"5",
"6"
],
"paper_header_content": [
"Overview",
"Frame Semantics and English FrameNet",
"Frame Semantic and Construction",
"Applications of FrameNet data as a graph",
"Multilingual FrameNet",
"Conclusion"
]
}
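Section 2 of the paper text above notes that semantic types propagate through the frame hierarchy: FEs descended from the AGENT FE of Intentionally act carry the type SENTIENT. A minimal sketch of that lookup over a toy inheritance lattice follows; the specific frame names, FE names, and the `fe_semantic_type` helper are hypothetical entries for illustration, not the FrameNet release data.

```python
# Toy frame-inheritance lattice (child -> parent frames); illustrative only.
INHERITS = {
    "Judgement": ["Intentionally_act"],
    "Judgement_communication": ["Judgement"],
}
# Semantic types marked directly on an FE of a frame.
FE_TYPES = {("Intentionally_act", "Agent"): "Sentient"}

def fe_semantic_type(frame, fe):
    """Return the FE's semantic type, walking up the inheritance
    graph until a frame that marks the type directly is found."""
    if (frame, fe) in FE_TYPES:
        return FE_TYPES[(frame, fe)]
    for parent in INHERITS.get(frame, []):
        inherited = fe_semantic_type(parent, fe)
        if inherited:
            return inherited
    return None
```

Treating the marking as a graph lookup like this, rather than as independent tags, is what lets a single annotation on a high-level frame constrain FEs in all descendant frames.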
|
GEM-SciDuet-train-49#paper-1078#slide-1
|
Frames Frame elements Lemmas and Lexical units
|
The FrameNet lexical database as a set of graphs FN annotation Sentences Conclusions References
Frames and Frame Elements (FEs)
Frames and Lexical Units (LUs)
Judgement: admire.v, contempt.n, stigmatize.v, reverence.n
Take place of: replace.v, replacement.n, take place of.v
|
The FrameNet lexical database as a set of graphs FN annotation Sentences Conclusions References
Frames and Frame Elements (FEs)
Frames and Lexical Units (LUs)
Judgement: admire.v, contempt.n, stigmatize.v, reverence.n
Take place of: replace.v, replacement.n, take place of.v
|
[] |
GEM-SciDuet-train-49#paper-1078#slide-2
|
1078
|
Graph Methods for Multilingual FrameNets
|
This paper introduces a new, graphbased view of the data of the FrameNet project, which we hope will make it easier to understand the mixture of semantic and syntactic information contained in FrameNet annotation. We show how English FrameNet and other Frame Semantic resources can be represented as sets of interconnected graphs of frames, frame elements, semantic types, and annotated instances of them in text. We display examples of the new graphical representation based on the annotations, which combine Frame Semantics and Construction Grammar, thus capturing most of the syntax and semantics of each sentence. We consider how graph theory could help researchers to make better use of FrameNet data for tasks such as automatic Frame Semantic role labeling, paraphrasing, and translation. Finally, we describe the development of FrameNet-like lexical resources for other languages in the current Multilingual FrameNet project. which seeks to discover cross-lingual alignments, both in the lexicon (for frames and lexical units within frames) and across parallel or comparable texts. We conclude with an example showing graphically the semantic and syntactic similarities and differences between parallel sentences in English and Japanese. We will release software for displaying such graphs from the current data releases.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89
],
"paper_content_text": [
"Overview In this paper, we provide a new graph-based display of FrameNet annotation, which we hope will make the complex data model of FrameNet more accessible to a variety of users.",
"We begin with a brief introduction to the Frame Semantics and the FrameNet project and their underlying graph structures.",
"Section 3 illustrates how annotation maps words in sentences to nodes in FrameNet, showing the struture of a sentence in the new graph representation.",
"Sect.",
"4 discusses how the graph representation could help NLP developers, particularly w.r.t.",
"automatic semantic role labeling.",
"In Sect.",
"5, we introduce the Multilingual FrameNet project, and what comparisons of frame structures across languages might reveal by way of another example sentence in the new format, then discuss our conclusions and acknowledge support for our work.",
"Frame Semantics and English FrameNet The FrameNet Project Baker, 2010, Ruppenhofer et al., 2016] at the International Computer Science Institute (ICSI) is an ongoing project to produce a lexicon of English that is both human-and machine-readable, based on the theory of Frame Semantics developed by Charles Fillmore and colleagues [Fillmore, 1997] and supported by annotating corpus examples of the lexical items.",
"Although FrameNet (FN) is a lexical resource, it is organized not around words, but rather the roughly 1,200 semantic frames [Fillmore, 1976] : characterizations of events, relations, states and entities which are the conceptual basis for understanding the word senses, called lexical units (LUs).",
"Frames are distinguished by the set of roles involved, known as frame elements (FEs).",
"Defining individual lexical units relative to semantic frames provides a crucial level of generalization for their meaning and use.",
"Much of the information in FN is derived from the more than 200,000 manually annotated corpus sentences; annotators not only mark the target word which evokes the frame, but also those phrases which are syntactically related to the target word and express its frame elements.",
"FN covers roughly 13,500 LUs, and provides very rich syntagmatic information about the combinatorial possibilities of each LU.",
"Each frame averages about 10 frame elements, and the same frame can be evoked by words (or multiword expressions) of any part of speech.",
"FrameNet frames are connected by eight types of relations, including full inheritance (IS-A relation) in which all core FEs are inherited, weaker forms of inheritance (called Using and Perspective on), and relations between statives, inchoatives, and causatives.",
"Most frames are linked in a single large lattice (analyzed in Valverde-Albacete [2008] ).",
"The full graph is difficult to render, but can be browsed at https://framenet.icsi.berkeley.",
"edu/fndrupal/FrameGrapher FrameNet also has a small hierarchy of semantic types which can be marked on Frames, FEs and LUs; a portion is shown in Fig.",
"1 .",
"Many of the semantic types in FrameNet are similar to nodes in widely used ontologies, but they are limited to those which are linguistically important; for example, most agent FEs (not only those called \"Agent\", but all those descended from the AGENT FE in the high-level frame Intentionally act) have the semantic type SENTIENT (Non-sentient actants receive the FE CAUSE).",
"1 Some semantic 1 Matching FE semantic types to fillers is complicated by phenomena such as metonymy (The White house announced today .",
".",
". )",
"and personification (She still runs good, but eventually she'll need new tires.",
"), not fully addressed in FN.",
"types add information which cross-cuts the frame hierarchy; e.g., POSITIVE JUDGEMENT and NEG-ATIVE JUDGEMENT are used to separate those LUs in the frames Judgement, Judgement communication and Judgement direct address that have positive affect from those with negative affect.",
"Frame Semantic and Construction Grammar representation of sentence meaning The development of Frame Semantics has gone hand in hand with the development of Construction Grammar, by Fillmore and a wide range of colleagues (Michaelis [2010] , Feldman et al.",
"[2010] ).",
"FrameNet annotators not only mark which spans of the corpus sentences instantiate which Frame Elements, but also the phrase type (PT) of the constituent that covers that span 2 and the grammatical function (GF, a.k.a.",
"grammatical relation) between that constituent and the target instance of the lexical unit as a coextensive set of spans on three annotation \"layers\".",
"Additional information is added on other \"layers\" indicating the presence of copulas and other support verbs, the antecedents of relative clauses, etc.",
"This syntactic information, based on Construction Grammar, can be combined with the FE labels to form a joint syntactico-semantic representation of much of the meaning of a sentence.",
"In graph terms, the annotation process creates a mapping between the string of characters in the sentence and (1) nodes representing frame elements in the frame hierarchy and (2) nodes representing parts of constructions in the Construction Grammar hierarchy.",
"We illustrate this with an example sentence extracted from a TED talk entitled \"Do schools kill creativity?\"",
"by Ken Robinson 3 : The thing they were good at in school wasn't valued, or was actually stigmatized.",
"The graph representation derived from FrameNet annotation is shown in Fig.",
"2 .",
"4 In this figure, the nodes of the graph are syntactico-semantic entities (solid borders) or semantic entities (dotted borders) and the words of the sentence are the terminal nodes of the graph (in boxes).",
"Each edge specifies the relationship between nodes, solid black for syntactico-semantic Though not shown in this graph, each frame instance is also linked to the frame hierarchy graph (Sec.",
"2).",
"The edges descending from the frames semantically represent the relations described by Frame Elements in the same hierarchy.",
"The dotted lines pointing to dotted nodes are links into the semantic type hierarchy (Sec.",
"2.1).",
"The syntactic features of the non-terminal nodes are summarized by Phrase Type (PT) labels (S, N, NP, V, VP, PP, etc.",
"with their conventional meanings) and part-ofspeech (not shown).",
"Other features on the edges are syntactico-semantic categories: T (target, the word(s) that evokes the frame), RelC (relative clause), Ant.",
"(antecedent of relative clause), Head (syntactic and semantic head), Sem H (semantic head), and Supp (support, a syntactic head).",
"Applications of FrameNet data as a graph The ability to separate syntactic and semantic dependency is potentially of use in many tasks involving FrameNet data, including automatic semantic role labeling (ASRL), inferencing, language generation, and cross-linguistic comparison.",
"Because of the clear representation of syntactic and semantic dependency in the graph (displayed in Fig.",
"2 by vertical position, arrow direction, and non-local edges), many tasks should be able to use the graph even without special processing for the subtypes of edges, e.g.",
"for relative clauses as seen under NP[3].",
"To find out the overall meaning of this sentence, one can start from the \"S\" node and follow the edges marked \"Head\" or \"Sem H\" to the two instances of the Judgement frame.",
"From there, the application can drill further down as needed, into the frame hierarchy, the semantic type hierarchy, or the fillers of the frame roles.",
"One task in particular that could use the full power of such graphs is automatic semantic role labeling (ASRL).",
"The high cost of expert semantic annotation has spurred interest in building ASRL systems.",
"Much of this has been based on the Prop-Bank [Palmer et al., 2005] style of annotation, but work on Frame Semantic role labelers has continued, with increasing success (Das et al.",
"[2014] , Roth and Lapata [2015] Figure 3 : Frame Semantic Annotation of Equivalent Japanese Sentence generally reflect the effort those researchers have made to understand the FrameNet data in depth, including dependencies between semantic roles within a frame, propagation of semantic types across frames, and dependencies between syntax and semantics in a specific sentence.",
"When Frame Element annotation is treated simply as independent tags for machine learning (even if syntactic information is imported from other sources), the learning algorithms are starved of the information needed to make smarter generalizations about the large proportion of the syntactic information about each lexical unit that is predictable from other lexical units in the frame, other related frames, or structures of the language as a whole, such as passivization and relative clause structure.",
"The current distribution format of the FrameNet data does not make this clear.",
"Since FrameNet data is basically discrete and categorial, treating it as an interlocking set of graphs should enable better use of all the information, explicit and implicit, in FrameNet.",
"Multilingual FrameNet The development of the FrameNet resource at ICSI has inspired the creation of a number of Frame Semantics-based projects for other languages: efforts on Spanish, German, Japanese, Chinese, Swedish, Brazilian Portuguese, and French have all received substantial funding, primarily from their national or provincial governments.",
"The basic research question is: to what extent are the semantic frames universal and to what extent are they language-specific?",
"Even if equivalent frames exist in two languages, how much of the frame structure will be preserved in translation?",
"If a different frame is used, is it a near neighbor via frame relations in one or both of the languages?",
"These questions have also been discussed by, e.g.",
"Boas [2009],Čulo [2013] , andČulo and de Melo [2012] .",
"The sentence in Fig.",
"2 is part of an experiment in annotation of parallel texts; TED talks were chosen because translations are freely available in all of these languages.",
"The TED talk translations are done by volunteers, so they may not be of professional quality, but this is a common situation on the web today, which NLP research has to deal with.",
"In general the TED talk translations tend to be fairly \"literal\", so we would expect that the frames would be very similar across languages.",
"However, frame differences occur even here.",
"E.g, in the graph of the Japanese translation of this sentence (shown in Fig.",
"3) , the first conjunct has the Judgement frame like the English, but the second instance of Judgement in English is translated by the frame Labeling in Japanese.",
"Here the agent of the labeling is the school, pre-sumably metonymic for either the faculty, the students, or both.",
"5 Thus, the graph representation of the FrameNet data helps to make clear which parts of the sentences to compare across languages.",
"We hope that ultimately such comparisons will lead to graph-based MT systems that can transfer meaning at a deeper level.",
"One of the goals of the Multilingual FrameNet project is to quantify the patterns of frame occurrence across varied languages.",
"The new annotation of parallel texts has just begun, so we the number of instances of frames is still small, but we can report some suggestive results based on comparing the annotation of verbs of motion in two texts.",
"One is the TED talk, where we have annotation for English and Brazilian Portuguese; the other is a chapter of the Sherlock Holmes story \"The Hound of the Baskervilles\", translated by professional translators, where we compare annotation in English and Spanish.",
"(We some annotation previously on these texts in English, Spanish, Japanese and German, but not Portuguese.)",
"Name Lang Same Partial Diff.",
"Tot.",
"TED EN-PT 38 4 22 64 Hound EN-ES 33 3 23 59 Table 1 : Frame similarity and difference across parallel texts Table 1 gives the counts for instances of verbs of motion in two texts, showing cases where the aligned verbs are the same or different across languages.",
"We had hypothesized that the professional, literary translations of the \"Hound\" text would have more cross-linguistic differences, while that the volunteer translations of the TED talks would be more often frame-preserving.",
"The counts shown here conform to that expectation, but the differences are not conclusive.",
"Conclusion FrameNet data is extremely rich, but not usually presented in a form that is easy for use in NLP.",
"There are clear advantages to viewing the FrameNet annotation data as a graph that separates out entities (nodes) from relations (edges) and clarifies which information is semantic, syntactic, or both.",
"The semantic information can be cleanly integrated with FrameNet's already elaborate graph of frames and semantic types, while generalizations over syntactic information should enable improved use of FrameNet annotation in ASRL training and cross-linguistic comparison.",
"Acknowledgements"
]
}
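The cross-lingual comparison reported in Table 1 of the paper text above (Same / Partial / Diff frame counts for aligned motion verbs) amounts to a simple tally over aligned frame pairs. The sketch below shows that tally on toy data; the aligned pairs, the `RELATED` set, and the three-way classification are illustrative assumptions, since the real counts come from the manual parallel-text annotation described in the text.

```python
from collections import Counter

# Toy aligned annotations: (source-language frame, target-language frame)
# for each aligned verb. Real pairs come from the EN-PT / EN-ES annotation.
ALIGNED = [
    ("Judgement", "Judgement"),        # same frame preserved in translation
    ("Judgement", "Labeling"),         # different frame chosen
    ("Motion", "Motion_directional"),  # partial: near neighbors via frame relations
]
# Unordered frame pairs counted as "Partial" matches (assumed, not from FN data).
RELATED = {frozenset(["Motion", "Motion_directional"])}

def classify(src, tgt):
    """Three-way classification used for the Table 1-style tally."""
    if src == tgt:
        return "Same"
    if frozenset([src, tgt]) in RELATED:
        return "Partial"
    return "Diff"

counts = Counter(classify(s, t) for s, t in ALIGNED)
```

Run over the actual TED and "Hound" annotations, this kind of tally yields rows like "TED EN-PT: 38 same, 4 partial, 22 different".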
|
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"5",
"6"
],
"paper_header_content": [
"Overview",
"Frame Semantics and English FrameNet",
"Frame Semantic and Construction",
"Applications of FrameNet data as a graph",
"Multilingual FrameNet",
"Conclusion"
]
}
|
GEM-SciDuet-train-49#paper-1078#slide-2
|
Frames Frame elements Lemmas and Lexical units as a graph
|
The FrameNet lexical database as a set of graphs FN annotation Sentences Conclusions References
|
The FrameNet lexical database as a set of graphs FN annotation Sentences Conclusions References
|
[] |
GEM-SciDuet-train-49#paper-1078#slide-3
|
1078
|
Graph Methods for Multilingual FrameNets
|
This paper introduces a new, graphbased view of the data of the FrameNet project, which we hope will make it easier to understand the mixture of semantic and syntactic information contained in FrameNet annotation. We show how English FrameNet and other Frame Semantic resources can be represented as sets of interconnected graphs of frames, frame elements, semantic types, and annotated instances of them in text. We display examples of the new graphical representation based on the annotations, which combine Frame Semantics and Construction Grammar, thus capturing most of the syntax and semantics of each sentence. We consider how graph theory could help researchers to make better use of FrameNet data for tasks such as automatic Frame Semantic role labeling, paraphrasing, and translation. Finally, we describe the development of FrameNet-like lexical resources for other languages in the current Multilingual FrameNet project. which seeks to discover cross-lingual alignments, both in the lexicon (for frames and lexical units within frames) and across parallel or comparable texts. We conclude with an example showing graphically the semantic and syntactic similarities and differences between parallel sentences in English and Japanese. We will release software for displaying such graphs from the current data releases.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89
],
"paper_content_text": [
"Overview In this paper, we provide a new graph-based display of FrameNet annotation, which we hope will make the complex data model of FrameNet more accessible to a variety of users.",
"We begin with a brief introduction to the Frame Semantics and the FrameNet project and their underlying graph structures.",
"Section 3 illustrates how annotation maps words in sentences to nodes in FrameNet, showing the struture of a sentence in the new graph representation.",
"Sect.",
"4 discusses how the graph representation could help NLP developers, particularly w.r.t.",
"automatic semantic role labeling.",
"In Sect.",
"5, we introduce the Multilingual FrameNet project, and what comparisons of frame structures across languages might reveal by way of another example sentence in the new format, then discuss our conclusions and acknowledge support for our work.",
"Frame Semantics and English FrameNet The FrameNet Project Baker, 2010, Ruppenhofer et al., 2016] at the International Computer Science Institute (ICSI) is an ongoing project to produce a lexicon of English that is both human-and machine-readable, based on the theory of Frame Semantics developed by Charles Fillmore and colleagues [Fillmore, 1997] and supported by annotating corpus examples of the lexical items.",
"Although FrameNet (FN) is a lexical resource, it is organized not around words, but rather the roughly 1,200 semantic frames [Fillmore, 1976] : characterizations of events, relations, states and entities which are the conceptual basis for understanding the word senses, called lexical units (LUs).",
"Frames are distinguished by the set of roles involved, known as frame elements (FEs).",
"Defining individual lexical units relative to semantic frames provides a crucial level of generalization for their meaning and use.",
"Much of the information in FN is derived from the more than 200,000 manually annotated corpus sentences; annotators not only mark the target word which evokes the frame, but also those phrases which are syntactically related to the target word and express its frame elements.",
"FN covers roughly 13,500 LUs, and provides very rich syntagmatic information about the combinatorial possibilities of each LU.",
"Each frame averages about 10 frame elements, and the same frame can be evoked by words (or multiword expressions) of any part of speech.",
"FrameNet frames are connected by eight types of relations, including full inheritance (IS-A relation) in which all core FEs are inherited, weaker forms of inheritance (called Using and Perspective on), and relations between statives, inchoatives, and causatives.",
"Most frames are linked in a single large lattice (analyzed in Valverde-Albacete [2008] ).",
"The full graph is difficult to render, but can be browsed at https://framenet.icsi.berkeley.",
"edu/fndrupal/FrameGrapher FrameNet also has a small hierarchy of semantic types which can be marked on Frames, FEs and LUs; a portion is shown in Fig.",
"1 .",
"Many of the semantic types in FrameNet are similar to nodes in widely used ontologies, but they are limited to those which are linguistically important; for example, most agent FEs (not only those called \"Agent\", but all those descended from the AGENT FE in the high-level frame Intentionally act) have the semantic type SENTIENT (Non-sentient actants receive the FE CAUSE).",
"1 Some semantic 1 Matching FE semantic types to fillers is complicated by phenomena such as metonymy (The White house announced today .",
".",
". )",
"and personification (She still runs good, but eventually she'll need new tires.",
"), not fully addressed in FN.",
"types add information which cross-cuts the frame hierarchy; e.g., POSITIVE JUDGEMENT and NEG-ATIVE JUDGEMENT are used to separate those LUs in the frames Judgement, Judgement communication and Judgement direct address that have positive affect from those with negative affect.",
"Frame Semantic and Construction Grammar representation of sentence meaning The development of Frame Semantics has gone hand in hand with the development of Construction Grammar, by Fillmore and a wide range of colleagues (Michaelis [2010] , Feldman et al.",
"[2010] ).",
"FrameNet annotators not only mark which spans of the corpus sentences instantiate which Frame Elements, but also the phrase type (PT) of the constituent that covers that span 2 and the grammatical function (GF, a.k.a.",
"grammatical relation) between that constituent and the target instance of the lexical unit as a coextensive set of spans on three annotation \"layers\".",
"Additional information is added on other \"layers\" indicating the presence of copulas and other support verbs, the antecedents of relative clauses, etc.",
"This syntactic information, based on Construction Grammar, can be combined with the FE labels to form a joint syntactico-semantic representation of much of the meaning of a sentence.",
"In graph terms, the annotation process creates a mapping between the string of characters in the sentence and (1) nodes representing frame elements in the frame hierarchy and (2) nodes representing parts of constructions in the Construction Grammar hierarchy.",
"We illustrate this with an example sentence extracted from a TED talk entitled \"Do schools kill creativity?\"",
"by Ken Robinson 3 : The thing they were good at in school wasn't valued, or was actually stigmatized.",
"The graph representation derived from FrameNet annotation is shown in Fig.",
"2 .",
"4 In this figure, the nodes of the graph are syntactico-semantic entities (solid borders) or semantic entities (dotted borders) and the words of the sentence are the terminal nodes of the graph (in boxes).",
"Each edge specifies the relationship between nodes, solid black for syntactico-semantic Though not shown in this graph, each frame instance is also linked to the frame hierarchy graph (Sec.",
"2).",
"The edges descending from the frames semantically represent the relations described by Frame Elements in the same hierarchy.",
"The dotted lines pointing to dotted nodes are links into the semantic type hierarchy (Sec.",
"2.1).",
"The syntactic features of the non-terminal nodes are summarized by Phrase Type (PT) labels (S, N, NP, V, VP, PP, etc.",
"with their conventional meanings) and part-ofspeech (not shown).",
"Other features on the edges are syntactico-semantic categories: T (target, the word(s) that evokes the frame), RelC (relative clause), Ant.",
"(antecedent of relative clause), Head (syntactic and semantic head), Sem H (semantic head), and Supp (support, a syntactic head).",
"Applications of FrameNet data as a graph The ability to separate syntactic and semantic dependency is potentially of use in many tasks involving FrameNet data, including automatic semantic role labeling (ASRL), inferencing, language generation, and cross-linguistic comparison.",
"Because of the clear representation of syntactic and semantic dependency in the graph (displayed in Fig.",
"2 by vertical position, arrow direction, and non-local edges), many tasks should be able to use the graph even without special processing for the subtypes of edges, e.g.",
"for relative clauses as seen under NP[3].",
"To find out the overall meaning of this sentence, one can start from the \"S\" node and follow the edges marked \"Head\" or \"Sem H\" to the two instances of the Judgement frame.",
"From there, the application can drill further down as needed, into the frame hierarchy, the semantic type hierarchy, or the fillers of the frame roles.",
"One task in particular that could use the full power of such graphs is automatic semantic role labeling (ASRL).",
"The high cost of expert semantic annotation has spurred interest in building ASRL systems.",
"Much of this has been based on the Prop-Bank [Palmer et al., 2005] style of annotation, but work on Frame Semantic role labelers has continued, with increasing success (Das et al.",
"[2014] , Roth and Lapata [2015] Figure 3 : Frame Semantic Annotation of Equivalent Japanese Sentence generally reflect the effort those researchers have made to understand the FrameNet data in depth, including dependencies between semantic roles within a frame, propagation of semantic types across frames, and dependencies between syntax and semantics in a specific sentence.",
"When Frame Element annotation is treated simply as independent tags for machine learning (even if syntactic information is imported from other sources), the learning algorithms are starved of the information needed to make smarter generalizations about the large proportion of the syntactic information about each lexical unit that is predictable from other lexical units in the frame, other related frames, or structures of the language as a whole, such as passivization and relative clause structure.",
"The current distribution format of the FrameNet data does not make this clear.",
"Since FrameNet data is basically discrete and categorial, treating it as an interlocking set of graphs should enable better use of all the information, explicit and implicit, in FrameNet.",
"Multilingual FrameNet The development of the FrameNet resource at ICSI has inspired the creation of a number of Frame Semantics-based projects for other languages: efforts on Spanish, German, Japanese, Chinese, Swedish, Brazilian Portuguese, and French have all received substantial funding, primarily from their national or provincial governments.",
"The basic research question is: to what extent are the semantic frames universal and to what extent are they language-specific?",
"Even if equivalent frames exist in two languages, how much of the frame structure will be preserved in translation?",
"If a different frame is used, is it a near neighbor via frame relations in one or both of the languages?",
"These questions have also been discussed by, e.g.",
"Boas [2009],Čulo [2013] , andČulo and de Melo [2012] .",
"The sentence in Fig.",
"2 is part of an experiment in annotation of parallel texts; TED talks were chosen because translations are freely available in all of these languages.",
"The TED talk translations are done by volunteers, so they may not be of professional quality, but this is a common situation on the web today, which NLP research has to deal with.",
"In general the TED talk translations tend to be fairly \"literal\", so we would expect that the frames would be very similar across languages.",
"However, frame differences occur even here.",
"E.g, in the graph of the Japanese translation of this sentence (shown in Fig.",
"3) , the first conjunct has the Judgement frame like the English, but the second instance of Judgement in English is translated by the frame Labeling in Japanese.",
"Here the agent of the labeling is the school, pre-sumably metonymic for either the faculty, the students, or both.",
"5 Thus, the graph representation of the FrameNet data helps to make clear which parts of the sentences to compare across languages.",
"We hope that ultimately such comparisons will lead to graph-based MT systems that can transfer meaning at a deeper level.",
"One of the goals of the Multilingual FrameNet project is to quantify the patterns of frame occurrence across varied languages.",
"The new annotation of parallel texts has just begun, so we the number of instances of frames is still small, but we can report some suggestive results based on comparing the annotation of verbs of motion in two texts.",
"One is the TED talk, where we have annotation for English and Brazilian Portuguese; the other is a chapter of the Sherlock Holmes story \"The Hound of the Baskervilles\", translated by professional translators, where we compare annotation in English and Spanish.",
"(We some annotation previously on these texts in English, Spanish, Japanese and German, but not Portuguese.)",
"Name Lang Same Partial Diff.",
"Tot.",
"TED EN-PT 38 4 22 64 Hound EN-ES 33 3 23 59 Table 1 : Frame similarity and difference across parallel texts Table 1 gives the counts for instances of verbs of motion in two texts, showing cases where the aligned verbs are the same or different across languages.",
"We had hypothesized that the professional, literary translations of the \"Hound\" text would have more cross-linguistic differences, while that the volunteer translations of the TED talks would be more often frame-preserving.",
"The counts shown here conform to that expectation, but the differences are not conclusive.",
"Conclusion FrameNet data is extremely rich, but not usually presented in a form that is easy for use in NLP.",
"There are clear advantages to viewing the FrameNet annotation data as a graph that separates out entities (nodes) from relations (edges) and clarifies which information is semantic, syntactic, or both.",
"The semantic information can be cleanly integrated with FrameNet's already elaborate graph of frames and semantic types, while generalizations over syntactic information should enable improved use of FrameNet annotation in ASRL training and cross-linguistic comparison.",
"Acknowledgements"
]
}
|
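The traversal the paper describes for reading the sentence graph (start from the "S" node and follow edges marked Head or Sem H down to the frame instances) can be sketched with a toy graph. This is a minimal illustration: the node names and the hand-built edge dictionary only mimic Fig. 2 and are not the official FrameNet release format or API.

```python
# Toy sentence-annotation graph in the spirit of Fig. 2 (hypothetical
# node names; not the official FrameNet data format).
EDGES = {
    # parent node -> list of (child node, edge label)
    "S": [("Judgement_1", "Head"), ("Judgement_2", "Sem H")],
    "Judgement_1": [("NP[3]", "Evaluee"), ("valued", "T")],
    "Judgement_2": [("NP[3]", "Evaluee"), ("stigmatized", "T")],
    "NP[3]": [("The thing they were good at in school", "Head")],
}

def frame_instances(start="S"):
    """Follow only Head / Sem H edges from the sentence node down to
    the frame-evoking instances, as the text describes."""
    found, stack = [], [start]
    while stack:
        node = stack.pop()
        for child, label in EDGES.get(node, []):
            if label in ("Head", "Sem H"):
                if child.startswith("Judgement"):
                    found.append(child)
                stack.append(child)
    return sorted(found)

print(frame_instances())  # ['Judgement_1', 'Judgement_2']
```

Edge labels other than Head/Sem H (FE names, T, RelC, etc.) are deliberately skipped during the descent, mirroring how an application would first locate the semantically central frames and only then drill down into role fillers.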
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"5",
"6"
],
"paper_header_content": [
"Overview",
"Frame Semantics and English FrameNet",
"Frame Semantic and Construction",
"Applications of FrameNet data as a graph",
"Multilingual FrameNet",
"Conclusion"
]
}
|
GEM-SciDuet-train-49#paper-1078#slide-3
|
Frame relations
|
The FrameNet lexical database as a set of graphs FN annotation Sentences Conclusions References
Perspective on (full example)
Causative of, Inchoative of
|
The FrameNet lexical database as a set of graphs FN annotation Sentences Conclusions References
Perspective on (full example)
Causative of, Inchoative of
|
[] |
GEM-SciDuet-train-49#paper-1078#slide-4
|
1078
|
Graph Methods for Multilingual FrameNets
|
This paper introduces a new, graph-based view of the data of the FrameNet project, which we hope will make it easier to understand the mixture of semantic and syntactic information contained in FrameNet annotation. We show how English FrameNet and other Frame Semantic resources can be represented as sets of interconnected graphs of frames, frame elements, semantic types, and annotated instances of them in text. We display examples of the new graphical representation based on the annotations, which combine Frame Semantics and Construction Grammar, thus capturing most of the syntax and semantics of each sentence. We consider how graph theory could help researchers to make better use of FrameNet data for tasks such as automatic Frame Semantic role labeling, paraphrasing, and translation. Finally, we describe the development of FrameNet-like lexical resources for other languages in the current Multilingual FrameNet project, which seeks to discover cross-lingual alignments, both in the lexicon (for frames and lexical units within frames) and across parallel or comparable texts. We conclude with an example showing graphically the semantic and syntactic similarities and differences between parallel sentences in English and Japanese. We will release software for displaying such graphs from the current data releases.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89
],
"paper_content_text": [
"Overview In this paper, we provide a new graph-based display of FrameNet annotation, which we hope will make the complex data model of FrameNet more accessible to a variety of users.",
"We begin with a brief introduction to the Frame Semantics and the FrameNet project and their underlying graph structures.",
"Section 3 illustrates how annotation maps words in sentences to nodes in FrameNet, showing the struture of a sentence in the new graph representation.",
"Sect.",
"4 discusses how the graph representation could help NLP developers, particularly w.r.t.",
"automatic semantic role labeling.",
"In Sect.",
"5, we introduce the Multilingual FrameNet project, and what comparisons of frame structures across languages might reveal by way of another example sentence in the new format, then discuss our conclusions and acknowledge support for our work.",
"Frame Semantics and English FrameNet The FrameNet Project Baker, 2010, Ruppenhofer et al., 2016] at the International Computer Science Institute (ICSI) is an ongoing project to produce a lexicon of English that is both human-and machine-readable, based on the theory of Frame Semantics developed by Charles Fillmore and colleagues [Fillmore, 1997] and supported by annotating corpus examples of the lexical items.",
"Although FrameNet (FN) is a lexical resource, it is organized not around words, but rather the roughly 1,200 semantic frames [Fillmore, 1976] : characterizations of events, relations, states and entities which are the conceptual basis for understanding the word senses, called lexical units (LUs).",
"Frames are distinguished by the set of roles involved, known as frame elements (FEs).",
"Defining individual lexical units relative to semantic frames provides a crucial level of generalization for their meaning and use.",
"Much of the information in FN is derived from the more than 200,000 manually annotated corpus sentences; annotators not only mark the target word which evokes the frame, but also those phrases which are syntactically related to the target word and express its frame elements.",
"FN covers roughly 13,500 LUs, and provides very rich syntagmatic information about the combinatorial possibilities of each LU.",
"Each frame averages about 10 frame elements, and the same frame can be evoked by words (or multiword expressions) of any part of speech.",
"FrameNet frames are connected by eight types of relations, including full inheritance (IS-A relation) in which all core FEs are inherited, weaker forms of inheritance (called Using and Perspective on), and relations between statives, inchoatives, and causatives.",
"Most frames are linked in a single large lattice (analyzed in Valverde-Albacete [2008] ).",
"The full graph is difficult to render, but can be browsed at https://framenet.icsi.berkeley.",
"edu/fndrupal/FrameGrapher FrameNet also has a small hierarchy of semantic types which can be marked on Frames, FEs and LUs; a portion is shown in Fig.",
"1 .",
"Many of the semantic types in FrameNet are similar to nodes in widely used ontologies, but they are limited to those which are linguistically important; for example, most agent FEs (not only those called \"Agent\", but all those descended from the AGENT FE in the high-level frame Intentionally act) have the semantic type SENTIENT (Non-sentient actants receive the FE CAUSE).",
"1 Some semantic 1 Matching FE semantic types to fillers is complicated by phenomena such as metonymy (The White house announced today .",
".",
". )",
"and personification (She still runs good, but eventually she'll need new tires.",
"), not fully addressed in FN.",
"types add information which cross-cuts the frame hierarchy; e.g., POSITIVE JUDGEMENT and NEG-ATIVE JUDGEMENT are used to separate those LUs in the frames Judgement, Judgement communication and Judgement direct address that have positive affect from those with negative affect.",
"Frame Semantic and Construction Grammar representation of sentence meaning The development of Frame Semantics has gone hand in hand with the development of Construction Grammar, by Fillmore and a wide range of colleagues (Michaelis [2010] , Feldman et al.",
"[2010] ).",
"FrameNet annotators not only mark which spans of the corpus sentences instantiate which Frame Elements, but also the phrase type (PT) of the constituent that covers that span 2 and the grammatical function (GF, a.k.a.",
"grammatical relation) between that constituent and the target instance of the lexical unit as a coextensive set of spans on three annotation \"layers\".",
"Additional information is added on other \"layers\" indicating the presence of copulas and other support verbs, the antecedents of relative clauses, etc.",
"This syntactic information, based on Construction Grammar, can be combined with the FE labels to form a joint syntactico-semantic representation of much of the meaning of a sentence.",
"In graph terms, the annotation process creates a mapping between the string of characters in the sentence and (1) nodes representing frame elements in the frame hierarchy and (2) nodes representing parts of constructions in the Construction Grammar hierarchy.",
"We illustrate this with an example sentence extracted from a TED talk entitled \"Do schools kill creativity?\"",
"by Ken Robinson 3 : The thing they were good at in school wasn't valued, or was actually stigmatized.",
"The graph representation derived from FrameNet annotation is shown in Fig.",
"2 .",
"4 In this figure, the nodes of the graph are syntactico-semantic entities (solid borders) or semantic entities (dotted borders) and the words of the sentence are the terminal nodes of the graph (in boxes).",
"Each edge specifies the relationship between nodes, solid black for syntactico-semantic Though not shown in this graph, each frame instance is also linked to the frame hierarchy graph (Sec.",
"2).",
"The edges descending from the frames semantically represent the relations described by Frame Elements in the same hierarchy.",
"The dotted lines pointing to dotted nodes are links into the semantic type hierarchy (Sec.",
"2.1).",
"The syntactic features of the non-terminal nodes are summarized by Phrase Type (PT) labels (S, N, NP, V, VP, PP, etc.",
"with their conventional meanings) and part-ofspeech (not shown).",
"Other features on the edges are syntactico-semantic categories: T (target, the word(s) that evokes the frame), RelC (relative clause), Ant.",
"(antecedent of relative clause), Head (syntactic and semantic head), Sem H (semantic head), and Supp (support, a syntactic head).",
"Applications of FrameNet data as a graph The ability to separate syntactic and semantic dependency is potentially of use in many tasks involving FrameNet data, including automatic semantic role labeling (ASRL), inferencing, language generation, and cross-linguistic comparison.",
"Because of the clear representation of syntactic and semantic dependency in the graph (displayed in Fig.",
"2 by vertical position, arrow direction, and non-local edges), many tasks should be able to use the graph even without special processing for the subtypes of edges, e.g.",
"for relative clauses as seen under NP[3].",
"To find out the overall meaning of this sentence, one can start from the \"S\" node and follow the edges marked \"Head\" or \"Sem H\" to the two instances of the Judgement frame.",
"From there, the application can drill further down as needed, into the frame hierarchy, the semantic type hierarchy, or the fillers of the frame roles.",
"One task in particular that could use the full power of such graphs is automatic semantic role labeling (ASRL).",
"The high cost of expert semantic annotation has spurred interest in building ASRL systems.",
"Much of this has been based on the Prop-Bank [Palmer et al., 2005] style of annotation, but work on Frame Semantic role labelers has continued, with increasing success (Das et al.",
"[2014] , Roth and Lapata [2015] Figure 3 : Frame Semantic Annotation of Equivalent Japanese Sentence generally reflect the effort those researchers have made to understand the FrameNet data in depth, including dependencies between semantic roles within a frame, propagation of semantic types across frames, and dependencies between syntax and semantics in a specific sentence.",
"When Frame Element annotation is treated simply as independent tags for machine learning (even if syntactic information is imported from other sources), the learning algorithms are starved of the information needed to make smarter generalizations about the large proportion of the syntactic information about each lexical unit that is predictable from other lexical units in the frame, other related frames, or structures of the language as a whole, such as passivization and relative clause structure.",
"The current distribution format of the FrameNet data does not make this clear.",
"Since FrameNet data is basically discrete and categorial, treating it as an interlocking set of graphs should enable better use of all the information, explicit and implicit, in FrameNet.",
"Multilingual FrameNet The development of the FrameNet resource at ICSI has inspired the creation of a number of Frame Semantics-based projects for other languages: efforts on Spanish, German, Japanese, Chinese, Swedish, Brazilian Portuguese, and French have all received substantial funding, primarily from their national or provincial governments.",
"The basic research question is: to what extent are the semantic frames universal and to what extent are they language-specific?",
"Even if equivalent frames exist in two languages, how much of the frame structure will be preserved in translation?",
"If a different frame is used, is it a near neighbor via frame relations in one or both of the languages?",
"These questions have also been discussed by, e.g.",
"Boas [2009],Čulo [2013] , andČulo and de Melo [2012] .",
"The sentence in Fig.",
"2 is part of an experiment in annotation of parallel texts; TED talks were chosen because translations are freely available in all of these languages.",
"The TED talk translations are done by volunteers, so they may not be of professional quality, but this is a common situation on the web today, which NLP research has to deal with.",
"In general the TED talk translations tend to be fairly \"literal\", so we would expect that the frames would be very similar across languages.",
"However, frame differences occur even here.",
"E.g, in the graph of the Japanese translation of this sentence (shown in Fig.",
"3) , the first conjunct has the Judgement frame like the English, but the second instance of Judgement in English is translated by the frame Labeling in Japanese.",
"Here the agent of the labeling is the school, pre-sumably metonymic for either the faculty, the students, or both.",
"5 Thus, the graph representation of the FrameNet data helps to make clear which parts of the sentences to compare across languages.",
"We hope that ultimately such comparisons will lead to graph-based MT systems that can transfer meaning at a deeper level.",
"One of the goals of the Multilingual FrameNet project is to quantify the patterns of frame occurrence across varied languages.",
"The new annotation of parallel texts has just begun, so we the number of instances of frames is still small, but we can report some suggestive results based on comparing the annotation of verbs of motion in two texts.",
"One is the TED talk, where we have annotation for English and Brazilian Portuguese; the other is a chapter of the Sherlock Holmes story \"The Hound of the Baskervilles\", translated by professional translators, where we compare annotation in English and Spanish.",
"(We some annotation previously on these texts in English, Spanish, Japanese and German, but not Portuguese.)",
"Name Lang Same Partial Diff.",
"Tot.",
"TED EN-PT 38 4 22 64 Hound EN-ES 33 3 23 59 Table 1 : Frame similarity and difference across parallel texts Table 1 gives the counts for instances of verbs of motion in two texts, showing cases where the aligned verbs are the same or different across languages.",
"We had hypothesized that the professional, literary translations of the \"Hound\" text would have more cross-linguistic differences, while that the volunteer translations of the TED talks would be more often frame-preserving.",
"The counts shown here conform to that expectation, but the differences are not conclusive.",
"Conclusion FrameNet data is extremely rich, but not usually presented in a form that is easy for use in NLP.",
"There are clear advantages to viewing the FrameNet annotation data as a graph that separates out entities (nodes) from relations (edges) and clarifies which information is semantic, syntactic, or both.",
"The semantic information can be cleanly integrated with FrameNet's already elaborate graph of frames and semantic types, while generalizations over syntactic information should enable improved use of FrameNet annotation in ASRL training and cross-linguistic comparison.",
"Acknowledgements"
]
}
|
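The semantic-type propagation described in the paper text (agent FEs descended from the AGENT FE of the high-level frame Intentionally act carry the type SENTIENT) amounts to a lookup that walks up the inheritance lattice. A minimal sketch, assuming a hand-built parent map; the particular inheritance chain below Intentionally_act is illustrative, not an exact excerpt of the database:

```python
# Walk up a (hypothetical) IS-A chain until a semantic-type annotation
# is found; full inheritance means core FEs and their types carry down.
PARENT = {
    "Intentionally_create": "Intentionally_act",
    "Manufacturing": "Intentionally_create",
}
FE_TYPES = {("Intentionally_act", "Agent"): "Sentient"}

def inherited_type(frame, fe):
    """Return the semantic type of `fe` in `frame`, searching ancestors."""
    while frame is not None:
        if (frame, fe) in FE_TYPES:
            return FE_TYPES[(frame, fe)]
        frame = PARENT.get(frame)
    return None

print(inherited_type("Manufacturing", "Agent"))  # Sentient
```

Declaring the type once on the highest frame and inheriting it down is exactly the kind of generalization the paper argues is lost when FE labels are treated as independent machine-learning tags.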
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"5",
"6"
],
"paper_header_content": [
"Overview",
"Frame Semantics and English FrameNet",
"Frame Semantic and Construction",
"Applications of FrameNet data as a graph",
"Multilingual FrameNet",
"Conclusion"
]
}
|
GEM-SciDuet-train-49#paper-1078#slide-4
|
Perspective on frame relations
|
The FrameNet lexical database as a set of graphs FN annotation Sentences Conclusions References
Note that reality is more complex; Quitting and Firing are not the same kind of event, there are many ways employment can end: resigning under pressure, retirement, etc.
|
The FrameNet lexical database as a set of graphs FN annotation Sentences Conclusions References
Note that reality is more complex; Quitting and Firing are not the same kind of event, there are many ways employment can end: resigning under pressure, retirement, etc.
|
[] |
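The counts reported in Table 1 can be re-entered and compared directly. The sketch below uses only the published numbers; the share of same-frame alignments is slightly higher for the TED (volunteer) translations than for the Hound (professional) ones, matching the paper's hedged conclusion that the differences are suggestive but not conclusive.

```python
# Counts from Table 1: aligned verbs of motion whose frames are the
# same, partially overlapping, or different across the two languages.
table1 = {
    "TED (EN-PT)":   {"same": 38, "partial": 4, "diff": 22, "total": 64},
    "Hound (EN-ES)": {"same": 33, "partial": 3, "diff": 23, "total": 59},
}

for text, c in table1.items():
    # sanity check: the three categories sum to the reported total
    assert c["same"] + c["partial"] + c["diff"] == c["total"]
    print(f"{text}: {c['same'] / c['total']:.1%} same-frame")
# TED (EN-PT): 59.4% same-frame
# Hound (EN-ES): 55.9% same-frame
```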
GEM-SciDuet-train-49#paper-1078#slide-5
|
1078
|
Graph Methods for Multilingual FrameNets
|
This paper introduces a new, graph-based view of the data of the FrameNet project, which we hope will make it easier to understand the mixture of semantic and syntactic information contained in FrameNet annotation. We show how English FrameNet and other Frame Semantic resources can be represented as sets of interconnected graphs of frames, frame elements, semantic types, and annotated instances of them in text. We display examples of the new graphical representation based on the annotations, which combine Frame Semantics and Construction Grammar, thus capturing most of the syntax and semantics of each sentence. We consider how graph theory could help researchers to make better use of FrameNet data for tasks such as automatic Frame Semantic role labeling, paraphrasing, and translation. Finally, we describe the development of FrameNet-like lexical resources for other languages in the current Multilingual FrameNet project, which seeks to discover cross-lingual alignments, both in the lexicon (for frames and lexical units within frames) and across parallel or comparable texts. We conclude with an example showing graphically the semantic and syntactic similarities and differences between parallel sentences in English and Japanese. We will release software for displaying such graphs from the current data releases.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89
],
"paper_content_text": [
"Overview In this paper, we provide a new graph-based display of FrameNet annotation, which we hope will make the complex data model of FrameNet more accessible to a variety of users.",
"We begin with a brief introduction to the Frame Semantics and the FrameNet project and their underlying graph structures.",
"Section 3 illustrates how annotation maps words in sentences to nodes in FrameNet, showing the struture of a sentence in the new graph representation.",
"Sect.",
"4 discusses how the graph representation could help NLP developers, particularly w.r.t.",
"automatic semantic role labeling.",
"In Sect.",
"5, we introduce the Multilingual FrameNet project, and what comparisons of frame structures across languages might reveal by way of another example sentence in the new format, then discuss our conclusions and acknowledge support for our work.",
"Frame Semantics and English FrameNet The FrameNet Project Baker, 2010, Ruppenhofer et al., 2016] at the International Computer Science Institute (ICSI) is an ongoing project to produce a lexicon of English that is both human-and machine-readable, based on the theory of Frame Semantics developed by Charles Fillmore and colleagues [Fillmore, 1997] and supported by annotating corpus examples of the lexical items.",
"Although FrameNet (FN) is a lexical resource, it is organized not around words, but rather the roughly 1,200 semantic frames [Fillmore, 1976] : characterizations of events, relations, states and entities which are the conceptual basis for understanding the word senses, called lexical units (LUs).",
"Frames are distinguished by the set of roles involved, known as frame elements (FEs).",
"Defining individual lexical units relative to semantic frames provides a crucial level of generalization for their meaning and use.",
"Much of the information in FN is derived from the more than 200,000 manually annotated corpus sentences; annotators not only mark the target word which evokes the frame, but also those phrases which are syntactically related to the target word and express its frame elements.",
"FN covers roughly 13,500 LUs, and provides very rich syntagmatic information about the combinatorial possibilities of each LU.",
"Each frame averages about 10 frame elements, and the same frame can be evoked by words (or multiword expressions) of any part of speech.",
"FrameNet frames are connected by eight types of relations, including full inheritance (IS-A relation) in which all core FEs are inherited, weaker forms of inheritance (called Using and Perspective on), and relations between statives, inchoatives, and causatives.",
"Most frames are linked in a single large lattice (analyzed in Valverde-Albacete [2008] ).",
"The full graph is difficult to render, but can be browsed at https://framenet.icsi.berkeley.",
"edu/fndrupal/FrameGrapher FrameNet also has a small hierarchy of semantic types which can be marked on Frames, FEs and LUs; a portion is shown in Fig.",
"1 .",
"Many of the semantic types in FrameNet are similar to nodes in widely used ontologies, but they are limited to those which are linguistically important; for example, most agent FEs (not only those called \"Agent\", but all those descended from the AGENT FE in the high-level frame Intentionally act) have the semantic type SENTIENT (Non-sentient actants receive the FE CAUSE).",
"1 Some semantic 1 Matching FE semantic types to fillers is complicated by phenomena such as metonymy (The White house announced today .",
".",
". )",
"and personification (She still runs good, but eventually she'll need new tires.",
"), not fully addressed in FN.",
"types add information which cross-cuts the frame hierarchy; e.g., POSITIVE JUDGEMENT and NEG-ATIVE JUDGEMENT are used to separate those LUs in the frames Judgement, Judgement communication and Judgement direct address that have positive affect from those with negative affect.",
"Frame Semantic and Construction Grammar representation of sentence meaning The development of Frame Semantics has gone hand in hand with the development of Construction Grammar, by Fillmore and a wide range of colleagues (Michaelis [2010] , Feldman et al.",
"[2010] ).",
"FrameNet annotators not only mark which spans of the corpus sentences instantiate which Frame Elements, but also the phrase type (PT) of the constituent that covers that span 2 and the grammatical function (GF, a.k.a.",
"grammatical relation) between that constituent and the target instance of the lexical unit as a coextensive set of spans on three annotation \"layers\".",
"Additional information is added on other \"layers\" indicating the presence of copulas and other support verbs, the antecedents of relative clauses, etc.",
"This syntactic information, based on Construction Grammar, can be combined with the FE labels to form a joint syntactico-semantic representation of much of the meaning of a sentence.",
"In graph terms, the annotation process creates a mapping between the string of characters in the sentence and (1) nodes representing frame elements in the frame hierarchy and (2) nodes representing parts of constructions in the Construction Grammar hierarchy.",
"We illustrate this with an example sentence extracted from a TED talk entitled \"Do schools kill creativity?\"",
"by Ken Robinson 3 : The thing they were good at in school wasn't valued, or was actually stigmatized.",
"The graph representation derived from FrameNet annotation is shown in Fig.",
"2 .",
"4 In this figure, the nodes of the graph are syntactico-semantic entities (solid borders) or semantic entities (dotted borders) and the words of the sentence are the terminal nodes of the graph (in boxes).",
"Each edge specifies the relationship between nodes, solid black for syntactico-semantic Though not shown in this graph, each frame instance is also linked to the frame hierarchy graph (Sec.",
"2).",
"The edges descending from the frames semantically represent the relations described by Frame Elements in the same hierarchy.",
"The dotted lines pointing to dotted nodes are links into the semantic type hierarchy (Sec.",
"2.1).",
"The syntactic features of the non-terminal nodes are summarized by Phrase Type (PT) labels (S, N, NP, V, VP, PP, etc.",
"with their conventional meanings) and part-ofspeech (not shown).",
"Other features on the edges are syntactico-semantic categories: T (target, the word(s) that evokes the frame), RelC (relative clause), Ant.",
"(antecedent of relative clause), Head (syntactic and semantic head), Sem H (semantic head), and Supp (support, a syntactic head).",
"Applications of FrameNet data as a graph The ability to separate syntactic and semantic dependency is potentially of use in many tasks involving FrameNet data, including automatic semantic role labeling (ASRL), inferencing, language generation, and cross-linguistic comparison.",
"Because of the clear representation of syntactic and semantic dependency in the graph (displayed in Fig.",
"2 by vertical position, arrow direction, and non-local edges), many tasks should be able to use the graph even without special processing for the subtypes of edges, e.g.",
"for relative clauses as seen under NP[3].",
"To find out the overall meaning of this sentence, one can start from the \"S\" node and follow the edges marked \"Head\" or \"Sem H\" to the two instances of the Judgement frame.",
"From there, the application can drill further down as needed, into the frame hierarchy, the semantic type hierarchy, or the fillers of the frame roles.",
"One task in particular that could use the full power of such graphs is automatic semantic role labeling (ASRL).",
"The high cost of expert semantic annotation has spurred interest in building ASRL systems.",
"Much of this has been based on the Prop-Bank [Palmer et al., 2005] style of annotation, but work on Frame Semantic role labelers has continued, with increasing success (Das et al.",
"[2014] , Roth and Lapata [2015] ). (Figure 3: Frame Semantic Annotation of Equivalent Japanese Sentence.) These role labelers generally reflect the effort those researchers have made to understand the FrameNet data in depth, including dependencies between semantic roles within a frame, propagation of semantic types across frames, and dependencies between syntax and semantics in a specific sentence.",
"When Frame Element annotation is treated simply as independent tags for machine learning (even if syntactic information is imported from other sources), the learning algorithms are starved of the information needed to make smarter generalizations about the large proportion of the syntactic information about each lexical unit that is predictable from other lexical units in the frame, other related frames, or structures of the language as a whole, such as passivization and relative clause structure.",
"The current distribution format of the FrameNet data does not make this clear.",
"Since FrameNet data is basically discrete and categorial, treating it as an interlocking set of graphs should enable better use of all the information, explicit and implicit, in FrameNet.",
"Multilingual FrameNet The development of the FrameNet resource at ICSI has inspired the creation of a number of Frame Semantics-based projects for other languages: efforts on Spanish, German, Japanese, Chinese, Swedish, Brazilian Portuguese, and French have all received substantial funding, primarily from their national or provincial governments.",
"The basic research question is: to what extent are the semantic frames universal and to what extent are they language-specific?",
"Even if equivalent frames exist in two languages, how much of the frame structure will be preserved in translation?",
"If a different frame is used, is it a near neighbor via frame relations in one or both of the languages?",
"These questions have also been discussed by, e.g.",
"Boas [2009], Čulo [2013], and Čulo and de Melo [2012].",
"The sentence in Fig.",
"2 is part of an experiment in annotation of parallel texts; TED talks were chosen because translations are freely available in all of these languages.",
"The TED talk translations are done by volunteers, so they may not be of professional quality, but this is a common situation on the web today, which NLP research has to deal with.",
"In general the TED talk translations tend to be fairly \"literal\", so we would expect that the frames would be very similar across languages.",
"However, frame differences occur even here.",
"E.g., in the graph of the Japanese translation of this sentence (shown in Fig.",
"3) , the first conjunct has the Judgement frame like the English, but the second instance of Judgement in English is translated by the frame Labeling in Japanese.",
"Here the agent of the labeling is the school, pre-sumably metonymic for either the faculty, the students, or both.",
"5 Thus, the graph representation of the FrameNet data helps to make clear which parts of the sentences to compare across languages.",
"We hope that ultimately such comparisons will lead to graph-based MT systems that can transfer meaning at a deeper level.",
"One of the goals of the Multilingual FrameNet project is to quantify the patterns of frame occurrence across varied languages.",
"The new annotation of parallel texts has just begun, so the number of instances of frames is still small, but we can report some suggestive results based on comparing the annotation of verbs of motion in two texts.",
"One is the TED talk, where we have annotation for English and Brazilian Portuguese; the other is a chapter of the Sherlock Holmes story \"The Hound of the Baskervilles\", translated by professional translators, where we compare annotation in English and Spanish.",
"(We had done some annotation previously on these texts in English, Spanish, Japanese and German, but not Portuguese.)",
"Table 1 (Frame similarity and difference across parallel texts): Name | Lang | Same | Partial | Diff. | Tot. -- TED | EN-PT | 38 | 4 | 22 | 64; Hound | EN-ES | 33 | 3 | 23 | 59.",
"Table 1 gives the counts for instances of verbs of motion in two texts, showing cases where the aligned verbs are the same or different across languages.",
"We had hypothesized that the professional, literary translations of the \"Hound\" text would have more cross-linguistic differences, while the volunteer translations of the TED talks would be more often frame-preserving.",
"The counts shown here conform to that expectation, but the differences are not conclusive.",
"Conclusion FrameNet data is extremely rich, but not usually presented in a form that is easy for use in NLP.",
"There are clear advantages to viewing the FrameNet annotation data as a graph that separates out entities (nodes) from relations (edges) and clarifies which information is semantic, syntactic, or both.",
"The semantic information can be cleanly integrated with FrameNet's already elaborate graph of frames and semantic types, while generalizations over syntactic information should enable improved use of FrameNet annotation in ASRL training and cross-linguistic comparison.",
"Acknowledgements"
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"5",
"6"
],
"paper_header_content": [
"Overview",
"Frame Semantics and English FrameNet",
"Frame Semantic and Construction",
"Applications of FrameNet data as a graph",
"Multilingual FrameNet",
"Conclusion"
]
}
|
GEM-SciDuet-train-49#paper-1078#slide-5
|
Frame Grapher
|
The FrameNet lexical database as a set of graphs FN annotation Sentences Conclusions References
|
The FrameNet lexical database as a set of graphs FN annotation Sentences Conclusions References
|
[] |
GEM-SciDuet-train-49#paper-1078#slide-6
|
1078
|
Graph Methods for Multilingual FrameNets
|
This paper introduces a new, graphbased view of the data of the FrameNet project, which we hope will make it easier to understand the mixture of semantic and syntactic information contained in FrameNet annotation. We show how English FrameNet and other Frame Semantic resources can be represented as sets of interconnected graphs of frames, frame elements, semantic types, and annotated instances of them in text. We display examples of the new graphical representation based on the annotations, which combine Frame Semantics and Construction Grammar, thus capturing most of the syntax and semantics of each sentence. We consider how graph theory could help researchers to make better use of FrameNet data for tasks such as automatic Frame Semantic role labeling, paraphrasing, and translation. Finally, we describe the development of FrameNet-like lexical resources for other languages in the current Multilingual FrameNet project. which seeks to discover cross-lingual alignments, both in the lexicon (for frames and lexical units within frames) and across parallel or comparable texts. We conclude with an example showing graphically the semantic and syntactic similarities and differences between parallel sentences in English and Japanese. We will release software for displaying such graphs from the current data releases.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89
],
"paper_content_text": [
"Overview In this paper, we provide a new graph-based display of FrameNet annotation, which we hope will make the complex data model of FrameNet more accessible to a variety of users.",
"We begin with a brief introduction to the Frame Semantics and the FrameNet project and their underlying graph structures.",
"Section 3 illustrates how annotation maps words in sentences to nodes in FrameNet, showing the structure of a sentence in the new graph representation.",
"Sect.",
"4 discusses how the graph representation could help NLP developers, particularly w.r.t.",
"automatic semantic role labeling.",
"In Sect.",
"5, we introduce the Multilingual FrameNet project, and what comparisons of frame structures across languages might reveal by way of another example sentence in the new format, then discuss our conclusions and acknowledge support for our work.",
"Frame Semantics and English FrameNet The FrameNet Project [Baker, 2010, Ruppenhofer et al., 2016] at the International Computer Science Institute (ICSI) is an ongoing project to produce a lexicon of English that is both human- and machine-readable, based on the theory of Frame Semantics developed by Charles Fillmore and colleagues [Fillmore, 1997] and supported by annotating corpus examples of the lexical items.",
"Although FrameNet (FN) is a lexical resource, it is organized not around words, but rather the roughly 1,200 semantic frames [Fillmore, 1976] : characterizations of events, relations, states and entities which are the conceptual basis for understanding the word senses, called lexical units (LUs).",
"Frames are distinguished by the set of roles involved, known as frame elements (FEs).",
"Defining individual lexical units relative to semantic frames provides a crucial level of generalization for their meaning and use.",
"Much of the information in FN is derived from the more than 200,000 manually annotated corpus sentences; annotators not only mark the target word which evokes the frame, but also those phrases which are syntactically related to the target word and express its frame elements.",
"FN covers roughly 13,500 LUs, and provides very rich syntagmatic information about the combinatorial possibilities of each LU.",
"Each frame averages about 10 frame elements, and the same frame can be evoked by words (or multiword expressions) of any part of speech.",
"FrameNet frames are connected by eight types of relations, including full inheritance (IS-A relation) in which all core FEs are inherited, weaker forms of inheritance (called Using and Perspective on), and relations between statives, inchoatives, and causatives.",
"Most frames are linked in a single large lattice (analyzed in Valverde-Albacete [2008] ).",
"The full graph is difficult to render, but can be browsed at https://framenet.icsi.berkeley.",
"edu/fndrupal/FrameGrapher FrameNet also has a small hierarchy of semantic types which can be marked on Frames, FEs and LUs; a portion is shown in Fig.",
"1 .",
"Many of the semantic types in FrameNet are similar to nodes in widely used ontologies, but they are limited to those which are linguistically important; for example, most agent FEs (not only those called \"Agent\", but all those descended from the AGENT FE in the high-level frame Intentionally act) have the semantic type SENTIENT (Non-sentient actants receive the FE CAUSE).",
"1 Some semantic 1 Matching FE semantic types to fillers is complicated by phenomena such as metonymy (The White House announced today .",
".",
". )",
"and personification (She still runs good, but eventually she'll need new tires.",
"), not fully addressed in FN.",
"types add information which cross-cuts the frame hierarchy; e.g., POSITIVE JUDGEMENT and NEG-ATIVE JUDGEMENT are used to separate those LUs in the frames Judgement, Judgement communication and Judgement direct address that have positive affect from those with negative affect.",
"Frame Semantic and Construction Grammar representation of sentence meaning The development of Frame Semantics has gone hand in hand with the development of Construction Grammar, by Fillmore and a wide range of colleagues (Michaelis [2010] , Feldman et al.",
"[2010] ).",
"FrameNet annotators not only mark which spans of the corpus sentences instantiate which Frame Elements, but also the phrase type (PT) of the constituent that covers that span 2 and the grammatical function (GF, a.k.a.",
"grammatical relation) between that constituent and the target instance of the lexical unit as a coextensive set of spans on three annotation \"layers\".",
"Additional information is added on other \"layers\" indicating the presence of copulas and other support verbs, the antecedents of relative clauses, etc.",
"This syntactic information, based on Construction Grammar, can be combined with the FE labels to form a joint syntactico-semantic representation of much of the meaning of a sentence.",
"In graph terms, the annotation process creates a mapping between the string of characters in the sentence and (1) nodes representing frame elements in the frame hierarchy and (2) nodes representing parts of constructions in the Construction Grammar hierarchy.",
"We illustrate this with an example sentence extracted from a TED talk entitled \"Do schools kill creativity?\"",
"by Ken Robinson 3 : The thing they were good at in school wasn't valued, or was actually stigmatized.",
"The graph representation derived from FrameNet annotation is shown in Fig.",
"2 .",
"4 In this figure, the nodes of the graph are syntactico-semantic entities (solid borders) or semantic entities (dotted borders) and the words of the sentence are the terminal nodes of the graph (in boxes).",
"Each edge specifies the relationship between nodes, solid black for syntactico-semantic Though not shown in this graph, each frame instance is also linked to the frame hierarchy graph (Sec.",
"2).",
"The edges descending from the frames semantically represent the relations described by Frame Elements in the same hierarchy.",
"The dotted lines pointing to dotted nodes are links into the semantic type hierarchy (Sec.",
"2.1).",
"The syntactic features of the non-terminal nodes are summarized by Phrase Type (PT) labels (S, N, NP, V, VP, PP, etc.",
"with their conventional meanings) and part-ofspeech (not shown).",
"Other features on the edges are syntactico-semantic categories: T (target, the word(s) that evokes the frame), RelC (relative clause), Ant.",
"(antecedent of relative clause), Head (syntactic and semantic head), Sem H (semantic head), and Supp (support, a syntactic head).",
"Applications of FrameNet data as a graph The ability to separate syntactic and semantic dependency is potentially of use in many tasks involving FrameNet data, including automatic semantic role labeling (ASRL), inferencing, language generation, and cross-linguistic comparison.",
"Because of the clear representation of syntactic and semantic dependency in the graph (displayed in Fig.",
"2 by vertical position, arrow direction, and non-local edges), many tasks should be able to use the graph even without special processing for the subtypes of edges, e.g.",
"for relative clauses as seen under NP[3].",
"To find out the overall meaning of this sentence, one can start from the \"S\" node and follow the edges marked \"Head\" or \"Sem H\" to the two instances of the Judgement frame.",
"From there, the application can drill further down as needed, into the frame hierarchy, the semantic type hierarchy, or the fillers of the frame roles.",
"One task in particular that could use the full power of such graphs is automatic semantic role labeling (ASRL).",
"The high cost of expert semantic annotation has spurred interest in building ASRL systems.",
"Much of this has been based on the Prop-Bank [Palmer et al., 2005] style of annotation, but work on Frame Semantic role labelers has continued, with increasing success (Das et al.",
"[2014] , Roth and Lapata [2015] ). (Figure 3: Frame Semantic Annotation of Equivalent Japanese Sentence.) These role labelers generally reflect the effort those researchers have made to understand the FrameNet data in depth, including dependencies between semantic roles within a frame, propagation of semantic types across frames, and dependencies between syntax and semantics in a specific sentence.",
"When Frame Element annotation is treated simply as independent tags for machine learning (even if syntactic information is imported from other sources), the learning algorithms are starved of the information needed to make smarter generalizations about the large proportion of the syntactic information about each lexical unit that is predictable from other lexical units in the frame, other related frames, or structures of the language as a whole, such as passivization and relative clause structure.",
"The current distribution format of the FrameNet data does not make this clear.",
"Since FrameNet data is basically discrete and categorial, treating it as an interlocking set of graphs should enable better use of all the information, explicit and implicit, in FrameNet.",
"Multilingual FrameNet The development of the FrameNet resource at ICSI has inspired the creation of a number of Frame Semantics-based projects for other languages: efforts on Spanish, German, Japanese, Chinese, Swedish, Brazilian Portuguese, and French have all received substantial funding, primarily from their national or provincial governments.",
"The basic research question is: to what extent are the semantic frames universal and to what extent are they language-specific?",
"Even if equivalent frames exist in two languages, how much of the frame structure will be preserved in translation?",
"If a different frame is used, is it a near neighbor via frame relations in one or both of the languages?",
"These questions have also been discussed by, e.g.",
"Boas [2009], Čulo [2013], and Čulo and de Melo [2012].",
"The sentence in Fig.",
"2 is part of an experiment in annotation of parallel texts; TED talks were chosen because translations are freely available in all of these languages.",
"The TED talk translations are done by volunteers, so they may not be of professional quality, but this is a common situation on the web today, which NLP research has to deal with.",
"In general the TED talk translations tend to be fairly \"literal\", so we would expect that the frames would be very similar across languages.",
"However, frame differences occur even here.",
"E.g., in the graph of the Japanese translation of this sentence (shown in Fig.",
"3) , the first conjunct has the Judgement frame like the English, but the second instance of Judgement in English is translated by the frame Labeling in Japanese.",
"Here the agent of the labeling is the school, pre-sumably metonymic for either the faculty, the students, or both.",
"5 Thus, the graph representation of the FrameNet data helps to make clear which parts of the sentences to compare across languages.",
"We hope that ultimately such comparisons will lead to graph-based MT systems that can transfer meaning at a deeper level.",
"One of the goals of the Multilingual FrameNet project is to quantify the patterns of frame occurrence across varied languages.",
"The new annotation of parallel texts has just begun, so the number of instances of frames is still small, but we can report some suggestive results based on comparing the annotation of verbs of motion in two texts.",
"One is the TED talk, where we have annotation for English and Brazilian Portuguese; the other is a chapter of the Sherlock Holmes story \"The Hound of the Baskervilles\", translated by professional translators, where we compare annotation in English and Spanish.",
"(We had done some annotation previously on these texts in English, Spanish, Japanese and German, but not Portuguese.)",
"Table 1 (Frame similarity and difference across parallel texts): Name | Lang | Same | Partial | Diff. | Tot. -- TED | EN-PT | 38 | 4 | 22 | 64; Hound | EN-ES | 33 | 3 | 23 | 59.",
"Table 1 gives the counts for instances of verbs of motion in two texts, showing cases where the aligned verbs are the same or different across languages.",
"We had hypothesized that the professional, literary translations of the \"Hound\" text would have more cross-linguistic differences, while the volunteer translations of the TED talks would be more often frame-preserving.",
"The counts shown here conform to that expectation, but the differences are not conclusive.",
"Conclusion FrameNet data is extremely rich, but not usually presented in a form that is easy for use in NLP.",
"There are clear advantages to viewing the FrameNet annotation data as a graph that separates out entities (nodes) from relations (edges) and clarifies which information is semantic, syntactic, or both.",
"The semantic information can be cleanly integrated with FrameNet's already elaborate graph of frames and semantic types, while generalizations over syntactic information should enable improved use of FrameNet annotation in ASRL training and cross-linguistic comparison.",
"Acknowledgements"
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"5",
"6"
],
"paper_header_content": [
"Overview",
"Frame Semantics and English FrameNet",
"Frame Semantic and Construction",
"Applications of FrameNet data as a graph",
"Multilingual FrameNet",
"Conclusion"
]
}
|
GEM-SciDuet-train-49#paper-1078#slide-6
|
Graph of FrameNet semantic types partial
|
The FrameNet lexical database as a set of graphs FN annotation Sentences Conclusions References
Artifact Living_thing Location Body_part Container
Structure Animate_being Region Point Line
|
The FrameNet lexical database as a set of graphs FN annotation Sentences Conclusions References
Artifact Living_thing Location Body_part Container
Structure Animate_being Region Point Line
|
[] |
GEM-SciDuet-train-49#paper-1078#slide-7
|
1078
|
Graph Methods for Multilingual FrameNets
|
This paper introduces a new, graphbased view of the data of the FrameNet project, which we hope will make it easier to understand the mixture of semantic and syntactic information contained in FrameNet annotation. We show how English FrameNet and other Frame Semantic resources can be represented as sets of interconnected graphs of frames, frame elements, semantic types, and annotated instances of them in text. We display examples of the new graphical representation based on the annotations, which combine Frame Semantics and Construction Grammar, thus capturing most of the syntax and semantics of each sentence. We consider how graph theory could help researchers to make better use of FrameNet data for tasks such as automatic Frame Semantic role labeling, paraphrasing, and translation. Finally, we describe the development of FrameNet-like lexical resources for other languages in the current Multilingual FrameNet project. which seeks to discover cross-lingual alignments, both in the lexicon (for frames and lexical units within frames) and across parallel or comparable texts. We conclude with an example showing graphically the semantic and syntactic similarities and differences between parallel sentences in English and Japanese. We will release software for displaying such graphs from the current data releases.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89
],
"paper_content_text": [
"Overview In this paper, we provide a new graph-based display of FrameNet annotation, which we hope will make the complex data model of FrameNet more accessible to a variety of users.",
"We begin with a brief introduction to the Frame Semantics and the FrameNet project and their underlying graph structures.",
"Section 3 illustrates how annotation maps words in sentences to nodes in FrameNet, showing the structure of a sentence in the new graph representation.",
"Sect.",
"4 discusses how the graph representation could help NLP developers, particularly w.r.t.",
"automatic semantic role labeling.",
"In Sect.",
"5, we introduce the Multilingual FrameNet project, and what comparisons of frame structures across languages might reveal by way of another example sentence in the new format, then discuss our conclusions and acknowledge support for our work.",
"Frame Semantics and English FrameNet The FrameNet Project [Baker, 2010, Ruppenhofer et al., 2016] at the International Computer Science Institute (ICSI) is an ongoing project to produce a lexicon of English that is both human- and machine-readable, based on the theory of Frame Semantics developed by Charles Fillmore and colleagues [Fillmore, 1997] and supported by annotating corpus examples of the lexical items.",
"Although FrameNet (FN) is a lexical resource, it is organized not around words, but rather the roughly 1,200 semantic frames [Fillmore, 1976] : characterizations of events, relations, states and entities which are the conceptual basis for understanding the word senses, called lexical units (LUs).",
"Frames are distinguished by the set of roles involved, known as frame elements (FEs).",
"Defining individual lexical units relative to semantic frames provides a crucial level of generalization for their meaning and use.",
"Much of the information in FN is derived from the more than 200,000 manually annotated corpus sentences; annotators not only mark the target word which evokes the frame, but also those phrases which are syntactically related to the target word and express its frame elements.",
"FN covers roughly 13,500 LUs, and provides very rich syntagmatic information about the combinatorial possibilities of each LU.",
"Each frame averages about 10 frame elements, and the same frame can be evoked by words (or multiword expressions) of any part of speech.",
"FrameNet frames are connected by eight types of relations, including full inheritance (IS-A relation) in which all core FEs are inherited, weaker forms of inheritance (called Using and Perspective on), and relations between statives, inchoatives, and causatives.",
"Most frames are linked in a single large lattice (analyzed in Valverde-Albacete [2008] ).",
"The full graph is difficult to render, but can be browsed at https://framenet.icsi.berkeley.",
"edu/fndrupal/FrameGrapher FrameNet also has a small hierarchy of semantic types which can be marked on Frames, FEs and LUs; a portion is shown in Fig.",
"1 .",
"Many of the semantic types in FrameNet are similar to nodes in widely used ontologies, but they are limited to those which are linguistically important; for example, most agent FEs (not only those called \"Agent\", but all those descended from the AGENT FE in the high-level frame Intentionally act) have the semantic type SENTIENT (Non-sentient actants receive the FE CAUSE).",
"1 Some semantic 1 Matching FE semantic types to fillers is complicated by phenomena such as metonymy (The White House announced today .",
".",
". )",
"and personification (She still runs good, but eventually she'll need new tires.",
"), not fully addressed in FN.",
"types add information which cross-cuts the frame hierarchy; e.g., POSITIVE JUDGEMENT and NEG-ATIVE JUDGEMENT are used to separate those LUs in the frames Judgement, Judgement communication and Judgement direct address that have positive affect from those with negative affect.",
"Frame Semantic and Construction Grammar representation of sentence meaning The development of Frame Semantics has gone hand in hand with the development of Construction Grammar, by Fillmore and a wide range of colleagues (Michaelis [2010] , Feldman et al.",
"[2010] ).",
"FrameNet annotators not only mark which spans of the corpus sentences instantiate which Frame Elements, but also the phrase type (PT) of the constituent that covers that span 2 and the grammatical function (GF, a.k.a.",
"grammatical relation) between that constituent and the target instance of the lexical unit as a coextensive set of spans on three annotation \"layers\".",
"Additional information is added on other \"layers\" indicating the presence of copulas and other support verbs, the antecedents of relative clauses, etc.",
"This syntactic information, based on Construction Grammar, can be combined with the FE labels to form a joint syntactico-semantic representation of much of the meaning of a sentence.",
"In graph terms, the annotation process creates a mapping between the string of characters in the sentence and (1) nodes representing frame elements in the frame hierarchy and (2) nodes representing parts of constructions in the Construction Grammar hierarchy.",
"We illustrate this with an example sentence extracted from a TED talk entitled \"Do schools kill creativity?\"",
"by Ken Robinson 3 : The thing they were good at in school wasn't valued, or was actually stigmatized.",
"The graph representation derived from FrameNet annotation is shown in Fig.",
"2 .",
"4 In this figure, the nodes of the graph are syntactico-semantic entities (solid borders) or semantic entities (dotted borders) and the words of the sentence are the terminal nodes of the graph (in boxes).",
"Each edge specifies the relationship between nodes, solid black for syntactico-semantic Though not shown in this graph, each frame instance is also linked to the frame hierarchy graph (Sec.",
"2).",
"The edges descending from the frames semantically represent the relations described by Frame Elements in the same hierarchy.",
"The dotted lines pointing to dotted nodes are links into the semantic type hierarchy (Sec.",
"2.1).",
"The syntactic features of the non-terminal nodes are summarized by Phrase Type (PT) labels (S, N, NP, V, VP, PP, etc.",
"with their conventional meanings) and part-ofspeech (not shown).",
"Other features on the edges are syntactico-semantic categories: T (target, the word(s) that evokes the frame), RelC (relative clause), Ant.",
"(antecedent of relative clause), Head (syntactic and semantic head), Sem H (semantic head), and Supp (support, a syntactic head).",
"Applications of FrameNet data as a graph The ability to separate syntactic and semantic dependency is potentially of use in many tasks involving FrameNet data, including automatic semantic role labeling (ASRL), inferencing, language generation, and cross-linguistic comparison.",
"Because of the clear representation of syntactic and semantic dependency in the graph (displayed in Fig.",
"2 by vertical position, arrow direction, and non-local edges), many tasks should be able to use the graph even without special processing for the subtypes of edges, e.g.",
"for relative clauses as seen under NP[3].",
"To find out the overall meaning of this sentence, one can start from the \"S\" node and follow the edges marked \"Head\" or \"Sem H\" to the two instances of the Judgement frame.",
"From there, the application can drill further down as needed, into the frame hierarchy, the semantic type hierarchy, or the fillers of the frame roles.",
"One task in particular that could use the full power of such graphs is automatic semantic role labeling (ASRL).",
"The high cost of expert semantic annotation has spurred interest in building ASRL systems.",
"Much of this has been based on the Prop-Bank [Palmer et al., 2005] style of annotation, but work on Frame Semantic role labelers has continued, with increasing success (Das et al.",
"[2014], Roth and Lapata [2015]). (Figure 3: Frame Semantic Annotation of Equivalent Japanese Sentence.) These systems generally reflect the effort those researchers have made to understand the FrameNet data in depth, including dependencies between semantic roles within a frame, propagation of semantic types across frames, and dependencies between syntax and semantics in a specific sentence.",
"When Frame Element annotation is treated simply as independent tags for machine learning (even if syntactic information is imported from other sources), the learning algorithms are starved of the information needed to make smarter generalizations about the large proportion of the syntactic information about each lexical unit that is predictable from other lexical units in the frame, other related frames, or structures of the language as a whole, such as passivization and relative clause structure.",
"The current distribution format of the FrameNet data does not make this clear.",
"Since FrameNet data is basically discrete and categorial, treating it as an interlocking set of graphs should enable better use of all the information, explicit and implicit, in FrameNet.",
"Multilingual FrameNet The development of the FrameNet resource at ICSI has inspired the creation of a number of Frame Semantics-based projects for other languages: efforts on Spanish, German, Japanese, Chinese, Swedish, Brazilian Portuguese, and French have all received substantial funding, primarily from their national or provincial governments.",
"The basic research question is: to what extent are the semantic frames universal and to what extent are they language-specific?",
"Even if equivalent frames exist in two languages, how much of the frame structure will be preserved in translation?",
"If a different frame is used, is it a near neighbor via frame relations in one or both of the languages?",
"These questions have also been discussed by, e.g.",
"Boas [2009], Čulo [2013], and Čulo and de Melo [2012].",
"The sentence in Fig.",
"2 is part of an experiment in annotation of parallel texts; TED talks were chosen because translations are freely available in all of these languages.",
"The TED talk translations are done by volunteers, so they may not be of professional quality, but this is a common situation on the web today, which NLP research has to deal with.",
"In general the TED talk translations tend to be fairly \"literal\", so we would expect that the frames would be very similar across languages.",
"However, frame differences occur even here.",
"E.g., in the graph of the Japanese translation of this sentence (shown in Fig.",
"3) , the first conjunct has the Judgement frame like the English, but the second instance of Judgement in English is translated by the frame Labeling in Japanese.",
"Here the agent of the labeling is the school, presumably metonymic for either the faculty, the students, or both.",
"Thus, the graph representation of the FrameNet data helps to make clear which parts of the sentences to compare across languages.",
"We hope that ultimately such comparisons will lead to graph-based MT systems that can transfer meaning at a deeper level.",
"One of the goals of the Multilingual FrameNet project is to quantify the patterns of frame occurrence across varied languages.",
"The new annotation of parallel texts has just begun, so the number of instances of frames is still small, but we can report some suggestive results based on comparing the annotation of verbs of motion in two texts.",
"One is the TED talk, where we have annotation for English and Brazilian Portuguese; the other is a chapter of the Sherlock Holmes story \"The Hound of the Baskervilles\", translated by professional translators, where we compare annotation in English and Spanish.",
"(We did some annotation previously on these texts in English, Spanish, Japanese and German, but not Portuguese.)",
"Table 1: Frame similarity and difference across parallel texts.",
"Name | Lang | Same | Partial | Diff. | Tot.",
"TED | EN-PT | 38 | 4 | 22 | 64; Hound | EN-ES | 33 | 3 | 23 | 59. Table 1 gives the counts for instances of verbs of motion in two texts, showing cases where the aligned verbs are the same or different across languages.",
"We had hypothesized that the professional, literary translations of the \"Hound\" text would have more cross-linguistic differences, while the volunteer translations of the TED talks would be more often frame-preserving.",
"The counts shown here conform to that expectation, but the differences are not conclusive.",
"Conclusion FrameNet data is extremely rich, but not usually presented in a form that is easy for use in NLP.",
"There are clear advantages to viewing the FrameNet annotation data as a graph that separates out entities (nodes) from relations (edges) and clarifies which information is semantic, syntactic, or both.",
"The semantic information can be cleanly integrated with FrameNet's already elaborate graph of frames and semantic types, while generalizations over syntactic information should enable improved use of FrameNet annotation in ASRL training and cross-linguistic comparison.",
"Acknowledgements"
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"5",
"6"
],
"paper_header_content": [
"Overview",
"Frame Semantics and English FrameNet",
"Frame Semantic and Construction",
"Applications of FrameNet data as a graph",
"Multilingual FrameNet",
"Conclusion"
]
}
|
GEM-SciDuet-train-49#paper-1078#slide-7
|
FN Annotation Annotators view
|
The FrameNet lexical database as a set of graphs FN annotation Sentences Conclusions References
|
The FrameNet lexical database as a set of graphs FN annotation Sentences Conclusions References
|
[] |
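The traversal the paper text above describes (start from the "S" node and follow only the "Head" / "Sem H" edges down to the frame instances that carry the overall meaning) can be sketched in plain Python. The adjacency encoding, node names, and `FRAME_NODES` set below are illustrative assumptions for the Fig. 2 sentence, not the actual FrameNet release format.

```python
# Hypothetical, simplified encoding of the sentence graph in Fig. 2:
# each node maps to a list of (edge_label, child) pairs. Node and edge
# names are illustrative assumptions, not the FrameNet data format.
GRAPH = {
    "S": [("Head", "VP[1]"), ("Sem H", "Judgement[1]"), ("Sem H", "Judgement[2]")],
    "VP[1]": [("Head", "valued")],
    "Judgement[1]": [("T", "valued"), ("Evaluee", "NP[3]")],
    "Judgement[2]": [("T", "stigmatized"), ("Evaluee", "NP[3]")],
    "NP[3]": [("Head", "thing")],
}
FRAME_NODES = {"Judgement[1]", "Judgement[2]"}  # instances of the Judgement frame

def semantic_heads(graph, start="S", labels=("Head", "Sem H")):
    """Follow only Head / Sem H edges from the root, collecting every
    frame-instance node reached along the way."""
    found, stack = set(), [start]
    while stack:
        node = stack.pop()
        for label, child in graph.get(node, ()):
            if label in labels:
                if child in FRAME_NODES:
                    found.add(child)
                stack.append(child)
    return sorted(found)
```

From the resulting frame instances, an application could then drill further down into the frame hierarchy, the semantic type hierarchy, or the role fillers, as the text suggests.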
GEM-SciDuet-train-49#paper-1078#slide-8
|
1078
|
Graph Methods for Multilingual FrameNets
|
This paper introduces a new, graph-based view of the data of the FrameNet project, which we hope will make it easier to understand the mixture of semantic and syntactic information contained in FrameNet annotation. We show how English FrameNet and other Frame Semantic resources can be represented as sets of interconnected graphs of frames, frame elements, semantic types, and annotated instances of them in text. We display examples of the new graphical representation based on the annotations, which combine Frame Semantics and Construction Grammar, thus capturing most of the syntax and semantics of each sentence. We consider how graph theory could help researchers to make better use of FrameNet data for tasks such as automatic Frame Semantic role labeling, paraphrasing, and translation. Finally, we describe the development of FrameNet-like lexical resources for other languages in the current Multilingual FrameNet project, which seeks to discover cross-lingual alignments, both in the lexicon (for frames and lexical units within frames) and across parallel or comparable texts. We conclude with an example showing graphically the semantic and syntactic similarities and differences between parallel sentences in English and Japanese. We will release software for displaying such graphs from the current data releases.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89
],
"paper_content_text": [
"Overview In this paper, we provide a new graph-based display of FrameNet annotation, which we hope will make the complex data model of FrameNet more accessible to a variety of users.",
"We begin with a brief introduction to the Frame Semantics and the FrameNet project and their underlying graph structures.",
"Section 3 illustrates how annotation maps words in sentences to nodes in FrameNet, showing the structure of a sentence in the new graph representation.",
"Sect.",
"4 discusses how the graph representation could help NLP developers, particularly w.r.t.",
"automatic semantic role labeling.",
"In Sect.",
"5, we introduce the Multilingual FrameNet project, and what comparisons of frame structures across languages might reveal by way of another example sentence in the new format, then discuss our conclusions and acknowledge support for our work.",
"Frame Semantics and English FrameNet The FrameNet Project [Baker, 2010, Ruppenhofer et al., 2016] at the International Computer Science Institute (ICSI) is an ongoing project to produce a lexicon of English that is both human- and machine-readable, based on the theory of Frame Semantics developed by Charles Fillmore and colleagues [Fillmore, 1997] and supported by annotating corpus examples of the lexical items.",
"Although FrameNet (FN) is a lexical resource, it is organized not around words, but rather the roughly 1,200 semantic frames [Fillmore, 1976] : characterizations of events, relations, states and entities which are the conceptual basis for understanding the word senses, called lexical units (LUs).",
"Frames are distinguished by the set of roles involved, known as frame elements (FEs).",
"Defining individual lexical units relative to semantic frames provides a crucial level of generalization for their meaning and use.",
"Much of the information in FN is derived from the more than 200,000 manually annotated corpus sentences; annotators not only mark the target word which evokes the frame, but also those phrases which are syntactically related to the target word and express its frame elements.",
"FN covers roughly 13,500 LUs, and provides very rich syntagmatic information about the combinatorial possibilities of each LU.",
"Each frame averages about 10 frame elements, and the same frame can be evoked by words (or multiword expressions) of any part of speech.",
"FrameNet frames are connected by eight types of relations, including full inheritance (IS-A relation) in which all core FEs are inherited, weaker forms of inheritance (called Using and Perspective on), and relations between statives, inchoatives, and causatives.",
"Most frames are linked in a single large lattice (analyzed in Valverde-Albacete [2008] ).",
"The full graph is difficult to render, but can be browsed at https://framenet.icsi.berkeley.",
"edu/fndrupal/FrameGrapher FrameNet also has a small hierarchy of semantic types which can be marked on Frames, FEs and LUs; a portion is shown in Fig.",
"1 .",
"Many of the semantic types in FrameNet are similar to nodes in widely used ontologies, but they are limited to those which are linguistically important; for example, most agent FEs (not only those called \"Agent\", but all those descended from the AGENT FE in the high-level frame Intentionally act) have the semantic type SENTIENT (Non-sentient actants receive the FE CAUSE).",
"Some semantic types add information which cross-cuts the frame hierarchy.",
"(Footnote 1: Matching FE semantic types to fillers is complicated by phenomena",
"such as metonymy (The White House announced today ...)",
"and personification (She still runs good, but eventually she'll need new tires.),",
"which are not fully addressed in FN.)",
"E.g., POSITIVE JUDGEMENT and NEGATIVE JUDGEMENT are used to separate those LUs in the frames Judgement, Judgement communication and Judgement direct address that have positive affect from those with negative affect.",
"Frame Semantic and Construction Grammar representation of sentence meaning The development of Frame Semantics has gone hand in hand with the development of Construction Grammar, by Fillmore and a wide range of colleagues (Michaelis [2010] , Feldman et al.",
"[2010] ).",
"FrameNet annotators not only mark which spans of the corpus sentences instantiate which Frame Elements, but also the phrase type (PT) of the constituent that covers that span 2 and the grammatical function (GF, a.k.a.",
"grammatical relation) between that constituent and the target instance of the lexical unit as a coextensive set of spans on three annotation \"layers\".",
"Additional information is added on other \"layers\" indicating the presence of copulas and other support verbs, the antecedents of relative clauses, etc.",
"This syntactic information, based on Construction Grammar, can be combined with the FE labels to form a joint syntactico-semantic representation of much of the meaning of a sentence.",
"In graph terms, the annotation process creates a mapping between the string of characters in the sentence and (1) nodes representing frame elements in the frame hierarchy and (2) nodes representing parts of constructions in the Construction Grammar hierarchy.",
"We illustrate this with an example sentence extracted from a TED talk entitled \"Do schools kill creativity?\"",
"by Ken Robinson 3 : The thing they were good at in school wasn't valued, or was actually stigmatized.",
"The graph representation derived from FrameNet annotation is shown in Fig.",
"2 .",
"4 In this figure, the nodes of the graph are syntactico-semantic entities (solid borders) or semantic entities (dotted borders) and the words of the sentence are the terminal nodes of the graph (in boxes).",
"Each edge specifies the relationship between nodes, solid black for syntactico-semantic Though not shown in this graph, each frame instance is also linked to the frame hierarchy graph (Sec.",
"2).",
"The edges descending from the frames semantically represent the relations described by Frame Elements in the same hierarchy.",
"The dotted lines pointing to dotted nodes are links into the semantic type hierarchy (Sec.",
"2.1).",
"The syntactic features of the non-terminal nodes are summarized by Phrase Type (PT) labels (S, N, NP, V, VP, PP, etc.",
"with their conventional meanings) and part-ofspeech (not shown).",
"Other features on the edges are syntactico-semantic categories: T (target, the word(s) that evokes the frame), RelC (relative clause), Ant.",
"(antecedent of relative clause), Head (syntactic and semantic head), Sem H (semantic head), and Supp (support, a syntactic head).",
"Applications of FrameNet data as a graph The ability to separate syntactic and semantic dependency is potentially of use in many tasks involving FrameNet data, including automatic semantic role labeling (ASRL), inferencing, language generation, and cross-linguistic comparison.",
"Because of the clear representation of syntactic and semantic dependency in the graph (displayed in Fig.",
"2 by vertical position, arrow direction, and non-local edges), many tasks should be able to use the graph even without special processing for the subtypes of edges, e.g.",
"for relative clauses as seen under NP[3].",
"To find out the overall meaning of this sentence, one can start from the \"S\" node and follow the edges marked \"Head\" or \"Sem H\" to the two instances of the Judgement frame.",
"From there, the application can drill further down as needed, into the frame hierarchy, the semantic type hierarchy, or the fillers of the frame roles.",
"One task in particular that could use the full power of such graphs is automatic semantic role labeling (ASRL).",
"The high cost of expert semantic annotation has spurred interest in building ASRL systems.",
"Much of this has been based on the PropBank [Palmer et al., 2005] style of annotation, but work on Frame Semantic role labelers has continued, with increasing success (Das et al.",
"[2014], Roth and Lapata [2015]). (Figure 3: Frame Semantic Annotation of Equivalent Japanese Sentence.) These systems generally reflect the effort those researchers have made to understand the FrameNet data in depth, including dependencies between semantic roles within a frame, propagation of semantic types across frames, and dependencies between syntax and semantics in a specific sentence.",
"When Frame Element annotation is treated simply as independent tags for machine learning (even if syntactic information is imported from other sources), the learning algorithms are starved of the information needed to make smarter generalizations about the large proportion of the syntactic information about each lexical unit that is predictable from other lexical units in the frame, other related frames, or structures of the language as a whole, such as passivization and relative clause structure.",
"The current distribution format of the FrameNet data does not make this clear.",
"Since FrameNet data is basically discrete and categorial, treating it as an interlocking set of graphs should enable better use of all the information, explicit and implicit, in FrameNet.",
"Multilingual FrameNet The development of the FrameNet resource at ICSI has inspired the creation of a number of Frame Semantics-based projects for other languages: efforts on Spanish, German, Japanese, Chinese, Swedish, Brazilian Portuguese, and French have all received substantial funding, primarily from their national or provincial governments.",
"The basic research question is: to what extent are the semantic frames universal and to what extent are they language-specific?",
"Even if equivalent frames exist in two languages, how much of the frame structure will be preserved in translation?",
"If a different frame is used, is it a near neighbor via frame relations in one or both of the languages?",
"These questions have also been discussed by, e.g.",
"Boas [2009], Čulo [2013], and Čulo and de Melo [2012].",
"The sentence in Fig.",
"2 is part of an experiment in annotation of parallel texts; TED talks were chosen because translations are freely available in all of these languages.",
"The TED talk translations are done by volunteers, so they may not be of professional quality, but this is a common situation on the web today, which NLP research has to deal with.",
"In general the TED talk translations tend to be fairly \"literal\", so we would expect that the frames would be very similar across languages.",
"However, frame differences occur even here.",
"E.g., in the graph of the Japanese translation of this sentence (shown in Fig.",
"3) , the first conjunct has the Judgement frame like the English, but the second instance of Judgement in English is translated by the frame Labeling in Japanese.",
"Here the agent of the labeling is the school, presumably metonymic for either the faculty, the students, or both.",
"Thus, the graph representation of the FrameNet data helps to make clear which parts of the sentences to compare across languages.",
"We hope that ultimately such comparisons will lead to graph-based MT systems that can transfer meaning at a deeper level.",
"One of the goals of the Multilingual FrameNet project is to quantify the patterns of frame occurrence across varied languages.",
"The new annotation of parallel texts has just begun, so the number of instances of frames is still small, but we can report some suggestive results based on comparing the annotation of verbs of motion in two texts.",
"One is the TED talk, where we have annotation for English and Brazilian Portuguese; the other is a chapter of the Sherlock Holmes story \"The Hound of the Baskervilles\", translated by professional translators, where we compare annotation in English and Spanish.",
"(We did some annotation previously on these texts in English, Spanish, Japanese and German, but not Portuguese.)",
"Table 1: Frame similarity and difference across parallel texts.",
"Name | Lang | Same | Partial | Diff. | Tot.",
"TED | EN-PT | 38 | 4 | 22 | 64; Hound | EN-ES | 33 | 3 | 23 | 59. Table 1 gives the counts for instances of verbs of motion in two texts, showing cases where the aligned verbs are the same or different across languages.",
"We had hypothesized that the professional, literary translations of the \"Hound\" text would have more cross-linguistic differences, while the volunteer translations of the TED talks would be more often frame-preserving.",
"The counts shown here conform to that expectation, but the differences are not conclusive.",
"Conclusion FrameNet data is extremely rich, but not usually presented in a form that is easy for use in NLP.",
"There are clear advantages to viewing the FrameNet annotation data as a graph that separates out entities (nodes) from relations (edges) and clarifies which information is semantic, syntactic, or both.",
"The semantic information can be cleanly integrated with FrameNet's already elaborate graph of frames and semantic types, while generalizations over syntactic information should enable improved use of FrameNet annotation in ASRL training and cross-linguistic comparison.",
"Acknowledgements"
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"5",
"6"
],
"paper_header_content": [
"Overview",
"Frame Semantics and English FrameNet",
"Frame Semantic and Construction",
"Applications of FrameNet data as a graph",
"Multilingual FrameNet",
"Conclusion"
]
}
|
GEM-SciDuet-train-49#paper-1078#slide-8
|
Grammatical Function Phrase Type and Other layers
|
The FrameNet lexical database as a set of graphs FN annotation Sentences Conclusions References
Construction Grammar is presupposed in FN syntactic analysis, but not fully explicit in the annotation.
NP, VPto, AdjP, etc.
|
The FrameNet lexical database as a set of graphs FN annotation Sentences Conclusions References
Construction Grammar is presupposed in FN syntactic analysis, but not fully explicit in the annotation.
NP, VPto, AdjP, etc.
|
[] |
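The Table 1 counts in the record above can be turned into the comparison the authors describe: the share of aligned motion verbs that keep the same frame across languages. A minimal sketch; the counts are taken directly from the table, and the dictionary layout is an illustrative assumption.

```python
# Counts of aligned verbs of motion from Table 1 of the paper.
TABLE_1 = {
    "TED EN-PT":   {"same": 38, "partial": 4, "diff": 22},  # total 64
    "Hound EN-ES": {"same": 33, "partial": 3, "diff": 23},  # total 59
}

def same_frame_share(row):
    """Fraction of aligned verbs annotated with the same frame in both languages."""
    return row["same"] / sum(row.values())

# Per the paper's hypothesis, the professional "Hound" translation should
# diverge more (lower same-frame share) than the volunteer TED translation.
shares = {name: same_frame_share(row) for name, row in TABLE_1.items()}
```

The computed shares match the paper's cautious reading: the difference points in the hypothesized direction but, with so few instances, is not conclusive.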
GEM-SciDuet-train-49#paper-1078#slide-9
|
1078
|
Graph Methods for Multilingual FrameNets
|
This paper introduces a new, graph-based view of the data of the FrameNet project, which we hope will make it easier to understand the mixture of semantic and syntactic information contained in FrameNet annotation. We show how English FrameNet and other Frame Semantic resources can be represented as sets of interconnected graphs of frames, frame elements, semantic types, and annotated instances of them in text. We display examples of the new graphical representation based on the annotations, which combine Frame Semantics and Construction Grammar, thus capturing most of the syntax and semantics of each sentence. We consider how graph theory could help researchers to make better use of FrameNet data for tasks such as automatic Frame Semantic role labeling, paraphrasing, and translation. Finally, we describe the development of FrameNet-like lexical resources for other languages in the current Multilingual FrameNet project, which seeks to discover cross-lingual alignments, both in the lexicon (for frames and lexical units within frames) and across parallel or comparable texts. We conclude with an example showing graphically the semantic and syntactic similarities and differences between parallel sentences in English and Japanese. We will release software for displaying such graphs from the current data releases.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89
],
"paper_content_text": [
"Overview In this paper, we provide a new graph-based display of FrameNet annotation, which we hope will make the complex data model of FrameNet more accessible to a variety of users.",
"We begin with a brief introduction to the Frame Semantics and the FrameNet project and their underlying graph structures.",
"Section 3 illustrates how annotation maps words in sentences to nodes in FrameNet, showing the structure of a sentence in the new graph representation.",
"Sect.",
"4 discusses how the graph representation could help NLP developers, particularly w.r.t.",
"automatic semantic role labeling.",
"In Sect.",
"5, we introduce the Multilingual FrameNet project, and what comparisons of frame structures across languages might reveal by way of another example sentence in the new format, then discuss our conclusions and acknowledge support for our work.",
"Frame Semantics and English FrameNet The FrameNet Project [Baker, 2010, Ruppenhofer et al., 2016] at the International Computer Science Institute (ICSI) is an ongoing project to produce a lexicon of English that is both human- and machine-readable, based on the theory of Frame Semantics developed by Charles Fillmore and colleagues [Fillmore, 1997] and supported by annotating corpus examples of the lexical items.",
"Although FrameNet (FN) is a lexical resource, it is organized not around words, but rather the roughly 1,200 semantic frames [Fillmore, 1976] : characterizations of events, relations, states and entities which are the conceptual basis for understanding the word senses, called lexical units (LUs).",
"Frames are distinguished by the set of roles involved, known as frame elements (FEs).",
"Defining individual lexical units relative to semantic frames provides a crucial level of generalization for their meaning and use.",
"Much of the information in FN is derived from the more than 200,000 manually annotated corpus sentences; annotators not only mark the target word which evokes the frame, but also those phrases which are syntactically related to the target word and express its frame elements.",
"FN covers roughly 13,500 LUs, and provides very rich syntagmatic information about the combinatorial possibilities of each LU.",
"Each frame averages about 10 frame elements, and the same frame can be evoked by words (or multiword expressions) of any part of speech.",
"FrameNet frames are connected by eight types of relations, including full inheritance (IS-A relation) in which all core FEs are inherited, weaker forms of inheritance (called Using and Perspective on), and relations between statives, inchoatives, and causatives.",
"Most frames are linked in a single large lattice (analyzed in Valverde-Albacete [2008] ).",
"The full graph is difficult to render, but can be browsed at https://framenet.icsi.berkeley.",
"edu/fndrupal/FrameGrapher FrameNet also has a small hierarchy of semantic types which can be marked on Frames, FEs and LUs; a portion is shown in Fig.",
"1 .",
"Many of the semantic types in FrameNet are similar to nodes in widely used ontologies, but they are limited to those which are linguistically important; for example, most agent FEs (not only those called \"Agent\", but all those descended from the AGENT FE in the high-level frame Intentionally act) have the semantic type SENTIENT (Non-sentient actants receive the FE CAUSE).",
"Some semantic types add information which cross-cuts the frame hierarchy.",
"(Footnote 1: Matching FE semantic types to fillers is complicated by phenomena",
"such as metonymy (The White House announced today ...)",
"and personification (She still runs good, but eventually she'll need new tires.),",
"which are not fully addressed in FN.)",
"E.g., POSITIVE JUDGEMENT and NEGATIVE JUDGEMENT are used to separate those LUs in the frames Judgement, Judgement communication and Judgement direct address that have positive affect from those with negative affect.",
"Frame Semantic and Construction Grammar representation of sentence meaning The development of Frame Semantics has gone hand in hand with the development of Construction Grammar, by Fillmore and a wide range of colleagues (Michaelis [2010] , Feldman et al.",
"[2010] ).",
"FrameNet annotators not only mark which spans of the corpus sentences instantiate which Frame Elements, but also the phrase type (PT) of the constituent that covers that span 2 and the grammatical function (GF, a.k.a.",
"grammatical relation) between that constituent and the target instance of the lexical unit as a coextensive set of spans on three annotation \"layers\".",
"Additional information is added on other \"layers\" indicating the presence of copulas and other support verbs, the antecedents of relative clauses, etc.",
"This syntactic information, based on Construction Grammar, can be combined with the FE labels to form a joint syntactico-semantic representation of much of the meaning of a sentence.",
"In graph terms, the annotation process creates a mapping between the string of characters in the sentence and (1) nodes representing frame elements in the frame hierarchy and (2) nodes representing parts of constructions in the Construction Grammar hierarchy.",
"We illustrate this with an example sentence extracted from a TED talk entitled \"Do schools kill creativity?\"",
"by Ken Robinson 3 : The thing they were good at in school wasn't valued, or was actually stigmatized.",
"The graph representation derived from FrameNet annotation is shown in Fig.",
"2 .",
"4 In this figure, the nodes of the graph are syntactico-semantic entities (solid borders) or semantic entities (dotted borders) and the words of the sentence are the terminal nodes of the graph (in boxes).",
"Each edge specifies the relationship between nodes, solid black for syntactico-semantic Though not shown in this graph, each frame instance is also linked to the frame hierarchy graph (Sec.",
"2).",
"The edges descending from the frames semantically represent the relations described by Frame Elements in the same hierarchy.",
"The dotted lines pointing to dotted nodes are links into the semantic type hierarchy (Sec.",
"2.1).",
"The syntactic features of the non-terminal nodes are summarized by Phrase Type (PT) labels (S, N, NP, V, VP, PP, etc.",
"with their conventional meanings) and part-ofspeech (not shown).",
"Other features on the edges are syntactico-semantic categories: T (target, the word(s) that evokes the frame), RelC (relative clause), Ant.",
"(antecedent of relative clause), Head (syntactic and semantic head), Sem H (semantic head), and Supp (support, a syntactic head).",
"Applications of FrameNet data as a graph The ability to separate syntactic and semantic dependency is potentially of use in many tasks involving FrameNet data, including automatic semantic role labeling (ASRL), inferencing, language generation, and cross-linguistic comparison.",
"Because of the clear representation of syntactic and semantic dependency in the graph (displayed in Fig.",
"2 by vertical position, arrow direction, and non-local edges), many tasks should be able to use the graph even without special processing for the subtypes of edges, e.g.",
"for relative clauses as seen under NP[3].",
"To find out the overall meaning of this sentence, one can start from the \"S\" node and follow the edges marked \"Head\" or \"Sem H\" to the two instances of the Judgement frame.",
"From there, the application can drill further down as needed, into the frame hierarchy, the semantic type hierarchy, or the fillers of the frame roles.",
"One task in particular that could use the full power of such graphs is automatic semantic role labeling (ASRL).",
"The high cost of expert semantic annotation has spurred interest in building ASRL systems.",
"Much of this has been based on the PropBank [Palmer et al., 2005] style of annotation, but work on Frame Semantic role labelers has continued, with increasing success (Das et al.",
"[2014], Roth and Lapata [2015]). (Figure 3: Frame Semantic Annotation of Equivalent Japanese Sentence.) These systems generally reflect the effort those researchers have made to understand the FrameNet data in depth, including dependencies between semantic roles within a frame, propagation of semantic types across frames, and dependencies between syntax and semantics in a specific sentence.",
"When Frame Element annotation is treated simply as independent tags for machine learning (even if syntactic information is imported from other sources), the learning algorithms are starved of the information needed to make smarter generalizations about the large proportion of the syntactic information about each lexical unit that is predictable from other lexical units in the frame, other related frames, or structures of the language as a whole, such as passivization and relative clause structure.",
"The current distribution format of the FrameNet data does not make this clear.",
"Since FrameNet data is basically discrete and categorial, treating it as an interlocking set of graphs should enable better use of all the information, explicit and implicit, in FrameNet.",
"Multilingual FrameNet The development of the FrameNet resource at ICSI has inspired the creation of a number of Frame Semantics-based projects for other languages: efforts on Spanish, German, Japanese, Chinese, Swedish, Brazilian Portuguese, and French have all received substantial funding, primarily from their national or provincial governments.",
"The basic research question is: to what extent are the semantic frames universal and to what extent are they language-specific?",
"Even if equivalent frames exist in two languages, how much of the frame structure will be preserved in translation?",
"If a different frame is used, is it a near neighbor via frame relations in one or both of the languages?",
"These questions have also been discussed by, e.g.",
"Boas [2009], Čulo [2013], and Čulo and de Melo [2012].",
"The sentence in Fig.",
"2 is part of an experiment in annotation of parallel texts; TED talks were chosen because translations are freely available in all of these languages.",
"The TED talk translations are done by volunteers, so they may not be of professional quality, but this is a common situation on the web today, which NLP research has to deal with.",
"In general the TED talk translations tend to be fairly \"literal\", so we would expect that the frames would be very similar across languages.",
"However, frame differences occur even here.",
"E.g., in the graph of the Japanese translation of this sentence (shown in Fig.",
"3) , the first conjunct has the Judgement frame like the English, but the second instance of Judgement in English is translated by the frame Labeling in Japanese.",
"Here the agent of the labeling is the school, presumably metonymic for either the faculty, the students, or both.",
"5 Thus, the graph representation of the FrameNet data helps to make clear which parts of the sentences to compare across languages.",
"We hope that ultimately such comparisons will lead to graph-based MT systems that can transfer meaning at a deeper level.",
"One of the goals of the Multilingual FrameNet project is to quantify the patterns of frame occurrence across varied languages.",
"The new annotation of parallel texts has just begun, so the number of instances of frames is still small, but we can report some suggestive results based on comparing the annotation of verbs of motion in two texts.",
"One is the TED talk, where we have annotation for English and Brazilian Portuguese; the other is a chapter of the Sherlock Holmes story \"The Hound of the Baskervilles\", translated by professional translators, where we compare annotation in English and Spanish.",
"(We did some annotation previously on these texts in English, Spanish, Japanese and German, but not Portuguese.)",
"Name Lang Same Partial Diff.",
"Tot.",
"TED EN-PT: 38 same, 4 partial, 22 different, 64 total; Hound EN-ES: 33 same, 3 partial, 23 different, 59 total. Table 1 (Frame similarity and difference across parallel texts) gives the counts for instances of verbs of motion in two texts, showing cases where the aligned verbs are the same or different across languages.",
"We had hypothesized that the professional, literary translations of the \"Hound\" text would have more cross-linguistic differences, while the volunteer translations of the TED talks would be more often frame-preserving.",
"The counts shown here conform to that expectation, but the differences are not conclusive.",
"Conclusion FrameNet data is extremely rich, but not usually presented in a form that is easy to use in NLP.",
"There are clear advantages to viewing the FrameNet annotation data as a graph that separates out entities (nodes) from relations (edges) and clarifies which information is semantic, syntactic, or both.",
"The semantic information can be cleanly integrated with FrameNet's already elaborate graph of frames and semantic types, while generalizations over syntactic information should enable improved use of FrameNet annotation in ASRL training and cross-linguistic comparison.",
"Acknowledgements"
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"5",
"6"
],
"paper_header_content": [
"Overview",
"Frame Semantics and English FrameNet",
"Frame Semantic and Construction",
"Applications of FrameNet data as a graph",
"Multilingual FrameNet",
"Conclusion"
]
}
|
GEM-SciDuet-train-49#paper-1078#slide-9
|
An English sentence for analysis
|
The FrameNet lexical database as a set of graphs FN annotation Sentences Conclusions References
We will be looking at (a clause from) a sentence from a TED talk by Ken Robinson: Do Schools Kill Creativity?:
The thing they were good at at school was not valued or was actually stigmatized.
|
The FrameNet lexical database as a set of graphs FN annotation Sentences Conclusions References
We will be looking at (a clause from) a sentence from a TED talk by Ken Robinson: Do Schools Kill Creativity?:
The thing they were good at at school was not valued or was actually stigmatized.
|
[] |
GEM-SciDuet-train-49#paper-1078#slide-10
|
1078
|
Graph Methods for Multilingual FrameNets
|
This paper introduces a new, graph-based view of the data of the FrameNet project, which we hope will make it easier to understand the mixture of semantic and syntactic information contained in FrameNet annotation. We show how English FrameNet and other Frame Semantic resources can be represented as sets of interconnected graphs of frames, frame elements, semantic types, and annotated instances of them in text. We display examples of the new graphical representation based on the annotations, which combine Frame Semantics and Construction Grammar, thus capturing most of the syntax and semantics of each sentence. We consider how graph theory could help researchers to make better use of FrameNet data for tasks such as automatic Frame Semantic role labeling, paraphrasing, and translation. Finally, we describe the development of FrameNet-like lexical resources for other languages in the current Multilingual FrameNet project, which seeks to discover cross-lingual alignments, both in the lexicon (for frames and lexical units within frames) and across parallel or comparable texts. We conclude with an example showing graphically the semantic and syntactic similarities and differences between parallel sentences in English and Japanese. We will release software for displaying such graphs from the current data releases.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89
],
"paper_content_text": [
"Overview In this paper, we provide a new graph-based display of FrameNet annotation, which we hope will make the complex data model of FrameNet more accessible to a variety of users.",
"We begin with a brief introduction to the Frame Semantics and the FrameNet project and their underlying graph structures.",
"Section 3 illustrates how annotation maps words in sentences to nodes in FrameNet, showing the structure of a sentence in the new graph representation.",
"Sect.",
"4 discusses how the graph representation could help NLP developers, particularly w.r.t.",
"automatic semantic role labeling.",
"In Sect.",
"5, we introduce the Multilingual FrameNet project, and what comparisons of frame structures across languages might reveal by way of another example sentence in the new format, then discuss our conclusions and acknowledge support for our work.",
"Frame Semantics and English FrameNet The FrameNet Project [Baker, 2010, Ruppenhofer et al., 2016] at the International Computer Science Institute (ICSI) is an ongoing project to produce a lexicon of English that is both human- and machine-readable, based on the theory of Frame Semantics developed by Charles Fillmore and colleagues [Fillmore, 1997] and supported by annotating corpus examples of the lexical items.",
"Although FrameNet (FN) is a lexical resource, it is organized not around words, but rather the roughly 1,200 semantic frames [Fillmore, 1976] : characterizations of events, relations, states and entities which are the conceptual basis for understanding the word senses, called lexical units (LUs).",
"Frames are distinguished by the set of roles involved, known as frame elements (FEs).",
"Defining individual lexical units relative to semantic frames provides a crucial level of generalization for their meaning and use.",
"Much of the information in FN is derived from the more than 200,000 manually annotated corpus sentences; annotators not only mark the target word which evokes the frame, but also those phrases which are syntactically related to the target word and express its frame elements.",
"FN covers roughly 13,500 LUs, and provides very rich syntagmatic information about the combinatorial possibilities of each LU.",
"Each frame averages about 10 frame elements, and the same frame can be evoked by words (or multiword expressions) of any part of speech.",
"FrameNet frames are connected by eight types of relations, including full inheritance (IS-A relation) in which all core FEs are inherited, weaker forms of inheritance (called Using and Perspective on), and relations between statives, inchoatives, and causatives.",
"Most frames are linked in a single large lattice (analyzed in Valverde-Albacete [2008] ).",
"The full graph is difficult to render, but can be browsed at https://framenet.icsi.berkeley.edu/fndrupal/FrameGrapher",
"FrameNet also has a small hierarchy of semantic types which can be marked on Frames, FEs and LUs; a portion is shown in Fig.",
"1 .",
"Many of the semantic types in FrameNet are similar to nodes in widely used ontologies, but they are limited to those which are linguistically important; for example, most agent FEs (not only those called \"Agent\", but all those descended from the AGENT FE in the high-level frame Intentionally act) have the semantic type SENTIENT (Non-sentient actants receive the FE CAUSE).",
"1 Some semantic 1 Matching FE semantic types to fillers is complicated by phenomena such as metonymy (The White House announced today .",
".",
". )",
"and personification (She still runs good, but eventually she'll need new tires.",
"), not fully addressed in FN.",
"types add information which cross-cuts the frame hierarchy; e.g., POSITIVE JUDGEMENT and NEGATIVE JUDGEMENT are used to separate those LUs in the frames Judgement, Judgement communication and Judgement direct address that have positive affect from those with negative affect.",
"Frame Semantic and Construction Grammar representation of sentence meaning The development of Frame Semantics has gone hand in hand with the development of Construction Grammar, by Fillmore and a wide range of colleagues (Michaelis [2010] , Feldman et al.",
"[2010] ).",
"FrameNet annotators not only mark which spans of the corpus sentences instantiate which Frame Elements, but also the phrase type (PT) of the constituent that covers that span 2 and the grammatical function (GF, a.k.a.",
"grammatical relation) between that constituent and the target instance of the lexical unit as a coextensive set of spans on three annotation \"layers\".",
"Additional information is added on other \"layers\" indicating the presence of copulas and other support verbs, the antecedents of relative clauses, etc.",
"This syntactic information, based on Construction Grammar, can be combined with the FE labels to form a joint syntactico-semantic representation of much of the meaning of a sentence.",
"In graph terms, the annotation process creates a mapping between the string of characters in the sentence and (1) nodes representing frame elements in the frame hierarchy and (2) nodes representing parts of constructions in the Construction Grammar hierarchy.",
"We illustrate this with an example sentence extracted from a TED talk entitled \"Do schools kill creativity?\"",
"by Ken Robinson 3 : The thing they were good at in school wasn't valued, or was actually stigmatized.",
"The graph representation derived from FrameNet annotation is shown in Fig.",
"2 .",
"4 In this figure, the nodes of the graph are syntactico-semantic entities (solid borders) or semantic entities (dotted borders) and the words of the sentence are the terminal nodes of the graph (in boxes).",
"Each edge specifies the relationship between nodes, solid black for syntactico-semantic relations. Though not shown in this graph, each frame instance is also linked to the frame hierarchy graph (Sec.",
"2).",
"The edges descending from the frames semantically represent the relations described by Frame Elements in the same hierarchy.",
"The dotted lines pointing to dotted nodes are links into the semantic type hierarchy (Sec.",
"2.1).",
"The syntactic features of the non-terminal nodes are summarized by Phrase Type (PT) labels (S, N, NP, V, VP, PP, etc.",
"with their conventional meanings) and part-of-speech (not shown).",
"Other features on the edges are syntactico-semantic categories: T (target, the word(s) that evokes the frame), RelC (relative clause), Ant.",
"(antecedent of relative clause), Head (syntactic and semantic head), Sem H (semantic head), and Supp (support, a syntactic head).",
"Applications of FrameNet data as a graph The ability to separate syntactic and semantic dependency is potentially of use in many tasks involving FrameNet data, including automatic semantic role labeling (ASRL), inferencing, language generation, and cross-linguistic comparison.",
"Because of the clear representation of syntactic and semantic dependency in the graph (displayed in Fig.",
"2 by vertical position, arrow direction, and non-local edges), many tasks should be able to use the graph even without special processing for the subtypes of edges, e.g.",
"for relative clauses as seen under NP[3].",
"To find out the overall meaning of this sentence, one can start from the \"S\" node and follow the edges marked \"Head\" or \"Sem H\" to the two instances of the Judgement frame.",
"From there, the application can drill further down as needed, into the frame hierarchy, the semantic type hierarchy, or the fillers of the frame roles.",
"One task in particular that could use the full power of such graphs is automatic semantic role labeling (ASRL).",
"The high cost of expert semantic annotation has spurred interest in building ASRL systems.",
"Much of this has been based on the PropBank [Palmer et al., 2005] style of annotation, but work on Frame Semantic role labelers has continued, with increasing success (Das et al.",
"[2014] , Roth and Lapata [2015] ). [Figure 3: Frame Semantic Annotation of Equivalent Japanese Sentence.] These systems generally reflect the effort those researchers have made to understand the FrameNet data in depth, including dependencies between semantic roles within a frame, propagation of semantic types across frames, and dependencies between syntax and semantics in a specific sentence.",
"When Frame Element annotation is treated simply as independent tags for machine learning (even if syntactic information is imported from other sources), the learning algorithms are starved of the information needed to make smarter generalizations about the large proportion of the syntactic information about each lexical unit that is predictable from other lexical units in the frame, other related frames, or structures of the language as a whole, such as passivization and relative clause structure.",
"The current distribution format of the FrameNet data does not make this clear.",
"Since FrameNet data is basically discrete and categorial, treating it as an interlocking set of graphs should enable better use of all the information, explicit and implicit, in FrameNet.",
"Multilingual FrameNet The development of the FrameNet resource at ICSI has inspired the creation of a number of Frame Semantics-based projects for other languages: efforts on Spanish, German, Japanese, Chinese, Swedish, Brazilian Portuguese, and French have all received substantial funding, primarily from their national or provincial governments.",
"The basic research question is: to what extent are the semantic frames universal and to what extent are they language-specific?",
"Even if equivalent frames exist in two languages, how much of the frame structure will be preserved in translation?",
"If a different frame is used, is it a near neighbor via frame relations in one or both of the languages?",
"These questions have also been discussed by, e.g.",
"Boas [2009], Čulo [2013], and Čulo and de Melo [2012].",
"The sentence in Fig.",
"2 is part of an experiment in annotation of parallel texts; TED talks were chosen because translations are freely available in all of these languages.",
"The TED talk translations are done by volunteers, so they may not be of professional quality, but this is a common situation on the web today, which NLP research has to deal with.",
"In general the TED talk translations tend to be fairly \"literal\", so we would expect that the frames would be very similar across languages.",
"However, frame differences occur even here.",
"E.g., in the graph of the Japanese translation of this sentence (shown in Fig.",
"3) , the first conjunct has the Judgement frame like the English, but the second instance of Judgement in English is translated by the frame Labeling in Japanese.",
"Here the agent of the labeling is the school, presumably metonymic for either the faculty, the students, or both.",
"5 Thus, the graph representation of the FrameNet data helps to make clear which parts of the sentences to compare across languages.",
"We hope that ultimately such comparisons will lead to graph-based MT systems that can transfer meaning at a deeper level.",
"One of the goals of the Multilingual FrameNet project is to quantify the patterns of frame occurrence across varied languages.",
"The new annotation of parallel texts has just begun, so the number of instances of frames is still small, but we can report some suggestive results based on comparing the annotation of verbs of motion in two texts.",
"One is the TED talk, where we have annotation for English and Brazilian Portuguese; the other is a chapter of the Sherlock Holmes story \"The Hound of the Baskervilles\", translated by professional translators, where we compare annotation in English and Spanish.",
"(We did some annotation previously on these texts in English, Spanish, Japanese and German, but not Portuguese.)",
"Name Lang Same Partial Diff.",
"Tot.",
"TED EN-PT: 38 same, 4 partial, 22 different, 64 total; Hound EN-ES: 33 same, 3 partial, 23 different, 59 total. Table 1 (Frame similarity and difference across parallel texts) gives the counts for instances of verbs of motion in two texts, showing cases where the aligned verbs are the same or different across languages.",
"We had hypothesized that the professional, literary translations of the \"Hound\" text would have more cross-linguistic differences, while the volunteer translations of the TED talks would be more often frame-preserving.",
"The counts shown here conform to that expectation, but the differences are not conclusive.",
"Conclusion FrameNet data is extremely rich, but not usually presented in a form that is easy to use in NLP.",
"There are clear advantages to viewing the FrameNet annotation data as a graph that separates out entities (nodes) from relations (edges) and clarifies which information is semantic, syntactic, or both.",
"The semantic information can be cleanly integrated with FrameNet's already elaborate graph of frames and semantic types, while generalizations over syntactic information should enable improved use of FrameNet annotation in ASRL training and cross-linguistic comparison.",
"Acknowledgements"
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"5",
"6"
],
"paper_header_content": [
"Overview",
"Frame Semantics and English FrameNet",
"Frame Semantic and Construction",
"Applications of FrameNet data as a graph",
"Multilingual FrameNet",
"Conclusion"
]
}
|
GEM-SciDuet-train-49#paper-1078#slide-10
|
Frame shifts in translation
|
The FrameNet lexical database as a set of graphs FN annotation Sentences Conclusions References
We examined frames in two different semantic domains, in two documents with different styles of translation:
Sherlock Holmes, The Hound of the Baskervilles
(professional, literary translation) Motion events
TED, Do Schools Kill Creativity? (volunteer, literal translation) Motion and Communication events
Source Langs Domain Same Partial Diff. Total
|
The FrameNet lexical database as a set of graphs FN annotation Sentences Conclusions References
We examined frames in two different semantic domains, in two documents with different styles of translation:
Sherlock Holmes, The Hound of the Baskervilles
(professional, literary translation) Motion events
TED, Do Schools Kill Creativity? (volunteer, literal translation) Motion and Communication events
Source Langs Domain Same Partial Diff. Total
|
[] |
GEM-SciDuet-train-49#paper-1078#slide-11
|
1078
|
Graph Methods for Multilingual FrameNets
|
This paper introduces a new, graph-based view of the data of the FrameNet project, which we hope will make it easier to understand the mixture of semantic and syntactic information contained in FrameNet annotation. We show how English FrameNet and other Frame Semantic resources can be represented as sets of interconnected graphs of frames, frame elements, semantic types, and annotated instances of them in text. We display examples of the new graphical representation based on the annotations, which combine Frame Semantics and Construction Grammar, thus capturing most of the syntax and semantics of each sentence. We consider how graph theory could help researchers to make better use of FrameNet data for tasks such as automatic Frame Semantic role labeling, paraphrasing, and translation. Finally, we describe the development of FrameNet-like lexical resources for other languages in the current Multilingual FrameNet project, which seeks to discover cross-lingual alignments, both in the lexicon (for frames and lexical units within frames) and across parallel or comparable texts. We conclude with an example showing graphically the semantic and syntactic similarities and differences between parallel sentences in English and Japanese. We will release software for displaying such graphs from the current data releases.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89
],
"paper_content_text": [
"Overview In this paper, we provide a new graph-based display of FrameNet annotation, which we hope will make the complex data model of FrameNet more accessible to a variety of users.",
"We begin with a brief introduction to the Frame Semantics and the FrameNet project and their underlying graph structures.",
"Section 3 illustrates how annotation maps words in sentences to nodes in FrameNet, showing the structure of a sentence in the new graph representation.",
"Sect.",
"4 discusses how the graph representation could help NLP developers, particularly w.r.t.",
"automatic semantic role labeling.",
"In Sect.",
"5, we introduce the Multilingual FrameNet project, and what comparisons of frame structures across languages might reveal by way of another example sentence in the new format, then discuss our conclusions and acknowledge support for our work.",
"Frame Semantics and English FrameNet The FrameNet Project [Baker, 2010, Ruppenhofer et al., 2016] at the International Computer Science Institute (ICSI) is an ongoing project to produce a lexicon of English that is both human- and machine-readable, based on the theory of Frame Semantics developed by Charles Fillmore and colleagues [Fillmore, 1997] and supported by annotating corpus examples of the lexical items.",
"Although FrameNet (FN) is a lexical resource, it is organized not around words, but rather the roughly 1,200 semantic frames [Fillmore, 1976] : characterizations of events, relations, states and entities which are the conceptual basis for understanding the word senses, called lexical units (LUs).",
"Frames are distinguished by the set of roles involved, known as frame elements (FEs).",
"Defining individual lexical units relative to semantic frames provides a crucial level of generalization for their meaning and use.",
"Much of the information in FN is derived from the more than 200,000 manually annotated corpus sentences; annotators not only mark the target word which evokes the frame, but also those phrases which are syntactically related to the target word and express its frame elements.",
"FN covers roughly 13,500 LUs, and provides very rich syntagmatic information about the combinatorial possibilities of each LU.",
"Each frame averages about 10 frame elements, and the same frame can be evoked by words (or multiword expressions) of any part of speech.",
"FrameNet frames are connected by eight types of relations, including full inheritance (IS-A relation) in which all core FEs are inherited, weaker forms of inheritance (called Using and Perspective on), and relations between statives, inchoatives, and causatives.",
"Most frames are linked in a single large lattice (analyzed in Valverde-Albacete [2008] ).",
"The full graph is difficult to render, but can be browsed at https://framenet.icsi.berkeley.edu/fndrupal/FrameGrapher",
"FrameNet also has a small hierarchy of semantic types which can be marked on Frames, FEs and LUs; a portion is shown in Fig.",
"1 .",
"Many of the semantic types in FrameNet are similar to nodes in widely used ontologies, but they are limited to those which are linguistically important; for example, most agent FEs (not only those called \"Agent\", but all those descended from the AGENT FE in the high-level frame Intentionally act) have the semantic type SENTIENT (Non-sentient actants receive the FE CAUSE).",
"1 Some semantic 1 Matching FE semantic types to fillers is complicated by phenomena such as metonymy (The White House announced today .",
".",
". )",
"and personification (She still runs good, but eventually she'll need new tires.",
"), not fully addressed in FN.",
"types add information which cross-cuts the frame hierarchy; e.g., POSITIVE JUDGEMENT and NEGATIVE JUDGEMENT are used to separate those LUs in the frames Judgement, Judgement communication and Judgement direct address that have positive affect from those with negative affect.",
"Frame Semantic and Construction Grammar representation of sentence meaning The development of Frame Semantics has gone hand in hand with the development of Construction Grammar, by Fillmore and a wide range of colleagues (Michaelis [2010] , Feldman et al.",
"[2010] ).",
"FrameNet annotators not only mark which spans of the corpus sentences instantiate which Frame Elements, but also the phrase type (PT) of the constituent that covers that span 2 and the grammatical function (GF, a.k.a.",
"grammatical relation) between that constituent and the target instance of the lexical unit as a coextensive set of spans on three annotation \"layers\".",
"Additional information is added on other \"layers\" indicating the presence of copulas and other support verbs, the antecedents of relative clauses, etc.",
"This syntactic information, based on Construction Grammar, can be combined with the FE labels to form a joint syntactico-semantic representation of much of the meaning of a sentence.",
"In graph terms, the annotation process creates a mapping between the string of characters in the sentence and (1) nodes representing frame elements in the frame hierarchy and (2) nodes representing parts of constructions in the Construction Grammar hierarchy.",
"We illustrate this with an example sentence extracted from a TED talk entitled \"Do schools kill creativity?\"",
"by Ken Robinson 3 : The thing they were good at in school wasn't valued, or was actually stigmatized.",
"The graph representation derived from FrameNet annotation is shown in Fig.",
"2 .",
"4 In this figure, the nodes of the graph are syntactico-semantic entities (solid borders) or semantic entities (dotted borders) and the words of the sentence are the terminal nodes of the graph (in boxes).",
"Each edge specifies the relationship between nodes, solid black for syntactico-semantic relations. Though not shown in this graph, each frame instance is also linked to the frame hierarchy graph (Sec.",
"2).",
"The edges descending from the frames semantically represent the relations described by Frame Elements in the same hierarchy.",
"The dotted lines pointing to dotted nodes are links into the semantic type hierarchy (Sec.",
"2.1).",
"The syntactic features of the non-terminal nodes are summarized by Phrase Type (PT) labels (S, N, NP, V, VP, PP, etc.",
"with their conventional meanings) and part-of-speech (not shown).",
"Other features on the edges are syntactico-semantic categories: T (target, the word(s) that evokes the frame), RelC (relative clause), Ant.",
"(antecedent of relative clause), Head (syntactic and semantic head), Sem H (semantic head), and Supp (support, a syntactic head).",
"Applications of FrameNet data as a graph The ability to separate syntactic and semantic dependency is potentially of use in many tasks involving FrameNet data, including automatic semantic role labeling (ASRL), inferencing, language generation, and cross-linguistic comparison.",
"Because of the clear representation of syntactic and semantic dependency in the graph (displayed in Fig.",
"2 by vertical position, arrow direction, and non-local edges), many tasks should be able to use the graph even without special processing for the subtypes of edges, e.g.",
"for relative clauses as seen under NP[3].",
"To find out the overall meaning of this sentence, one can start from the \"S\" node and follow the edges marked \"Head\" or \"Sem H\" to the two instances of the Judgement frame.",
"From there, the application can drill further down as needed, into the frame hierarchy, the semantic type hierarchy, or the fillers of the frame roles.",
"One task in particular that could use the full power of such graphs is automatic semantic role labeling (ASRL).",
"The high cost of expert semantic annotation has spurred interest in building ASRL systems.",
"Much of this has been based on the PropBank [Palmer et al., 2005] style of annotation, but work on Frame Semantic role labelers has continued, with increasing success (Das et al.",
"[2014] , Roth and Lapata [2015] ). [Figure 3: Frame Semantic Annotation of Equivalent Japanese Sentence.] These systems generally reflect the effort those researchers have made to understand the FrameNet data in depth, including dependencies between semantic roles within a frame, propagation of semantic types across frames, and dependencies between syntax and semantics in a specific sentence.",
"When Frame Element annotation is treated simply as independent tags for machine learning (even if syntactic information is imported from other sources), the learning algorithms are starved of the information needed to make smarter generalizations about the large proportion of the syntactic information about each lexical unit that is predictable from other lexical units in the frame, other related frames, or structures of the language as a whole, such as passivization and relative clause structure.",
"The current distribution format of the FrameNet data does not make this clear.",
"Since FrameNet data is basically discrete and categorial, treating it as an interlocking set of graphs should enable better use of all the information, explicit and implicit, in FrameNet.",
"Multilingual FrameNet The development of the FrameNet resource at ICSI has inspired the creation of a number of Frame Semantics-based projects for other languages: efforts on Spanish, German, Japanese, Chinese, Swedish, Brazilian Portuguese, and French have all received substantial funding, primarily from their national or provincial governments.",
"The basic research question is: to what extent are the semantic frames universal and to what extent are they language-specific?",
"Even if equivalent frames exist in two languages, how much of the frame structure will be preserved in translation?",
"If a different frame is used, is it a near neighbor via frame relations in one or both of the languages?",
"These questions have also been discussed by, e.g.",
"Boas [2009], Čulo [2013], and Čulo and de Melo [2012].",
"The sentence in Fig.",
"2 is part of an experiment in annotation of parallel texts; TED talks were chosen because translations are freely available in all of these languages.",
"The TED talk translations are done by volunteers, so they may not be of professional quality, but this is a common situation on the web today, which NLP research has to deal with.",
"In general the TED talk translations tend to be fairly \"literal\", so we would expect that the frames would be very similar across languages.",
"However, frame differences occur even here.",
"E.g., in the graph of the Japanese translation of this sentence (shown in Fig.",
"3) , the first conjunct has the Judgement frame like the English, but the second instance of Judgement in English is translated by the frame Labeling in Japanese.",
"Here the agent of the labeling is the school, presumably metonymic for either the faculty, the students, or both.",
"Thus, the graph representation of the FrameNet data helps to make clear which parts of the sentences to compare across languages.",
"We hope that ultimately such comparisons will lead to graph-based MT systems that can transfer meaning at a deeper level.",
"One of the goals of the Multilingual FrameNet project is to quantify the patterns of frame occurrence across varied languages.",
"The new annotation of parallel texts has just begun, so the number of instances of frames is still small, but we can report some suggestive results based on comparing the annotation of verbs of motion in two texts.",
"One is the TED talk, where we have annotation for English and Brazilian Portuguese; the other is a chapter of the Sherlock Holmes story \"The Hound of the Baskervilles\", translated by professional translators, where we compare annotation in English and Spanish.",
"(We did some annotation previously on these texts in English, Spanish, Japanese and German, but not Portuguese.)",
"Table 1: Frame similarity and difference across parallel texts. Columns: Name, Lang, Same, Partial, Diff., Tot. Rows: TED, EN-PT, 38, 4, 22, 64; Hound, EN-ES, 33, 3, 23, 59.",
"Table 1 gives the counts for instances of verbs of motion in two texts, showing cases where the aligned verbs are the same or different across languages.",
"We had hypothesized that the professional, literary translations of the \"Hound\" text would have more cross-linguistic differences, while the volunteer translations of the TED talks would be more often frame-preserving.",
"The counts shown here conform to that expectation, but the differences are not conclusive.",
"Conclusion FrameNet data is extremely rich, but not usually presented in a form that is easy for use in NLP.",
"There are clear advantages to viewing the FrameNet annotation data as a graph that separates out entities (nodes) from relations (edges) and clarifies which information is semantic, syntactic, or both.",
"The semantic information can be cleanly integrated with FrameNet's already elaborate graph of frames and semantic types, while generalizations over syntactic information should enable improved use of FrameNet annotation in ASRL training and cross-linguistic comparison.",
"Acknowledgements"
]
}
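The traversal described in the paper text above (start from the "S" node and follow "Head" / "Sem H" edges down to the two Judgement frame instances) can be sketched with a plain adjacency structure. This is a purely illustrative toy, not the FrameNet release format; the node names (e.g. "VP[1]", "Judgement#1") are hypothetical stand-ins for the example sentence's graph.

```python
# Toy sketch of the sentence-annotation graph: nodes are
# syntactico-semantic entities, edges carry labels such as "Head",
# "Sem H" (semantic head), and "T" (target). Hypothetical data.
EDGES = {
    "S": [("Head", "VP[1]"), ("Sem H", "VP[2]")],
    "VP[1]": [("Sem H", "Judgement#1"), ("T", "valued")],
    "VP[2]": [("Sem H", "Judgement#2"), ("T", "stigmatized")],
}

FRAMES = {"Judgement#1", "Judgement#2"}

def semantic_heads(node, edges=EDGES, frames=FRAMES):
    """Follow 'Head'/'Sem H' edges from a node down to frame instances."""
    found = []
    stack = [node]
    while stack:
        n = stack.pop()
        if n in frames:
            found.append(n)
            continue
        for label, child in edges.get(n, []):
            if label in ("Head", "Sem H"):
                stack.append(child)
    return sorted(found)

print(semantic_heads("S"))
```

Starting from "S", the walk reaches both Judgement instances, after which an application could drill down into the frame hierarchy or role fillers as the paper describes.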
|
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"5",
"6"
],
"paper_header_content": [
"Overview",
"Frame Semantics and English FrameNet",
"Frame Semantic and Construction",
"Applications of FrameNet data as a graph",
"Multilingual FrameNet",
"Conclusion"
]
}
|
GEM-SciDuet-train-49#paper-1078#slide-11
|
Uses of Graph methods with Frame Semantic Annotation and Parsing
|
The FrameNet lexical database as a set of graphs FN annotation Sentences Conclusions References
Visualize complex relations, including cross-lingual relations
Query with graph expressions (e.g. using Neo4j DB)
Express constraints as graph unification (Construction Grammar)
Summarize valences (Kernel Dependency Graphs, cf.
S or VP Judgement
Ext Cognizer T Obj Dep Evaluee Reason
NP admire NP PPing
|
The FrameNet lexical database as a set of graphs FN annotation Sentences Conclusions References
Visualize complex relations, including cross-lingual relations
Query with graph expressions (e.g. using Neo4j DB)
Express constraints as graph unification (Construction Grammar)
Summarize valences (Kernel Dependency Graphs, cf.
S or VP Judgement
Ext Cognizer T Obj Dep Evaluee Reason
NP admire NP PPing
|
[] |
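The slide's "summarize valences" bullet shows a kernel dependency graph for "admire" in the Judgement frame, with frame elements Cognizer, Evaluee, and Reason realized as Ext/NP, Obj/NP, and Dep/PPing. That summary can be held in a small mapping; this is an illustrative toy structure, not the actual FrameNet release schema.

```python
# Illustrative valence summary for "admire" (Judgement frame), pairing
# each frame element (FE) with its grammatical function (GF) and phrase
# type (PT), as in the kernel dependency graph on the slide.
VALENCE = {
    "admire": {
        "frame": "Judgement",
        "roles": [
            {"fe": "Cognizer", "gf": "Ext", "pt": "NP"},
            {"fe": "Evaluee", "gf": "Obj", "pt": "NP"},
            {"fe": "Reason", "gf": "Dep", "pt": "PPing"},
        ],
    }
}

def valence_pattern(lu, table=VALENCE):
    """Render one 'FE.GF.PT' triple per role, e.g. 'Cognizer.Ext.NP'."""
    entry = table[lu]
    roles = ["{fe}.{gf}.{pt}".format(**r) for r in entry["roles"]]
    return "{} ({}): {}".format(lu, entry["frame"], ", ".join(roles))

print(valence_pattern("admire"))
```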
GEM-SciDuet-train-49#paper-1078#slide-12
|
1078
|
Graph Methods for Multilingual FrameNets
|
This paper introduces a new, graph-based view of the data of the FrameNet project, which we hope will make it easier to understand the mixture of semantic and syntactic information contained in FrameNet annotation. We show how English FrameNet and other Frame Semantic resources can be represented as sets of interconnected graphs of frames, frame elements, semantic types, and annotated instances of them in text. We display examples of the new graphical representation based on the annotations, which combine Frame Semantics and Construction Grammar, thus capturing most of the syntax and semantics of each sentence. We consider how graph theory could help researchers to make better use of FrameNet data for tasks such as automatic Frame Semantic role labeling, paraphrasing, and translation. Finally, we describe the development of FrameNet-like lexical resources for other languages in the current Multilingual FrameNet project, which seeks to discover cross-lingual alignments, both in the lexicon (for frames and lexical units within frames) and across parallel or comparable texts. We conclude with an example showing graphically the semantic and syntactic similarities and differences between parallel sentences in English and Japanese. We will release software for displaying such graphs from the current data releases.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89
],
"paper_content_text": [
"Overview In this paper, we provide a new graph-based display of FrameNet annotation, which we hope will make the complex data model of FrameNet more accessible to a variety of users.",
"We begin with a brief introduction to the Frame Semantics and the FrameNet project and their underlying graph structures.",
"Section 3 illustrates how annotation maps words in sentences to nodes in FrameNet, showing the struture of a sentence in the new graph representation.",
"Sect.",
"4 discusses how the graph representation could help NLP developers, particularly w.r.t.",
"automatic semantic role labeling.",
"In Sect.",
"5, we introduce the Multilingual FrameNet project, and what comparisons of frame structures across languages might reveal by way of another example sentence in the new format, then discuss our conclusions and acknowledge support for our work.",
"Frame Semantics and English FrameNet The FrameNet Project [Fillmore and Baker, 2010, Ruppenhofer et al., 2016] at the International Computer Science Institute (ICSI) is an ongoing project to produce a lexicon of English that is both human- and machine-readable, based on the theory of Frame Semantics developed by Charles Fillmore and colleagues [Fillmore, 1997] and supported by annotating corpus examples of the lexical items.",
"Although FrameNet (FN) is a lexical resource, it is organized not around words, but rather the roughly 1,200 semantic frames [Fillmore, 1976] : characterizations of events, relations, states and entities which are the conceptual basis for understanding the word senses, called lexical units (LUs).",
"Frames are distinguished by the set of roles involved, known as frame elements (FEs).",
"Defining individual lexical units relative to semantic frames provides a crucial level of generalization for their meaning and use.",
"Much of the information in FN is derived from the more than 200,000 manually annotated corpus sentences; annotators not only mark the target word which evokes the frame, but also those phrases which are syntactically related to the target word and express its frame elements.",
"FN covers roughly 13,500 LUs, and provides very rich syntagmatic information about the combinatorial possibilities of each LU.",
"Each frame averages about 10 frame elements, and the same frame can be evoked by words (or multiword expressions) of any part of speech.",
"FrameNet frames are connected by eight types of relations, including full inheritance (IS-A relation) in which all core FEs are inherited, weaker forms of inheritance (called Using and Perspective on), and relations between statives, inchoatives, and causatives.",
"Most frames are linked in a single large lattice (analyzed in Valverde-Albacete [2008] ).",
"The full graph is difficult to render, but can be browsed at https://framenet.icsi.berkeley.",
"edu/fndrupal/FrameGrapher FrameNet also has a small hierarchy of semantic types which can be marked on Frames, FEs and LUs; a portion is shown in Fig.",
"1 .",
"Many of the semantic types in FrameNet are similar to nodes in widely used ontologies, but they are limited to those which are linguistically important; for example, most agent FEs (not only those called \"Agent\", but all those descended from the AGENT FE in the high-level frame Intentionally act) have the semantic type SENTIENT (Non-sentient actants receive the FE CAUSE).",
"(Footnote 1: Matching FE semantic types to fillers is complicated by phenomena such as metonymy (The White House announced today ...) and personification (She still runs good, but eventually she'll need new tires.), not fully addressed in FN.)",
"Some semantic types add information which cross-cuts the frame hierarchy; e.g., POSITIVE JUDGEMENT and NEGATIVE JUDGEMENT are used to separate those LUs in the frames Judgement, Judgement communication and Judgement direct address that have positive affect from those with negative affect.",
"Frame Semantic and Construction Grammar representation of sentence meaning The development of Frame Semantics has gone hand in hand with the development of Construction Grammar, by Fillmore and a wide range of colleagues (Michaelis [2010] , Feldman et al.",
"[2010] ).",
"FrameNet annotators not only mark which spans of the corpus sentences instantiate which Frame Elements, but also the phrase type (PT) of the constituent that covers that span 2 and the grammatical function (GF, a.k.a.",
"grammatical relation) between that constituent and the target instance of the lexical unit as a coextensive set of spans on three annotation \"layers\".",
"Additional information is added on other \"layers\" indicating the presence of copulas and other support verbs, the antecedents of relative clauses, etc.",
"This syntactic information, based on Construction Grammar, can be combined with the FE labels to form a joint syntactico-semantic representation of much of the meaning of a sentence.",
"In graph terms, the annotation process creates a mapping between the string of characters in the sentence and (1) nodes representing frame elements in the frame hierarchy and (2) nodes representing parts of constructions in the Construction Grammar hierarchy.",
"We illustrate this with an example sentence extracted from a TED talk entitled \"Do schools kill creativity?\"",
"by Ken Robinson 3 : The thing they were good at in school wasn't valued, or was actually stigmatized.",
"The graph representation derived from FrameNet annotation is shown in Fig.",
"2 .",
"4 In this figure, the nodes of the graph are syntactico-semantic entities (solid borders) or semantic entities (dotted borders) and the words of the sentence are the terminal nodes of the graph (in boxes).",
"Each edge specifies the relationship between nodes, solid black for syntactico-semantic Though not shown in this graph, each frame instance is also linked to the frame hierarchy graph (Sec.",
"2).",
"The edges descending from the frames semantically represent the relations described by Frame Elements in the same hierarchy.",
"The dotted lines pointing to dotted nodes are links into the semantic type hierarchy (Sec.",
"2.1).",
"The syntactic features of the non-terminal nodes are summarized by Phrase Type (PT) labels (S, N, NP, V, VP, PP, etc.",
"with their conventional meanings) and part-ofspeech (not shown).",
"Other features on the edges are syntactico-semantic categories: T (target, the word(s) that evokes the frame), RelC (relative clause), Ant.",
"(antecedent of relative clause), Head (syntactic and semantic head), Sem H (semantic head), and Supp (support, a syntactic head).",
"Applications of FrameNet data as a graph The ability to separate syntactic and semantic dependency is potentially of use in many tasks involving FrameNet data, including automatic semantic role labeling (ASRL), inferencing, language generation, and cross-linguistic comparison.",
"Because of the clear representation of syntactic and semantic dependency in the graph (displayed in Fig.",
"2 by vertical position, arrow direction, and non-local edges), many tasks should be able to use the graph even without special processing for the subtypes of edges, e.g.",
"for relative clauses as seen under NP[3].",
"To find out the overall meaning of this sentence, one can start from the \"S\" node and follow the edges marked \"Head\" or \"Sem H\" to the two instances of the Judgement frame.",
"From there, the application can drill further down as needed, into the frame hierarchy, the semantic type hierarchy, or the fillers of the frame roles.",
"One task in particular that could use the full power of such graphs is automatic semantic role labeling (ASRL).",
"The high cost of expert semantic annotation has spurred interest in building ASRL systems.",
"Much of this has been based on the PropBank [Palmer et al., 2005] style of annotation, but work on Frame Semantic role labelers has continued, with increasing success (Das et al.",
"[2014] , Roth and Lapata [2015] ), which generally reflect the effort those researchers have made to understand the FrameNet data in depth, including dependencies between semantic roles within a frame, propagation of semantic types across frames, and dependencies between syntax and semantics in a specific sentence.",
"(Figure 3: Frame Semantic Annotation of Equivalent Japanese Sentence.)",
"When Frame Element annotation is treated simply as independent tags for machine learning (even if syntactic information is imported from other sources), the learning algorithms are starved of the information needed to make smarter generalizations about the large proportion of the syntactic information about each lexical unit that is predictable from other lexical units in the frame, other related frames, or structures of the language as a whole, such as passivization and relative clause structure.",
"The current distribution format of the FrameNet data does not make this clear.",
"Since FrameNet data is basically discrete and categorial, treating it as an interlocking set of graphs should enable better use of all the information, explicit and implicit, in FrameNet.",
"Multilingual FrameNet The development of the FrameNet resource at ICSI has inspired the creation of a number of Frame Semantics-based projects for other languages: efforts on Spanish, German, Japanese, Chinese, Swedish, Brazilian Portuguese, and French have all received substantial funding, primarily from their national or provincial governments.",
"The basic research question is: to what extent are the semantic frames universal and to what extent are they language-specific?",
"Even if equivalent frames exist in two languages, how much of the frame structure will be preserved in translation?",
"If a different frame is used, is it a near neighbor via frame relations in one or both of the languages?",
"These questions have also been discussed by, e.g.",
"Boas [2009], Čulo [2013], and Čulo and de Melo [2012].",
"The sentence in Fig.",
"2 is part of an experiment in annotation of parallel texts; TED talks were chosen because translations are freely available in all of these languages.",
"The TED talk translations are done by volunteers, so they may not be of professional quality, but this is a common situation on the web today, which NLP research has to deal with.",
"In general the TED talk translations tend to be fairly \"literal\", so we would expect that the frames would be very similar across languages.",
"However, frame differences occur even here.",
"E.g., in the graph of the Japanese translation of this sentence (shown in Fig.",
"3) , the first conjunct has the Judgement frame like the English, but the second instance of Judgement in English is translated by the frame Labeling in Japanese.",
"Here the agent of the labeling is the school, presumably metonymic for either the faculty, the students, or both.",
"Thus, the graph representation of the FrameNet data helps to make clear which parts of the sentences to compare across languages.",
"We hope that ultimately such comparisons will lead to graph-based MT systems that can transfer meaning at a deeper level.",
"One of the goals of the Multilingual FrameNet project is to quantify the patterns of frame occurrence across varied languages.",
"The new annotation of parallel texts has just begun, so the number of instances of frames is still small, but we can report some suggestive results based on comparing the annotation of verbs of motion in two texts.",
"One is the TED talk, where we have annotation for English and Brazilian Portuguese; the other is a chapter of the Sherlock Holmes story \"The Hound of the Baskervilles\", translated by professional translators, where we compare annotation in English and Spanish.",
"(We did some annotation previously on these texts in English, Spanish, Japanese and German, but not Portuguese.)",
"Table 1: Frame similarity and difference across parallel texts. Columns: Name, Lang, Same, Partial, Diff., Tot. Rows: TED, EN-PT, 38, 4, 22, 64; Hound, EN-ES, 33, 3, 23, 59.",
"Table 1 gives the counts for instances of verbs of motion in two texts, showing cases where the aligned verbs are the same or different across languages.",
"We had hypothesized that the professional, literary translations of the \"Hound\" text would have more cross-linguistic differences, while the volunteer translations of the TED talks would be more often frame-preserving.",
"The counts shown here conform to that expectation, but the differences are not conclusive.",
"Conclusion FrameNet data is extremely rich, but not usually presented in a form that is easy for use in NLP.",
"There are clear advantages to viewing the FrameNet annotation data as a graph that separates out entities (nodes) from relations (edges) and clarifies which information is semantic, syntactic, or both.",
"The semantic information can be cleanly integrated with FrameNet's already elaborate graph of frames and semantic types, while generalizations over syntactic information should enable improved use of FrameNet annotation in ASRL training and cross-linguistic comparison.",
"Acknowledgements"
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"5",
"6"
],
"paper_header_content": [
"Overview",
"Frame Semantics and English FrameNet",
"Frame Semantic and Construction",
"Applications of FrameNet data as a graph",
"Multilingual FrameNet",
"Conclusion"
]
}
|
GEM-SciDuet-train-49#paper-1078#slide-12
|
Conclusions
|
The current XML format is too close to the DB structure, less than optimal for both humans and machines
A more perspicuous representation would help collaboration in Multilingual FrameNet and NLP research more generally
Graphs can serve this purpose
We welcome your suggestions about how we can make better use of graph representations!
|
The current XML format is too close to the DB structure, less than optimal for both humans and machines
A more perspicuous representation would help collaboration in Multilingual FrameNet and NLP research more generally
Graphs can serve this purpose
We welcome your suggestions about how we can make better use of graph representations!
|
[] |
GEM-SciDuet-train-50#paper-1080#slide-0
|
1080
|
Extractive Headline Generation Based on Learning to Rank for Community Question Answering
|
User-generated content such as the questions on community question answering (CQA) forums does not always come with appropriate headlines, in contrast to the news articles used in various headline generation tasks. In such cases, we cannot use paired supervised data, e.g., pairs of articles and headlines, to learn a headline generation model. To overcome this problem, we propose an extractive headline generation method based on learning to rank for CQA that extracts the most informative substring from each question as its headline. Experimental results show that our method outperforms several baselines, including a prefix-based method, which is widely used in real services.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260,
261,
262,
263
],
"paper_content_text": [
"Introduction Community question answering (CQA) is a service where users can post their questions and answer the questions from other users.",
"The quality of a CQA service is inevitably linked to how many questions are answered in a short amount of time.",
"To this end, the headline of a posted question plays a key role, as it is the first thing users see in a question list or push notification on a smartphone.",
"However, questions in a CQA service do not always have appropriate headlines because the questions are written by various users who basically do not have any specialized knowledge in terms of writing such content, in contrast to news articles written by professional editors.",
"In fact, the biggest CQA service in Japan, Yahoo!",
"Chiebukuro 1 , does not even provide an input field for headlines in the submission form of questions, as general users do not have enough patience and tend not to post questions if even just one required field is added to the form.",
"This service alternatively uses the prefix of a question as its headline as in Figure 1(a) , where the headline \" (Nice to meet you.",
"Thank you in advance.",
"At work ...)\" is created from the prefix of the content.",
"Obviously, this headline is uninformative because of the lack of actual content, which is related to an initial unrequited love of a woman in the workplace.",
"Figure 1(b) shows how ineffective such an uninformative headline is for a smartphone push notification, where users would have practically zero motivation to click on it and answer since they cannot imagine what kind of question is being asked.",
"This negative effect has been confirmed on a commercial CQA service (Section 2).",
"In this work, we take an extractive approach for improving uninformative headlines.",
"Although there have recently been many studies on abstractive headline generation (as described in Section 6), we do not follow any of these approaches because the content we deal with has no correct headlines and also the abstractive methods can yield erroneous output.",
"This latter issue is important in practical terms because correct output is critical for a commercial service.",
"If by some chance an erroneous headline, or one including politically incorrect phrases, is sent to all users, the service's credibility can be lost instantly.",
"Therefore, we formalize our task as an extraction problem of a fixed-length substring from a question as its headline.",
"In this setting, we can assume that outputs never include such errors caused by the service, as the outputs are substrings of user-posted questions.",
"Note that they have no coherence errors in selecting multiple sentences as in normal extractive summarization tasks.",
"While it is true that these outputs might contain inappropriate expressions authored by users, this type of error is beyond our scope since it is a different inevitable problem.",
"Furthermore, the situation of \"the service generated an inappropriate expression by itself\" is significantly worse than the situation of \"a user posted an inappropriate question to the service, and the service displayed it\".",
"Therefore, it is difficult to directly use abstractive methods for commercial services from a business standpoint.",
"Our approach involves preparing headline candidates and ranking them.",
"The formal description is as follows.",
"Let q be a target question to be translated into a headline.",
"We first prepare a set S(q) of headline candidates from the question q.",
"Note that the set S(q) is restricted to a set of fixed-length substrings given a length n, i.e., S(q) ⊆ {x | x ⪯ q, |x| = n}, where x ⪯ q means that x is a substring of q.",
"Then we extract the best headline that maximizes a score function f q (x), which represents the \"headline-ness\" score of a candidate x ∈ S(q) with respect to the target question q, as follows: argmax x∈S(q) f q (x).",
"(1) To ensure simplicity of implementation and understandability from users, we use a set of the fixed-length prefixes of all sentences in a question q as the candidate set S(q) (Section 3).",
"Because the problem is to select the best candidate from among several targets, the score function f q (x) is naturally trained by learning to rank (Section 4).",
"The main contributions of this paper are as follows.",
"• We report empirical evidence of the negative effect of uninformative headlines on a commercial CQA service (Section 2).",
"We additionally show that our task can reduce uninformative headlines by using simple dictionary matching, which dramatically improves the average answer rate of questions by as much as 2.4 times.",
"• We propose an extractive headline generation method based on learning to rank for CQA (Section 4) that extracts the most informative substring (prefix of mid-sentence) from each question as its headline.",
"To the best of our knowledge, our work is the first attempt to address such a task from a practical standpoint, although there have been many related studies (Section 6).",
"Experimental results show that our method outperforms several baselines, including the dictionary-based method (Section 5).",
"• We create a dataset for our headline generation task (Section 3), where headline candidates extracted from questions are ranked by crowdsourcing with respect to \"headline-ness\", that is, whether or not each headline candidate is appropriate for the headline of the corresponding question.",
"Negative Effect of Uninformative Headlines We conducted A/B testing on the push notifications of smartphones in collaboration with Yahoo!",
"Chiebukuro, as shown in Figure 1 (b).",
"We first prepared a dictionary of typical first sentences that cause uninformative headlines.",
"This dictionary was manually selected from frequent first sentences and consists of 913 sentences including greetings such as \" (Good morning)\", \"",
"(Good afternoon)\", and \" (Nice to meet you)\", and fixed phrases such as \" (Can I ask you something)\", \" (Please tell me)\", and \" (Thank you in advance)\".",
"(Table 1: Average answer rates of three question groups in A/B testing. Changed uninformative: 0.75%; Unchanged uninformative: 0.31%; Informative: 0.45%.)",
"We assumed that a question with a prefix match in the dictionary has an uninformative prefix headline and classified such questions into an uninformative group.",
"For convenience, we also classified the other questions into an informative group, although they might include not so informative headlines.",
"We further randomly divided the uninformative group into two equal groups: changed and unchanged.",
"In the changed uninformative group, each headline is extracted as the prefix of the first (informative) sentence that does not match with the dictionary, which is the same as DictDel explained in Section 5.2.",
"The unchanged group remains in uninformative.",
"For comparison of these groups, we used the average answer rate over notified questions in each group as an evaluation measure, defined as: Average answer rate = (No. of questions answered from the notification) / (No. of notified questions). (2)",
"Note that we use a percentage expression (%) for easy reading.",
"Table 1 shows the evaluation results of the A/B testing during a 1-month period (Feb. 2 -Mar.",
"4, 2018) , where about three million questions were sent to users.",
"Comparing the unchanged uninformative group with the informative group, we can see that the average answer rate of the uninformative questions, 0.31%, is actually lower than that of the informative questions, 0.45%.",
"Comparing the changed and unchanged uninformative groups, the average answer rate of the changed questions, 0.75%, is much higher than that of the unchanged questions, 0.31%.",
"This means that even a simple dictionary-based method can dramatically improve the quality of the uninformative headlines, i.e., by as much as 2.4 times.",
"We confirmed that the difference is statistically significant on a one-tailed Wilcoxon signed-rank test (p < 0.05).",
"Note that the average answer rate represents a conversion rate (or rate of target actions), which is more important than a click-through rate (or rate of initial actions).",
"The average answer rate is one of the most important indicators for a CQA service, while the click-through rate can be meaninglessly high if the service sends headlines that are fake or too catchy.",
"We should point out that low answer rates are sufficient for the service, since it has 44M users: i.e., each question has an average of 2.4 answers, as the service currently has 189M questions and 462M answers.",
"Dataset Creation We created a dataset for our headline generation task based on the Yahoo!",
"Chiebukuro dataset 2 , which is a dataset including questions and answers provided from a Japanese CQA service, Yahoo!",
"Chiebukuro.",
"We first prepared headline candidates from this dataset as in Section 3.1 and then conducted a crowdsourcing task specified in Section 3.2.",
"In Section 3.3, we report the results of the crowdsourcing task.",
"Preparation of Headline Candidates We extracted only questions from the Chiebukuro dataset and split each question into sentences by using punctuation marks (i.e., the exclamation (\" \"), question (\" \"), and full stop (\" \") marks).",
"We regarded 20 Japanese characters that are basically extracted from each sentence as a headline candidate x ∈ S(q) in Eq.",
"(1), since this setting is used for push notifications in the actual service in Figure 1 (b).",
"More specifically, the headline candidate is created as follows: 1.",
"If the sentence is the first one in the question, we extract the first 19 characters and put an ellipsis mark (\" \") at the end.",
"(a) Example of our crowdsourcing task.",
"Posted Question: Nice to meet you, I am a man in my 30s.",
"Please give me your advice on a pressing concern I have.",
"A dog kept in the next house barks from morning to night.",
"Neighbors have given the owner cautions against it, but there is no improvement.",
"This area has only private houses, not rented houses, so I cannot move out.",
"However, I will go crazy if I have to keep enduring this.",
"How can I effectively manage this problem?",
"...",
"Please give me your advice on a pressing concern ... (b) English translation of left example.",
"Figure 2 : Examples of (a) our crowdsourcing task and (b) its English translation.",
"In the case where the length of a candidate is less than 20 characters, we include some of the next sentence in order to maximize use of display space.",
"We included questions with more than five sentences for the purpose of efficiently collecting ranking information.",
"All told, we prepared 10,000 questions containing more than five headline candidates each.",
"Figure 2 shows an example of our crowdsourcing task (a) and its English translation (b).",
"This task involves a posted question and headline candidates corresponding to the question.",
"We asked workers to select the best candidate from options after reading the posted question.",
"A relative evaluation, where workers select the best candidate, was used instead of an absolute evaluation, where workers select a score from 0 to 10 for each candidate, because we wanted to obtain as accurate a headline as possible, and it might be difficult for workers to select an appropriate absolute score.",
"The workers were instructed as follows (English translation): Crowdsourcing Task Various candidate headlines are listed as options for a posted question in a Q&A service.",
"After reading the question, please select the best option from the list so that users can guess the content of the question and distinguish it from other ones.",
"Please remove uninformative ones such as greetings, self-introductions, and unspecific expressions.",
"We explain how to judge the appropriateness of each candidate by means of the example in Figure 2 .",
"After examining the posted question, we can assume that the most important content is \"he is annoyed by the barking of a dog kept in the next house\".",
"On the basis of this assumption, option 6 is the best one, since the headline \"A dog kept in the next house barks from morning\" is enough to help answerers deduce that \"the questioner is annoyed by the sound\".",
"Option 1 is inappropriate because although the answerers might be able to guess that \"the questioner cannot move out\", this matter is not the central one.",
"Option 2 is uninformative because it consists merely of greetings and self-introduction, and option 3, while a question sentence, is unspecific.",
"Option 4 enables answerers to guess that this question is related to an issue involving pets, but they cannot grasp the specific content.",
"Option 5 specifies a likely damage due to the trouble, but the reason (trouble) is more important for the answering than the result (damage).",
"Option 7 directly shows that \"the questioner is annoyed and wants some advice\", but answerers cannot understand why he/she is annoyed.",
"The detailed implementation of our task is as follows.",
"First we randomly sorted the candidates of each question (shown in Figure 2 (a)) to avoid position bias by the workers.",
"We included ten actual questions and a dummy question so that workers would always have to answer one dummy question per every ten actual questions.",
"A dummy question is a question with a clear answer inserted to eliminate fraud workers (i.e., workers who randomly select answers without actually reading them).",
"Each question was answered by ten workers so each headline candidate had a vote score from 0 to 10 representing whether or not the candidate was appropriate for a headline.",
"This task took nine days and was answered by 1,558 workers.",
"As a result, our dataset consists of 10,000 questions, each of which has more than five headline candidates with accompanying \"headline-ness\" scores.",
"Analysis of Crowdsourcing Results We analyzed our dataset to determine how much room for improvement our task has compared to the prefix headline.",
"It is well known that the first sentence can be a strong baseline in many summarization tasks, so the prefix headline is also expected to perform well.",
"Figure 3 shows the statistical information based on sentence position that includes the (a) ratio of the true best (most voted) candidates, (b) average votes, and (c) average rank (in order of votes) over the candidates at each sentence position.",
"Looking at the true best candidates (a), the ratio 61.8% for the 1st sentence clarifies the effectiveness of the prefix headline, as expected.",
"Conversely, we still have room for improvement for the prefix headline up to 38.2%.",
"Our goal is to improve the uninformative headlines of 38.2% while keeping the remaining 61.8% unchanged.",
"The other two figures (b) and (c) also support the above discussion.",
"Furthermore, we qualitatively checked the crowdsourcing quality.",
"Workers successfully eliminated uninformative candidates including greetings and self-introductions, while one or two workers sometimes chose ones that included a fixed phrase such as \"Please tell me ...\".",
"This is probably because workers had different criteria regarding the \"unspecific expressions\" described in the instructions.",
"Since we cannot enumerate all of the concrete bad examples, we ignore this phenomenon with the expectation that a learning algorithm will reduce its negative effect.",
"Proposed Method In this section, we explain how to construct a headline generation model from the dataset presented in Section 3.",
"We took a ranking approach, i.e., learning to rank, for our task, rather than a simple regression one, since estimating absolute scores is not required for our purpose.",
"Even if two headline candidates (of different questions) have the same expression, their votes can be significantly different since the votes in our dataset are based on relative evaluation.",
"For example, the best candidate for Figure 2 was No.",
"6, but it might not be selected in other questions such as \"A dog kept in the next house barks from morning.",
"Does anybody know why dogs generally want to bark?\".",
"Learning to rank is an application of machine learning that is typically used for ranking models in information retrieval systems.",
"The ranking models are basically learned from a supervised dataset consisting of triples (q, x, y), where q is a user's query, x is a document corresponding to q, and y is a relevance score of x with respect to q.",
"In this work, we formalize our task by regarding q, x, and y, as a posted question, a headline candidate, and a voted score in our dataset, respectively.",
"We used a pairwise ranking method that is also implemented as an instance of the well-known SVM tools LIBLINEAR and LIBSVM (Kuo et al., 2014) .",
"We used a linear model based on LIBLINEAR, an L2-regularized L2-loss linear rankSVM, for the experiments.",
"Let D be a dataset that consists of triples including a posted question q, a headline candidate x, and a voted score y, i.e., (q, x, y) ∈ D. In the pairwise ranking method, we train a ranking model as a binary classifier that determines whether the condition y i > y j is true or false for two candidates x i and x j in the same question (q i = q j ).",
"Specifically, we first define the index pairs of positive examples by P = {(i, j) | q i = q j , y i > y j , (x i , y i , q i ) ∈ D, (x j , y j , q j ) ∈ D}.",
"Note that we do not need to consider the negative examples N = {(j, i) | (i, j) ∈ P } since they yield the same formula as P in the optimization process.",
"The training of the pairwise ranking method is achieved by solving the following optimization problem using the set P of the index pairs: min w 1 2 w ⊤ w + C ∑ (i,j)∈P ℓ(w ⊤x i − w ⊤x j ), (3) where w is a weight vector to be learned,x i is a feature vector extracted from a headline candidate x, and C is the regularization parameter.",
"The function ℓ is a squared hinge loss, which is defined as ℓ(d) = max(0, 1 − d) 2 .",
"Finally, we define the score function in Eq.",
"(1) as f q (x) = w ⊤x , wherex can be created by using q as well as x.",
"This score means the relative \"headline-ness\" of x.",
"Experiments Basic Settings The basic settings of the experiments are as follows.",
"We split our dataset into training and test sets consisting of 9,000 and 1,000 examples, respectively.",
"We used an implementation 3 based on LIBLIN-EAR for training our ranking model, i.e., a linear L2-regularized L2-loss rankSVM model , as described in Section 4.",
"The regularization parameter was optimized by cross validation and set as C = 0.125.",
"The feature vector for a headline candidate consists of three kinds of features: bag-of-words, embedding, and position information.",
"The bag-of-words feature is a sparse vector of 30,820 dimensions based on the tf-idf scores of nouns, verbs, interjections, conjunctions, adverbs, and adjectives in a candidate, where we used a Japanese morphological analyzer, MeCab 4 (Kudo et al., 2004) , with a neologism dictionary, NEologd 5 (Toshinori Sato and Okumura, 2017).",
"The embedding feature is a dense vector of 100 dimensions based on a doc2vec model (Le and Mikolov, 2014) trained with all 3M sentences in the Chiebukuro dataset using the Gensim tool 6 .",
"The position feature is a binary vector of ten dimensions, where each dimension represents the coarse position (or coverage) of a headline candidate for a question.",
"Specifically, we equally split a question (character sequence) into ten parts and set one to each dimension if and only if the corresponding part overlaps a candidate.",
"For example, candidate No.",
"2 in Figure 2 had a position feature (1, 1, 0, · · · , 0), since the candidate covers the first 2/10 of the whole question.",
"Similarly, No.",
"6 and No.",
"3 had (0, 0, 1, 1, 0, · · · , 0) and (0, · · · , 0, 1), respectively.",
"For constructing the feature vector of each headline candidate, we used the previous and next candidates in sentence order, in addition to the target candidate.",
"This is based on the idea that near candidates might have useful information for the target candidate.",
"Finally, we prepared each feature vector by concatenating nine feature vectors, i.e., the above three kinds of features for three candidates, and normalizing them.",
"Compared Methods We compared our method, MLRank, with the baselines listed below.",
"Prefix, DictDel, and Random are simple baselines, while Prefix and DictDel are practically strong.",
"ImpTfidf, SimTfidf, SimEmb, and LexRank are unsupervised baselines, and SVM and SVR are supervised ones.",
"• Prefix: Selects the first candidate in sentence order.",
"• DictDel: Selects the first (informative) candidate that does not match in the dictionary of uninformative headlines (Section 2).",
"• Random: Randomly selects a candidate.",
"• ImpTfidf: Selects the most important candidate with the highest tf-idf value, where a tf-idf value is calculated by the sum of the elements in a bag-of-words feature (described in Section 5.1).",
"• SimTfidf: Selects the most similar candidate to the original question, which is calculated by the cosine similarity between the bag-of-words features (in Section 5.1) of each candidate and the question.",
"• SimEmb: An embedding-based variation of SimTfidf with embedding features (in Section 5.1).",
"• LexRank: Selects the candidate with the highest score based on LexRank 7 (Erkan and Radev, 2004) , which is a widely used unsupervised extractive summarization method based on the PageRank algorithm.",
"The graph expression of each question was constructed on the basis of cosine similarity of the tf-idf vectors corresponding to candidates.",
"• SVM: Selects the candidate with the highest confidence based on a model learned as a classification task, where candidates with nonzero votes were labeled as positive.",
"This setting was the best in our preliminary experiments.",
"We used the L2-regularized L2-loss support vector classification model (C = 0.0156) in LIBLINEAR.",
"The other settings were the same as those described in Section 5.1.",
"• SVR: Selects the candidate with the highest predicted votes based on a model learned as a regression task, where the target variable is the number of votes.",
"We used the L2-regularized L2-loss support vector regression model (C = 0.0625) in LIBLINEAR.",
"The other settings were the same as above.",
"• MLRank: Proposed method described in Section 4.",
"Evaluation Measures We defined three evaluation measures for evaluating each method on our headline generation task.",
"Change Rate from Prefix Headline We measured how much each method changed the default prefix headline in order to determine the effect of application to an actual CQA service.",
"We defined this measure as change rate from the prefix headline, as follows: Change rate = No.",
"of questions where the best candidate is not the prefix headline No.",
"of all questions .",
"Clearly, the change rate of the default method that selects the prefix headline is 0%.",
"If the value is small, the effect on the service will be small, but if the value is higher than the ideal change rate of 38.2% (Section 3), there can be side effects even if the average result is good.",
"A higher change rate up to the ideal rate is desirable from a practical standpoint.",
"Winning Rate against Prefix Headline We measured how much each method won against the prefix headline to directly assess the quality of changed headlines.",
"We defined this measure as winning rate against the prefix headline, as follows: Winning rate = No.",
"of questions where the best candidate got more votes than the prefix headline No.",
"of questions where the best candidate is not the prefix headline .",
"(5) We did not consider the first candidate (in sentence order) selected by each method, which is the same as the prefix headline, since they obviously have the same number of votes.",
"Average Votes We measured how appropriate the candidates selected by each method are in order to determine the overall performance.",
"We defined this measure as average votes, as follows: Average votes = Sum of votes for the best candidates for all questions No.",
"of questions .",
"Note that the average votes score is different from the average votes score for position (Figure 3(b) ) in that the former is the average over the selected candidates while the latter is the average over the candidates at a sentence position.",
"This measure is related to (normalized) discounted cumulative gain (DCG), which is widely used as an evaluation measure of ranking models.",
"We often use DCG@k for evaluating top-k rankings, and the above definition acutally corresponds to DCG@1.",
"According to a well-known paper (Järvelin and Kekäläinen, 2002) in the information retrieval field, DCG is appropriate for graded-relevance judgments like our task, while precision (described below) is appropriate for binary-relevance ones.",
"Average votes is expected to be more appropriate than precision for our task because we want \"averagely better headlines than default ones\" rather than \"best ones\" from a practical standpoint.",
"Precision Precision is a widely used evaluation measure for classification tasks, and we added it to support an evaluation based on average votes.",
"We defined it with respect to the best candidate, i.e., precision@1, as follows: Precision = No.",
"of questions where the best candidate had the maximum votes No.",
"of questions .",
"(7) Table 2 shows examples of headlines generated by the prefix method, Prefix, and the proposed method, MLRank.",
"Looking at the first example, we can see that our method successfully eliminated the selfintroduction phrase (\"I am a 27-year-old woman\").",
"The headline (right) of our method allows answerers to know that the questioner is discouraged about how to encounter men from the phrase \"little chance of new encounters with men\", while the headline (left) of the prefix method lacks this important clue.",
"The second and third examples show similar effects to the first example.",
"In the second one, although there are few clues about what the question is with the prefix method, our method correctly included the important clue (\"honeymoon\").",
"In the third one, our method appropriately eliminated the uninformative long phrase (\"I am sorry if the category is wrong\"), which is not a frequent fixed phrase.",
"The fourth example shows a slightly challenging case, where both headlines make it difficult to understand the question.",
"However, the headline of our method included the term \"winning bidder\", so at least the answerer can assume that the question is about some sort of auction trouble.",
"The fifth example is a clearly successful result, where our method extracted the main question point about \"welfare pension\" as a headline.",
"These results qualitatively demonstrate the effectiveness of our method.",
"Results Qualitative Analysis Quantitative Analysis We compared our method MLRank with the baselines in Section 5.2 on the headline generation task for our dataset in Section 3.",
"Table 3 shows the evaluation results based on the change rates, winning rates, average votes, and precision.",
"Looking at the average votes and precision, which represent the overall performances, our method MLRank clearly performed the best among all methods.",
"We confirmed that Table 3 : Evaluation results of our headline generation task for proposed method MLRank and baselines.",
"the relative improvement of the average votes of our method MLRank against every baseline including the prefix method Prefix is statistically significant on the basis of a one-tailed Wilcoxon signed-rank test (p < 0.01).",
"The change and winning rates of our method are 9.9% and 94.9%, respectively.",
"This means that our method detected 9.9% of the uninformative headlines and improved them with the high accuracy of 94.9%.",
"In other words, our method could successfully improve the overall performance while simultaneously avoiding any negative side effects.",
"The ideal results (Ref) based on correct labels suggest that our method still has room for improvement, especially for the change rate.",
"The results of the other baselines are as follows.",
"Not surprisingly, the prefix method Prefix performed well.",
"This is consistent with the fact that the first sentence can be a good summary in many summarization tasks.",
"The random method Random performed the worst, also as expected.",
"The dictionarybased deletion method DictDel was relatively useful, although the change rate was small.",
"The reason the winning rate of DictDel is relatively low compared with MLRank is that there are some cases where a combination of uninformative expressions can yield likely clues.",
"For example, the self-introduction \"I am a newbie of this forum\" itself is basically uninformative for a question, but a combination with additional information such as \"I am a newbie of this forum.",
"Where can I change the password ...\" can be more informative than only the additional information \"... Where can I change the password since I forgot it after ...\" because the combination specifies the target site by the expression \"this forum\".",
"The unsupervised methods, SimTfidf, SimTfidf, SimEmb, and LexRank, which are widely used for summarization tasks, performed worse than the prefix method Prefix.",
"Although the change rates are higher than our method, the winning rates are lower.",
"In other words, they yielded many bad headlines.",
"These results suggest that supervised learning specialized to the target task would be required.",
"Comparing the important sentence extraction method ImpTfidf and the similarity-based summarization method SimTfidf, we found that SimTfidf performed better.",
"This implies that the content information of each question is useful for our headline generation task, as is the case with other summarization tasks.",
"The similarity-based method SimEmb with embeddings performed worse than our expectation.",
"The reason seems to be that it was difficult to obtain meaningful document embeddings from long questions.",
"The graph-based method LexRank had a similar performance to SimTfidf, because LexRank tends to select a candidate similar to the question when only one candidate was selected.",
"The supervised methods, SVM and SVR, performed relatively well compared to the unsupervised methods, but they did not outperform the strong simple baselines, Prefix and DictDel, nor our method MLRank.",
"These results support the appropriateness of our approach.",
"Related Work In this section, we briefly explain several related studies from two aspects: headline generation task and CQA data.",
"As discussed below, our work is the first attempt to address an extractive headline generation task for a CQA service based on learning to rank the substrings of a question.",
"After Rush et al.",
"(2015) proposed a neural headline generation model, there have been many studies on the same headline generation task (Takase et al., 2016; Chopra et al., 2016; Kiyono et al., 2017; Ayana et al., 2017; Raffel et al., 2017) .",
"However, all of them are abstractive methods that can yield erroneous output, and the training for them requires a lot of paired data, i.e., news articles and headlines.",
"There have also been several classical studies based on nonneural approaches to headline generation (Woodsend et al., 2010; Alfonseca et al., 2013; Colmenares et al., 2015) , but they basically addressed sentence compression after extracting important linguistic units such as phrases.",
"In other words, their methods can still yield erroneous output, although they would be more controllable than neural models.",
"One exception is the work of Alotaiby (2011) , where fixed-sized substrings were considered for headline generation.",
"Although that approach is similar to ours, Alotaiby only considered an unsupervised method based on similarity to the original text (almost the same as SimTfidf in Section 5.2), in contrast to our proposal based on learning to rank.",
"This implies that Alotaiby's method will also not perform well for our task, as shown in Section 5.4.",
"There have been several studies on extractive summarization (Kobayashi et al., 2015; Yogatama et al., 2015) based on sentence embeddings, but they were basically developed for extracting multiple sentences, which means that these methods are almost the same as SimEmb in Section 5.2 for our purpose, i.e., extraction of the best candidate.",
"This also implies that they will not be suitable for our task.",
"Furthermore, recent sophisticated neural models for extractive summarization (Cheng and Lapata, 2016; Nallapati et al., 2017) basically require large-scale paired data (e.g., article-headline) to automatically label candidates, as manual annotation is very costly.",
"However, such paired data do not always exist for real applications, as in our task described in Section 1.",
"There have been many studies using CQA data, but most of them are different from our task, i.e., dealing with answering questions (Surdeanu et al., 2008; Celikyilmaz et al., 2009; Bhaskar, 2013; Nakov et al., 2017) , retrieving similar questions (Lei et al., 2016; Romeo et al., 2016; Nakov et al., 2017) , and generating questions (Heilman and Smith, 2010) .",
"Tamura et al.",
"(2005) focused on extracting a core sentence and identifying the question type as classification tasks for answering multiple-sentence questions.",
"Although their method is useful to retrieve important information, we cannot directly use it since our task requires shorter expressions for headlines than sentences.",
"In addition, they used a support vector machine as a classifier, which is almost the same as SVM in Section 5.2, and it is not expected to be suitable for our task, as shown in Section 5.4.",
"The work of Ishigaki et al.",
"(2017) is the most related one in that they summarized lengthy questions by using both abstractive and extractive approaches.",
"Their work is promising because our task is regarded as the construction of short summaries, but the training of their models requires a lot of paired data consisting of questions and their headlines, which means that their method cannot be used to our task.",
"Conclusion We proposed an extractive headline generation method based on learning to rank for CQA that extracts the most informative substring in each question as its headline.",
"We created a dataset for our task, where headline candidates in each question are ranked using crowdsourcing.",
"Our method outperformed several baselines, including a prefix-based method, which is widely used for cases where the display area is limited, such as the push notifications on smartphones.",
"The dataset created for our headline generation task will be made publicly available 8 .",
"Although our task is basically designed for extractive summarization, this dataset can also be used for abstractive summarization as a side information for training abstractive models.",
"In future work, we will investigate how effectively our method can perform in practical situations, e.g., push notifications.",
"In addition, we will consider how to improve the change rate of our method while keeping its winning rate and how to create a useful dataset even if removing the length limitation."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"5.1",
"5.2",
"5.3",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Negative Effect of Uninformative Headlines",
"Dataset Creation",
"Preparation of Headline Candidates",
"Crowdsourcing Task",
"Analysis of Crowdsourcing Results",
"Proposed Method",
"Basic Settings",
"Compared Methods",
"Evaluation Measures",
"Related Work",
"Conclusion"
]
}
|
GEM-SciDuet-train-50#paper-1080#slide-0
|
Community Question Answering Service
|
A dog kept in the next house barks from morning to night.
Post Question Answer of User B
No solution other than moving.
How can I effectively manage this problem?
Please contact the public health center.
Copyright (C) 2019 Yahoo Japan Corporation. All Rights Reserved.
|
A dog kept in the next house barks from morning to night.
Post Question Answer of User B
No solution other than moving.
How can I effectively manage this problem?
Please contact the public health center.
Copyright (C) 2019 Yahoo Japan Corporation. All Rights Reserved.
|
[] |
GEM-SciDuet-train-50#paper-1080#slide-1
|
1080
|
Extractive Headline Generation Based on Learning to Rank for Community Question Answering
|
User-generated content such as the questions on community question answering (CQA) forums does not always come with appropriate headlines, in contrast to the news articles used in various headline generation tasks. In such cases, we cannot use paired supervised data, e.g., pairs of articles and headlines, to learn a headline generation model. To overcome this problem, we propose an extractive headline generation method based on learning to rank for CQA that extracts the most informative substring from each question as its headline. Experimental results show that our method outperforms several baselines, including a prefix-based method, which is widely used in real services.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260,
261,
262,
263
],
"paper_content_text": [
"Introduction Community question answering (CQA) is a service where users can post their questions and answer the questions from other users.",
"The quality of a CQA service is inevitably linked to how many questions are answered in a short amount of time.",
"To this end, the headline of a posted question plays a key role, as it is the first thing users see in a question list or push notification on a smartphone.",
"However, questions in a CQA service do not always have appropriate headlines because the questions are written by various users who basically do not have any specialized knowledge in terms of writing such content, in contrast to news articles written by professional editors.",
"In fact, the biggest CQA service in Japan, Yahoo!",
"Chiebukuro 1 , does not even provide an input field for headlines in the submission form of questions, as general users do not have enough patience and tend not to post questions if even just one required field is added to the form.",
"This service alternatively uses the prefix of a question as its headline as in Figure 1(a) , where the headline \" (Nice to meet you.",
"Thank you in advance.",
"At work ...)\" is created from the prefix of the content.",
"Obviously, this headline is uninformative because of the lack of actual content, which is related to an initial unrequited love of a woman in the workplace.",
"Figure 1(b) shows how ineffective such an uninformative headline is for a smartphone push notification, where users would have practically zero motivation to click on it and answer since they cannot imagine what kind of question is being asked.",
"This negative effect has been confirmed on a commercial CQA service (Section 2).",
"In this work, we take an extractive approach for improving uninformative headlines.",
"Although there have recently been many studies on abstractive headline generation (as described in Section 6), we do not follow any of these approaches because the content we deal with has no correct headlines and also the abstractive methods can yield erroneous output.",
"This latter issue is important in practical terms because correct output is critical for a commercial service.",
"If by some chance an erroneous headline, or one including politically incorrect phrases, is sent to all users, the service's credibility can be lost instantly.",
"Therefore, we formalize our task as an extraction problem of a fixed-length substring from a question as its headline.",
"In this setting, we can assume that outputs never include such errors caused by the service, as the outputs are substrings of user-posted questions.",
"Note that they have no coherence errors in selecting multiple sentences as in normal extractive summarization tasks.",
"While it is true that these outputs might contain inappropriate expressions authored by users, this type of error is beyond our scope since it is a different inevitable problem.",
"Furthermore, the situation of \"the service generated an inappropriate expression by itself\" is significantly worse than the situation of \"a user posted an inappropriate question to the service, and the service displayed it\".",
"Therefore, it is difficult to directly use abstractive methods for commercial services from a business standpoint.",
"Our approach involves preparing headline candidates and ranking them.",
"The formal description is as follows.",
"Let q be a target question to be translated into a headline.",
"We first prepare a set S(q) of headline candidates from the question q.",
"Note that the set S(q) is restricted to a set of fixed-length substrings given a length n, i.e., S(q) ⊆ {x | x ⪯ q, |x| = n}, where x ⪯ q means that x is a substring of q.",
"Then we extract the best headline that maximizes a score function f q (x), which represents the \"headline-ness\" score of a candidate x ∈ S(q) with respect to the target question q, as follows: argmax x∈S(q) f q (x).",
"(1) To ensure simplicity of implementation and understandability from users, we use a set of the fixed-length prefixes of all sentences in a question q as the candidate set S(q) (Section 3).",
"Because the problem is to select the best candidate from among several targets, the score function f q (x) is naturally trained by learning to rank (Section 4).",
"The main contributions of this paper are as follows.",
"• We report empirical evidence of the negative effect of uninformative headlines on a commercial CQA service (Section 2).",
"We additionally show that our task can reduce uninformative headlines by using simple dictionary matching, which dramatically improves the average answer rate of questions by as much as 2.4 times.",
"• We propose an extractive headline generation method based on learning to rank for CQA (Section 4) that extracts the most informative substring (prefix of mid-sentence) from each question as its headline.",
"To the best of our knowledge, our work is the first attempt to address such a task from a practical standpoint, although there have been many related studies (Section 6).",
"Experimental results show that our method outperforms several baselines, including the dictionary-based method (Section 5).",
"• We create a dataset for our headline generation task (Section 3), where headline candidates extracted from questions are ranked by crowdsourcing with respect to \"headline-ness\", that is, whether or not each headline candidate is appropriate for the headline of the corresponding question.",
"Negative Effect of Uninformative Headlines We conducted A/B testing on the push notifications of smartphones in collaboration with Yahoo!",
"Chiebukuro, as shown in Figure 1 (b).",
"We first prepared a dictionary of typical first sentences that cause uninformative headlines.",
"This dictionary was manually selected from frequent first sentences and consists of 913 sentences including greetings such as \" (Good morning)\", \" Changed uninformative Unchanged uninformative Informative 0.75% 0.31% 0.45% Table 1 : Average answer rates of three question groups in A/B testing.",
"(Good afternoon)\", and \" (Nice to meet you)\", and fixed phrases such as \" (Can I ask you something)\", \" (Please tell me)\", and \" (Thank you in advance)\".",
"We assumed that a question with a prefix match in the dictionary has an uninformative prefix headline and classified such questions into an uninformative group.",
"For convenience, we also classified the other questions into an informative group, although they might include not so informative headlines.",
"We further randomly divided the uninformative group into two equal groups: changed and unchanged.",
"In the changed uninformative group, each headline is extracted as the prefix of the first (informative) sentence that does not match with the dictionary, which is the same as DictDel explained in Section 5.2.",
"The unchanged group remains in uninformative.",
"For comparison of these groups, we used the average answer rate over notified questions in each group as an evaluation measure, defined as Average answer rate = No.",
"of questions answered from the notification No.",
"of notified questions .",
"(2) Note that we use a percentage expression (%) for easy reading.",
"Table 1 shows the evaluation results of the A/B testing during a 1-month period (Feb. 2 -Mar.",
"4, 2018) , where about three million questions were sent to users.",
"Comparing the unchanged uninformative group with the informative group, we can see that the average answer rate of the uninformative questions, 0.31%, is actually lower than that of the informative questions, 0.45%.",
"Comparing the changed and unchanged uninformative groups, the average answer rate of the changed questions, 0.75%, is much higher than that of the unchanged questions, 0.31%.",
"This means that even a simple dictionary-based method can dramatically improve the quality of the uninformative headlines, i.e., by as much as 2.4 times.",
"We confirmed that the difference is statistically significant on a one-tailed Wilcoxon signed-rank test (p < 0.05).",
"Note that the average answer rate represents a conversion rate (or rate of target actions), which is more important than a click-through rate (or rate of initial actions).",
"The average answer rate is one of the most important indicators for a CQA service, while the click-through rate can be meaninglessly high if the service sends headlines that are fake or too catchy.",
"We should point out that low answer rates are sufficient for the service, since it has 44M users: i.e., each question has an average of 2.4 answers, as the service currently has 189M questions and 462M answers.",
"Dataset Creation We created a dataset for our headline generation task based on the Yahoo!",
"Chiebukuro dataset 2 , which is a dataset including questions and answers provided from a Japanese CQA service, Yahoo!",
"Chiebukuro.",
"We first prepared headline candidates from this dataset as in Section 3.1 and then conducted a crowdsourcing task specified in Section 3.2.",
"In Section 3.3, we report the results of the crowdsourcing task.",
"Preparation of Headline Candidates We extracted only questions from the Chiebukuro dataset and split each question into sentences by using punctuation marks (i.e., the exclamation (\" \"), question (\" \"), and full stop (\" \") marks).",
"We regarded 20 Japanese characters that are basically extracted from each sentence as a headline candidate x ∈ S(q) in Eq.",
"(1), since this setting is used for push notifications in the actual service in Figure 1 (b).",
"More specifically, the headline candidate is created as follows: 1.",
"If the sentence is the first one in the question, we extract the first 19 characters and put an ellipsis mark (\" \") at the end.",
"(a) Example of our crowdsourcing task.",
"Posted Question: Nice to meet you, I am a man in my 30s.",
"Please give me your advice on a pressing concern I have.",
"A dog kept in the next house barks from morning to night.",
"Neighbors have given the owner cautions against it, but there is no improvement.",
"This area has only private houses, not rented houses, so I cannot move out.",
"However, I will go crazy if I have to keep enduring this.",
"How can I effectively manage this problem?",
"...",
"Please give me your advice on a pressing concern ... (b) English translation of left example.",
"Figure 2 : Examples of (a) our crowdsourcing task and (b) its English translation.",
"In the case where the length of a candidate is less than 20 characters, we include some of the next sentence in order to maximize use of display space.",
"We included questions with more than five sentences for the purpose of efficiently collecting ranking information.",
"All told, we prepared 10,000 questions containing more than five headline candidates each.",
"Figure 2 shows an example of our crowdsourcing task (a) and its English translation (b).",
"This task involves a posted question and headline candidates corresponding to the question.",
"We asked workers to select the best candidate from options after reading the posted question.",
"A relative evaluation, where workers select the best candidate, was used instead of an absolute evaluation, where workers select a score from 0 to 10 for each candidate, because we wanted to obtain as accurate a headline as possible, and it might be difficult for workers to select an appropriate absolute score.",
"The workers were instructed as follows (English translation): Crowdsourcing Task Various candidate headlines are listed as options for a posted question in a Q&A service.",
"After reading the question, please select the best option from the list so that users can guess the content of the question and distinguish it from other ones.",
"Please remove uninformative ones such as greetings, self-introductions, and unspecific expressions.",
"We explain how to judge the appropriateness of each candidate by means of the example in Figure 2 .",
"After examining the posted question, we can assume that the most important content is \"he is annoyed by the barking of a dog kept in the next house\".",
"On the basis of this assumption, option 6 is the best one, since the headline \"A dog kept in the next house barks from morning\" is enough to help answerers deduce that \"the questioner is annoyed by the sound\".",
"Option 1 is inappropriate because although the answerers might be able to guess that \"the questioner cannot move out\", this matter is not the central one.",
"Option 2 is uninformative because it consists merely of greetings and self-introduction, and option 3, while a question sentence, is unspecific.",
"Option 4 enables answerers to guess that this question is related to an issue involving pets, but they cannot grasp the specific content.",
"Option 5 specifies a likely damage due to the trouble, but the reason (trouble) is more important for the answering than the result (damage).",
"Option 7 directly shows that \"the questioner is annoyed and wants some advice\", but answerers cannot understand why he/she is annoyed.",
"The detailed implementation of our task is as follows.",
"First we randomly sorted the candidates of each question (shown in Figure 2 (a)) to avoid position bias by the workers.",
"We included ten actual questions and a dummy question so that workers would always have to answer one dummy question per every ten actual questions.",
"A dummy question is a question with a clear answer inserted to eliminate fraud workers (i.e., workers who randomly select answers without actually reading them).",
"Each question was answered by ten workers so each headline candidate had a vote score from 0 to 10 representing whether or not the candidate was appropriate for a headline.",
"This task took nine days and was answered by 1,558 workers.",
"As a result, our dataset consists of 10,000 questions, each of which has more than five headline candidates with accompanying \"headline-ness\" scores.",
"Analysis of Crowdsourcing Results We analyzed our dataset to determine how much room for improvement our task has compared to the prefix headline.",
"It is well known that the first sentence can be a strong baseline in many summarization tasks, so the prefix headline is also expected to perform well.",
"Figure 3 shows the statistical information based on sentence position that includes the (a) ratio of the true best (most voted) candidates, (b) average votes, and (c) average rank (in order of votes) over the candidates at each sentence position.",
"Looking at the true best candidates (a), the ratio 61.8% for the 1st sentence clarifies the effectiveness of the prefix headline, as expected.",
"Conversely, we still have room for improvement for the prefix headline up to 38.2%.",
"Our goal is to improve the uninformative headlines of 38.2% while keeping the remaining 61.8% unchanged.",
"The other two figures (b) and (c) also support the above discussion.",
"Furthermore, we qualitatively checked the crowdsourcing quality.",
"Workers successfully eliminated uninformative candidates including greetings and self-introductions, while one or two workers sometimes chose ones that included a fixed phrase such as \"Please tell me ...\".",
"This is probably because workers had different criteria regarding the \"unspecific expressions\" described in the instructions.",
"Since we cannot enumerate all of the concrete bad examples, we ignore this phenomenon with the expectation that a learning algorithm will reduce its negative effect.",
"Proposed Method In this section, we explain how to construct a headline generation model from the dataset presented in Section 3.",
"We took a ranking approach, i.e., learning to rank, for our task, rather than a simple regression one, since estimating absolute scores is not required for our purpose.",
"Even if two headline candidates (of different questions) have the same expression, their votes can be significantly different since the votes in our dataset are based on relative evaluation.",
"For example, the best candidate for Figure 2 was No.",
"6, but it might not be selected in other questions such as \"A dog kept in the next house barks from morning.",
"Does anybody know why dogs generally want to bark?\".",
"Learning to rank is an application of machine learning that is typically used for ranking models in information retrieval systems.",
"The ranking models are basically learned from a supervised dataset consisting of triples (q, x, y), where q is a user's query, x is a document corresponding to q, and y is a relevance score of x with respect to q.",
"In this work, we formalize our task by regarding q, x, and y, as a posted question, a headline candidate, and a voted score in our dataset, respectively.",
"We used a pairwise ranking method that is also implemented as an instance of the well-known SVM tools LIBLINEAR and LIBSVM (Kuo et al., 2014) .",
"We used a linear model based on LIBLINEAR, an L2-regularized L2-loss linear rankSVM, for the experiments.",
"Let D be a dataset that consists of triples including a posted question q, a headline candidate x, and a voted score y, i.e., (q, x, y) ∈ D. In the pairwise ranking method, we train a ranking model as a binary classifier that determines whether the condition y i > y j is true or false for two candidates x i and x j in the same question (q i = q j ).",
"Specifically, we first define the index pairs of positive examples by P = {(i, j) | q i = q j , y i > y j , (x i , y i , q i ) ∈ D, (x j , y j , q j ) ∈ D}.",
"Note that we do not need to consider the negative examples N = {(j, i) | (i, j) ∈ P } since they yield the same formula as P in the optimization process.",
"The training of the pairwise ranking method is achieved by solving the following optimization problem using the set P of the index pairs: min w 1 2 w ⊤ w + C ∑ (i,j)∈P ℓ(w ⊤x i − w ⊤x j ), (3) where w is a weight vector to be learned,x i is a feature vector extracted from a headline candidate x, and C is the regularization parameter.",
"The function ℓ is a squared hinge loss, which is defined as ℓ(d) = max(0, 1 − d) 2 .",
"Finally, we define the score function in Eq.",
"(1) as f q (x) = w ⊤x , wherex can be created by using q as well as x.",
"This score means the relative \"headline-ness\" of x.",
"Experiments Basic Settings The basic settings of the experiments are as follows.",
"We split our dataset into training and test sets consisting of 9,000 and 1,000 examples, respectively.",
"We used an implementation 3 based on LIBLIN-EAR for training our ranking model, i.e., a linear L2-regularized L2-loss rankSVM model , as described in Section 4.",
"The regularization parameter was optimized by cross validation and set as C = 0.125.",
"The feature vector for a headline candidate consists of three kinds of features: bag-of-words, embedding, and position information.",
"The bag-of-words feature is a sparse vector of 30,820 dimensions based on the tf-idf scores of nouns, verbs, interjections, conjunctions, adverbs, and adjectives in a candidate, where we used a Japanese morphological analyzer, MeCab 4 (Kudo et al., 2004) , with a neologism dictionary, NEologd 5 (Toshinori Sato and Okumura, 2017).",
"The embedding feature is a dense vector of 100 dimensions based on a doc2vec model (Le and Mikolov, 2014) trained with all 3M sentences in the Chiebukuro dataset using the Gensim tool 6 .",
"The position feature is a binary vector of ten dimensions, where each dimension represents the coarse position (or coverage) of a headline candidate for a question.",
"Specifically, we equally split a question (character sequence) into ten parts and set one to each dimension if and only if the corresponding part overlaps a candidate.",
"For example, candidate No.",
"2 in Figure 2 had a position feature (1, 1, 0, · · · , 0), since the candidate covers the first 2/10 of the whole question.",
"Similarly, No.",
"6 and No.",
"3 had (0, 0, 1, 1, 0, · · · , 0) and (0, · · · , 0, 1), respectively.",
"For constructing the feature vector of each headline candidate, we used the previous and next candidates in sentence order, in addition to the target candidate.",
"This is based on the idea that near candidates might have useful information for the target candidate.",
"Finally, we prepared each feature vector by concatenating nine feature vectors, i.e., the above three kinds of features for three candidates, and normalizing them.",
"Compared Methods We compared our method, MLRank, with the baselines listed below.",
"Prefix, DictDel, and Random are simple baselines, while Prefix and DictDel are practically strong.",
"ImpTfidf, SimTfidf, SimEmb, and LexRank are unsupervised baselines, and SVM and SVR are supervised ones.",
"• Prefix: Selects the first candidate in sentence order.",
"• DictDel: Selects the first (informative) candidate that does not match in the dictionary of uninformative headlines (Section 2).",
"• Random: Randomly selects a candidate.",
"• ImpTfidf: Selects the most important candidate with the highest tf-idf value, where a tf-idf value is calculated by the sum of the elements in a bag-of-words feature (described in Section 5.1).",
"• SimTfidf: Selects the most similar candidate to the original question, which is calculated by the cosine similarity between the bag-of-words features (in Section 5.1) of each candidate and the question.",
"• SimEmb: An embedding-based variation of SimTfidf with embedding features (in Section 5.1).",
"• LexRank: Selects the candidate with the highest score based on LexRank 7 (Erkan and Radev, 2004) , which is a widely used unsupervised extractive summarization method based on the PageRank algorithm.",
"The graph expression of each question was constructed on the basis of cosine similarity of the tf-idf vectors corresponding to candidates.",
"• SVM: Selects the candidate with the highest confidence based on a model learned as a classification task, where candidates with nonzero votes were labeled as positive.",
"This setting was the best in our preliminary experiments.",
"We used the L2-regularized L2-loss support vector classification model (C = 0.0156) in LIBLINEAR.",
"The other settings were the same as those described in Section 5.1.",
"• SVR: Selects the candidate with the highest predicted votes based on a model learned as a regression task, where the target variable is the number of votes.",
"We used the L2-regularized L2-loss support vector regression model (C = 0.0625) in LIBLINEAR.",
"The other settings were the same as above.",
"• MLRank: Proposed method described in Section 4.",
"Evaluation Measures We defined three evaluation measures for evaluating each method on our headline generation task.",
"Change Rate from Prefix Headline We measured how much each method changed the default prefix headline in order to determine the effect of application to an actual CQA service.",
"We defined this measure as change rate from the prefix headline, as follows: Change rate = No.",
"of questions where the best candidate is not the prefix headline No.",
"of all questions .",
"Clearly, the change rate of the default method that selects the prefix headline is 0%.",
"If the value is small, the effect on the service will be small, but if the value is higher than the ideal change rate of 38.2% (Section 3), there can be side effects even if the average result is good.",
"A higher change rate up to the ideal rate is desirable from a practical standpoint.",
"Winning Rate against Prefix Headline We measured how much each method won against the prefix headline to directly assess the quality of changed headlines.",
"We defined this measure as winning rate against the prefix headline, as follows: Winning rate = No.",
"of questions where the best candidate got more votes than the prefix headline No.",
"of questions where the best candidate is not the prefix headline .",
"(5) We did not consider the first candidate (in sentence order) selected by each method, which is the same as the prefix headline, since they obviously have the same number of votes.",
"Average Votes We measured how appropriate the candidates selected by each method are in order to determine the overall performance.",
"We defined this measure as average votes, as follows: Average votes = Sum of votes for the best candidates for all questions No.",
"of questions .",
"Note that the average votes score is different from the average votes score for position (Figure 3(b) ) in that the former is the average over the selected candidates while the latter is the average over the candidates at a sentence position.",
"This measure is related to (normalized) discounted cumulative gain (DCG), which is widely used as an evaluation measure of ranking models.",
"We often use DCG@k for evaluating top-k rankings, and the above definition acutally corresponds to DCG@1.",
"According to a well-known paper (Järvelin and Kekäläinen, 2002) in the information retrieval field, DCG is appropriate for graded-relevance judgments like our task, while precision (described below) is appropriate for binary-relevance ones.",
"Average votes is expected to be more appropriate than precision for our task because we want \"averagely better headlines than default ones\" rather than \"best ones\" from a practical standpoint.",
"Precision Precision is a widely used evaluation measure for classification tasks, and we added it to support an evaluation based on average votes.",
"We defined it with respect to the best candidate, i.e., precision@1, as follows: Precision = No.",
"of questions where the best candidate had the maximum votes No.",
"of questions .",
"(7) Table 2 shows examples of headlines generated by the prefix method, Prefix, and the proposed method, MLRank.",
"Looking at the first example, we can see that our method successfully eliminated the selfintroduction phrase (\"I am a 27-year-old woman\").",
"The headline (right) of our method allows answerers to know that the questioner is discouraged about how to encounter men from the phrase \"little chance of new encounters with men\", while the headline (left) of the prefix method lacks this important clue.",
"The second and third examples show similar effects to the first example.",
"In the second one, although there are few clues about what the question is with the prefix method, our method correctly included the important clue (\"honeymoon\").",
"In the third one, our method appropriately eliminated the uninformative long phrase (\"I am sorry if the category is wrong\"), which is not a frequent fixed phrase.",
"The fourth example shows a slightly challenging case, where both headlines make it difficult to understand the question.",
"However, the headline of our method included the term \"winning bidder\", so at least the answerer can assume that the question is about some sort of auction trouble.",
"The fifth example is a clearly successful result, where our method extracted the main question point about \"welfare pension\" as a headline.",
"These results qualitatively demonstrate the effectiveness of our method.",
"Results Qualitative Analysis Quantitative Analysis We compared our method MLRank with the baselines in Section 5.2 on the headline generation task for our dataset in Section 3.",
"Table 3 shows the evaluation results based on the change rates, winning rates, average votes, and precision.",
"Looking at the average votes and precision, which represent the overall performances, our method MLRank clearly performed the best among all methods.",
"We confirmed that Table 3 : Evaluation results of our headline generation task for proposed method MLRank and baselines.",
"the relative improvement of the average votes of our method MLRank against every baseline including the prefix method Prefix is statistically significant on the basis of a one-tailed Wilcoxon signed-rank test (p < 0.01).",
"The change and winning rates of our method are 9.9% and 94.9%, respectively.",
"This means that our method detected 9.9% of the uninformative headlines and improved them with the high accuracy of 94.9%.",
"In other words, our method could successfully improve the overall performance while simultaneously avoiding any negative side effects.",
"The ideal results (Ref) based on correct labels suggest that our method still has room for improvement, especially for the change rate.",
"The results of the other baselines are as follows.",
"Not surprisingly, the prefix method Prefix performed well.",
"This is consistent with the fact that the first sentence can be a good summary in many summarization tasks.",
"The random method Random performed the worst, also as expected.",
"The dictionarybased deletion method DictDel was relatively useful, although the change rate was small.",
"The reason the winning rate of DictDel is relatively low compared with MLRank is that there are some cases where a combination of uninformative expressions can yield likely clues.",
"For example, the self-introduction \"I am a newbie of this forum\" itself is basically uninformative for a question, but a combination with additional information such as \"I am a newbie of this forum.",
"Where can I change the password ...\" can be more informative than only the additional information \"... Where can I change the password since I forgot it after ...\" because the combination specifies the target site by the expression \"this forum\".",
"The unsupervised methods, SimTfidf, SimTfidf, SimEmb, and LexRank, which are widely used for summarization tasks, performed worse than the prefix method Prefix.",
"Although the change rates are higher than our method, the winning rates are lower.",
"In other words, they yielded many bad headlines.",
"These results suggest that supervised learning specialized to the target task would be required.",
"Comparing the important sentence extraction method ImpTfidf and the similarity-based summarization method SimTfidf, we found that SimTfidf performed better.",
"This implies that the content information of each question is useful for our headline generation task, as is the case with other summarization tasks.",
"The similarity-based method SimEmb with embeddings performed worse than our expectation.",
"The reason seems to be that it was difficult to obtain meaningful document embeddings from long questions.",
"The graph-based method LexRank had a similar performance to SimTfidf, because LexRank tends to select a candidate similar to the question when only one candidate was selected.",
"The supervised methods, SVM and SVR, performed relatively well compared to the unsupervised methods, but they did not outperform the strong simple baselines, Prefix and DictDel, nor our method MLRank.",
"These results support the appropriateness of our approach.",
"Related Work In this section, we briefly explain several related studies from two aspects: headline generation task and CQA data.",
"As discussed below, our work is the first attempt to address an extractive headline generation task for a CQA service based on learning to rank the substrings of a question.",
"After Rush et al.",
"(2015) proposed a neural headline generation model, there have been many studies on the same headline generation task (Takase et al., 2016; Chopra et al., 2016; Kiyono et al., 2017; Ayana et al., 2017; Raffel et al., 2017) .",
"However, all of them are abstractive methods that can yield erroneous output, and the training for them requires a lot of paired data, i.e., news articles and headlines.",
"There have also been several classical studies based on nonneural approaches to headline generation (Woodsend et al., 2010; Alfonseca et al., 2013; Colmenares et al., 2015) , but they basically addressed sentence compression after extracting important linguistic units such as phrases.",
"In other words, their methods can still yield erroneous output, although they would be more controllable than neural models.",
"One exception is the work of Alotaiby (2011) , where fixed-sized substrings were considered for headline generation.",
"Although that approach is similar to ours, Alotaiby only considered an unsupervised method based on similarity to the original text (almost the same as SimTfidf in Section 5.2), in contrast to our proposal based on learning to rank.",
"This implies that Alotaiby's method will also not perform well for our task, as shown in Section 5.4.",
"There have been several studies on extractive summarization (Kobayashi et al., 2015; Yogatama et al., 2015) based on sentence embeddings, but they were basically developed for extracting multiple sentences, which means that these methods are almost the same as SimEmb in Section 5.2 for our purpose, i.e., extraction of the best candidate.",
"This also implies that they will not be suitable for our task.",
"Furthermore, recent sophisticated neural models for extractive summarization (Cheng and Lapata, 2016; Nallapati et al., 2017) basically require large-scale paired data (e.g., article-headline) to automatically label candidates, as manual annotation is very costly.",
"However, such paired data do not always exist for real applications, as in our task described in Section 1.",
"There have been many studies using CQA data, but most of them are different from our task, i.e., dealing with answering questions (Surdeanu et al., 2008; Celikyilmaz et al., 2009; Bhaskar, 2013; Nakov et al., 2017) , retrieving similar questions (Lei et al., 2016; Romeo et al., 2016; Nakov et al., 2017) , and generating questions (Heilman and Smith, 2010) .",
"Tamura et al.",
"(2005) focused on extracting a core sentence and identifying the question type as classification tasks for answering multiple-sentence questions.",
"Although their method is useful to retrieve important information, we cannot directly use it since our task requires shorter expressions for headlines than sentences.",
"In addition, they used a support vector machine as a classifier, which is almost the same as SVM in Section 5.2, and it is not expected to be suitable for our task, as shown in Section 5.4.",
"The work of Ishigaki et al.",
"(2017) is the most related one in that they summarized lengthy questions by using both abstractive and extractive approaches.",
"Their work is promising because our task is regarded as the construction of short summaries, but the training of their models requires a lot of paired data consisting of questions and their headlines, which means that their method cannot be used to our task.",
"Conclusion We proposed an extractive headline generation method based on learning to rank for CQA that extracts the most informative substring in each question as its headline.",
"We created a dataset for our task, where headline candidates in each question are ranked using crowdsourcing.",
"Our method outperformed several baselines, including a prefix-based method, which is widely used for cases where the display area is limited, such as the push notifications on smartphones.",
"The dataset created for our headline generation task will be made publicly available 8 .",
"Although our task is basically designed for extractive summarization, this dataset can also be used for abstractive summarization as a side information for training abstractive models.",
"In future work, we will investigate how effectively our method can perform in practical situations, e.g., push notifications.",
"In addition, we will consider how to improve the change rate of our method while keeping its winning rate and how to create a useful dataset even if removing the length limitation."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"5.1",
"5.2",
"5.3",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Negative Effect of Uninformative Headlines",
"Dataset Creation",
"Preparation of Headline Candidates",
"Crowdsourcing Task",
"Analysis of Crowdsourcing Results",
"Proposed Method",
"Basic Settings",
"Compared Methods",
"Evaluation Measures",
"Related Work",
"Conclusion"
]
}
|
GEM-SciDuet-train-50#paper-1080#slide-1
|
Push Notification in CQA
|
Push Notification obtain quick answers Push Notification of Question
Directly linked to the quality of CQA
Contents of Posted Question
Notification Headline of Posted Question
Yahoo! Chiebukuro Respondent Copyright (C) 2019 Yahoo Japan Corporation. All Rights Reserved.
|
Push Notification obtain quick answers Push Notification of Question
Directly linked to the quality of CQA
Contents of Posted Question
Notification Headline of Posted Question
Yahoo! Chiebukuro Respondent Copyright (C) 2019 Yahoo Japan Corporation. All Rights Reserved.
|
[] |
GEM-SciDuet-train-50#paper-1080#slide-2
|
1080
|
Extractive Headline Generation Based on Learning to Rank for Community Question Answering
|
User-generated content such as the questions on community question answering (CQA) forums does not always come with appropriate headlines, in contrast to the news articles used in various headline generation tasks. In such cases, we cannot use paired supervised data, e.g., pairs of articles and headlines, to learn a headline generation model. To overcome this problem, we propose an extractive headline generation method based on learning to rank for CQA that extracts the most informative substring from each question as its headline. Experimental results show that our method outperforms several baselines, including a prefix-based method, which is widely used in real services.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260,
261,
262,
263
],
"paper_content_text": [
"Introduction Community question answering (CQA) is a service where users can post their questions and answer the questions from other users.",
"The quality of a CQA service is inevitably linked to how many questions are answered in a short amount of time.",
"To this end, the headline of a posted question plays a key role, as it is the first thing users see in a question list or push notification on a smartphone.",
"However, questions in a CQA service do not always have appropriate headlines because the questions are written by various users who basically do not have any specialized knowledge in terms of writing such content, in contrast to news articles written by professional editors.",
"In fact, the biggest CQA service in Japan, Yahoo!",
"Chiebukuro 1 , does not even provide an input field for headlines in the submission form of questions, as general users do not have enough patience and tend not to post questions if even just one required field is added to the form.",
"This service alternatively uses the prefix of a question as its headline as in Figure 1(a) , where the headline \" (Nice to meet you.",
"Thank you in advance.",
"At work ...)\" is created from the prefix of the content.",
"Obviously, this headline is uninformative because of the lack of actual content, which is related to an initial unrequited love of a woman in the workplace.",
"Figure 1(b) shows how ineffective such an uninformative headline is for a smartphone push notification, where users would have practically zero motivation to click on it and answer since they cannot imagine what kind of question is being asked.",
"This negative effect has been confirmed on a commercial CQA service (Section 2).",
"In this work, we take an extractive approach for improving uninformative headlines.",
"Although there have recently been many studies on abstractive headline generation (as described in Section 6), we do not follow any of these approaches because the content we deal with has no correct headlines and also the abstractive methods can yield erroneous output.",
"This latter issue is important in practical terms because correct output is critical for a commercial service.",
"If by some chance an erroneous headline, or one including politically incorrect phrases, is sent to all users, the service's credibility can be lost instantly.",
"Therefore, we formalize our task as an extraction problem of a fixed-length substring from a question as its headline.",
"In this setting, we can assume that outputs never include such errors caused by the service, as the outputs are substrings of user-posted questions.",
"Note that they have no coherence errors in selecting multiple sentences as in normal extractive summarization tasks.",
"While it is true that these outputs might contain inappropriate expressions authored by users, this type of error is beyond our scope since it is a different inevitable problem.",
"Furthermore, the situation of \"the service generated an inappropriate expression by itself\" is significantly worse than the situation of \"a user posted an inappropriate question to the service, and the service displayed it\".",
"Therefore, it is difficult to directly use abstractive methods for commercial services from a business standpoint.",
"Our approach involves preparing headline candidates and ranking them.",
"The formal description is as follows.",
"Let q be a target question to be translated into a headline.",
"We first prepare a set S(q) of headline candidates from the question q.",
"Note that the set S(q) is restricted to a set of fixed-length substrings given a length n, i.e., S(q) ⊆ {x | x ⪯ q, |x| = n}, where x ⪯ q means that x is a substring of q.",
"Then we extract the best headline that maximizes a score function f_q(x), which represents the \"headline-ness\" score of a candidate x ∈ S(q) with respect to the target question q, as follows: argmax_{x ∈ S(q)} f_q(x). (1)",
"To ensure simplicity of implementation and understandability from users, we use a set of the fixed-length prefixes of all sentences in a question q as the candidate set S(q) (Section 3).",
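The candidate extraction and the Eq. (1) selection described above can be sketched in Python. This is a simplified illustration: the exact sentence-splitting rules, the padding of short candidates, and the scorer passed as `score` are assumptions, not the paper's implementation.

```python
import re

def headline_candidates(question, n=20):
    """Candidates = fixed-length prefixes starting at each sentence boundary,
    with a trailing ellipsis (simplified from Section 3.1)."""
    starts = [0]
    for m in re.finditer(r'[。!?]', question):
        if m.end() < len(question):
            starts.append(m.end())
    return [question[s:s + n - 1] + '…' for s in dict.fromkeys(starts)]

def best_headline(question, score):
    """Eq. (1): argmax over candidates of the headline-ness score f_q(x)."""
    return max(headline_candidates(question), key=score)
```

In the paper the score function is the learned rankSVM model; any callable returning a comparable value works with this sketch.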
"Because the problem is to select the best candidate from among several targets, the score function f q (x) is naturally trained by learning to rank (Section 4).",
"The main contributions of this paper are as follows.",
"• We report empirical evidence of the negative effect of uninformative headlines on a commercial CQA service (Section 2).",
"We additionally show that our task can reduce uninformative headlines by using simple dictionary matching, which dramatically improves the average answer rate of questions by as much as 2.4 times.",
"• We propose an extractive headline generation method based on learning to rank for CQA (Section 4) that extracts the most informative substring (prefix of mid-sentence) from each question as its headline.",
"To the best of our knowledge, our work is the first attempt to address such a task from a practical standpoint, although there have been many related studies (Section 6).",
"Experimental results show that our method outperforms several baselines, including the dictionary-based method (Section 5).",
"• We create a dataset for our headline generation task (Section 3), where headline candidates extracted from questions are ranked by crowdsourcing with respect to \"headline-ness\", that is, whether or not each headline candidate is appropriate for the headline of the corresponding question.",
"Negative Effect of Uninformative Headlines We conducted A/B testing on the push notifications of smartphones in collaboration with Yahoo!",
"Chiebukuro, as shown in Figure 1 (b).",
"We first prepared a dictionary of typical first sentences that cause uninformative headlines.",
"This dictionary was manually selected from frequent first sentences and consists of 913 sentences including greetings such as \" (Good morning)\", \" (Good afternoon)\", and \" (Nice to meet you)\", and fixed phrases such as \" (Can I ask you something)\", \" (Please tell me)\", and \" (Thank you in advance)\".",
"Table 1: Average answer rates of three question groups in A/B testing (Changed uninformative: 0.75%; Unchanged uninformative: 0.31%; Informative: 0.45%).",
"We assumed that a question with a prefix match in the dictionary has an uninformative prefix headline and classified such questions into an uninformative group.",
"For convenience, we also classified the other questions into an informative group, although they might include not so informative headlines.",
"We further randomly divided the uninformative group into two equal groups: changed and unchanged.",
"In the changed uninformative group, each headline is extracted as the prefix of the first (informative) sentence that does not match with the dictionary, which is the same as DictDel explained in Section 5.2.",
"The unchanged group remains in uninformative.",
"For comparison of these groups, we used the average answer rate over notified questions in each group as an evaluation measure, defined as: Average answer rate = (No. of questions answered from the notification) / (No. of notified questions). (2)",
"Note that we use a percentage expression (%) for easy reading.",
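Eq. (2) is a simple ratio; a minimal helper following the text's percentage convention (the function name is ours):

```python
def average_answer_rate(n_answered_from_notification, n_notified):
    """Eq. (2): answered-from-notification over notified questions, in percent."""
    return 100.0 * n_answered_from_notification / n_notified
```

The reported 2.4x uplift is then simply the ratio of the changed group's rate (0.75%) to the unchanged group's rate (0.31%).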
"Table 1 shows the evaluation results of the A/B testing during a 1-month period (Feb. 2 -Mar.",
"4, 2018) , where about three million questions were sent to users.",
"Comparing the unchanged uninformative group with the informative group, we can see that the average answer rate of the uninformative questions, 0.31%, is actually lower than that of the informative questions, 0.45%.",
"Comparing the changed and unchanged uninformative groups, the average answer rate of the changed questions, 0.75%, is much higher than that of the unchanged questions, 0.31%.",
"This means that even a simple dictionary-based method can dramatically improve the quality of the uninformative headlines, i.e., by as much as 2.4 times.",
"We confirmed that the difference is statistically significant on a one-tailed Wilcoxon signed-rank test (p < 0.05).",
"Note that the average answer rate represents a conversion rate (or rate of target actions), which is more important than a click-through rate (or rate of initial actions).",
"The average answer rate is one of the most important indicators for a CQA service, while the click-through rate can be meaninglessly high if the service sends headlines that are fake or too catchy.",
"We should point out that low answer rates are sufficient for the service, since it has 44M users: i.e., each question has an average of 2.4 answers, as the service currently has 189M questions and 462M answers.",
"Dataset Creation We created a dataset for our headline generation task based on the Yahoo!",
"Chiebukuro dataset 2 , which is a dataset including questions and answers provided from a Japanese CQA service, Yahoo!",
"Chiebukuro.",
"We first prepared headline candidates from this dataset as in Section 3.1 and then conducted a crowdsourcing task specified in Section 3.2.",
"In Section 3.3, we report the results of the crowdsourcing task.",
"Preparation of Headline Candidates We extracted only questions from the Chiebukuro dataset and split each question into sentences by using punctuation marks (i.e., the exclamation (\" \"), question (\" \"), and full stop (\" \") marks).",
"We regarded 20 Japanese characters that are basically extracted from each sentence as a headline candidate x ∈ S(q) in Eq. (1), since this setting is used for push notifications in the actual service in Figure 1 (b).",
"More specifically, the headline candidate is created as follows: 1. If the sentence is the first one in the question, we extract the first 19 characters and put an ellipsis mark (\"…\") at the end.",
"(a) Example of our crowdsourcing task.",
"Posted Question: Nice to meet you, I am a man in my 30s.",
"Please give me your advice on a pressing concern I have.",
"A dog kept in the next house barks from morning to night.",
"Neighbors have given the owner cautions against it, but there is no improvement.",
"This area has only private houses, not rented houses, so I cannot move out.",
"However, I will go crazy if I have to keep enduring this.",
"How can I effectively manage this problem?",
"...",
"Please give me your advice on a pressing concern ... (b) English translation of left example.",
"Figure 2 : Examples of (a) our crowdsourcing task and (b) its English translation.",
"In the case where the length of a candidate is less than 20 characters, we include some of the next sentence in order to maximize use of display space.",
"We included questions with more than five sentences for the purpose of efficiently collecting ranking information.",
"All told, we prepared 10,000 questions containing more than five headline candidates each.",
"Figure 2 shows an example of our crowdsourcing task (a) and its English translation (b).",
"This task involves a posted question and headline candidates corresponding to the question.",
"We asked workers to select the best candidate from options after reading the posted question.",
"A relative evaluation, where workers select the best candidate, was used instead of an absolute evaluation, where workers select a score from 0 to 10 for each candidate, because we wanted to obtain as accurate a headline as possible, and it might be difficult for workers to select an appropriate absolute score.",
"The workers were instructed as follows (English translation): Various candidate headlines are listed as options for a posted question in a Q&A service.",
"After reading the question, please select the best option from the list so that users can guess the content of the question and distinguish it from other ones.",
"Please remove uninformative ones such as greetings, self-introductions, and unspecific expressions.",
"We explain how to judge the appropriateness of each candidate by means of the example in Figure 2 .",
"After examining the posted question, we can assume that the most important content is \"he is annoyed by the barking of a dog kept in the next house\".",
"On the basis of this assumption, option 6 is the best one, since the headline \"A dog kept in the next house barks from morning\" is enough to help answerers deduce that \"the questioner is annoyed by the sound\".",
"Option 1 is inappropriate because although the answerers might be able to guess that \"the questioner cannot move out\", this matter is not the central one.",
"Option 2 is uninformative because it consists merely of greetings and self-introduction, and option 3, while a question sentence, is unspecific.",
"Option 4 enables answerers to guess that this question is related to an issue involving pets, but they cannot grasp the specific content.",
"Option 5 specifies a likely damage due to the trouble, but the reason (trouble) is more important for the answering than the result (damage).",
"Option 7 directly shows that \"the questioner is annoyed and wants some advice\", but answerers cannot understand why he/she is annoyed.",
"The detailed implementation of our task is as follows.",
"First we randomly sorted the candidates of each question (shown in Figure 2 (a)) to avoid position bias by the workers.",
"We included ten actual questions and a dummy question so that workers would always have to answer one dummy question per every ten actual questions.",
"A dummy question is a question with a clear answer inserted to eliminate fraud workers (i.e., workers who randomly select answers without actually reading them).",
"Each question was answered by ten workers so each headline candidate had a vote score from 0 to 10 representing whether or not the candidate was appropriate for a headline.",
"This task took nine days and was answered by 1,558 workers.",
"As a result, our dataset consists of 10,000 questions, each of which has more than five headline candidates with accompanying \"headline-ness\" scores.",
"Analysis of Crowdsourcing Results We analyzed our dataset to determine how much room for improvement our task has compared to the prefix headline.",
"It is well known that the first sentence can be a strong baseline in many summarization tasks, so the prefix headline is also expected to perform well.",
"Figure 3 shows the statistical information based on sentence position that includes the (a) ratio of the true best (most voted) candidates, (b) average votes, and (c) average rank (in order of votes) over the candidates at each sentence position.",
"Looking at the true best candidates (a), the ratio 61.8% for the 1st sentence clarifies the effectiveness of the prefix headline, as expected.",
"Conversely, we still have room for improvement for the prefix headline up to 38.2%.",
"Our goal is to improve the uninformative headlines of 38.2% while keeping the remaining 61.8% unchanged.",
"The other two figures (b) and (c) also support the above discussion.",
"Furthermore, we qualitatively checked the crowdsourcing quality.",
"Workers successfully eliminated uninformative candidates including greetings and self-introductions, while one or two workers sometimes chose ones that included a fixed phrase such as \"Please tell me ...\".",
"This is probably because workers had different criteria regarding the \"unspecific expressions\" described in the instructions.",
"Since we cannot enumerate all of the concrete bad examples, we ignore this phenomenon with the expectation that a learning algorithm will reduce its negative effect.",
"Proposed Method In this section, we explain how to construct a headline generation model from the dataset presented in Section 3.",
"We took a ranking approach, i.e., learning to rank, for our task, rather than a simple regression one, since estimating absolute scores is not required for our purpose.",
"Even if two headline candidates (of different questions) have the same expression, their votes can be significantly different since the votes in our dataset are based on relative evaluation.",
"For example, the best candidate for Figure 2 was No. 6, but it might not be selected in other questions such as \"A dog kept in the next house barks from morning. Does anybody know why dogs generally want to bark?\".",
"Learning to rank is an application of machine learning that is typically used for ranking models in information retrieval systems.",
"The ranking models are basically learned from a supervised dataset consisting of triples (q, x, y), where q is a user's query, x is a document corresponding to q, and y is a relevance score of x with respect to q.",
"In this work, we formalize our task by regarding q, x, and y, as a posted question, a headline candidate, and a voted score in our dataset, respectively.",
"We used a pairwise ranking method that is also implemented as an instance of the well-known SVM tools LIBLINEAR and LIBSVM (Kuo et al., 2014) .",
"We used a linear model based on LIBLINEAR, an L2-regularized L2-loss linear rankSVM, for the experiments.",
"Let D be a dataset that consists of triples including a posted question q, a headline candidate x, and a voted score y, i.e., (q, x, y) ∈ D. In the pairwise ranking method, we train a ranking model as a binary classifier that determines whether the condition y_i > y_j is true or false for two candidates x_i and x_j in the same question (q_i = q_j).",
"Specifically, we first define the index pairs of positive examples by P = {(i, j) | q_i = q_j, y_i > y_j, (x_i, y_i, q_i) ∈ D, (x_j, y_j, q_j) ∈ D}.",
"Note that we do not need to consider the negative examples N = {(j, i) | (i, j) ∈ P} since they yield the same formula as P in the optimization process.",
"The training of the pairwise ranking method is achieved by solving the following optimization problem using the set P of the index pairs: min_w (1/2) w^⊤ w + C Σ_{(i,j) ∈ P} ℓ(w^⊤ x̃_i − w^⊤ x̃_j), (3) where w is a weight vector to be learned, x̃_i is a feature vector extracted from a headline candidate x_i, and C is the regularization parameter.",
"The function ℓ is a squared hinge loss, which is defined as ℓ(d) = max(0, 1 − d)^2.",
"Finally, we define the score function in Eq. (1) as f_q(x) = w^⊤ x̃, where x̃ can be created by using q as well as x.",
"This score means the relative \"headline-ness\" of x.",
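A minimal NumPy sketch of this pairwise objective. The paper trains with a LIBLINEAR-based rankSVM solver; the plain gradient-descent loop, learning rate, and epoch count below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def train_rank_svm(X, y, qid, C=0.125, lr=0.01, epochs=200):
    """Pairwise rankSVM with squared hinge loss, Eq. (3), via plain gradient
    descent (illustrative sketch)."""
    # Positive index pairs P: same question, candidate i voted above candidate j.
    pairs = [(i, j) for i in range(len(y)) for j in range(len(y))
             if qid[i] == qid[j] and y[i] > y[j]]
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        grad = w.copy()                      # gradient of (1/2) w^T w
        for i, j in pairs:
            d = w @ (X[i] - X[j])
            if d < 1:                        # squared hinge: l(d) = max(0, 1 - d)^2
                grad += C * 2.0 * (d - 1.0) * (X[i] - X[j])
        w -= lr * grad
    return w
```

After training, `X @ w` gives the relative "headline-ness" scores f_q(x) used in Eq. (1).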
"The basic settings of the experiments are as follows.",
"We split our dataset into training and test sets consisting of 9,000 and 1,000 examples, respectively.",
"We used an implementation 3 based on LIBLINEAR for training our ranking model, i.e., a linear L2-regularized L2-loss rankSVM model, as described in Section 4.",
"The regularization parameter was optimized by cross validation and set as C = 0.125.",
"The feature vector for a headline candidate consists of three kinds of features: bag-of-words, embedding, and position information.",
"The bag-of-words feature is a sparse vector of 30,820 dimensions based on the tf-idf scores of nouns, verbs, interjections, conjunctions, adverbs, and adjectives in a candidate, where we used a Japanese morphological analyzer, MeCab 4 (Kudo et al., 2004) , with a neologism dictionary, NEologd 5 (Toshinori Sato and Okumura, 2017).",
"The embedding feature is a dense vector of 100 dimensions based on a doc2vec model (Le and Mikolov, 2014) trained with all 3M sentences in the Chiebukuro dataset using the Gensim tool 6 .",
"The position feature is a binary vector of ten dimensions, where each dimension represents the coarse position (or coverage) of a headline candidate for a question.",
"Specifically, we equally split a question (character sequence) into ten parts and set one to each dimension if and only if the corresponding part overlaps a candidate.",
"For example, candidate No. 2 in Figure 2 had a position feature (1, 1, 0, · · · , 0), since the candidate covers the first 2/10 of the whole question.",
"Similarly, No. 6 and No. 3 had (0, 0, 1, 1, 0, · · · , 0) and (0, · · · , 0, 1), respectively.",
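The ten-dimensional position feature can be sketched as follows; the exact boundary handling at the tenth edges is an assumption:

```python
def position_feature(question_len, start, end, dims=10):
    """Binary vector: dimension d is 1 iff the candidate's character span
    [start, end) overlaps the d-th tenth of the question (Section 5.1 sketch)."""
    vec = [0] * dims
    for d in range(dims):
        lo = question_len * d / dims
        hi = question_len * (d + 1) / dims
        vec[d] = int(start < hi and end > lo)
    return vec
```

For a 100-character question, a candidate covering characters 0-20 sets the first two dimensions, matching the No. 2 example above.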
"For constructing the feature vector of each headline candidate, we used the previous and next candidates in sentence order, in addition to the target candidate.",
"This is based on the idea that near candidates might have useful information for the target candidate.",
"Finally, we prepared each feature vector by concatenating nine feature vectors, i.e., the above three kinds of features for three candidates, and normalizing them.",
"Compared Methods We compared our method, MLRank, with the baselines listed below.",
"Prefix, DictDel, and Random are simple baselines, while Prefix and DictDel are practically strong.",
"ImpTfidf, SimTfidf, SimEmb, and LexRank are unsupervised baselines, and SVM and SVR are supervised ones.",
"• Prefix: Selects the first candidate in sentence order.",
"• DictDel: Selects the first (informative) candidate that does not match in the dictionary of uninformative headlines (Section 2).",
"• Random: Randomly selects a candidate.",
"• ImpTfidf: Selects the most important candidate with the highest tf-idf value, where a tf-idf value is calculated by the sum of the elements in a bag-of-words feature (described in Section 5.1).",
"• SimTfidf: Selects the most similar candidate to the original question, which is calculated by the cosine similarity between the bag-of-words features (in Section 5.1) of each candidate and the question.",
"• SimEmb: An embedding-based variation of SimTfidf with embedding features (in Section 5.1).",
"• LexRank: Selects the candidate with the highest score based on LexRank 7 (Erkan and Radev, 2004) , which is a widely used unsupervised extractive summarization method based on the PageRank algorithm.",
"The graph expression of each question was constructed on the basis of cosine similarity of the tf-idf vectors corresponding to candidates.",
"• SVM: Selects the candidate with the highest confidence based on a model learned as a classification task, where candidates with nonzero votes were labeled as positive.",
"This setting was the best in our preliminary experiments.",
"We used the L2-regularized L2-loss support vector classification model (C = 0.0156) in LIBLINEAR.",
"The other settings were the same as those described in Section 5.1.",
"• SVR: Selects the candidate with the highest predicted votes based on a model learned as a regression task, where the target variable is the number of votes.",
"We used the L2-regularized L2-loss support vector regression model (C = 0.0625) in LIBLINEAR.",
"The other settings were the same as above.",
"• MLRank: Proposed method described in Section 4.",
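The similarity-based selection used by the SimTfidf baseline can be sketched as follows. Plain term counts stand in for the paper's tf-idf weights here, and `vectorize` is a hypothetical hook (the paper builds its vectors from MeCab-tokenized nouns, verbs, etc.):

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two sparse count vectors (dicts)."""
    num = sum(v * b[t] for t, v in a.items() if t in b)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def sim_tfidf_select(candidates, question, vectorize=lambda s: Counter(s.split())):
    """SimTfidf-style baseline: pick the candidate most similar to the whole question."""
    qv = vectorize(question)
    return max(candidates, key=lambda c: cosine(vectorize(c), qv))
```

SimEmb follows the same pattern with doc2vec vectors in place of the count vectors.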
"Evaluation Measures We defined three evaluation measures for evaluating each method on our headline generation task.",
"Change Rate from Prefix Headline We measured how much each method changed the default prefix headline in order to determine the effect of application to an actual CQA service.",
"We defined this measure as change rate from the prefix headline, as follows: Change rate = (No. of questions where the best candidate is not the prefix headline) / (No. of all questions). (4)",
"Clearly, the change rate of the default method that selects the prefix headline is 0%.",
"If the value is small, the effect on the service will be small, but if the value is higher than the ideal change rate of 38.2% (Section 3), there can be side effects even if the average result is good.",
"A higher change rate up to the ideal rate is desirable from a practical standpoint.",
"Winning Rate against Prefix Headline We measured how much each method won against the prefix headline to directly assess the quality of changed headlines.",
"We defined this measure as winning rate against the prefix headline, as follows: Winning rate = (No. of questions where the best candidate got more votes than the prefix headline) / (No. of questions where the best candidate is not the prefix headline). (5)",
"We did not consider the first candidate (in sentence order) selected by each method, which is the same as the prefix headline, since they obviously have the same number of votes.",
"Average Votes We measured how appropriate the candidates selected by each method are in order to determine the overall performance.",
"We defined this measure as average votes, as follows: Average votes = (Sum of votes for the best candidates for all questions) / (No. of questions). (6)",
"Note that the average votes score is different from the average votes score for position (Figure 3(b) ) in that the former is the average over the selected candidates while the latter is the average over the candidates at a sentence position.",
"This measure is related to (normalized) discounted cumulative gain (DCG), which is widely used as an evaluation measure of ranking models.",
"We often use DCG@k for evaluating top-k rankings, and the above definition actually corresponds to DCG@1.",
"According to a well-known paper (Järvelin and Kekäläinen, 2002) in the information retrieval field, DCG is appropriate for graded-relevance judgments like our task, while precision (described below) is appropriate for binary-relevance ones.",
"Average votes is expected to be more appropriate than precision for our task because we want \"averagely better headlines than default ones\" rather than \"best ones\" from a practical standpoint.",
"Precision Precision is a widely used evaluation measure for classification tasks, and we added it to support an evaluation based on average votes.",
"We defined it with respect to the best candidate, i.e., precision@1, as follows: Precision = (No. of questions where the best candidate had the maximum votes) / (No. of questions). (7)",
"Table 2 shows examples of headlines generated by the prefix method, Prefix, and the proposed method, MLRank.",
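The four measures of Section 5.3 can be computed together. This sketch assumes each question record carries the candidate vote counts with index 0 being the prefix headline, which is a simplification of the dataset format:

```python
def evaluate(questions):
    """Change rate, winning rate, average votes, and precision@1 (Section 5.3).
    Each question: {'votes': [...], 'selected': i}; index 0 = prefix headline."""
    n = len(questions)
    changed = [q for q in questions if q['selected'] != 0]
    change_rate = len(changed) / n
    winning_rate = (sum(q['votes'][q['selected']] > q['votes'][0] for q in changed)
                    / len(changed)) if changed else 0.0
    average_votes = sum(q['votes'][q['selected']] for q in questions) / n
    precision = sum(q['votes'][q['selected']] == max(q['votes']) for q in questions) / n
    return change_rate, winning_rate, average_votes, precision
```

Note that the winning rate is averaged only over changed questions, as in Eq. (5).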
"Looking at the first example, we can see that our method successfully eliminated the selfintroduction phrase (\"I am a 27-year-old woman\").",
"The headline (right) of our method allows answerers to know that the questioner is discouraged about how to encounter men from the phrase \"little chance of new encounters with men\", while the headline (left) of the prefix method lacks this important clue.",
"The second and third examples show similar effects to the first example.",
"In the second one, although there are few clues about what the question is with the prefix method, our method correctly included the important clue (\"honeymoon\").",
"In the third one, our method appropriately eliminated the uninformative long phrase (\"I am sorry if the category is wrong\"), which is not a frequent fixed phrase.",
"The fourth example shows a slightly challenging case, where both headlines make it difficult to understand the question.",
"However, the headline of our method included the term \"winning bidder\", so at least the answerer can assume that the question is about some sort of auction trouble.",
"The fifth example is a clearly successful result, where our method extracted the main question point about \"welfare pension\" as a headline.",
"These results qualitatively demonstrate the effectiveness of our method.",
"Quantitative Analysis: We compared our method MLRank with the baselines in Section 5.2 on the headline generation task for our dataset in Section 3.",
"Table 3 shows the evaluation results based on the change rates, winning rates, average votes, and precision.",
"Looking at the average votes and precision, which represent the overall performances, our method MLRank clearly performed the best among all methods.",
"Table 3: Evaluation results of our headline generation task for proposed method MLRank and baselines.",
"We confirmed that the relative improvement of the average votes of our method MLRank against every baseline including the prefix method Prefix is statistically significant on the basis of a one-tailed Wilcoxon signed-rank test (p < 0.01).",
"The change and winning rates of our method are 9.9% and 94.9%, respectively.",
"This means that our method detected 9.9% of the uninformative headlines and improved them with the high accuracy of 94.9%.",
"In other words, our method could successfully improve the overall performance while simultaneously avoiding any negative side effects.",
"The ideal results (Ref) based on correct labels suggest that our method still has room for improvement, especially for the change rate.",
"The results of the other baselines are as follows.",
"Not surprisingly, the prefix method Prefix performed well.",
"This is consistent with the fact that the first sentence can be a good summary in many summarization tasks.",
"The random method Random performed the worst, also as expected.",
"The dictionarybased deletion method DictDel was relatively useful, although the change rate was small.",
"The reason the winning rate of DictDel is relatively low compared with MLRank is that there are some cases where a combination of uninformative expressions can yield likely clues.",
"For example, the self-introduction \"I am a newbie of this forum\" itself is basically uninformative for a question, but a combination with additional information such as \"I am a newbie of this forum.",
"Where can I change the password ...\" can be more informative than only the additional information \"... Where can I change the password since I forgot it after ...\" because the combination specifies the target site by the expression \"this forum\".",
"The unsupervised methods, SimTfidf, SimTfidf, SimEmb, and LexRank, which are widely used for summarization tasks, performed worse than the prefix method Prefix.",
"Although the change rates are higher than our method, the winning rates are lower.",
"In other words, they yielded many bad headlines.",
"These results suggest that supervised learning specialized to the target task would be required.",
"Comparing the important sentence extraction method ImpTfidf and the similarity-based summarization method SimTfidf, we found that SimTfidf performed better.",
"This implies that the content information of each question is useful for our headline generation task, as is the case with other summarization tasks.",
"The similarity-based method SimEmb with embeddings performed worse than our expectation.",
"The reason seems to be that it was difficult to obtain meaningful document embeddings from long questions.",
"The graph-based method LexRank had a similar performance to SimTfidf, because LexRank tends to select a candidate similar to the question when only one candidate was selected.",
"The supervised methods, SVM and SVR, performed relatively well compared to the unsupervised methods, but they did not outperform the strong simple baselines, Prefix and DictDel, nor our method MLRank.",
"These results support the appropriateness of our approach.",
"Related Work In this section, we briefly explain several related studies from two aspects: headline generation task and CQA data.",
"As discussed below, our work is the first attempt to address an extractive headline generation task for a CQA service based on learning to rank the substrings of a question.",
"After Rush et al.",
"(2015) proposed a neural headline generation model, there have been many studies on the same headline generation task (Takase et al., 2016; Chopra et al., 2016; Kiyono et al., 2017; Ayana et al., 2017; Raffel et al., 2017) .",
"However, all of them are abstractive methods that can yield erroneous output, and the training for them requires a lot of paired data, i.e., news articles and headlines.",
"There have also been several classical studies based on nonneural approaches to headline generation (Woodsend et al., 2010; Alfonseca et al., 2013; Colmenares et al., 2015) , but they basically addressed sentence compression after extracting important linguistic units such as phrases.",
"In other words, their methods can still yield erroneous output, although they would be more controllable than neural models.",
"One exception is the work of Alotaiby (2011) , where fixed-sized substrings were considered for headline generation.",
"Although that approach is similar to ours, Alotaiby only considered an unsupervised method based on similarity to the original text (almost the same as SimTfidf in Section 5.2), in contrast to our proposal based on learning to rank.",
"This implies that Alotaiby's method will also not perform well for our task, as shown in Section 5.4.",
"There have been several studies on extractive summarization (Kobayashi et al., 2015; Yogatama et al., 2015) based on sentence embeddings, but they were basically developed for extracting multiple sentences, which means that these methods are almost the same as SimEmb in Section 5.2 for our purpose, i.e., extraction of the best candidate.",
"This also implies that they will not be suitable for our task.",
"Furthermore, recent sophisticated neural models for extractive summarization (Cheng and Lapata, 2016; Nallapati et al., 2017) basically require large-scale paired data (e.g., article-headline) to automatically label candidates, as manual annotation is very costly.",
"However, such paired data do not always exist for real applications, as in our task described in Section 1.",
"There have been many studies using CQA data, but most of them are different from our task, i.e., dealing with answering questions (Surdeanu et al., 2008; Celikyilmaz et al., 2009; Bhaskar, 2013; Nakov et al., 2017) , retrieving similar questions (Lei et al., 2016; Romeo et al., 2016; Nakov et al., 2017) , and generating questions (Heilman and Smith, 2010) .",
"Tamura et al.",
"(2005) focused on extracting a core sentence and identifying the question type as classification tasks for answering multiple-sentence questions.",
"Although their method is useful to retrieve important information, we cannot directly use it since our task requires shorter expressions for headlines than sentences.",
"In addition, they used a support vector machine as a classifier, which is almost the same as SVM in Section 5.2, and it is not expected to be suitable for our task, as shown in Section 5.4.",
"The work of Ishigaki et al.",
"(2017) is the most related one in that they summarized lengthy questions by using both abstractive and extractive approaches.",
"Their work is promising because our task is regarded as the construction of short summaries, but the training of their models requires a lot of paired data consisting of questions and their headlines, which means that their method cannot be used to our task.",
"Conclusion We proposed an extractive headline generation method based on learning to rank for CQA that extracts the most informative substring in each question as its headline.",
"We created a dataset for our task, where headline candidates in each question are ranked using crowdsourcing.",
"Our method outperformed several baselines, including a prefix-based method, which is widely used for cases where the display area is limited, such as the push notifications on smartphones.",
"The dataset created for our headline generation task will be made publicly available 8 .",
"Although our task is basically designed for extractive summarization, this dataset can also be used for abstractive summarization as a side information for training abstractive models.",
"In future work, we will investigate how effectively our method can perform in practical situations, e.g., push notifications.",
"In addition, we will consider how to improve the change rate of our method while keeping its winning rate and how to create a useful dataset even if removing the length limitation."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"5.1",
"5.2",
"5.3",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Negative Effect of Uninformative Headlines",
"Dataset Creation",
"Preparation of Headline Candidates",
"Crowdsourcing Task",
"Analysis of Crowdsourcing Results",
"Proposed Method",
"Basic Settings",
"Compared Methods",
"Evaluation Measures",
"Related Work",
"Conclusion"
]
}
|
GEM-SciDuet-train-50#paper-1080#slide-2
|
Snippet Extraction Makes Headline Informative
|
Extract a mid-substring as a snippet
Nice to meet you, thank you in advance. I'm a man in my thirties.
...A dog kept in the next house barks from morning...
to night. Neighbors have given the owner cautions against it,
Copyright (C) 2019 Yahoo Japan Corporation. All Rights Reserved.
|
Extract a mid-substring as a snippet
Nice to meet you, thank you in advance. I'm a man in my thirties.
...A dog kept in the next house barks from morning...
to night. Neighbors have given the owner cautions against it,
|
[] |
GEM-SciDuet-train-50#paper-1080#slide-3
|
1080
|
Extractive Headline Generation Based on Learning to Rank for Community Question Answering
|
User-generated content such as the questions on community question answering (CQA) forums does not always come with appropriate headlines, in contrast to the news articles used in various headline generation tasks. In such cases, we cannot use paired supervised data, e.g., pairs of articles and headlines, to learn a headline generation model. To overcome this problem, we propose an extractive headline generation method based on learning to rank for CQA that extracts the most informative substring from each question as its headline. Experimental results show that our method outperforms several baselines, including a prefix-based method, which is widely used in real services.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260,
261,
262,
263
],
"paper_content_text": [
"Introduction Community question answering (CQA) is a service where users can post their questions and answer the questions from other users.",
"The quality of a CQA service is inevitably linked to how many questions are answered in a short amount of time.",
"To this end, the headline of a posted question plays a key role, as it is the first thing users see in a question list or push notification on a smartphone.",
"However, questions in a CQA service do not always have appropriate headlines because the questions are written by various users who basically do not have any specialized knowledge in terms of writing such content, in contrast to news articles written by professional editors.",
"In fact, the biggest CQA service in Japan, Yahoo!",
"Chiebukuro 1 , does not even provide an input field for headlines in the submission form of questions, as general users do not have enough patience and tend not to post questions if even just one required field is added to the form.",
"This service alternatively uses the prefix of a question as its headline as in Figure 1(a) , where the headline \" (Nice to meet you.",
"Thank you in advance.",
"At work ...)\" is created from the prefix of the content.",
"Obviously, this headline is uninformative because of the lack of actual content, which is related to an initial unrequited love of a woman in the workplace.",
"Figure 1(b) shows how ineffective such an uninformative headline is for a smartphone push notification, where users would have practically zero motivation to click on it and answer since they cannot imagine what kind of question is being asked.",
"This negative effect has been confirmed on a commercial CQA service (Section 2).",
"In this work, we take an extractive approach for improving uninformative headlines.",
"Although there have recently been many studies on abstractive headline generation (as described in Section 6), we do not follow any of these approaches because the content we deal with has no correct headlines and also the abstractive methods can yield erroneous output.",
"This latter issue is important in practical terms because correct output is critical for a commercial service.",
"If by some chance an erroneous headline, or one including politically incorrect phrases, is sent to all users, the service's credibility can be lost instantly.",
"Therefore, we formalize our task as an extraction problem of a fixed-length substring from a question as its headline.",
"In this setting, we can assume that outputs never include such errors caused by the service, as the outputs are substrings of user-posted questions.",
"Note that they have no coherence errors in selecting multiple sentences as in normal extractive summarization tasks.",
"While it is true that these outputs might contain inappropriate expressions authored by users, this type of error is beyond our scope since it is a different inevitable problem.",
"Furthermore, the situation of \"the service generated an inappropriate expression by itself\" is significantly worse than the situation of \"a user posted an inappropriate question to the service, and the service displayed it\".",
"Therefore, it is difficult to directly use abstractive methods for commercial services from a business standpoint.",
"Our approach involves preparing headline candidates and ranking them.",
"The formal description is as follows.",
"Let q be a target question to be translated into a headline.",
"We first prepare a set S(q) of headline candidates from the question q.",
"Note that the set S(q) is restricted to a set of fixed-length substrings given a length n, i.e., S(q) ⊆ {x | x ⪯ q, |x| = n}, where x ⪯ q means that x is a substring of q.",
"Then we extract the best headline that maximizes a score function f q (x), which represents the \"headline-ness\" score of a candidate x ∈ S(q) with respect to the target question q, as follows: argmax x∈S(q) f q (x).",
"(1) To ensure simplicity of implementation and understandability from users, we use a set of the fixed-length prefixes of all sentences in a question q as the candidate set S(q) (Section 3).",
"Because the problem is to select the best candidate from among several targets, the score function f q (x) is naturally trained by learning to rank (Section 4).",
"The main contributions of this paper are as follows.",
"• We report empirical evidence of the negative effect of uninformative headlines on a commercial CQA service (Section 2).",
"We additionally show that our task can reduce uninformative headlines by using simple dictionary matching, which dramatically improves the average answer rate of questions by as much as 2.4 times.",
"• We propose an extractive headline generation method based on learning to rank for CQA (Section 4) that extracts the most informative substring (prefix of mid-sentence) from each question as its headline.",
"To the best of our knowledge, our work is the first attempt to address such a task from a practical standpoint, although there have been many related studies (Section 6).",
"Experimental results show that our method outperforms several baselines, including the dictionary-based method (Section 5).",
"• We create a dataset for our headline generation task (Section 3), where headline candidates extracted from questions are ranked by crowdsourcing with respect to \"headline-ness\", that is, whether or not each headline candidate is appropriate for the headline of the corresponding question.",
"Negative Effect of Uninformative Headlines We conducted A/B testing on the push notifications of smartphones in collaboration with Yahoo!",
"Chiebukuro, as shown in Figure 1 (b).",
"We first prepared a dictionary of typical first sentences that cause uninformative headlines.",
"This dictionary was manually selected from frequent first sentences and consists of 913 sentences including greetings such as \" (Good morning)\", \" Changed uninformative Unchanged uninformative Informative 0.75% 0.31% 0.45% Table 1 : Average answer rates of three question groups in A/B testing.",
"(Good afternoon)\", and \" (Nice to meet you)\", and fixed phrases such as \" (Can I ask you something)\", \" (Please tell me)\", and \" (Thank you in advance)\".",
"We assumed that a question with a prefix match in the dictionary has an uninformative prefix headline and classified such questions into an uninformative group.",
"For convenience, we also classified the other questions into an informative group, although they might include not so informative headlines.",
"We further randomly divided the uninformative group into two equal groups: changed and unchanged.",
"In the changed uninformative group, each headline is extracted as the prefix of the first (informative) sentence that does not match with the dictionary, which is the same as DictDel explained in Section 5.2.",
"The unchanged group remains in uninformative.",
"For comparison of these groups, we used the average answer rate over notified questions in each group as an evaluation measure, defined as Average answer rate = No.",
"of questions answered from the notification No.",
"of notified questions .",
"(2) Note that we use a percentage expression (%) for easy reading.",
"Table 1 shows the evaluation results of the A/B testing during a 1-month period (Feb. 2 -Mar.",
"4, 2018) , where about three million questions were sent to users.",
"Comparing the unchanged uninformative group with the informative group, we can see that the average answer rate of the uninformative questions, 0.31%, is actually lower than that of the informative questions, 0.45%.",
"Comparing the changed and unchanged uninformative groups, the average answer rate of the changed questions, 0.75%, is much higher than that of the unchanged questions, 0.31%.",
"This means that even a simple dictionary-based method can dramatically improve the quality of the uninformative headlines, i.e., by as much as 2.4 times.",
"We confirmed that the difference is statistically significant on a one-tailed Wilcoxon signed-rank test (p < 0.05).",
"Note that the average answer rate represents a conversion rate (or rate of target actions), which is more important than a click-through rate (or rate of initial actions).",
"The average answer rate is one of the most important indicators for a CQA service, while the click-through rate can be meaninglessly high if the service sends headlines that are fake or too catchy.",
"We should point out that low answer rates are sufficient for the service, since it has 44M users: i.e., each question has an average of 2.4 answers, as the service currently has 189M questions and 462M answers.",
"Dataset Creation We created a dataset for our headline generation task based on the Yahoo!",
"Chiebukuro dataset 2 , which is a dataset including questions and answers provided from a Japanese CQA service, Yahoo!",
"Chiebukuro.",
"We first prepared headline candidates from this dataset as in Section 3.1 and then conducted a crowdsourcing task specified in Section 3.2.",
"In Section 3.3, we report the results of the crowdsourcing task.",
"Preparation of Headline Candidates We extracted only questions from the Chiebukuro dataset and split each question into sentences by using punctuation marks (i.e., the exclamation (\" \"), question (\" \"), and full stop (\" \") marks).",
"We regarded 20 Japanese characters that are basically extracted from each sentence as a headline candidate x ∈ S(q) in Eq.",
"(1), since this setting is used for push notifications in the actual service in Figure 1 (b).",
"More specifically, the headline candidate is created as follows: 1.",
"If the sentence is the first one in the question, we extract the first 19 characters and put an ellipsis mark (\" \") at the end.",
"(a) Example of our crowdsourcing task.",
"Posted Question: Nice to meet you, I am a man in my 30s.",
"Please give me your advice on a pressing concern I have.",
"A dog kept in the next house barks from morning to night.",
"Neighbors have given the owner cautions against it, but there is no improvement.",
"This area has only private houses, not rented houses, so I cannot move out.",
"However, I will go crazy if I have to keep enduring this.",
"How can I effectively manage this problem?",
"...",
"Please give me your advice on a pressing concern ... (b) English translation of left example.",
"Figure 2 : Examples of (a) our crowdsourcing task and (b) its English translation.",
"In the case where the length of a candidate is less than 20 characters, we include some of the next sentence in order to maximize use of display space.",
"We included questions with more than five sentences for the purpose of efficiently collecting ranking information.",
"All told, we prepared 10,000 questions containing more than five headline candidates each.",
"Figure 2 shows an example of our crowdsourcing task (a) and its English translation (b).",
"This task involves a posted question and headline candidates corresponding to the question.",
"We asked workers to select the best candidate from options after reading the posted question.",
"A relative evaluation, where workers select the best candidate, was used instead of an absolute evaluation, where workers select a score from 0 to 10 for each candidate, because we wanted to obtain as accurate a headline as possible, and it might be difficult for workers to select an appropriate absolute score.",
"The workers were instructed as follows (English translation): Crowdsourcing Task Various candidate headlines are listed as options for a posted question in a Q&A service.",
"After reading the question, please select the best option from the list so that users can guess the content of the question and distinguish it from other ones.",
"Please remove uninformative ones such as greetings, self-introductions, and unspecific expressions.",
"We explain how to judge the appropriateness of each candidate by means of the example in Figure 2 .",
"After examining the posted question, we can assume that the most important content is \"he is annoyed by the barking of a dog kept in the next house\".",
"On the basis of this assumption, option 6 is the best one, since the headline \"A dog kept in the next house barks from morning\" is enough to help answerers deduce that \"the questioner is annoyed by the sound\".",
"Option 1 is inappropriate because although the answerers might be able to guess that \"the questioner cannot move out\", this matter is not the central one.",
"Option 2 is uninformative because it consists merely of greetings and self-introduction, and option 3, while a question sentence, is unspecific.",
"Option 4 enables answerers to guess that this question is related to an issue involving pets, but they cannot grasp the specific content.",
"Option 5 specifies a likely damage due to the trouble, but the reason (trouble) is more important for the answering than the result (damage).",
"Option 7 directly shows that \"the questioner is annoyed and wants some advice\", but answerers cannot understand why he/she is annoyed.",
"The detailed implementation of our task is as follows.",
"First we randomly sorted the candidates of each question (shown in Figure 2 (a)) to avoid position bias by the workers.",
"We included ten actual questions and a dummy question so that workers would always have to answer one dummy question per every ten actual questions.",
"A dummy question is a question with a clear answer inserted to eliminate fraud workers (i.e., workers who randomly select answers without actually reading them).",
"Each question was answered by ten workers so each headline candidate had a vote score from 0 to 10 representing whether or not the candidate was appropriate for a headline.",
"This task took nine days and was answered by 1,558 workers.",
"As a result, our dataset consists of 10,000 questions, each of which has more than five headline candidates with accompanying \"headline-ness\" scores.",
"Analysis of Crowdsourcing Results We analyzed our dataset to determine how much room for improvement our task has compared to the prefix headline.",
"It is well known that the first sentence can be a strong baseline in many summarization tasks, so the prefix headline is also expected to perform well.",
"Figure 3 shows the statistical information based on sentence position that includes the (a) ratio of the true best (most voted) candidates, (b) average votes, and (c) average rank (in order of votes) over the candidates at each sentence position.",
"Looking at the true best candidates (a), the ratio 61.8% for the 1st sentence clarifies the effectiveness of the prefix headline, as expected.",
"Conversely, we still have room for improvement for the prefix headline up to 38.2%.",
"Our goal is to improve the uninformative headlines of 38.2% while keeping the remaining 61.8% unchanged.",
"The other two figures (b) and (c) also support the above discussion.",
"Furthermore, we qualitatively checked the crowdsourcing quality.",
"Workers successfully eliminated uninformative candidates including greetings and self-introductions, while one or two workers sometimes chose ones that included a fixed phrase such as \"Please tell me ...\".",
"This is probably because workers had different criteria regarding the \"unspecific expressions\" described in the instructions.",
"Since we cannot enumerate all of the concrete bad examples, we ignore this phenomenon with the expectation that a learning algorithm will reduce its negative effect.",
"Proposed Method In this section, we explain how to construct a headline generation model from the dataset presented in Section 3.",
"We took a ranking approach, i.e., learning to rank, for our task, rather than a simple regression one, since estimating absolute scores is not required for our purpose.",
"Even if two headline candidates (of different questions) have the same expression, their votes can be significantly different since the votes in our dataset are based on relative evaluation.",
"For example, the best candidate for Figure 2 was No.",
"6, but it might not be selected in other questions such as \"A dog kept in the next house barks from morning.",
"Does anybody know why dogs generally want to bark?\".",
"Learning to rank is an application of machine learning that is typically used for ranking models in information retrieval systems.",
"The ranking models are basically learned from a supervised dataset consisting of triples (q, x, y), where q is a user's query, x is a document corresponding to q, and y is a relevance score of x with respect to q.",
"In this work, we formalize our task by regarding q, x, and y, as a posted question, a headline candidate, and a voted score in our dataset, respectively.",
"We used a pairwise ranking method that is also implemented as an instance of the well-known SVM tools LIBLINEAR and LIBSVM (Kuo et al., 2014) .",
"We used a linear model based on LIBLINEAR, an L2-regularized L2-loss linear rankSVM, for the experiments.",
"Let D be a dataset that consists of triples including a posted question q, a headline candidate x, and a voted score y, i.e., (q, x, y) ∈ D. In the pairwise ranking method, we train a ranking model as a binary classifier that determines whether the condition y i > y j is true or false for two candidates x i and x j in the same question (q i = q j ).",
"Specifically, we first define the index pairs of positive examples by P = {(i, j) | q i = q j , y i > y j , (x i , y i , q i ) ∈ D, (x j , y j , q j ) ∈ D}.",
"Note that we do not need to consider the negative examples N = {(j, i) | (i, j) ∈ P } since they yield the same formula as P in the optimization process.",
"The training of the pairwise ranking method is achieved by solving the following optimization problem using the set P of the index pairs: min w 1 2 w ⊤ w + C ∑ (i,j)∈P ℓ(w ⊤x i − w ⊤x j ), (3) where w is a weight vector to be learned,x i is a feature vector extracted from a headline candidate x, and C is the regularization parameter.",
"The function ℓ is a squared hinge loss, which is defined as ℓ(d) = max(0, 1 − d) 2 .",
"Finally, we define the score function in Eq.",
"(1) as f q (x) = w ⊤x , wherex can be created by using q as well as x.",
"This score means the relative \"headline-ness\" of x.",
"Experiments Basic Settings The basic settings of the experiments are as follows.",
"We split our dataset into training and test sets consisting of 9,000 and 1,000 examples, respectively.",
"We used an implementation 3 based on LIBLIN-EAR for training our ranking model, i.e., a linear L2-regularized L2-loss rankSVM model , as described in Section 4.",
"The regularization parameter was optimized by cross validation and set as C = 0.125.",
"The feature vector for a headline candidate consists of three kinds of features: bag-of-words, embedding, and position information.",
"The bag-of-words feature is a sparse vector of 30,820 dimensions based on the tf-idf scores of nouns, verbs, interjections, conjunctions, adverbs, and adjectives in a candidate, where we used a Japanese morphological analyzer, MeCab 4 (Kudo et al., 2004) , with a neologism dictionary, NEologd 5 (Toshinori Sato and Okumura, 2017).",
"The embedding feature is a dense vector of 100 dimensions based on a doc2vec model (Le and Mikolov, 2014) trained with all 3M sentences in the Chiebukuro dataset using the Gensim tool 6 .",
"The position feature is a binary vector of ten dimensions, where each dimension represents the coarse position (or coverage) of a headline candidate for a question.",
"Specifically, we equally split a question (character sequence) into ten parts and set one to each dimension if and only if the corresponding part overlaps a candidate.",
"For example, candidate No.",
"2 in Figure 2 had a position feature (1, 1, 0, · · · , 0), since the candidate covers the first 2/10 of the whole question.",
"Similarly, No.",
"6 and No.",
"3 had (0, 0, 1, 1, 0, · · · , 0) and (0, · · · , 0, 1), respectively.",
"For constructing the feature vector of each headline candidate, we used the previous and next candidates in sentence order, in addition to the target candidate.",
"This is based on the idea that near candidates might have useful information for the target candidate.",
"Finally, we prepared each feature vector by concatenating nine feature vectors, i.e., the above three kinds of features for three candidates, and normalizing them.",
"Compared Methods We compared our method, MLRank, with the baselines listed below.",
"Prefix, DictDel, and Random are simple baselines, while Prefix and DictDel are practically strong.",
"ImpTfidf, SimTfidf, SimEmb, and LexRank are unsupervised baselines, and SVM and SVR are supervised ones.",
"• Prefix: Selects the first candidate in sentence order.",
"• DictDel: Selects the first (informative) candidate that does not match in the dictionary of uninformative headlines (Section 2).",
"• Random: Randomly selects a candidate.",
"• ImpTfidf: Selects the most important candidate with the highest tf-idf value, where a tf-idf value is calculated by the sum of the elements in a bag-of-words feature (described in Section 5.1).",
"• SimTfidf: Selects the most similar candidate to the original question, which is calculated by the cosine similarity between the bag-of-words features (in Section 5.1) of each candidate and the question.",
"• SimEmb: An embedding-based variation of SimTfidf with embedding features (in Section 5.1).",
"• LexRank: Selects the candidate with the highest score based on LexRank 7 (Erkan and Radev, 2004) , which is a widely used unsupervised extractive summarization method based on the PageRank algorithm.",
"The graph expression of each question was constructed on the basis of cosine similarity of the tf-idf vectors corresponding to candidates.",
"• SVM: Selects the candidate with the highest confidence based on a model learned as a classification task, where candidates with nonzero votes were labeled as positive.",
"This setting was the best in our preliminary experiments.",
"We used the L2-regularized L2-loss support vector classification model (C = 0.0156) in LIBLINEAR.",
"The other settings were the same as those described in Section 5.1.",
"• SVR: Selects the candidate with the highest predicted votes based on a model learned as a regression task, where the target variable is the number of votes.",
"We used the L2-regularized L2-loss support vector regression model (C = 0.0625) in LIBLINEAR.",
"The other settings were the same as above.",
"• MLRank: Proposed method described in Section 4.",
"Evaluation Measures We defined three evaluation measures for evaluating each method on our headline generation task.",
"Change Rate from Prefix Headline We measured how much each method changed the default prefix headline in order to determine the effect of application to an actual CQA service.",
"We defined this measure as change rate from the prefix headline, as follows: Change rate = No.",
"of questions where the best candidate is not the prefix headline No.",
"of all questions .",
"Clearly, the change rate of the default method that selects the prefix headline is 0%.",
"If the value is small, the effect on the service will be small, but if the value is higher than the ideal change rate of 38.2% (Section 3), there can be side effects even if the average result is good.",
"A higher change rate up to the ideal rate is desirable from a practical standpoint.",
"Winning Rate against Prefix Headline We measured how much each method won against the prefix headline to directly assess the quality of changed headlines.",
"We defined this measure as winning rate against the prefix headline, as follows: Winning rate = No.",
"of questions where the best candidate got more votes than the prefix headline No.",
"of questions where the best candidate is not the prefix headline .",
"(5) We did not consider the first candidate (in sentence order) selected by each method, which is the same as the prefix headline, since they obviously have the same number of votes.",
"Average Votes We measured how appropriate the candidates selected by each method are in order to determine the overall performance.",
"We defined this measure as average votes, as follows: Average votes = Sum of votes for the best candidates for all questions No.",
"of questions .",
"Note that the average votes score is different from the average votes score for position (Figure 3(b) ) in that the former is the average over the selected candidates while the latter is the average over the candidates at a sentence position.",
"This measure is related to (normalized) discounted cumulative gain (DCG), which is widely used as an evaluation measure of ranking models.",
"We often use DCG@k for evaluating top-k rankings, and the above definition acutally corresponds to DCG@1.",
"According to a well-known paper (Järvelin and Kekäläinen, 2002) in the information retrieval field, DCG is appropriate for graded-relevance judgments like our task, while precision (described below) is appropriate for binary-relevance ones.",
"Average votes is expected to be more appropriate than precision for our task because we want \"averagely better headlines than default ones\" rather than \"best ones\" from a practical standpoint.",
"Precision Precision is a widely used evaluation measure for classification tasks, and we added it to support an evaluation based on average votes.",
"We defined it with respect to the best candidate, i.e., precision@1, as follows: Precision = No.",
"of questions where the best candidate had the maximum votes No.",
"of questions .",
"(7) Table 2 shows examples of headlines generated by the prefix method, Prefix, and the proposed method, MLRank.",
"Looking at the first example, we can see that our method successfully eliminated the selfintroduction phrase (\"I am a 27-year-old woman\").",
"The headline (right) of our method allows answerers to know that the questioner is discouraged about how to encounter men from the phrase \"little chance of new encounters with men\", while the headline (left) of the prefix method lacks this important clue.",
"The second and third examples show similar effects to the first example.",
"In the second one, although there are few clues about what the question is with the prefix method, our method correctly included the important clue (\"honeymoon\").",
"In the third one, our method appropriately eliminated the uninformative long phrase (\"I am sorry if the category is wrong\"), which is not a frequent fixed phrase.",
"The fourth example shows a slightly challenging case, where both headlines make it difficult to understand the question.",
"However, the headline of our method included the term \"winning bidder\", so at least the answerer can assume that the question is about some sort of auction trouble.",
"The fifth example is a clearly successful result, where our method extracted the main question point about \"welfare pension\" as a headline.",
"These results qualitatively demonstrate the effectiveness of our method.",
"Results Qualitative Analysis Quantitative Analysis We compared our method MLRank with the baselines in Section 5.2 on the headline generation task for our dataset in Section 3.",
"Table 3 shows the evaluation results based on the change rates, winning rates, average votes, and precision.",
"Looking at the average votes and precision, which represent the overall performances, our method MLRank clearly performed the best among all methods.",
"We confirmed that Table 3 : Evaluation results of our headline generation task for proposed method MLRank and baselines.",
"the relative improvement of the average votes of our method MLRank against every baseline including the prefix method Prefix is statistically significant on the basis of a one-tailed Wilcoxon signed-rank test (p < 0.01).",
"The change and winning rates of our method are 9.9% and 94.9%, respectively.",
"This means that our method detected 9.9% of the uninformative headlines and improved them with the high accuracy of 94.9%.",
"In other words, our method could successfully improve the overall performance while simultaneously avoiding any negative side effects.",
"The ideal results (Ref) based on correct labels suggest that our method still has room for improvement, especially for the change rate.",
"The results of the other baselines are as follows.",
"Not surprisingly, the prefix method Prefix performed well.",
"This is consistent with the fact that the first sentence can be a good summary in many summarization tasks.",
"The random method Random performed the worst, also as expected.",
"The dictionarybased deletion method DictDel was relatively useful, although the change rate was small.",
"The reason the winning rate of DictDel is relatively low compared with MLRank is that there are some cases where a combination of uninformative expressions can yield likely clues.",
"For example, the self-introduction \"I am a newbie of this forum\" itself is basically uninformative for a question, but a combination with additional information such as \"I am a newbie of this forum.",
"Where can I change the password ...\" can be more informative than only the additional information \"... Where can I change the password since I forgot it after ...\" because the combination specifies the target site by the expression \"this forum\".",
"The unsupervised methods, SimTfidf, SimTfidf, SimEmb, and LexRank, which are widely used for summarization tasks, performed worse than the prefix method Prefix.",
"Although the change rates are higher than our method, the winning rates are lower.",
"In other words, they yielded many bad headlines.",
"These results suggest that supervised learning specialized to the target task would be required.",
"Comparing the important sentence extraction method ImpTfidf and the similarity-based summarization method SimTfidf, we found that SimTfidf performed better.",
"This implies that the content information of each question is useful for our headline generation task, as is the case with other summarization tasks.",
"The similarity-based method SimEmb with embeddings performed worse than our expectation.",
"The reason seems to be that it was difficult to obtain meaningful document embeddings from long questions.",
"The graph-based method LexRank had a similar performance to SimTfidf, because LexRank tends to select a candidate similar to the question when only one candidate was selected.",
"The supervised methods, SVM and SVR, performed relatively well compared to the unsupervised methods, but they did not outperform the strong simple baselines, Prefix and DictDel, nor our method MLRank.",
"These results support the appropriateness of our approach.",
"Related Work In this section, we briefly explain several related studies from two aspects: headline generation task and CQA data.",
"As discussed below, our work is the first attempt to address an extractive headline generation task for a CQA service based on learning to rank the substrings of a question.",
"After Rush et al.",
"(2015) proposed a neural headline generation model, there have been many studies on the same headline generation task (Takase et al., 2016; Chopra et al., 2016; Kiyono et al., 2017; Ayana et al., 2017; Raffel et al., 2017) .",
"However, all of them are abstractive methods that can yield erroneous output, and the training for them requires a lot of paired data, i.e., news articles and headlines.",
"There have also been several classical studies based on nonneural approaches to headline generation (Woodsend et al., 2010; Alfonseca et al., 2013; Colmenares et al., 2015) , but they basically addressed sentence compression after extracting important linguistic units such as phrases.",
"In other words, their methods can still yield erroneous output, although they would be more controllable than neural models.",
"One exception is the work of Alotaiby (2011) , where fixed-sized substrings were considered for headline generation.",
"Although that approach is similar to ours, Alotaiby only considered an unsupervised method based on similarity to the original text (almost the same as SimTfidf in Section 5.2), in contrast to our proposal based on learning to rank.",
"This implies that Alotaiby's method will also not perform well for our task, as shown in Section 5.4.",
"There have been several studies on extractive summarization (Kobayashi et al., 2015; Yogatama et al., 2015) based on sentence embeddings, but they were basically developed for extracting multiple sentences, which means that these methods are almost the same as SimEmb in Section 5.2 for our purpose, i.e., extraction of the best candidate.",
"This also implies that they will not be suitable for our task.",
"Furthermore, recent sophisticated neural models for extractive summarization (Cheng and Lapata, 2016; Nallapati et al., 2017) basically require large-scale paired data (e.g., article-headline) to automatically label candidates, as manual annotation is very costly.",
"However, such paired data do not always exist for real applications, as in our task described in Section 1.",
"There have been many studies using CQA data, but most of them are different from our task, i.e., dealing with answering questions (Surdeanu et al., 2008; Celikyilmaz et al., 2009; Bhaskar, 2013; Nakov et al., 2017) , retrieving similar questions (Lei et al., 2016; Romeo et al., 2016; Nakov et al., 2017) , and generating questions (Heilman and Smith, 2010) .",
"Tamura et al.",
"(2005) focused on extracting a core sentence and identifying the question type as classification tasks for answering multiple-sentence questions.",
"Although their method is useful to retrieve important information, we cannot directly use it since our task requires shorter expressions for headlines than sentences.",
"In addition, they used a support vector machine as a classifier, which is almost the same as SVM in Section 5.2, and it is not expected to be suitable for our task, as shown in Section 5.4.",
"The work of Ishigaki et al.",
"(2017) is the most related one in that they summarized lengthy questions by using both abstractive and extractive approaches.",
"Their work is promising because our task is regarded as the construction of short summaries, but the training of their models requires a lot of paired data consisting of questions and their headlines, which means that their method cannot be used to our task.",
"Conclusion We proposed an extractive headline generation method based on learning to rank for CQA that extracts the most informative substring in each question as its headline.",
"We created a dataset for our task, where headline candidates in each question are ranked using crowdsourcing.",
"Our method outperformed several baselines, including a prefix-based method, which is widely used for cases where the display area is limited, such as the push notifications on smartphones.",
"The dataset created for our headline generation task will be made publicly available 8 .",
"Although our task is basically designed for extractive summarization, this dataset can also be used for abstractive summarization as a side information for training abstractive models.",
"In future work, we will investigate how effectively our method can perform in practical situations, e.g., push notifications.",
"In addition, we will consider how to improve the change rate of our method while keeping its winning rate and how to create a useful dataset even if removing the length limitation."
]
}
|
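The ten-dimensional position feature described in the paper content above can be sketched in Python. This is a minimal illustration assuming character offsets and half-open intervals; `position_feature` is a hypothetical name, not the authors' implementation.

```python
def position_feature(question_len, cand_start, cand_end, n_bins=10):
    """Binary coverage vector over a question: dimension i is set to 1
    iff the candidate substring [cand_start, cand_end) overlaps the
    i-th of n_bins equal parts of the question's character range."""
    feat = [0] * n_bins
    bin_size = question_len / n_bins
    for i in range(n_bins):
        lo, hi = i * bin_size, (i + 1) * bin_size
        # overlap test between [cand_start, cand_end) and [lo, hi)
        if cand_start < hi and cand_end > lo:
            feat[i] = 1
    return feat
```

For a 100-character question, a candidate covering the first 2/10 yields (1, 1, 0, ..., 0), matching the paper's example for candidate No. 2.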
{
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"5.1",
"5.2",
"5.3",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Negative Effect of Uninformative Headlines",
"Dataset Creation",
"Preparation of Headline Candidates",
"Crowdsourcing Task",
"Analysis of Crowdsourcing Results",
"Proposed Method",
"Basic Settings",
"Compared Methods",
"Evaluation Measures",
"Related Work",
"Conclusion"
]
}
|
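The four evaluation measures defined in the excerpted paper (change rate, winning rate, average votes, and precision@1) can be computed together as below. The function signature and input representation are assumptions made for illustration, not the authors' code.

```python
def evaluate(selected, prefix, votes):
    """Compute change rate, winning rate, average votes, and precision@1.

    selected: chosen candidate index per question
    prefix:   index of the prefix (first-sentence) candidate per question
    votes:    per-question list of crowd vote counts, one per candidate
    """
    n = len(selected)
    changed = [i for i in range(n) if selected[i] != prefix[i]]
    change_rate = len(changed) / n
    # winning rate is defined only over questions whose headline changed
    wins = sum(1 for i in changed if votes[i][selected[i]] > votes[i][prefix[i]])
    winning_rate = wins / len(changed) if changed else 0.0
    average_votes = sum(votes[i][selected[i]] for i in range(n)) / n
    precision = sum(1 for i in range(n)
                    if votes[i][selected[i]] == max(votes[i])) / n
    return change_rate, winning_rate, average_votes, precision
```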
GEM-SciDuet-train-50#paper-1080#slide-3
|
Contributions
|
Show empirical evidence that snippet headlines are more effective than prefix headlines
Propose extractive headline generation method based on learning to rank
Create Japanese dataset including headline candidates with
"headline-ness" scores by crowdsourcing
Copyright (C) 2019 Yahoo Japan Corporation. All Rights Reserved.
|
Show empirical evidence that snippet headlines are more effective than prefix headlines
Propose extractive headline generation method based on learning to rank
Create Japanese dataset including headline candidates with
"headline-ness" scores by crowdsourcing
Copyright (C) 2019 Yahoo Japan Corporation. All Rights Reserved.
|
[] |
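The SimTfidf baseline in the paper content selects the candidate whose bag-of-words vector has the highest cosine similarity to the whole question. A minimal sketch follows; plain term counts stand in for tf-idf weights, and `most_similar_candidate` is an assumed helper name.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two sparse count vectors (Counters)."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def most_similar_candidate(question_tokens, candidate_token_lists):
    """Return the index of the candidate most similar to the question."""
    q = Counter(question_tokens)
    sims = [cosine(Counter(c), q) for c in candidate_token_lists]
    return max(range(len(sims)), key=sims.__getitem__)
```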
GEM-SciDuet-train-50#paper-1080#slide-4
|
1080
|
Extractive Headline Generation Based on Learning to Rank for Community Question Answering
|
User-generated content such as the questions on community question answering (CQA) forums does not always come with appropriate headlines, in contrast to the news articles used in various headline generation tasks. In such cases, we cannot use paired supervised data, e.g., pairs of articles and headlines, to learn a headline generation model. To overcome this problem, we propose an extractive headline generation method based on learning to rank for CQA that extracts the most informative substring from each question as its headline. Experimental results show that our method outperforms several baselines, including a prefix-based method, which is widely used in real services.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260,
261,
262,
263
],
"paper_content_text": [
"Introduction Community question answering (CQA) is a service where users can post their questions and answer the questions from other users.",
"The quality of a CQA service is inevitably linked to how many questions are answered in a short amount of time.",
"To this end, the headline of a posted question plays a key role, as it is the first thing users see in a question list or push notification on a smartphone.",
"However, questions in a CQA service do not always have appropriate headlines because the questions are written by various users who basically do not have any specialized knowledge in terms of writing such content, in contrast to news articles written by professional editors.",
"In fact, the biggest CQA service in Japan, Yahoo!",
"Chiebukuro 1 , does not even provide an input field for headlines in the submission form of questions, as general users do not have enough patience and tend not to post questions if even just one required field is added to the form.",
"This service alternatively uses the prefix of a question as its headline as in Figure 1(a) , where the headline \" (Nice to meet you.",
"Thank you in advance.",
"At work ...)\" is created from the prefix of the content.",
"Obviously, this headline is uninformative because of the lack of actual content, which is related to an initial unrequited love of a woman in the workplace.",
"Figure 1(b) shows how ineffective such an uninformative headline is for a smartphone push notification, where users would have practically zero motivation to click on it and answer since they cannot imagine what kind of question is being asked.",
"This negative effect has been confirmed on a commercial CQA service (Section 2).",
"In this work, we take an extractive approach for improving uninformative headlines.",
"Although there have recently been many studies on abstractive headline generation (as described in Section 6), we do not follow any of these approaches because the content we deal with has no correct headlines and also the abstractive methods can yield erroneous output.",
"This latter issue is important in practical terms because correct output is critical for a commercial service.",
"If by some chance an erroneous headline, or one including politically incorrect phrases, is sent to all users, the service's credibility can be lost instantly.",
"Therefore, we formalize our task as an extraction problem of a fixed-length substring from a question as its headline.",
"In this setting, we can assume that outputs never include such errors caused by the service, as the outputs are substrings of user-posted questions.",
"Note that they have no coherence errors in selecting multiple sentences as in normal extractive summarization tasks.",
"While it is true that these outputs might contain inappropriate expressions authored by users, this type of error is beyond our scope since it is a different inevitable problem.",
"Furthermore, the situation of \"the service generated an inappropriate expression by itself\" is significantly worse than the situation of \"a user posted an inappropriate question to the service, and the service displayed it\".",
"Therefore, it is difficult to directly use abstractive methods for commercial services from a business standpoint.",
"Our approach involves preparing headline candidates and ranking them.",
"The formal description is as follows.",
"Let q be a target question to be translated into a headline.",
"We first prepare a set S(q) of headline candidates from the question q.",
"Note that the set S(q) is restricted to a set of fixed-length substrings given a length n, i.e., S(q) ⊆ {x | x ⪯ q, |x| = n}, where x ⪯ q means that x is a substring of q.",
"Then we extract the best headline that maximizes a score function f q (x), which represents the \"headline-ness\" score of a candidate x ∈ S(q) with respect to the target question q, as follows: argmax x∈S(q) f q (x).",
"(1) To ensure simplicity of implementation and understandability from users, we use a set of the fixed-length prefixes of all sentences in a question q as the candidate set S(q) (Section 3).",
"Because the problem is to select the best candidate from among several targets, the score function f q (x) is naturally trained by learning to rank (Section 4).",
"The main contributions of this paper are as follows.",
"• We report empirical evidence of the negative effect of uninformative headlines on a commercial CQA service (Section 2).",
"We additionally show that our task can reduce uninformative headlines by using simple dictionary matching, which dramatically improves the average answer rate of questions by as much as 2.4 times.",
"• We propose an extractive headline generation method based on learning to rank for CQA (Section 4) that extracts the most informative substring (prefix of mid-sentence) from each question as its headline.",
"To the best of our knowledge, our work is the first attempt to address such a task from a practical standpoint, although there have been many related studies (Section 6).",
"Experimental results show that our method outperforms several baselines, including the dictionary-based method (Section 5).",
"• We create a dataset for our headline generation task (Section 3), where headline candidates extracted from questions are ranked by crowdsourcing with respect to \"headline-ness\", that is, whether or not each headline candidate is appropriate for the headline of the corresponding question.",
"Negative Effect of Uninformative Headlines We conducted A/B testing on the push notifications of smartphones in collaboration with Yahoo!",
"Chiebukuro, as shown in Figure 1 (b).",
"We first prepared a dictionary of typical first sentences that cause uninformative headlines.",
"This dictionary was manually selected from frequent first sentences and consists of 913 sentences including greetings such as \" (Good morning)\", \" Changed uninformative Unchanged uninformative Informative 0.75% 0.31% 0.45% Table 1 : Average answer rates of three question groups in A/B testing.",
"(Good afternoon)\", and \" (Nice to meet you)\", and fixed phrases such as \" (Can I ask you something)\", \" (Please tell me)\", and \" (Thank you in advance)\".",
"We assumed that a question with a prefix match in the dictionary has an uninformative prefix headline and classified such questions into an uninformative group.",
"For convenience, we also classified the other questions into an informative group, although they might include not so informative headlines.",
"We further randomly divided the uninformative group into two equal groups: changed and unchanged.",
"In the changed uninformative group, each headline is extracted as the prefix of the first (informative) sentence that does not match with the dictionary, which is the same as DictDel explained in Section 5.2.",
"The unchanged group remains in uninformative.",
"For comparison of these groups, we used the average answer rate over notified questions in each group as an evaluation measure, defined as Average answer rate = No.",
"of questions answered from the notification No.",
"of notified questions .",
"(2) Note that we use a percentage expression (%) for easy reading.",
"Table 1 shows the evaluation results of the A/B testing during a 1-month period (Feb. 2 -Mar.",
"4, 2018) , where about three million questions were sent to users.",
"Comparing the unchanged uninformative group with the informative group, we can see that the average answer rate of the uninformative questions, 0.31%, is actually lower than that of the informative questions, 0.45%.",
"Comparing the changed and unchanged uninformative groups, the average answer rate of the changed questions, 0.75%, is much higher than that of the unchanged questions, 0.31%.",
"This means that even a simple dictionary-based method can dramatically improve the quality of the uninformative headlines, i.e., by as much as 2.4 times.",
"We confirmed that the difference is statistically significant on a one-tailed Wilcoxon signed-rank test (p < 0.05).",
"Note that the average answer rate represents a conversion rate (or rate of target actions), which is more important than a click-through rate (or rate of initial actions).",
"The average answer rate is one of the most important indicators for a CQA service, while the click-through rate can be meaninglessly high if the service sends headlines that are fake or too catchy.",
"We should point out that low answer rates are sufficient for the service, since it has 44M users: i.e., each question has an average of 2.4 answers, as the service currently has 189M questions and 462M answers.",
"Dataset Creation We created a dataset for our headline generation task based on the Yahoo!",
"Chiebukuro dataset 2 , which is a dataset including questions and answers provided from a Japanese CQA service, Yahoo!",
"Chiebukuro.",
"We first prepared headline candidates from this dataset as in Section 3.1 and then conducted a crowdsourcing task specified in Section 3.2.",
"In Section 3.3, we report the results of the crowdsourcing task.",
"Preparation of Headline Candidates We extracted only questions from the Chiebukuro dataset and split each question into sentences by using punctuation marks (i.e., the exclamation (\" \"), question (\" \"), and full stop (\" \") marks).",
"We regarded 20 Japanese characters that are basically extracted from each sentence as a headline candidate x ∈ S(q) in Eq.",
"(1), since this setting is used for push notifications in the actual service in Figure 1 (b).",
"More specifically, the headline candidate is created as follows: 1.",
"If the sentence is the first one in the question, we extract the first 19 characters and put an ellipsis mark (\" \") at the end.",
"(a) Example of our crowdsourcing task.",
"Posted Question: Nice to meet you, I am a man in my 30s.",
"Please give me your advice on a pressing concern I have.",
"A dog kept in the next house barks from morning to night.",
"Neighbors have given the owner cautions against it, but there is no improvement.",
"This area has only private houses, not rented houses, so I cannot move out.",
"However, I will go crazy if I have to keep enduring this.",
"How can I effectively manage this problem?",
"...",
"Please give me your advice on a pressing concern ... (b) English translation of left example.",
"Figure 2 : Examples of (a) our crowdsourcing task and (b) its English translation.",
"In the case where the length of a candidate is less than 20 characters, we include some of the next sentence in order to maximize use of display space.",
"We included questions with more than five sentences for the purpose of efficiently collecting ranking information.",
"All told, we prepared 10,000 questions containing more than five headline candidates each.",
"Figure 2 shows an example of our crowdsourcing task (a) and its English translation (b).",
"This task involves a posted question and headline candidates corresponding to the question.",
"We asked workers to select the best candidate from options after reading the posted question.",
"A relative evaluation, where workers select the best candidate, was used instead of an absolute evaluation, where workers select a score from 0 to 10 for each candidate, because we wanted to obtain as accurate a headline as possible, and it might be difficult for workers to select an appropriate absolute score.",
"The workers were instructed as follows (English translation): Crowdsourcing Task Various candidate headlines are listed as options for a posted question in a Q&A service.",
"After reading the question, please select the best option from the list so that users can guess the content of the question and distinguish it from other ones.",
"Please remove uninformative ones such as greetings, self-introductions, and unspecific expressions.",
"We explain how to judge the appropriateness of each candidate by means of the example in Figure 2 .",
"After examining the posted question, we can assume that the most important content is \"he is annoyed by the barking of a dog kept in the next house\".",
"On the basis of this assumption, option 6 is the best one, since the headline \"A dog kept in the next house barks from morning\" is enough to help answerers deduce that \"the questioner is annoyed by the sound\".",
"Option 1 is inappropriate because although the answerers might be able to guess that \"the questioner cannot move out\", this matter is not the central one.",
"Option 2 is uninformative because it consists merely of greetings and self-introduction, and option 3, while a question sentence, is unspecific.",
"Option 4 enables answerers to guess that this question is related to an issue involving pets, but they cannot grasp the specific content.",
"Option 5 specifies a likely damage due to the trouble, but the reason (trouble) is more important for the answering than the result (damage).",
"Option 7 directly shows that \"the questioner is annoyed and wants some advice\", but answerers cannot understand why he/she is annoyed.",
"The detailed implementation of our task is as follows.",
"First we randomly sorted the candidates of each question (shown in Figure 2 (a)) to avoid position bias by the workers.",
"We included ten actual questions and a dummy question so that workers would always have to answer one dummy question per every ten actual questions.",
"A dummy question is a question with a clear answer inserted to eliminate fraud workers (i.e., workers who randomly select answers without actually reading them).",
"Each question was answered by ten workers so each headline candidate had a vote score from 0 to 10 representing whether or not the candidate was appropriate for a headline.",
"This task took nine days and was answered by 1,558 workers.",
"As a result, our dataset consists of 10,000 questions, each of which has more than five headline candidates with accompanying \"headline-ness\" scores.",
"Analysis of Crowdsourcing Results We analyzed our dataset to determine how much room for improvement our task has compared to the prefix headline.",
"It is well known that the first sentence can be a strong baseline in many summarization tasks, so the prefix headline is also expected to perform well.",
"Figure 3 shows the statistical information based on sentence position that includes the (a) ratio of the true best (most voted) candidates, (b) average votes, and (c) average rank (in order of votes) over the candidates at each sentence position.",
"Looking at the true best candidates (a), the ratio 61.8% for the 1st sentence clarifies the effectiveness of the prefix headline, as expected.",
"Conversely, we still have room for improvement for the prefix headline up to 38.2%.",
"Our goal is to improve the uninformative headlines of 38.2% while keeping the remaining 61.8% unchanged.",
"The other two figures (b) and (c) also support the above discussion.",
"Furthermore, we qualitatively checked the crowdsourcing quality.",
"Workers successfully eliminated uninformative candidates including greetings and self-introductions, while one or two workers sometimes chose ones that included a fixed phrase such as \"Please tell me ...\".",
"This is probably because workers had different criteria regarding the \"unspecific expressions\" described in the instructions.",
"Since we cannot enumerate all of the concrete bad examples, we ignore this phenomenon with the expectation that a learning algorithm will reduce its negative effect.",
"Proposed Method In this section, we explain how to construct a headline generation model from the dataset presented in Section 3.",
"We took a ranking approach, i.e., learning to rank, for our task, rather than a simple regression one, since estimating absolute scores is not required for our purpose.",
"Even if two headline candidates (of different questions) have the same expression, their votes can be significantly different since the votes in our dataset are based on relative evaluation.",
"For example, the best candidate for Figure 2 was No. 6, but it might not be selected in other questions such as \"A dog kept in the next house barks from morning. Does anybody know why dogs generally want to bark?\".",
"Learning to rank is an application of machine learning that is typically used for ranking models in information retrieval systems.",
"The ranking models are basically learned from a supervised dataset consisting of triples (q, x, y), where q is a user's query, x is a document corresponding to q, and y is a relevance score of x with respect to q.",
"In this work, we formalize our task by regarding q, x, and y, as a posted question, a headline candidate, and a voted score in our dataset, respectively.",
"We used a pairwise ranking method that is also implemented as an instance of the well-known SVM tools LIBLINEAR and LIBSVM (Kuo et al., 2014) .",
"We used a linear model based on LIBLINEAR, an L2-regularized L2-loss linear rankSVM, for the experiments.",
"Let D be a dataset that consists of triples including a posted question q, a headline candidate x, and a voted score y, i.e., (q, x, y) ∈ D. In the pairwise ranking method, we train a ranking model as a binary classifier that determines whether the condition y_i > y_j is true or false for two candidates x_i and x_j in the same question (q_i = q_j).",
"Specifically, we first define the index pairs of positive examples by P = {(i, j) | q_i = q_j, y_i > y_j, (x_i, y_i, q_i) ∈ D, (x_j, y_j, q_j) ∈ D}.",
"Note that we do not need to consider the negative examples N = {(j, i) | (i, j) ∈ P } since they yield the same formula as P in the optimization process.",
"The training of the pairwise ranking method is achieved by solving the following optimization problem using the set P of the index pairs: min_w (1/2) w^⊤w + C Σ_{(i,j)∈P} ℓ(w^⊤x̄_i − w^⊤x̄_j), (3) where w is a weight vector to be learned, x̄_i is a feature vector extracted from a headline candidate x_i, and C is the regularization parameter.",
"The function ℓ is a squared hinge loss, which is defined as ℓ(d) = max(0, 1 − d)^2.",
"Finally, we define the score function in Eq. (1) as f_q(x) = w^⊤x̄, where x̄ can be created by using q as well as x.",
"This score means the relative \"headline-ness\" of x.",
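A minimal pairwise rankSVM sketch for Eq. (3), assuming toy two-dimensional features, an illustrative learning rate, and a fixed epoch count (the paper itself relies on a LIBLINEAR-based rankSVM implementation, not this hand-rolled loop):

```python
# Pairwise rankSVM via subgradient descent on the squared hinge loss
# l(d) = max(0, 1 - d)^2 over score differences (Eq. 3).
# Learning rate and epoch count are illustrative assumptions.

def train_rank_svm(examples, C=0.125, lr=0.01, epochs=200):
    """examples: list of (question_id, feature_vector, votes)."""
    dim = len(examples[0][1])
    w = [0.0] * dim
    # Positive index pairs P: same question, y_i > y_j.
    pairs = [(xi, xj)
             for qi, xi, yi in examples
             for qj, xj, yj in examples
             if qi == qj and yi > yj]
    for _ in range(epochs):
        grad = list(w)  # gradient of the regularizer (1/2)||w||^2 is w
        for xi, xj in pairs:
            d = sum(wk * (a - b) for wk, a, b in zip(w, xi, xj))
            if d < 1.0:  # pair is inside the margin, so the loss is active
                coef = -2.0 * C * (1.0 - d)
                for k in range(dim):
                    grad[k] += coef * (xi[k] - xj[k])
        w = [wk - lr * g for wk, g in zip(w, grad)]
    return w

def score(w, x):
    """f_q(x) = w^T x, the relative 'headline-ness' of a candidate."""
    return sum(wk * xk for wk, xk in zip(w, x))

# Toy data: two questions with two candidates each; the first feature
# correlates with the vote counts, the second is noise.
data = [("q1", [1.0, 0.2], 8), ("q1", [0.1, 0.9], 2),
        ("q2", [0.9, 0.5], 7), ("q2", [0.2, 0.4], 1)]
w = train_rank_svm(data)
best = max(data[:2], key=lambda e: score(w, e[1]))
print(best[2])  # the highly voted candidate of q1 is ranked first: 8
```

This loop only mirrors the objective; in practice one would train with the LIBLINEAR rankSVM solver the authors cite.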
"Experiments Basic Settings The basic settings of the experiments are as follows.",
"We split our dataset into training and test sets consisting of 9,000 and 1,000 examples, respectively.",
"We used an implementation 3 based on LIBLINEAR for training our ranking model, i.e., a linear L2-regularized L2-loss rankSVM model, as described in Section 4.",
"The regularization parameter was optimized by cross validation and set as C = 0.125.",
"The feature vector for a headline candidate consists of three kinds of features: bag-of-words, embedding, and position information.",
"The bag-of-words feature is a sparse vector of 30,820 dimensions based on the tf-idf scores of nouns, verbs, interjections, conjunctions, adverbs, and adjectives in a candidate, where we used a Japanese morphological analyzer, MeCab 4 (Kudo et al., 2004) , with a neologism dictionary, NEologd 5 (Toshinori Sato and Okumura, 2017).",
"The embedding feature is a dense vector of 100 dimensions based on a doc2vec model (Le and Mikolov, 2014) trained with all 3M sentences in the Chiebukuro dataset using the Gensim tool 6 .",
"The position feature is a binary vector of ten dimensions, where each dimension represents the coarse position (or coverage) of a headline candidate for a question.",
"Specifically, we equally split a question (character sequence) into ten parts and set one to each dimension if and only if the corresponding part overlaps a candidate.",
"For example, candidate No. 2 in Figure 2 had a position feature (1, 1, 0, · · · , 0), since the candidate covers the first 2/10 of the whole question.",
"Similarly, No. 6 and No. 3 had (0, 0, 1, 1, 0, · · · , 0) and (0, · · · , 0, 1), respectively.",
"For constructing the feature vector of each headline candidate, we used the previous and next candidates in sentence order, in addition to the target candidate.",
"This is based on the idea that near candidates might have useful information for the target candidate.",
"Finally, we prepared each feature vector by concatenating nine feature vectors, i.e., the above three kinds of features for three candidates, and normalizing them.",
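The position feature above can be sketched as follows; representing each candidate by its character offsets is an assumption about the input format:

```python
# Ten-dimensional binary position feature (Section 5.1): split the question
# into ten equal character spans and mark every span the candidate overlaps.
# Character offsets are an assumed representation of candidate positions.

def position_feature(question_len, cand_start, cand_end, dims=10):
    """cand_start / cand_end are character offsets (end exclusive)."""
    feat = [0] * dims
    for d in range(dims):
        span_start = question_len * d / dims
        span_end = question_len * (d + 1) / dims
        if cand_start < span_end and cand_end > span_start:  # overlap test
            feat[d] = 1
    return feat

# A candidate covering the first 2/10 of a 100-character question:
print(position_feature(100, 0, 20))   # [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
# A candidate at the very end of the question:
print(position_feature(100, 95, 100)) # [0, 0, 0, 0, 0, 0, 0, 0, 0, 1]
```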
"Compared Methods We compared our method, MLRank, with the baselines listed below.",
"Prefix, DictDel, and Random are simple baselines, while Prefix and DictDel are practically strong.",
"ImpTfidf, SimTfidf, SimEmb, and LexRank are unsupervised baselines, and SVM and SVR are supervised ones.",
"• Prefix: Selects the first candidate in sentence order.",
"• DictDel: Selects the first (informative) candidate that does not match in the dictionary of uninformative headlines (Section 2).",
"• Random: Randomly selects a candidate.",
"• ImpTfidf: Selects the most important candidate with the highest tf-idf value, where a tf-idf value is calculated by the sum of the elements in a bag-of-words feature (described in Section 5.1).",
"• SimTfidf: Selects the most similar candidate to the original question, which is calculated by the cosine similarity between the bag-of-words features (in Section 5.1) of each candidate and the question.",
"• SimEmb: An embedding-based variation of SimTfidf with embedding features (in Section 5.1).",
"• LexRank: Selects the candidate with the highest score based on LexRank 7 (Erkan and Radev, 2004) , which is a widely used unsupervised extractive summarization method based on the PageRank algorithm.",
"The graph expression of each question was constructed on the basis of cosine similarity of the tf-idf vectors corresponding to candidates.",
"• SVM: Selects the candidate with the highest confidence based on a model learned as a classification task, where candidates with nonzero votes were labeled as positive.",
"This setting was the best in our preliminary experiments.",
"We used the L2-regularized L2-loss support vector classification model (C = 0.0156) in LIBLINEAR.",
"The other settings were the same as those described in Section 5.1.",
"• SVR: Selects the candidate with the highest predicted votes based on a model learned as a regression task, where the target variable is the number of votes.",
"We used the L2-regularized L2-loss support vector regression model (C = 0.0625) in LIBLINEAR.",
"The other settings were the same as above.",
"• MLRank: Proposed method described in Section 4.",
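The LexRank baseline among the methods above can be sketched as a power iteration over a cosine-similarity graph; the damping factor, the toy vectors, and the threshold-free (continuous) variant are assumptions:

```python
import math

def cosine(a, b):
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den if den else 0.0

def lexrank(vectors, damping=0.85, iters=50):
    """PageRank-style power iteration over a cosine-similarity graph."""
    n = len(vectors)
    sim = [[cosine(vi, vj) for vj in vectors] for vi in vectors]
    trans = [[s / sum(row) for s in row] for row in sim]  # row-stochastic
    scores = [1.0 / n] * n
    for _ in range(iters):
        scores = [(1 - damping) / n +
                  damping * sum(scores[i] * trans[i][j] for i in range(n))
                  for j in range(n)]
    return scores

# Toy tf-idf vectors: the first two candidates are mutually similar,
# the third is an outlier.
vecs = [[1.0, 0.9, 0.0], [0.9, 1.0, 0.1], [0.0, 0.1, 1.0]]
scores = lexrank(vecs)
print(max(range(3), key=lambda i: scores[i]))  # index of the most central candidate
```

The baseline in the paper selects the single candidate with the highest score, so the outlier candidate receives the lowest rank here.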
"Evaluation Measures We defined three evaluation measures for evaluating each method on our headline generation task.",
"Change Rate from Prefix Headline We measured how much each method changed the default prefix headline in order to determine the effect of application to an actual CQA service.",
"We defined this measure as change rate from the prefix headline, as follows:",
"Change rate = (No. of questions where the best candidate is not the prefix headline) / (No. of all questions). (4)",
"Clearly, the change rate of the default method that selects the prefix headline is 0%.",
"If the value is small, the effect on the service will be small, but if the value is higher than the ideal change rate of 38.2% (Section 3), there can be side effects even if the average result is good.",
"A higher change rate up to the ideal rate is desirable from a practical standpoint.",
"Winning Rate against Prefix Headline We measured how much each method won against the prefix headline to directly assess the quality of changed headlines.",
"We defined this measure as winning rate against the prefix headline, as follows:",
"Winning rate = (No. of questions where the best candidate got more votes than the prefix headline) / (No. of questions where the best candidate is not the prefix headline). (5)",
"We did not consider the first candidate (in sentence order) selected by each method, which is the same as the prefix headline, since they obviously have the same number of votes.",
"Average Votes We measured how appropriate the candidates selected by each method are in order to determine the overall performance.",
"We defined this measure as average votes, as follows:",
"Average votes = (Sum of votes for the best candidates for all questions) / (No. of questions). (6)",
"Note that the average votes score is different from the average votes score for position (Figure 3(b) ) in that the former is the average over the selected candidates while the latter is the average over the candidates at a sentence position.",
"This measure is related to (normalized) discounted cumulative gain (DCG), which is widely used as an evaluation measure of ranking models.",
"We often use DCG@k for evaluating top-k rankings, and the above definition actually corresponds to DCG@1.",
"According to a well-known paper (Järvelin and Kekäläinen, 2002) in the information retrieval field, DCG is appropriate for graded-relevance judgments like our task, while precision (described below) is appropriate for binary-relevance ones.",
"Average votes is expected to be more appropriate than precision for our task because we want \"averagely better headlines than default ones\" rather than \"best ones\" from a practical standpoint.",
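For reference, the standard DCG computation (Järvelin and Kekäläinen, 2002) that the average-votes measure relates to; DCG@1 reduces to the votes of the top-ranked candidate. The toy vote list is an assumption:

```python
import math

def dcg_at_k(relevances, k):
    """Standard DCG with the log2 discount; ranks are 1-based."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

votes_in_predicted_order = [7, 3, 0, 1]  # toy vote counts, best-ranked first
print(dcg_at_k(votes_in_predicted_order, 1))  # DCG@1 is the top item's votes: 7.0
print(round(dcg_at_k(votes_in_predicted_order, 3), 3))
```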
"Precision Precision is a widely used evaluation measure for classification tasks, and we added it to support an evaluation based on average votes.",
"We defined it with respect to the best candidate, i.e., precision@1, as follows:",
"Precision = (No. of questions where the best candidate had the maximum votes) / (No. of questions). (7)",
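The four measures of Section 5.3 can be sketched together; the per-question data layout and the toy values are assumptions:

```python
# Sketch of the four evaluation measures in Section 5.3. Each question is
# represented as (votes_per_candidate, chosen_index), with candidate 0 being
# the prefix headline; this data layout and the toy values are assumptions.

def evaluate(questions):
    n = len(questions)
    changed = [(v, c) for v, c in questions if c != 0]
    wins = sum(1 for v, c in changed if v[c] > v[0])
    return {
        "change_rate": len(changed) / n,
        "winning_rate": wins / len(changed) if changed else 0.0,
        "average_votes": sum(v[c] for v, c in questions) / n,
        "precision": sum(1 for v, c in questions if v[c] == max(v)) / n,
    }

toy = [
    ([3, 8, 1], 1),  # changed; wins against prefix; best candidate chosen
    ([6, 2, 0], 0),  # unchanged prefix; best candidate anyway
    ([5, 4, 9], 2),  # changed; wins; best candidate chosen
    ([7, 2, 2], 1),  # changed; loses against prefix; not the best
]
m = evaluate(toy)
print(m)  # change_rate 0.75, winning_rate ~0.667, average_votes 6.25, precision 0.75
```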
"Table 2 shows examples of headlines generated by the prefix method, Prefix, and the proposed method, MLRank.",
"Looking at the first example, we can see that our method successfully eliminated the selfintroduction phrase (\"I am a 27-year-old woman\").",
"The headline (right) of our method allows answerers to know that the questioner is discouraged about how to encounter men from the phrase \"little chance of new encounters with men\", while the headline (left) of the prefix method lacks this important clue.",
"The second and third examples show similar effects to the first example.",
"In the second one, although there are few clues about what the question is with the prefix method, our method correctly included the important clue (\"honeymoon\").",
"In the third one, our method appropriately eliminated the uninformative long phrase (\"I am sorry if the category is wrong\"), which is not a frequent fixed phrase.",
"The fourth example shows a slightly challenging case, where both headlines make it difficult to understand the question.",
"However, the headline of our method included the term \"winning bidder\", so at least the answerer can assume that the question is about some sort of auction trouble.",
"The fifth example is a clearly successful result, where our method extracted the main question point about \"welfare pension\" as a headline.",
"These results qualitatively demonstrate the effectiveness of our method.",
"Results Qualitative Analysis Quantitative Analysis We compared our method MLRank with the baselines in Section 5.2 on the headline generation task for our dataset in Section 3.",
"Table 3 shows the evaluation results based on the change rates, winning rates, average votes, and precision.",
"Looking at the average votes and precision, which represent the overall performances, our method MLRank clearly performed the best among all methods.",
"Table 3: Evaluation results of our headline generation task for proposed method MLRank and baselines.",
"We confirmed that the relative improvement of the average votes of our method MLRank against every baseline including the prefix method Prefix is statistically significant on the basis of a one-tailed Wilcoxon signed-rank test (p < 0.01).",
"The change and winning rates of our method are 9.9% and 94.9%, respectively.",
"This means that our method detected 9.9% of the uninformative headlines and improved them with the high accuracy of 94.9%.",
"In other words, our method could successfully improve the overall performance while simultaneously avoiding any negative side effects.",
"The ideal results (Ref) based on correct labels suggest that our method still has room for improvement, especially for the change rate.",
"The results of the other baselines are as follows.",
"Not surprisingly, the prefix method Prefix performed well.",
"This is consistent with the fact that the first sentence can be a good summary in many summarization tasks.",
"The random method Random performed the worst, also as expected.",
"The dictionarybased deletion method DictDel was relatively useful, although the change rate was small.",
"The reason the winning rate of DictDel is relatively low compared with MLRank is that there are some cases where a combination of uninformative expressions can yield likely clues.",
"For example, the self-introduction \"I am a newbie of this forum\" itself is basically uninformative for a question, but a combination with additional information such as \"I am a newbie of this forum.",
"Where can I change the password ...\" can be more informative than only the additional information \"... Where can I change the password since I forgot it after ...\" because the combination specifies the target site by the expression \"this forum\".",
"The unsupervised methods, ImpTfidf, SimTfidf, SimEmb, and LexRank, which are widely used for summarization tasks, performed worse than the prefix method Prefix.",
"Although the change rates are higher than our method, the winning rates are lower.",
"In other words, they yielded many bad headlines.",
"These results suggest that supervised learning specialized to the target task would be required.",
"Comparing the important sentence extraction method ImpTfidf and the similarity-based summarization method SimTfidf, we found that SimTfidf performed better.",
"This implies that the content information of each question is useful for our headline generation task, as is the case with other summarization tasks.",
"The similarity-based method SimEmb with embeddings performed worse than our expectation.",
"The reason seems to be that it was difficult to obtain meaningful document embeddings from long questions.",
"The graph-based method LexRank had a similar performance to SimTfidf, because LexRank tends to select a candidate similar to the question when only one candidate was selected.",
"The supervised methods, SVM and SVR, performed relatively well compared to the unsupervised methods, but they did not outperform the strong simple baselines, Prefix and DictDel, nor our method MLRank.",
"These results support the appropriateness of our approach.",
"Related Work In this section, we briefly explain several related studies from two aspects: headline generation task and CQA data.",
"As discussed below, our work is the first attempt to address an extractive headline generation task for a CQA service based on learning to rank the substrings of a question.",
"After Rush et al. (2015) proposed a neural headline generation model, there have been many studies on the same headline generation task (Takase et al., 2016; Chopra et al., 2016; Kiyono et al., 2017; Ayana et al., 2017; Raffel et al., 2017).",
"However, all of them are abstractive methods that can yield erroneous output, and the training for them requires a lot of paired data, i.e., news articles and headlines.",
"There have also been several classical studies based on nonneural approaches to headline generation (Woodsend et al., 2010; Alfonseca et al., 2013; Colmenares et al., 2015) , but they basically addressed sentence compression after extracting important linguistic units such as phrases.",
"In other words, their methods can still yield erroneous output, although they would be more controllable than neural models.",
"One exception is the work of Alotaiby (2011) , where fixed-sized substrings were considered for headline generation.",
"Although that approach is similar to ours, Alotaiby only considered an unsupervised method based on similarity to the original text (almost the same as SimTfidf in Section 5.2), in contrast to our proposal based on learning to rank.",
"This implies that Alotaiby's method will also not perform well for our task, as shown in Section 5.4.",
"There have been several studies on extractive summarization (Kobayashi et al., 2015; Yogatama et al., 2015) based on sentence embeddings, but they were basically developed for extracting multiple sentences, which means that these methods are almost the same as SimEmb in Section 5.2 for our purpose, i.e., extraction of the best candidate.",
"This also implies that they will not be suitable for our task.",
"Furthermore, recent sophisticated neural models for extractive summarization (Cheng and Lapata, 2016; Nallapati et al., 2017) basically require large-scale paired data (e.g., article-headline) to automatically label candidates, as manual annotation is very costly.",
"However, such paired data do not always exist for real applications, as in our task described in Section 1.",
"There have been many studies using CQA data, but most of them are different from our task, i.e., dealing with answering questions (Surdeanu et al., 2008; Celikyilmaz et al., 2009; Bhaskar, 2013; Nakov et al., 2017) , retrieving similar questions (Lei et al., 2016; Romeo et al., 2016; Nakov et al., 2017) , and generating questions (Heilman and Smith, 2010) .",
"Tamura et al. (2005) focused on extracting a core sentence and identifying the question type as classification tasks for answering multiple-sentence questions.",
"Although their method is useful to retrieve important information, we cannot directly use it since our task requires shorter expressions for headlines than sentences.",
"In addition, they used a support vector machine as a classifier, which is almost the same as SVM in Section 5.2, and it is not expected to be suitable for our task, as shown in Section 5.4.",
"The work of Ishigaki et al. (2017) is the most related one in that they summarized lengthy questions by using both abstractive and extractive approaches.",
"Their work is promising because our task is regarded as the construction of short summaries, but the training of their models requires a lot of paired data consisting of questions and their headlines, which means that their method cannot be applied to our task.",
"Conclusion We proposed an extractive headline generation method based on learning to rank for CQA that extracts the most informative substring in each question as its headline.",
"We created a dataset for our task, where headline candidates in each question are ranked using crowdsourcing.",
"Our method outperformed several baselines, including a prefix-based method, which is widely used for cases where the display area is limited, such as the push notifications on smartphones.",
"The dataset created for our headline generation task will be made publicly available 8 .",
"Although our task is basically designed for extractive summarization, this dataset can also be used for abstractive summarization as a side information for training abstractive models.",
"In future work, we will investigate how effectively our method can perform in practical situations, e.g., push notifications.",
"In addition, we will consider how to improve the change rate of our method while keeping its winning rate and how to create a useful dataset even if removing the length limitation."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"5.1",
"5.2",
"5.3",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Negative Effect of Uninformative Headlines",
"Dataset Creation",
"Preparation of Headline Candidates",
"Crowdsourcing Task",
"Analysis of Crowdsourcing Results",
"Proposed Method",
"Basic Settings",
"Compared Methods",
"Evaluation Measures",
"Related Work",
"Conclusion"
]
}
|
GEM-SciDuet-train-50#paper-1080#slide-4
|
Advantages of Snippet Headlines
|
Snippet headlines never include generative errors
Headline accept generative errors
A/B testing on Yahoo! Chiebukuro push notifications of smartphones completely avoid generative
Copyright (C) 2019 Yahoo Japan Corporation. All Rights Reserved.
|
Snippet headlines never include generative errors
Headline accept generative errors
A/B testing on Yahoo! Chiebukuro push notifications of smartphones completely avoid generative
Copyright (C) 2019 Yahoo Japan Corporation. All Rights Reserved.
|
[] |
GEM-SciDuet-train-50#paper-1080#slide-5
|
1080
|
Extractive Headline Generation Based on Learning to Rank for Community Question Answering
|
User-generated content such as the questions on community question answering (CQA) forums does not always come with appropriate headlines, in contrast to the news articles used in various headline generation tasks. In such cases, we cannot use paired supervised data, e.g., pairs of articles and headlines, to learn a headline generation model. To overcome this problem, we propose an extractive headline generation method based on learning to rank for CQA that extracts the most informative substring from each question as its headline. Experimental results show that our method outperforms several baselines, including a prefix-based method, which is widely used in real services.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260,
261,
262,
263
],
"paper_content_text": [
"Introduction Community question answering (CQA) is a service where users can post their questions and answer the questions from other users.",
"The quality of a CQA service is inevitably linked to how many questions are answered in a short amount of time.",
"To this end, the headline of a posted question plays a key role, as it is the first thing users see in a question list or push notification on a smartphone.",
"However, questions in a CQA service do not always have appropriate headlines because the questions are written by various users who basically do not have any specialized knowledge in terms of writing such content, in contrast to news articles written by professional editors.",
"In fact, the biggest CQA service in Japan, Yahoo! Chiebukuro, does not even provide an input field for headlines in the submission form of questions, as general users do not have enough patience and tend not to post questions if even just one required field is added to the form.",
"This service alternatively uses the prefix of a question as its headline, as in Figure 1(a), where the headline \" (Nice to meet you. Thank you in advance. At work ...)\" is created from the prefix of the content.",
"Obviously, this headline is uninformative because of the lack of actual content, which is related to an initial unrequited love of a woman in the workplace.",
"Figure 1(b) shows how ineffective such an uninformative headline is for a smartphone push notification, where users would have practically zero motivation to click on it and answer since they cannot imagine what kind of question is being asked.",
"This negative effect has been confirmed on a commercial CQA service (Section 2).",
"In this work, we take an extractive approach for improving uninformative headlines.",
"Although there have recently been many studies on abstractive headline generation (as described in Section 6), we do not follow any of these approaches because the content we deal with has no correct headlines and also the abstractive methods can yield erroneous output.",
"This latter issue is important in practical terms because correct output is critical for a commercial service.",
"If by some chance an erroneous headline, or one including politically incorrect phrases, is sent to all users, the service's credibility can be lost instantly.",
"Therefore, we formalize our task as an extraction problem of a fixed-length substring from a question as its headline.",
"In this setting, we can assume that outputs never include such errors caused by the service, as the outputs are substrings of user-posted questions.",
"Note that they have no coherence errors in selecting multiple sentences as in normal extractive summarization tasks.",
"While it is true that these outputs might contain inappropriate expressions authored by users, this type of error is beyond our scope since it is a different inevitable problem.",
"Furthermore, the situation of \"the service generated an inappropriate expression by itself\" is significantly worse than the situation of \"a user posted an inappropriate question to the service, and the service displayed it\".",
"Therefore, it is difficult to directly use abstractive methods for commercial services from a business standpoint.",
"Our approach involves preparing headline candidates and ranking them.",
"The formal description is as follows.",
"Let q be a target question to be translated into a headline.",
"We first prepare a set S(q) of headline candidates from the question q.",
"Note that the set S(q) is restricted to a set of fixed-length substrings given a length n, i.e., S(q) ⊆ {x | x ⪯ q, |x| = n}, where x ⪯ q means that x is a substring of q.",
"Then we extract the best headline that maximizes a score function f_q(x), which represents the \"headline-ness\" score of a candidate x ∈ S(q) with respect to the target question q, as follows: argmax_{x ∈ S(q)} f_q(x). (1)",
"To ensure simplicity of implementation and understandability from users, we use a set of the fixed-length prefixes of all sentences in a question q as the candidate set S(q) (Section 3).",
"Because the problem is to select the best candidate from among several targets, the score function f_q(x) is naturally trained by learning to rank (Section 4).",
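The candidate extraction and argmax selection of Eq. (1) can be sketched as follows; the sentence splitter and the placeholder scorer are illustrative assumptions standing in for the learned f_q:

```python
import re

# Candidates S(q): fixed-length prefixes of the question's sentences.
def candidates(question, n=20):
    parts = re.split(r"(?<=[.!?])", question)  # split after sentence-final marks
    return [s.strip()[:n] for s in parts if s.strip()]

# Eq. (1): pick the candidate maximizing a score function f_q.
def best_headline(question, score, n=20):
    return max(candidates(question, n), key=score)

q = "Nice to meet you. A dog next door barks all day. What can I do?"
# Placeholder scorer standing in for the learned model: it simply
# penalizes a greeting prefix (a hand-made assumption, not f_q itself).
headline = best_headline(q, lambda c: 0 if c.startswith("Nice") else len(c))
print(headline)  # "A dog next door bark" (a 20-character prefix)
```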
"The main contributions of this paper are as follows.",
"• We report empirical evidence of the negative effect of uninformative headlines on a commercial CQA service (Section 2).",
"We additionally show that our task can reduce uninformative headlines by using simple dictionary matching, which dramatically improves the average answer rate of questions by as much as 2.4 times.",
"• We propose an extractive headline generation method based on learning to rank for CQA (Section 4) that extracts the most informative substring (prefix of mid-sentence) from each question as its headline.",
"To the best of our knowledge, our work is the first attempt to address such a task from a practical standpoint, although there have been many related studies (Section 6).",
"Experimental results show that our method outperforms several baselines, including the dictionary-based method (Section 5).",
"• We create a dataset for our headline generation task (Section 3), where headline candidates extracted from questions are ranked by crowdsourcing with respect to \"headline-ness\", that is, whether or not each headline candidate is appropriate for the headline of the corresponding question.",
"Negative Effect of Uninformative Headlines We conducted A/B testing on the push notifications of smartphones in collaboration with Yahoo! Chiebukuro, as shown in Figure 1(b).",
"We first prepared a dictionary of typical first sentences that cause uninformative headlines.",
"This dictionary was manually selected from frequent first sentences and consists of 913 sentences including greetings such as \" (Good morning)\", \" (Good afternoon)\", and \" (Nice to meet you)\", and fixed phrases such as \" (Can I ask you something)\", \" (Please tell me)\", and \" (Thank you in advance)\".",
"Table 1: Average answer rates of three question groups in A/B testing (changed uninformative: 0.75%; unchanged uninformative: 0.31%; informative: 0.45%).",
"We assumed that a question with a prefix match in the dictionary has an uninformative prefix headline and classified such questions into an uninformative group.",
"For convenience, we also classified the other questions into an informative group, although they might include not so informative headlines.",
"We further randomly divided the uninformative group into two equal groups: changed and unchanged.",
"In the changed uninformative group, each headline is extracted as the prefix of the first (informative) sentence that does not match with the dictionary, which is the same as DictDel explained in Section 5.2.",
"The unchanged group remains uninformative.",
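The dictionary-based grouping used for the changed group (the same idea as DictDel) can be sketched as follows; the tiny dictionary is illustrative, not the actual 913-entry one:

```python
# DictDel-style headline choice: take the prefix of the first sentence that
# is not a known uninformative fixed phrase. The tiny dictionary below is
# illustrative; the real one has 913 entries.

UNINFORMATIVE = {"Good morning.", "Nice to meet you.", "Thank you in advance."}

def informative_headline(sentences, n=20):
    for s in sentences:
        if s not in UNINFORMATIVE:
            return s[:n]
    return sentences[0][:n]  # fall back to the plain prefix headline

q = ["Nice to meet you.", "Thank you in advance.", "My dog barks all night."]
print(informative_headline(q))  # "My dog barks all nig"
```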
"For comparison of these groups, we used the average answer rate over notified questions in each group as an evaluation measure, defined as follows:",
"Average answer rate = (No. of questions answered from the notification) / (No. of notified questions). (2)",
"Note that we use a percentage expression (%) for easy reading.",
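Eq. (2) as a small function; the notification counts below are hypothetical, chosen only to reproduce the reported group rates:

```python
def average_answer_rate(n_answered, n_notified):
    """Eq. (2), expressed as a percentage."""
    return 100.0 * n_answered / n_notified

# Hypothetical notification counts (the real counts are not given here),
# chosen only to reproduce the rates reported in Table 1.
changed = average_answer_rate(750, 100000)    # 0.75%
unchanged = average_answer_rate(310, 100000)  # 0.31%
print(round(changed / unchanged, 1))  # the reported "2.4 times" improvement
```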
"Table 1 shows the evaluation results of the A/B testing during a 1-month period (Feb. 2 - Mar. 4, 2018), where about three million questions were sent to users.",
"Comparing the unchanged uninformative group with the informative group, we can see that the average answer rate of the uninformative questions, 0.31%, is actually lower than that of the informative questions, 0.45%.",
"Comparing the changed and unchanged uninformative groups, the average answer rate of the changed questions, 0.75%, is much higher than that of the unchanged questions, 0.31%.",
"This means that even a simple dictionary-based method can dramatically improve the quality of the uninformative headlines, i.e., by as much as 2.4 times.",
"We confirmed that the difference is statistically significant on a one-tailed Wilcoxon signed-rank test (p < 0.05).",
"Note that the average answer rate represents a conversion rate (or rate of target actions), which is more important than a click-through rate (or rate of initial actions).",
"The average answer rate is one of the most important indicators for a CQA service, while the click-through rate can be meaninglessly high if the service sends headlines that are fake or too catchy.",
"We should point out that low answer rates are sufficient for the service, since it has 44M users: i.e., each question has an average of 2.4 answers, as the service currently has 189M questions and 462M answers.",
"Dataset Creation We created a dataset for our headline generation task based on the Yahoo!",
"Chiebukuro dataset 2 , which is a dataset including questions and answers provided from a Japanese CQA service, Yahoo!",
"Chiebukuro.",
"We first prepared headline candidates from this dataset as in Section 3.1 and then conducted a crowdsourcing task specified in Section 3.2.",
"In Section 3.3, we report the results of the crowdsourcing task.",
"Preparation of Headline Candidates We extracted only questions from the Chiebukuro dataset and split each question into sentences by using punctuation marks (i.e., the exclamation (\" \"), question (\" \"), and full stop (\" \") marks).",
"We regarded 20 Japanese characters that are basically extracted from each sentence as a headline candidate x ∈ S(q) in Eq.",
"(1), since this setting is used for push notifications in the actual service in Figure 1 (b).",
"More specifically, the headline candidate is created as follows: 1.",
"If the sentence is the first one in the question, we extract the first 19 characters and put an ellipsis mark (\" \") at the end.",
"(a) Example of our crowdsourcing task.",
"Posted Question: Nice to meet you, I am a man in my 30s.",
"Please give me your advice on a pressing concern I have.",
"A dog kept in the next house barks from morning to night.",
"Neighbors have given the owner cautions against it, but there is no improvement.",
"This area has only private houses, not rented houses, so I cannot move out.",
"However, I will go crazy if I have to keep enduring this.",
"How can I effectively manage this problem?",
"...",
"Please give me your advice on a pressing concern ... (b) English translation of left example.",
"Figure 2 : Examples of (a) our crowdsourcing task and (b) its English translation.",
"In the case where the length of a candidate is less than 20 characters, we include some of the next sentence in order to maximize use of display space.",
"We included questions with more than five sentences for the purpose of efficiently collecting ranking information.",
"All told, we prepared 10,000 questions containing more than five headline candidates each.",
"Figure 2 shows an example of our crowdsourcing task (a) and its English translation (b).",
"This task involves a posted question and headline candidates corresponding to the question.",
"We asked workers to select the best candidate from options after reading the posted question.",
"A relative evaluation, where workers select the best candidate, was used instead of an absolute evaluation, where workers select a score from 0 to 10 for each candidate, because we wanted to obtain as accurate a headline as possible, and it might be difficult for workers to select an appropriate absolute score.",
"The workers were instructed as follows (English translation): Crowdsourcing Task Various candidate headlines are listed as options for a posted question in a Q&A service.",
"After reading the question, please select the best option from the list so that users can guess the content of the question and distinguish it from other ones.",
"Please remove uninformative ones such as greetings, self-introductions, and unspecific expressions.",
"We explain how to judge the appropriateness of each candidate by means of the example in Figure 2 .",
"After examining the posted question, we can assume that the most important content is \"he is annoyed by the barking of a dog kept in the next house\".",
"On the basis of this assumption, option 6 is the best one, since the headline \"A dog kept in the next house barks from morning\" is enough to help answerers deduce that \"the questioner is annoyed by the sound\".",
"Option 1 is inappropriate because although the answerers might be able to guess that \"the questioner cannot move out\", this matter is not the central one.",
"Option 2 is uninformative because it consists merely of greetings and self-introduction, and option 3, while a question sentence, is unspecific.",
"Option 4 enables answerers to guess that this question is related to an issue involving pets, but they cannot grasp the specific content.",
"Option 5 specifies a likely damage due to the trouble, but the reason (trouble) is more important for the answering than the result (damage).",
"Option 7 directly shows that \"the questioner is annoyed and wants some advice\", but answerers cannot understand why he/she is annoyed.",
"The detailed implementation of our task is as follows.",
"First we randomly sorted the candidates of each question (shown in Figure 2 (a)) to avoid position bias by the workers.",
"We included ten actual questions and a dummy question so that workers would always have to answer one dummy question per every ten actual questions.",
"A dummy question is a question with a clear answer inserted to eliminate fraud workers (i.e., workers who randomly select answers without actually reading them).",
"Each question was answered by ten workers so each headline candidate had a vote score from 0 to 10 representing whether or not the candidate was appropriate for a headline.",
"This task took nine days and was answered by 1,558 workers.",
"As a result, our dataset consists of 10,000 questions, each of which has more than five headline candidates with accompanying \"headline-ness\" scores.",
"Analysis of Crowdsourcing Results We analyzed our dataset to determine how much room for improvement our task has compared to the prefix headline.",
"It is well known that the first sentence can be a strong baseline in many summarization tasks, so the prefix headline is also expected to perform well.",
"Figure 3 shows the statistical information based on sentence position that includes the (a) ratio of the true best (most voted) candidates, (b) average votes, and (c) average rank (in order of votes) over the candidates at each sentence position.",
"Looking at the true best candidates (a), the ratio 61.8% for the 1st sentence clarifies the effectiveness of the prefix headline, as expected.",
"Conversely, we still have room for improvement for the prefix headline up to 38.2%.",
"Our goal is to improve the uninformative headlines of 38.2% while keeping the remaining 61.8% unchanged.",
"The other two figures (b) and (c) also support the above discussion.",
"Furthermore, we qualitatively checked the crowdsourcing quality.",
"Workers successfully eliminated uninformative candidates including greetings and self-introductions, while one or two workers sometimes chose ones that included a fixed phrase such as \"Please tell me ...\".",
"This is probably because workers had different criteria regarding the \"unspecific expressions\" described in the instructions.",
"Since we cannot enumerate all of the concrete bad examples, we ignore this phenomenon with the expectation that a learning algorithm will reduce its negative effect.",
"Proposed Method In this section, we explain how to construct a headline generation model from the dataset presented in Section 3.",
"We took a ranking approach, i.e., learning to rank, for our task, rather than a simple regression one, since estimating absolute scores is not required for our purpose.",
"Even if two headline candidates (of different questions) have the same expression, their votes can be significantly different since the votes in our dataset are based on relative evaluation.",
"For example, the best candidate for Figure 2 was No.",
"6, but it might not be selected in other questions such as \"A dog kept in the next house barks from morning.",
"Does anybody know why dogs generally want to bark?\".",
"Learning to rank is an application of machine learning that is typically used for ranking models in information retrieval systems.",
"The ranking models are basically learned from a supervised dataset consisting of triples (q, x, y), where q is a user's query, x is a document corresponding to q, and y is a relevance score of x with respect to q.",
"In this work, we formalize our task by regarding q, x, and y, as a posted question, a headline candidate, and a voted score in our dataset, respectively.",
"We used a pairwise ranking method that is also implemented as an instance of the well-known SVM tools LIBLINEAR and LIBSVM (Kuo et al., 2014) .",
"We used a linear model based on LIBLINEAR, an L2-regularized L2-loss linear rankSVM, for the experiments.",
"Let D be a dataset that consists of triples including a posted question q, a headline candidate x, and a voted score y, i.e., (q, x, y) ∈ D. In the pairwise ranking method, we train a ranking model as a binary classifier that determines whether the condition y i > y j is true or false for two candidates x i and x j in the same question (q i = q j ).",
"Specifically, we first define the index pairs of positive examples by P = {(i, j) | q i = q j , y i > y j , (x i , y i , q i ) ∈ D, (x j , y j , q j ) ∈ D}.",
"Note that we do not need to consider the negative examples N = {(j, i) | (i, j) ∈ P } since they yield the same formula as P in the optimization process.",
"The training of the pairwise ranking method is achieved by solving the following optimization problem using the set P of the index pairs: min w 1 2 w ⊤ w + C ∑ (i,j)∈P ℓ(w ⊤x i − w ⊤x j ), (3) where w is a weight vector to be learned,x i is a feature vector extracted from a headline candidate x, and C is the regularization parameter.",
"The function ℓ is a squared hinge loss, which is defined as ℓ(d) = max(0, 1 − d) 2 .",
"Finally, we define the score function in Eq.",
"(1) as f q (x) = w ⊤x , wherex can be created by using q as well as x.",
"This score means the relative \"headline-ness\" of x.",
"Experiments Basic Settings The basic settings of the experiments are as follows.",
"We split our dataset into training and test sets consisting of 9,000 and 1,000 examples, respectively.",
"We used an implementation 3 based on LIBLIN-EAR for training our ranking model, i.e., a linear L2-regularized L2-loss rankSVM model , as described in Section 4.",
"The regularization parameter was optimized by cross validation and set as C = 0.125.",
"The feature vector for a headline candidate consists of three kinds of features: bag-of-words, embedding, and position information.",
"The bag-of-words feature is a sparse vector of 30,820 dimensions based on the tf-idf scores of nouns, verbs, interjections, conjunctions, adverbs, and adjectives in a candidate, where we used a Japanese morphological analyzer, MeCab 4 (Kudo et al., 2004) , with a neologism dictionary, NEologd 5 (Toshinori Sato and Okumura, 2017).",
"The embedding feature is a dense vector of 100 dimensions based on a doc2vec model (Le and Mikolov, 2014) trained with all 3M sentences in the Chiebukuro dataset using the Gensim tool 6 .",
"The position feature is a binary vector of ten dimensions, where each dimension represents the coarse position (or coverage) of a headline candidate for a question.",
"Specifically, we equally split a question (character sequence) into ten parts and set one to each dimension if and only if the corresponding part overlaps a candidate.",
"For example, candidate No.",
"2 in Figure 2 had a position feature (1, 1, 0, · · · , 0), since the candidate covers the first 2/10 of the whole question.",
"Similarly, No.",
"6 and No.",
"3 had (0, 0, 1, 1, 0, · · · , 0) and (0, · · · , 0, 1), respectively.",
"For constructing the feature vector of each headline candidate, we used the previous and next candidates in sentence order, in addition to the target candidate.",
"This is based on the idea that near candidates might have useful information for the target candidate.",
"Finally, we prepared each feature vector by concatenating nine feature vectors, i.e., the above three kinds of features for three candidates, and normalizing them.",
"Compared Methods We compared our method, MLRank, with the baselines listed below.",
"Prefix, DictDel, and Random are simple baselines, while Prefix and DictDel are practically strong.",
"ImpTfidf, SimTfidf, SimEmb, and LexRank are unsupervised baselines, and SVM and SVR are supervised ones.",
"• Prefix: Selects the first candidate in sentence order.",
"• DictDel: Selects the first (informative) candidate that does not match in the dictionary of uninformative headlines (Section 2).",
"• Random: Randomly selects a candidate.",
"• ImpTfidf: Selects the most important candidate with the highest tf-idf value, where a tf-idf value is calculated by the sum of the elements in a bag-of-words feature (described in Section 5.1).",
"• SimTfidf: Selects the most similar candidate to the original question, which is calculated by the cosine similarity between the bag-of-words features (in Section 5.1) of each candidate and the question.",
"• SimEmb: An embedding-based variation of SimTfidf with embedding features (in Section 5.1).",
"• LexRank: Selects the candidate with the highest score based on LexRank 7 (Erkan and Radev, 2004) , which is a widely used unsupervised extractive summarization method based on the PageRank algorithm.",
"The graph expression of each question was constructed on the basis of cosine similarity of the tf-idf vectors corresponding to candidates.",
"• SVM: Selects the candidate with the highest confidence based on a model learned as a classification task, where candidates with nonzero votes were labeled as positive.",
"This setting was the best in our preliminary experiments.",
"We used the L2-regularized L2-loss support vector classification model (C = 0.0156) in LIBLINEAR.",
"The other settings were the same as those described in Section 5.1.",
"• SVR: Selects the candidate with the highest predicted votes based on a model learned as a regression task, where the target variable is the number of votes.",
"We used the L2-regularized L2-loss support vector regression model (C = 0.0625) in LIBLINEAR.",
"The other settings were the same as above.",
"• MLRank: Proposed method described in Section 4.",
"Evaluation Measures We defined three evaluation measures for evaluating each method on our headline generation task.",
"Change Rate from Prefix Headline We measured how much each method changed the default prefix headline in order to determine the effect of application to an actual CQA service.",
"We defined this measure as change rate from the prefix headline, as follows: Change rate = No.",
"of questions where the best candidate is not the prefix headline No.",
"of all questions .",
"Clearly, the change rate of the default method that selects the prefix headline is 0%.",
"If the value is small, the effect on the service will be small, but if the value is higher than the ideal change rate of 38.2% (Section 3), there can be side effects even if the average result is good.",
"A higher change rate up to the ideal rate is desirable from a practical standpoint.",
"Winning Rate against Prefix Headline We measured how much each method won against the prefix headline to directly assess the quality of changed headlines.",
"We defined this measure as winning rate against the prefix headline, as follows: Winning rate = No.",
"of questions where the best candidate got more votes than the prefix headline No.",
"of questions where the best candidate is not the prefix headline .",
"(5) We did not consider the first candidate (in sentence order) selected by each method, which is the same as the prefix headline, since they obviously have the same number of votes.",
"Average Votes We measured how appropriate the candidates selected by each method are in order to determine the overall performance.",
"We defined this measure as average votes, as follows: Average votes = Sum of votes for the best candidates for all questions No.",
"of questions .",
"Note that the average votes score is different from the average votes score for position (Figure 3(b) ) in that the former is the average over the selected candidates while the latter is the average over the candidates at a sentence position.",
"This measure is related to (normalized) discounted cumulative gain (DCG), which is widely used as an evaluation measure of ranking models.",
"We often use DCG@k for evaluating top-k rankings, and the above definition acutally corresponds to DCG@1.",
"According to a well-known paper (Järvelin and Kekäläinen, 2002) in the information retrieval field, DCG is appropriate for graded-relevance judgments like our task, while precision (described below) is appropriate for binary-relevance ones.",
"Average votes is expected to be more appropriate than precision for our task because we want \"averagely better headlines than default ones\" rather than \"best ones\" from a practical standpoint.",
"Precision Precision is a widely used evaluation measure for classification tasks, and we added it to support an evaluation based on average votes.",
"We defined it with respect to the best candidate, i.e., precision@1, as follows: Precision = No.",
"of questions where the best candidate had the maximum votes No.",
"of questions .",
"(7) Table 2 shows examples of headlines generated by the prefix method, Prefix, and the proposed method, MLRank.",
"Looking at the first example, we can see that our method successfully eliminated the selfintroduction phrase (\"I am a 27-year-old woman\").",
"The headline (right) of our method allows answerers to know that the questioner is discouraged about how to encounter men from the phrase \"little chance of new encounters with men\", while the headline (left) of the prefix method lacks this important clue.",
"The second and third examples show similar effects to the first example.",
"In the second one, although there are few clues about what the question is with the prefix method, our method correctly included the important clue (\"honeymoon\").",
"In the third one, our method appropriately eliminated the uninformative long phrase (\"I am sorry if the category is wrong\"), which is not a frequent fixed phrase.",
"The fourth example shows a slightly challenging case, where both headlines make it difficult to understand the question.",
"However, the headline of our method included the term \"winning bidder\", so at least the answerer can assume that the question is about some sort of auction trouble.",
"The fifth example is a clearly successful result, where our method extracted the main question point about \"welfare pension\" as a headline.",
"These results qualitatively demonstrate the effectiveness of our method.",
"Results Qualitative Analysis Quantitative Analysis We compared our method MLRank with the baselines in Section 5.2 on the headline generation task for our dataset in Section 3.",
"Table 3 shows the evaluation results based on the change rates, winning rates, average votes, and precision.",
"Looking at the average votes and precision, which represent the overall performances, our method MLRank clearly performed the best among all methods.",
"We confirmed that Table 3 : Evaluation results of our headline generation task for proposed method MLRank and baselines.",
"the relative improvement of the average votes of our method MLRank against every baseline including the prefix method Prefix is statistically significant on the basis of a one-tailed Wilcoxon signed-rank test (p < 0.01).",
"The change and winning rates of our method are 9.9% and 94.9%, respectively.",
"This means that our method detected 9.9% of the uninformative headlines and improved them with the high accuracy of 94.9%.",
"In other words, our method could successfully improve the overall performance while simultaneously avoiding any negative side effects.",
"The ideal results (Ref) based on correct labels suggest that our method still has room for improvement, especially for the change rate.",
"The results of the other baselines are as follows.",
"Not surprisingly, the prefix method Prefix performed well.",
"This is consistent with the fact that the first sentence can be a good summary in many summarization tasks.",
"The random method Random performed the worst, also as expected.",
"The dictionarybased deletion method DictDel was relatively useful, although the change rate was small.",
"The reason the winning rate of DictDel is relatively low compared with MLRank is that there are some cases where a combination of uninformative expressions can yield likely clues.",
"For example, the self-introduction \"I am a newbie of this forum\" itself is basically uninformative for a question, but a combination with additional information such as \"I am a newbie of this forum.",
"Where can I change the password ...\" can be more informative than only the additional information \"... Where can I change the password since I forgot it after ...\" because the combination specifies the target site by the expression \"this forum\".",
"The unsupervised methods, SimTfidf, SimTfidf, SimEmb, and LexRank, which are widely used for summarization tasks, performed worse than the prefix method Prefix.",
"Although the change rates are higher than our method, the winning rates are lower.",
"In other words, they yielded many bad headlines.",
"These results suggest that supervised learning specialized to the target task would be required.",
"Comparing the important sentence extraction method ImpTfidf and the similarity-based summarization method SimTfidf, we found that SimTfidf performed better.",
"This implies that the content information of each question is useful for our headline generation task, as is the case with other summarization tasks.",
"The similarity-based method SimEmb with embeddings performed worse than our expectation.",
"The reason seems to be that it was difficult to obtain meaningful document embeddings from long questions.",
"The graph-based method LexRank had a similar performance to SimTfidf, because LexRank tends to select a candidate similar to the question when only one candidate was selected.",
"The supervised methods, SVM and SVR, performed relatively well compared to the unsupervised methods, but they did not outperform the strong simple baselines, Prefix and DictDel, nor our method MLRank.",
"These results support the appropriateness of our approach.",
"Related Work In this section, we briefly explain several related studies from two aspects: headline generation task and CQA data.",
"As discussed below, our work is the first attempt to address an extractive headline generation task for a CQA service based on learning to rank the substrings of a question.",
"After Rush et al.",
"(2015) proposed a neural headline generation model, there have been many studies on the same headline generation task (Takase et al., 2016; Chopra et al., 2016; Kiyono et al., 2017; Ayana et al., 2017; Raffel et al., 2017) .",
"However, all of them are abstractive methods that can yield erroneous output, and the training for them requires a lot of paired data, i.e., news articles and headlines.",
"There have also been several classical studies based on nonneural approaches to headline generation (Woodsend et al., 2010; Alfonseca et al., 2013; Colmenares et al., 2015) , but they basically addressed sentence compression after extracting important linguistic units such as phrases.",
"In other words, their methods can still yield erroneous output, although they would be more controllable than neural models.",
"One exception is the work of Alotaiby (2011) , where fixed-sized substrings were considered for headline generation.",
"Although that approach is similar to ours, Alotaiby only considered an unsupervised method based on similarity to the original text (almost the same as SimTfidf in Section 5.2), in contrast to our proposal based on learning to rank.",
"This implies that Alotaiby's method will also not perform well for our task, as shown in Section 5.4.",
"There have been several studies on extractive summarization (Kobayashi et al., 2015; Yogatama et al., 2015) based on sentence embeddings, but they were basically developed for extracting multiple sentences, which means that these methods are almost the same as SimEmb in Section 5.2 for our purpose, i.e., extraction of the best candidate.",
"This also implies that they will not be suitable for our task.",
"Furthermore, recent sophisticated neural models for extractive summarization (Cheng and Lapata, 2016; Nallapati et al., 2017) basically require large-scale paired data (e.g., article-headline) to automatically label candidates, as manual annotation is very costly.",
"However, such paired data do not always exist for real applications, as in our task described in Section 1.",
"There have been many studies using CQA data, but most of them are different from our task, i.e., dealing with answering questions (Surdeanu et al., 2008; Celikyilmaz et al., 2009; Bhaskar, 2013; Nakov et al., 2017) , retrieving similar questions (Lei et al., 2016; Romeo et al., 2016; Nakov et al., 2017) , and generating questions (Heilman and Smith, 2010) .",
"Tamura et al.",
"(2005) focused on extracting a core sentence and identifying the question type as classification tasks for answering multiple-sentence questions.",
"Although their method is useful to retrieve important information, we cannot directly use it since our task requires shorter expressions for headlines than sentences.",
"In addition, they used a support vector machine as a classifier, which is almost the same as SVM in Section 5.2, and it is not expected to be suitable for our task, as shown in Section 5.4.",
"The work of Ishigaki et al.",
"(2017) is the most related one in that they summarized lengthy questions by using both abstractive and extractive approaches.",
"Their work is promising because our task is regarded as the construction of short summaries, but the training of their models requires a lot of paired data consisting of questions and their headlines, which means that their method cannot be used to our task.",
"Conclusion We proposed an extractive headline generation method based on learning to rank for CQA that extracts the most informative substring in each question as its headline.",
"We created a dataset for our task, where headline candidates in each question are ranked using crowdsourcing.",
"Our method outperformed several baselines, including a prefix-based method, which is widely used for cases where the display area is limited, such as the push notifications on smartphones.",
"The dataset created for our headline generation task will be made publicly available 8 .",
"Although our task is basically designed for extractive summarization, this dataset can also be used for abstractive summarization as a side information for training abstractive models.",
"In future work, we will investigate how effectively our method can perform in practical situations, e.g., push notifications.",
"In addition, we will consider how to improve the change rate of our method while keeping its winning rate and how to create a useful dataset even if removing the length limitation."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"5.1",
"5.2",
"5.3",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Negative Effect of Uninformative Headlines",
"Dataset Creation",
"Preparation of Headline Candidates",
"Crowdsourcing Task",
"Analysis of Crowdsourcing Results",
"Proposed Method",
"Basic Settings",
"Compared Methods",
"Evaluation Measures",
"Related Work",
"Conclusion"
]
}
|
GEM-SciDuet-train-50#paper-1080#slide-5
|
Related Work of Headline Generation and CQA
|
Our research is first attempt to address extractive headline generation for
CQA service with substring of question based on learning to rank
Copyright (C) 2019 Yahoo Japan Corporation. All Rights Reserved.
|
Our research is first attempt to address extractive headline generation for
CQA service with substring of question based on learning to rank
Copyright (C) 2019 Yahoo Japan Corporation. All Rights Reserved.
|
[] |
GEM-SciDuet-train-50#paper-1080#slide-6
|
1080
|
Extractive Headline Generation Based on Learning to Rank for Community Question Answering
|
User-generated content such as the questions on community question answering (CQA) forums does not always come with appropriate headlines, in contrast to the news articles used in various headline generation tasks. In such cases, we cannot use paired supervised data, e.g., pairs of articles and headlines, to learn a headline generation model. To overcome this problem, we propose an extractive headline generation method based on learning to rank for CQA that extracts the most informative substring from each question as its headline. Experimental results show that our method outperforms several baselines, including a prefix-based method, which is widely used in real services.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260,
261,
262,
263
],
"paper_content_text": [
"Introduction Community question answering (CQA) is a service where users can post their questions and answer the questions from other users.",
"The quality of a CQA service is inevitably linked to how many questions are answered in a short amount of time.",
"To this end, the headline of a posted question plays a key role, as it is the first thing users see in a question list or push notification on a smartphone.",
"However, questions in a CQA service do not always have appropriate headlines because the questions are written by various users who basically do not have any specialized knowledge in terms of writing such content, in contrast to news articles written by professional editors.",
"In fact, the biggest CQA service in Japan, Yahoo!",
"Chiebukuro 1 , does not even provide an input field for headlines in the submission form of questions, as general users do not have enough patience and tend not to post questions if even just one required field is added to the form.",
"This service alternatively uses the prefix of a question as its headline as in Figure 1(a) , where the headline \" (Nice to meet you.",
"Thank you in advance.",
"At work ...)\" is created from the prefix of the content.",
"Obviously, this headline is uninformative because of the lack of actual content, which is related to an initial unrequited love of a woman in the workplace.",
"Figure 1(b) shows how ineffective such an uninformative headline is for a smartphone push notification, where users would have practically zero motivation to click on it and answer since they cannot imagine what kind of question is being asked.",
"This negative effect has been confirmed on a commercial CQA service (Section 2).",
"In this work, we take an extractive approach for improving uninformative headlines.",
"Although there have recently been many studies on abstractive headline generation (as described in Section 6), we do not follow any of these approaches because the content we deal with has no correct headlines and also the abstractive methods can yield erroneous output.",
"This latter issue is important in practical terms because correct output is critical for a commercial service.",
"If by some chance an erroneous headline, or one including politically incorrect phrases, is sent to all users, the service's credibility can be lost instantly.",
"Therefore, we formalize our task as an extraction problem of a fixed-length substring from a question as its headline.",
"In this setting, we can assume that outputs never include such errors caused by the service, as the outputs are substrings of user-posted questions.",
"Note that they have no coherence errors in selecting multiple sentences as in normal extractive summarization tasks.",
"While it is true that these outputs might contain inappropriate expressions authored by users, this type of error is beyond our scope since it is a different inevitable problem.",
"Furthermore, the situation of \"the service generated an inappropriate expression by itself\" is significantly worse than the situation of \"a user posted an inappropriate question to the service, and the service displayed it\".",
"Therefore, it is difficult to directly use abstractive methods for commercial services from a business standpoint.",
"Our approach involves preparing headline candidates and ranking them.",
"The formal description is as follows.",
"Let q be a target question to be translated into a headline.",
"We first prepare a set S(q) of headline candidates from the question q.",
"Note that the set S(q) is restricted to a set of fixed-length substrings given a length n, i.e., S(q) ⊆ {x | x ⪯ q, |x| = n}, where x ⪯ q means that x is a substring of q.",
"Then we extract the best headline that maximizes a score function f_q(x), which represents the \"headline-ness\" score of a candidate x ∈ S(q) with respect to the target question q, as follows: argmax_{x ∈ S(q)} f_q(x). (1)",
"To ensure simplicity of implementation and understandability from users, we use a set of the fixed-length prefixes of all sentences in a question q as the candidate set S(q) (Section 3).",
"Because the problem is to select the best candidate from among several targets, the score function f q (x) is naturally trained by learning to rank (Section 4).",
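As a rough sketch (function and scorer names here are illustrative, not from the paper's code), the selection in Eq. (1) is just an argmax over the candidate set:

```python
def select_headline(question, candidates, score_fn):
    """Eq. (1): pick the candidate x in S(q) that maximizes f_q(x).
    `score_fn` is any "headline-ness" scorer; the toy one below is a placeholder."""
    return max(candidates, key=lambda x: score_fn(question, x))

# Toy scorer: word overlap with the full question (NOT the paper's learned model).
def overlap_score(question, candidate):
    return len(set(question.split()) & set(candidate.split()))
```

In the paper, the scorer is the learned ranking model f_q(x) described in Section 4, not this word-overlap heuristic.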
"The main contributions of this paper are as follows.",
"• We report empirical evidence of the negative effect of uninformative headlines on a commercial CQA service (Section 2).",
"We additionally show that our task can reduce uninformative headlines by using simple dictionary matching, which dramatically improves the average answer rate of questions by as much as 2.4 times.",
"• We propose an extractive headline generation method based on learning to rank for CQA (Section 4) that extracts the most informative substring (prefix of mid-sentence) from each question as its headline.",
"To the best of our knowledge, our work is the first attempt to address such a task from a practical standpoint, although there have been many related studies (Section 6).",
"Experimental results show that our method outperforms several baselines, including the dictionary-based method (Section 5).",
"• We create a dataset for our headline generation task (Section 3), where headline candidates extracted from questions are ranked by crowdsourcing with respect to \"headline-ness\", that is, whether or not each headline candidate is appropriate for the headline of the corresponding question.",
"Negative Effect of Uninformative Headlines We conducted A/B testing on the push notifications of smartphones in collaboration with Yahoo! Chiebukuro, as shown in Figure 1(b).",
"We first prepared a dictionary of typical first sentences that cause uninformative headlines.",
"This dictionary was manually selected from frequent first sentences and consists of 913 sentences including greetings such as \" (Good morning)\", \" (Good afternoon)\", and \" (Nice to meet you)\", and fixed phrases such as \" (Can I ask you something)\", \" (Please tell me)\", and \" (Thank you in advance)\".",
"Table 1: Average answer rates of three question groups in A/B testing — Changed uninformative: 0.75%, Unchanged uninformative: 0.31%, Informative: 0.45%.",
"We assumed that a question with a prefix match in the dictionary has an uninformative prefix headline and classified such questions into an uninformative group.",
"For convenience, we also classified the other questions into an informative group, although they might include not so informative headlines.",
"We further randomly divided the uninformative group into two equal groups: changed and unchanged.",
"In the changed uninformative group, each headline is extracted as the prefix of the first (informative) sentence that does not match with the dictionary, which is the same as DictDel explained in Section 5.2.",
"The unchanged uninformative group remained as-is.",
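The dictionary-based change can be sketched as follows (a simplification with an English toy dictionary; the real dictionary holds 913 Japanese first sentences):

```python
def dict_del_headline(sentences, uninformative, n=20):
    """Return the n-char prefix of the first sentence that does not match the
    dictionary of uninformative first sentences; fall back to the first sentence."""
    for s in sentences:
        if not any(s.startswith(p) for p in uninformative):
            return s[:n]
    return sentences[0][:n]
```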
"For comparison of these groups, we used the average answer rate over notified questions in each group as an evaluation measure, defined as: Average answer rate = (No. of questions answered from the notification) / (No. of notified questions). (2)",
"Note that we use a percentage expression (%) for easy reading.",
"Table 1 shows the evaluation results of the A/B testing during a 1-month period (Feb. 2 – Mar. 4, 2018), where about three million questions were sent to users.",
"Comparing the unchanged uninformative group with the informative group, we can see that the average answer rate of the uninformative questions, 0.31%, is actually lower than that of the informative questions, 0.45%.",
"Comparing the changed and unchanged uninformative groups, the average answer rate of the changed questions, 0.75%, is much higher than that of the unchanged questions, 0.31%.",
"This means that even a simple dictionary-based method can dramatically improve the quality of the uninformative headlines, i.e., by as much as 2.4 times.",
"We confirmed that the difference is statistically significant on a one-tailed Wilcoxon signed-rank test (p < 0.05).",
"Note that the average answer rate represents a conversion rate (or rate of target actions), which is more important than a click-through rate (or rate of initial actions).",
"The average answer rate is one of the most important indicators for a CQA service, while the click-through rate can be meaninglessly high if the service sends headlines that are fake or too catchy.",
"We should point out that low answer rates are sufficient for the service, since it has 44M users: i.e., each question has an average of 2.4 answers, as the service currently has 189M questions and 462M answers.",
"Dataset Creation We created a dataset for our headline generation task based on the Yahoo! Chiebukuro dataset, which is a dataset including questions and answers provided from the Japanese CQA service Yahoo! Chiebukuro.",
"We first prepared headline candidates from this dataset as in Section 3.1 and then conducted a crowdsourcing task specified in Section 3.2.",
"In Section 3.3, we report the results of the crowdsourcing task.",
"Preparation of Headline Candidates We extracted only questions from the Chiebukuro dataset and split each question into sentences by using punctuation marks (i.e., the exclamation (\" \"), question (\" \"), and full stop (\" \") marks).",
"We regarded a string of 20 Japanese characters, basically extracted from the start of each sentence, as a headline candidate x ∈ S(q) in Eq. (1), since this setting is used for push notifications in the actual service in Figure 1(b).",
"More specifically, the headline candidate is created as follows: 1.",
"If the sentence is the first one in the question, we extract the first 19 characters and put an ellipsis mark (\" \") at the end.",
"(a) Example of our crowdsourcing task.",
"Posted Question: Nice to meet you, I am a man in my 30s.",
"Please give me your advice on a pressing concern I have.",
"A dog kept in the next house barks from morning to night.",
"Neighbors have given the owner cautions against it, but there is no improvement.",
"This area has only private houses, not rented houses, so I cannot move out.",
"However, I will go crazy if I have to keep enduring this.",
"How can I effectively manage this problem?",
"...",
"Please give me your advice on a pressing concern ... (b) English translation of left example.",
"Figure 2 : Examples of (a) our crowdsourcing task and (b) its English translation.",
"In the case where the length of a candidate is less than 20 characters, we include some of the next sentence in order to maximize use of display space.",
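The candidate preparation above can be sketched as follows. This is an approximation: the paper splits on Japanese punctuation and counts 20 Japanese characters, while this toy uses English punctuation, and the padding rule for short sentences is simplified to borrowing from the following sentences.

```python
import re

def headline_candidates(question, n=20):
    """One fixed-length candidate per sentence prefix (simplified sketch):
    long prefixes get n-1 chars plus an ellipsis; short sentences are
    padded with text from the sentences that follow."""
    sentences = [s for s in re.split(r"(?<=[.!?])\s*", question) if s]
    candidates = []
    for i in range(len(sentences)):
        rest = " ".join(sentences[i:])
        candidates.append(rest[: n - 1] + "…" if len(rest) > n else rest)
    return candidates
```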
"We included questions with more than five sentences for the purpose of efficiently collecting ranking information.",
"All told, we prepared 10,000 questions containing more than five headline candidates each.",
"Figure 2 shows an example of our crowdsourcing task (a) and its English translation (b).",
"This task involves a posted question and headline candidates corresponding to the question.",
"We asked workers to select the best candidate from options after reading the posted question.",
"A relative evaluation, where workers select the best candidate, was used instead of an absolute evaluation, where workers select a score from 0 to 10 for each candidate, because we wanted to obtain as accurate a headline as possible, and it might be difficult for workers to select an appropriate absolute score.",
"The workers were instructed as follows (English translation): Crowdsourcing Task Various candidate headlines are listed as options for a posted question in a Q&A service.",
"After reading the question, please select the best option from the list so that users can guess the content of the question and distinguish it from other ones.",
"Please remove uninformative ones such as greetings, self-introductions, and unspecific expressions.",
"We explain how to judge the appropriateness of each candidate by means of the example in Figure 2 .",
"After examining the posted question, we can assume that the most important content is \"he is annoyed by the barking of a dog kept in the next house\".",
"On the basis of this assumption, option 6 is the best one, since the headline \"A dog kept in the next house barks from morning\" is enough to help answerers deduce that \"the questioner is annoyed by the sound\".",
"Option 1 is inappropriate because although the answerers might be able to guess that \"the questioner cannot move out\", this matter is not the central one.",
"Option 2 is uninformative because it consists merely of greetings and self-introduction, and option 3, while a question sentence, is unspecific.",
"Option 4 enables answerers to guess that this question is related to an issue involving pets, but they cannot grasp the specific content.",
"Option 5 specifies a likely damage due to the trouble, but the reason (trouble) is more important for the answering than the result (damage).",
"Option 7 directly shows that \"the questioner is annoyed and wants some advice\", but answerers cannot understand why he/she is annoyed.",
"The detailed implementation of our task is as follows.",
"First we randomly sorted the candidates of each question (shown in Figure 2 (a)) to avoid position bias by the workers.",
"We included ten actual questions and a dummy question so that workers would always have to answer one dummy question per every ten actual questions.",
"A dummy question is a question with a clear answer inserted to eliminate fraud workers (i.e., workers who randomly select answers without actually reading them).",
"Each question was answered by ten workers so each headline candidate had a vote score from 0 to 10 representing whether or not the candidate was appropriate for a headline.",
"This task took nine days and was answered by 1,558 workers.",
"As a result, our dataset consists of 10,000 questions, each of which has more than five headline candidates with accompanying \"headline-ness\" scores.",
"Analysis of Crowdsourcing Results We analyzed our dataset to determine how much room for improvement our task has compared to the prefix headline.",
"It is well known that the first sentence can be a strong baseline in many summarization tasks, so the prefix headline is also expected to perform well.",
"Figure 3 shows the statistical information based on sentence position that includes the (a) ratio of the true best (most voted) candidates, (b) average votes, and (c) average rank (in order of votes) over the candidates at each sentence position.",
"Looking at the true best candidates (a), the ratio 61.8% for the 1st sentence clarifies the effectiveness of the prefix headline, as expected.",
"Conversely, we still have room for improvement for the prefix headline up to 38.2%.",
"Our goal is to improve the uninformative headlines of 38.2% while keeping the remaining 61.8% unchanged.",
"The other two figures (b) and (c) also support the above discussion.",
"Furthermore, we qualitatively checked the crowdsourcing quality.",
"Workers successfully eliminated uninformative candidates including greetings and self-introductions, while one or two workers sometimes chose ones that included a fixed phrase such as \"Please tell me ...\".",
"This is probably because workers had different criteria regarding the \"unspecific expressions\" described in the instructions.",
"Since we cannot enumerate all of the concrete bad examples, we ignore this phenomenon with the expectation that a learning algorithm will reduce its negative effect.",
"Proposed Method In this section, we explain how to construct a headline generation model from the dataset presented in Section 3.",
"We took a ranking approach, i.e., learning to rank, for our task, rather than a simple regression one, since estimating absolute scores is not required for our purpose.",
"Even if two headline candidates (of different questions) have the same expression, their votes can be significantly different since the votes in our dataset are based on relative evaluation.",
"For example, the best candidate for Figure 2 was No. 6, but it might not be selected in other questions such as \"A dog kept in the next house barks from morning. Does anybody know why dogs generally want to bark?\".",
"Learning to rank is an application of machine learning that is typically used for ranking models in information retrieval systems.",
"The ranking models are basically learned from a supervised dataset consisting of triples (q, x, y), where q is a user's query, x is a document corresponding to q, and y is a relevance score of x with respect to q.",
"In this work, we formalize our task by regarding q, x, and y, as a posted question, a headline candidate, and a voted score in our dataset, respectively.",
"We used a pairwise ranking method that is also implemented as an instance of the well-known SVM tools LIBLINEAR and LIBSVM (Kuo et al., 2014) .",
"We used a linear model based on LIBLINEAR, an L2-regularized L2-loss linear rankSVM, for the experiments.",
"Let D be a dataset that consists of triples including a posted question q, a headline candidate x, and a voted score y, i.e., (q, x, y) ∈ D. In the pairwise ranking method, we train a ranking model as a binary classifier that determines whether the condition y i > y j is true or false for two candidates x i and x j in the same question (q i = q j ).",
"Specifically, we first define the index pairs of positive examples by P = {(i, j) | q_i = q_j, y_i > y_j, (x_i, y_i, q_i) ∈ D, (x_j, y_j, q_j) ∈ D}.",
"Note that we do not need to consider the negative examples N = {(j, i) | (i, j) ∈ P } since they yield the same formula as P in the optimization process.",
"The training of the pairwise ranking method is achieved by solving the following optimization problem using the set P of the index pairs: min_w (1/2) w^⊤ w + C ∑_{(i,j)∈P} ℓ(w^⊤ x̃_i − w^⊤ x̃_j), (3) where w is a weight vector to be learned, x̃_i is a feature vector extracted from a headline candidate x_i, and C is the regularization parameter.",
"The function ℓ is a squared hinge loss, which is defined as ℓ(d) = max(0, 1 − d)^2.",
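A self-contained sketch of this pairwise objective, with plain gradient descent standing in for the LIBLINEAR solver (all data and names below are toy illustrations, not the paper's implementation):

```python
def train_ranksvm(examples, C=1.0, lr=0.01, epochs=200):
    """examples: list of (question_id, feature_vector, votes).
    Minimizes (1/2)||w||^2 + C * sum over pairs of max(0, 1 - w.(x_i - x_j))^2."""
    dim = len(examples[0][1])
    w = [0.0] * dim
    # Index pairs P: same question, y_i > y_j.
    pairs = [(i, j)
             for i, (qi, _, yi) in enumerate(examples)
             for j, (qj, _, yj) in enumerate(examples)
             if qi == qj and yi > yj]
    for _ in range(epochs):
        grad = list(w)  # gradient of the (1/2)||w||^2 term
        for i, j in pairs:
            xi, xj = examples[i][1], examples[j][1]
            d = sum(wk * (a - b) for wk, a, b in zip(w, xi, xj))
            if d < 1.0:  # squared hinge loss is active
                coef = -2.0 * C * (1.0 - d)
                for k in range(dim):
                    grad[k] += coef * (xi[k] - xj[k])
        w = [wk - lr * gk for wk, gk in zip(w, grad)]
    return w

def rank_score(w, x):
    """The score function f_q(x) = w^T x of a candidate's feature vector."""
    return sum(wk * xk for wk, xk in zip(w, x))
```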
"Finally, we define the score function in Eq. (1) as f_q(x) = w^⊤ x̃, where x̃ can be created by using q as well as x.",
"This score means the relative \"headline-ness\" of x.",
"Experiments Basic Settings The basic settings of the experiments are as follows.",
"We split our dataset into training and test sets consisting of 9,000 and 1,000 examples, respectively.",
"We used an implementation based on LIBLINEAR for training our ranking model, i.e., a linear L2-regularized L2-loss rankSVM model, as described in Section 4.",
"The regularization parameter was optimized by cross validation and set as C = 0.125.",
"The feature vector for a headline candidate consists of three kinds of features: bag-of-words, embedding, and position information.",
"The bag-of-words feature is a sparse vector of 30,820 dimensions based on the tf-idf scores of nouns, verbs, interjections, conjunctions, adverbs, and adjectives in a candidate, where we used a Japanese morphological analyzer, MeCab (Kudo et al., 2004), with a neologism dictionary, NEologd (Toshinori Sato and Okumura, 2017).",
"The embedding feature is a dense vector of 100 dimensions based on a doc2vec model (Le and Mikolov, 2014) trained with all 3M sentences in the Chiebukuro dataset using the Gensim tool.",
"The position feature is a binary vector of ten dimensions, where each dimension represents the coarse position (or coverage) of a headline candidate for a question.",
"Specifically, we equally split a question (character sequence) into ten parts and set one to each dimension if and only if the corresponding part overlaps a candidate.",
"For example, candidate No. 2 in Figure 2 had a position feature (1, 1, 0, · · · , 0), since the candidate covers the first 2/10 of the whole question.",
"Similarly, No. 6 and No. 3 had (0, 0, 1, 1, 0, · · · , 0) and (0, · · · , 0, 1), respectively.",
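A minimal sketch of this position feature, assuming the candidate is given as a character span [start, end) in the question (function and argument names are ours):

```python
def position_feature(q_len, start, end, dims=10):
    """Binary vector with a 1 for each tenth of the question that the
    candidate substring [start, end) overlaps."""
    vec = [0] * dims
    for d in range(dims):
        part_start = q_len * d / dims
        part_end = q_len * (d + 1) / dims
        if start < part_end and end > part_start:
            vec[d] = 1
    return vec
```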
"For constructing the feature vector of each headline candidate, we used the previous and next candidates in sentence order, in addition to the target candidate.",
"This is based on the idea that near candidates might have useful information for the target candidate.",
"Finally, we prepared each feature vector by concatenating nine feature vectors, i.e., the above three kinds of features for three candidates, and normalizing them.",
"Compared Methods We compared our method, MLRank, with the baselines listed below.",
"Prefix, DictDel, and Random are simple baselines, while Prefix and DictDel are practically strong.",
"ImpTfidf, SimTfidf, SimEmb, and LexRank are unsupervised baselines, and SVM and SVR are supervised ones.",
"• Prefix: Selects the first candidate in sentence order.",
"• DictDel: Selects the first (informative) candidate that does not match in the dictionary of uninformative headlines (Section 2).",
"• Random: Randomly selects a candidate.",
"• ImpTfidf: Selects the most important candidate with the highest tf-idf value, where a tf-idf value is calculated by the sum of the elements in a bag-of-words feature (described in Section 5.1).",
"• SimTfidf: Selects the most similar candidate to the original question, which is calculated by the cosine similarity between the bag-of-words features (in Section 5.1) of each candidate and the question.",
"• SimEmb: An embedding-based variation of SimTfidf with embedding features (in Section 5.1).",
"• LexRank: Selects the candidate with the highest score based on LexRank (Erkan and Radev, 2004), which is a widely used unsupervised extractive summarization method based on the PageRank algorithm.",
"The graph expression of each question was constructed on the basis of cosine similarity of the tf-idf vectors corresponding to candidates.",
"• SVM: Selects the candidate with the highest confidence based on a model learned as a classification task, where candidates with nonzero votes were labeled as positive.",
"This setting was the best in our preliminary experiments.",
"We used the L2-regularized L2-loss support vector classification model (C = 0.0156) in LIBLINEAR.",
"The other settings were the same as those described in Section 5.1.",
"• SVR: Selects the candidate with the highest predicted votes based on a model learned as a regression task, where the target variable is the number of votes.",
"We used the L2-regularized L2-loss support vector regression model (C = 0.0625) in LIBLINEAR.",
"The other settings were the same as above.",
"• MLRank: Proposed method described in Section 4.",
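For instance, the SimTfidf baseline can be sketched as follows, with plain term frequency standing in for tf-idf weights (a simplification; names are ours):

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    num = sum(v * b.get(t, 0) for t, v in a.items())
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def sim_tfidf_pick(question_tokens, candidate_tokens_list):
    """Index of the candidate most similar to the whole question."""
    q = Counter(question_tokens)
    sims = [cosine(Counter(c), q) for c in candidate_tokens_list]
    return max(range(len(sims)), key=sims.__getitem__)
```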
"Evaluation Measures We defined three evaluation measures for evaluating each method on our headline generation task.",
"Change Rate from Prefix Headline We measured how much each method changed the default prefix headline in order to determine the effect of application to an actual CQA service.",
"We defined this measure as change rate from the prefix headline, as follows: Change rate = (No. of questions where the best candidate is not the prefix headline) / (No. of all questions). (4)",
"Clearly, the change rate of the default method that selects the prefix headline is 0%.",
"If the value is small, the effect on the service will be small, but if the value is higher than the ideal change rate of 38.2% (Section 3), there can be side effects even if the average result is good.",
"A higher change rate up to the ideal rate is desirable from a practical standpoint.",
"Winning Rate against Prefix Headline We measured how much each method won against the prefix headline to directly assess the quality of changed headlines.",
"We defined this measure as winning rate against the prefix headline, as follows: Winning rate = (No. of questions where the best candidate got more votes than the prefix headline) / (No. of questions where the best candidate is not the prefix headline). (5)",
"We did not consider the first candidate (in sentence order) selected by each method, which is the same as the prefix headline, since they obviously have the same number of votes.",
"Average Votes We measured how appropriate the candidates selected by each method are in order to determine the overall performance.",
"We defined this measure as average votes, as follows: Average votes = (Sum of votes for the best candidates for all questions) / (No. of questions). (6)",
"Note that the average votes score is different from the average votes score for position (Figure 3(b) ) in that the former is the average over the selected candidates while the latter is the average over the candidates at a sentence position.",
"This measure is related to (normalized) discounted cumulative gain (DCG), which is widely used as an evaluation measure of ranking models.",
"We often use DCG@k for evaluating top-k rankings, and the above definition actually corresponds to DCG@1.",
"According to a well-known paper (Järvelin and Kekäläinen, 2002) in the information retrieval field, DCG is appropriate for graded-relevance judgments like our task, while precision (described below) is appropriate for binary-relevance ones.",
"Average votes is expected to be more appropriate than precision for our task because we want \"averagely better headlines than default ones\" rather than \"best ones\" from a practical standpoint.",
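The three measures above can be sketched as a single pass over per-question results (the record layout below is a hypothetical illustration, not the paper's evaluation code):

```python
def evaluate(results):
    """results: per-question tuples (selected_is_prefix, selected_votes, prefix_votes).
    Returns (change rate, winning rate, average votes)."""
    changed = [(sv, pv) for is_pre, sv, pv in results if not is_pre]
    change_rate = len(changed) / len(results)
    winning_rate = (sum(sv > pv for sv, pv in changed) / len(changed)
                    if changed else 0.0)
    average_votes = sum(sv for _, sv, _ in results) / len(results)
    return change_rate, winning_rate, average_votes
```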
"Precision Precision is a widely used evaluation measure for classification tasks, and we added it to support an evaluation based on average votes.",
"We defined it with respect to the best candidate, i.e., precision@1, as follows: Precision = (No. of questions where the best candidate had the maximum votes) / (No. of questions). (7)",
"Table 2 shows examples of headlines generated by the prefix method, Prefix, and the proposed method, MLRank.",
"Looking at the first example, we can see that our method successfully eliminated the selfintroduction phrase (\"I am a 27-year-old woman\").",
"The headline (right) of our method allows answerers to know that the questioner is discouraged about how to encounter men from the phrase \"little chance of new encounters with men\", while the headline (left) of the prefix method lacks this important clue.",
"The second and third examples show similar effects to the first example.",
"In the second one, although there are few clues about what the question is with the prefix method, our method correctly included the important clue (\"honeymoon\").",
"In the third one, our method appropriately eliminated the uninformative long phrase (\"I am sorry if the category is wrong\"), which is not a frequent fixed phrase.",
"The fourth example shows a slightly challenging case, where both headlines make it difficult to understand the question.",
"However, the headline of our method included the term \"winning bidder\", so at least the answerer can assume that the question is about some sort of auction trouble.",
"The fifth example is a clearly successful result, where our method extracted the main question point about \"welfare pension\" as a headline.",
"These results qualitatively demonstrate the effectiveness of our method.",
"Results Qualitative Analysis Quantitative Analysis We compared our method MLRank with the baselines in Section 5.2 on the headline generation task for our dataset in Section 3.",
"Table 3 shows the evaluation results based on the change rates, winning rates, average votes, and precision.",
"Looking at the average votes and precision, which represent the overall performances, our method MLRank clearly performed the best among all methods.",
"We confirmed that Table 3 : Evaluation results of our headline generation task for proposed method MLRank and baselines.",
"the relative improvement of the average votes of our method MLRank against every baseline including the prefix method Prefix is statistically significant on the basis of a one-tailed Wilcoxon signed-rank test (p < 0.01).",
"The change and winning rates of our method are 9.9% and 94.9%, respectively.",
"This means that our method detected 9.9% of the uninformative headlines and improved them with the high accuracy of 94.9%.",
"In other words, our method could successfully improve the overall performance while simultaneously avoiding any negative side effects.",
"The ideal results (Ref) based on correct labels suggest that our method still has room for improvement, especially for the change rate.",
"The results of the other baselines are as follows.",
"Not surprisingly, the prefix method Prefix performed well.",
"This is consistent with the fact that the first sentence can be a good summary in many summarization tasks.",
"The random method Random performed the worst, also as expected.",
"The dictionarybased deletion method DictDel was relatively useful, although the change rate was small.",
"The reason the winning rate of DictDel is relatively low compared with MLRank is that there are some cases where a combination of uninformative expressions can yield likely clues.",
"For example, the self-introduction \"I am a newbie of this forum\" itself is basically uninformative for a question, but a combination with additional information such as \"I am a newbie of this forum.",
"Where can I change the password ...\" can be more informative than only the additional information \"... Where can I change the password since I forgot it after ...\" because the combination specifies the target site by the expression \"this forum\".",
"The unsupervised methods, ImpTfidf, SimTfidf, SimEmb, and LexRank, which are widely used for summarization tasks, performed worse than the prefix method Prefix.",
"Although the change rates are higher than our method, the winning rates are lower.",
"In other words, they yielded many bad headlines.",
"These results suggest that supervised learning specialized to the target task would be required.",
"Comparing the important sentence extraction method ImpTfidf and the similarity-based summarization method SimTfidf, we found that SimTfidf performed better.",
"This implies that the content information of each question is useful for our headline generation task, as is the case with other summarization tasks.",
"The similarity-based method SimEmb with embeddings performed worse than our expectation.",
"The reason seems to be that it was difficult to obtain meaningful document embeddings from long questions.",
"The graph-based method LexRank had a similar performance to SimTfidf, because LexRank tends to select a candidate similar to the question when only one candidate was selected.",
"The supervised methods, SVM and SVR, performed relatively well compared to the unsupervised methods, but they did not outperform the strong simple baselines, Prefix and DictDel, nor our method MLRank.",
"These results support the appropriateness of our approach.",
"Related Work In this section, we briefly explain several related studies from two aspects: headline generation task and CQA data.",
"As discussed below, our work is the first attempt to address an extractive headline generation task for a CQA service based on learning to rank the substrings of a question.",
"After Rush et al. (2015) proposed a neural headline generation model, there have been many studies on the same headline generation task (Takase et al., 2016; Chopra et al., 2016; Kiyono et al., 2017; Ayana et al., 2017; Raffel et al., 2017).",
"However, all of them are abstractive methods that can yield erroneous output, and the training for them requires a lot of paired data, i.e., news articles and headlines.",
"There have also been several classical studies based on nonneural approaches to headline generation (Woodsend et al., 2010; Alfonseca et al., 2013; Colmenares et al., 2015) , but they basically addressed sentence compression after extracting important linguistic units such as phrases.",
"In other words, their methods can still yield erroneous output, although they would be more controllable than neural models.",
"One exception is the work of Alotaiby (2011) , where fixed-sized substrings were considered for headline generation.",
"Although that approach is similar to ours, Alotaiby only considered an unsupervised method based on similarity to the original text (almost the same as SimTfidf in Section 5.2), in contrast to our proposal based on learning to rank.",
"This implies that Alotaiby's method will also not perform well for our task, as shown in Section 5.4.",
"There have been several studies on extractive summarization (Kobayashi et al., 2015; Yogatama et al., 2015) based on sentence embeddings, but they were basically developed for extracting multiple sentences, which means that these methods are almost the same as SimEmb in Section 5.2 for our purpose, i.e., extraction of the best candidate.",
"This also implies that they will not be suitable for our task.",
"Furthermore, recent sophisticated neural models for extractive summarization (Cheng and Lapata, 2016; Nallapati et al., 2017) basically require large-scale paired data (e.g., article-headline) to automatically label candidates, as manual annotation is very costly.",
"However, such paired data do not always exist for real applications, as in our task described in Section 1.",
"There have been many studies using CQA data, but most of them are different from our task, i.e., dealing with answering questions (Surdeanu et al., 2008; Celikyilmaz et al., 2009; Bhaskar, 2013; Nakov et al., 2017) , retrieving similar questions (Lei et al., 2016; Romeo et al., 2016; Nakov et al., 2017) , and generating questions (Heilman and Smith, 2010) .",
"Tamura et al. (2005) focused on extracting a core sentence and identifying the question type as classification tasks for answering multiple-sentence questions.",
"Although their method is useful to retrieve important information, we cannot directly use it since our task requires shorter expressions for headlines than sentences.",
"In addition, they used a support vector machine as a classifier, which is almost the same as SVM in Section 5.2, and it is not expected to be suitable for our task, as shown in Section 5.4.",
"The work of Ishigaki et al.",
"(2017) is the most related one in that they summarized lengthy questions by using both abstractive and extractive approaches.",
"Their work is promising because our task is regarded as the construction of short summaries, but the training of their models requires a lot of paired data consisting of questions and their headlines, which means that their method cannot be used to our task.",
"Conclusion We proposed an extractive headline generation method based on learning to rank for CQA that extracts the most informative substring in each question as its headline.",
"We created a dataset for our task, where headline candidates in each question are ranked using crowdsourcing.",
"Our method outperformed several baselines, including a prefix-based method, which is widely used for cases where the display area is limited, such as the push notifications on smartphones.",
"The dataset created for our headline generation task will be made publicly available 8 .",
"Although our task is basically designed for extractive summarization, this dataset can also be used for abstractive summarization as a side information for training abstractive models.",
"In future work, we will investigate how effectively our method can perform in practical situations, e.g., push notifications.",
"In addition, we will consider how to improve the change rate of our method while keeping its winning rate and how to create a useful dataset even if removing the length limitation."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"5.1",
"5.2",
"5.3",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Negative Effect of Uninformative Headlines",
"Dataset Creation",
"Preparation of Headline Candidates",
"Crowdsourcing Task",
"Analysis of Crowdsourcing Results",
"Proposed Method",
"Basic Settings",
"Compared Methods",
"Evaluation Measures",
"Related Work",
"Conclusion"
]
}
|
GEM-SciDuet-train-50#paper-1080#slide-6
|
Overview of Our Proposed Method
|
Nice to meet you, thank you in advance. I'm a man in my thirties.
...A dog kept in the next house barks from morning to night...
Question Candidates Ranked Headline Candidate Generation Candidate Ranking
Copyright (C) 2019 Yahoo Japan Corporation. All Rights Reserved.
|
Nice to meet you, thank you in advance. I'm a man in my thirties.
...A dog kept in the next house barks from morning to night...
Question Candidates Ranked Headline Candidate Generation Candidate Ranking
Copyright (C) 2019 Yahoo Japan Corporation. All Rights Reserved.
|
[] |
GEM-SciDuet-train-50#paper-1080#slide-7
|
1080
|
Extractive Headline Generation Based on Learning to Rank for Community Question Answering
|
User-generated content such as the questions on community question answering (CQA) forums does not always come with appropriate headlines, in contrast to the news articles used in various headline generation tasks. In such cases, we cannot use paired supervised data, e.g., pairs of articles and headlines, to learn a headline generation model. To overcome this problem, we propose an extractive headline generation method based on learning to rank for CQA that extracts the most informative substring from each question as its headline. Experimental results show that our method outperforms several baselines, including a prefix-based method, which is widely used in real services.
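As a rough illustration of the extractive setting this abstract describes, the candidate-generation and best-candidate-selection steps could be sketched as follows. This is a hypothetical, simplified Python sketch, not the authors' code: the sentence splitter, the fixed 20-character limit applied to English text, and the pluggable scoring function are all illustrative assumptions (the paper scores candidates with a linear rankSVM trained on tf-idf, doc2vec, and position features).

```python
# Hypothetical sketch of extractive headline selection for CQA questions.
# Assumptions (not from the paper's code): regex sentence splitting and a
# caller-supplied score function standing in for the learned ranking model.
import re

HEADLINE_LEN = 20  # the paper uses 20 Japanese characters per headline

def candidates(question, length=HEADLINE_LEN):
    """Fixed-length prefixes of all sentences in the question (the set S(q))."""
    sentences = [s for s in re.split(r'(?<=[.!?])\s+', question.strip()) if s]
    cands = []
    for i, s in enumerate(sentences):
        text = s
        j = i + 1
        # If a sentence is shorter than the limit, borrow from the next
        # sentence to fill the display space, as the paper does.
        while len(text) < length and j < len(sentences):
            text += ' ' + sentences[j]
            j += 1
        cands.append(text[:length])
    return cands

def best_headline(question, score):
    """argmax over candidates of a 'headline-ness' score f_q(x) (Eq. 1)."""
    return max(candidates(question), key=score)
```

In the paper's formulation the `score` argument corresponds to the learned linear model f_q(x) = w⊤x̃; here any callable over a candidate string works, e.g. a toy scorer that penalizes greeting-like openers.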
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260,
261,
262,
263
],
"paper_content_text": [
"Introduction Community question answering (CQA) is a service where users can post their questions and answer the questions from other users.",
"The quality of a CQA service is inevitably linked to how many questions are answered in a short amount of time.",
"To this end, the headline of a posted question plays a key role, as it is the first thing users see in a question list or push notification on a smartphone.",
"However, questions in a CQA service do not always have appropriate headlines because the questions are written by various users who basically do not have any specialized knowledge in terms of writing such content, in contrast to news articles written by professional editors.",
"In fact, the biggest CQA service in Japan, Yahoo!",
"Chiebukuro 1 , does not even provide an input field for headlines in the submission form of questions, as general users do not have enough patience and tend not to post questions if even just one required field is added to the form.",
"This service alternatively uses the prefix of a question as its headline as in Figure 1(a) , where the headline \" (Nice to meet you.",
"Thank you in advance.",
"At work ...)\" is created from the prefix of the content.",
"Obviously, this headline is uninformative because of the lack of actual content, which is related to an initial unrequited love of a woman in the workplace.",
"Figure 1(b) shows how ineffective such an uninformative headline is for a smartphone push notification, where users would have practically zero motivation to click on it and answer since they cannot imagine what kind of question is being asked.",
"This negative effect has been confirmed on a commercial CQA service (Section 2).",
"In this work, we take an extractive approach for improving uninformative headlines.",
"Although there have recently been many studies on abstractive headline generation (as described in Section 6), we do not follow any of these approaches because the content we deal with has no correct headlines and also the abstractive methods can yield erroneous output.",
"This latter issue is important in practical terms because correct output is critical for a commercial service.",
"If by some chance an erroneous headline, or one including politically incorrect phrases, is sent to all users, the service's credibility can be lost instantly.",
"Therefore, we formalize our task as an extraction problem of a fixed-length substring from a question as its headline.",
"In this setting, we can assume that outputs never include such errors caused by the service, as the outputs are substrings of user-posted questions.",
"Note that they have no coherence errors in selecting multiple sentences as in normal extractive summarization tasks.",
"While it is true that these outputs might contain inappropriate expressions authored by users, this type of error is beyond our scope since it is a different inevitable problem.",
"Furthermore, the situation of \"the service generated an inappropriate expression by itself\" is significantly worse than the situation of \"a user posted an inappropriate question to the service, and the service displayed it\".",
"Therefore, it is difficult to directly use abstractive methods for commercial services from a business standpoint.",
"Our approach involves preparing headline candidates and ranking them.",
"The formal description is as follows.",
"Let q be a target question to be translated into a headline.",
"We first prepare a set S(q) of headline candidates from the question q.",
"Note that the set S(q) is restricted to a set of fixed-length substrings given a length n, i.e., S(q) ⊆ {x | x ⪯ q, |x| = n}, where x ⪯ q means that x is a substring of q.",
"Then we extract the best headline that maximizes a score function f q (x), which represents the \"headline-ness\" score of a candidate x ∈ S(q) with respect to the target question q, as follows: argmax x∈S(q) f q (x).",
"(1) To ensure simplicity of implementation and understandability from users, we use a set of the fixed-length prefixes of all sentences in a question q as the candidate set S(q) (Section 3).",
"Because the problem is to select the best candidate from among several targets, the score function f q (x) is naturally trained by learning to rank (Section 4).",
"The main contributions of this paper are as follows.",
"• We report empirical evidence of the negative effect of uninformative headlines on a commercial CQA service (Section 2).",
"We additionally show that our task can reduce uninformative headlines by using simple dictionary matching, which dramatically improves the average answer rate of questions by as much as 2.4 times.",
"• We propose an extractive headline generation method based on learning to rank for CQA (Section 4) that extracts the most informative substring (prefix of mid-sentence) from each question as its headline.",
"To the best of our knowledge, our work is the first attempt to address such a task from a practical standpoint, although there have been many related studies (Section 6).",
"Experimental results show that our method outperforms several baselines, including the dictionary-based method (Section 5).",
"• We create a dataset for our headline generation task (Section 3), where headline candidates extracted from questions are ranked by crowdsourcing with respect to \"headline-ness\", that is, whether or not each headline candidate is appropriate for the headline of the corresponding question.",
"Negative Effect of Uninformative Headlines We conducted A/B testing on the push notifications of smartphones in collaboration with Yahoo!",
"Chiebukuro, as shown in Figure 1 (b).",
"We first prepared a dictionary of typical first sentences that cause uninformative headlines.",
"This dictionary was manually selected from frequent first sentences and consists of 913 sentences including greetings such as \" (Good morning)\", \" Changed uninformative Unchanged uninformative Informative 0.75% 0.31% 0.45% Table 1 : Average answer rates of three question groups in A/B testing.",
"(Good afternoon)\", and \" (Nice to meet you)\", and fixed phrases such as \" (Can I ask you something)\", \" (Please tell me)\", and \" (Thank you in advance)\".",
"We assumed that a question with a prefix match in the dictionary has an uninformative prefix headline and classified such questions into an uninformative group.",
"For convenience, we also classified the other questions into an informative group, although they might include not so informative headlines.",
"We further randomly divided the uninformative group into two equal groups: changed and unchanged.",
"In the changed uninformative group, each headline is extracted as the prefix of the first (informative) sentence that does not match with the dictionary, which is the same as DictDel explained in Section 5.2.",
"The unchanged group remains in uninformative.",
"For comparison of these groups, we used the average answer rate over notified questions in each group as an evaluation measure, defined as Average answer rate = No.",
"of questions answered from the notification No.",
"of notified questions .",
"(2) Note that we use a percentage expression (%) for easy reading.",
"Table 1 shows the evaluation results of the A/B testing during a 1-month period (Feb. 2 -Mar.",
"4, 2018) , where about three million questions were sent to users.",
"Comparing the unchanged uninformative group with the informative group, we can see that the average answer rate of the uninformative questions, 0.31%, is actually lower than that of the informative questions, 0.45%.",
"Comparing the changed and unchanged uninformative groups, the average answer rate of the changed questions, 0.75%, is much higher than that of the unchanged questions, 0.31%.",
"This means that even a simple dictionary-based method can dramatically improve the quality of the uninformative headlines, i.e., by as much as 2.4 times.",
"We confirmed that the difference is statistically significant on a one-tailed Wilcoxon signed-rank test (p < 0.05).",
"Note that the average answer rate represents a conversion rate (or rate of target actions), which is more important than a click-through rate (or rate of initial actions).",
"The average answer rate is one of the most important indicators for a CQA service, while the click-through rate can be meaninglessly high if the service sends headlines that are fake or too catchy.",
"We should point out that low answer rates are sufficient for the service, since it has 44M users: i.e., each question has an average of 2.4 answers, as the service currently has 189M questions and 462M answers.",
"Dataset Creation We created a dataset for our headline generation task based on the Yahoo!",
"Chiebukuro dataset 2 , which is a dataset including questions and answers provided from a Japanese CQA service, Yahoo!",
"Chiebukuro.",
"We first prepared headline candidates from this dataset as in Section 3.1 and then conducted a crowdsourcing task specified in Section 3.2.",
"In Section 3.3, we report the results of the crowdsourcing task.",
"Preparation of Headline Candidates We extracted only questions from the Chiebukuro dataset and split each question into sentences by using punctuation marks (i.e., the exclamation (\" \"), question (\" \"), and full stop (\" \") marks).",
"We regarded 20 Japanese characters that are basically extracted from each sentence as a headline candidate x ∈ S(q) in Eq.",
"(1), since this setting is used for push notifications in the actual service in Figure 1 (b).",
"More specifically, the headline candidate is created as follows: 1.",
"If the sentence is the first one in the question, we extract the first 19 characters and put an ellipsis mark (\" \") at the end.",
"(a) Example of our crowdsourcing task.",
"Posted Question: Nice to meet you, I am a man in my 30s.",
"Please give me your advice on a pressing concern I have.",
"A dog kept in the next house barks from morning to night.",
"Neighbors have given the owner cautions against it, but there is no improvement.",
"This area has only private houses, not rented houses, so I cannot move out.",
"However, I will go crazy if I have to keep enduring this.",
"How can I effectively manage this problem?",
"...",
"Please give me your advice on a pressing concern ... (b) English translation of left example.",
"Figure 2 : Examples of (a) our crowdsourcing task and (b) its English translation.",
"In the case where the length of a candidate is less than 20 characters, we include some of the next sentence in order to maximize use of display space.",
"We included questions with more than five sentences for the purpose of efficiently collecting ranking information.",
"All told, we prepared 10,000 questions containing more than five headline candidates each.",
"Figure 2 shows an example of our crowdsourcing task (a) and its English translation (b).",
"This task involves a posted question and headline candidates corresponding to the question.",
"We asked workers to select the best candidate from options after reading the posted question.",
"A relative evaluation, where workers select the best candidate, was used instead of an absolute evaluation, where workers select a score from 0 to 10 for each candidate, because we wanted to obtain as accurate a headline as possible, and it might be difficult for workers to select an appropriate absolute score.",
"The workers were instructed as follows (English translation): Crowdsourcing Task Various candidate headlines are listed as options for a posted question in a Q&A service.",
"After reading the question, please select the best option from the list so that users can guess the content of the question and distinguish it from other ones.",
"Please remove uninformative ones such as greetings, self-introductions, and unspecific expressions.",
"We explain how to judge the appropriateness of each candidate by means of the example in Figure 2 .",
"After examining the posted question, we can assume that the most important content is \"he is annoyed by the barking of a dog kept in the next house\".",
"On the basis of this assumption, option 6 is the best one, since the headline \"A dog kept in the next house barks from morning\" is enough to help answerers deduce that \"the questioner is annoyed by the sound\".",
"Option 1 is inappropriate because although the answerers might be able to guess that \"the questioner cannot move out\", this matter is not the central one.",
"Option 2 is uninformative because it consists merely of greetings and self-introduction, and option 3, while a question sentence, is unspecific.",
"Option 4 enables answerers to guess that this question is related to an issue involving pets, but they cannot grasp the specific content.",
"Option 5 specifies a likely damage due to the trouble, but the reason (trouble) is more important for the answering than the result (damage).",
"Option 7 directly shows that \"the questioner is annoyed and wants some advice\", but answerers cannot understand why he/she is annoyed.",
"The detailed implementation of our task is as follows.",
"First we randomly sorted the candidates of each question (shown in Figure 2 (a)) to avoid position bias by the workers.",
"We included ten actual questions and a dummy question so that workers would always have to answer one dummy question per every ten actual questions.",
"A dummy question is a question with a clear answer inserted to eliminate fraud workers (i.e., workers who randomly select answers without actually reading them).",
"Each question was answered by ten workers so each headline candidate had a vote score from 0 to 10 representing whether or not the candidate was appropriate for a headline.",
"This task took nine days and was answered by 1,558 workers.",
"As a result, our dataset consists of 10,000 questions, each of which has more than five headline candidates with accompanying \"headline-ness\" scores.",
"Analysis of Crowdsourcing Results We analyzed our dataset to determine how much room for improvement our task has compared to the prefix headline.",
"It is well known that the first sentence can be a strong baseline in many summarization tasks, so the prefix headline is also expected to perform well.",
"Figure 3 shows the statistical information based on sentence position that includes the (a) ratio of the true best (most voted) candidates, (b) average votes, and (c) average rank (in order of votes) over the candidates at each sentence position.",
"Looking at the true best candidates (a), the ratio 61.8% for the 1st sentence clarifies the effectiveness of the prefix headline, as expected.",
"Conversely, we still have room for improvement for the prefix headline up to 38.2%.",
"Our goal is to improve the uninformative headlines of 38.2% while keeping the remaining 61.8% unchanged.",
"The other two figures (b) and (c) also support the above discussion.",
"Furthermore, we qualitatively checked the crowdsourcing quality.",
"Workers successfully eliminated uninformative candidates including greetings and self-introductions, while one or two workers sometimes chose ones that included a fixed phrase such as \"Please tell me ...\".",
"This is probably because workers had different criteria regarding the \"unspecific expressions\" described in the instructions.",
"Since we cannot enumerate all of the concrete bad examples, we ignore this phenomenon with the expectation that a learning algorithm will reduce its negative effect.",
"Proposed Method In this section, we explain how to construct a headline generation model from the dataset presented in Section 3.",
"We took a ranking approach, i.e., learning to rank, for our task, rather than a simple regression one, since estimating absolute scores is not required for our purpose.",
"Even if two headline candidates (of different questions) have the same expression, their votes can be significantly different since the votes in our dataset are based on relative evaluation.",
"For example, the best candidate for Figure 2 was No.",
"6, but it might not be selected in other questions such as \"A dog kept in the next house barks from morning.",
"Does anybody know why dogs generally want to bark?\".",
"Learning to rank is an application of machine learning that is typically used for ranking models in information retrieval systems.",
"The ranking models are basically learned from a supervised dataset consisting of triples (q, x, y), where q is a user's query, x is a document corresponding to q, and y is a relevance score of x with respect to q.",
"In this work, we formalize our task by regarding q, x, and y, as a posted question, a headline candidate, and a voted score in our dataset, respectively.",
"We used a pairwise ranking method that is also implemented as an instance of the well-known SVM tools LIBLINEAR and LIBSVM (Kuo et al., 2014) .",
"We used a linear model based on LIBLINEAR, an L2-regularized L2-loss linear rankSVM, for the experiments.",
"Let D be a dataset that consists of triples including a posted question q, a headline candidate x, and a voted score y, i.e., (q, x, y) ∈ D. In the pairwise ranking method, we train a ranking model as a binary classifier that determines whether the condition y i > y j is true or false for two candidates x i and x j in the same question (q i = q j ).",
"Specifically, we first define the index pairs of positive examples by P = {(i, j) | q i = q j , y i > y j , (x i , y i , q i ) ∈ D, (x j , y j , q j ) ∈ D}.",
"Note that we do not need to consider the negative examples N = {(j, i) | (i, j) ∈ P } since they yield the same formula as P in the optimization process.",
"The training of the pairwise ranking method is achieved by solving the following optimization problem using the set P of the index pairs: min w 1 2 w ⊤ w + C ∑ (i,j)∈P ℓ(w ⊤x i − w ⊤x j ), (3) where w is a weight vector to be learned,x i is a feature vector extracted from a headline candidate x, and C is the regularization parameter.",
"The function ℓ is a squared hinge loss, which is defined as ℓ(d) = max(0, 1 − d) 2 .",
"Finally, we define the score function in Eq.",
"(1) as f q (x) = w ⊤x , wherex can be created by using q as well as x.",
"This score means the relative \"headline-ness\" of x.",
"Experiments Basic Settings The basic settings of the experiments are as follows.",
"We split our dataset into training and test sets consisting of 9,000 and 1,000 examples, respectively.",
"We used an implementation 3 based on LIBLIN-EAR for training our ranking model, i.e., a linear L2-regularized L2-loss rankSVM model , as described in Section 4.",
"The regularization parameter was optimized by cross validation and set as C = 0.125.",
"The feature vector for a headline candidate consists of three kinds of features: bag-of-words, embedding, and position information.",
"The bag-of-words feature is a sparse vector of 30,820 dimensions based on the tf-idf scores of nouns, verbs, interjections, conjunctions, adverbs, and adjectives in a candidate, where we used a Japanese morphological analyzer, MeCab 4 (Kudo et al., 2004) , with a neologism dictionary, NEologd 5 (Toshinori Sato and Okumura, 2017).",
"The embedding feature is a dense vector of 100 dimensions based on a doc2vec model (Le and Mikolov, 2014) trained with all 3M sentences in the Chiebukuro dataset using the Gensim tool 6 .",
"The position feature is a binary vector of ten dimensions, where each dimension represents the coarse position (or coverage) of a headline candidate for a question.",
"Specifically, we equally split a question (character sequence) into ten parts and set one to each dimension if and only if the corresponding part overlaps a candidate.",
"For example, candidate No.",
"2 in Figure 2 had a position feature (1, 1, 0, · · · , 0), since the candidate covers the first 2/10 of the whole question.",
"Similarly, No.",
"6 and No.",
"3 had (0, 0, 1, 1, 0, · · · , 0) and (0, · · · , 0, 1), respectively.",
"For constructing the feature vector of each headline candidate, we used the previous and next candidates in sentence order, in addition to the target candidate.",
"This is based on the idea that near candidates might have useful information for the target candidate.",
"Finally, we prepared each feature vector by concatenating nine feature vectors, i.e., the above three kinds of features for three candidates, and normalizing them.",
"Compared Methods We compared our method, MLRank, with the baselines listed below.",
"Prefix, DictDel, and Random are simple baselines, while Prefix and DictDel are practically strong.",
"ImpTfidf, SimTfidf, SimEmb, and LexRank are unsupervised baselines, and SVM and SVR are supervised ones.",
"• Prefix: Selects the first candidate in sentence order.",
"• DictDel: Selects the first (informative) candidate that does not match in the dictionary of uninformative headlines (Section 2).",
"• Random: Randomly selects a candidate.",
"• ImpTfidf: Selects the most important candidate with the highest tf-idf value, where a tf-idf value is calculated by the sum of the elements in a bag-of-words feature (described in Section 5.1).",
"• SimTfidf: Selects the most similar candidate to the original question, which is calculated by the cosine similarity between the bag-of-words features (in Section 5.1) of each candidate and the question.",
"• SimEmb: An embedding-based variation of SimTfidf with embedding features (in Section 5.1).",
"• LexRank: Selects the candidate with the highest score based on LexRank 7 (Erkan and Radev, 2004) , which is a widely used unsupervised extractive summarization method based on the PageRank algorithm.",
"The graph expression of each question was constructed on the basis of cosine similarity of the tf-idf vectors corresponding to candidates.",
"• SVM: Selects the candidate with the highest confidence based on a model learned as a classification task, where candidates with nonzero votes were labeled as positive.",
"This setting was the best in our preliminary experiments.",
"We used the L2-regularized L2-loss support vector classification model (C = 0.0156) in LIBLINEAR.",
"The other settings were the same as those described in Section 5.1.",
"• SVR: Selects the candidate with the highest predicted votes based on a model learned as a regression task, where the target variable is the number of votes.",
"We used the L2-regularized L2-loss support vector regression model (C = 0.0625) in LIBLINEAR.",
"The other settings were the same as above.",
"• MLRank: Proposed method described in Section 4.",
"Evaluation Measures We defined three evaluation measures for evaluating each method on our headline generation task.",
"Change Rate from Prefix Headline We measured how much each method changed the default prefix headline in order to determine the effect of application to an actual CQA service.",
"We defined this measure as change rate from the prefix headline, as follows: Change rate = No.",
"of questions where the best candidate is not the prefix headline No.",
"of all questions .",
"Clearly, the change rate of the default method that selects the prefix headline is 0%.",
"If the value is small, the effect on the service will be small, but if the value is higher than the ideal change rate of 38.2% (Section 3), there can be side effects even if the average result is good.",
"A higher change rate up to the ideal rate is desirable from a practical standpoint.",
"Winning Rate against Prefix Headline We measured how much each method won against the prefix headline to directly assess the quality of changed headlines.",
"We defined this measure as winning rate against the prefix headline, as follows: Winning rate = No.",
"of questions where the best candidate got more votes than the prefix headline No.",
"of questions where the best candidate is not the prefix headline .",
"(5) We did not consider the first candidate (in sentence order) selected by each method, which is the same as the prefix headline, since they obviously have the same number of votes.",
"Average Votes We measured how appropriate the candidates selected by each method are in order to determine the overall performance.",
"We defined this measure as average votes, as follows: Average votes = Sum of votes for the best candidates for all questions No.",
"of questions .",
"Note that the average votes score is different from the average votes score for position (Figure 3(b) ) in that the former is the average over the selected candidates while the latter is the average over the candidates at a sentence position.",
"This measure is related to (normalized) discounted cumulative gain (DCG), which is widely used as an evaluation measure of ranking models.",
"We often use DCG@k for evaluating top-k rankings, and the above definition acutally corresponds to DCG@1.",
"According to a well-known paper (Järvelin and Kekäläinen, 2002) in the information retrieval field, DCG is appropriate for graded-relevance judgments like our task, while precision (described below) is appropriate for binary-relevance ones.",
"Average votes is expected to be more appropriate than precision for our task because we want \"averagely better headlines than default ones\" rather than \"best ones\" from a practical standpoint.",
"Precision Precision is a widely used evaluation measure for classification tasks, and we added it to support an evaluation based on average votes.",
"We defined it with respect to the best candidate, i.e., precision@1, as follows: Precision = No.",
"of questions where the best candidate had the maximum votes No.",
"of questions .",
"(7) Table 2 shows examples of headlines generated by the prefix method, Prefix, and the proposed method, MLRank.",
"Looking at the first example, we can see that our method successfully eliminated the selfintroduction phrase (\"I am a 27-year-old woman\").",
"The headline (right) of our method allows answerers to know that the questioner is discouraged about how to encounter men from the phrase \"little chance of new encounters with men\", while the headline (left) of the prefix method lacks this important clue.",
"The second and third examples show similar effects to the first example.",
"In the second one, although there are few clues about what the question is with the prefix method, our method correctly included the important clue (\"honeymoon\").",
"In the third one, our method appropriately eliminated the uninformative long phrase (\"I am sorry if the category is wrong\"), which is not a frequent fixed phrase.",
"The fourth example shows a slightly challenging case, where both headlines make it difficult to understand the question.",
"However, the headline of our method included the term \"winning bidder\", so at least the answerer can assume that the question is about some sort of auction trouble.",
"The fifth example is a clearly successful result, where our method extracted the main question point about \"welfare pension\" as a headline.",
"These results qualitatively demonstrate the effectiveness of our method.",
"Results Qualitative Analysis Quantitative Analysis We compared our method MLRank with the baselines in Section 5.2 on the headline generation task for our dataset in Section 3.",
"Table 3 shows the evaluation results based on the change rates, winning rates, average votes, and precision.",
"Looking at the average votes and precision, which represent the overall performances, our method MLRank clearly performed the best among all methods.",
"Table 3: Evaluation results of our headline generation task for proposed method MLRank and baselines.",
"We confirmed that the relative improvement of the average votes of our method MLRank against every baseline including the prefix method Prefix is statistically significant on the basis of a one-tailed Wilcoxon signed-rank test (p < 0.01).",
"The change and winning rates of our method are 9.9% and 94.9%, respectively.",
"This means that our method detected 9.9% of the uninformative headlines and improved them with the high accuracy of 94.9%.",
"In other words, our method could successfully improve the overall performance while simultaneously avoiding any negative side effects.",
"The ideal results (Ref) based on correct labels suggest that our method still has room for improvement, especially for the change rate.",
"The results of the other baselines are as follows.",
"Not surprisingly, the prefix method Prefix performed well.",
"This is consistent with the fact that the first sentence can be a good summary in many summarization tasks.",
"The random method Random performed the worst, also as expected.",
"The dictionarybased deletion method DictDel was relatively useful, although the change rate was small.",
"The reason the winning rate of DictDel is relatively low compared with MLRank is that there are some cases where a combination of uninformative expressions can yield likely clues.",
"For example, the self-introduction \"I am a newbie of this forum\" itself is basically uninformative for a question, but a combination with additional information such as \"I am a newbie of this forum.",
"Where can I change the password ...\" can be more informative than only the additional information \"... Where can I change the password since I forgot it after ...\" because the combination specifies the target site by the expression \"this forum\".",
"The unsupervised methods, ImpTfidf, SimTfidf, SimEmb, and LexRank, which are widely used for summarization tasks, performed worse than the prefix method Prefix.",
"Although the change rates are higher than our method, the winning rates are lower.",
"In other words, they yielded many bad headlines.",
"These results suggest that supervised learning specialized to the target task would be required.",
"Comparing the important sentence extraction method ImpTfidf and the similarity-based summarization method SimTfidf, we found that SimTfidf performed better.",
"This implies that the content information of each question is useful for our headline generation task, as is the case with other summarization tasks.",
"The similarity-based method SimEmb with embeddings performed worse than our expectation.",
"The reason seems to be that it was difficult to obtain meaningful document embeddings from long questions.",
"The graph-based method LexRank had a similar performance to SimTfidf, because LexRank tends to select a candidate similar to the question when only one candidate was selected.",
"The supervised methods, SVM and SVR, performed relatively well compared to the unsupervised methods, but they did not outperform the strong simple baselines, Prefix and DictDel, nor our method MLRank.",
"These results support the appropriateness of our approach.",
"Related Work In this section, we briefly explain several related studies from two aspects: headline generation task and CQA data.",
"As discussed below, our work is the first attempt to address an extractive headline generation task for a CQA service based on learning to rank the substrings of a question.",
"After Rush et al. (2015) proposed a neural headline generation model, there have been many studies on the same headline generation task (Takase et al., 2016; Chopra et al., 2016; Kiyono et al., 2017; Ayana et al., 2017; Raffel et al., 2017).",
"However, all of them are abstractive methods that can yield erroneous output, and the training for them requires a lot of paired data, i.e., news articles and headlines.",
"There have also been several classical studies based on nonneural approaches to headline generation (Woodsend et al., 2010; Alfonseca et al., 2013; Colmenares et al., 2015) , but they basically addressed sentence compression after extracting important linguistic units such as phrases.",
"In other words, their methods can still yield erroneous output, although they would be more controllable than neural models.",
"One exception is the work of Alotaiby (2011) , where fixed-sized substrings were considered for headline generation.",
"Although that approach is similar to ours, Alotaiby only considered an unsupervised method based on similarity to the original text (almost the same as SimTfidf in Section 5.2), in contrast to our proposal based on learning to rank.",
"This implies that Alotaiby's method will also not perform well for our task, as shown in Section 5.4.",
"There have been several studies on extractive summarization (Kobayashi et al., 2015; Yogatama et al., 2015) based on sentence embeddings, but they were basically developed for extracting multiple sentences, which means that these methods are almost the same as SimEmb in Section 5.2 for our purpose, i.e., extraction of the best candidate.",
"This also implies that they will not be suitable for our task.",
"Furthermore, recent sophisticated neural models for extractive summarization (Cheng and Lapata, 2016; Nallapati et al., 2017) basically require large-scale paired data (e.g., article-headline) to automatically label candidates, as manual annotation is very costly.",
"However, such paired data do not always exist for real applications, as in our task described in Section 1.",
"There have been many studies using CQA data, but most of them are different from our task, i.e., dealing with answering questions (Surdeanu et al., 2008; Celikyilmaz et al., 2009; Bhaskar, 2013; Nakov et al., 2017) , retrieving similar questions (Lei et al., 2016; Romeo et al., 2016; Nakov et al., 2017) , and generating questions (Heilman and Smith, 2010) .",
"Tamura et al. (2005) focused on extracting a core sentence and identifying the question type as classification tasks for answering multiple-sentence questions.",
"Although their method is useful to retrieve important information, we cannot directly use it since our task requires shorter expressions for headlines than sentences.",
"In addition, they used a support vector machine as a classifier, which is almost the same as SVM in Section 5.2, and it is not expected to be suitable for our task, as shown in Section 5.4.",
"The work of Ishigaki et al. (2017) is the most related one in that they summarized lengthy questions by using both abstractive and extractive approaches.",
"Their work is promising because our task is regarded as the construction of short summaries, but the training of their models requires a lot of paired data consisting of questions and their headlines, which means that their method cannot be applied to our task.",
"Conclusion We proposed an extractive headline generation method based on learning to rank for CQA that extracts the most informative substring in each question as its headline.",
"We created a dataset for our task, where headline candidates in each question are ranked using crowdsourcing.",
"Our method outperformed several baselines, including a prefix-based method, which is widely used for cases where the display area is limited, such as the push notifications on smartphones.",
"The dataset created for our headline generation task will be made publicly available 8 .",
"Although our task is basically designed for extractive summarization, this dataset can also be used for abstractive summarization as side information for training abstractive models.",
"In future work, we will investigate how effectively our method can perform in practical situations, e.g., push notifications.",
"In addition, we will consider how to improve the change rate of our method while keeping its winning rate and how to create a useful dataset even if removing the length limitation."
]
}
|
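The precision@1 measure (Eq. (7)) used in the quantitative analysis can be sketched in Python as follows. This is a minimal illustration with hypothetical vote data and variable names, not code from the paper:

```python
def precision_at_1(questions):
    """Fraction of questions where the predicted best candidate
    received the maximum number of crowdsourced votes.

    questions: list of (predicted_index, votes) pairs, where
    votes[i] is the vote count of candidate i."""
    hits = sum(1 for pred, votes in questions if votes[pred] == max(votes))
    return hits / len(questions)

# Toy usage with made-up vote counts:
data = [
    (2, [0, 3, 7, 1]),  # predicted candidate 2 has the max votes -> hit
    (0, [2, 5, 1]),     # predicted candidate 0 does not -> miss
]
print(precision_at_1(data))  # → 0.5
```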
{
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"5.1",
"5.2",
"5.3",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Negative Effect of Uninformative Headlines",
"Dataset Creation",
"Preparation of Headline Candidates",
"Crowdsourcing Task",
"Analysis of Crowdsourcing Results",
"Proposed Method",
"Basic Settings",
"Compared Methods",
"Evaluation Measures",
"Related Work",
"Conclusion"
]
}
|
GEM-SciDuet-train-50#paper-1080#slide-7
|
Candidate Generation
|
Cut subsequent sentences if over 20 Japanese characters
Nice to meet you, thank you in advance, I'm a...
Ellipsis / Ellipsis
... Advice please. A dog kept in the next house ...
... A dog kept in the next house barks from morning
Make a candidate that starts from the beginning of each sentence of the question.
Cut the subsequent text if the candidate exceeds 20 Japanese characters.
Put an ellipsis at the front and end of the substring.
Copyright (C) 2019 Yahoo Japan Corporation. All Rights Reserved.
|
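The candidate-generation steps on the slide can be sketched as follows. A minimal illustration under stated assumptions (sentences are pre-split, the 20-character limit counts the ellipsis, and ASCII "..." stands in for the Japanese ellipsis mark); this is not the service's implementation:

```python
LIMIT = 20   # display limit, as in the slide (20 Japanese characters)
ELL = "..."  # stand-in for the ellipsis mark

def make_candidates(sentences):
    """One candidate per sentence: its prefix plus the following text,
    truncated to LIMIT characters, with ellipses marking cut points."""
    candidates = []
    for i in range(len(sentences)):
        text = "".join(sentences[i:])  # include subsequent sentences
        if i == 0:
            # First sentence: truncate and append an ellipsis at the end.
            cand = text if len(text) <= LIMIT else text[:LIMIT - len(ELL)] + ELL
        else:
            # Candidate starting mid-question: ellipsis at front and end.
            body = text[:LIMIT - 2 * len(ELL)]
            cand = ELL + body + ELL
        candidates.append(cand)
    return candidates
```

Toy usage: `make_candidates(["Nice to meet you. ", "A dog barks all day long here."])` yields one prefix candidate per sentence, each at most 20 characters.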
Cut subsequent sentences if over 20 Japanese characters
Nice to meet you, thank you in advance, I'm a...
Ellipsis / Ellipsis
... Advice please. A dog kept in the next house ...
... A dog kept in the next house barks from morning
Make a candidate that starts from the beginning of each sentence of the question.
Cut the subsequent text if the candidate exceeds 20 Japanese characters.
Put an ellipsis at the front and end of the substring.
Copyright (C) 2019 Yahoo Japan Corporation. All Rights Reserved.
|
[] |
GEM-SciDuet-train-50#paper-1080#slide-8
|
1080
|
Extractive Headline Generation Based on Learning to Rank for Community Question Answering
|
User-generated content such as the questions on community question answering (CQA) forums does not always come with appropriate headlines, in contrast to the news articles used in various headline generation tasks. In such cases, we cannot use paired supervised data, e.g., pairs of articles and headlines, to learn a headline generation model. To overcome this problem, we propose an extractive headline generation method based on learning to rank for CQA that extracts the most informative substring from each question as its headline. Experimental results show that our method outperforms several baselines, including a prefix-based method, which is widely used in real services.
|
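The learning-to-rank approach described in the abstract reduces ranking to pairwise classification over headline candidates of the same question. The following is a minimal sketch of that pairwise reduction with toy features and assumed names; the paper itself uses LIBLINEAR's L2-regularized rankSVM rather than this code:

```python
def pairwise_examples(features, votes, question_ids):
    """Turn ranked candidates into pairwise training examples:
    for candidates i, j of the same question with votes[i] > votes[j],
    emit the feature difference x_i - x_j as a positive example."""
    diffs = []
    n = len(votes)
    for i in range(n):
        for j in range(n):
            if question_ids[i] == question_ids[j] and votes[i] > votes[j]:
                diffs.append([a - b for a, b in zip(features[i], features[j])])
    return diffs

# Toy usage: three candidates of one question with made-up votes.
X = pairwise_examples(
    features=[[1.0, 0.0], [0.0, 1.0], [2.0, 2.0]],
    votes=[3, 1, 5],
    question_ids=[0, 0, 0],
)
# Three ordered pairs survive the vote comparison: (0,1), (2,0), (2,1).
```

A linear classifier trained on these differences (plus the mirrored negatives) yields a weight vector whose dot product with a candidate's features scores its "headline-ness".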
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260,
261,
262,
263
],
"paper_content_text": [
"Introduction Community question answering (CQA) is a service where users can post their questions and answer the questions from other users.",
"The quality of a CQA service is inevitably linked to how many questions are answered in a short amount of time.",
"To this end, the headline of a posted question plays a key role, as it is the first thing users see in a question list or push notification on a smartphone.",
"However, questions in a CQA service do not always have appropriate headlines because the questions are written by various users who basically do not have any specialized knowledge in terms of writing such content, in contrast to news articles written by professional editors.",
"In fact, the biggest CQA service in Japan, Yahoo! Chiebukuro 1 , does not even provide an input field for headlines in the submission form of questions, as general users do not have enough patience and tend not to post questions if even just one required field is added to the form.",
"This service alternatively uses the prefix of a question as its headline as in Figure 1(a) , where the headline \"(Nice to meet you. Thank you in advance. At work ...)\" is created from the prefix of the content.",
"Obviously, this headline is uninformative because of the lack of actual content, which is related to an initial unrequited love of a woman in the workplace.",
"Figure 1(b) shows how ineffective such an uninformative headline is for a smartphone push notification, where users would have practically zero motivation to click on it and answer since they cannot imagine what kind of question is being asked.",
"This negative effect has been confirmed on a commercial CQA service (Section 2).",
"In this work, we take an extractive approach for improving uninformative headlines.",
"Although there have recently been many studies on abstractive headline generation (as described in Section 6), we do not follow any of these approaches because the content we deal with has no correct headlines and also the abstractive methods can yield erroneous output.",
"This latter issue is important in practical terms because correct output is critical for a commercial service.",
"If by some chance an erroneous headline, or one including politically incorrect phrases, is sent to all users, the service's credibility can be lost instantly.",
"Therefore, we formalize our task as an extraction problem of a fixed-length substring from a question as its headline.",
"In this setting, we can assume that outputs never include such errors caused by the service, as the outputs are substrings of user-posted questions.",
"Note that they have no coherence errors in selecting multiple sentences as in normal extractive summarization tasks.",
"While it is true that these outputs might contain inappropriate expressions authored by users, this type of error is beyond our scope since it is a different inevitable problem.",
"Furthermore, the situation of \"the service generated an inappropriate expression by itself\" is significantly worse than the situation of \"a user posted an inappropriate question to the service, and the service displayed it\".",
"Therefore, it is difficult to directly use abstractive methods for commercial services from a business standpoint.",
"Our approach involves preparing headline candidates and ranking them.",
"The formal description is as follows.",
"Let q be a target question to be translated into a headline.",
"We first prepare a set S(q) of headline candidates from the question q.",
"Note that the set S(q) is restricted to a set of fixed-length substrings given a length n, i.e., S(q) ⊆ {x | x ⪯ q, |x| = n}, where x ⪯ q means that x is a substring of q.",
"Then we extract the best headline that maximizes a score function f q (x), which represents the \"headline-ness\" score of a candidate x ∈ S(q) with respect to the target question q, as follows: argmax x∈S(q) f q (x).",
"(1) To ensure simplicity of implementation and understandability from users, we use a set of the fixed-length prefixes of all sentences in a question q as the candidate set S(q) (Section 3).",
"Because the problem is to select the best candidate from among several targets, the score function f q (x) is naturally trained by learning to rank (Section 4).",
"The main contributions of this paper are as follows.",
"• We report empirical evidence of the negative effect of uninformative headlines on a commercial CQA service (Section 2).",
"We additionally show that our task can reduce uninformative headlines by using simple dictionary matching, which dramatically improves the average answer rate of questions by as much as 2.4 times.",
"• We propose an extractive headline generation method based on learning to rank for CQA (Section 4) that extracts the most informative substring (prefix of mid-sentence) from each question as its headline.",
"To the best of our knowledge, our work is the first attempt to address such a task from a practical standpoint, although there have been many related studies (Section 6).",
"Experimental results show that our method outperforms several baselines, including the dictionary-based method (Section 5).",
"• We create a dataset for our headline generation task (Section 3), where headline candidates extracted from questions are ranked by crowdsourcing with respect to \"headline-ness\", that is, whether or not each headline candidate is appropriate for the headline of the corresponding question.",
"Negative Effect of Uninformative Headlines We conducted A/B testing on the push notifications of smartphones in collaboration with Yahoo! Chiebukuro, as shown in Figure 1 (b).",
"We first prepared a dictionary of typical first sentences that cause uninformative headlines.",
"This dictionary was manually selected from frequent first sentences and consists of 913 sentences including greetings such as \"(Good morning)\", \"(Good afternoon)\", and \"(Nice to meet you)\", and fixed phrases such as \"(Can I ask you something)\", \"(Please tell me)\", and \"(Thank you in advance)\".",
"Table 1: Average answer rates of three question groups in A/B testing (Changed uninformative 0.75%, Unchanged uninformative 0.31%, Informative 0.45%).",
"We assumed that a question with a prefix match in the dictionary has an uninformative prefix headline and classified such questions into an uninformative group.",
"For convenience, we also classified the other questions into an informative group, although they might include not so informative headlines.",
"We further randomly divided the uninformative group into two equal groups: changed and unchanged.",
"In the changed uninformative group, each headline is extracted as the prefix of the first (informative) sentence that does not match with the dictionary, which is the same as DictDel explained in Section 5.2.",
"The unchanged group remains in uninformative.",
"For comparison of these groups, we used the average answer rate over notified questions in each group as an evaluation measure, defined as: Average answer rate = (No. of questions answered from the notification) / (No. of notified questions). (2)",
"Note that we use a percentage expression (%) for easy reading.",
"Table 1 shows the evaluation results of the A/B testing during a 1-month period (Feb. 2 - Mar. 4, 2018), where about three million questions were sent to users.",
"Comparing the unchanged uninformative group with the informative group, we can see that the average answer rate of the uninformative questions, 0.31%, is actually lower than that of the informative questions, 0.45%.",
"Comparing the changed and unchanged uninformative groups, the average answer rate of the changed questions, 0.75%, is much higher than that of the unchanged questions, 0.31%.",
"This means that even a simple dictionary-based method can dramatically improve the quality of the uninformative headlines, i.e., by as much as 2.4 times.",
"We confirmed that the difference is statistically significant on a one-tailed Wilcoxon signed-rank test (p < 0.05).",
"Note that the average answer rate represents a conversion rate (or rate of target actions), which is more important than a click-through rate (or rate of initial actions).",
"The average answer rate is one of the most important indicators for a CQA service, while the click-through rate can be meaninglessly high if the service sends headlines that are fake or too catchy.",
"We should point out that low answer rates are sufficient for the service, since it has 44M users: i.e., each question has an average of 2.4 answers, as the service currently has 189M questions and 462M answers.",
"Dataset Creation We created a dataset for our headline generation task based on the Yahoo! Chiebukuro dataset 2 , which is a dataset including questions and answers provided from a Japanese CQA service, Yahoo! Chiebukuro.",
"We first prepared headline candidates from this dataset as in Section 3.1 and then conducted a crowdsourcing task specified in Section 3.2.",
"In Section 3.3, we report the results of the crowdsourcing task.",
"Preparation of Headline Candidates We extracted only questions from the Chiebukuro dataset and split each question into sentences by using punctuation marks (i.e., the exclamation (\" \"), question (\" \"), and full stop (\" \") marks).",
"We regarded 20 Japanese characters that are basically extracted from each sentence as a headline candidate x ∈ S(q) in Eq. (1), since this setting is used for push notifications in the actual service in Figure 1 (b).",
"More specifically, the headline candidate is created as follows: 1. If the sentence is the first one in the question, we extract the first 19 characters and put an ellipsis mark (\"…\") at the end.",
"(a) Example of our crowdsourcing task.",
"Posted Question: Nice to meet you, I am a man in my 30s.",
"Please give me your advice on a pressing concern I have.",
"A dog kept in the next house barks from morning to night.",
"Neighbors have given the owner cautions against it, but there is no improvement.",
"This area has only private houses, not rented houses, so I cannot move out.",
"However, I will go crazy if I have to keep enduring this.",
"How can I effectively manage this problem?",
"...",
"Please give me your advice on a pressing concern ...",
"(b) English translation of left example.",
"Figure 2 : Examples of (a) our crowdsourcing task and (b) its English translation.",
"In the case where the length of a candidate is less than 20 characters, we include some of the next sentence in order to maximize use of display space.",
"We included questions with more than five sentences for the purpose of efficiently collecting ranking information.",
"All told, we prepared 10,000 questions containing more than five headline candidates each.",
"Figure 2 shows an example of our crowdsourcing task (a) and its English translation (b).",
"This task involves a posted question and headline candidates corresponding to the question.",
"We asked workers to select the best candidate from options after reading the posted question.",
"A relative evaluation, where workers select the best candidate, was used instead of an absolute evaluation, where workers select a score from 0 to 10 for each candidate, because we wanted to obtain as accurate a headline as possible, and it might be difficult for workers to select an appropriate absolute score.",
"The workers were instructed as follows (English translation): Various candidate headlines are listed as options for a posted question in a Q&A service.",
"After reading the question, please select the best option from the list so that users can guess the content of the question and distinguish it from other ones.",
"Please remove uninformative ones such as greetings, self-introductions, and unspecific expressions.",
"We explain how to judge the appropriateness of each candidate by means of the example in Figure 2 .",
"After examining the posted question, we can assume that the most important content is \"he is annoyed by the barking of a dog kept in the next house\".",
"On the basis of this assumption, option 6 is the best one, since the headline \"A dog kept in the next house barks from morning\" is enough to help answerers deduce that \"the questioner is annoyed by the sound\".",
"Option 1 is inappropriate because although the answerers might be able to guess that \"the questioner cannot move out\", this matter is not the central one.",
"Option 2 is uninformative because it consists merely of greetings and self-introduction, and option 3, while a question sentence, is unspecific.",
"Option 4 enables answerers to guess that this question is related to an issue involving pets, but they cannot grasp the specific content.",
"Option 5 specifies a likely damage due to the trouble, but the reason (trouble) is more important for the answering than the result (damage).",
"Option 7 directly shows that \"the questioner is annoyed and wants some advice\", but answerers cannot understand why he/she is annoyed.",
"The detailed implementation of our task is as follows.",
"First we randomly sorted the candidates of each question (shown in Figure 2 (a)) to avoid position bias by the workers.",
"We included ten actual questions and a dummy question so that workers would always have to answer one dummy question per every ten actual questions.",
"A dummy question is a question with a clear answer inserted to eliminate fraud workers (i.e., workers who randomly select answers without actually reading them).",
"Each question was answered by ten workers so each headline candidate had a vote score from 0 to 10 representing whether or not the candidate was appropriate for a headline.",
"This task took nine days and was answered by 1,558 workers.",
"As a result, our dataset consists of 10,000 questions, each of which has more than five headline candidates with accompanying \"headline-ness\" scores.",
"Analysis of Crowdsourcing Results We analyzed our dataset to determine how much room for improvement our task has compared to the prefix headline.",
"It is well known that the first sentence can be a strong baseline in many summarization tasks, so the prefix headline is also expected to perform well.",
"Figure 3 shows the statistical information based on sentence position that includes the (a) ratio of the true best (most voted) candidates, (b) average votes, and (c) average rank (in order of votes) over the candidates at each sentence position.",
"Looking at the true best candidates (a), the ratio 61.8% for the 1st sentence clarifies the effectiveness of the prefix headline, as expected.",
"Conversely, we still have room for improvement for the prefix headline up to 38.2%.",
"Our goal is to improve the uninformative headlines of 38.2% while keeping the remaining 61.8% unchanged.",
"The other two figures (b) and (c) also support the above discussion.",
"Furthermore, we qualitatively checked the crowdsourcing quality.",
"Workers successfully eliminated uninformative candidates including greetings and self-introductions, while one or two workers sometimes chose ones that included a fixed phrase such as \"Please tell me ...\".",
"This is probably because workers had different criteria regarding the \"unspecific expressions\" described in the instructions.",
"Since we cannot enumerate all of the concrete bad examples, we ignore this phenomenon with the expectation that a learning algorithm will reduce its negative effect.",
"Proposed Method In this section, we explain how to construct a headline generation model from the dataset presented in Section 3.",
"We took a ranking approach, i.e., learning to rank, for our task, rather than a simple regression one, since estimating absolute scores is not required for our purpose.",
"Even if two headline candidates (of different questions) have the same expression, their votes can be significantly different since the votes in our dataset are based on relative evaluation.",
"For example, the best candidate for Figure 2 was No. 6, but it might not be selected in other questions such as \"A dog kept in the next house barks from morning. Does anybody know why dogs generally want to bark?\".",
"Learning to rank is an application of machine learning that is typically used for ranking models in information retrieval systems.",
"The ranking models are basically learned from a supervised dataset consisting of triples (q, x, y), where q is a user's query, x is a document corresponding to q, and y is a relevance score of x with respect to q.",
"In this work, we formalize our task by regarding q, x, and y, as a posted question, a headline candidate, and a voted score in our dataset, respectively.",
"We used a pairwise ranking method that is also implemented as an instance of the well-known SVM tools LIBLINEAR and LIBSVM (Kuo et al., 2014) .",
"We used a linear model based on LIBLINEAR, an L2-regularized L2-loss linear rankSVM, for the experiments.",
"Let D be a dataset that consists of triples including a posted question q, a headline candidate x, and a voted score y, i.e., (q, x, y) ∈ D. In the pairwise ranking method, we train a ranking model as a binary classifier that determines whether the condition y_i > y_j is true or false for two candidates x_i and x_j in the same question (q_i = q_j).",
"Specifically, we first define the index pairs of positive examples by P = {(i, j) | q_i = q_j, y_i > y_j, (x_i, y_i, q_i) ∈ D, (x_j, y_j, q_j) ∈ D}.",
"Note that we do not need to consider the negative examples N = {(j, i) | (i, j) ∈ P} since they yield the same formula as P in the optimization process.",
"The training of the pairwise ranking method is achieved by solving the following optimization problem using the set P of the index pairs: min_w (1/2) w^⊤ w + C ∑_{(i,j)∈P} ℓ(w^⊤ x̃_i − w^⊤ x̃_j), (3) where w is a weight vector to be learned, x̃_i is a feature vector extracted from a headline candidate x, and C is the regularization parameter.",
"The function ℓ is a squared hinge loss, which is defined as ℓ(d) = max(0, 1 − d)^2.",
"Finally, we define the score function in Eq. (1) as f_q(x) = w^⊤ x̃, where x̃ can be created by using q as well as x.",
"This score means the relative \"headline-ness\" of x.",
"Experiments Basic Settings The basic settings of the experiments are as follows.",
"We split our dataset into training and test sets consisting of 9,000 and 1,000 examples, respectively.",
"We used an implementation 3 based on LIBLINEAR for training our ranking model, i.e., a linear L2-regularized L2-loss rankSVM model, as described in Section 4.",
"The regularization parameter was optimized by cross validation and set as C = 0.125.",
"The feature vector for a headline candidate consists of three kinds of features: bag-of-words, embedding, and position information.",
"The bag-of-words feature is a sparse vector of 30,820 dimensions based on the tf-idf scores of nouns, verbs, interjections, conjunctions, adverbs, and adjectives in a candidate, where we used a Japanese morphological analyzer, MeCab 4 (Kudo et al., 2004) , with a neologism dictionary, NEologd 5 (Toshinori Sato and Okumura, 2017).",
"The embedding feature is a dense vector of 100 dimensions based on a doc2vec model (Le and Mikolov, 2014) trained with all 3M sentences in the Chiebukuro dataset using the Gensim tool 6 .",
"The position feature is a binary vector of ten dimensions, where each dimension represents the coarse position (or coverage) of a headline candidate for a question.",
"Specifically, we equally split a question (character sequence) into ten parts and set one to each dimension if and only if the corresponding part overlaps a candidate.",
"For example, candidate No. 2 in Figure 2 had a position feature (1, 1, 0, · · · , 0), since the candidate covers the first 2/10 of the whole question.",
"Similarly, No. 6 and No. 3 had (0, 0, 1, 1, 0, · · · , 0) and (0, · · · , 0, 1), respectively.",
"For constructing the feature vector of each headline candidate, we used the previous and next candidates in sentence order, in addition to the target candidate.",
"This is based on the idea that near candidates might have useful information for the target candidate.",
"Finally, we prepared each feature vector by concatenating nine feature vectors, i.e., the above three kinds of features for three candidates, and normalizing them.",
"Compared Methods We compared our method, MLRank, with the baselines listed below.",
"Prefix, DictDel, and Random are simple baselines, while Prefix and DictDel are practically strong.",
"ImpTfidf, SimTfidf, SimEmb, and LexRank are unsupervised baselines, and SVM and SVR are supervised ones.",
"• Prefix: Selects the first candidate in sentence order.",
"• DictDel: Selects the first (informative) candidate that does not match in the dictionary of uninformative headlines (Section 2).",
"• Random: Randomly selects a candidate.",
"• ImpTfidf: Selects the most important candidate with the highest tf-idf value, where a tf-idf value is calculated by the sum of the elements in a bag-of-words feature (described in Section 5.1).",
"• SimTfidf: Selects the most similar candidate to the original question, which is calculated by the cosine similarity between the bag-of-words features (in Section 5.1) of each candidate and the question.",
"• SimEmb: An embedding-based variation of SimTfidf with embedding features (in Section 5.1).",
"• LexRank: Selects the candidate with the highest score based on LexRank 7 (Erkan and Radev, 2004) , which is a widely used unsupervised extractive summarization method based on the PageRank algorithm.",
"The graph expression of each question was constructed on the basis of cosine similarity of the tf-idf vectors corresponding to candidates.",
"• SVM: Selects the candidate with the highest confidence based on a model learned as a classification task, where candidates with nonzero votes were labeled as positive.",
"This setting was the best in our preliminary experiments.",
"We used the L2-regularized L2-loss support vector classification model (C = 0.0156) in LIBLINEAR.",
"The other settings were the same as those described in Section 5.1.",
"• SVR: Selects the candidate with the highest predicted votes based on a model learned as a regression task, where the target variable is the number of votes.",
"We used the L2-regularized L2-loss support vector regression model (C = 0.0625) in LIBLINEAR.",
"The other settings were the same as above.",
"• MLRank: Proposed method described in Section 4.",
"Evaluation Measures We defined three evaluation measures for evaluating each method on our headline generation task.",
"Change Rate from Prefix Headline We measured how much each method changed the default prefix headline in order to determine the effect of application to an actual CQA service.",
"We defined this measure as change rate from the prefix headline, as follows: Change rate = No.",
"of questions where the best candidate is not the prefix headline No.",
"of all questions .",
"Clearly, the change rate of the default method that selects the prefix headline is 0%.",
"If the value is small, the effect on the service will be small, but if the value is higher than the ideal change rate of 38.2% (Section 3), there can be side effects even if the average result is good.",
"A higher change rate up to the ideal rate is desirable from a practical standpoint.",
"Winning Rate against Prefix Headline We measured how much each method won against the prefix headline to directly assess the quality of changed headlines.",
"We defined this measure as winning rate against the prefix headline, as follows: Winning rate = No.",
"of questions where the best candidate got more votes than the prefix headline No.",
"of questions where the best candidate is not the prefix headline .",
"(5) We did not consider the first candidate (in sentence order) selected by each method, which is the same as the prefix headline, since they obviously have the same number of votes.",
"Average Votes We measured how appropriate the candidates selected by each method are in order to determine the overall performance.",
"We defined this measure as average votes, as follows: Average votes = Sum of votes for the best candidates for all questions No.",
"of questions .",
"Note that the average votes score is different from the average votes score for position (Figure 3(b) ) in that the former is the average over the selected candidates while the latter is the average over the candidates at a sentence position.",
"This measure is related to (normalized) discounted cumulative gain (DCG), which is widely used as an evaluation measure of ranking models.",
"We often use DCG@k for evaluating top-k rankings, and the above definition acutally corresponds to DCG@1.",
"According to a well-known paper (Järvelin and Kekäläinen, 2002) in the information retrieval field, DCG is appropriate for graded-relevance judgments like our task, while precision (described below) is appropriate for binary-relevance ones.",
"Average votes is expected to be more appropriate than precision for our task because we want \"averagely better headlines than default ones\" rather than \"best ones\" from a practical standpoint.",
"Precision Precision is a widely used evaluation measure for classification tasks, and we added it to support an evaluation based on average votes.",
"We defined it with respect to the best candidate, i.e., precision@1, as follows: Precision = No.",
"of questions where the best candidate had the maximum votes No.",
"of questions .",
"(7) Table 2 shows examples of headlines generated by the prefix method, Prefix, and the proposed method, MLRank.",
"Looking at the first example, we can see that our method successfully eliminated the selfintroduction phrase (\"I am a 27-year-old woman\").",
"The headline (right) of our method allows answerers to know that the questioner is discouraged about how to encounter men from the phrase \"little chance of new encounters with men\", while the headline (left) of the prefix method lacks this important clue.",
"The second and third examples show similar effects to the first example.",
"In the second one, although there are few clues about what the question is with the prefix method, our method correctly included the important clue (\"honeymoon\").",
"In the third one, our method appropriately eliminated the uninformative long phrase (\"I am sorry if the category is wrong\"), which is not a frequent fixed phrase.",
"The fourth example shows a slightly challenging case, where both headlines make it difficult to understand the question.",
"However, the headline of our method included the term \"winning bidder\", so at least the answerer can assume that the question is about some sort of auction trouble.",
"The fifth example is a clearly successful result, where our method extracted the main question point about \"welfare pension\" as a headline.",
"These results qualitatively demonstrate the effectiveness of our method.",
"Results Qualitative Analysis Quantitative Analysis We compared our method MLRank with the baselines in Section 5.2 on the headline generation task for our dataset in Section 3.",
"Table 3 shows the evaluation results based on the change rates, winning rates, average votes, and precision.",
"Looking at the average votes and precision, which represent the overall performances, our method MLRank clearly performed the best among all methods.",
"We confirmed that Table 3 : Evaluation results of our headline generation task for proposed method MLRank and baselines.",
"the relative improvement of the average votes of our method MLRank against every baseline including the prefix method Prefix is statistically significant on the basis of a one-tailed Wilcoxon signed-rank test (p < 0.01).",
"The change and winning rates of our method are 9.9% and 94.9%, respectively.",
"This means that our method detected 9.9% of the uninformative headlines and improved them with the high accuracy of 94.9%.",
"In other words, our method could successfully improve the overall performance while simultaneously avoiding any negative side effects.",
"The ideal results (Ref) based on correct labels suggest that our method still has room for improvement, especially for the change rate.",
"The results of the other baselines are as follows.",
"Not surprisingly, the prefix method Prefix performed well.",
"This is consistent with the fact that the first sentence can be a good summary in many summarization tasks.",
"The random method Random performed the worst, also as expected.",
"The dictionarybased deletion method DictDel was relatively useful, although the change rate was small.",
"The reason the winning rate of DictDel is relatively low compared with MLRank is that there are some cases where a combination of uninformative expressions can yield likely clues.",
"For example, the self-introduction \"I am a newbie of this forum\" itself is basically uninformative for a question, but a combination with additional information such as \"I am a newbie of this forum.",
"Where can I change the password ...\" can be more informative than only the additional information \"... Where can I change the password since I forgot it after ...\" because the combination specifies the target site by the expression \"this forum\".",
"The unsupervised methods, SimTfidf, SimTfidf, SimEmb, and LexRank, which are widely used for summarization tasks, performed worse than the prefix method Prefix.",
"Although the change rates are higher than our method, the winning rates are lower.",
"In other words, they yielded many bad headlines.",
"These results suggest that supervised learning specialized to the target task would be required.",
"Comparing the important sentence extraction method ImpTfidf and the similarity-based summarization method SimTfidf, we found that SimTfidf performed better.",
"This implies that the content information of each question is useful for our headline generation task, as is the case with other summarization tasks.",
"The similarity-based method SimEmb with embeddings performed worse than our expectation.",
"The reason seems to be that it was difficult to obtain meaningful document embeddings from long questions.",
"The graph-based method LexRank had a similar performance to SimTfidf, because LexRank tends to select a candidate similar to the question when only one candidate was selected.",
"The supervised methods, SVM and SVR, performed relatively well compared to the unsupervised methods, but they did not outperform the strong simple baselines, Prefix and DictDel, nor our method MLRank.",
"These results support the appropriateness of our approach.",
"Related Work In this section, we briefly explain several related studies from two aspects: headline generation task and CQA data.",
"As discussed below, our work is the first attempt to address an extractive headline generation task for a CQA service based on learning to rank the substrings of a question.",
"After Rush et al.",
"(2015) proposed a neural headline generation model, there have been many studies on the same headline generation task (Takase et al., 2016; Chopra et al., 2016; Kiyono et al., 2017; Ayana et al., 2017; Raffel et al., 2017) .",
"However, all of them are abstractive methods that can yield erroneous output, and the training for them requires a lot of paired data, i.e., news articles and headlines.",
"There have also been several classical studies based on nonneural approaches to headline generation (Woodsend et al., 2010; Alfonseca et al., 2013; Colmenares et al., 2015) , but they basically addressed sentence compression after extracting important linguistic units such as phrases.",
"In other words, their methods can still yield erroneous output, although they would be more controllable than neural models.",
"One exception is the work of Alotaiby (2011) , where fixed-sized substrings were considered for headline generation.",
"Although that approach is similar to ours, Alotaiby only considered an unsupervised method based on similarity to the original text (almost the same as SimTfidf in Section 5.2), in contrast to our proposal based on learning to rank.",
"This implies that Alotaiby's method will also not perform well for our task, as shown in Section 5.4.",
"There have been several studies on extractive summarization (Kobayashi et al., 2015; Yogatama et al., 2015) based on sentence embeddings, but they were basically developed for extracting multiple sentences, which means that these methods are almost the same as SimEmb in Section 5.2 for our purpose, i.e., extraction of the best candidate.",
"This also implies that they will not be suitable for our task.",
"Furthermore, recent sophisticated neural models for extractive summarization (Cheng and Lapata, 2016; Nallapati et al., 2017) basically require large-scale paired data (e.g., article-headline) to automatically label candidates, as manual annotation is very costly.",
"However, such paired data do not always exist for real applications, as in our task described in Section 1.",
"There have been many studies using CQA data, but most of them are different from our task, i.e., dealing with answering questions (Surdeanu et al., 2008; Celikyilmaz et al., 2009; Bhaskar, 2013; Nakov et al., 2017) , retrieving similar questions (Lei et al., 2016; Romeo et al., 2016; Nakov et al., 2017) , and generating questions (Heilman and Smith, 2010) .",
"Tamura et al.",
"(2005) focused on extracting a core sentence and identifying the question type as classification tasks for answering multiple-sentence questions.",
"Although their method is useful to retrieve important information, we cannot directly use it since our task requires shorter expressions for headlines than sentences.",
"In addition, they used a support vector machine as a classifier, which is almost the same as SVM in Section 5.2, and it is not expected to be suitable for our task, as shown in Section 5.4.",
"The work of Ishigaki et al.",
"(2017) is the most related one in that they summarized lengthy questions by using both abstractive and extractive approaches.",
"Their work is promising because our task is regarded as the construction of short summaries, but the training of their models requires a lot of paired data consisting of questions and their headlines, which means that their method cannot be used to our task.",
"Conclusion We proposed an extractive headline generation method based on learning to rank for CQA that extracts the most informative substring in each question as its headline.",
"We created a dataset for our task, where headline candidates in each question are ranked using crowdsourcing.",
"Our method outperformed several baselines, including a prefix-based method, which is widely used for cases where the display area is limited, such as the push notifications on smartphones.",
"The dataset created for our headline generation task will be made publicly available 8 .",
"Although our task is basically designed for extractive summarization, this dataset can also be used for abstractive summarization as a side information for training abstractive models.",
"In future work, we will investigate how effectively our method can perform in practical situations, e.g., push notifications.",
"In addition, we will consider how to improve the change rate of our method while keeping its winning rate and how to create a useful dataset even if removing the length limitation."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"5.1",
"5.2",
"5.3",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Negative Effect of Uninformative Headlines",
"Dataset Creation",
"Preparation of Headline Candidates",
"Crowdsourcing Task",
"Analysis of Crowdsourcing Results",
"Proposed Method",
"Basic Settings",
"Compared Methods",
"Evaluation Measures",
"Related Work",
"Conclusion"
]
}
|
GEM-SciDuet-train-50#paper-1080#slide-8
|
Candidate Ranking
|
Pairwise Learning to Rank
L2-regularized L2-loss linear rankSVM
[Lee 2014] Lee, ChingPei and Lin, ChihJen: Largescale Linear Ranksvm. Neural Computation, 26(4)
Copyright (C) 2019 Yahoo Japan Corporation. All Rights Reserved.
|
Pairwise Learning to Rank
L2-regularized L2-loss linear rankSVM
[Lee 2014] Lee, ChingPei and Lin, ChihJen: Largescale Linear Ranksvm. Neural Computation, 26(4)
Copyright (C) 2019 Yahoo Japan Corporation. All Rights Reserved.
|
[] |
GEM-SciDuet-train-50#paper-1080#slide-9
|
1080
|
Extractive Headline Generation Based on Learning to Rank for Community Question Answering
|
User-generated content such as the questions on community question answering (CQA) forums does not always come with appropriate headlines, in contrast to the news articles used in various headline generation tasks. In such cases, we cannot use paired supervised data, e.g., pairs of articles and headlines, to learn a headline generation model. To overcome this problem, we propose an extractive headline generation method based on learning to rank for CQA that extracts the most informative substring from each question as its headline. Experimental results show that our method outperforms several baselines, including a prefix-based method, which is widely used in real services.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260,
261,
262,
263
],
"paper_content_text": [
"Introduction Community question answering (CQA) is a service where users can post their questions and answer the questions from other users.",
"The quality of a CQA service is inevitably linked to how many questions are answered in a short amount of time.",
"To this end, the headline of a posted question plays a key role, as it is the first thing users see in a question list or push notification on a smartphone.",
"However, questions in a CQA service do not always have appropriate headlines because the questions are written by various users who basically do not have any specialized knowledge in terms of writing such content, in contrast to news articles written by professional editors.",
"In fact, the biggest CQA service in Japan, Yahoo!",
"Chiebukuro 1 , does not even provide an input field for headlines in the submission form of questions, as general users do not have enough patience and tend not to post questions if even just one required field is added to the form.",
"This service alternatively uses the prefix of a question as its headline as in Figure 1(a) , where the headline \" (Nice to meet you.",
"Thank you in advance.",
"At work ...)\" is created from the prefix of the content.",
"Obviously, this headline is uninformative because of the lack of actual content, which is related to an initial unrequited love of a woman in the workplace.",
"Figure 1(b) shows how ineffective such an uninformative headline is for a smartphone push notification, where users would have practically zero motivation to click on it and answer since they cannot imagine what kind of question is being asked.",
"This negative effect has been confirmed on a commercial CQA service (Section 2).",
"In this work, we take an extractive approach for improving uninformative headlines.",
"Although there have recently been many studies on abstractive headline generation (as described in Section 6), we do not follow any of these approaches because the content we deal with has no correct headlines and also the abstractive methods can yield erroneous output.",
"This latter issue is important in practical terms because correct output is critical for a commercial service.",
"If by some chance an erroneous headline, or one including politically incorrect phrases, is sent to all users, the service's credibility can be lost instantly.",
"Therefore, we formalize our task as an extraction problem of a fixed-length substring from a question as its headline.",
"In this setting, we can assume that outputs never include such errors caused by the service, as the outputs are substrings of user-posted questions.",
"Note that they have no coherence errors in selecting multiple sentences as in normal extractive summarization tasks.",
"While it is true that these outputs might contain inappropriate expressions authored by users, this type of error is beyond our scope since it is a different inevitable problem.",
"Furthermore, the situation of \"the service generated an inappropriate expression by itself\" is significantly worse than the situation of \"a user posted an inappropriate question to the service, and the service displayed it\".",
"Therefore, it is difficult to directly use abstractive methods for commercial services from a business standpoint.",
"Our approach involves preparing headline candidates and ranking them.",
"The formal description is as follows.",
"Let q be a target question to be translated into a headline.",
"We first prepare a set S(q) of headline candidates from the question q.",
"Note that the set S(q) is restricted to a set of fixed-length substrings given a length n, i.e., S(q) ⊆ {x | x ⪯ q, |x| = n}, where x ⪯ q means that x is a substring of q.",
"Then we extract the best headline that maximizes a score function f q (x), which represents the \"headline-ness\" score of a candidate x ∈ S(q) with respect to the target question q, as follows: argmax x∈S(q) f q (x).",
"(1) To ensure simplicity of implementation and understandability from users, we use a set of the fixed-length prefixes of all sentences in a question q as the candidate set S(q) (Section 3).",
"Because the problem is to select the best candidate from among several targets, the score function f q (x) is naturally trained by learning to rank (Section 4).",
"The main contributions of this paper are as follows.",
"• We report empirical evidence of the negative effect of uninformative headlines on a commercial CQA service (Section 2).",
"We additionally show that our task can reduce uninformative headlines by using simple dictionary matching, which dramatically improves the average answer rate of questions by as much as 2.4 times.",
"• We propose an extractive headline generation method based on learning to rank for CQA (Section 4) that extracts the most informative substring (prefix of mid-sentence) from each question as its headline.",
"To the best of our knowledge, our work is the first attempt to address such a task from a practical standpoint, although there have been many related studies (Section 6).",
"Experimental results show that our method outperforms several baselines, including the dictionary-based method (Section 5).",
"• We create a dataset for our headline generation task (Section 3), where headline candidates extracted from questions are ranked by crowdsourcing with respect to \"headline-ness\", that is, whether or not each headline candidate is appropriate for the headline of the corresponding question.",
"Negative Effect of Uninformative Headlines We conducted A/B testing on the push notifications of smartphones in collaboration with Yahoo!",
"Chiebukuro, as shown in Figure 1 (b).",
"We first prepared a dictionary of typical first sentences that cause uninformative headlines.",
"This dictionary was manually selected from frequent first sentences and consists of 913 sentences including greetings such as \" (Good morning)\", \" Changed uninformative Unchanged uninformative Informative 0.75% 0.31% 0.45% Table 1 : Average answer rates of three question groups in A/B testing.",
"(Good afternoon)\", and \" (Nice to meet you)\", and fixed phrases such as \" (Can I ask you something)\", \" (Please tell me)\", and \" (Thank you in advance)\".",
"We assumed that a question with a prefix match in the dictionary has an uninformative prefix headline and classified such questions into an uninformative group.",
"For convenience, we also classified the other questions into an informative group, although they might include not so informative headlines.",
"We further randomly divided the uninformative group into two equal groups: changed and unchanged.",
"In the changed uninformative group, each headline is extracted as the prefix of the first (informative) sentence that does not match with the dictionary, which is the same as DictDel explained in Section 5.2.",
"The unchanged group remains in uninformative.",
"For comparison of these groups, we used the average answer rate over notified questions in each group as an evaluation measure, defined as Average answer rate = No.",
"of questions answered from the notification No.",
"of notified questions .",
"(2) Note that we use a percentage expression (%) for easy reading.",
"Table 1 shows the evaluation results of the A/B testing during a 1-month period (Feb. 2 -Mar.",
"4, 2018) , where about three million questions were sent to users.",
"Comparing the unchanged uninformative group with the informative group, we can see that the average answer rate of the uninformative questions, 0.31%, is actually lower than that of the informative questions, 0.45%.",
"Comparing the changed and unchanged uninformative groups, the average answer rate of the changed questions, 0.75%, is much higher than that of the unchanged questions, 0.31%.",
"This means that even a simple dictionary-based method can dramatically improve the quality of the uninformative headlines, i.e., by as much as 2.4 times.",
"We confirmed that the difference is statistically significant on a one-tailed Wilcoxon signed-rank test (p < 0.05).",
"Note that the average answer rate represents a conversion rate (or rate of target actions), which is more important than a click-through rate (or rate of initial actions).",
"The average answer rate is one of the most important indicators for a CQA service, while the click-through rate can be meaninglessly high if the service sends headlines that are fake or too catchy.",
"We should point out that low answer rates are sufficient for the service, since it has 44M users: i.e., each question has an average of 2.4 answers, as the service currently has 189M questions and 462M answers.",
"Dataset Creation We created a dataset for our headline generation task based on the Yahoo!",
"Chiebukuro dataset 2 , which is a dataset including questions and answers provided from a Japanese CQA service, Yahoo!",
"Chiebukuro.",
"We first prepared headline candidates from this dataset as in Section 3.1 and then conducted a crowdsourcing task specified in Section 3.2.",
"In Section 3.3, we report the results of the crowdsourcing task.",
"Preparation of Headline Candidates We extracted only questions from the Chiebukuro dataset and split each question into sentences by using punctuation marks (i.e., the exclamation (\" \"), question (\" \"), and full stop (\" \") marks).",
"We regarded 20 Japanese characters that are basically extracted from each sentence as a headline candidate x ∈ S(q) in Eq.",
"(1), since this setting is used for push notifications in the actual service in Figure 1 (b).",
"More specifically, the headline candidate is created as follows: 1.",
"If the sentence is the first one in the question, we extract the first 19 characters and put an ellipsis mark (\" \") at the end.",
"(a) Example of our crowdsourcing task.",
"Posted Question: Nice to meet you, I am a man in my 30s.",
"Please give me your advice on a pressing concern I have.",
"A dog kept in the next house barks from morning to night.",
"Neighbors have given the owner cautions against it, but there is no improvement.",
"This area has only private houses, not rented houses, so I cannot move out.",
"However, I will go crazy if I have to keep enduring this.",
"How can I effectively manage this problem?",
"...",
"Please give me your advice on a pressing concern ... (b) English translation of left example.",
"Figure 2 : Examples of (a) our crowdsourcing task and (b) its English translation.",
"In the case where the length of a candidate is less than 20 characters, we include some of the next sentence in order to maximize use of display space.",
"We included questions with more than five sentences for the purpose of efficiently collecting ranking information.",
"All told, we prepared 10,000 questions containing more than five headline candidates each.",
"Figure 2 shows an example of our crowdsourcing task (a) and its English translation (b).",
"This task involves a posted question and headline candidates corresponding to the question.",
"We asked workers to select the best candidate from options after reading the posted question.",
"A relative evaluation, where workers select the best candidate, was used instead of an absolute evaluation, where workers select a score from 0 to 10 for each candidate, because we wanted to obtain as accurate a headline as possible, and it might be difficult for workers to select an appropriate absolute score.",
"The workers were instructed as follows (English translation): Crowdsourcing Task Various candidate headlines are listed as options for a posted question in a Q&A service.",
"After reading the question, please select the best option from the list so that users can guess the content of the question and distinguish it from other ones.",
"Please remove uninformative ones such as greetings, self-introductions, and unspecific expressions.",
"We explain how to judge the appropriateness of each candidate by means of the example in Figure 2 .",
"After examining the posted question, we can assume that the most important content is \"he is annoyed by the barking of a dog kept in the next house\".",
"On the basis of this assumption, option 6 is the best one, since the headline \"A dog kept in the next house barks from morning\" is enough to help answerers deduce that \"the questioner is annoyed by the sound\".",
"Option 1 is inappropriate because although the answerers might be able to guess that \"the questioner cannot move out\", this matter is not the central one.",
"Option 2 is uninformative because it consists merely of greetings and self-introduction, and option 3, while a question sentence, is unspecific.",
"Option 4 enables answerers to guess that this question is related to an issue involving pets, but they cannot grasp the specific content.",
"Option 5 specifies a likely damage due to the trouble, but the reason (trouble) is more important for the answering than the result (damage).",
"Option 7 directly shows that \"the questioner is annoyed and wants some advice\", but answerers cannot understand why he/she is annoyed.",
"The detailed implementation of our task is as follows.",
"First we randomly sorted the candidates of each question (shown in Figure 2 (a)) to avoid position bias by the workers.",
"We included ten actual questions and a dummy question so that workers would always have to answer one dummy question per every ten actual questions.",
"A dummy question is a question with a clear answer inserted to eliminate fraud workers (i.e., workers who randomly select answers without actually reading them).",
"Each question was answered by ten workers so each headline candidate had a vote score from 0 to 10 representing whether or not the candidate was appropriate for a headline.",
"This task took nine days and was answered by 1,558 workers.",
"As a result, our dataset consists of 10,000 questions, each of which has more than five headline candidates with accompanying \"headline-ness\" scores.",
"Analysis of Crowdsourcing Results We analyzed our dataset to determine how much room for improvement our task has compared to the prefix headline.",
"It is well known that the first sentence can be a strong baseline in many summarization tasks, so the prefix headline is also expected to perform well.",
"Figure 3 shows the statistical information based on sentence position that includes the (a) ratio of the true best (most voted) candidates, (b) average votes, and (c) average rank (in order of votes) over the candidates at each sentence position.",
"Looking at the true best candidates (a), the ratio 61.8% for the 1st sentence clarifies the effectiveness of the prefix headline, as expected.",
"Conversely, we still have room for improvement for the prefix headline up to 38.2%.",
"Our goal is to improve the uninformative headlines of 38.2% while keeping the remaining 61.8% unchanged.",
"The other two figures (b) and (c) also support the above discussion.",
"Furthermore, we qualitatively checked the crowdsourcing quality.",
"Workers successfully eliminated uninformative candidates including greetings and self-introductions, while one or two workers sometimes chose ones that included a fixed phrase such as \"Please tell me ...\".",
"This is probably because workers had different criteria regarding the \"unspecific expressions\" described in the instructions.",
"Since we cannot enumerate all of the concrete bad examples, we ignore this phenomenon with the expectation that a learning algorithm will reduce its negative effect.",
"Proposed Method In this section, we explain how to construct a headline generation model from the dataset presented in Section 3.",
"We took a ranking approach, i.e., learning to rank, for our task, rather than a simple regression one, since estimating absolute scores is not required for our purpose.",
"Even if two headline candidates (of different questions) have the same expression, their votes can be significantly different since the votes in our dataset are based on relative evaluation.",
"For example, the best candidate for Figure 2 was No.",
"6, but it might not be selected in other questions such as \"A dog kept in the next house barks from morning.",
"Does anybody know why dogs generally want to bark?\".",
"Learning to rank is an application of machine learning that is typically used for ranking models in information retrieval systems.",
"The ranking models are basically learned from a supervised dataset consisting of triples (q, x, y), where q is a user's query, x is a document corresponding to q, and y is a relevance score of x with respect to q.",
"In this work, we formalize our task by regarding q, x, and y, as a posted question, a headline candidate, and a voted score in our dataset, respectively.",
"We used a pairwise ranking method that is also implemented as an instance of the well-known SVM tools LIBLINEAR and LIBSVM (Kuo et al., 2014) .",
"We used a linear model based on LIBLINEAR, an L2-regularized L2-loss linear rankSVM, for the experiments.",
"Let D be a dataset that consists of triples including a posted question q, a headline candidate x, and a voted score y, i.e., (q, x, y) ∈ D. In the pairwise ranking method, we train a ranking model as a binary classifier that determines whether the condition y i > y j is true or false for two candidates x i and x j in the same question (q i = q j ).",
"Specifically, we first define the index pairs of positive examples by P = {(i, j) | q i = q j , y i > y j , (x i , y i , q i ) ∈ D, (x j , y j , q j ) ∈ D}.",
"Note that we do not need to consider the negative examples N = {(j, i) | (i, j) ∈ P } since they yield the same formula as P in the optimization process.",
"The training of the pairwise ranking method is achieved by solving the following optimization problem using the set P of the index pairs: min w 1 2 w ⊤ w + C ∑ (i,j)∈P ℓ(w ⊤x i − w ⊤x j ), (3) where w is a weight vector to be learned,x i is a feature vector extracted from a headline candidate x, and C is the regularization parameter.",
"The function ℓ is a squared hinge loss, which is defined as ℓ(d) = max(0, 1 − d) 2 .",
"Finally, we define the score function in Eq.",
"(1) as f q (x) = w ⊤x , wherex can be created by using q as well as x.",
"This score means the relative \"headline-ness\" of x.",
"Experiments Basic Settings The basic settings of the experiments are as follows.",
"We split our dataset into training and test sets consisting of 9,000 and 1,000 examples, respectively.",
"We used an implementation 3 based on LIBLIN-EAR for training our ranking model, i.e., a linear L2-regularized L2-loss rankSVM model , as described in Section 4.",
"The regularization parameter was optimized by cross validation and set as C = 0.125.",
"The feature vector for a headline candidate consists of three kinds of features: bag-of-words, embedding, and position information.",
"The bag-of-words feature is a sparse vector of 30,820 dimensions based on the tf-idf scores of nouns, verbs, interjections, conjunctions, adverbs, and adjectives in a candidate, where we used a Japanese morphological analyzer, MeCab 4 (Kudo et al., 2004) , with a neologism dictionary, NEologd 5 (Toshinori Sato and Okumura, 2017).",
"The embedding feature is a dense vector of 100 dimensions based on a doc2vec model (Le and Mikolov, 2014) trained with all 3M sentences in the Chiebukuro dataset using the Gensim tool 6 .",
"The position feature is a binary vector of ten dimensions, where each dimension represents the coarse position (or coverage) of a headline candidate for a question.",
"Specifically, we equally split a question (character sequence) into ten parts and set one to each dimension if and only if the corresponding part overlaps a candidate.",
"For example, candidate No.",
"2 in Figure 2 had a position feature (1, 1, 0, · · · , 0), since the candidate covers the first 2/10 of the whole question.",
"Similarly, No.",
"6 and No.",
"3 had (0, 0, 1, 1, 0, · · · , 0) and (0, · · · , 0, 1), respectively.",
"For constructing the feature vector of each headline candidate, we used the previous and next candidates in sentence order, in addition to the target candidate.",
"This is based on the idea that near candidates might have useful information for the target candidate.",
"Finally, we prepared each feature vector by concatenating nine feature vectors, i.e., the above three kinds of features for three candidates, and normalizing them.",
"Compared Methods We compared our method, MLRank, with the baselines listed below.",
"Prefix, DictDel, and Random are simple baselines, while Prefix and DictDel are practically strong.",
"ImpTfidf, SimTfidf, SimEmb, and LexRank are unsupervised baselines, and SVM and SVR are supervised ones.",
"• Prefix: Selects the first candidate in sentence order.",
"• DictDel: Selects the first (informative) candidate that does not match in the dictionary of uninformative headlines (Section 2).",
"• Random: Randomly selects a candidate.",
"• ImpTfidf: Selects the most important candidate with the highest tf-idf value, where a tf-idf value is calculated by the sum of the elements in a bag-of-words feature (described in Section 5.1).",
"• SimTfidf: Selects the most similar candidate to the original question, which is calculated by the cosine similarity between the bag-of-words features (in Section 5.1) of each candidate and the question.",
"• SimEmb: An embedding-based variation of SimTfidf with embedding features (in Section 5.1).",
"• LexRank: Selects the candidate with the highest score based on LexRank 7 (Erkan and Radev, 2004) , which is a widely used unsupervised extractive summarization method based on the PageRank algorithm.",
"The graph expression of each question was constructed on the basis of cosine similarity of the tf-idf vectors corresponding to candidates.",
"• SVM: Selects the candidate with the highest confidence based on a model learned as a classification task, where candidates with nonzero votes were labeled as positive.",
"This setting was the best in our preliminary experiments.",
"We used the L2-regularized L2-loss support vector classification model (C = 0.0156) in LIBLINEAR.",
"The other settings were the same as those described in Section 5.1.",
"• SVR: Selects the candidate with the highest predicted votes based on a model learned as a regression task, where the target variable is the number of votes.",
"We used the L2-regularized L2-loss support vector regression model (C = 0.0625) in LIBLINEAR.",
"The other settings were the same as above.",
"• MLRank: Proposed method described in Section 4.",
"Evaluation Measures We defined three evaluation measures for evaluating each method on our headline generation task.",
"Change Rate from Prefix Headline We measured how much each method changed the default prefix headline in order to determine the effect of application to an actual CQA service.",
"We defined this measure as change rate from the prefix headline, as follows: Change rate = No.",
"of questions where the best candidate is not the prefix headline No.",
"of all questions .",
"Clearly, the change rate of the default method that selects the prefix headline is 0%.",
"If the value is small, the effect on the service will be small, but if the value is higher than the ideal change rate of 38.2% (Section 3), there can be side effects even if the average result is good.",
"A higher change rate up to the ideal rate is desirable from a practical standpoint.",
"Winning Rate against Prefix Headline We measured how much each method won against the prefix headline to directly assess the quality of changed headlines.",
"We defined this measure as winning rate against the prefix headline, as follows: Winning rate = No.",
"of questions where the best candidate got more votes than the prefix headline No.",
"of questions where the best candidate is not the prefix headline .",
"(5) We did not consider the first candidate (in sentence order) selected by each method, which is the same as the prefix headline, since they obviously have the same number of votes.",
"Average Votes We measured how appropriate the candidates selected by each method are in order to determine the overall performance.",
"We defined this measure as average votes, as follows: Average votes = Sum of votes for the best candidates for all questions No.",
"of questions .",
"Note that the average votes score is different from the average votes score for position (Figure 3(b) ) in that the former is the average over the selected candidates while the latter is the average over the candidates at a sentence position.",
"This measure is related to (normalized) discounted cumulative gain (DCG), which is widely used as an evaluation measure of ranking models.",
"We often use DCG@k for evaluating top-k rankings, and the above definition acutally corresponds to DCG@1.",
"According to a well-known paper (Järvelin and Kekäläinen, 2002) in the information retrieval field, DCG is appropriate for graded-relevance judgments like our task, while precision (described below) is appropriate for binary-relevance ones.",
"Average votes is expected to be more appropriate than precision for our task because we want \"averagely better headlines than default ones\" rather than \"best ones\" from a practical standpoint.",
"Precision Precision is a widely used evaluation measure for classification tasks, and we added it to support an evaluation based on average votes.",
"We defined it with respect to the best candidate, i.e., precision@1, as follows: Precision = No.",
"of questions where the best candidate had the maximum votes No.",
"of questions .",
"(7) Table 2 shows examples of headlines generated by the prefix method, Prefix, and the proposed method, MLRank.",
"Looking at the first example, we can see that our method successfully eliminated the selfintroduction phrase (\"I am a 27-year-old woman\").",
"The headline (right) of our method allows answerers to know that the questioner is discouraged about how to encounter men from the phrase \"little chance of new encounters with men\", while the headline (left) of the prefix method lacks this important clue.",
"The second and third examples show similar effects to the first example.",
"In the second one, although there are few clues about what the question is with the prefix method, our method correctly included the important clue (\"honeymoon\").",
"In the third one, our method appropriately eliminated the uninformative long phrase (\"I am sorry if the category is wrong\"), which is not a frequent fixed phrase.",
"The fourth example shows a slightly challenging case, where both headlines make it difficult to understand the question.",
"However, the headline of our method included the term \"winning bidder\", so at least the answerer can assume that the question is about some sort of auction trouble.",
"The fifth example is a clearly successful result, where our method extracted the main question point about \"welfare pension\" as a headline.",
"These results qualitatively demonstrate the effectiveness of our method.",
"Results Qualitative Analysis Quantitative Analysis We compared our method MLRank with the baselines in Section 5.2 on the headline generation task for our dataset in Section 3.",
"Table 3 shows the evaluation results based on the change rates, winning rates, average votes, and precision.",
"Looking at the average votes and precision, which represent the overall performances, our method MLRank clearly performed the best among all methods.",
"We confirmed that Table 3 : Evaluation results of our headline generation task for proposed method MLRank and baselines.",
"the relative improvement of the average votes of our method MLRank against every baseline including the prefix method Prefix is statistically significant on the basis of a one-tailed Wilcoxon signed-rank test (p < 0.01).",
"The change and winning rates of our method are 9.9% and 94.9%, respectively.",
"This means that our method detected 9.9% of the uninformative headlines and improved them with the high accuracy of 94.9%.",
"In other words, our method could successfully improve the overall performance while simultaneously avoiding any negative side effects.",
"The ideal results (Ref) based on correct labels suggest that our method still has room for improvement, especially for the change rate.",
"The results of the other baselines are as follows.",
"Not surprisingly, the prefix method Prefix performed well.",
"This is consistent with the fact that the first sentence can be a good summary in many summarization tasks.",
"The random method Random performed the worst, also as expected.",
"The dictionarybased deletion method DictDel was relatively useful, although the change rate was small.",
"The reason the winning rate of DictDel is relatively low compared with MLRank is that there are some cases where a combination of uninformative expressions can yield likely clues.",
"For example, the self-introduction \"I am a newbie of this forum\" itself is basically uninformative for a question, but a combination with additional information such as \"I am a newbie of this forum.",
"Where can I change the password ...\" can be more informative than only the additional information \"... Where can I change the password since I forgot it after ...\" because the combination specifies the target site by the expression \"this forum\".",
"The unsupervised methods, SimTfidf, SimTfidf, SimEmb, and LexRank, which are widely used for summarization tasks, performed worse than the prefix method Prefix.",
"Although the change rates are higher than our method, the winning rates are lower.",
"In other words, they yielded many bad headlines.",
"These results suggest that supervised learning specialized to the target task would be required.",
"Comparing the important sentence extraction method ImpTfidf and the similarity-based summarization method SimTfidf, we found that SimTfidf performed better.",
"This implies that the content information of each question is useful for our headline generation task, as is the case with other summarization tasks.",
"The similarity-based method SimEmb with embeddings performed worse than our expectation.",
"The reason seems to be that it was difficult to obtain meaningful document embeddings from long questions.",
"The graph-based method LexRank had a similar performance to SimTfidf, because LexRank tends to select a candidate similar to the question when only one candidate was selected.",
"The supervised methods, SVM and SVR, performed relatively well compared to the unsupervised methods, but they did not outperform the strong simple baselines, Prefix and DictDel, nor our method MLRank.",
"These results support the appropriateness of our approach.",
"Related Work In this section, we briefly explain several related studies from two aspects: headline generation task and CQA data.",
"As discussed below, our work is the first attempt to address an extractive headline generation task for a CQA service based on learning to rank the substrings of a question.",
"After Rush et al.",
"(2015) proposed a neural headline generation model, there have been many studies on the same headline generation task (Takase et al., 2016; Chopra et al., 2016; Kiyono et al., 2017; Ayana et al., 2017; Raffel et al., 2017) .",
"However, all of them are abstractive methods that can yield erroneous output, and the training for them requires a lot of paired data, i.e., news articles and headlines.",
"There have also been several classical studies based on nonneural approaches to headline generation (Woodsend et al., 2010; Alfonseca et al., 2013; Colmenares et al., 2015) , but they basically addressed sentence compression after extracting important linguistic units such as phrases.",
"In other words, their methods can still yield erroneous output, although they would be more controllable than neural models.",
"One exception is the work of Alotaiby (2011) , where fixed-sized substrings were considered for headline generation.",
"Although that approach is similar to ours, Alotaiby only considered an unsupervised method based on similarity to the original text (almost the same as SimTfidf in Section 5.2), in contrast to our proposal based on learning to rank.",
"This implies that Alotaiby's method will also not perform well for our task, as shown in Section 5.4.",
"There have been several studies on extractive summarization (Kobayashi et al., 2015; Yogatama et al., 2015) based on sentence embeddings, but they were basically developed for extracting multiple sentences, which means that these methods are almost the same as SimEmb in Section 5.2 for our purpose, i.e., extraction of the best candidate.",
"This also implies that they will not be suitable for our task.",
"Furthermore, recent sophisticated neural models for extractive summarization (Cheng and Lapata, 2016; Nallapati et al., 2017) basically require large-scale paired data (e.g., article-headline) to automatically label candidates, as manual annotation is very costly.",
"However, such paired data do not always exist for real applications, as in our task described in Section 1.",
"There have been many studies using CQA data, but most of them are different from our task, i.e., dealing with answering questions (Surdeanu et al., 2008; Celikyilmaz et al., 2009; Bhaskar, 2013; Nakov et al., 2017) , retrieving similar questions (Lei et al., 2016; Romeo et al., 2016; Nakov et al., 2017) , and generating questions (Heilman and Smith, 2010) .",
"Tamura et al.",
"(2005) focused on extracting a core sentence and identifying the question type as classification tasks for answering multiple-sentence questions.",
"Although their method is useful to retrieve important information, we cannot directly use it since our task requires shorter expressions for headlines than sentences.",
"In addition, they used a support vector machine as a classifier, which is almost the same as SVM in Section 5.2, and it is not expected to be suitable for our task, as shown in Section 5.4.",
"The work of Ishigaki et al.",
"(2017) is the most related one in that they summarized lengthy questions by using both abstractive and extractive approaches.",
"Their work is promising because our task is regarded as the construction of short summaries, but the training of their models requires a lot of paired data consisting of questions and their headlines, which means that their method cannot be used to our task.",
"Conclusion We proposed an extractive headline generation method based on learning to rank for CQA that extracts the most informative substring in each question as its headline.",
"We created a dataset for our task, where headline candidates in each question are ranked using crowdsourcing.",
"Our method outperformed several baselines, including a prefix-based method, which is widely used for cases where the display area is limited, such as the push notifications on smartphones.",
"The dataset created for our headline generation task will be made publicly available 8 .",
"Although our task is basically designed for extractive summarization, this dataset can also be used for abstractive summarization as a side information for training abstractive models.",
"In future work, we will investigate how effectively our method can perform in practical situations, e.g., push notifications.",
"In addition, we will consider how to improve the change rate of our method while keeping its winning rate and how to create a useful dataset even if removing the length limitation."
]
}
|
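The pairwise rankSVM training described in Section 4 of the paper text above can be sketched in code. This is an illustrative reimplementation on synthetic data, not the authors' LIBLINEAR-based code: the tiny gradient-descent solver, the toy 2-D features, and the names `train_rank_svm` and `best_candidate` are assumptions made for this sketch.

```python
import numpy as np

def train_rank_svm(X, y, qid, C=0.125, lr=0.01, iters=500):
    """Train a linear pairwise rankSVM with a squared hinge (L2) loss.

    Builds the pair set P = {(i, j) | qid[i] == qid[j], y[i] > y[j]} and
    minimises 0.5 * ||w||^2 + C * sum_P max(0, 1 - w.(x_i - x_j))^2
    by plain gradient descent (the paper uses LIBLINEAR instead).
    """
    pairs = [(i, j) for i in range(len(y)) for j in range(len(y))
             if qid[i] == qid[j] and y[i] > y[j]]
    D = np.array([X[i] - X[j] for i, j in pairs])  # pairwise differences
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        active = np.maximum(1.0 - D @ w, 0.0)      # violated margins only
        grad = w - 2.0 * C * (active[:, None] * D).sum(axis=0)
        w -= lr * grad
    return w

def best_candidate(w, X, indices):
    """argmax over a question's candidates of the score f_q(x) = w.x."""
    return max(indices, key=lambda i: float(w @ X[i]))

# Toy data: two questions, three candidates each; votes grow with the
# first feature, so a good model should put all weight on that feature.
X = np.array([[3.0, 0.0], [2.0, 0.0], [1.0, 0.0],
              [0.5, 1.0], [1.5, 1.0], [2.5, 1.0]])
votes = [5, 3, 1, 1, 3, 5]
qid = [0, 0, 0, 1, 1, 1]
w = train_rank_svm(X, votes, qid)
```

On this synthetic data the learned weight vector favours the first feature, so `best_candidate(w, X, [0, 1, 2])` returns 0 and `best_candidate(w, X, [3, 4, 5])` returns 5, matching the vote ordering within each question.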
{
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"5.1",
"5.2",
"5.3",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Negative Effect of Uninformative Headlines",
"Dataset Creation",
"Preparation of Headline Candidates",
"Crowdsourcing Task",
"Analysis of Crowdsourcing Results",
"Proposed Method",
"Basic Settings",
"Compared Methods",
"Evaluation Measures",
"Related Work",
"Conclusion"
]
}
|
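The four evaluation measures defined in the paper text above (change rate, winning rate, average votes, and precision@1) are simple ratios and can be sketched as below, assuming, as in the paper, that candidate index 0 in sentence order is the prefix headline. The function name `evaluate` and the toy vote data are illustrative assumptions.

```python
def evaluate(votes_per_q, selected):
    """Compute change rate, winning rate, average votes, and precision@1.

    votes_per_q[i] lists the crowdsourced votes of question i's candidates
    in sentence order, so index 0 is the prefix headline; selected[i] is
    the candidate index chosen by the method under evaluation.
    """
    n = len(selected)
    changed = [i for i in range(n) if selected[i] != 0]
    change_rate = len(changed) / n
    winning_rate = (sum(votes_per_q[i][selected[i]] > votes_per_q[i][0]
                        for i in changed) / len(changed)) if changed else 0.0
    average_votes = sum(votes_per_q[i][selected[i]] for i in range(n)) / n
    precision = sum(votes_per_q[i][selected[i]] == max(votes_per_q[i])
                    for i in range(n)) / n
    return change_rate, winning_rate, average_votes, precision

# Toy example: three questions; the method keeps the prefix once and
# changes it twice (winning once on votes, tying once).
votes = [[5, 3, 1], [2, 6, 1], [4, 4, 2]]
selected = [0, 1, 1]
cr, wr, av, pr = evaluate(votes, selected)
```

Here the change rate is 2/3, the winning rate 1/2 (one changed headline beats the prefix, one ties), the average votes 5.0, and precision 1.0 (every selected candidate has the maximum votes of its question).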
GEM-SciDuet-train-50#paper-1080#slide-9
|
Data Creation Crowdsourcing
|
Select the best option from the list so that users can guess the content of the question and distinguish it from other ones.
Randomly Sorted Headline Candidate score
Neighbors have given the owner cautions against
This area has only private houses, not rented ...
How can I effectively manage this problem?
However, I will go crazy if I have to keep enduring
A dog kept in the next house barks from morning
Nice to meet you, thank you in advance, I'm a ...
... Advice please. A dog kept in the next house ...
Number of votes by
10 workers per question
Copyright (C) 2019 Yahoo Japan Corporation. All Rights Reserved.
|
Select the best option from the list so that users can guess the content of the question and distinguish it from other ones.
Randomly Sorted Headline Candidate score
Neighbors have given the owner cautions against
This area has only private houses, not rented ...
How can I effectively manage this problem?
However, I will go crazy if I have to keep enduring
A dog kept in the next house barks from morning
Nice to meet you, thank you in advance, I'm a ...
... Advice please. A dog kept in the next house ...
Number of votes by
10 workers per question
Copyright (C) 2019 Yahoo Japan Corporation. All Rights Reserved.
|
[] |
GEM-SciDuet-train-50#paper-1080#slide-10
|
1080
|
Extractive Headline Generation Based on Learning to Rank for Community Question Answering
|
User-generated content such as the questions on community question answering (CQA) forums does not always come with appropriate headlines, in contrast to the news articles used in various headline generation tasks. In such cases, we cannot use paired supervised data, e.g., pairs of articles and headlines, to learn a headline generation model. To overcome this problem, we propose an extractive headline generation method based on learning to rank for CQA that extracts the most informative substring from each question as its headline. Experimental results show that our method outperforms several baselines, including a prefix-based method, which is widely used in real services.
|
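The binary ten-dimensional position feature described in the paper's Basic Settings (each dimension marks whether a candidate overlaps one tenth of the question's character sequence) can be sketched as follows. The name `position_feature` and its character-offset interface are assumptions for illustration.

```python
def position_feature(question_len, start, end, bins=10):
    """Binary position feature for a headline candidate.

    The question (a character sequence) is split into `bins` equal parts;
    dimension k is 1 iff the candidate span [start, end) overlaps the
    k-th part, i.e. [k*L/bins, (k+1)*L/bins) with L = question_len.
    """
    return [1 if start < (k + 1) * question_len / bins
                 and end > k * question_len / bins else 0
            for k in range(bins)]

# A candidate covering the first 2/10 of a 100-character question:
print(position_feature(100, 0, 20))  # -> [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
```

This reproduces the paper's examples: a candidate covering the first 2/10 of the question yields (1, 1, 0, ..., 0), one covering the third and fourth tenths yields (0, 0, 1, 1, 0, ..., 0), and one at the very end yields (0, ..., 0, 1).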
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260,
261,
262,
263
],
"paper_content_text": [
"Introduction Community question answering (CQA) is a service where users can post their questions and answer the questions from other users.",
"The quality of a CQA service is inevitably linked to how many questions are answered in a short amount of time.",
"To this end, the headline of a posted question plays a key role, as it is the first thing users see in a question list or push notification on a smartphone.",
"However, questions in a CQA service do not always have appropriate headlines because the questions are written by various users who basically do not have any specialized knowledge in terms of writing such content, in contrast to news articles written by professional editors.",
"In fact, the biggest CQA service in Japan, Yahoo!",
"Chiebukuro 1 , does not even provide an input field for headlines in the submission form of questions, as general users do not have enough patience and tend not to post questions if even just one required field is added to the form.",
"This service alternatively uses the prefix of a question as its headline as in Figure 1(a) , where the headline \" (Nice to meet you.",
"Thank you in advance.",
"At work ...)\" is created from the prefix of the content.",
"Obviously, this headline is uninformative because of the lack of actual content, which is related to an initial unrequited love of a woman in the workplace.",
"Figure 1(b) shows how ineffective such an uninformative headline is for a smartphone push notification, where users would have practically zero motivation to click on it and answer since they cannot imagine what kind of question is being asked.",
"This negative effect has been confirmed on a commercial CQA service (Section 2).",
"In this work, we take an extractive approach for improving uninformative headlines.",
"Although there have recently been many studies on abstractive headline generation (as described in Section 6), we do not follow any of these approaches because the content we deal with has no correct headlines and also the abstractive methods can yield erroneous output.",
"This latter issue is important in practical terms because correct output is critical for a commercial service.",
"If by some chance an erroneous headline, or one including politically incorrect phrases, is sent to all users, the service's credibility can be lost instantly.",
"Therefore, we formalize our task as an extraction problem of a fixed-length substring from a question as its headline.",
"In this setting, we can assume that outputs never include such errors caused by the service, as the outputs are substrings of user-posted questions.",
"Note that they have no coherence errors in selecting multiple sentences as in normal extractive summarization tasks.",
"While it is true that these outputs might contain inappropriate expressions authored by users, this type of error is beyond our scope since it is a different inevitable problem.",
"Furthermore, the situation of \"the service generated an inappropriate expression by itself\" is significantly worse than the situation of \"a user posted an inappropriate question to the service, and the service displayed it\".",
"Therefore, it is difficult to directly use abstractive methods for commercial services from a business standpoint.",
"Our approach involves preparing headline candidates and ranking them.",
"The formal description is as follows.",
"Let q be a target question to be translated into a headline.",
"We first prepare a set S(q) of headline candidates from the question q.",
"Note that the set S(q) is restricted to a set of fixed-length substrings given a length n, i.e., S(q) ⊆ {x | x ⪯ q, |x| = n}, where x ⪯ q means that x is a substring of q.",
"Then we extract the best headline that maximizes a score function f q (x), which represents the \"headline-ness\" score of a candidate x ∈ S(q) with respect to the target question q, as follows: argmax x∈S(q) f q (x).",
"(1) To ensure simplicity of implementation and understandability from users, we use a set of the fixed-length prefixes of all sentences in a question q as the candidate set S(q) (Section 3).",
"Because the problem is to select the best candidate from among several targets, the score function f q (x) is naturally trained by learning to rank (Section 4).",
"The main contributions of this paper are as follows.",
"• We report empirical evidence of the negative effect of uninformative headlines on a commercial CQA service (Section 2).",
"We additionally show that our task can reduce uninformative headlines by using simple dictionary matching, which dramatically improves the average answer rate of questions by as much as 2.4 times.",
"• We propose an extractive headline generation method based on learning to rank for CQA (Section 4) that extracts the most informative substring (prefix of mid-sentence) from each question as its headline.",
"To the best of our knowledge, our work is the first attempt to address such a task from a practical standpoint, although there have been many related studies (Section 6).",
"Experimental results show that our method outperforms several baselines, including the dictionary-based method (Section 5).",
"• We create a dataset for our headline generation task (Section 3), where headline candidates extracted from questions are ranked by crowdsourcing with respect to \"headline-ness\", that is, whether or not each headline candidate is appropriate for the headline of the corresponding question.",
"Negative Effect of Uninformative Headlines We conducted A/B testing on the push notifications of smartphones in collaboration with Yahoo!",
"Chiebukuro, as shown in Figure 1 (b).",
"We first prepared a dictionary of typical first sentences that cause uninformative headlines.",
"This dictionary was manually selected from frequent first sentences and consists of 913 sentences including greetings such as \" (Good morning)\", \" Changed uninformative Unchanged uninformative Informative 0.75% 0.31% 0.45% Table 1 : Average answer rates of three question groups in A/B testing.",
"(Good afternoon)\", and \" (Nice to meet you)\", and fixed phrases such as \" (Can I ask you something)\", \" (Please tell me)\", and \" (Thank you in advance)\".",
"We assumed that a question with a prefix match in the dictionary has an uninformative prefix headline and classified such questions into an uninformative group.",
"For convenience, we also classified the other questions into an informative group, although they might include not so informative headlines.",
"We further randomly divided the uninformative group into two equal groups: changed and unchanged.",
"In the changed uninformative group, each headline is extracted as the prefix of the first (informative) sentence that does not match with the dictionary, which is the same as DictDel explained in Section 5.2.",
"The unchanged group remains in uninformative.",
"For comparison of these groups, we used the average answer rate over notified questions in each group as an evaluation measure, defined as Average answer rate = No.",
"of questions answered from the notification No.",
"of notified questions .",
"(2) Note that we use a percentage expression (%) for easy reading.",
"Table 1 shows the evaluation results of the A/B testing during a 1-month period (Feb. 2 -Mar.",
"4, 2018) , where about three million questions were sent to users.",
"Comparing the unchanged uninformative group with the informative group, we can see that the average answer rate of the uninformative questions, 0.31%, is actually lower than that of the informative questions, 0.45%.",
"Comparing the changed and unchanged uninformative groups, the average answer rate of the changed questions, 0.75%, is much higher than that of the unchanged questions, 0.31%.",
"This means that even a simple dictionary-based method can dramatically improve the quality of the uninformative headlines, i.e., by as much as 2.4 times.",
"We confirmed that the difference is statistically significant on a one-tailed Wilcoxon signed-rank test (p < 0.05).",
"Note that the average answer rate represents a conversion rate (or rate of target actions), which is more important than a click-through rate (or rate of initial actions).",
"The average answer rate is one of the most important indicators for a CQA service, while the click-through rate can be meaninglessly high if the service sends headlines that are fake or too catchy.",
"We should point out that low answer rates are sufficient for the service, since it has 44M users: i.e., each question has an average of 2.4 answers, as the service currently has 189M questions and 462M answers.",
"Dataset Creation We created a dataset for our headline generation task based on the Yahoo!",
"Chiebukuro dataset 2 , which is a dataset including questions and answers provided from a Japanese CQA service, Yahoo!",
"Chiebukuro.",
"We first prepared headline candidates from this dataset as in Section 3.1 and then conducted a crowdsourcing task specified in Section 3.2.",
"In Section 3.3, we report the results of the crowdsourcing task.",
"Preparation of Headline Candidates We extracted only questions from the Chiebukuro dataset and split each question into sentences by using punctuation marks (i.e., the exclamation (\" \"), question (\" \"), and full stop (\" \") marks).",
"We regarded 20 Japanese characters that are basically extracted from each sentence as a headline candidate x ∈ S(q) in Eq.",
"(1), since this setting is used for push notifications in the actual service in Figure 1 (b).",
"More specifically, the headline candidate is created as follows: 1.",
"If the sentence is the first one in the question, we extract the first 19 characters and put an ellipsis mark (\" \") at the end.",
"(a) Example of our crowdsourcing task.",
"Posted Question: Nice to meet you, I am a man in my 30s.",
"Please give me your advice on a pressing concern I have.",
"A dog kept in the next house barks from morning to night.",
"Neighbors have given the owner cautions against it, but there is no improvement.",
"This area has only private houses, not rented houses, so I cannot move out.",
"However, I will go crazy if I have to keep enduring this.",
"How can I effectively manage this problem?",
"...",
"Please give me your advice on a pressing concern ... (b) English translation of left example.",
"Figure 2 : Examples of (a) our crowdsourcing task and (b) its English translation.",
"In the case where the length of a candidate is less than 20 characters, we include some of the next sentence in order to maximize use of display space.",
"We included questions with more than five sentences for the purpose of efficiently collecting ranking information.",
"All told, we prepared 10,000 questions containing more than five headline candidates each.",
"Figure 2 shows an example of our crowdsourcing task (a) and its English translation (b).",
"This task involves a posted question and headline candidates corresponding to the question.",
"We asked workers to select the best candidate from options after reading the posted question.",
"A relative evaluation, where workers select the best candidate, was used instead of an absolute evaluation, where workers select a score from 0 to 10 for each candidate, because we wanted to obtain as accurate a headline as possible, and it might be difficult for workers to select an appropriate absolute score.",
"The workers were instructed as follows (English translation): Crowdsourcing Task Various candidate headlines are listed as options for a posted question in a Q&A service.",
"After reading the question, please select the best option from the list so that users can guess the content of the question and distinguish it from other ones.",
"Please remove uninformative ones such as greetings, self-introductions, and unspecific expressions.",
"We explain how to judge the appropriateness of each candidate by means of the example in Figure 2 .",
"After examining the posted question, we can assume that the most important content is \"he is annoyed by the barking of a dog kept in the next house\".",
"On the basis of this assumption, option 6 is the best one, since the headline \"A dog kept in the next house barks from morning\" is enough to help answerers deduce that \"the questioner is annoyed by the sound\".",
"Option 1 is inappropriate because although the answerers might be able to guess that \"the questioner cannot move out\", this matter is not the central one.",
"Option 2 is uninformative because it consists merely of greetings and self-introduction, and option 3, while a question sentence, is unspecific.",
"Option 4 enables answerers to guess that this question is related to an issue involving pets, but they cannot grasp the specific content.",
"Option 5 specifies a likely damage due to the trouble, but the reason (trouble) is more important for the answering than the result (damage).",
"Option 7 directly shows that \"the questioner is annoyed and wants some advice\", but answerers cannot understand why he/she is annoyed.",
"The detailed implementation of our task is as follows.",
"First we randomly sorted the candidates of each question (shown in Figure 2 (a)) to avoid position bias by the workers.",
"We included ten actual questions and a dummy question so that workers would always have to answer one dummy question per every ten actual questions.",
"A dummy question is a question with a clear answer inserted to eliminate fraud workers (i.e., workers who randomly select answers without actually reading them).",
"Each question was answered by ten workers so each headline candidate had a vote score from 0 to 10 representing whether or not the candidate was appropriate for a headline.",
"This task took nine days and was answered by 1,558 workers.",
"As a result, our dataset consists of 10,000 questions, each of which has more than five headline candidates with accompanying \"headline-ness\" scores.",
"Analysis of Crowdsourcing Results We analyzed our dataset to determine how much room for improvement our task has compared to the prefix headline.",
"It is well known that the first sentence can be a strong baseline in many summarization tasks, so the prefix headline is also expected to perform well.",
"Figure 3 shows the statistical information based on sentence position that includes the (a) ratio of the true best (most voted) candidates, (b) average votes, and (c) average rank (in order of votes) over the candidates at each sentence position.",
"Looking at the true best candidates (a), the ratio 61.8% for the 1st sentence clarifies the effectiveness of the prefix headline, as expected.",
"Conversely, we still have room for improvement for the prefix headline up to 38.2%.",
"Our goal is to improve the uninformative headlines of 38.2% while keeping the remaining 61.8% unchanged.",
"The other two figures (b) and (c) also support the above discussion.",
"Furthermore, we qualitatively checked the crowdsourcing quality.",
"Workers successfully eliminated uninformative candidates including greetings and self-introductions, while one or two workers sometimes chose ones that included a fixed phrase such as \"Please tell me ...\".",
"This is probably because workers had different criteria regarding the \"unspecific expressions\" described in the instructions.",
"Since we cannot enumerate all of the concrete bad examples, we ignore this phenomenon with the expectation that a learning algorithm will reduce its negative effect.",
"Proposed Method In this section, we explain how to construct a headline generation model from the dataset presented in Section 3.",
"We took a ranking approach, i.e., learning to rank, for our task, rather than a simple regression one, since estimating absolute scores is not required for our purpose.",
"Even if two headline candidates (of different questions) have the same expression, their votes can be significantly different since the votes in our dataset are based on relative evaluation.",
"For example, the best candidate for Figure 2 was No.",
"6, but it might not be selected in other questions such as \"A dog kept in the next house barks from morning.",
"Does anybody know why dogs generally want to bark?\".",
"Learning to rank is an application of machine learning that is typically used for ranking models in information retrieval systems.",
"The ranking models are basically learned from a supervised dataset consisting of triples (q, x, y), where q is a user's query, x is a document corresponding to q, and y is a relevance score of x with respect to q.",
"In this work, we formalize our task by regarding q, x, and y, as a posted question, a headline candidate, and a voted score in our dataset, respectively.",
"We used a pairwise ranking method that is also implemented as an instance of the well-known SVM tools LIBLINEAR and LIBSVM (Kuo et al., 2014) .",
"We used a linear model based on LIBLINEAR, an L2-regularized L2-loss linear rankSVM, for the experiments.",
"Let D be a dataset that consists of triples including a posted question q, a headline candidate x, and a voted score y, i.e., (q, x, y) ∈ D. In the pairwise ranking method, we train a ranking model as a binary classifier that determines whether the condition y i > y j is true or false for two candidates x i and x j in the same question (q i = q j ).",
"Specifically, we first define the index pairs of positive examples by P = {(i, j) | q i = q j , y i > y j , (x i , y i , q i ) ∈ D, (x j , y j , q j ) ∈ D}.",
"Note that we do not need to consider the negative examples N = {(j, i) | (i, j) ∈ P } since they yield the same formula as P in the optimization process.",
"The training of the pairwise ranking method is achieved by solving the following optimization problem using the set P of the index pairs: min w 1 2 w ⊤ w + C ∑ (i,j)∈P ℓ(w ⊤x i − w ⊤x j ), (3) where w is a weight vector to be learned,x i is a feature vector extracted from a headline candidate x, and C is the regularization parameter.",
"The function ℓ is a squared hinge loss, which is defined as ℓ(d) = max(0, 1 − d) 2 .",
"Finally, we define the score function in Eq.",
"(1) as f q (x) = w ⊤x , wherex can be created by using q as well as x.",
"This score means the relative \"headline-ness\" of x.",
"Experiments Basic Settings The basic settings of the experiments are as follows.",
"We split our dataset into training and test sets consisting of 9,000 and 1,000 examples, respectively.",
"We used an implementation 3 based on LIBLIN-EAR for training our ranking model, i.e., a linear L2-regularized L2-loss rankSVM model , as described in Section 4.",
"The regularization parameter was optimized by cross validation and set as C = 0.125.",
"The feature vector for a headline candidate consists of three kinds of features: bag-of-words, embedding, and position information.",
"The bag-of-words feature is a sparse vector of 30,820 dimensions based on the tf-idf scores of nouns, verbs, interjections, conjunctions, adverbs, and adjectives in a candidate, where we used a Japanese morphological analyzer, MeCab 4 (Kudo et al., 2004) , with a neologism dictionary, NEologd 5 (Toshinori Sato and Okumura, 2017).",
"The embedding feature is a dense vector of 100 dimensions based on a doc2vec model (Le and Mikolov, 2014) trained with all 3M sentences in the Chiebukuro dataset using the Gensim tool 6 .",
"The position feature is a binary vector of ten dimensions, where each dimension represents the coarse position (or coverage) of a headline candidate for a question.",
"Specifically, we equally split a question (character sequence) into ten parts and set one to each dimension if and only if the corresponding part overlaps a candidate.",
"For example, candidate No.",
"2 in Figure 2 had a position feature (1, 1, 0, · · · , 0), since the candidate covers the first 2/10 of the whole question.",
"Similarly, No.",
"6 and No.",
"3 had (0, 0, 1, 1, 0, · · · , 0) and (0, · · · , 0, 1), respectively.",
"For constructing the feature vector of each headline candidate, we used the previous and next candidates in sentence order, in addition to the target candidate.",
"This is based on the idea that near candidates might have useful information for the target candidate.",
"Finally, we prepared each feature vector by concatenating nine feature vectors, i.e., the above three kinds of features for three candidates, and normalizing them.",
"Compared Methods We compared our method, MLRank, with the baselines listed below.",
"Prefix, DictDel, and Random are simple baselines, while Prefix and DictDel are practically strong.",
"ImpTfidf, SimTfidf, SimEmb, and LexRank are unsupervised baselines, and SVM and SVR are supervised ones.",
"• Prefix: Selects the first candidate in sentence order.",
"• DictDel: Selects the first (informative) candidate that does not match in the dictionary of uninformative headlines (Section 2).",
"• Random: Randomly selects a candidate.",
"• ImpTfidf: Selects the most important candidate with the highest tf-idf value, where a tf-idf value is calculated by the sum of the elements in a bag-of-words feature (described in Section 5.1).",
"• SimTfidf: Selects the most similar candidate to the original question, which is calculated by the cosine similarity between the bag-of-words features (in Section 5.1) of each candidate and the question.",
"• SimEmb: An embedding-based variation of SimTfidf with embedding features (in Section 5.1).",
"• LexRank: Selects the candidate with the highest score based on LexRank 7 (Erkan and Radev, 2004) , which is a widely used unsupervised extractive summarization method based on the PageRank algorithm.",
"The graph expression of each question was constructed on the basis of cosine similarity of the tf-idf vectors corresponding to candidates.",
"• SVM: Selects the candidate with the highest confidence based on a model learned as a classification task, where candidates with nonzero votes were labeled as positive.",
"This setting was the best in our preliminary experiments.",
"We used the L2-regularized L2-loss support vector classification model (C = 0.0156) in LIBLINEAR.",
"The other settings were the same as those described in Section 5.1.",
"• SVR: Selects the candidate with the highest predicted votes based on a model learned as a regression task, where the target variable is the number of votes.",
"We used the L2-regularized L2-loss support vector regression model (C = 0.0625) in LIBLINEAR.",
"The other settings were the same as above.",
"• MLRank: Proposed method described in Section 4.",
"Evaluation Measures We defined three evaluation measures for evaluating each method on our headline generation task.",
"Change Rate from Prefix Headline We measured how much each method changed the default prefix headline in order to determine the effect of application to an actual CQA service.",
"We defined this measure as change rate from the prefix headline, as follows: Change rate = No.",
"of questions where the best candidate is not the prefix headline No.",
"of all questions .",
"Clearly, the change rate of the default method that selects the prefix headline is 0%.",
"If the value is small, the effect on the service will be small, but if the value is higher than the ideal change rate of 38.2% (Section 3), there can be side effects even if the average result is good.",
"A higher change rate up to the ideal rate is desirable from a practical standpoint.",
"Winning Rate against Prefix Headline We measured how much each method won against the prefix headline to directly assess the quality of changed headlines.",
"We defined this measure as winning rate against the prefix headline, as follows: Winning rate = No.",
"of questions where the best candidate got more votes than the prefix headline No.",
"of questions where the best candidate is not the prefix headline .",
"(5) We did not consider the first candidate (in sentence order) selected by each method, which is the same as the prefix headline, since they obviously have the same number of votes.",
"Average Votes We measured how appropriate the candidates selected by each method are in order to determine the overall performance.",
"We defined this measure as average votes, as follows: Average votes = Sum of votes for the best candidates for all questions No.",
"of questions .",
"Note that the average votes score is different from the average votes score for position (Figure 3(b) ) in that the former is the average over the selected candidates while the latter is the average over the candidates at a sentence position.",
"This measure is related to (normalized) discounted cumulative gain (DCG), which is widely used as an evaluation measure of ranking models.",
"We often use DCG@k for evaluating top-k rankings, and the above definition acutally corresponds to DCG@1.",
"According to a well-known paper (Järvelin and Kekäläinen, 2002) in the information retrieval field, DCG is appropriate for graded-relevance judgments like our task, while precision (described below) is appropriate for binary-relevance ones.",
"Average votes is expected to be more appropriate than precision for our task because we want \"averagely better headlines than default ones\" rather than \"best ones\" from a practical standpoint.",
"Precision Precision is a widely used evaluation measure for classification tasks, and we added it to support an evaluation based on average votes.",
"We defined it with respect to the best candidate, i.e., precision@1, as follows: Precision = No.",
"of questions where the best candidate had the maximum votes No.",
"of questions .",
"(7) Table 2 shows examples of headlines generated by the prefix method, Prefix, and the proposed method, MLRank.",
"Looking at the first example, we can see that our method successfully eliminated the selfintroduction phrase (\"I am a 27-year-old woman\").",
"The headline (right) of our method allows answerers to know that the questioner is discouraged about how to encounter men from the phrase \"little chance of new encounters with men\", while the headline (left) of the prefix method lacks this important clue.",
"The second and third examples show similar effects to the first example.",
"In the second one, although there are few clues about what the question is with the prefix method, our method correctly included the important clue (\"honeymoon\").",
"In the third one, our method appropriately eliminated the uninformative long phrase (\"I am sorry if the category is wrong\"), which is not a frequent fixed phrase.",
"The fourth example shows a slightly challenging case, where both headlines make it difficult to understand the question.",
"However, the headline of our method included the term \"winning bidder\", so at least the answerer can assume that the question is about some sort of auction trouble.",
"The fifth example is a clearly successful result, where our method extracted the main question point about \"welfare pension\" as a headline.",
"These results qualitatively demonstrate the effectiveness of our method.",
"Results Qualitative Analysis Quantitative Analysis We compared our method MLRank with the baselines in Section 5.2 on the headline generation task for our dataset in Section 3.",
"Table 3 shows the evaluation results based on the change rates, winning rates, average votes, and precision.",
"Looking at the average votes and precision, which represent the overall performances, our method MLRank clearly performed the best among all methods.",
"We confirmed that Table 3 : Evaluation results of our headline generation task for proposed method MLRank and baselines.",
"the relative improvement of the average votes of our method MLRank against every baseline including the prefix method Prefix is statistically significant on the basis of a one-tailed Wilcoxon signed-rank test (p < 0.01).",
"The change and winning rates of our method are 9.9% and 94.9%, respectively.",
"This means that our method detected 9.9% of the uninformative headlines and improved them with the high accuracy of 94.9%.",
"In other words, our method could successfully improve the overall performance while simultaneously avoiding any negative side effects.",
"The ideal results (Ref) based on correct labels suggest that our method still has room for improvement, especially for the change rate.",
"The results of the other baselines are as follows.",
"Not surprisingly, the prefix method Prefix performed well.",
"This is consistent with the fact that the first sentence can be a good summary in many summarization tasks.",
"The random method Random performed the worst, also as expected.",
"The dictionarybased deletion method DictDel was relatively useful, although the change rate was small.",
"The reason the winning rate of DictDel is relatively low compared with MLRank is that there are some cases where a combination of uninformative expressions can yield likely clues.",
"For example, the self-introduction \"I am a newbie of this forum\" itself is basically uninformative for a question, but a combination with additional information such as \"I am a newbie of this forum.",
"Where can I change the password ...\" can be more informative than only the additional information \"... Where can I change the password since I forgot it after ...\" because the combination specifies the target site by the expression \"this forum\".",
"The unsupervised methods, SimTfidf, SimTfidf, SimEmb, and LexRank, which are widely used for summarization tasks, performed worse than the prefix method Prefix.",
"Although the change rates are higher than our method, the winning rates are lower.",
"In other words, they yielded many bad headlines.",
"These results suggest that supervised learning specialized to the target task would be required.",
"Comparing the important sentence extraction method ImpTfidf and the similarity-based summarization method SimTfidf, we found that SimTfidf performed better.",
"This implies that the content information of each question is useful for our headline generation task, as is the case with other summarization tasks.",
"The similarity-based method SimEmb with embeddings performed worse than our expectation.",
"The reason seems to be that it was difficult to obtain meaningful document embeddings from long questions.",
"The graph-based method LexRank had a similar performance to SimTfidf, because LexRank tends to select a candidate similar to the question when only one candidate was selected.",
"The supervised methods, SVM and SVR, performed relatively well compared to the unsupervised methods, but they did not outperform the strong simple baselines, Prefix and DictDel, nor our method MLRank.",
"These results support the appropriateness of our approach.",
"Related Work In this section, we briefly explain several related studies from two aspects: headline generation task and CQA data.",
"As discussed below, our work is the first attempt to address an extractive headline generation task for a CQA service based on learning to rank the substrings of a question.",
"After Rush et al.",
"(2015) proposed a neural headline generation model, there have been many studies on the same headline generation task (Takase et al., 2016; Chopra et al., 2016; Kiyono et al., 2017; Ayana et al., 2017; Raffel et al., 2017) .",
"However, all of them are abstractive methods that can yield erroneous output, and the training for them requires a lot of paired data, i.e., news articles and headlines.",
"There have also been several classical studies based on nonneural approaches to headline generation (Woodsend et al., 2010; Alfonseca et al., 2013; Colmenares et al., 2015) , but they basically addressed sentence compression after extracting important linguistic units such as phrases.",
"In other words, their methods can still yield erroneous output, although they would be more controllable than neural models.",
"One exception is the work of Alotaiby (2011) , where fixed-sized substrings were considered for headline generation.",
"Although that approach is similar to ours, Alotaiby only considered an unsupervised method based on similarity to the original text (almost the same as SimTfidf in Section 5.2), in contrast to our proposal based on learning to rank.",
"This implies that Alotaiby's method will also not perform well for our task, as shown in Section 5.4.",
"There have been several studies on extractive summarization (Kobayashi et al., 2015; Yogatama et al., 2015) based on sentence embeddings, but they were basically developed for extracting multiple sentences, which means that these methods are almost the same as SimEmb in Section 5.2 for our purpose, i.e., extraction of the best candidate.",
"This also implies that they will not be suitable for our task.",
"Furthermore, recent sophisticated neural models for extractive summarization (Cheng and Lapata, 2016; Nallapati et al., 2017) basically require large-scale paired data (e.g., article-headline) to automatically label candidates, as manual annotation is very costly.",
"However, such paired data do not always exist for real applications, as in our task described in Section 1.",
"There have been many studies using CQA data, but most of them are different from our task, i.e., dealing with answering questions (Surdeanu et al., 2008; Celikyilmaz et al., 2009; Bhaskar, 2013; Nakov et al., 2017) , retrieving similar questions (Lei et al., 2016; Romeo et al., 2016; Nakov et al., 2017) , and generating questions (Heilman and Smith, 2010) .",
"Tamura et al.",
"(2005) focused on extracting a core sentence and identifying the question type as classification tasks for answering multiple-sentence questions.",
"Although their method is useful to retrieve important information, we cannot directly use it since our task requires shorter expressions for headlines than sentences.",
"In addition, they used a support vector machine as a classifier, which is almost the same as SVM in Section 5.2, and it is not expected to be suitable for our task, as shown in Section 5.4.",
"The work of Ishigaki et al.",
"(2017) is the most related one in that they summarized lengthy questions by using both abstractive and extractive approaches.",
"Their work is promising because our task is regarded as the construction of short summaries, but the training of their models requires a lot of paired data consisting of questions and their headlines, which means that their method cannot be used to our task.",
"Conclusion We proposed an extractive headline generation method based on learning to rank for CQA that extracts the most informative substring in each question as its headline.",
"We created a dataset for our task, where headline candidates in each question are ranked using crowdsourcing.",
"Our method outperformed several baselines, including a prefix-based method, which is widely used for cases where the display area is limited, such as the push notifications on smartphones.",
"The dataset created for our headline generation task will be made publicly available 8 .",
"Although our task is basically designed for extractive summarization, this dataset can also be used for abstractive summarization as a side information for training abstractive models.",
"In future work, we will investigate how effectively our method can perform in practical situations, e.g., push notifications.",
"In addition, we will consider how to improve the change rate of our method while keeping its winning rate and how to create a useful dataset even if removing the length limitation."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"5.1",
"5.2",
"5.3",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Negative Effect of Uninformative Headlines",
"Dataset Creation",
"Preparation of Headline Candidates",
"Crowdsourcing Task",
"Analysis of Crowdsourcing Results",
"Proposed Method",
"Basic Settings",
"Compared Methods",
"Evaluation Measures",
"Related Work",
"Conclusion"
]
}
|
GEM-SciDuet-train-50#paper-1080#slide-10
|
Crowdsourcing Results
|
Ratio of questions whose prefix headlines were most voted
Room for improvement for prefix headline was up to
Improve uninformative headlines of 38.2%
Copyright (C) 2019 Yahoo Japan Corporation. All Rights Reserved.
|
Ratio of questions whose prefix headlines were most voted
Room for improvement for prefix headline was up to
Improve uninformative headlines of 38.2%
Copyright (C) 2019 Yahoo Japan Corporation. All Rights Reserved.
|
[] |
GEM-SciDuet-train-50#paper-1080#slide-11
|
1080
|
Extractive Headline Generation Based on Learning to Rank for Community Question Answering
|
User-generated content such as the questions on community question answering (CQA) forums does not always come with appropriate headlines, in contrast to the news articles used in various headline generation tasks. In such cases, we cannot use paired supervised data, e.g., pairs of articles and headlines, to learn a headline generation model. To overcome this problem, we propose an extractive headline generation method based on learning to rank for CQA that extracts the most informative substring from each question as its headline. Experimental results show that our method outperforms several baselines, including a prefix-based method, which is widely used in real services.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260,
261,
262,
263
],
"paper_content_text": [
"Introduction Community question answering (CQA) is a service where users can post their questions and answer the questions from other users.",
"The quality of a CQA service is inevitably linked to how many questions are answered in a short amount of time.",
"To this end, the headline of a posted question plays a key role, as it is the first thing users see in a question list or push notification on a smartphone.",
"However, questions in a CQA service do not always have appropriate headlines because the questions are written by various users who basically do not have any specialized knowledge in terms of writing such content, in contrast to news articles written by professional editors.",
"In fact, the biggest CQA service in Japan, Yahoo!",
"Chiebukuro 1 , does not even provide an input field for headlines in the submission form of questions, as general users do not have enough patience and tend not to post questions if even just one required field is added to the form.",
"This service alternatively uses the prefix of a question as its headline as in Figure 1(a) , where the headline \" (Nice to meet you.",
"Thank you in advance.",
"At work ...)\" is created from the prefix of the content.",
"Obviously, this headline is uninformative because of the lack of actual content, which is related to an initial unrequited love of a woman in the workplace.",
"Figure 1(b) shows how ineffective such an uninformative headline is for a smartphone push notification, where users would have practically zero motivation to click on it and answer since they cannot imagine what kind of question is being asked.",
"This negative effect has been confirmed on a commercial CQA service (Section 2).",
"In this work, we take an extractive approach for improving uninformative headlines.",
"Although there have recently been many studies on abstractive headline generation (as described in Section 6), we do not follow any of these approaches because the content we deal with has no correct headlines and also the abstractive methods can yield erroneous output.",
"This latter issue is important in practical terms because correct output is critical for a commercial service.",
"If by some chance an erroneous headline, or one including politically incorrect phrases, is sent to all users, the service's credibility can be lost instantly.",
"Therefore, we formalize our task as an extraction problem of a fixed-length substring from a question as its headline.",
"In this setting, we can assume that outputs never include such errors caused by the service, as the outputs are substrings of user-posted questions.",
"Note that they have no coherence errors in selecting multiple sentences as in normal extractive summarization tasks.",
"While it is true that these outputs might contain inappropriate expressions authored by users, this type of error is beyond our scope since it is a different inevitable problem.",
"Furthermore, the situation of \"the service generated an inappropriate expression by itself\" is significantly worse than the situation of \"a user posted an inappropriate question to the service, and the service displayed it\".",
"Therefore, it is difficult to directly use abstractive methods for commercial services from a business standpoint.",
"Our approach involves preparing headline candidates and ranking them.",
"The formal description is as follows.",
"Let q be a target question to be translated into a headline.",
"We first prepare a set S(q) of headline candidates from the question q.",
"Note that the set S(q) is restricted to a set of fixed-length substrings given a length n, i.e., S(q) ⊆ {x | x ⪯ q, |x| = n}, where x ⪯ q means that x is a substring of q.",
"Then we extract the best headline that maximizes a score function f q (x), which represents the \"headline-ness\" score of a candidate x ∈ S(q) with respect to the target question q, as follows: argmax x∈S(q) f q (x).",
"(1) To ensure simplicity of implementation and understandability from users, we use a set of the fixed-length prefixes of all sentences in a question q as the candidate set S(q) (Section 3).",
"Because the problem is to select the best candidate from among several targets, the score function f q (x) is naturally trained by learning to rank (Section 4).",
"The main contributions of this paper are as follows.",
"• We report empirical evidence of the negative effect of uninformative headlines on a commercial CQA service (Section 2).",
"We additionally show that our task can reduce uninformative headlines by using simple dictionary matching, which dramatically improves the average answer rate of questions by as much as 2.4 times.",
"• We propose an extractive headline generation method based on learning to rank for CQA (Section 4) that extracts the most informative substring (prefix of mid-sentence) from each question as its headline.",
"To the best of our knowledge, our work is the first attempt to address such a task from a practical standpoint, although there have been many related studies (Section 6).",
"Experimental results show that our method outperforms several baselines, including the dictionary-based method (Section 5).",
"• We create a dataset for our headline generation task (Section 3), where headline candidates extracted from questions are ranked by crowdsourcing with respect to \"headline-ness\", that is, whether or not each headline candidate is appropriate for the headline of the corresponding question.",
"Negative Effect of Uninformative Headlines We conducted A/B testing on the push notifications of smartphones in collaboration with Yahoo!",
"Chiebukuro, as shown in Figure 1 (b).",
"We first prepared a dictionary of typical first sentences that cause uninformative headlines.",
"This dictionary was manually selected from frequent first sentences and consists of 913 sentences including greetings such as \" (Good morning)\", \" Changed uninformative Unchanged uninformative Informative 0.75% 0.31% 0.45% Table 1 : Average answer rates of three question groups in A/B testing.",
"(Good afternoon)\", and \" (Nice to meet you)\", and fixed phrases such as \" (Can I ask you something)\", \" (Please tell me)\", and \" (Thank you in advance)\".",
"We assumed that a question with a prefix match in the dictionary has an uninformative prefix headline and classified such questions into an uninformative group.",
"For convenience, we also classified the other questions into an informative group, although they might include not so informative headlines.",
"We further randomly divided the uninformative group into two equal groups: changed and unchanged.",
"In the changed uninformative group, each headline is extracted as the prefix of the first (informative) sentence that does not match with the dictionary, which is the same as DictDel explained in Section 5.2.",
"The unchanged group remains in uninformative.",
"For comparison of these groups, we used the average answer rate over notified questions in each group as an evaluation measure, defined as Average answer rate = No.",
"of questions answered from the notification No.",
"of notified questions .",
"(2) Note that we use a percentage expression (%) for easy reading.",
"Table 1 shows the evaluation results of the A/B testing during a 1-month period (Feb. 2 -Mar.",
"4, 2018) , where about three million questions were sent to users.",
"Comparing the unchanged uninformative group with the informative group, we can see that the average answer rate of the uninformative questions, 0.31%, is actually lower than that of the informative questions, 0.45%.",
"Comparing the changed and unchanged uninformative groups, the average answer rate of the changed questions, 0.75%, is much higher than that of the unchanged questions, 0.31%.",
"This means that even a simple dictionary-based method can dramatically improve the quality of the uninformative headlines, i.e., by as much as 2.4 times.",
"We confirmed that the difference is statistically significant on a one-tailed Wilcoxon signed-rank test (p < 0.05).",
"Note that the average answer rate represents a conversion rate (or rate of target actions), which is more important than a click-through rate (or rate of initial actions).",
"The average answer rate is one of the most important indicators for a CQA service, while the click-through rate can be meaninglessly high if the service sends headlines that are fake or too catchy.",
"We should point out that low answer rates are sufficient for the service, since it has 44M users: i.e., each question has an average of 2.4 answers, as the service currently has 189M questions and 462M answers.",
"Dataset Creation We created a dataset for our headline generation task based on the Yahoo!",
"Chiebukuro dataset 2 , which is a dataset including questions and answers provided from a Japanese CQA service, Yahoo!",
"Chiebukuro.",
"We first prepared headline candidates from this dataset as in Section 3.1 and then conducted a crowdsourcing task specified in Section 3.2.",
"In Section 3.3, we report the results of the crowdsourcing task.",
"Preparation of Headline Candidates We extracted only questions from the Chiebukuro dataset and split each question into sentences by using punctuation marks (i.e., the exclamation (\" \"), question (\" \"), and full stop (\" \") marks).",
"We regarded 20 Japanese characters that are basically extracted from each sentence as a headline candidate x ∈ S(q) in Eq.",
"(1), since this setting is used for push notifications in the actual service in Figure 1 (b).",
"More specifically, the headline candidate is created as follows: 1.",
"If the sentence is the first one in the question, we extract the first 19 characters and put an ellipsis mark (\" \") at the end.",
"(a) Example of our crowdsourcing task.",
"Posted Question: Nice to meet you, I am a man in my 30s.",
"Please give me your advice on a pressing concern I have.",
"A dog kept in the next house barks from morning to night.",
"Neighbors have given the owner cautions against it, but there is no improvement.",
"This area has only private houses, not rented houses, so I cannot move out.",
"However, I will go crazy if I have to keep enduring this.",
"How can I effectively manage this problem?",
"...",
"Please give me your advice on a pressing concern ... (b) English translation of left example.",
"Figure 2 : Examples of (a) our crowdsourcing task and (b) its English translation.",
"In the case where the length of a candidate is less than 20 characters, we include some of the next sentence in order to maximize use of display space.",
"We included questions with more than five sentences for the purpose of efficiently collecting ranking information.",
"All told, we prepared 10,000 questions containing more than five headline candidates each.",
"Figure 2 shows an example of our crowdsourcing task (a) and its English translation (b).",
"This task involves a posted question and headline candidates corresponding to the question.",
"We asked workers to select the best candidate from options after reading the posted question.",
"A relative evaluation, where workers select the best candidate, was used instead of an absolute evaluation, where workers select a score from 0 to 10 for each candidate, because we wanted to obtain as accurate a headline as possible, and it might be difficult for workers to select an appropriate absolute score.",
"The workers were instructed as follows (English translation): Crowdsourcing Task Various candidate headlines are listed as options for a posted question in a Q&A service.",
"After reading the question, please select the best option from the list so that users can guess the content of the question and distinguish it from other ones.",
"Please remove uninformative ones such as greetings, self-introductions, and unspecific expressions.",
"We explain how to judge the appropriateness of each candidate by means of the example in Figure 2 .",
"After examining the posted question, we can assume that the most important content is \"he is annoyed by the barking of a dog kept in the next house\".",
"On the basis of this assumption, option 6 is the best one, since the headline \"A dog kept in the next house barks from morning\" is enough to help answerers deduce that \"the questioner is annoyed by the sound\".",
"Option 1 is inappropriate because although the answerers might be able to guess that \"the questioner cannot move out\", this matter is not the central one.",
"Option 2 is uninformative because it consists merely of greetings and self-introduction, and option 3, while a question sentence, is unspecific.",
"Option 4 enables answerers to guess that this question is related to an issue involving pets, but they cannot grasp the specific content.",
"Option 5 specifies a likely damage due to the trouble, but the reason (trouble) is more important for the answering than the result (damage).",
"Option 7 directly shows that \"the questioner is annoyed and wants some advice\", but answerers cannot understand why he/she is annoyed.",
"The detailed implementation of our task is as follows.",
"First we randomly sorted the candidates of each question (shown in Figure 2 (a)) to avoid position bias by the workers.",
"We included ten actual questions and a dummy question so that workers would always have to answer one dummy question per every ten actual questions.",
"A dummy question is a question with a clear answer inserted to eliminate fraud workers (i.e., workers who randomly select answers without actually reading them).",
"Each question was answered by ten workers so each headline candidate had a vote score from 0 to 10 representing whether or not the candidate was appropriate for a headline.",
"This task took nine days and was answered by 1,558 workers.",
"As a result, our dataset consists of 10,000 questions, each of which has more than five headline candidates with accompanying \"headline-ness\" scores.",
"Analysis of Crowdsourcing Results We analyzed our dataset to determine how much room for improvement our task has compared to the prefix headline.",
"It is well known that the first sentence can be a strong baseline in many summarization tasks, so the prefix headline is also expected to perform well.",
"Figure 3 shows the statistical information based on sentence position that includes the (a) ratio of the true best (most voted) candidates, (b) average votes, and (c) average rank (in order of votes) over the candidates at each sentence position.",
"Looking at the true best candidates (a), the ratio 61.8% for the 1st sentence clarifies the effectiveness of the prefix headline, as expected.",
"Conversely, we still have room for improvement for the prefix headline up to 38.2%.",
"Our goal is to improve the uninformative headlines of 38.2% while keeping the remaining 61.8% unchanged.",
"The other two figures (b) and (c) also support the above discussion.",
"Furthermore, we qualitatively checked the crowdsourcing quality.",
"Workers successfully eliminated uninformative candidates including greetings and self-introductions, while one or two workers sometimes chose ones that included a fixed phrase such as \"Please tell me ...\".",
"This is probably because workers had different criteria regarding the \"unspecific expressions\" described in the instructions.",
"Since we cannot enumerate all of the concrete bad examples, we ignore this phenomenon with the expectation that a learning algorithm will reduce its negative effect.",
"Proposed Method In this section, we explain how to construct a headline generation model from the dataset presented in Section 3.",
"We took a ranking approach, i.e., learning to rank, for our task, rather than a simple regression one, since estimating absolute scores is not required for our purpose.",
"Even if two headline candidates (of different questions) have the same expression, their votes can be significantly different since the votes in our dataset are based on relative evaluation.",
"For example, the best candidate for Figure 2 was No.",
"6, but it might not be selected in other questions such as \"A dog kept in the next house barks from morning.",
"Does anybody know why dogs generally want to bark?\".",
"Learning to rank is an application of machine learning that is typically used for ranking models in information retrieval systems.",
"The ranking models are basically learned from a supervised dataset consisting of triples (q, x, y), where q is a user's query, x is a document corresponding to q, and y is a relevance score of x with respect to q.",
"In this work, we formalize our task by regarding q, x, and y, as a posted question, a headline candidate, and a voted score in our dataset, respectively.",
"We used a pairwise ranking method that is also implemented as an instance of the well-known SVM tools LIBLINEAR and LIBSVM (Kuo et al., 2014) .",
"We used a linear model based on LIBLINEAR, an L2-regularized L2-loss linear rankSVM, for the experiments.",
"Let D be a dataset that consists of triples including a posted question q, a headline candidate x, and a voted score y, i.e., (q, x, y) ∈ D. In the pairwise ranking method, we train a ranking model as a binary classifier that determines whether the condition y i > y j is true or false for two candidates x i and x j in the same question (q i = q j ).",
"Specifically, we first define the index pairs of positive examples by P = {(i, j) | q i = q j , y i > y j , (x i , y i , q i ) ∈ D, (x j , y j , q j ) ∈ D}.",
"Note that we do not need to consider the negative examples N = {(j, i) | (i, j) ∈ P } since they yield the same formula as P in the optimization process.",
"The training of the pairwise ranking method is achieved by solving the following optimization problem using the set P of the index pairs: min w 1 2 w ⊤ w + C ∑ (i,j)∈P ℓ(w ⊤x i − w ⊤x j ), (3) where w is a weight vector to be learned,x i is a feature vector extracted from a headline candidate x, and C is the regularization parameter.",
"The function ℓ is a squared hinge loss, which is defined as ℓ(d) = max(0, 1 − d) 2 .",
"Finally, we define the score function in Eq.",
"(1) as f q (x) = w ⊤x , wherex can be created by using q as well as x.",
"This score means the relative \"headline-ness\" of x.",
"Experiments Basic Settings The basic settings of the experiments are as follows.",
"We split our dataset into training and test sets consisting of 9,000 and 1,000 examples, respectively.",
"We used an implementation 3 based on LIBLIN-EAR for training our ranking model, i.e., a linear L2-regularized L2-loss rankSVM model , as described in Section 4.",
"The regularization parameter was optimized by cross validation and set as C = 0.125.",
"The feature vector for a headline candidate consists of three kinds of features: bag-of-words, embedding, and position information.",
"The bag-of-words feature is a sparse vector of 30,820 dimensions based on the tf-idf scores of nouns, verbs, interjections, conjunctions, adverbs, and adjectives in a candidate, where we used a Japanese morphological analyzer, MeCab 4 (Kudo et al., 2004) , with a neologism dictionary, NEologd 5 (Toshinori Sato and Okumura, 2017).",
"The embedding feature is a dense vector of 100 dimensions based on a doc2vec model (Le and Mikolov, 2014) trained with all 3M sentences in the Chiebukuro dataset using the Gensim tool 6 .",
"The position feature is a binary vector of ten dimensions, where each dimension represents the coarse position (or coverage) of a headline candidate for a question.",
"Specifically, we equally split a question (character sequence) into ten parts and set one to each dimension if and only if the corresponding part overlaps a candidate.",
"For example, candidate No.",
"2 in Figure 2 had a position feature (1, 1, 0, · · · , 0), since the candidate covers the first 2/10 of the whole question.",
"Similarly, No.",
"6 and No.",
"3 had (0, 0, 1, 1, 0, · · · , 0) and (0, · · · , 0, 1), respectively.",
"For constructing the feature vector of each headline candidate, we used the previous and next candidates in sentence order, in addition to the target candidate.",
"This is based on the idea that near candidates might have useful information for the target candidate.",
"Finally, we prepared each feature vector by concatenating nine feature vectors, i.e., the above three kinds of features for three candidates, and normalizing them.",
"Compared Methods We compared our method, MLRank, with the baselines listed below.",
"Prefix, DictDel, and Random are simple baselines, while Prefix and DictDel are practically strong.",
"ImpTfidf, SimTfidf, SimEmb, and LexRank are unsupervised baselines, and SVM and SVR are supervised ones.",
"• Prefix: Selects the first candidate in sentence order.",
"• DictDel: Selects the first (informative) candidate that does not match in the dictionary of uninformative headlines (Section 2).",
"• Random: Randomly selects a candidate.",
"• ImpTfidf: Selects the most important candidate with the highest tf-idf value, where a tf-idf value is calculated by the sum of the elements in a bag-of-words feature (described in Section 5.1).",
"• SimTfidf: Selects the most similar candidate to the original question, which is calculated by the cosine similarity between the bag-of-words features (in Section 5.1) of each candidate and the question.",
"• SimEmb: An embedding-based variation of SimTfidf with embedding features (in Section 5.1).",
"• LexRank: Selects the candidate with the highest score based on LexRank 7 (Erkan and Radev, 2004) , which is a widely used unsupervised extractive summarization method based on the PageRank algorithm.",
"The graph expression of each question was constructed on the basis of cosine similarity of the tf-idf vectors corresponding to candidates.",
"• SVM: Selects the candidate with the highest confidence based on a model learned as a classification task, where candidates with nonzero votes were labeled as positive.",
"This setting was the best in our preliminary experiments.",
"We used the L2-regularized L2-loss support vector classification model (C = 0.0156) in LIBLINEAR.",
"The other settings were the same as those described in Section 5.1.",
"• SVR: Selects the candidate with the highest predicted votes based on a model learned as a regression task, where the target variable is the number of votes.",
"We used the L2-regularized L2-loss support vector regression model (C = 0.0625) in LIBLINEAR.",
"The other settings were the same as above.",
"• MLRank: Proposed method described in Section 4.",
"Evaluation Measures We defined three evaluation measures for evaluating each method on our headline generation task.",
"Change Rate from Prefix Headline We measured how much each method changed the default prefix headline in order to determine the effect of application to an actual CQA service.",
"We defined this measure as change rate from the prefix headline, as follows: Change rate = No.",
"of questions where the best candidate is not the prefix headline No.",
"of all questions .",
"Clearly, the change rate of the default method that selects the prefix headline is 0%.",
"If the value is small, the effect on the service will be small, but if the value is higher than the ideal change rate of 38.2% (Section 3), there can be side effects even if the average result is good.",
"A higher change rate up to the ideal rate is desirable from a practical standpoint.",
"Winning Rate against Prefix Headline We measured how much each method won against the prefix headline to directly assess the quality of changed headlines.",
"We defined this measure as winning rate against the prefix headline, as follows: Winning rate = No.",
"of questions where the best candidate got more votes than the prefix headline No.",
"of questions where the best candidate is not the prefix headline .",
"(5) We did not consider the first candidate (in sentence order) selected by each method, which is the same as the prefix headline, since they obviously have the same number of votes.",
"Average Votes We measured how appropriate the candidates selected by each method are in order to determine the overall performance.",
"We defined this measure as average votes, as follows: Average votes = Sum of votes for the best candidates for all questions No.",
"of questions .",
"Note that the average votes score is different from the average votes score for position (Figure 3(b) ) in that the former is the average over the selected candidates while the latter is the average over the candidates at a sentence position.",
"This measure is related to (normalized) discounted cumulative gain (DCG), which is widely used as an evaluation measure of ranking models.",
"We often use DCG@k for evaluating top-k rankings, and the above definition acutally corresponds to DCG@1.",
"According to a well-known paper (Järvelin and Kekäläinen, 2002) in the information retrieval field, DCG is appropriate for graded-relevance judgments like our task, while precision (described below) is appropriate for binary-relevance ones.",
"Average votes is expected to be more appropriate than precision for our task because we want \"averagely better headlines than default ones\" rather than \"best ones\" from a practical standpoint.",
"Precision Precision is a widely used evaluation measure for classification tasks, and we added it to support an evaluation based on average votes.",
"We defined it with respect to the best candidate, i.e., precision@1, as follows: Precision = No.",
"of questions where the best candidate had the maximum votes No.",
"of questions .",
"(7) Table 2 shows examples of headlines generated by the prefix method, Prefix, and the proposed method, MLRank.",
"Looking at the first example, we can see that our method successfully eliminated the selfintroduction phrase (\"I am a 27-year-old woman\").",
"The headline (right) of our method allows answerers to know that the questioner is discouraged about how to encounter men from the phrase \"little chance of new encounters with men\", while the headline (left) of the prefix method lacks this important clue.",
"The second and third examples show similar effects to the first example.",
"In the second one, although there are few clues about what the question is with the prefix method, our method correctly included the important clue (\"honeymoon\").",
"In the third one, our method appropriately eliminated the uninformative long phrase (\"I am sorry if the category is wrong\"), which is not a frequent fixed phrase.",
"The fourth example shows a slightly challenging case, where both headlines make it difficult to understand the question.",
"However, the headline of our method included the term \"winning bidder\", so at least the answerer can assume that the question is about some sort of auction trouble.",
"The fifth example is a clearly successful result, where our method extracted the main question point about \"welfare pension\" as a headline.",
"These results qualitatively demonstrate the effectiveness of our method.",
"Results Qualitative Analysis Quantitative Analysis We compared our method MLRank with the baselines in Section 5.2 on the headline generation task for our dataset in Section 3.",
"Table 3 shows the evaluation results based on the change rates, winning rates, average votes, and precision.",
"Looking at the average votes and precision, which represent the overall performances, our method MLRank clearly performed the best among all methods.",
"We confirmed that Table 3 : Evaluation results of our headline generation task for proposed method MLRank and baselines.",
"the relative improvement of the average votes of our method MLRank against every baseline including the prefix method Prefix is statistically significant on the basis of a one-tailed Wilcoxon signed-rank test (p < 0.01).",
"The change and winning rates of our method are 9.9% and 94.9%, respectively.",
"This means that our method detected 9.9% of the uninformative headlines and improved them with the high accuracy of 94.9%.",
"In other words, our method could successfully improve the overall performance while simultaneously avoiding any negative side effects.",
"The ideal results (Ref) based on correct labels suggest that our method still has room for improvement, especially for the change rate.",
"The results of the other baselines are as follows.",
"Not surprisingly, the prefix method Prefix performed well.",
"This is consistent with the fact that the first sentence can be a good summary in many summarization tasks.",
"The random method Random performed the worst, also as expected.",
"The dictionarybased deletion method DictDel was relatively useful, although the change rate was small.",
"The reason the winning rate of DictDel is relatively low compared with MLRank is that there are some cases where a combination of uninformative expressions can yield likely clues.",
"For example, the self-introduction \"I am a newbie of this forum\" itself is basically uninformative for a question, but a combination with additional information such as \"I am a newbie of this forum.",
"Where can I change the password ...\" can be more informative than only the additional information \"... Where can I change the password since I forgot it after ...\" because the combination specifies the target site by the expression \"this forum\".",
"The unsupervised methods, SimTfidf, SimTfidf, SimEmb, and LexRank, which are widely used for summarization tasks, performed worse than the prefix method Prefix.",
"Although the change rates are higher than our method, the winning rates are lower.",
"In other words, they yielded many bad headlines.",
"These results suggest that supervised learning specialized to the target task would be required.",
"Comparing the important sentence extraction method ImpTfidf and the similarity-based summarization method SimTfidf, we found that SimTfidf performed better.",
"This implies that the content information of each question is useful for our headline generation task, as is the case with other summarization tasks.",
"The similarity-based method SimEmb with embeddings performed worse than our expectation.",
"The reason seems to be that it was difficult to obtain meaningful document embeddings from long questions.",
"The graph-based method LexRank had a similar performance to SimTfidf, because LexRank tends to select a candidate similar to the question when only one candidate was selected.",
"The supervised methods, SVM and SVR, performed relatively well compared to the unsupervised methods, but they did not outperform the strong simple baselines, Prefix and DictDel, nor our method MLRank.",
"These results support the appropriateness of our approach.",
"Related Work In this section, we briefly explain several related studies from two aspects: headline generation task and CQA data.",
"As discussed below, our work is the first attempt to address an extractive headline generation task for a CQA service based on learning to rank the substrings of a question.",
"After Rush et al.",
"(2015) proposed a neural headline generation model, there have been many studies on the same headline generation task (Takase et al., 2016; Chopra et al., 2016; Kiyono et al., 2017; Ayana et al., 2017; Raffel et al., 2017) .",
"However, all of them are abstractive methods that can yield erroneous output, and the training for them requires a lot of paired data, i.e., news articles and headlines.",
"There have also been several classical studies based on nonneural approaches to headline generation (Woodsend et al., 2010; Alfonseca et al., 2013; Colmenares et al., 2015) , but they basically addressed sentence compression after extracting important linguistic units such as phrases.",
"In other words, their methods can still yield erroneous output, although they would be more controllable than neural models.",
"One exception is the work of Alotaiby (2011) , where fixed-sized substrings were considered for headline generation.",
"Although that approach is similar to ours, Alotaiby only considered an unsupervised method based on similarity to the original text (almost the same as SimTfidf in Section 5.2), in contrast to our proposal based on learning to rank.",
"This implies that Alotaiby's method will also not perform well for our task, as shown in Section 5.4.",
"There have been several studies on extractive summarization (Kobayashi et al., 2015; Yogatama et al., 2015) based on sentence embeddings, but they were basically developed for extracting multiple sentences, which means that these methods are almost the same as SimEmb in Section 5.2 for our purpose, i.e., extraction of the best candidate.",
"This also implies that they will not be suitable for our task.",
"Furthermore, recent sophisticated neural models for extractive summarization (Cheng and Lapata, 2016; Nallapati et al., 2017) basically require large-scale paired data (e.g., article-headline) to automatically label candidates, as manual annotation is very costly.",
"However, such paired data do not always exist for real applications, as in our task described in Section 1.",
"There have been many studies using CQA data, but most of them are different from our task, i.e., dealing with answering questions (Surdeanu et al., 2008; Celikyilmaz et al., 2009; Bhaskar, 2013; Nakov et al., 2017) , retrieving similar questions (Lei et al., 2016; Romeo et al., 2016; Nakov et al., 2017) , and generating questions (Heilman and Smith, 2010) .",
"Tamura et al. (2005) focused on extracting a core sentence and identifying the question type as classification tasks for answering multiple-sentence questions.",
"Although their method is useful to retrieve important information, we cannot directly use it since our task requires shorter expressions for headlines than sentences.",
"In addition, they used a support vector machine as a classifier, which is almost the same as SVM in Section 5.2, and it is not expected to be suitable for our task, as shown in Section 5.4.",
"The work of Ishigaki et al. (2017) is the most related one in that they summarized lengthy questions by using both abstractive and extractive approaches.",
"Their work is promising because our task is regarded as the construction of short summaries, but the training of their models requires a lot of paired data consisting of questions and their headlines, which means that their method cannot be applied to our task.",
"Conclusion We proposed an extractive headline generation method based on learning to rank for CQA that extracts the most informative substring in each question as its headline.",
"We created a dataset for our task, where headline candidates in each question are ranked using crowdsourcing.",
"Our method outperformed several baselines, including a prefix-based method, which is widely used for cases where the display area is limited, such as the push notifications on smartphones.",
"The dataset created for our headline generation task will be made publicly available.",
"Although our task is basically designed for extractive summarization, this dataset can also be used for abstractive summarization as side information for training abstractive models.",
"In future work, we will investigate how effectively our method can perform in practical situations, e.g., push notifications.",
"In addition, we will consider how to improve the change rate of our method while keeping its winning rate, and how to create a useful dataset even when removing the length limitation."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"5.1",
"5.2",
"5.3",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Negative Effect of Uninformative Headlines",
"Dataset Creation",
"Preparation of Headline Candidates",
"Crowdsourcing Task",
"Analysis of Crowdsourcing Results",
"Proposed Method",
"Basic Settings",
"Compared Methods",
"Evaluation Measures",
"Related Work",
"Conclusion"
]
}
|
GEM-SciDuet-train-50#paper-1080#slide-11
|
Features for Ranking Model
|
Bag-of-Words: 30,820 dimension sparse vector based on tf-idf
Embedding: 100 dimension dense vector based on doc2vec
Position: 10 dimension binary vector representing candidate position
Copyright (C) 2019 Yahoo Japan Corporation. All Rights Reserved.
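The position feature described on this slide (split the question into ten equal parts, set a dimension to 1 iff the candidate overlaps that part) can be sketched as a small function. This is a minimal illustration of the paper's description; the function name and arguments are assumed, not taken from any released code.

```python
def position_feature(question_len, start, end, dims=10):
    """Binary vector: dimension d is 1 iff the candidate span
    [start, end) overlaps the d-th tenth of the question."""
    vec = [0] * dims
    for d in range(dims):
        part_start = question_len * d / dims
        part_end = question_len * (d + 1) / dims
        # Half-open interval overlap test.
        if start < part_end and end > part_start:
            vec[d] = 1
    return vec

# A candidate covering the first 2/10 of a 100-character question
# yields [1, 1, 0, ..., 0], matching the paper's example.
print(position_feature(100, 0, 20))
```

The full feature vector would then concatenate this with the tf-idf and doc2vec vectors, for the previous, current, and next candidates, as the paper describes.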
|
Bag-of-Words: 30,820 dimension sparse vector based on tf-idf
Embedding: 100 dimension dense vector based on doc2vec
Position: 10 dimension binary vector representing candidate position
Copyright (C) 2019 Yahoo Japan Corporation. All Rights Reserved.
|
[] |
GEM-SciDuet-train-50#paper-1080#slide-12
|
1080
|
Extractive Headline Generation Based on Learning to Rank for Community Question Answering
|
User-generated content such as the questions on community question answering (CQA) forums does not always come with appropriate headlines, in contrast to the news articles used in various headline generation tasks. In such cases, we cannot use paired supervised data, e.g., pairs of articles and headlines, to learn a headline generation model. To overcome this problem, we propose an extractive headline generation method based on learning to rank for CQA that extracts the most informative substring from each question as its headline. Experimental results show that our method outperforms several baselines, including a prefix-based method, which is widely used in real services.
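The learning-to-rank step in the abstract can be illustrated with a minimal pairwise sketch. The paper trains an L2-regularized L2-loss linear rankSVM with LIBLINEAR; here, plain gradient descent stands in for that solver, and all names are assumptions for illustration only.

```python
def train_pairwise_ranker(pairs, dim, C=0.125, lr=0.05, epochs=200):
    """Minimize 0.5*||w||^2 + C * sum over pairs of the squared hinge
    loss max(0, 1 - w.(xi - xj))^2, where each pair (xi, xj) means
    candidate xi was voted above xj within the same question."""
    w = [0.0] * dim
    for _ in range(epochs):
        grad = list(w)  # gradient of the 0.5*||w||^2 term
        for xi, xj in pairs:
            diff = [a - b for a, b in zip(xi, xj)]
            margin = sum(wk * dk for wk, dk in zip(w, diff))
            if margin < 1:  # squared hinge is active
                coef = -2.0 * C * (1.0 - margin)
                grad = [g + coef * dk for g, dk in zip(grad, diff)]
        w = [wk - lr * g for wk, g in zip(w, grad)]
    return w

def score(w, x):
    """Headline-ness score f_q(x) = w.x; the candidate with the
    highest score in a question is extracted as its headline."""
    return sum(wk * xk for wk, xk in zip(w, x))
```

Given feature vectors for all candidates of a question, extraction is then simply `max(candidates, key=lambda x: score(w, x))`.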
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260,
261,
262,
263
],
"paper_content_text": [
"Introduction Community question answering (CQA) is a service where users can post their questions and answer the questions from other users.",
"The quality of a CQA service is inevitably linked to how many questions are answered in a short amount of time.",
"To this end, the headline of a posted question plays a key role, as it is the first thing users see in a question list or push notification on a smartphone.",
"However, questions in a CQA service do not always have appropriate headlines because the questions are written by various users who basically do not have any specialized knowledge in terms of writing such content, in contrast to news articles written by professional editors.",
"In fact, the biggest CQA service in Japan, Yahoo! Chiebukuro, does not even provide an input field for headlines in the submission form of questions, as general users do not have enough patience and tend not to post questions if even just one required field is added to the form.",
"This service alternatively uses the prefix of a question as its headline, as in Figure 1(a), where the headline \" (Nice to meet you. Thank you in advance. At work ...)\" is created from the prefix of the content.",
"Obviously, this headline is uninformative because of the lack of actual content, which is related to an initial unrequited love of a woman in the workplace.",
"Figure 1(b) shows how ineffective such an uninformative headline is for a smartphone push notification, where users would have practically zero motivation to click on it and answer since they cannot imagine what kind of question is being asked.",
"This negative effect has been confirmed on a commercial CQA service (Section 2).",
"In this work, we take an extractive approach for improving uninformative headlines.",
"Although there have recently been many studies on abstractive headline generation (as described in Section 6), we do not follow any of these approaches because the content we deal with has no correct headlines and also the abstractive methods can yield erroneous output.",
"This latter issue is important in practical terms because correct output is critical for a commercial service.",
"If by some chance an erroneous headline, or one including politically incorrect phrases, is sent to all users, the service's credibility can be lost instantly.",
"Therefore, we formalize our task as an extraction problem of a fixed-length substring from a question as its headline.",
"In this setting, we can assume that outputs never include such errors caused by the service, as the outputs are substrings of user-posted questions.",
"Note that they have no coherence errors in selecting multiple sentences as in normal extractive summarization tasks.",
"While it is true that these outputs might contain inappropriate expressions authored by users, this type of error is beyond our scope since it is a different inevitable problem.",
"Furthermore, the situation of \"the service generated an inappropriate expression by itself\" is significantly worse than the situation of \"a user posted an inappropriate question to the service, and the service displayed it\".",
"Therefore, it is difficult to directly use abstractive methods for commercial services from a business standpoint.",
"Our approach involves preparing headline candidates and ranking them.",
"The formal description is as follows.",
"Let q be a target question to be translated into a headline.",
"We first prepare a set S(q) of headline candidates from the question q.",
"Note that the set S(q) is restricted to a set of fixed-length substrings given a length n, i.e., S(q) ⊆ {x | x ⪯ q, |x| = n}, where x ⪯ q means that x is a substring of q.",
"Then we extract the best headline that maximizes a score function f q (x), which represents the \"headline-ness\" score of a candidate x ∈ S(q) with respect to the target question q, as follows: argmax x∈S(q) f q (x).",
"(1) To ensure simplicity of implementation and understandability from users, we use a set of the fixed-length prefixes of all sentences in a question q as the candidate set S(q) (Section 3).",
"Because the problem is to select the best candidate from among several targets, the score function f q (x) is naturally trained by learning to rank (Section 4).",
"The main contributions of this paper are as follows.",
"• We report empirical evidence of the negative effect of uninformative headlines on a commercial CQA service (Section 2).",
"We additionally show that our task can reduce uninformative headlines by using simple dictionary matching, which dramatically improves the average answer rate of questions by as much as 2.4 times.",
"• We propose an extractive headline generation method based on learning to rank for CQA (Section 4) that extracts the most informative substring (prefix of mid-sentence) from each question as its headline.",
"To the best of our knowledge, our work is the first attempt to address such a task from a practical standpoint, although there have been many related studies (Section 6).",
"Experimental results show that our method outperforms several baselines, including the dictionary-based method (Section 5).",
"• We create a dataset for our headline generation task (Section 3), where headline candidates extracted from questions are ranked by crowdsourcing with respect to \"headline-ness\", that is, whether or not each headline candidate is appropriate for the headline of the corresponding question.",
"Negative Effect of Uninformative Headlines We conducted A/B testing on the push notifications of smartphones in collaboration with Yahoo! Chiebukuro, as shown in Figure 1(b).",
"We first prepared a dictionary of typical first sentences that cause uninformative headlines.",
"This dictionary was manually selected from frequent first sentences and consists of 913 sentences including greetings such as \" (Good morning)\", \" (Good afternoon)\", and \" (Nice to meet you)\", and fixed phrases such as \" (Can I ask you something)\", \" (Please tell me)\", and \" (Thank you in advance)\".",
"Table 1 (Average answer rates of three question groups in A/B testing): changed uninformative 0.75%, unchanged uninformative 0.31%, informative 0.45%.",
"We assumed that a question with a prefix match in the dictionary has an uninformative prefix headline and classified such questions into an uninformative group.",
"For convenience, we also classified the other questions into an informative group, although they might include not so informative headlines.",
"We further randomly divided the uninformative group into two equal groups: changed and unchanged.",
"In the changed uninformative group, each headline is extracted as the prefix of the first (informative) sentence that does not match with the dictionary, which is the same as DictDel explained in Section 5.2.",
"The unchanged group remains in uninformative.",
"For comparison of these groups, we used the average answer rate over notified questions in each group as an evaluation measure, defined as: Average answer rate = (No. of questions answered from the notification) / (No. of notified questions). (2)",
"Note that we use a percentage expression (%) for easy reading.",
"Table 1 shows the evaluation results of the A/B testing during a 1-month period (Feb. 2 - Mar. 4, 2018), where about three million questions were sent to users.",
"Comparing the unchanged uninformative group with the informative group, we can see that the average answer rate of the uninformative questions, 0.31%, is actually lower than that of the informative questions, 0.45%.",
"Comparing the changed and unchanged uninformative groups, the average answer rate of the changed questions, 0.75%, is much higher than that of the unchanged questions, 0.31%.",
"This means that even a simple dictionary-based method can dramatically improve the quality of the uninformative headlines, i.e., by as much as 2.4 times.",
"We confirmed that the difference is statistically significant on a one-tailed Wilcoxon signed-rank test (p < 0.05).",
"Note that the average answer rate represents a conversion rate (or rate of target actions), which is more important than a click-through rate (or rate of initial actions).",
"The average answer rate is one of the most important indicators for a CQA service, while the click-through rate can be meaninglessly high if the service sends headlines that are fake or too catchy.",
"We should point out that low answer rates are sufficient for the service, since it has 44M users: i.e., each question has an average of 2.4 answers, as the service currently has 189M questions and 462M answers.",
"Dataset Creation We created a dataset for our headline generation task based on the Yahoo! Chiebukuro dataset, which is a dataset including questions and answers provided from the Japanese CQA service Yahoo! Chiebukuro.",
"We first prepared headline candidates from this dataset as in Section 3.1 and then conducted a crowdsourcing task specified in Section 3.2.",
"In Section 3.3, we report the results of the crowdsourcing task.",
"Preparation of Headline Candidates We extracted only questions from the Chiebukuro dataset and split each question into sentences by using punctuation marks (i.e., the exclamation (\" \"), question (\" \"), and full stop (\" \") marks).",
"We regarded 20 Japanese characters that are basically extracted from each sentence as a headline candidate x ∈ S(q) in Eq. (1), since this setting is used for push notifications in the actual service in Figure 1(b).",
"More specifically, the headline candidate is created as follows: 1. If the sentence is the first one in the question, we extract the first 19 characters and put an ellipsis mark (\" \") at the end.",
"(a) Example of our crowdsourcing task.",
"Posted Question: Nice to meet you, I am a man in my 30s.",
"Please give me your advice on a pressing concern I have.",
"A dog kept in the next house barks from morning to night.",
"Neighbors have given the owner cautions against it, but there is no improvement.",
"This area has only private houses, not rented houses, so I cannot move out.",
"However, I will go crazy if I have to keep enduring this.",
"How can I effectively manage this problem?",
"...",
"Please give me your advice on a pressing concern ...",
"Figure 2 : Examples of (a) our crowdsourcing task and (b) its English translation.",
"In the case where the length of a candidate is less than 20 characters, we include some of the next sentence in order to maximize use of display space.",
"We included questions with more than five sentences for the purpose of efficiently collecting ranking information.",
"All told, we prepared 10,000 questions containing more than five headline candidates each.",
"Figure 2 shows an example of our crowdsourcing task (a) and its English translation (b).",
"This task involves a posted question and headline candidates corresponding to the question.",
"We asked workers to select the best candidate from options after reading the posted question.",
"A relative evaluation, where workers select the best candidate, was used instead of an absolute evaluation, where workers select a score from 0 to 10 for each candidate, because we wanted to obtain as accurate a headline as possible, and it might be difficult for workers to select an appropriate absolute score.",
"The workers were instructed as follows (English translation): Various candidate headlines are listed as options for a posted question in a Q&A service.",
"After reading the question, please select the best option from the list so that users can guess the content of the question and distinguish it from other ones.",
"Please remove uninformative ones such as greetings, self-introductions, and unspecific expressions.",
"We explain how to judge the appropriateness of each candidate by means of the example in Figure 2 .",
"After examining the posted question, we can assume that the most important content is \"he is annoyed by the barking of a dog kept in the next house\".",
"On the basis of this assumption, option 6 is the best one, since the headline \"A dog kept in the next house barks from morning\" is enough to help answerers deduce that \"the questioner is annoyed by the sound\".",
"Option 1 is inappropriate because although the answerers might be able to guess that \"the questioner cannot move out\", this matter is not the central one.",
"Option 2 is uninformative because it consists merely of greetings and self-introduction, and option 3, while a question sentence, is unspecific.",
"Option 4 enables answerers to guess that this question is related to an issue involving pets, but they cannot grasp the specific content.",
"Option 5 specifies a likely damage due to the trouble, but the reason (trouble) is more important for the answering than the result (damage).",
"Option 7 directly shows that \"the questioner is annoyed and wants some advice\", but answerers cannot understand why he/she is annoyed.",
"The detailed implementation of our task is as follows.",
"First we randomly sorted the candidates of each question (shown in Figure 2 (a)) to avoid position bias by the workers.",
"We included ten actual questions and a dummy question so that workers would always have to answer one dummy question per every ten actual questions.",
"A dummy question is a question with a clear answer inserted to eliminate fraud workers (i.e., workers who randomly select answers without actually reading them).",
"Each question was answered by ten workers so each headline candidate had a vote score from 0 to 10 representing whether or not the candidate was appropriate for a headline.",
"This task took nine days and was answered by 1,558 workers.",
"As a result, our dataset consists of 10,000 questions, each of which has more than five headline candidates with accompanying \"headline-ness\" scores.",
"Analysis of Crowdsourcing Results We analyzed our dataset to determine how much room for improvement our task has compared to the prefix headline.",
"It is well known that the first sentence can be a strong baseline in many summarization tasks, so the prefix headline is also expected to perform well.",
"Figure 3 shows the statistical information based on sentence position that includes the (a) ratio of the true best (most voted) candidates, (b) average votes, and (c) average rank (in order of votes) over the candidates at each sentence position.",
"Looking at the true best candidates (a), the ratio 61.8% for the 1st sentence clarifies the effectiveness of the prefix headline, as expected.",
"Conversely, we still have room for improvement for the prefix headline up to 38.2%.",
"Our goal is to improve the uninformative headlines of 38.2% while keeping the remaining 61.8% unchanged.",
"The other two figures (b) and (c) also support the above discussion.",
"Furthermore, we qualitatively checked the crowdsourcing quality.",
"Workers successfully eliminated uninformative candidates including greetings and self-introductions, while one or two workers sometimes chose ones that included a fixed phrase such as \"Please tell me ...\".",
"This is probably because workers had different criteria regarding the \"unspecific expressions\" described in the instructions.",
"Since we cannot enumerate all of the concrete bad examples, we ignore this phenomenon with the expectation that a learning algorithm will reduce its negative effect.",
"Proposed Method In this section, we explain how to construct a headline generation model from the dataset presented in Section 3.",
"We took a ranking approach, i.e., learning to rank, for our task, rather than a simple regression one, since estimating absolute scores is not required for our purpose.",
"Even if two headline candidates (of different questions) have the same expression, their votes can be significantly different since the votes in our dataset are based on relative evaluation.",
"For example, the best candidate for Figure 2 was No. 6, but it might not be selected in other questions such as \"A dog kept in the next house barks from morning. Does anybody know why dogs generally want to bark?\".",
"Learning to rank is an application of machine learning that is typically used for ranking models in information retrieval systems.",
"The ranking models are basically learned from a supervised dataset consisting of triples (q, x, y), where q is a user's query, x is a document corresponding to q, and y is a relevance score of x with respect to q.",
"In this work, we formalize our task by regarding q, x, and y, as a posted question, a headline candidate, and a voted score in our dataset, respectively.",
"We used a pairwise ranking method that is also implemented as an instance of the well-known SVM tools LIBLINEAR and LIBSVM (Kuo et al., 2014) .",
"We used a linear model based on LIBLINEAR, an L2-regularized L2-loss linear rankSVM, for the experiments.",
"Let D be a dataset that consists of triples including a posted question q, a headline candidate x, and a voted score y, i.e., (q, x, y) ∈ D. In the pairwise ranking method, we train a ranking model as a binary classifier that determines whether the condition y_i > y_j is true or false for two candidates x_i and x_j in the same question (q_i = q_j).",
"Specifically, we first define the index pairs of positive examples by P = {(i, j) | q_i = q_j, y_i > y_j, (q_i, x_i, y_i) ∈ D, (q_j, x_j, y_j) ∈ D}.",
"Note that we do not need to consider the negative examples N = {(j, i) | (i, j) ∈ P}, since they yield the same formula as P in the optimization process.",
"The training of the pairwise ranking method is achieved by solving the following optimization problem using the set P of the index pairs: min_w (1/2) w⊤w + C ∑_{(i,j)∈P} ℓ(w⊤x̄_i − w⊤x̄_j), (3) where w is a weight vector to be learned, x̄_i is the feature vector extracted from the headline candidate x_i, and C is the regularization parameter.",
"The function ℓ is a squared hinge loss, defined as ℓ(d) = max(0, 1 − d)².",
"Finally, we define the score function in Eq. (1) as f_q(x) = w⊤x̄, where x̄ can be created by using q as well as x.",
"This score means the relative \"headline-ness\" of x.",
"Experiments Basic Settings The basic settings of the experiments are as follows.",
"We split our dataset into training and test sets consisting of 9,000 and 1,000 examples, respectively.",
"We used an implementation based on LIBLINEAR for training our ranking model, i.e., a linear L2-regularized L2-loss rankSVM model, as described in Section 4.",
"The regularization parameter was optimized by cross validation and set as C = 0.125.",
"The feature vector for a headline candidate consists of three kinds of features: bag-of-words, embedding, and position information.",
"The bag-of-words feature is a sparse vector of 30,820 dimensions based on the tf-idf scores of nouns, verbs, interjections, conjunctions, adverbs, and adjectives in a candidate, where we used a Japanese morphological analyzer, MeCab (Kudo et al., 2004), with a neologism dictionary, NEologd (Sato and Okumura, 2017).",
"The embedding feature is a dense vector of 100 dimensions based on a doc2vec model (Le and Mikolov, 2014) trained with all 3M sentences in the Chiebukuro dataset using the Gensim tool.",
"The position feature is a binary vector of ten dimensions, where each dimension represents the coarse position (or coverage) of a headline candidate for a question.",
"Specifically, we equally split a question (character sequence) into ten parts and set one to each dimension if and only if the corresponding part overlaps a candidate.",
"For example, candidate No. 2 in Figure 2 had a position feature (1, 1, 0, · · · , 0), since the candidate covers the first 2/10 of the whole question.",
"Similarly, No. 6 and No. 3 had (0, 0, 1, 1, 0, · · · , 0) and (0, · · · , 0, 1), respectively.",
"For constructing the feature vector of each headline candidate, we used the previous and next candidates in sentence order, in addition to the target candidate.",
"This is based on the idea that near candidates might have useful information for the target candidate.",
"Finally, we prepared each feature vector by concatenating nine feature vectors, i.e., the above three kinds of features for three candidates, and normalizing them.",
"Compared Methods We compared our method, MLRank, with the baselines listed below.",
"Prefix, DictDel, and Random are simple baselines, while Prefix and DictDel are practically strong.",
"ImpTfidf, SimTfidf, SimEmb, and LexRank are unsupervised baselines, and SVM and SVR are supervised ones.",
"• Prefix: Selects the first candidate in sentence order.",
"• DictDel: Selects the first (informative) candidate that does not match in the dictionary of uninformative headlines (Section 2).",
"• Random: Randomly selects a candidate.",
"• ImpTfidf: Selects the most important candidate with the highest tf-idf value, where a tf-idf value is calculated by the sum of the elements in a bag-of-words feature (described in Section 5.1).",
"• SimTfidf: Selects the most similar candidate to the original question, which is calculated by the cosine similarity between the bag-of-words features (in Section 5.1) of each candidate and the question.",
"• SimEmb: An embedding-based variation of SimTfidf with embedding features (in Section 5.1).",
"• LexRank: Selects the candidate with the highest score based on LexRank (Erkan and Radev, 2004), which is a widely used unsupervised extractive summarization method based on the PageRank algorithm.",
"The graph expression of each question was constructed on the basis of cosine similarity of the tf-idf vectors corresponding to candidates.",
"• SVM: Selects the candidate with the highest confidence based on a model learned as a classification task, where candidates with nonzero votes were labeled as positive.",
"This setting was the best in our preliminary experiments.",
"We used the L2-regularized L2-loss support vector classification model (C = 0.0156) in LIBLINEAR.",
"The other settings were the same as those described in Section 5.1.",
"• SVR: Selects the candidate with the highest predicted votes based on a model learned as a regression task, where the target variable is the number of votes.",
"We used the L2-regularized L2-loss support vector regression model (C = 0.0625) in LIBLINEAR.",
"The other settings were the same as above.",
"• MLRank: Proposed method described in Section 4.",
"Evaluation Measures We defined three evaluation measures for evaluating each method on our headline generation task.",
"Change Rate from Prefix Headline We measured how much each method changed the default prefix headline in order to determine the effect of application to an actual CQA service.",
"We defined this measure as change rate from the prefix headline, as follows: Change rate = (No. of questions where the best candidate is not the prefix headline) / (No. of all questions). (4)",
"Clearly, the change rate of the default method that selects the prefix headline is 0%.",
"If the value is small, the effect on the service will be small, but if the value is higher than the ideal change rate of 38.2% (Section 3), there can be side effects even if the average result is good.",
"A higher change rate up to the ideal rate is desirable from a practical standpoint.",
"Winning Rate against Prefix Headline We measured how much each method won against the prefix headline to directly assess the quality of changed headlines.",
"We defined this measure as winning rate against the prefix headline, as follows: Winning rate = (No. of questions where the best candidate got more votes than the prefix headline) / (No. of questions where the best candidate is not the prefix headline). (5)",
"We did not consider the first candidate (in sentence order) selected by each method, which is the same as the prefix headline, since they obviously have the same number of votes.",
"Average Votes We measured how appropriate the candidates selected by each method are in order to determine the overall performance.",
"We defined this measure as average votes, as follows: Average votes = (Sum of votes for the best candidates for all questions) / (No. of questions). (6)",
"Note that the average votes score is different from the average votes score for position (Figure 3(b) ) in that the former is the average over the selected candidates while the latter is the average over the candidates at a sentence position.",
"This measure is related to (normalized) discounted cumulative gain (DCG), which is widely used as an evaluation measure of ranking models.",
"We often use DCG@k for evaluating top-k rankings, and the above definition actually corresponds to DCG@1.",
"According to a well-known paper (Järvelin and Kekäläinen, 2002) in the information retrieval field, DCG is appropriate for graded-relevance judgments like our task, while precision (described below) is appropriate for binary-relevance ones.",
"Average votes is expected to be more appropriate than precision for our task because we want \"averagely better headlines than default ones\" rather than \"best ones\" from a practical standpoint.",
"Precision Precision is a widely used evaluation measure for classification tasks, and we added it to support an evaluation based on average votes.",
"We defined it with respect to the best candidate, i.e., precision@1, as follows: Precision = (No. of questions where the best candidate had the maximum votes) / (No. of questions). (7)",
"Table 2 shows examples of headlines generated by the prefix method, Prefix, and the proposed method, MLRank.",
"Looking at the first example, we can see that our method successfully eliminated the self-introduction phrase (\"I am a 27-year-old woman\").",
"The headline (right) of our method allows answerers to know that the questioner is discouraged about how to encounter men from the phrase \"little chance of new encounters with men\", while the headline (left) of the prefix method lacks this important clue.",
"The second and third examples show similar effects to the first example.",
"In the second one, although there are few clues about what the question is with the prefix method, our method correctly included the important clue (\"honeymoon\").",
"In the third one, our method appropriately eliminated the uninformative long phrase (\"I am sorry if the category is wrong\"), which is not a frequent fixed phrase.",
"The fourth example shows a slightly challenging case, where both headlines make it difficult to understand the question.",
"However, the headline of our method included the term \"winning bidder\", so at least the answerer can assume that the question is about some sort of auction trouble.",
"The fifth example is a clearly successful result, where our method extracted the main question point about \"welfare pension\" as a headline.",
"These results qualitatively demonstrate the effectiveness of our method.",
"Results Qualitative Analysis Quantitative Analysis We compared our method MLRank with the baselines in Section 5.2 on the headline generation task for our dataset in Section 3.",
"Table 3 shows the evaluation results based on the change rates, winning rates, average votes, and precision.",
"Looking at the average votes and precision, which represent the overall performances, our method MLRank clearly performed the best among all methods.",
"We confirmed that Table 3 : Evaluation results of our headline generation task for proposed method MLRank and baselines.",
"the relative improvement of the average votes of our method MLRank against every baseline including the prefix method Prefix is statistically significant on the basis of a one-tailed Wilcoxon signed-rank test (p < 0.01).",
"The change and winning rates of our method are 9.9% and 94.9%, respectively.",
"This means that our method detected 9.9% of the uninformative headlines and improved them with the high accuracy of 94.9%.",
"In other words, our method could successfully improve the overall performance while simultaneously avoiding any negative side effects.",
"The ideal results (Ref) based on correct labels suggest that our method still has room for improvement, especially for the change rate.",
"The results of the other baselines are as follows.",
"Not surprisingly, the prefix method Prefix performed well.",
"This is consistent with the fact that the first sentence can be a good summary in many summarization tasks.",
"The random method Random performed the worst, also as expected.",
"The dictionarybased deletion method DictDel was relatively useful, although the change rate was small.",
"The reason the winning rate of DictDel is relatively low compared with MLRank is that there are some cases where a combination of uninformative expressions can yield likely clues.",
"For example, the self-introduction \"I am a newbie of this forum\" itself is basically uninformative for a question, but a combination with additional information such as \"I am a newbie of this forum.",
"Where can I change the password ...\" can be more informative than only the additional information \"... Where can I change the password since I forgot it after ...\" because the combination specifies the target site by the expression \"this forum\".",
"The unsupervised methods, SimTfidf, SimTfidf, SimEmb, and LexRank, which are widely used for summarization tasks, performed worse than the prefix method Prefix.",
"Although the change rates are higher than our method, the winning rates are lower.",
"In other words, they yielded many bad headlines.",
"These results suggest that supervised learning specialized to the target task would be required.",
"Comparing the important sentence extraction method ImpTfidf and the similarity-based summarization method SimTfidf, we found that SimTfidf performed better.",
"This implies that the content information of each question is useful for our headline generation task, as is the case with other summarization tasks.",
"The similarity-based method SimEmb with embeddings performed worse than our expectation.",
"The reason seems to be that it was difficult to obtain meaningful document embeddings from long questions.",
"The graph-based method LexRank had a similar performance to SimTfidf, because LexRank tends to select a candidate similar to the question when only one candidate was selected.",
"The supervised methods, SVM and SVR, performed relatively well compared to the unsupervised methods, but they did not outperform the strong simple baselines, Prefix and DictDel, nor our method MLRank.",
"These results support the appropriateness of our approach.",
"Related Work In this section, we briefly explain several related studies from two aspects: headline generation task and CQA data.",
"As discussed below, our work is the first attempt to address an extractive headline generation task for a CQA service based on learning to rank the substrings of a question.",
"After Rush et al.",
"(2015) proposed a neural headline generation model, there have been many studies on the same headline generation task (Takase et al., 2016; Chopra et al., 2016; Kiyono et al., 2017; Ayana et al., 2017; Raffel et al., 2017) .",
"However, all of them are abstractive methods that can yield erroneous output, and the training for them requires a lot of paired data, i.e., news articles and headlines.",
"There have also been several classical studies based on nonneural approaches to headline generation (Woodsend et al., 2010; Alfonseca et al., 2013; Colmenares et al., 2015) , but they basically addressed sentence compression after extracting important linguistic units such as phrases.",
"In other words, their methods can still yield erroneous output, although they would be more controllable than neural models.",
"One exception is the work of Alotaiby (2011) , where fixed-sized substrings were considered for headline generation.",
"Although that approach is similar to ours, Alotaiby only considered an unsupervised method based on similarity to the original text (almost the same as SimTfidf in Section 5.2), in contrast to our proposal based on learning to rank.",
"This implies that Alotaiby's method will also not perform well for our task, as shown in Section 5.4.",
"There have been several studies on extractive summarization (Kobayashi et al., 2015; Yogatama et al., 2015) based on sentence embeddings, but they were basically developed for extracting multiple sentences, which means that these methods are almost the same as SimEmb in Section 5.2 for our purpose, i.e., extraction of the best candidate.",
"This also implies that they will not be suitable for our task.",
"Furthermore, recent sophisticated neural models for extractive summarization (Cheng and Lapata, 2016; Nallapati et al., 2017) basically require large-scale paired data (e.g., article-headline) to automatically label candidates, as manual annotation is very costly.",
"However, such paired data do not always exist for real applications, as in our task described in Section 1.",
"There have been many studies using CQA data, but most of them are different from our task, i.e., dealing with answering questions (Surdeanu et al., 2008; Celikyilmaz et al., 2009; Bhaskar, 2013; Nakov et al., 2017) , retrieving similar questions (Lei et al., 2016; Romeo et al., 2016; Nakov et al., 2017) , and generating questions (Heilman and Smith, 2010) .",
"Tamura et al.",
"(2005) focused on extracting a core sentence and identifying the question type as classification tasks for answering multiple-sentence questions.",
"Although their method is useful to retrieve important information, we cannot directly use it since our task requires shorter expressions for headlines than sentences.",
"In addition, they used a support vector machine as a classifier, which is almost the same as SVM in Section 5.2, and it is not expected to be suitable for our task, as shown in Section 5.4.",
"The work of Ishigaki et al.",
"(2017) is the most related one in that they summarized lengthy questions by using both abstractive and extractive approaches.",
"Their work is promising because our task is regarded as the construction of short summaries, but the training of their models requires a lot of paired data consisting of questions and their headlines, which means that their method cannot be used to our task.",
"Conclusion We proposed an extractive headline generation method based on learning to rank for CQA that extracts the most informative substring in each question as its headline.",
"We created a dataset for our task, where headline candidates in each question are ranked using crowdsourcing.",
"Our method outperformed several baselines, including a prefix-based method, which is widely used for cases where the display area is limited, such as the push notifications on smartphones.",
"The dataset created for our headline generation task will be made publicly available 8 .",
"Although our task is basically designed for extractive summarization, this dataset can also be used for abstractive summarization as a side information for training abstractive models.",
"In future work, we will investigate how effectively our method can perform in practical situations, e.g., push notifications.",
"In addition, we will consider how to improve the change rate of our method while keeping its winning rate and how to create a useful dataset even if removing the length limitation."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"5.1",
"5.2",
"5.3",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Negative Effect of Uninformative Headlines",
"Dataset Creation",
"Preparation of Headline Candidates",
"Crowdsourcing Task",
"Analysis of Crowdsourcing Results",
"Proposed Method",
"Basic Settings",
"Compared Methods",
"Evaluation Measures",
"Related Work",
"Conclusion"
]
}
|
GEM-SciDuet-train-50#paper-1080#slide-12
|
Compared Methods
|
Prefix: Select first candidate
DictDel: Delete uninformative sentence with rule (Used in A/B testing)
ImpTfidf: Select most important candidate with highest tf-idf value
SimTfidf: Select most similar candidate to original question with cosine similarity
LexRank: Select candidate with highest score based on LexRank (Erkan&Radev 2004)
SVM: Select candidate with highest confidence learned as classification task
SVR: Select candidate with highest predicted votes learned as regression task
Copyright (C) 2019 Yahoo Japan Corporation. All Rights Reserved.
|
Prefix: Select first candidate
DictDel: Delete uninformative sentence with rule (Used in A/B testing)
ImpTfidf: Select most important candidate with highest tf-idf value
SimTfidf: Select most similar candidate to original question with cosine similarity
LexRank: Select candidate with highest score based on LexRank (Erkan&Radev 2004)
SVM: Select candidate with highest confidence learned as classification task
SVR: Select candidate with highest predicted votes learned as regression task
Copyright (C) 2019 Yahoo Japan Corporation. All Rights Reserved.
|
[] |
GEM-SciDuet-train-50#paper-1080#slide-13
|
1080
|
Extractive Headline Generation Based on Learning to Rank for Community Question Answering
|
User-generated content such as the questions on community question answering (CQA) forums does not always come with appropriate headlines, in contrast to the news articles used in various headline generation tasks. In such cases, we cannot use paired supervised data, e.g., pairs of articles and headlines, to learn a headline generation model. To overcome this problem, we propose an extractive headline generation method based on learning to rank for CQA that extracts the most informative substring from each question as its headline. Experimental results show that our method outperforms several baselines, including a prefix-based method, which is widely used in real services.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260,
261,
262,
263
],
"paper_content_text": [
"Introduction Community question answering (CQA) is a service where users can post their questions and answer the questions from other users.",
"The quality of a CQA service is inevitably linked to how many questions are answered in a short amount of time.",
"To this end, the headline of a posted question plays a key role, as it is the first thing users see in a question list or push notification on a smartphone.",
"However, questions in a CQA service do not always have appropriate headlines because the questions are written by various users who basically do not have any specialized knowledge in terms of writing such content, in contrast to news articles written by professional editors.",
"In fact, the biggest CQA service in Japan, Yahoo!",
"Chiebukuro 1 , does not even provide an input field for headlines in the submission form of questions, as general users do not have enough patience and tend not to post questions if even just one required field is added to the form.",
"This service alternatively uses the prefix of a question as its headline as in Figure 1(a) , where the headline \" (Nice to meet you.",
"Thank you in advance.",
"At work ...)\" is created from the prefix of the content.",
"Obviously, this headline is uninformative because of the lack of actual content, which is related to an initial unrequited love of a woman in the workplace.",
"Figure 1(b) shows how ineffective such an uninformative headline is for a smartphone push notification, where users would have practically zero motivation to click on it and answer since they cannot imagine what kind of question is being asked.",
"This negative effect has been confirmed on a commercial CQA service (Section 2).",
"In this work, we take an extractive approach for improving uninformative headlines.",
"Although there have recently been many studies on abstractive headline generation (as described in Section 6), we do not follow any of these approaches because the content we deal with has no correct headlines and also the abstractive methods can yield erroneous output.",
"This latter issue is important in practical terms because correct output is critical for a commercial service.",
"If by some chance an erroneous headline, or one including politically incorrect phrases, is sent to all users, the service's credibility can be lost instantly.",
"Therefore, we formalize our task as an extraction problem of a fixed-length substring from a question as its headline.",
"In this setting, we can assume that outputs never include such errors caused by the service, as the outputs are substrings of user-posted questions.",
"Note that they have no coherence errors in selecting multiple sentences as in normal extractive summarization tasks.",
"While it is true that these outputs might contain inappropriate expressions authored by users, this type of error is beyond our scope since it is a different inevitable problem.",
"Furthermore, the situation of \"the service generated an inappropriate expression by itself\" is significantly worse than the situation of \"a user posted an inappropriate question to the service, and the service displayed it\".",
"Therefore, it is difficult to directly use abstractive methods for commercial services from a business standpoint.",
"Our approach involves preparing headline candidates and ranking them.",
"The formal description is as follows.",
"Let q be a target question to be translated into a headline.",
"We first prepare a set S(q) of headline candidates from the question q.",
"Note that the set S(q) is restricted to a set of fixed-length substrings given a length n, i.e., S(q) ⊆ {x | x ⪯ q, |x| = n}, where x ⪯ q means that x is a substring of q.",
"Then we extract the best headline that maximizes a score function f q (x), which represents the \"headline-ness\" score of a candidate x ∈ S(q) with respect to the target question q, as follows: argmax x∈S(q) f q (x).",
"(1) To ensure simplicity of implementation and understandability from users, we use a set of the fixed-length prefixes of all sentences in a question q as the candidate set S(q) (Section 3).",
"Because the problem is to select the best candidate from among several targets, the score function f q (x) is naturally trained by learning to rank (Section 4).",
"The main contributions of this paper are as follows.",
"• We report empirical evidence of the negative effect of uninformative headlines on a commercial CQA service (Section 2).",
"We additionally show that our task can reduce uninformative headlines by using simple dictionary matching, which dramatically improves the average answer rate of questions by as much as 2.4 times.",
"• We propose an extractive headline generation method based on learning to rank for CQA (Section 4) that extracts the most informative substring (prefix of mid-sentence) from each question as its headline.",
"To the best of our knowledge, our work is the first attempt to address such a task from a practical standpoint, although there have been many related studies (Section 6).",
"Experimental results show that our method outperforms several baselines, including the dictionary-based method (Section 5).",
"• We create a dataset for our headline generation task (Section 3), where headline candidates extracted from questions are ranked by crowdsourcing with respect to \"headline-ness\", that is, whether or not each headline candidate is appropriate for the headline of the corresponding question.",
"Negative Effect of Uninformative Headlines We conducted A/B testing on the push notifications of smartphones in collaboration with Yahoo!",
"Chiebukuro, as shown in Figure 1 (b).",
"We first prepared a dictionary of typical first sentences that cause uninformative headlines.",
"This dictionary was manually selected from frequent first sentences and consists of 913 sentences including greetings such as \" (Good morning)\", \" Changed uninformative Unchanged uninformative Informative 0.75% 0.31% 0.45% Table 1 : Average answer rates of three question groups in A/B testing.",
"(Good afternoon)\", and \" (Nice to meet you)\", and fixed phrases such as \" (Can I ask you something)\", \" (Please tell me)\", and \" (Thank you in advance)\".",
"We assumed that a question with a prefix match in the dictionary has an uninformative prefix headline and classified such questions into an uninformative group.",
"For convenience, we also classified the other questions into an informative group, although they might include not so informative headlines.",
"We further randomly divided the uninformative group into two equal groups: changed and unchanged.",
"In the changed uninformative group, each headline is extracted as the prefix of the first (informative) sentence that does not match with the dictionary, which is the same as DictDel explained in Section 5.2.",
"The unchanged group remains in uninformative.",
"For comparison of these groups, we used the average answer rate over notified questions in each group as an evaluation measure, defined as Average answer rate = No.",
"of questions answered from the notification No.",
"of notified questions .",
"(2) Note that we use a percentage expression (%) for easy reading.",
"Table 1 shows the evaluation results of the A/B testing during a 1-month period (Feb. 2 -Mar.",
"4, 2018) , where about three million questions were sent to users.",
"Comparing the unchanged uninformative group with the informative group, we can see that the average answer rate of the uninformative questions, 0.31%, is actually lower than that of the informative questions, 0.45%.",
"Comparing the changed and unchanged uninformative groups, the average answer rate of the changed questions, 0.75%, is much higher than that of the unchanged questions, 0.31%.",
"This means that even a simple dictionary-based method can dramatically improve the quality of the uninformative headlines, i.e., by as much as 2.4 times.",
"We confirmed that the difference is statistically significant on a one-tailed Wilcoxon signed-rank test (p < 0.05).",
"Note that the average answer rate represents a conversion rate (or rate of target actions), which is more important than a click-through rate (or rate of initial actions).",
"The average answer rate is one of the most important indicators for a CQA service, while the click-through rate can be meaninglessly high if the service sends headlines that are fake or too catchy.",
"We should point out that low answer rates are sufficient for the service, since it has 44M users: i.e., each question has an average of 2.4 answers, as the service currently has 189M questions and 462M answers.",
"Dataset Creation We created a dataset for our headline generation task based on the Yahoo!",
"Chiebukuro dataset 2 , which is a dataset including questions and answers provided from a Japanese CQA service, Yahoo!",
"Chiebukuro.",
"We first prepared headline candidates from this dataset as in Section 3.1 and then conducted a crowdsourcing task specified in Section 3.2.",
"In Section 3.3, we report the results of the crowdsourcing task.",
"Preparation of Headline Candidates We extracted only questions from the Chiebukuro dataset and split each question into sentences by using punctuation marks (i.e., the exclamation (\" \"), question (\" \"), and full stop (\" \") marks).",
"We regarded 20 Japanese characters that are basically extracted from each sentence as a headline candidate x ∈ S(q) in Eq.",
"(1), since this setting is used for push notifications in the actual service in Figure 1 (b).",
"More specifically, the headline candidate is created as follows: 1.",
"If the sentence is the first one in the question, we extract the first 19 characters and put an ellipsis mark (\" \") at the end.",
"(a) Example of our crowdsourcing task.",
"Posted Question: Nice to meet you, I am a man in my 30s.",
"Please give me your advice on a pressing concern I have.",
"A dog kept in the next house barks from morning to night.",
"Neighbors have given the owner cautions against it, but there is no improvement.",
"This area has only private houses, not rented houses, so I cannot move out.",
"However, I will go crazy if I have to keep enduring this.",
"How can I effectively manage this problem?",
"...",
"Please give me your advice on a pressing concern ... (b) English translation of left example.",
"Figure 2 : Examples of (a) our crowdsourcing task and (b) its English translation.",
"In the case where the length of a candidate is less than 20 characters, we include some of the next sentence in order to maximize use of display space.",
"We included questions with more than five sentences for the purpose of efficiently collecting ranking information.",
"All told, we prepared 10,000 questions containing more than five headline candidates each.",
"Figure 2 shows an example of our crowdsourcing task (a) and its English translation (b).",
"This task involves a posted question and headline candidates corresponding to the question.",
"We asked workers to select the best candidate from options after reading the posted question.",
"A relative evaluation, where workers select the best candidate, was used instead of an absolute evaluation, where workers select a score from 0 to 10 for each candidate, because we wanted to obtain as accurate a headline as possible, and it might be difficult for workers to select an appropriate absolute score.",
"The workers were instructed as follows (English translation): Crowdsourcing Task Various candidate headlines are listed as options for a posted question in a Q&A service.",
"After reading the question, please select the best option from the list so that users can guess the content of the question and distinguish it from other ones.",
"Please remove uninformative ones such as greetings, self-introductions, and unspecific expressions.",
"We explain how to judge the appropriateness of each candidate by means of the example in Figure 2 .",
"After examining the posted question, we can assume that the most important content is \"he is annoyed by the barking of a dog kept in the next house\".",
"On the basis of this assumption, option 6 is the best one, since the headline \"A dog kept in the next house barks from morning\" is enough to help answerers deduce that \"the questioner is annoyed by the sound\".",
"Option 1 is inappropriate because although the answerers might be able to guess that \"the questioner cannot move out\", this matter is not the central one.",
"Option 2 is uninformative because it consists merely of greetings and self-introduction, and option 3, while a question sentence, is unspecific.",
"Option 4 enables answerers to guess that this question is related to an issue involving pets, but they cannot grasp the specific content.",
"Option 5 specifies a likely damage due to the trouble, but the reason (trouble) is more important for the answering than the result (damage).",
"Option 7 directly shows that \"the questioner is annoyed and wants some advice\", but answerers cannot understand why he/she is annoyed.",
"The detailed implementation of our task is as follows.",
"First we randomly sorted the candidates of each question (shown in Figure 2 (a)) to avoid position bias by the workers.",
"We included ten actual questions and a dummy question so that workers would always have to answer one dummy question per every ten actual questions.",
"A dummy question is a question with a clear answer inserted to eliminate fraud workers (i.e., workers who randomly select answers without actually reading them).",
"Each question was answered by ten workers so each headline candidate had a vote score from 0 to 10 representing whether or not the candidate was appropriate for a headline.",
"This task took nine days and was answered by 1,558 workers.",
"As a result, our dataset consists of 10,000 questions, each of which has more than five headline candidates with accompanying \"headline-ness\" scores.",
"Analysis of Crowdsourcing Results We analyzed our dataset to determine how much room for improvement our task has compared to the prefix headline.",
"It is well known that the first sentence can be a strong baseline in many summarization tasks, so the prefix headline is also expected to perform well.",
"Figure 3 shows the statistical information based on sentence position that includes the (a) ratio of the true best (most voted) candidates, (b) average votes, and (c) average rank (in order of votes) over the candidates at each sentence position.",
"Looking at the true best candidates (a), the ratio 61.8% for the 1st sentence clarifies the effectiveness of the prefix headline, as expected.",
"Conversely, we still have room for improvement for the prefix headline up to 38.2%.",
"Our goal is to improve the uninformative headlines of 38.2% while keeping the remaining 61.8% unchanged.",
"The other two figures (b) and (c) also support the above discussion.",
"Furthermore, we qualitatively checked the crowdsourcing quality.",
"Workers successfully eliminated uninformative candidates including greetings and self-introductions, while one or two workers sometimes chose ones that included a fixed phrase such as \"Please tell me ...\".",
"This is probably because workers had different criteria regarding the \"unspecific expressions\" described in the instructions.",
"Since we cannot enumerate all of the concrete bad examples, we ignore this phenomenon with the expectation that a learning algorithm will reduce its negative effect.",
"Proposed Method In this section, we explain how to construct a headline generation model from the dataset presented in Section 3.",
"We took a ranking approach, i.e., learning to rank, for our task, rather than a simple regression one, since estimating absolute scores is not required for our purpose.",
"Even if two headline candidates (of different questions) have the same expression, their votes can be significantly different since the votes in our dataset are based on relative evaluation.",
"For example, the best candidate for Figure 2 was No.",
"6, but it might not be selected in other questions such as \"A dog kept in the next house barks from morning.",
"Does anybody know why dogs generally want to bark?\".",
"Learning to rank is an application of machine learning that is typically used for ranking models in information retrieval systems.",
"The ranking models are basically learned from a supervised dataset consisting of triples (q, x, y), where q is a user's query, x is a document corresponding to q, and y is a relevance score of x with respect to q.",
"In this work, we formalize our task by regarding q, x, and y, as a posted question, a headline candidate, and a voted score in our dataset, respectively.",
"We used a pairwise ranking method that is also implemented as an instance of the well-known SVM tools LIBLINEAR and LIBSVM (Kuo et al., 2014) .",
"We used a linear model based on LIBLINEAR, an L2-regularized L2-loss linear rankSVM, for the experiments.",
"Let D be a dataset that consists of triples including a posted question q, a headline candidate x, and a voted score y, i.e., (q, x, y) ∈ D. In the pairwise ranking method, we train a ranking model as a binary classifier that determines whether the condition y i > y j is true or false for two candidates x i and x j in the same question (q i = q j ).",
"Specifically, we first define the index pairs of positive examples by P = {(i, j) | q i = q j , y i > y j , (x i , y i , q i ) ∈ D, (x j , y j , q j ) ∈ D}.",
"Note that we do not need to consider the negative examples N = {(j, i) | (i, j) ∈ P } since they yield the same formula as P in the optimization process.",
"The training of the pairwise ranking method is achieved by solving the following optimization problem using the set P of the index pairs: min w 1 2 w ⊤ w + C ∑ (i,j)∈P ℓ(w ⊤x i − w ⊤x j ), (3) where w is a weight vector to be learned,x i is a feature vector extracted from a headline candidate x, and C is the regularization parameter.",
"The function ℓ is a squared hinge loss, which is defined as ℓ(d) = max(0, 1 − d) 2 .",
"Finally, we define the score function in Eq.",
"(1) as f q (x) = w ⊤x , wherex can be created by using q as well as x.",
"This score means the relative \"headline-ness\" of x.",
"Experiments Basic Settings The basic settings of the experiments are as follows.",
"We split our dataset into training and test sets consisting of 9,000 and 1,000 examples, respectively.",
"We used an implementation 3 based on LIBLIN-EAR for training our ranking model, i.e., a linear L2-regularized L2-loss rankSVM model , as described in Section 4.",
"The regularization parameter was optimized by cross validation and set as C = 0.125.",
"The feature vector for a headline candidate consists of three kinds of features: bag-of-words, embedding, and position information.",
"The bag-of-words feature is a sparse vector of 30,820 dimensions based on the tf-idf scores of nouns, verbs, interjections, conjunctions, adverbs, and adjectives in a candidate, where we used a Japanese morphological analyzer, MeCab 4 (Kudo et al., 2004) , with a neologism dictionary, NEologd 5 (Toshinori Sato and Okumura, 2017).",
"The embedding feature is a dense vector of 100 dimensions based on a doc2vec model (Le and Mikolov, 2014) trained with all 3M sentences in the Chiebukuro dataset using the Gensim tool.",
"The position feature is a binary vector of ten dimensions, where each dimension represents the coarse position (or coverage) of a headline candidate for a question.",
"Specifically, we equally split a question (character sequence) into ten parts and set one to each dimension if and only if the corresponding part overlaps a candidate.",
"For example, candidate No. 2 in Figure 2 had a position feature (1, 1, 0, ..., 0), since the candidate covers the first 2/10 of the whole question.",
"Similarly, No. 6 and No. 3 had (0, 0, 1, 1, 0, ..., 0) and (0, ..., 0, 1), respectively.",
"For constructing the feature vector of each headline candidate, we used the previous and next candidates in sentence order, in addition to the target candidate.",
"This is based on the idea that near candidates might have useful information for the target candidate.",
"Finally, we prepared each feature vector by concatenating nine feature vectors, i.e., the above three kinds of features for three candidates, and normalizing them.",
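The ten-dimensional position feature described above can be sketched as follows; treating "coverage" as character-span overlap with each tenth of the question is an assumption about the exact computation.

```python
def position_feature(q_len, start, end, bins=10):
    """Binary position feature: dimension b is 1 iff the candidate
    (character span [start, end)) overlaps the b-th tenth of the question."""
    feat = [0] * bins
    for b in range(bins):
        lo = b * q_len / bins          # left edge of the b-th part
        hi = (b + 1) * q_len / bins    # right edge of the b-th part
        if start < hi and end > lo:    # half-open interval overlap test
            feat[b] = 1
    return feat

# a candidate covering the first 2/10 of a 100-character question
print(position_feature(100, 0, 20))    # -> [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
```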
"Compared Methods: We compared our method, MLRank, with the baselines listed below.",
"Prefix, DictDel, and Random are simple baselines, while Prefix and DictDel are practically strong.",
"ImpTfidf, SimTfidf, SimEmb, and LexRank are unsupervised baselines, and SVM and SVR are supervised ones.",
"• Prefix: Selects the first candidate in sentence order.",
"• DictDel: Selects the first (informative) candidate that does not match in the dictionary of uninformative headlines (Section 2).",
"• Random: Randomly selects a candidate.",
"• ImpTfidf: Selects the most important candidate with the highest tf-idf value, where a tf-idf value is calculated by the sum of the elements in a bag-of-words feature (described in Section 5.1).",
"• SimTfidf: Selects the most similar candidate to the original question, which is calculated by the cosine similarity between the bag-of-words features (in Section 5.1) of each candidate and the question.",
"• SimEmb: An embedding-based variation of SimTfidf with embedding features (in Section 5.1).",
"• LexRank: Selects the candidate with the highest score based on LexRank (Erkan and Radev, 2004), which is a widely used unsupervised extractive summarization method based on the PageRank algorithm.",
"The graph expression of each question was constructed on the basis of cosine similarity of the tf-idf vectors corresponding to candidates.",
"• SVM: Selects the candidate with the highest confidence based on a model learned as a classification task, where candidates with nonzero votes were labeled as positive.",
"This setting was the best in our preliminary experiments.",
"We used the L2-regularized L2-loss support vector classification model (C = 0.0156) in LIBLINEAR.",
"The other settings were the same as those described in Section 5.1.",
"• SVR: Selects the candidate with the highest predicted votes based on a model learned as a regression task, where the target variable is the number of votes.",
"We used the L2-regularized L2-loss support vector regression model (C = 0.0625) in LIBLINEAR.",
"The other settings were the same as above.",
"• MLRank: Proposed method described in Section 4.",
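The similarity-based baselines above (SimTfidf/SimEmb) reduce to an argmax over cosine similarities between candidate and question vectors. A minimal sketch, where plain term counts stand in for the tf-idf vectors over MeCab tokens used in the paper:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two sparse count vectors (dicts)."""
    num = sum(v * b.get(t, 0) for t, v in a.items())
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def sim_select(question_tokens, candidate_tokens_list):
    """SimTfidf-style baseline: pick the candidate most similar
    to the whole question."""
    qv = Counter(question_tokens)
    sims = [cosine(Counter(c), qv) for c in candidate_tokens_list]
    return max(range(len(sims)), key=sims.__getitem__)

# toy tokenized question and candidates (illustrative English stand-ins)
question = ["dog", "next", "house", "barks", "advice", "move"]
candidates = [["nice", "meet", "you"],
              ["dog", "next", "house", "barks"],
              ["advice", "please"]]
best = sim_select(question, candidates)   # index of the chosen candidate
```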
"Evaluation Measures: We defined three evaluation measures for evaluating each method on our headline generation task.",
"Change Rate from Prefix Headline: We measured how much each method changed the default prefix headline in order to determine the effect of application to an actual CQA service.",
"We defined this measure as change rate from the prefix headline, as follows: Change rate = (No. of questions where the best candidate is not the prefix headline) / (No. of all questions). (4)",
"Clearly, the change rate of the default method that selects the prefix headline is 0%.",
"If the value is small, the effect on the service will be small, but if the value is higher than the ideal change rate of 38.2% (Section 3), there can be side effects even if the average result is good.",
"A higher change rate up to the ideal rate is desirable from a practical standpoint.",
"Winning Rate against Prefix Headline: We measured how much each method won against the prefix headline to directly assess the quality of changed headlines.",
"We defined this measure as winning rate against the prefix headline, as follows: Winning rate = (No. of questions where the best candidate got more votes than the prefix headline) / (No. of questions where the best candidate is not the prefix headline). (5)",
"We did not consider the first candidate (in sentence order) selected by each method, which is the same as the prefix headline, since they obviously have the same number of votes.",
"Average Votes: We measured how appropriate the candidates selected by each method are in order to determine the overall performance.",
"We defined this measure as average votes, as follows: Average votes = (Sum of votes for the best candidates for all questions) / (No. of questions). (6)",
"Note that the average votes score is different from the average votes score for position (Figure 3(b) ) in that the former is the average over the selected candidates while the latter is the average over the candidates at a sentence position.",
"This measure is related to (normalized) discounted cumulative gain (DCG), which is widely used as an evaluation measure of ranking models.",
"We often use DCG@k for evaluating top-k rankings, and the above definition actually corresponds to DCG@1.",
"According to a well-known paper (Järvelin and Kekäläinen, 2002) in the information retrieval field, DCG is appropriate for graded-relevance judgments like our task, while precision (described below) is appropriate for binary-relevance ones.",
"Average votes is expected to be more appropriate than precision for our task because we want \"averagely better headlines than default ones\" rather than \"best ones\" from a practical standpoint.",
"Precision: Precision is a widely used evaluation measure for classification tasks, and we added it to support an evaluation based on average votes.",
"We defined it with respect to the best candidate, i.e., precision@1, as follows: Precision = (No. of questions where the best candidate had the maximum votes) / (No. of questions). (7)",
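The four measures defined in this section can be computed together as in the following sketch; the per-question index lists and toy votes are illustrative assumptions.

```python
def evaluate(selected, prefix, votes):
    """Change rate, winning rate, average votes, and precision@1
    for per-question candidate choices.

    selected[k] / prefix[k]: candidate index chosen by the method /
    by the prefix baseline for question k; votes[k]: vote list."""
    n = len(selected)
    changed = [k for k in range(n) if selected[k] != prefix[k]]
    change_rate = len(changed) / n
    wins = sum(votes[k][selected[k]] > votes[k][prefix[k]] for k in changed)
    winning_rate = wins / len(changed) if changed else 0.0
    avg_votes = sum(votes[k][selected[k]] for k in range(n)) / n
    precision = sum(votes[k][selected[k]] == max(votes[k])
                    for k in range(n)) / n
    return change_rate, winning_rate, avg_votes, precision

votes = [[2, 5, 0], [4, 1, 1]]   # toy votes per candidate, two questions
cr, wr, av, p = evaluate(selected=[1, 0], prefix=[0, 0], votes=votes)
```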
"Table 2 shows examples of headlines generated by the prefix method, Prefix, and the proposed method, MLRank.",
"Looking at the first example, we can see that our method successfully eliminated the selfintroduction phrase (\"I am a 27-year-old woman\").",
"The headline (right) of our method allows answerers to know that the questioner is discouraged about how to encounter men from the phrase \"little chance of new encounters with men\", while the headline (left) of the prefix method lacks this important clue.",
"The second and third examples show similar effects to the first example.",
"In the second one, although there are few clues about what the question is with the prefix method, our method correctly included the important clue (\"honeymoon\").",
"In the third one, our method appropriately eliminated the uninformative long phrase (\"I am sorry if the category is wrong\"), which is not a frequent fixed phrase.",
"The fourth example shows a slightly challenging case, where both headlines make it difficult to understand the question.",
"However, the headline of our method included the term \"winning bidder\", so at least the answerer can assume that the question is about some sort of auction trouble.",
"The fifth example is a clearly successful result, where our method extracted the main question point about \"welfare pension\" as a headline.",
"These results qualitatively demonstrate the effectiveness of our method.",
"Quantitative Analysis: We compared our method MLRank with the baselines in Section 5.2 on the headline generation task for our dataset in Section 3.",
"Table 3 shows the evaluation results based on the change rates, winning rates, average votes, and precision.",
"Looking at the average votes and precision, which represent the overall performances, our method MLRank clearly performed the best among all methods.",
"Table 3: Evaluation results of our headline generation task for proposed method MLRank and baselines.",
"We confirmed that the relative improvement of the average votes of our method MLRank against every baseline including the prefix method Prefix is statistically significant on the basis of a one-tailed Wilcoxon signed-rank test (p < 0.01).",
"The change and winning rates of our method are 9.9% and 94.9%, respectively.",
"This means that our method detected 9.9% of the uninformative headlines and improved them with the high accuracy of 94.9%.",
"In other words, our method could successfully improve the overall performance while simultaneously avoiding any negative side effects.",
"The ideal results (Ref) based on correct labels suggest that our method still has room for improvement, especially for the change rate.",
"The results of the other baselines are as follows.",
"Not surprisingly, the prefix method Prefix performed well.",
"This is consistent with the fact that the first sentence can be a good summary in many summarization tasks.",
"The random method Random performed the worst, also as expected.",
"The dictionarybased deletion method DictDel was relatively useful, although the change rate was small.",
"The reason the winning rate of DictDel is relatively low compared with MLRank is that there are some cases where a combination of uninformative expressions can yield likely clues.",
"For example, the self-introduction \"I am a newbie of this forum\" itself is basically uninformative for a question, but a combination with additional information such as \"I am a newbie of this forum.",
"Where can I change the password ...\" can be more informative than only the additional information \"... Where can I change the password since I forgot it after ...\" because the combination specifies the target site by the expression \"this forum\".",
"The unsupervised methods, ImpTfidf, SimTfidf, SimEmb, and LexRank, which are widely used for summarization tasks, performed worse than the prefix method Prefix.",
"Although the change rates are higher than our method, the winning rates are lower.",
"In other words, they yielded many bad headlines.",
"These results suggest that supervised learning specialized to the target task would be required.",
"Comparing the important sentence extraction method ImpTfidf and the similarity-based summarization method SimTfidf, we found that SimTfidf performed better.",
"This implies that the content information of each question is useful for our headline generation task, as is the case with other summarization tasks.",
"The similarity-based method SimEmb with embeddings performed worse than our expectation.",
"The reason seems to be that it was difficult to obtain meaningful document embeddings from long questions.",
"The graph-based method LexRank had a similar performance to SimTfidf, because LexRank tends to select a candidate similar to the question when only one candidate was selected.",
"The supervised methods, SVM and SVR, performed relatively well compared to the unsupervised methods, but they did not outperform the strong simple baselines, Prefix and DictDel, nor our method MLRank.",
"These results support the appropriateness of our approach.",
"Related Work: In this section, we briefly explain several related studies from two aspects: the headline generation task and CQA data.",
"As discussed below, our work is the first attempt to address an extractive headline generation task for a CQA service based on learning to rank the substrings of a question.",
"After Rush et al. (2015) proposed a neural headline generation model, there have been many studies on the same headline generation task (Takase et al., 2016; Chopra et al., 2016; Kiyono et al., 2017; Ayana et al., 2017; Raffel et al., 2017).",
"However, all of them are abstractive methods that can yield erroneous output, and the training for them requires a lot of paired data, i.e., news articles and headlines.",
"There have also been several classical studies based on nonneural approaches to headline generation (Woodsend et al., 2010; Alfonseca et al., 2013; Colmenares et al., 2015) , but they basically addressed sentence compression after extracting important linguistic units such as phrases.",
"In other words, their methods can still yield erroneous output, although they would be more controllable than neural models.",
"One exception is the work of Alotaiby (2011) , where fixed-sized substrings were considered for headline generation.",
"Although that approach is similar to ours, Alotaiby only considered an unsupervised method based on similarity to the original text (almost the same as SimTfidf in Section 5.2), in contrast to our proposal based on learning to rank.",
"This implies that Alotaiby's method will also not perform well for our task, as shown in Section 5.4.",
"There have been several studies on extractive summarization (Kobayashi et al., 2015; Yogatama et al., 2015) based on sentence embeddings, but they were basically developed for extracting multiple sentences, which means that these methods are almost the same as SimEmb in Section 5.2 for our purpose, i.e., extraction of the best candidate.",
"This also implies that they will not be suitable for our task.",
"Furthermore, recent sophisticated neural models for extractive summarization (Cheng and Lapata, 2016; Nallapati et al., 2017) basically require large-scale paired data (e.g., article-headline) to automatically label candidates, as manual annotation is very costly.",
"However, such paired data do not always exist for real applications, as in our task described in Section 1.",
"There have been many studies using CQA data, but most of them are different from our task, i.e., dealing with answering questions (Surdeanu et al., 2008; Celikyilmaz et al., 2009; Bhaskar, 2013; Nakov et al., 2017) , retrieving similar questions (Lei et al., 2016; Romeo et al., 2016; Nakov et al., 2017) , and generating questions (Heilman and Smith, 2010) .",
"Tamura et al. (2005) focused on extracting a core sentence and identifying the question type as classification tasks for answering multiple-sentence questions.",
"Although their method is useful to retrieve important information, we cannot directly use it since our task requires shorter expressions for headlines than sentences.",
"In addition, they used a support vector machine as a classifier, which is almost the same as SVM in Section 5.2, and it is not expected to be suitable for our task, as shown in Section 5.4.",
"The work of Ishigaki et al. (2017) is the most related one in that they summarized lengthy questions by using both abstractive and extractive approaches.",
"Their work is promising because our task is regarded as the construction of short summaries, but the training of their models requires a lot of paired data consisting of questions and their headlines, which means that their method cannot be applied to our task.",
"Conclusion: We proposed an extractive headline generation method based on learning to rank for CQA that extracts the most informative substring in each question as its headline.",
"We created a dataset for our task, where headline candidates in each question are ranked using crowdsourcing.",
"Our method outperformed several baselines, including a prefix-based method, which is widely used for cases where the display area is limited, such as the push notifications on smartphones.",
"The dataset created for our headline generation task will be made publicly available.",
"Although our task is basically designed for extractive summarization, this dataset can also be used for abstractive summarization as a side information for training abstractive models.",
"In future work, we will investigate how effectively our method can perform in practical situations, e.g., push notifications.",
"In addition, we will consider how to improve the change rate of our method while keeping its winning rate and how to create a useful dataset even if removing the length limitation."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"5.1",
"5.2",
"5.3",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Negative Effect of Uninformative Headlines",
"Dataset Creation",
"Preparation of Headline Candidates",
"Crowdsourcing Task",
"Analysis of Crowdsourcing Results",
"Proposed Method",
"Basic Settings",
"Compared Methods",
"Evaluation Measures",
"Related Work",
"Conclusion"
]
}
|
GEM-SciDuet-train-50#paper-1080#slide-13
|
Evaluation Metrics
|
Measures how appropriate candidates selected by each method
Determines the overall performance of each method
Measures how much each method changed the default prefix headline
Determines the impact of application to actual CQA service.
Copyright (C) 2019 Yahoo Japan Corporation. All Rights Reserved.
|
Measures how appropriate candidates selected by each method
Determines the overall performance of each method
Measures how much each method changed the default prefix headline
Determines the impact of application to actual CQA service.
Copyright (C) 2019 Yahoo Japan Corporation. All Rights Reserved.
|
[] |
GEM-SciDuet-train-50#paper-1080#slide-14
|
1080
|
Extractive Headline Generation Based on Learning to Rank for Community Question Answering
|
User-generated content such as the questions on community question answering (CQA) forums does not always come with appropriate headlines, in contrast to the news articles used in various headline generation tasks. In such cases, we cannot use paired supervised data, e.g., pairs of articles and headlines, to learn a headline generation model. To overcome this problem, we propose an extractive headline generation method based on learning to rank for CQA that extracts the most informative substring from each question as its headline. Experimental results show that our method outperforms several baselines, including a prefix-based method, which is widely used in real services.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260,
261,
262,
263
],
"paper_content_text": [
"Introduction: Community question answering (CQA) is a service where users can post their questions and answer the questions of other users.",
"The quality of a CQA service is inevitably linked to how many questions are answered in a short amount of time.",
"To this end, the headline of a posted question plays a key role, as it is the first thing users see in a question list or push notification on a smartphone.",
"However, questions in a CQA service do not always have appropriate headlines because the questions are written by various users who basically do not have any specialized knowledge in terms of writing such content, in contrast to news articles written by professional editors.",
"In fact, the biggest CQA service in Japan, Yahoo! Chiebukuro, does not even provide an input field for headlines in the submission form of questions, as general users do not have enough patience and tend not to post questions if even just one required field is added to the form.",
"This service alternatively uses the prefix of a question as its headline as in Figure 1(a) , where the headline \" (Nice to meet you.",
"Thank you in advance.",
"At work ...)\" is created from the prefix of the content.",
"Obviously, this headline is uninformative because of the lack of actual content, which is related to an initial unrequited love of a woman in the workplace.",
"Figure 1(b) shows how ineffective such an uninformative headline is for a smartphone push notification, where users would have practically zero motivation to click on it and answer since they cannot imagine what kind of question is being asked.",
"This negative effect has been confirmed on a commercial CQA service (Section 2).",
"In this work, we take an extractive approach for improving uninformative headlines.",
"Although there have recently been many studies on abstractive headline generation (as described in Section 6), we do not follow any of these approaches because the content we deal with has no correct headlines and also the abstractive methods can yield erroneous output.",
"This latter issue is important in practical terms because correct output is critical for a commercial service.",
"If by some chance an erroneous headline, or one including politically incorrect phrases, is sent to all users, the service's credibility can be lost instantly.",
"Therefore, we formalize our task as an extraction problem of a fixed-length substring from a question as its headline.",
"In this setting, we can assume that outputs never include such errors caused by the service, as the outputs are substrings of user-posted questions.",
"Note that, since a single substring is selected rather than multiple sentences as in normal extractive summarization tasks, the outputs also have no inter-sentence coherence errors.",
"While it is true that these outputs might contain inappropriate expressions authored by users, this type of error is beyond our scope since it is a different inevitable problem.",
"Furthermore, the situation of \"the service generated an inappropriate expression by itself\" is significantly worse than the situation of \"a user posted an inappropriate question to the service, and the service displayed it\".",
"Therefore, it is difficult to directly use abstractive methods for commercial services from a business standpoint.",
"Our approach involves preparing headline candidates and ranking them.",
"The formal description is as follows.",
"Let q be a target question to be translated into a headline.",
"We first prepare a set S(q) of headline candidates from the question q.",
"Note that the set S(q) is restricted to a set of fixed-length substrings given a length n, i.e., S(q) ⊆ {x | x ⪯ q, |x| = n}, where x ⪯ q means that x is a substring of q.",
"Then we extract the best headline that maximizes a score function f_q(x), which represents the \"headline-ness\" score of a candidate x ∈ S(q) with respect to the target question q, as follows: argmax_{x∈S(q)} f_q(x). (1)",
"To ensure simplicity of implementation and understandability from users, we use a set of the fixed-length prefixes of all sentences in a question q as the candidate set S(q) (Section 3).",
"Because the problem is to select the best candidate from among several targets, the score function f q (x) is naturally trained by learning to rank (Section 4).",
"The main contributions of this paper are as follows.",
"• We report empirical evidence of the negative effect of uninformative headlines on a commercial CQA service (Section 2).",
"We additionally show that our task can reduce uninformative headlines by using simple dictionary matching, which dramatically improves the average answer rate of questions by as much as 2.4 times.",
"• We propose an extractive headline generation method based on learning to rank for CQA (Section 4) that extracts the most informative substring (prefix of mid-sentence) from each question as its headline.",
"To the best of our knowledge, our work is the first attempt to address such a task from a practical standpoint, although there have been many related studies (Section 6).",
"Experimental results show that our method outperforms several baselines, including the dictionary-based method (Section 5).",
"• We create a dataset for our headline generation task (Section 3), where headline candidates extracted from questions are ranked by crowdsourcing with respect to \"headline-ness\", that is, whether or not each headline candidate is appropriate for the headline of the corresponding question.",
"Negative Effect of Uninformative Headlines: We conducted A/B testing on the push notifications of smartphones in collaboration with Yahoo! Chiebukuro, as shown in Figure 1(b).",
"We first prepared a dictionary of typical first sentences that cause uninformative headlines.",
"This dictionary was manually selected from frequent first sentences and consists of 913 sentences including greetings such as \" (Good morning)\", \" (Good afternoon)\", and \" (Nice to meet you)\", and fixed phrases such as \" (Can I ask you something)\", \" (Please tell me)\", and \" (Thank you in advance)\".",
"Table 1: Average answer rates of three question groups in A/B testing — Changed uninformative: 0.75%; Unchanged uninformative: 0.31%; Informative: 0.45%.",
"We assumed that a question with a prefix match in the dictionary has an uninformative prefix headline and classified such questions into an uninformative group.",
"For convenience, we also classified the other questions into an informative group, although they might include not so informative headlines.",
"We further randomly divided the uninformative group into two equal groups: changed and unchanged.",
"In the changed uninformative group, each headline is extracted as the prefix of the first (informative) sentence that does not match with the dictionary, which is the same as DictDel explained in Section 5.2.",
"The unchanged group remains in uninformative.",
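The rule applied to the changed group (the same dictionary-based deletion later called DictDel in Section 5.2) can be sketched as follows; the English dictionary entries and sentences are illustrative stand-ins for the Japanese originals.

```python
def dictdel_headline(sentences, uninformative, n=20):
    """Take the n-character prefix of the first sentence that does not
    match the dictionary of uninformative first sentences; fall back to
    the plain prefix if every sentence matches."""
    for s in sentences:
        if s not in uninformative:
            return s[:n]
    return sentences[0][:n]

dic = {"Good morning.", "Nice to meet you.", "Thank you in advance."}
q = ["Nice to meet you.", "At work there is a woman I like.", "What should I do?"]
headline = dictdel_headline(q, dic)   # -> "At work there is a w"
```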
"For comparison of these groups, we used the average answer rate over notified questions in each group as an evaluation measure, defined as Average answer rate = (No. of questions answered from the notification) / (No. of notified questions). (2)",
"Note that we use a percentage expression (%) for easy reading.",
"Table 1 shows the evaluation results of the A/B testing during a 1-month period (Feb. 2 - Mar. 4, 2018), where about three million questions were sent to users.",
"Comparing the unchanged uninformative group with the informative group, we can see that the average answer rate of the uninformative questions, 0.31%, is actually lower than that of the informative questions, 0.45%.",
"Comparing the changed and unchanged uninformative groups, the average answer rate of the changed questions, 0.75%, is much higher than that of the unchanged questions, 0.31%.",
"This means that even a simple dictionary-based method can dramatically improve the quality of the uninformative headlines, i.e., by as much as 2.4 times.",
"We confirmed that the difference is statistically significant on a one-tailed Wilcoxon signed-rank test (p < 0.05).",
"Note that the average answer rate represents a conversion rate (or rate of target actions), which is more important than a click-through rate (or rate of initial actions).",
"The average answer rate is one of the most important indicators for a CQA service, while the click-through rate can be meaninglessly high if the service sends headlines that are fake or too catchy.",
"We should point out that low answer rates are sufficient for the service, since it has 44M users: i.e., each question has an average of 2.4 answers, as the service currently has 189M questions and 462M answers.",
"Dataset Creation: We created a dataset for our headline generation task based on the Yahoo! Chiebukuro dataset, which is a dataset including questions and answers provided from a Japanese CQA service, Yahoo! Chiebukuro.",
"We first prepared headline candidates from this dataset as in Section 3.1 and then conducted a crowdsourcing task specified in Section 3.2.",
"In Section 3.3, we report the results of the crowdsourcing task.",
"Preparation of Headline Candidates: We extracted only questions from the Chiebukuro dataset and split each question into sentences by using punctuation marks (i.e., the exclamation (\" \"), question (\" \"), and full stop (\" \") marks).",
"We regarded 20 Japanese characters that are basically extracted from each sentence as a headline candidate x ∈ S(q) in Eq. (1), since this setting is used for push notifications in the actual service in Figure 1(b).",
"More specifically, the headline candidate is created as follows: 1. If the sentence is the first one in the question, we extract the first 19 characters and put an ellipsis mark (\" \") at the end.",
"Figure 2: Examples of (a) our crowdsourcing task and (b) its English translation. Posted question (translated): Nice to meet you, I am a man in my 30s. Please give me your advice on a pressing concern I have. A dog kept in the next house barks from morning to night. Neighbors have given the owner cautions against it, but there is no improvement. This area has only private houses, not rented houses, so I cannot move out. However, I will go crazy if I have to keep enduring this. How can I effectively manage this problem?",
"We included questions with more than five sentences for the purpose of efficiently collecting ranking information.",
"All told, we prepared 10,000 questions containing more than five headline candidates each.",
"Figure 2 shows an example of our crowdsourcing task (a) and its English translation (b).",
"This task involves a posted question and headline candidates corresponding to the question.",
"We asked workers to select the best candidate from options after reading the posted question.",
"A relative evaluation, where workers select the best candidate, was used instead of an absolute evaluation, where workers select a score from 0 to 10 for each candidate, because we wanted to obtain as accurate a headline as possible, and it might be difficult for workers to select an appropriate absolute score.",
"The workers were instructed as follows (English translation): Various candidate headlines are listed as options for a posted question in a Q&A service.",
"After reading the question, please select the best option from the list so that users can guess the content of the question and distinguish it from other ones.",
"Please remove uninformative ones such as greetings, self-introductions, and unspecific expressions.",
"We explain how to judge the appropriateness of each candidate by means of the example in Figure 2 .",
"After examining the posted question, we can assume that the most important content is \"he is annoyed by the barking of a dog kept in the next house\".",
"On the basis of this assumption, option 6 is the best one, since the headline \"A dog kept in the next house barks from morning\" is enough to help answerers deduce that \"the questioner is annoyed by the sound\".",
"Option 1 is inappropriate because although the answerers might be able to guess that \"the questioner cannot move out\", this matter is not the central one.",
"Option 2 is uninformative because it consists merely of greetings and self-introduction, and option 3, while a question sentence, is unspecific.",
"Option 4 enables answerers to guess that this question is related to an issue involving pets, but they cannot grasp the specific content.",
"Option 5 specifies a likely damage due to the trouble, but the reason (trouble) is more important for the answering than the result (damage).",
"Option 7 directly shows that \"the questioner is annoyed and wants some advice\", but answerers cannot understand why he/she is annoyed.",
"The detailed implementation of our task is as follows.",
"First we randomly sorted the candidates of each question (shown in Figure 2 (a)) to avoid position bias by the workers.",
"We included ten actual questions and a dummy question so that workers would always have to answer one dummy question per every ten actual questions.",
"A dummy question is a question with a clear answer inserted to eliminate fraud workers (i.e., workers who randomly select answers without actually reading them).",
"Each question was answered by ten workers so each headline candidate had a vote score from 0 to 10 representing whether or not the candidate was appropriate for a headline.",
"This task took nine days and was answered by 1,558 workers.",
"As a result, our dataset consists of 10,000 questions, each of which has more than five headline candidates with accompanying \"headline-ness\" scores.",
"Analysis of Crowdsourcing Results We analyzed our dataset to determine how much room for improvement our task has compared to the prefix headline.",
"It is well known that the first sentence can be a strong baseline in many summarization tasks, so the prefix headline is also expected to perform well.",
"Figure 3 shows the statistical information based on sentence position that includes the (a) ratio of the true best (most voted) candidates, (b) average votes, and (c) average rank (in order of votes) over the candidates at each sentence position.",
"Looking at the true best candidates (a), the ratio 61.8% for the 1st sentence clarifies the effectiveness of the prefix headline, as expected.",
"Conversely, we still have room for improvement for the prefix headline up to 38.2%.",
"Our goal is to improve the uninformative headlines of 38.2% while keeping the remaining 61.8% unchanged.",
"The other two figures (b) and (c) also support the above discussion.",
"Furthermore, we qualitatively checked the crowdsourcing quality.",
"Workers successfully eliminated uninformative candidates including greetings and self-introductions, while one or two workers sometimes chose ones that included a fixed phrase such as \"Please tell me ...\".",
"This is probably because workers had different criteria regarding the \"unspecific expressions\" described in the instructions.",
"Since we cannot enumerate all of the concrete bad examples, we ignore this phenomenon with the expectation that a learning algorithm will reduce its negative effect.",
"Proposed Method In this section, we explain how to construct a headline generation model from the dataset presented in Section 3.",
"We took a ranking approach, i.e., learning to rank, for our task, rather than a simple regression one, since estimating absolute scores is not required for our purpose.",
"Even if two headline candidates (of different questions) have the same expression, their votes can be significantly different since the votes in our dataset are based on relative evaluation.",
"For example, the best candidate for Figure 2 was No.",
"6, but it might not be selected in other questions such as \"A dog kept in the next house barks from morning.",
"Does anybody know why dogs generally want to bark?\".",
"Learning to rank is an application of machine learning that is typically used for ranking models in information retrieval systems.",
"The ranking models are basically learned from a supervised dataset consisting of triples (q, x, y), where q is a user's query, x is a document corresponding to q, and y is a relevance score of x with respect to q.",
"In this work, we formalize our task by regarding q, x, and y, as a posted question, a headline candidate, and a voted score in our dataset, respectively.",
"We used a pairwise ranking method that is also implemented as an instance of the well-known SVM tools LIBLINEAR and LIBSVM (Kuo et al., 2014) .",
"We used a linear model based on LIBLINEAR, an L2-regularized L2-loss linear rankSVM, for the experiments.",
"Let D be a dataset that consists of triples including a posted question q, a headline candidate x, and a voted score y, i.e., (q, x, y) ∈ D. In the pairwise ranking method, we train a ranking model as a binary classifier that determines whether the condition y i > y j is true or false for two candidates x i and x j in the same question (q i = q j ).",
"Specifically, we first define the index pairs of positive examples by P = {(i, j) | q i = q j , y i > y j , (x i , y i , q i ) ∈ D, (x j , y j , q j ) ∈ D}.",
"Note that we do not need to consider the negative examples N = {(j, i) | (i, j) ∈ P } since they yield the same formula as P in the optimization process.",
"The training of the pairwise ranking method is achieved by solving the following optimization problem using the set P of the index pairs: min w 1 2 w ⊤ w + C ∑ (i,j)∈P ℓ(w ⊤x i − w ⊤x j ), (3) where w is a weight vector to be learned,x i is a feature vector extracted from a headline candidate x, and C is the regularization parameter.",
"The function ℓ is a squared hinge loss, which is defined as ℓ(d) = max(0, 1 − d) 2 .",
"Finally, we define the score function in Eq.",
"(1) as f q (x) = w ⊤x , wherex can be created by using q as well as x.",
"This score means the relative \"headline-ness\" of x.",
"Experiments Basic Settings The basic settings of the experiments are as follows.",
"We split our dataset into training and test sets consisting of 9,000 and 1,000 examples, respectively.",
"We used an implementation 3 based on LIBLIN-EAR for training our ranking model, i.e., a linear L2-regularized L2-loss rankSVM model , as described in Section 4.",
"The regularization parameter was optimized by cross validation and set as C = 0.125.",
"The feature vector for a headline candidate consists of three kinds of features: bag-of-words, embedding, and position information.",
"The bag-of-words feature is a sparse vector of 30,820 dimensions based on the tf-idf scores of nouns, verbs, interjections, conjunctions, adverbs, and adjectives in a candidate, where we used a Japanese morphological analyzer, MeCab 4 (Kudo et al., 2004) , with a neologism dictionary, NEologd 5 (Toshinori Sato and Okumura, 2017).",
"The embedding feature is a dense vector of 100 dimensions based on a doc2vec model (Le and Mikolov, 2014) trained with all 3M sentences in the Chiebukuro dataset using the Gensim tool 6 .",
"The position feature is a binary vector of ten dimensions, where each dimension represents the coarse position (or coverage) of a headline candidate for a question.",
"Specifically, we equally split a question (character sequence) into ten parts and set one to each dimension if and only if the corresponding part overlaps a candidate.",
"For example, candidate No.",
"2 in Figure 2 had a position feature (1, 1, 0, · · · , 0), since the candidate covers the first 2/10 of the whole question.",
"Similarly, No.",
"6 and No.",
"3 had (0, 0, 1, 1, 0, · · · , 0) and (0, · · · , 0, 1), respectively.",
"For constructing the feature vector of each headline candidate, we used the previous and next candidates in sentence order, in addition to the target candidate.",
"This is based on the idea that near candidates might have useful information for the target candidate.",
"Finally, we prepared each feature vector by concatenating nine feature vectors, i.e., the above three kinds of features for three candidates, and normalizing them.",
"Compared Methods We compared our method, MLRank, with the baselines listed below.",
"Prefix, DictDel, and Random are simple baselines, while Prefix and DictDel are practically strong.",
"ImpTfidf, SimTfidf, SimEmb, and LexRank are unsupervised baselines, and SVM and SVR are supervised ones.",
"• Prefix: Selects the first candidate in sentence order.",
"• DictDel: Selects the first (informative) candidate that does not match in the dictionary of uninformative headlines (Section 2).",
"• Random: Randomly selects a candidate.",
"• ImpTfidf: Selects the most important candidate with the highest tf-idf value, where a tf-idf value is calculated by the sum of the elements in a bag-of-words feature (described in Section 5.1).",
"• SimTfidf: Selects the most similar candidate to the original question, which is calculated by the cosine similarity between the bag-of-words features (in Section 5.1) of each candidate and the question.",
"• SimEmb: An embedding-based variation of SimTfidf with embedding features (in Section 5.1).",
"• LexRank: Selects the candidate with the highest score based on LexRank 7 (Erkan and Radev, 2004) , which is a widely used unsupervised extractive summarization method based on the PageRank algorithm.",
"The graph expression of each question was constructed on the basis of cosine similarity of the tf-idf vectors corresponding to candidates.",
"• SVM: Selects the candidate with the highest confidence based on a model learned as a classification task, where candidates with nonzero votes were labeled as positive.",
"This setting was the best in our preliminary experiments.",
"We used the L2-regularized L2-loss support vector classification model (C = 0.0156) in LIBLINEAR.",
"The other settings were the same as those described in Section 5.1.",
"• SVR: Selects the candidate with the highest predicted votes based on a model learned as a regression task, where the target variable is the number of votes.",
"We used the L2-regularized L2-loss support vector regression model (C = 0.0625) in LIBLINEAR.",
"The other settings were the same as above.",
"• MLRank: Proposed method described in Section 4.",
"Evaluation Measures We defined three evaluation measures for evaluating each method on our headline generation task.",
"Change Rate from Prefix Headline We measured how much each method changed the default prefix headline in order to determine the effect of application to an actual CQA service.",
"We defined this measure as change rate from the prefix headline, as follows: Change rate = No.",
"of questions where the best candidate is not the prefix headline No.",
"of all questions .",
"Clearly, the change rate of the default method that selects the prefix headline is 0%.",
"If the value is small, the effect on the service will be small, but if the value is higher than the ideal change rate of 38.2% (Section 3), there can be side effects even if the average result is good.",
"A higher change rate up to the ideal rate is desirable from a practical standpoint.",
"Winning Rate against Prefix Headline We measured how much each method won against the prefix headline to directly assess the quality of changed headlines.",
"We defined this measure as winning rate against the prefix headline, as follows: Winning rate = No.",
"of questions where the best candidate got more votes than the prefix headline No.",
"of questions where the best candidate is not the prefix headline .",
"(5) We did not consider the first candidate (in sentence order) selected by each method, which is the same as the prefix headline, since they obviously have the same number of votes.",
"Average Votes We measured how appropriate the candidates selected by each method are in order to determine the overall performance.",
"We defined this measure as average votes, as follows: Average votes = Sum of votes for the best candidates for all questions No.",
"of questions .",
"Note that the average votes score is different from the average votes score for position (Figure 3(b) ) in that the former is the average over the selected candidates while the latter is the average over the candidates at a sentence position.",
"This measure is related to (normalized) discounted cumulative gain (DCG), which is widely used as an evaluation measure of ranking models.",
"We often use DCG@k for evaluating top-k rankings, and the above definition acutally corresponds to DCG@1.",
"According to a well-known paper (Järvelin and Kekäläinen, 2002) in the information retrieval field, DCG is appropriate for graded-relevance judgments like our task, while precision (described below) is appropriate for binary-relevance ones.",
"Average votes is expected to be more appropriate than precision for our task because we want \"averagely better headlines than default ones\" rather than \"best ones\" from a practical standpoint.",
"Precision Precision is a widely used evaluation measure for classification tasks, and we added it to support an evaluation based on average votes.",
"We defined it with respect to the best candidate, i.e., precision@1, as follows: Precision = No.",
"of questions where the best candidate had the maximum votes No.",
"of questions .",
"(7) Table 2 shows examples of headlines generated by the prefix method, Prefix, and the proposed method, MLRank.",
"Looking at the first example, we can see that our method successfully eliminated the selfintroduction phrase (\"I am a 27-year-old woman\").",
"The headline (right) of our method allows answerers to know that the questioner is discouraged about how to encounter men from the phrase \"little chance of new encounters with men\", while the headline (left) of the prefix method lacks this important clue.",
"The second and third examples show similar effects to the first example.",
"In the second one, although there are few clues about what the question is with the prefix method, our method correctly included the important clue (\"honeymoon\").",
"In the third one, our method appropriately eliminated the uninformative long phrase (\"I am sorry if the category is wrong\"), which is not a frequent fixed phrase.",
"The fourth example shows a slightly challenging case, where both headlines make it difficult to understand the question.",
"However, the headline of our method included the term \"winning bidder\", so at least the answerer can assume that the question is about some sort of auction trouble.",
"The fifth example is a clearly successful result, where our method extracted the main question point about \"welfare pension\" as a headline.",
"These results qualitatively demonstrate the effectiveness of our method.",
"Results Qualitative Analysis Quantitative Analysis We compared our method MLRank with the baselines in Section 5.2 on the headline generation task for our dataset in Section 3.",
"Table 3 shows the evaluation results based on the change rates, winning rates, average votes, and precision.",
"Looking at the average votes and precision, which represent the overall performances, our method MLRank clearly performed the best among all methods.",
"We confirmed that Table 3 : Evaluation results of our headline generation task for proposed method MLRank and baselines.",
"the relative improvement of the average votes of our method MLRank against every baseline including the prefix method Prefix is statistically significant on the basis of a one-tailed Wilcoxon signed-rank test (p < 0.01).",
"The change and winning rates of our method are 9.9% and 94.9%, respectively.",
"This means that our method detected 9.9% of the uninformative headlines and improved them with the high accuracy of 94.9%.",
"In other words, our method could successfully improve the overall performance while simultaneously avoiding any negative side effects.",
"The ideal results (Ref) based on correct labels suggest that our method still has room for improvement, especially for the change rate.",
"The results of the other baselines are as follows.",
"Not surprisingly, the prefix method Prefix performed well.",
"This is consistent with the fact that the first sentence can be a good summary in many summarization tasks.",
"The random method Random performed the worst, also as expected.",
"The dictionarybased deletion method DictDel was relatively useful, although the change rate was small.",
"The reason the winning rate of DictDel is relatively low compared with MLRank is that there are some cases where a combination of uninformative expressions can yield likely clues.",
"For example, the self-introduction \"I am a newbie of this forum\" itself is basically uninformative for a question, but a combination with additional information such as \"I am a newbie of this forum.",
"Where can I change the password ...\" can be more informative than only the additional information \"... Where can I change the password since I forgot it after ...\" because the combination specifies the target site by the expression \"this forum\".",
"The unsupervised methods, SimTfidf, SimTfidf, SimEmb, and LexRank, which are widely used for summarization tasks, performed worse than the prefix method Prefix.",
"Although the change rates are higher than our method, the winning rates are lower.",
"In other words, they yielded many bad headlines.",
"These results suggest that supervised learning specialized to the target task would be required.",
"Comparing the important sentence extraction method ImpTfidf and the similarity-based summarization method SimTfidf, we found that SimTfidf performed better.",
"This implies that the content information of each question is useful for our headline generation task, as is the case with other summarization tasks.",
"The similarity-based method SimEmb with embeddings performed worse than our expectation.",
"The reason seems to be that it was difficult to obtain meaningful document embeddings from long questions.",
"The graph-based method LexRank had a similar performance to SimTfidf, because LexRank tends to select a candidate similar to the question when only one candidate was selected.",
"The supervised methods, SVM and SVR, performed relatively well compared to the unsupervised methods, but they did not outperform the strong simple baselines, Prefix and DictDel, nor our method MLRank.",
"These results support the appropriateness of our approach.",
"Related Work In this section, we briefly explain several related studies from two aspects: headline generation task and CQA data.",
"As discussed below, our work is the first attempt to address an extractive headline generation task for a CQA service based on learning to rank the substrings of a question.",
"After Rush et al.",
"(2015) proposed a neural headline generation model, there have been many studies on the same headline generation task (Takase et al., 2016; Chopra et al., 2016; Kiyono et al., 2017; Ayana et al., 2017; Raffel et al., 2017) .",
"However, all of them are abstractive methods that can yield erroneous output, and the training for them requires a lot of paired data, i.e., news articles and headlines.",
"There have also been several classical studies based on nonneural approaches to headline generation (Woodsend et al., 2010; Alfonseca et al., 2013; Colmenares et al., 2015) , but they basically addressed sentence compression after extracting important linguistic units such as phrases.",
"In other words, their methods can still yield erroneous output, although they would be more controllable than neural models.",
"One exception is the work of Alotaiby (2011) , where fixed-sized substrings were considered for headline generation.",
"Although that approach is similar to ours, Alotaiby only considered an unsupervised method based on similarity to the original text (almost the same as SimTfidf in Section 5.2), in contrast to our proposal based on learning to rank.",
"This implies that Alotaiby's method will also not perform well for our task, as shown in Section 5.4.",
"There have been several studies on extractive summarization (Kobayashi et al., 2015; Yogatama et al., 2015) based on sentence embeddings, but they were basically developed for extracting multiple sentences, which means that these methods are almost the same as SimEmb in Section 5.2 for our purpose, i.e., extraction of the best candidate.",
"This also implies that they will not be suitable for our task.",
"Furthermore, recent sophisticated neural models for extractive summarization (Cheng and Lapata, 2016; Nallapati et al., 2017) basically require large-scale paired data (e.g., article-headline) to automatically label candidates, as manual annotation is very costly.",
"However, such paired data do not always exist for real applications, as in our task described in Section 1.",
"There have been many studies using CQA data, but most of them are different from our task, i.e., dealing with answering questions (Surdeanu et al., 2008; Celikyilmaz et al., 2009; Bhaskar, 2013; Nakov et al., 2017) , retrieving similar questions (Lei et al., 2016; Romeo et al., 2016; Nakov et al., 2017) , and generating questions (Heilman and Smith, 2010) .",
"Tamura et al.",
"(2005) focused on extracting a core sentence and identifying the question type as classification tasks for answering multiple-sentence questions.",
"Although their method is useful to retrieve important information, we cannot directly use it since our task requires shorter expressions for headlines than sentences.",
"In addition, they used a support vector machine as a classifier, which is almost the same as SVM in Section 5.2, and it is not expected to be suitable for our task, as shown in Section 5.4.",
"The work of Ishigaki et al.",
"(2017) is the most related one in that they summarized lengthy questions by using both abstractive and extractive approaches.",
"Their work is promising because our task is regarded as the construction of short summaries, but the training of their models requires a lot of paired data consisting of questions and their headlines, which means that their method cannot be used to our task.",
"Conclusion We proposed an extractive headline generation method based on learning to rank for CQA that extracts the most informative substring in each question as its headline.",
"We created a dataset for our task, where headline candidates in each question are ranked using crowdsourcing.",
"Our method outperformed several baselines, including a prefix-based method, which is widely used for cases where the display area is limited, such as the push notifications on smartphones.",
"The dataset created for our headline generation task will be made publicly available 8 .",
"Although our task is basically designed for extractive summarization, this dataset can also be used for abstractive summarization as a side information for training abstractive models.",
"In future work, we will investigate how effectively our method can perform in practical situations, e.g., push notifications.",
"In addition, we will consider how to improve the change rate of our method while keeping its winning rate and how to create a useful dataset even if removing the length limitation."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"5.1",
"5.2",
"5.3",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Negative Effect of Uninformative Headlines",
"Dataset Creation",
"Preparation of Headline Candidates",
"Crowdsourcing Task",
"Analysis of Crowdsourcing Results",
"Proposed Method",
"Basic Settings",
"Compared Methods",
"Evaluation Measures",
"Related Work",
"Conclusion"
]
}
|
GEM-SciDuet-train-50#paper-1080#slide-14
|
Results Quantitative Analysis
|
[Table: Method / Average Votes / Change Rate; rows: Prefix, Random, DictDel, ImpTfidf, SVM, SVR, MLRank — values not recoverable]
MLRank (ours) performed the best among all methods.
Prefix (first sentence) can be a good summary.
DictDel (rule-based) was more useful than Prefix.
Change rate of DictDel was small, which means a small impact on the service.
Change rates of unsupervised methods were high, but the overall performances were low.
Copyright (C) 2019 Yahoo Japan Corporation. All Rights Reserved.
|
[Table: Method / Average Votes / Change Rate; rows: Prefix, Random, DictDel, ImpTfidf, SVM, SVR, MLRank — values not recoverable]
MLRank (ours) performed the best among all methods.
Prefix (first sentence) can be a good summary.
DictDel (rule-based) was more useful than Prefix.
Change rate of DictDel was small, which means a small impact on the service.
Change rates of unsupervised methods were high, but the overall performances were low.
Copyright (C) 2019 Yahoo Japan Corporation. All Rights Reserved.
|
[] |
GEM-SciDuet-train-50#paper-1080#slide-15
|
1080
|
Extractive Headline Generation Based on Learning to Rank for Community Question Answering
|
User-generated content such as the questions on community question answering (CQA) forums does not always come with appropriate headlines, in contrast to the news articles used in various headline generation tasks. In such cases, we cannot use paired supervised data, e.g., pairs of articles and headlines, to learn a headline generation model. To overcome this problem, we propose an extractive headline generation method based on learning to rank for CQA that extracts the most informative substring from each question as its headline. Experimental results show that our method outperforms several baselines, including a prefix-based method, which is widely used in real services.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260,
261,
262,
263
],
"paper_content_text": [
"Introduction Community question answering (CQA) is a service where users can post their questions and answer the questions from other users.",
"The quality of a CQA service is inevitably linked to how many questions are answered in a short amount of time.",
"To this end, the headline of a posted question plays a key role, as it is the first thing users see in a question list or push notification on a smartphone.",
"However, questions in a CQA service do not always have appropriate headlines because the questions are written by various users who basically do not have any specialized knowledge in terms of writing such content, in contrast to news articles written by professional editors.",
"In fact, the biggest CQA service in Japan, Yahoo!",
"Chiebukuro 1 , does not even provide an input field for headlines in the submission form of questions, as general users do not have enough patience and tend not to post questions if even just one required field is added to the form.",
"This service alternatively uses the prefix of a question as its headline as in Figure 1(a) , where the headline \" (Nice to meet you.",
"Thank you in advance.",
"At work ...)\" is created from the prefix of the content.",
"Obviously, this headline is uninformative because of the lack of actual content, which is related to an initial unrequited love of a woman in the workplace.",
"Figure 1(b) shows how ineffective such an uninformative headline is for a smartphone push notification, where users would have practically zero motivation to click on it and answer since they cannot imagine what kind of question is being asked.",
"This negative effect has been confirmed on a commercial CQA service (Section 2).",
"In this work, we take an extractive approach for improving uninformative headlines.",
"Although there have recently been many studies on abstractive headline generation (as described in Section 6), we do not follow any of these approaches because the content we deal with has no correct headlines and also the abstractive methods can yield erroneous output.",
"This latter issue is important in practical terms because correct output is critical for a commercial service.",
"If by some chance an erroneous headline, or one including politically incorrect phrases, is sent to all users, the service's credibility can be lost instantly.",
"Therefore, we formalize our task as an extraction problem of a fixed-length substring from a question as its headline.",
"In this setting, we can assume that outputs never include such errors caused by the service, as the outputs are substrings of user-posted questions.",
"Note that they have no coherence errors in selecting multiple sentences as in normal extractive summarization tasks.",
"While it is true that these outputs might contain inappropriate expressions authored by users, this type of error is beyond our scope since it is a different inevitable problem.",
"Furthermore, the situation of \"the service generated an inappropriate expression by itself\" is significantly worse than the situation of \"a user posted an inappropriate question to the service, and the service displayed it\".",
"Therefore, it is difficult to directly use abstractive methods for commercial services from a business standpoint.",
"Our approach involves preparing headline candidates and ranking them.",
"The formal description is as follows.",
"Let q be a target question to be translated into a headline.",
"We first prepare a set S(q) of headline candidates from the question q.",
"Note that the set S(q) is restricted to a set of fixed-length substrings given a length n, i.e., S(q) ⊆ {x | x ⪯ q, |x| = n}, where x ⪯ q means that x is a substring of q.",
"Then we extract the best headline that maximizes a score function f q (x), which represents the \"headline-ness\" score of a candidate x ∈ S(q) with respect to the target question q, as follows: argmax x∈S(q) f q (x).",
"(1) To ensure simplicity of implementation and understandability from users, we use a set of the fixed-length prefixes of all sentences in a question q as the candidate set S(q) (Section 3).",
"Because the problem is to select the best candidate from among several targets, the score function f q (x) is naturally trained by learning to rank (Section 4).",
"The main contributions of this paper are as follows.",
"• We report empirical evidence of the negative effect of uninformative headlines on a commercial CQA service (Section 2).",
"We additionally show that our task can reduce uninformative headlines by using simple dictionary matching, which dramatically improves the average answer rate of questions by as much as 2.4 times.",
"• We propose an extractive headline generation method based on learning to rank for CQA (Section 4) that extracts the most informative substring (prefix of mid-sentence) from each question as its headline.",
"To the best of our knowledge, our work is the first attempt to address such a task from a practical standpoint, although there have been many related studies (Section 6).",
"Experimental results show that our method outperforms several baselines, including the dictionary-based method (Section 5).",
"• We create a dataset for our headline generation task (Section 3), where headline candidates extracted from questions are ranked by crowdsourcing with respect to \"headline-ness\", that is, whether or not each headline candidate is appropriate for the headline of the corresponding question.",
"Negative Effect of Uninformative Headlines We conducted A/B testing on the push notifications of smartphones in collaboration with Yahoo!",
"Chiebukuro, as shown in Figure 1 (b).",
"We first prepared a dictionary of typical first sentences that cause uninformative headlines.",
"This dictionary was manually selected from frequent first sentences and consists of 913 sentences including greetings such as \" (Good morning)\", \" Changed uninformative Unchanged uninformative Informative 0.75% 0.31% 0.45% Table 1 : Average answer rates of three question groups in A/B testing.",
"(Good afternoon)\", and \" (Nice to meet you)\", and fixed phrases such as \" (Can I ask you something)\", \" (Please tell me)\", and \" (Thank you in advance)\".",
"We assumed that a question with a prefix match in the dictionary has an uninformative prefix headline and classified such questions into an uninformative group.",
"For convenience, we also classified the other questions into an informative group, although they might include not so informative headlines.",
"We further randomly divided the uninformative group into two equal groups: changed and unchanged.",
"In the changed uninformative group, each headline is extracted as the prefix of the first (informative) sentence that does not match with the dictionary, which is the same as DictDel explained in Section 5.2.",
"The unchanged group remains in uninformative.",
"For comparison of these groups, we used the average answer rate over notified questions in each group as an evaluation measure, defined as Average answer rate = No.",
"of questions answered from the notification No.",
"of notified questions .",
"(2) Note that we use a percentage expression (%) for easy reading.",
"Table 1 shows the evaluation results of the A/B testing during a 1-month period (Feb. 2 -Mar.",
"4, 2018) , where about three million questions were sent to users.",
"Comparing the unchanged uninformative group with the informative group, we can see that the average answer rate of the uninformative questions, 0.31%, is actually lower than that of the informative questions, 0.45%.",
"Comparing the changed and unchanged uninformative groups, the average answer rate of the changed questions, 0.75%, is much higher than that of the unchanged questions, 0.31%.",
"This means that even a simple dictionary-based method can dramatically improve the quality of the uninformative headlines, i.e., by as much as 2.4 times.",
"We confirmed that the difference is statistically significant on a one-tailed Wilcoxon signed-rank test (p < 0.05).",
"Note that the average answer rate represents a conversion rate (or rate of target actions), which is more important than a click-through rate (or rate of initial actions).",
"The average answer rate is one of the most important indicators for a CQA service, while the click-through rate can be meaninglessly high if the service sends headlines that are fake or too catchy.",
"We should point out that low answer rates are sufficient for the service, since it has 44M users: i.e., each question has an average of 2.4 answers, as the service currently has 189M questions and 462M answers.",
"Dataset Creation We created a dataset for our headline generation task based on the Yahoo!",
"Chiebukuro dataset 2 , which is a dataset including questions and answers provided from a Japanese CQA service, Yahoo!",
"Chiebukuro.",
"We first prepared headline candidates from this dataset as in Section 3.1 and then conducted a crowdsourcing task specified in Section 3.2.",
"In Section 3.3, we report the results of the crowdsourcing task.",
"Preparation of Headline Candidates We extracted only questions from the Chiebukuro dataset and split each question into sentences by using punctuation marks (i.e., the exclamation (\" \"), question (\" \"), and full stop (\" \") marks).",
"We regarded 20 Japanese characters that are basically extracted from each sentence as a headline candidate x ∈ S(q) in Eq.",
"(1), since this setting is used for push notifications in the actual service in Figure 1 (b).",
"More specifically, the headline candidate is created as follows: 1.",
"If the sentence is the first one in the question, we extract the first 19 characters and put an ellipsis mark (\" \") at the end.",
"(a) Example of our crowdsourcing task.",
"Posted Question: Nice to meet you, I am a man in my 30s.",
"Please give me your advice on a pressing concern I have.",
"A dog kept in the next house barks from morning to night.",
"Neighbors have given the owner cautions against it, but there is no improvement.",
"This area has only private houses, not rented houses, so I cannot move out.",
"However, I will go crazy if I have to keep enduring this.",
"How can I effectively manage this problem?",
"...",
"Please give me your advice on a pressing concern ... (b) English translation of left example.",
"Figure 2 : Examples of (a) our crowdsourcing task and (b) its English translation.",
"In the case where the length of a candidate is less than 20 characters, we include some of the next sentence in order to maximize use of display space.",
"We included questions with more than five sentences for the purpose of efficiently collecting ranking information.",
"All told, we prepared 10,000 questions containing more than five headline candidates each.",
"Figure 2 shows an example of our crowdsourcing task (a) and its English translation (b).",
"This task involves a posted question and headline candidates corresponding to the question.",
"We asked workers to select the best candidate from options after reading the posted question.",
"A relative evaluation, where workers select the best candidate, was used instead of an absolute evaluation, where workers select a score from 0 to 10 for each candidate, because we wanted to obtain as accurate a headline as possible, and it might be difficult for workers to select an appropriate absolute score.",
"The workers were instructed as follows (English translation): Crowdsourcing Task Various candidate headlines are listed as options for a posted question in a Q&A service.",
"After reading the question, please select the best option from the list so that users can guess the content of the question and distinguish it from other ones.",
"Please remove uninformative ones such as greetings, self-introductions, and unspecific expressions.",
"We explain how to judge the appropriateness of each candidate by means of the example in Figure 2 .",
"After examining the posted question, we can assume that the most important content is \"he is annoyed by the barking of a dog kept in the next house\".",
"On the basis of this assumption, option 6 is the best one, since the headline \"A dog kept in the next house barks from morning\" is enough to help answerers deduce that \"the questioner is annoyed by the sound\".",
"Option 1 is inappropriate because although the answerers might be able to guess that \"the questioner cannot move out\", this matter is not the central one.",
"Option 2 is uninformative because it consists merely of greetings and self-introduction, and option 3, while a question sentence, is unspecific.",
"Option 4 enables answerers to guess that this question is related to an issue involving pets, but they cannot grasp the specific content.",
"Option 5 specifies a likely damage due to the trouble, but the reason (trouble) is more important for the answering than the result (damage).",
"Option 7 directly shows that \"the questioner is annoyed and wants some advice\", but answerers cannot understand why he/she is annoyed.",
"The detailed implementation of our task is as follows.",
"First we randomly sorted the candidates of each question (shown in Figure 2 (a)) to avoid position bias by the workers.",
"We included ten actual questions and a dummy question so that workers would always have to answer one dummy question per every ten actual questions.",
"A dummy question is a question with a clear answer inserted to eliminate fraud workers (i.e., workers who randomly select answers without actually reading them).",
"Each question was answered by ten workers so each headline candidate had a vote score from 0 to 10 representing whether or not the candidate was appropriate for a headline.",
"This task took nine days and was answered by 1,558 workers.",
"As a result, our dataset consists of 10,000 questions, each of which has more than five headline candidates with accompanying \"headline-ness\" scores.",
"Analysis of Crowdsourcing Results We analyzed our dataset to determine how much room for improvement our task has compared to the prefix headline.",
"It is well known that the first sentence can be a strong baseline in many summarization tasks, so the prefix headline is also expected to perform well.",
"Figure 3 shows the statistical information based on sentence position that includes the (a) ratio of the true best (most voted) candidates, (b) average votes, and (c) average rank (in order of votes) over the candidates at each sentence position.",
"Looking at the true best candidates (a), the ratio 61.8% for the 1st sentence clarifies the effectiveness of the prefix headline, as expected.",
"Conversely, we still have room for improvement for the prefix headline up to 38.2%.",
"Our goal is to improve the uninformative headlines of 38.2% while keeping the remaining 61.8% unchanged.",
"The other two figures (b) and (c) also support the above discussion.",
"Furthermore, we qualitatively checked the crowdsourcing quality.",
"Workers successfully eliminated uninformative candidates including greetings and self-introductions, while one or two workers sometimes chose ones that included a fixed phrase such as \"Please tell me ...\".",
"This is probably because workers had different criteria regarding the \"unspecific expressions\" described in the instructions.",
"Since we cannot enumerate all of the concrete bad examples, we ignore this phenomenon with the expectation that a learning algorithm will reduce its negative effect.",
"Proposed Method In this section, we explain how to construct a headline generation model from the dataset presented in Section 3.",
"We took a ranking approach, i.e., learning to rank, for our task, rather than a simple regression one, since estimating absolute scores is not required for our purpose.",
"Even if two headline candidates (of different questions) have the same expression, their votes can be significantly different since the votes in our dataset are based on relative evaluation.",
"For example, the best candidate for Figure 2 was No.",
"6, but it might not be selected in other questions such as \"A dog kept in the next house barks from morning.",
"Does anybody know why dogs generally want to bark?\".",
"Learning to rank is an application of machine learning that is typically used for ranking models in information retrieval systems.",
"The ranking models are basically learned from a supervised dataset consisting of triples (q, x, y), where q is a user's query, x is a document corresponding to q, and y is a relevance score of x with respect to q.",
"In this work, we formalize our task by regarding q, x, and y, as a posted question, a headline candidate, and a voted score in our dataset, respectively.",
"We used a pairwise ranking method that is also implemented as an instance of the well-known SVM tools LIBLINEAR and LIBSVM (Kuo et al., 2014) .",
"We used a linear model based on LIBLINEAR, an L2-regularized L2-loss linear rankSVM, for the experiments.",
"Let D be a dataset that consists of triples including a posted question q, a headline candidate x, and a voted score y, i.e., (q, x, y) ∈ D. In the pairwise ranking method, we train a ranking model as a binary classifier that determines whether the condition y i > y j is true or false for two candidates x i and x j in the same question (q i = q j ).",
"Specifically, we first define the index pairs of positive examples by P = {(i, j) | q i = q j , y i > y j , (x i , y i , q i ) ∈ D, (x j , y j , q j ) ∈ D}.",
"Note that we do not need to consider the negative examples N = {(j, i) | (i, j) ∈ P } since they yield the same formula as P in the optimization process.",
"The training of the pairwise ranking method is achieved by solving the following optimization problem using the set P of the index pairs: min w 1 2 w ⊤ w + C ∑ (i,j)∈P ℓ(w ⊤x i − w ⊤x j ), (3) where w is a weight vector to be learned,x i is a feature vector extracted from a headline candidate x, and C is the regularization parameter.",
"The function ℓ is a squared hinge loss, which is defined as ℓ(d) = max(0, 1 − d) 2 .",
"Finally, we define the score function in Eq.",
"(1) as f q (x) = w ⊤x , wherex can be created by using q as well as x.",
"This score means the relative \"headline-ness\" of x.",
"Experiments Basic Settings The basic settings of the experiments are as follows.",
"We split our dataset into training and test sets consisting of 9,000 and 1,000 examples, respectively.",
"We used an implementation 3 based on LIBLIN-EAR for training our ranking model, i.e., a linear L2-regularized L2-loss rankSVM model , as described in Section 4.",
"The regularization parameter was optimized by cross validation and set as C = 0.125.",
"The feature vector for a headline candidate consists of three kinds of features: bag-of-words, embedding, and position information.",
"The bag-of-words feature is a sparse vector of 30,820 dimensions based on the tf-idf scores of nouns, verbs, interjections, conjunctions, adverbs, and adjectives in a candidate, where we used a Japanese morphological analyzer, MeCab 4 (Kudo et al., 2004) , with a neologism dictionary, NEologd 5 (Toshinori Sato and Okumura, 2017).",
"The embedding feature is a dense vector of 100 dimensions based on a doc2vec model (Le and Mikolov, 2014) trained with all 3M sentences in the Chiebukuro dataset using the Gensim tool 6 .",
"The position feature is a binary vector of ten dimensions, where each dimension represents the coarse position (or coverage) of a headline candidate for a question.",
"Specifically, we equally split a question (character sequence) into ten parts and set one to each dimension if and only if the corresponding part overlaps a candidate.",
"For example, candidate No.",
"2 in Figure 2 had a position feature (1, 1, 0, · · · , 0), since the candidate covers the first 2/10 of the whole question.",
"Similarly, No.",
"6 and No.",
"3 had (0, 0, 1, 1, 0, · · · , 0) and (0, · · · , 0, 1), respectively.",
"For constructing the feature vector of each headline candidate, we used the previous and next candidates in sentence order, in addition to the target candidate.",
"This is based on the idea that near candidates might have useful information for the target candidate.",
"Finally, we prepared each feature vector by concatenating nine feature vectors, i.e., the above three kinds of features for three candidates, and normalizing them.",
"Compared Methods We compared our method, MLRank, with the baselines listed below.",
"Prefix, DictDel, and Random are simple baselines, while Prefix and DictDel are practically strong.",
"ImpTfidf, SimTfidf, SimEmb, and LexRank are unsupervised baselines, and SVM and SVR are supervised ones.",
"• Prefix: Selects the first candidate in sentence order.",
"• DictDel: Selects the first (informative) candidate that does not match in the dictionary of uninformative headlines (Section 2).",
"• Random: Randomly selects a candidate.",
"• ImpTfidf: Selects the most important candidate with the highest tf-idf value, where a tf-idf value is calculated by the sum of the elements in a bag-of-words feature (described in Section 5.1).",
"• SimTfidf: Selects the most similar candidate to the original question, which is calculated by the cosine similarity between the bag-of-words features (in Section 5.1) of each candidate and the question.",
"• SimEmb: An embedding-based variation of SimTfidf with embedding features (in Section 5.1).",
"• LexRank: Selects the candidate with the highest score based on LexRank 7 (Erkan and Radev, 2004) , which is a widely used unsupervised extractive summarization method based on the PageRank algorithm.",
"The graph expression of each question was constructed on the basis of cosine similarity of the tf-idf vectors corresponding to candidates.",
"• SVM: Selects the candidate with the highest confidence based on a model learned as a classification task, where candidates with nonzero votes were labeled as positive.",
"This setting was the best in our preliminary experiments.",
"We used the L2-regularized L2-loss support vector classification model (C = 0.0156) in LIBLINEAR.",
"The other settings were the same as those described in Section 5.1.",
"• SVR: Selects the candidate with the highest predicted votes based on a model learned as a regression task, where the target variable is the number of votes.",
"We used the L2-regularized L2-loss support vector regression model (C = 0.0625) in LIBLINEAR.",
"The other settings were the same as above.",
"• MLRank: Proposed method described in Section 4.",
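As a minimal illustration of the similarity-based baselines above, the following sketch selects the candidate most similar to the whole question under cosine similarity. Plain word counts are used instead of the tf-idf weights and Japanese morphological analysis described in the paper, to keep the sketch dependency-free.

```python
# SimTfidf-style baseline: pick the candidate whose word-count vector
# has the highest cosine similarity with the full question.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    num = sum(a[t] * b[t] for t in a)
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def sim_baseline(question: str, candidates: list) -> str:
    q_vec = Counter(question.split())
    return max(candidates, key=lambda c: cosine(Counter(c.split()), q_vec))
```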
"Evaluation Measures We defined three evaluation measures for evaluating each method on our headline generation task.",
"Change Rate from Prefix Headline We measured how much each method changed the default prefix headline in order to determine the effect of application to an actual CQA service.",
"We defined this measure as change rate from the prefix headline, as follows: Change rate = No.",
"of questions where the best candidate is not the prefix headline No.",
"of all questions .",
"Clearly, the change rate of the default method that selects the prefix headline is 0%.",
"If the value is small, the effect on the service will be small, but if the value is higher than the ideal change rate of 38.2% (Section 3), there can be side effects even if the average result is good.",
"A higher change rate up to the ideal rate is desirable from a practical standpoint.",
"Winning Rate against Prefix Headline We measured how much each method won against the prefix headline to directly assess the quality of changed headlines.",
"We defined this measure as winning rate against the prefix headline, as follows: Winning rate = No.",
"of questions where the best candidate got more votes than the prefix headline No.",
"of questions where the best candidate is not the prefix headline .",
"(5) We did not consider the first candidate (in sentence order) selected by each method, which is the same as the prefix headline, since they obviously have the same number of votes.",
"Average Votes We measured how appropriate the candidates selected by each method are in order to determine the overall performance.",
"We defined this measure as average votes, as follows: Average votes = Sum of votes for the best candidates for all questions No.",
"of questions .",
"Note that the average votes score is different from the average votes score for position (Figure 3(b) ) in that the former is the average over the selected candidates while the latter is the average over the candidates at a sentence position.",
"This measure is related to (normalized) discounted cumulative gain (DCG), which is widely used as an evaluation measure of ranking models.",
"We often use DCG@k for evaluating top-k rankings, and the above definition acutally corresponds to DCG@1.",
"According to a well-known paper (Järvelin and Kekäläinen, 2002) in the information retrieval field, DCG is appropriate for graded-relevance judgments like our task, while precision (described below) is appropriate for binary-relevance ones.",
"Average votes is expected to be more appropriate than precision for our task because we want \"averagely better headlines than default ones\" rather than \"best ones\" from a practical standpoint.",
"Precision Precision is a widely used evaluation measure for classification tasks, and we added it to support an evaluation based on average votes.",
"We defined it with respect to the best candidate, i.e., precision@1, as follows: Precision = No.",
"of questions where the best candidate had the maximum votes No.",
"of questions .",
"(7) Table 2 shows examples of headlines generated by the prefix method, Prefix, and the proposed method, MLRank.",
"Looking at the first example, we can see that our method successfully eliminated the selfintroduction phrase (\"I am a 27-year-old woman\").",
"The headline (right) of our method allows answerers to know that the questioner is discouraged about how to encounter men from the phrase \"little chance of new encounters with men\", while the headline (left) of the prefix method lacks this important clue.",
"The second and third examples show similar effects to the first example.",
"In the second one, although there are few clues about what the question is with the prefix method, our method correctly included the important clue (\"honeymoon\").",
"In the third one, our method appropriately eliminated the uninformative long phrase (\"I am sorry if the category is wrong\"), which is not a frequent fixed phrase.",
"The fourth example shows a slightly challenging case, where both headlines make it difficult to understand the question.",
"However, the headline of our method included the term \"winning bidder\", so at least the answerer can assume that the question is about some sort of auction trouble.",
"The fifth example is a clearly successful result, where our method extracted the main question point about \"welfare pension\" as a headline.",
"These results qualitatively demonstrate the effectiveness of our method.",
"Results Qualitative Analysis Quantitative Analysis We compared our method MLRank with the baselines in Section 5.2 on the headline generation task for our dataset in Section 3.",
"Table 3 shows the evaluation results based on the change rates, winning rates, average votes, and precision.",
"Looking at the average votes and precision, which represent the overall performances, our method MLRank clearly performed the best among all methods.",
"We confirmed that Table 3 : Evaluation results of our headline generation task for proposed method MLRank and baselines.",
"the relative improvement of the average votes of our method MLRank against every baseline including the prefix method Prefix is statistically significant on the basis of a one-tailed Wilcoxon signed-rank test (p < 0.01).",
"The change and winning rates of our method are 9.9% and 94.9%, respectively.",
"This means that our method detected 9.9% of the uninformative headlines and improved them with the high accuracy of 94.9%.",
"In other words, our method could successfully improve the overall performance while simultaneously avoiding any negative side effects.",
"The ideal results (Ref) based on correct labels suggest that our method still has room for improvement, especially for the change rate.",
"The results of the other baselines are as follows.",
"Not surprisingly, the prefix method Prefix performed well.",
"This is consistent with the fact that the first sentence can be a good summary in many summarization tasks.",
"The random method Random performed the worst, also as expected.",
"The dictionarybased deletion method DictDel was relatively useful, although the change rate was small.",
"The reason the winning rate of DictDel is relatively low compared with MLRank is that there are some cases where a combination of uninformative expressions can yield likely clues.",
"For example, the self-introduction \"I am a newbie of this forum\" itself is basically uninformative for a question, but a combination with additional information such as \"I am a newbie of this forum.",
"Where can I change the password ...\" can be more informative than only the additional information \"... Where can I change the password since I forgot it after ...\" because the combination specifies the target site by the expression \"this forum\".",
"The unsupervised methods, SimTfidf, SimTfidf, SimEmb, and LexRank, which are widely used for summarization tasks, performed worse than the prefix method Prefix.",
"Although the change rates are higher than our method, the winning rates are lower.",
"In other words, they yielded many bad headlines.",
"These results suggest that supervised learning specialized to the target task would be required.",
"Comparing the important sentence extraction method ImpTfidf and the similarity-based summarization method SimTfidf, we found that SimTfidf performed better.",
"This implies that the content information of each question is useful for our headline generation task, as is the case with other summarization tasks.",
"The similarity-based method SimEmb with embeddings performed worse than our expectation.",
"The reason seems to be that it was difficult to obtain meaningful document embeddings from long questions.",
"The graph-based method LexRank had a similar performance to SimTfidf, because LexRank tends to select a candidate similar to the question when only one candidate was selected.",
"The supervised methods, SVM and SVR, performed relatively well compared to the unsupervised methods, but they did not outperform the strong simple baselines, Prefix and DictDel, nor our method MLRank.",
"These results support the appropriateness of our approach.",
"Related Work In this section, we briefly explain several related studies from two aspects: headline generation task and CQA data.",
"As discussed below, our work is the first attempt to address an extractive headline generation task for a CQA service based on learning to rank the substrings of a question.",
"After Rush et al.",
"(2015) proposed a neural headline generation model, there have been many studies on the same headline generation task (Takase et al., 2016; Chopra et al., 2016; Kiyono et al., 2017; Ayana et al., 2017; Raffel et al., 2017) .",
"However, all of them are abstractive methods that can yield erroneous output, and the training for them requires a lot of paired data, i.e., news articles and headlines.",
"There have also been several classical studies based on nonneural approaches to headline generation (Woodsend et al., 2010; Alfonseca et al., 2013; Colmenares et al., 2015) , but they basically addressed sentence compression after extracting important linguistic units such as phrases.",
"In other words, their methods can still yield erroneous output, although they would be more controllable than neural models.",
"One exception is the work of Alotaiby (2011) , where fixed-sized substrings were considered for headline generation.",
"Although that approach is similar to ours, Alotaiby only considered an unsupervised method based on similarity to the original text (almost the same as SimTfidf in Section 5.2), in contrast to our proposal based on learning to rank.",
"This implies that Alotaiby's method will also not perform well for our task, as shown in Section 5.4.",
"There have been several studies on extractive summarization (Kobayashi et al., 2015; Yogatama et al., 2015) based on sentence embeddings, but they were basically developed for extracting multiple sentences, which means that these methods are almost the same as SimEmb in Section 5.2 for our purpose, i.e., extraction of the best candidate.",
"This also implies that they will not be suitable for our task.",
"Furthermore, recent sophisticated neural models for extractive summarization (Cheng and Lapata, 2016; Nallapati et al., 2017) basically require large-scale paired data (e.g., article-headline) to automatically label candidates, as manual annotation is very costly.",
"However, such paired data do not always exist for real applications, as in our task described in Section 1.",
"There have been many studies using CQA data, but most of them are different from our task, i.e., dealing with answering questions (Surdeanu et al., 2008; Celikyilmaz et al., 2009; Bhaskar, 2013; Nakov et al., 2017) , retrieving similar questions (Lei et al., 2016; Romeo et al., 2016; Nakov et al., 2017) , and generating questions (Heilman and Smith, 2010) .",
"Tamura et al.",
"(2005) focused on extracting a core sentence and identifying the question type as classification tasks for answering multiple-sentence questions.",
"Although their method is useful to retrieve important information, we cannot directly use it since our task requires shorter expressions for headlines than sentences.",
"In addition, they used a support vector machine as a classifier, which is almost the same as SVM in Section 5.2, and it is not expected to be suitable for our task, as shown in Section 5.4.",
"The work of Ishigaki et al.",
"(2017) is the most related one in that they summarized lengthy questions by using both abstractive and extractive approaches.",
"Their work is promising because our task is regarded as the construction of short summaries, but the training of their models requires a lot of paired data consisting of questions and their headlines, which means that their method cannot be used to our task.",
"Conclusion We proposed an extractive headline generation method based on learning to rank for CQA that extracts the most informative substring in each question as its headline.",
"We created a dataset for our task, where headline candidates in each question are ranked using crowdsourcing.",
"Our method outperformed several baselines, including a prefix-based method, which is widely used for cases where the display area is limited, such as the push notifications on smartphones.",
"The dataset created for our headline generation task will be made publicly available 8 .",
"Although our task is basically designed for extractive summarization, this dataset can also be used for abstractive summarization as a side information for training abstractive models.",
"In future work, we will investigate how effectively our method can perform in practical situations, e.g., push notifications.",
"In addition, we will consider how to improve the change rate of our method while keeping its winning rate and how to create a useful dataset even if removing the length limitation."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"5.1",
"5.2",
"5.3",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Negative Effect of Uninformative Headlines",
"Dataset Creation",
"Preparation of Headline Candidates",
"Crowdsourcing Task",
"Analysis of Crowdsourcing Results",
"Proposed Method",
"Basic Settings",
"Compared Methods",
"Evaluation Measures",
"Related Work",
"Conclusion"
]
}
|
GEM-SciDuet-train-50#paper-1080#slide-15
|
Results Qualitative Analysis
|
Examples of prefix headline and snippet headline
Prefix Headline | Snippet Headline
I am sorry if the category is wrong. Now, my wallet is torn ... | ... Now, my wallet is torn, and I'm having a hard time. A new one ...
I am a 27-year-old woman. Owing to ... | ... Owing to my environment, there is little chance of new encounters with men
Uninformative expressions are successfully excluded, and informative expressions are added
Copyright (C) 2019 Yahoo Japan Corporation. All Rights Reserved.
|
Examples of prefix headline and snippet headline
Prefix Headline | Snippet Headline
I am sorry if the category is wrong. Now, my wallet is torn ... | ... Now, my wallet is torn, and I'm having a hard time. A new one ...
I am a 27-year-old woman. Owing to ... | ... Owing to my environment, there is little chance of new encounters with men
Uninformative expressions are successfully excluded, and informative expressions are added
Copyright (C) 2019 Yahoo Japan Corporation. All Rights Reserved.
|
[] |
GEM-SciDuet-train-50#paper-1080#slide-16
|
1080
|
Extractive Headline Generation Based on Learning to Rank for Community Question Answering
|
User-generated content such as the questions on community question answering (CQA) forums does not always come with appropriate headlines, in contrast to the news articles used in various headline generation tasks. In such cases, we cannot use paired supervised data, e.g., pairs of articles and headlines, to learn a headline generation model. To overcome this problem, we propose an extractive headline generation method based on learning to rank for CQA that extracts the most informative substring from each question as its headline. Experimental results show that our method outperforms several baselines, including a prefix-based method, which is widely used in real services.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260,
261,
262,
263
],
"paper_content_text": [
"Introduction Community question answering (CQA) is a service where users can post their questions and answer the questions from other users.",
"The quality of a CQA service is inevitably linked to how many questions are answered in a short amount of time.",
"To this end, the headline of a posted question plays a key role, as it is the first thing users see in a question list or push notification on a smartphone.",
"However, questions in a CQA service do not always have appropriate headlines because the questions are written by various users who basically do not have any specialized knowledge in terms of writing such content, in contrast to news articles written by professional editors.",
"In fact, the biggest CQA service in Japan, Yahoo!",
"Chiebukuro 1 , does not even provide an input field for headlines in the submission form of questions, as general users do not have enough patience and tend not to post questions if even just one required field is added to the form.",
"This service alternatively uses the prefix of a question as its headline as in Figure 1(a) , where the headline \" (Nice to meet you.",
"Thank you in advance.",
"At work ...)\" is created from the prefix of the content.",
"Obviously, this headline is uninformative because of the lack of actual content, which is related to an initial unrequited love of a woman in the workplace.",
"Figure 1(b) shows how ineffective such an uninformative headline is for a smartphone push notification, where users would have practically zero motivation to click on it and answer since they cannot imagine what kind of question is being asked.",
"This negative effect has been confirmed on a commercial CQA service (Section 2).",
"In this work, we take an extractive approach for improving uninformative headlines.",
"Although there have recently been many studies on abstractive headline generation (as described in Section 6), we do not follow any of these approaches because the content we deal with has no correct headlines and also the abstractive methods can yield erroneous output.",
"This latter issue is important in practical terms because correct output is critical for a commercial service.",
"If by some chance an erroneous headline, or one including politically incorrect phrases, is sent to all users, the service's credibility can be lost instantly.",
"Therefore, we formalize our task as an extraction problem of a fixed-length substring from a question as its headline.",
"In this setting, we can assume that outputs never include such errors caused by the service, as the outputs are substrings of user-posted questions.",
"Note that they have no coherence errors in selecting multiple sentences as in normal extractive summarization tasks.",
"While it is true that these outputs might contain inappropriate expressions authored by users, this type of error is beyond our scope since it is a different inevitable problem.",
"Furthermore, the situation of \"the service generated an inappropriate expression by itself\" is significantly worse than the situation of \"a user posted an inappropriate question to the service, and the service displayed it\".",
"Therefore, it is difficult to directly use abstractive methods for commercial services from a business standpoint.",
"Our approach involves preparing headline candidates and ranking them.",
"The formal description is as follows.",
"Let q be a target question to be translated into a headline.",
"We first prepare a set S(q) of headline candidates from the question q.",
"Note that the set S(q) is restricted to a set of fixed-length substrings given a length n, i.e., S(q) ⊆ {x | x ⪯ q, |x| = n}, where x ⪯ q means that x is a substring of q.",
"Then we extract the best headline that maximizes a score function f q (x), which represents the \"headline-ness\" score of a candidate x ∈ S(q) with respect to the target question q, as follows: argmax x∈S(q) f q (x). (1)",
"To ensure simplicity of implementation and understandability from users, we use a set of the fixed-length prefixes of all sentences in a question q as the candidate set S(q) (Section 3).",
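The selection in Eq. (1) over fixed-length prefix candidates can be sketched as follows; `candidate_set`, the sentence-splitting details, and the toy scorer are illustrative assumptions, not the paper's implementation:

```python
# Sketch of Eq. (1): pick the candidate x in S(q) that maximizes f_q(x).
# `score_fn` stands in for the learned "headline-ness" model; here it is
# a hypothetical placeholder, not the trained rankSVM.

def candidate_set(question, n=20, seps="!?."):
    """S(q): fixed-length (n-char) prefixes of all sentences in q."""
    sentences, buf = [], ""
    for ch in question:
        buf += ch
        if ch in seps:
            sentences.append(buf.strip())
            buf = ""
    if buf.strip():
        sentences.append(buf.strip())
    return [s[:n] for s in sentences if s]

def best_headline(question, score_fn, n=20):
    return max(candidate_set(question, n), key=score_fn)

# Toy usage: a scorer that dislikes greetings.
q = "Nice to meet you. My dog barks all night. What should I do?"
score = lambda c: -1.0 if c.startswith("Nice to meet") else len(c)
print(best_headline(q, score))  # "My dog barks all nig"
```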
"Because the problem is to select the best candidate from among several targets, the score function f q (x) is naturally trained by learning to rank (Section 4).",
"The main contributions of this paper are as follows.",
"• We report empirical evidence of the negative effect of uninformative headlines on a commercial CQA service (Section 2).",
"We additionally show that our task can reduce uninformative headlines by using simple dictionary matching, which dramatically improves the average answer rate of questions by as much as 2.4 times.",
"• We propose an extractive headline generation method based on learning to rank for CQA (Section 4) that extracts the most informative substring (prefix of mid-sentence) from each question as its headline.",
"To the best of our knowledge, our work is the first attempt to address such a task from a practical standpoint, although there have been many related studies (Section 6).",
"Experimental results show that our method outperforms several baselines, including the dictionary-based method (Section 5).",
"• We create a dataset for our headline generation task (Section 3), where headline candidates extracted from questions are ranked by crowdsourcing with respect to \"headline-ness\", that is, whether or not each headline candidate is appropriate for the headline of the corresponding question.",
"Negative Effect of Uninformative Headlines We conducted A/B testing on the push notifications of smartphones in collaboration with Yahoo!",
"Chiebukuro, as shown in Figure 1 (b).",
"We first prepared a dictionary of typical first sentences that cause uninformative headlines.",
"[Table 1 (Average answer rates of three question groups in A/B testing): changed uninformative 0.75%, unchanged uninformative 0.31%, informative 0.45%.] This dictionary was manually selected from frequent first sentences and consists of 913 sentences including greetings such as \" (Good morning)\", \"",
"(Good afternoon)\", and \" (Nice to meet you)\", and fixed phrases such as \" (Can I ask you something)\", \" (Please tell me)\", and \" (Thank you in advance)\".",
"We assumed that a question with a prefix match in the dictionary has an uninformative prefix headline and classified such questions into an uninformative group.",
"For convenience, we also classified the other questions into an informative group, although they might include not so informative headlines.",
"We further randomly divided the uninformative group into two equal groups: changed and unchanged.",
"In the changed uninformative group, each headline is extracted as the prefix of the first (informative) sentence that does not match with the dictionary, which is the same as DictDel explained in Section 5.2.",
"The unchanged group remains in uninformative.",
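The dictionary-based grouping, and the headline change applied to the changed group (DictDel in Section 5.2), can be sketched as follows; the dictionary entries are illustrative English stand-ins for the Japanese ones:

```python
# Sketch of the dictionary-based check: a question whose first sentence
# prefix-matches an entry of the uninformative dictionary gets its
# headline moved to the first sentence with no match (as in DictDel).
# These dictionary entries are hypothetical stand-ins.

UNINFORMATIVE = ("Good morning", "Good afternoon", "Nice to meet you",
                 "Please tell me", "Thank you in advance")

def is_uninformative(sentence):
    return any(sentence.startswith(p) for p in UNINFORMATIVE)

def dict_del(sentences):
    """Return the first sentence that is not uninformative."""
    for s in sentences:
        if not is_uninformative(s):
            return s
    return sentences[0]

print(dict_del(["Nice to meet you.", "My dog barks all night."]))
```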
"For comparison of these groups, we used the average answer rate over notified questions in each group as an evaluation measure, defined as follows:",
"Average answer rate = (No. of questions answered from the notification) / (No. of notified questions). (2)",
"Note that we use a percentage expression (%) for easy reading.",
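Eq. (2) and the roughly 2.4-times improvement can be checked in a few lines; the counts below are illustrative, not the service's data:

```python
# Sketch of Eq. (2): the average answer rate of a group of notified
# questions, expressed as a percentage. The counts are illustrative.

def average_answer_rate(num_answered, num_notified):
    return 100.0 * num_answered / num_notified

# e.g., 31 answers out of 10,000 notifications gives 0.31%.
rate = average_answer_rate(31, 10_000)

# With the rates reported in Table 1, the changed uninformative group
# improves on the unchanged one by roughly 2.4 times:
improvement = 0.75 / 0.31
print(rate, round(improvement, 1))
```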
"Table 1 shows the evaluation results of the A/B testing during a 1-month period (Feb. 2 -Mar.",
"4, 2018) , where about three million questions were sent to users.",
"Comparing the unchanged uninformative group with the informative group, we can see that the average answer rate of the uninformative questions, 0.31%, is actually lower than that of the informative questions, 0.45%.",
"Comparing the changed and unchanged uninformative groups, the average answer rate of the changed questions, 0.75%, is much higher than that of the unchanged questions, 0.31%.",
"This means that even a simple dictionary-based method can dramatically improve the quality of the uninformative headlines, i.e., by as much as 2.4 times.",
"We confirmed that the difference is statistically significant on a one-tailed Wilcoxon signed-rank test (p < 0.05).",
"Note that the average answer rate represents a conversion rate (or rate of target actions), which is more important than a click-through rate (or rate of initial actions).",
"The average answer rate is one of the most important indicators for a CQA service, while the click-through rate can be meaninglessly high if the service sends headlines that are fake or too catchy.",
"We should point out that low answer rates are sufficient for the service, since it has 44M users: i.e., each question has an average of 2.4 answers, as the service currently has 189M questions and 462M answers.",
"Dataset Creation We created a dataset for our headline generation task based on the Yahoo!",
"Chiebukuro dataset 2 , which is a dataset including questions and answers provided from a Japanese CQA service, Yahoo!",
"Chiebukuro.",
"We first prepared headline candidates from this dataset as in Section 3.1 and then conducted a crowdsourcing task specified in Section 3.2.",
"In Section 3.3, we report the results of the crowdsourcing task.",
"Preparation of Headline Candidates We extracted only questions from the Chiebukuro dataset and split each question into sentences by using punctuation marks (i.e., the exclamation (\" \"), question (\" \"), and full stop (\" \") marks).",
"We regarded 20 Japanese characters that are basically extracted from each sentence as a headline candidate x ∈ S(q) in Eq.",
"(1), since this setting is used for push notifications in the actual service in Figure 1 (b).",
"More specifically, the headline candidate is created as follows: 1.",
"If the sentence is the first one in the question, we extract the first 19 characters and put an ellipsis mark (\" \") at the end.",
"(a) Example of our crowdsourcing task.",
"Posted Question: Nice to meet you, I am a man in my 30s.",
"Please give me your advice on a pressing concern I have.",
"A dog kept in the next house barks from morning to night.",
"Neighbors have given the owner cautions against it, but there is no improvement.",
"This area has only private houses, not rented houses, so I cannot move out.",
"However, I will go crazy if I have to keep enduring this.",
"How can I effectively manage this problem?",
"...",
"Please give me your advice on a pressing concern ...",
"Figure 2 : Examples of (a) our crowdsourcing task and (b) its English translation.",
"In the case where the length of a candidate is less than 20 characters, we include some of the next sentence in order to maximize use of display space.",
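The candidate-preparation rules above can be sketched roughly as follows; the sentence-splitting details and the exact padding rule for short sentences are assumptions:

```python
# Rough sketch of headline-candidate preparation: split a question on
# sentence-final marks and build one fixed-width candidate per sentence.
# The first sentence keeps its first 19 characters plus an ellipsis; a
# later sentence borrows text from the sentences that follow it when it
# is shorter than the display width. Exact character handling in the
# service is an assumption here.

SENTENCE_MARKS = "。!?.!?"

def split_sentences(question):
    sentences, buf = [], ""
    for ch in question:
        buf += ch
        if ch in SENTENCE_MARKS:
            sentences.append(buf.strip())
            buf = ""
    if buf.strip():
        sentences.append(buf.strip())
    return sentences

def make_candidates(question, width=20):
    sentences = split_sentences(question)
    candidates = []
    for i in range(len(sentences)):
        if i == 0:
            candidates.append(sentences[0][:width - 1] + "…")
        else:
            # Pad a short sentence with the text that follows it.
            candidates.append(" ".join(sentences[i:])[:width])
    return candidates
```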
"We included questions with more than five sentences for the purpose of efficiently collecting ranking information.",
"All told, we prepared 10,000 questions containing more than five headline candidates each.",
"Figure 2 shows an example of our crowdsourcing task (a) and its English translation (b).",
"This task involves a posted question and headline candidates corresponding to the question.",
"We asked workers to select the best candidate from options after reading the posted question.",
"A relative evaluation, where workers select the best candidate, was used instead of an absolute evaluation, where workers select a score from 0 to 10 for each candidate, because we wanted to obtain as accurate a headline as possible, and it might be difficult for workers to select an appropriate absolute score.",
"Crowdsourcing Task The workers were instructed as follows (English translation): Various candidate headlines are listed as options for a posted question in a Q&A service.",
"After reading the question, please select the best option from the list so that users can guess the content of the question and distinguish it from other ones.",
"Please remove uninformative ones such as greetings, self-introductions, and unspecific expressions.",
"We explain how to judge the appropriateness of each candidate by means of the example in Figure 2 .",
"After examining the posted question, we can assume that the most important content is \"he is annoyed by the barking of a dog kept in the next house\".",
"On the basis of this assumption, option 6 is the best one, since the headline \"A dog kept in the next house barks from morning\" is enough to help answerers deduce that \"the questioner is annoyed by the sound\".",
"Option 1 is inappropriate because although the answerers might be able to guess that \"the questioner cannot move out\", this matter is not the central one.",
"Option 2 is uninformative because it consists merely of greetings and self-introduction, and option 3, while a question sentence, is unspecific.",
"Option 4 enables answerers to guess that this question is related to an issue involving pets, but they cannot grasp the specific content.",
"Option 5 specifies a likely damage due to the trouble, but the reason (trouble) is more important for the answering than the result (damage).",
"Option 7 directly shows that \"the questioner is annoyed and wants some advice\", but answerers cannot understand why he/she is annoyed.",
"The detailed implementation of our task is as follows.",
"First we randomly sorted the candidates of each question (shown in Figure 2 (a)) to avoid position bias by the workers.",
"We included ten actual questions and a dummy question so that workers would always have to answer one dummy question per every ten actual questions.",
"A dummy question is a question with a clear answer inserted to eliminate fraud workers (i.e., workers who randomly select answers without actually reading them).",
"Each question was answered by ten workers so each headline candidate had a vote score from 0 to 10 representing whether or not the candidate was appropriate for a headline.",
"This task took nine days and was answered by 1,558 workers.",
"As a result, our dataset consists of 10,000 questions, each of which has more than five headline candidates with accompanying \"headline-ness\" scores.",
"Analysis of Crowdsourcing Results We analyzed our dataset to determine how much room for improvement our task has compared to the prefix headline.",
"It is well known that the first sentence can be a strong baseline in many summarization tasks, so the prefix headline is also expected to perform well.",
"Figure 3 shows the statistical information based on sentence position that includes the (a) ratio of the true best (most voted) candidates, (b) average votes, and (c) average rank (in order of votes) over the candidates at each sentence position.",
"Looking at the true best candidates (a), the ratio 61.8% for the 1st sentence clarifies the effectiveness of the prefix headline, as expected.",
"Conversely, we still have room for improvement for the prefix headline up to 38.2%.",
"Our goal is to improve the uninformative headlines of 38.2% while keeping the remaining 61.8% unchanged.",
"The other two figures (b) and (c) also support the above discussion.",
"Furthermore, we qualitatively checked the crowdsourcing quality.",
"Workers successfully eliminated uninformative candidates including greetings and self-introductions, while one or two workers sometimes chose ones that included a fixed phrase such as \"Please tell me ...\".",
"This is probably because workers had different criteria regarding the \"unspecific expressions\" described in the instructions.",
"Since we cannot enumerate all of the concrete bad examples, we ignore this phenomenon with the expectation that a learning algorithm will reduce its negative effect.",
"Proposed Method In this section, we explain how to construct a headline generation model from the dataset presented in Section 3.",
"We took a ranking approach, i.e., learning to rank, for our task, rather than a simple regression one, since estimating absolute scores is not required for our purpose.",
"Even if two headline candidates (of different questions) have the same expression, their votes can be significantly different since the votes in our dataset are based on relative evaluation.",
"For example, the best candidate for Figure 2 was No.",
"6, but it might not be selected in other questions such as \"A dog kept in the next house barks from morning.",
"Does anybody know why dogs generally want to bark?\".",
"Learning to rank is an application of machine learning that is typically used for ranking models in information retrieval systems.",
"The ranking models are basically learned from a supervised dataset consisting of triples (q, x, y), where q is a user's query, x is a document corresponding to q, and y is a relevance score of x with respect to q.",
"In this work, we formalize our task by regarding q, x, and y, as a posted question, a headline candidate, and a voted score in our dataset, respectively.",
"We used a pairwise ranking method that is also implemented as an instance of the well-known SVM tools LIBLINEAR and LIBSVM (Kuo et al., 2014) .",
"We used a linear model based on LIBLINEAR, an L2-regularized L2-loss linear rankSVM, for the experiments.",
"Let D be a dataset that consists of triples including a posted question q, a headline candidate x, and a voted score y, i.e., (q, x, y) ∈ D. In the pairwise ranking method, we train a ranking model as a binary classifier that determines whether the condition y i > y j is true or false for two candidates x i and x j in the same question (q i = q j ).",
"Specifically, we first define the index pairs of positive examples by P = {(i, j) | q i = q j , y i > y j , (x i , y i , q i ) ∈ D, (x j , y j , q j ) ∈ D}.",
"Note that we do not need to consider the negative examples N = {(j, i) | (i, j) ∈ P } since they yield the same formula as P in the optimization process.",
"The training of the pairwise ranking method is achieved by solving the following optimization problem using the set P of the index pairs: min_w (1/2) w⊤w + C Σ_{(i,j)∈P} ℓ(w⊤x̄_i − w⊤x̄_j), (3) where w is a weight vector to be learned, x̄_i is a feature vector extracted from a headline candidate x_i, and C is the regularization parameter.",
"The function ℓ is a squared hinge loss, which is defined as ℓ(d) = max(0, 1 − d)^2.",
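A minimal sketch of this pairwise reduction, using plain gradient descent on the objective in Eq. (3) rather than the LIBLINEAR solver the paper uses; the toy data and hyperparameters are assumptions:

```python
# Gradient-descent sketch of the rankSVM objective in Eq. (3):
# (1/2)||w||^2 + C * sum over positive pairs P of the squared-hinge
# loss on w·(x_i - x_j). Illustrative only; not the LIBLINEAR solver.

def fit_rank_svm(X, y, qid, C=1.0, lr=0.05, epochs=500):
    dim = len(X[0])
    # Positive index pairs P: same question, y_i > y_j.
    P = [(i, j) for i in range(len(y)) for j in range(len(y))
         if qid[i] == qid[j] and y[i] > y[j]]
    w = [0.0] * dim
    for _ in range(epochs):
        grad = list(w)                       # gradient of (1/2)||w||^2
        for i, j in P:
            d = [X[i][k] - X[j][k] for k in range(dim)]
            margin = sum(w[k] * d[k] for k in range(dim))
            if margin < 1:                   # squared hinge is active
                for k in range(dim):
                    grad[k] += C * 2.0 * (margin - 1.0) * d[k]
        w = [w[k] - lr * grad[k] for k in range(dim)]
    return w

# Toy usage: votes in `y`, question ids in `qid`.
X = [[3.0, 0.0], [1.0, 1.0], [2.0, 0.0], [0.0, 1.0]]
y = [5, 1, 4, 0]
qid = [0, 0, 1, 1]
w = fit_rank_svm(X, y, qid)
scores = [sum(w[k] * x[k] for k in range(2)) for x in X]
```

Within each question, the candidate with more votes should end up with the higher score w⊤x.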
"Finally, we define the score function in Eq.",
"(1) as f q (x) = w⊤x̄, where x̄ can be created by using q as well as x.",
"This score means the relative \"headline-ness\" of x.",
"Experiments Basic Settings The basic settings of the experiments are as follows.",
"We split our dataset into training and test sets consisting of 9,000 and 1,000 examples, respectively.",
"We used an implementation 3 based on LIBLINEAR for training our ranking model, i.e., a linear L2-regularized L2-loss rankSVM model, as described in Section 4.",
"The regularization parameter was optimized by cross validation and set as C = 0.125.",
"The feature vector for a headline candidate consists of three kinds of features: bag-of-words, embedding, and position information.",
"The bag-of-words feature is a sparse vector of 30,820 dimensions based on the tf-idf scores of nouns, verbs, interjections, conjunctions, adverbs, and adjectives in a candidate, where we used a Japanese morphological analyzer, MeCab 4 (Kudo et al., 2004) , with a neologism dictionary, NEologd 5 (Toshinori Sato and Okumura, 2017).",
"The embedding feature is a dense vector of 100 dimensions based on a doc2vec model (Le and Mikolov, 2014) trained with all 3M sentences in the Chiebukuro dataset using the Gensim tool 6 .",
"The position feature is a binary vector of ten dimensions, where each dimension represents the coarse position (or coverage) of a headline candidate for a question.",
"Specifically, we equally split a question (character sequence) into ten parts and set one to each dimension if and only if the corresponding part overlaps a candidate.",
"For example, candidate No.",
"2 in Figure 2 had a position feature (1, 1, 0, · · · , 0), since the candidate covers the first 2/10 of the whole question.",
"Similarly, No.",
"6 and No.",
"3 had (0, 0, 1, 1, 0, · · · , 0) and (0, · · · , 0, 1), respectively.",
"For constructing the feature vector of each headline candidate, we used the previous and next candidates in sentence order, in addition to the target candidate.",
"This is based on the idea that near candidates might have useful information for the target candidate.",
"Finally, we prepared each feature vector by concatenating nine feature vectors, i.e., the above three kinds of features for three candidates, and normalizing them.",
"Compared Methods We compared our method, MLRank, with the baselines listed below.",
"Prefix, DictDel, and Random are simple baselines, while Prefix and DictDel are practically strong.",
"ImpTfidf, SimTfidf, SimEmb, and LexRank are unsupervised baselines, and SVM and SVR are supervised ones.",
"• Prefix: Selects the first candidate in sentence order.",
"• DictDel: Selects the first (informative) candidate that does not match in the dictionary of uninformative headlines (Section 2).",
"• Random: Randomly selects a candidate.",
"• ImpTfidf: Selects the most important candidate with the highest tf-idf value, where a tf-idf value is calculated by the sum of the elements in a bag-of-words feature (described in Section 5.1).",
"• SimTfidf: Selects the most similar candidate to the original question, which is calculated by the cosine similarity between the bag-of-words features (in Section 5.1) of each candidate and the question.",
"• SimEmb: An embedding-based variation of SimTfidf with embedding features (in Section 5.1).",
"• LexRank: Selects the candidate with the highest score based on LexRank 7 (Erkan and Radev, 2004) , which is a widely used unsupervised extractive summarization method based on the PageRank algorithm.",
"The graph expression of each question was constructed on the basis of cosine similarity of the tf-idf vectors corresponding to candidates.",
"• SVM: Selects the candidate with the highest confidence based on a model learned as a classification task, where candidates with nonzero votes were labeled as positive.",
"This setting was the best in our preliminary experiments.",
"We used the L2-regularized L2-loss support vector classification model (C = 0.0156) in LIBLINEAR.",
"The other settings were the same as those described in Section 5.1.",
"• SVR: Selects the candidate with the highest predicted votes based on a model learned as a regression task, where the target variable is the number of votes.",
"We used the L2-regularized L2-loss support vector regression model (C = 0.0625) in LIBLINEAR.",
"The other settings were the same as above.",
"• MLRank: Proposed method described in Section 4.",
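The SimTfidf baseline above can be sketched as follows; plain term counts stand in for the paper's tf-idf weighting, and the tokenized inputs are hypothetical:

```python
# Sketch of the SimTfidf baseline: pick the candidate whose bag-of-words
# vector is most cosine-similar to the whole question. Raw term counts
# stand in for tf-idf weights here.
import math
from collections import Counter

def cosine(a, b):
    num = sum(a[t] * b[t] for t in a if t in b)
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def sim_tfidf(question_tokens, candidate_token_lists):
    qv = Counter(question_tokens)
    return max(candidate_token_lists,
               key=lambda c: cosine(Counter(c), qv))

q = "dog barks night neighbor advice dog barks".split()
cands = [["nice", "meet", "you"], ["dog", "barks", "night"], ["advice", "please"]]
print(sim_tfidf(q, cands))  # ['dog', 'barks', 'night']
```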
"Evaluation Measures We defined three evaluation measures for evaluating each method on our headline generation task.",
"Change Rate from Prefix Headline We measured how much each method changed the default prefix headline in order to determine the effect of application to an actual CQA service.",
"We defined this measure as change rate from the prefix headline, as follows:",
"Change rate = (No. of questions where the best candidate is not the prefix headline) / (No. of all questions).",
"Clearly, the change rate of the default method that selects the prefix headline is 0%.",
"If the value is small, the effect on the service will be small, but if the value is higher than the ideal change rate of 38.2% (Section 3), there can be side effects even if the average result is good.",
"A higher change rate up to the ideal rate is desirable from a practical standpoint.",
"Winning Rate against Prefix Headline We measured how much each method won against the prefix headline to directly assess the quality of changed headlines.",
"We defined this measure as winning rate against the prefix headline, as follows:",
"Winning rate = (No. of questions where the best candidate got more votes than the prefix headline) / (No. of questions where the best candidate is not the prefix headline). (5)",
"We did not consider the first candidate (in sentence order) selected by each method, which is the same as the prefix headline, since they obviously have the same number of votes.",
"Average Votes We measured how appropriate the candidates selected by each method are in order to determine the overall performance.",
"We defined this measure as average votes, as follows:",
"Average votes = (Sum of votes for the best candidates for all questions) / (No. of questions).",
"Note that the average votes score is different from the average votes score for position (Figure 3(b) ) in that the former is the average over the selected candidates while the latter is the average over the candidates at a sentence position.",
"This measure is related to (normalized) discounted cumulative gain (DCG), which is widely used as an evaluation measure of ranking models.",
"We often use DCG@k for evaluating top-k rankings, and the above definition actually corresponds to DCG@1.",
"According to a well-known paper (Järvelin and Kekäläinen, 2002) in the information retrieval field, DCG is appropriate for graded-relevance judgments like our task, while precision (described below) is appropriate for binary-relevance ones.",
"Average votes is expected to be more appropriate than precision for our task because we want \"averagely better headlines than default ones\" rather than \"best ones\" from a practical standpoint.",
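The change rate, winning rate, and average votes defined so far can be computed together; the per-question records below are hypothetical:

```python
# Sketch of the first three evaluation measures (in percent / raw votes).
# Each record holds the votes of the chosen candidate, whether it differs
# from the prefix headline, and the prefix headline's own votes.

def evaluate(selected):
    n = len(selected)
    changed = [s for s in selected if s["is_changed"]]
    change_rate = 100.0 * len(changed) / n
    wins = sum(1 for s in changed if s["votes"] > s["prefix_votes"])
    winning_rate = 100.0 * wins / len(changed) if changed else 0.0
    avg_votes = sum(s["votes"] for s in selected) / n
    return change_rate, winning_rate, avg_votes

# Toy usage with hypothetical per-question results:
results = [
    {"is_changed": False, "votes": 6, "prefix_votes": 6},
    {"is_changed": True,  "votes": 8, "prefix_votes": 2},
    {"is_changed": True,  "votes": 1, "prefix_votes": 5},
    {"is_changed": False, "votes": 7, "prefix_votes": 7},
]
print(evaluate(results))  # (50.0, 50.0, 5.5)
```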
"Precision Precision is a widely used evaluation measure for classification tasks, and we added it to support an evaluation based on average votes.",
"We defined it with respect to the best candidate, i.e., precision@1, as follows:",
"Precision = (No. of questions where the best candidate had the maximum votes) / (No. of questions). (7)",
"Table 2 shows examples of headlines generated by the prefix method, Prefix, and the proposed method, MLRank.",
"Looking at the first example, we can see that our method successfully eliminated the selfintroduction phrase (\"I am a 27-year-old woman\").",
"The headline (right) of our method allows answerers to know that the questioner is discouraged about how to encounter men from the phrase \"little chance of new encounters with men\", while the headline (left) of the prefix method lacks this important clue.",
"The second and third examples show similar effects to the first example.",
"In the second one, although there are few clues about what the question is with the prefix method, our method correctly included the important clue (\"honeymoon\").",
"In the third one, our method appropriately eliminated the uninformative long phrase (\"I am sorry if the category is wrong\"), which is not a frequent fixed phrase.",
"The fourth example shows a slightly challenging case, where both headlines make it difficult to understand the question.",
"However, the headline of our method included the term \"winning bidder\", so at least the answerer can assume that the question is about some sort of auction trouble.",
"The fifth example is a clearly successful result, where our method extracted the main question point about \"welfare pension\" as a headline.",
"These results qualitatively demonstrate the effectiveness of our method.",
"Quantitative Analysis We compared our method MLRank with the baselines in Section 5.2 on the headline generation task for our dataset in Section 3.",
"Table 3 shows the evaluation results based on the change rates, winning rates, average votes, and precision.",
"Looking at the average votes and precision, which represent the overall performances, our method MLRank clearly performed the best among all methods.",
"We confirmed that Table 3 : Evaluation results of our headline generation task for proposed method MLRank and baselines.",
"the relative improvement of the average votes of our method MLRank against every baseline including the prefix method Prefix is statistically significant on the basis of a one-tailed Wilcoxon signed-rank test (p < 0.01).",
"The change and winning rates of our method are 9.9% and 94.9%, respectively.",
"This means that our method detected 9.9% of the uninformative headlines and improved them with the high accuracy of 94.9%.",
"In other words, our method could successfully improve the overall performance while simultaneously avoiding any negative side effects.",
"The ideal results (Ref) based on correct labels suggest that our method still has room for improvement, especially for the change rate.",
"The results of the other baselines are as follows.",
"Not surprisingly, the prefix method Prefix performed well.",
"This is consistent with the fact that the first sentence can be a good summary in many summarization tasks.",
"The random method Random performed the worst, also as expected.",
"The dictionarybased deletion method DictDel was relatively useful, although the change rate was small.",
"The reason the winning rate of DictDel is relatively low compared with MLRank is that there are some cases where a combination of uninformative expressions can yield likely clues.",
"For example, the self-introduction \"I am a newbie of this forum\" itself is basically uninformative for a question, but a combination with additional information such as \"I am a newbie of this forum.",
"Where can I change the password ...\" can be more informative than only the additional information \"... Where can I change the password since I forgot it after ...\" because the combination specifies the target site by the expression \"this forum\".",
"The unsupervised methods, ImpTfidf, SimTfidf, SimEmb, and LexRank, which are widely used for summarization tasks, performed worse than the prefix method Prefix.",
"Although the change rates are higher than our method, the winning rates are lower.",
"In other words, they yielded many bad headlines.",
"These results suggest that supervised learning specialized to the target task would be required.",
"Comparing the important sentence extraction method ImpTfidf and the similarity-based summarization method SimTfidf, we found that SimTfidf performed better.",
"This implies that the content information of each question is useful for our headline generation task, as is the case with other summarization tasks.",
"The similarity-based method SimEmb with embeddings performed worse than our expectation.",
"The reason seems to be that it was difficult to obtain meaningful document embeddings from long questions.",
"The graph-based method LexRank had a similar performance to SimTfidf, because LexRank tends to select a candidate similar to the question when only one candidate was selected.",
"The supervised methods, SVM and SVR, performed relatively well compared to the unsupervised methods, but they did not outperform the strong simple baselines, Prefix and DictDel, nor our method MLRank.",
"These results support the appropriateness of our approach.",
"Related Work In this section, we briefly explain several related studies from two aspects: headline generation task and CQA data.",
"As discussed below, our work is the first attempt to address an extractive headline generation task for a CQA service based on learning to rank the substrings of a question.",
"After Rush et al.",
"(2015) proposed a neural headline generation model, there have been many studies on the same headline generation task (Takase et al., 2016; Chopra et al., 2016; Kiyono et al., 2017; Ayana et al., 2017; Raffel et al., 2017) .",
"However, all of them are abstractive methods that can yield erroneous output, and the training for them requires a lot of paired data, i.e., news articles and headlines.",
"There have also been several classical studies based on nonneural approaches to headline generation (Woodsend et al., 2010; Alfonseca et al., 2013; Colmenares et al., 2015) , but they basically addressed sentence compression after extracting important linguistic units such as phrases.",
"In other words, their methods can still yield erroneous output, although they would be more controllable than neural models.",
"One exception is the work of Alotaiby (2011) , where fixed-sized substrings were considered for headline generation.",
"Although that approach is similar to ours, Alotaiby only considered an unsupervised method based on similarity to the original text (almost the same as SimTfidf in Section 5.2), in contrast to our proposal based on learning to rank.",
"This implies that Alotaiby's method will also not perform well for our task, as shown in Section 5.4.",
"There have been several studies on extractive summarization (Kobayashi et al., 2015; Yogatama et al., 2015) based on sentence embeddings, but they were basically developed for extracting multiple sentences, which means that these methods are almost the same as SimEmb in Section 5.2 for our purpose, i.e., extraction of the best candidate.",
"This also implies that they will not be suitable for our task.",
"Furthermore, recent sophisticated neural models for extractive summarization (Cheng and Lapata, 2016; Nallapati et al., 2017) basically require large-scale paired data (e.g., article-headline) to automatically label candidates, as manual annotation is very costly.",
"However, such paired data do not always exist for real applications, as in our task described in Section 1.",
"There have been many studies using CQA data, but most of them are different from our task, i.e., dealing with answering questions (Surdeanu et al., 2008; Celikyilmaz et al., 2009; Bhaskar, 2013; Nakov et al., 2017) , retrieving similar questions (Lei et al., 2016; Romeo et al., 2016; Nakov et al., 2017) , and generating questions (Heilman and Smith, 2010) .",
"Tamura et al.",
"(2005) focused on extracting a core sentence and identifying the question type as classification tasks for answering multiple-sentence questions.",
"Although their method is useful to retrieve important information, we cannot directly use it since our task requires shorter expressions for headlines than sentences.",
"In addition, they used a support vector machine as a classifier, which is almost the same as SVM in Section 5.2, and it is not expected to be suitable for our task, as shown in Section 5.4.",
"The work of Ishigaki et al.",
"(2017) is the most related one in that they summarized lengthy questions by using both abstractive and extractive approaches.",
"Their work is promising because our task is regarded as the construction of short summaries, but the training of their models requires a lot of paired data consisting of questions and their headlines, which means that their method cannot be used to our task.",
"Conclusion We proposed an extractive headline generation method based on learning to rank for CQA that extracts the most informative substring in each question as its headline.",
"We created a dataset for our task, where headline candidates in each question are ranked using crowdsourcing.",
"Our method outperformed several baselines, including a prefix-based method, which is widely used for cases where the display area is limited, such as the push notifications on smartphones.",
"The dataset created for our headline generation task will be made publicly available 8 .",
"Although our task is basically designed for extractive summarization, this dataset can also be used for abstractive summarization as a side information for training abstractive models.",
"In future work, we will investigate how effectively our method can perform in practical situations, e.g., push notifications.",
"In addition, we will consider how to improve the change rate of our method while keeping its winning rate and how to create a useful dataset even if removing the length limitation."
]
}
|
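The similarity-based baseline SimTfidf discussed above selects the headline candidate whose TF-IDF vector is most similar to the full question. A minimal sketch of this idea, assuming pre-tokenized input (function and variable names are illustrative, not from the paper's code):

```python
import math
from collections import Counter

def tfidf(doc, df, n_docs):
    """TF-IDF vector (as a dict) for one tokenized document."""
    tf = Counter(doc)
    return {w: c * math.log((1 + n_docs) / (1 + df[w])) for w, c in tf.items()}

def cosine(u, v):
    """Cosine similarity between two sparse vectors stored as dicts."""
    dot = sum(u[w] * v.get(w, 0.0) for w in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def sim_tfidf(question, candidates, corpus):
    """Pick the candidate substring most similar to the full question."""
    df = Counter()
    for doc in corpus:
        df.update(set(doc))  # document frequency for IDF
    n = len(corpus)
    q_vec = tfidf(question, df, n)
    return max(candidates, key=lambda c: cosine(tfidf(c, df, n), q_vec))
```

This unsupervised scorer needs no labeled data, which is why it serves as a natural baseline against the learned ranker.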
{
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"4",
"5.1",
"5.2",
"5.3",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Negative Effect of Uninformative Headlines",
"Dataset Creation",
"Preparation of Headline Candidates",
"Crowdsourcing Task",
"Analysis of Crowdsourcing Results",
"Proposed Method",
"Basic Settings",
"Compared Methods",
"Evaluation Measures",
"Related Work",
"Conclusion"
]
}
|
GEM-SciDuet-train-50#paper-1080#slide-16
|
Conclusion
|
Addressed a snippet headline generation task for push notifications of CQA
Showed empirical evidence that snippet headlines are more effective than prefix headlines (2.4 times higher average answer rate)
Proposed extractive headline generation method based on learning to rank
Created dataset including headline candidates with "headline-ness" scores by crowdsourcing
Investigate effectiveness in practical situations on web service.
Make the dataset publicly available.
Copyright (C) 2019 Yahoo Japan Corporation. All Rights Reserved.
|
Addressed a snippet headline generation task for push notifications of CQA
Showed empirical evidence that snippet headlines are more effective than prefix headlines (2.4 times higher average answer rate)
Proposed extractive headline generation method based on learning to rank
Created dataset including headline candidates with "headline-ness" scores by crowdsourcing
Investigate effectiveness in practical situations on web service.
Make the dataset publicly available.
|
[] |
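The learning-to-rank approach summarized in the slide above scores headline candidates and extracts the top-ranked one. A minimal pairwise-ranking sketch of this general idea (a simple perceptron-style trainer over hand-built feature vectors, not the paper's actual model):

```python
def pairwise_rank_train(pairs, n_features, epochs=20, lr=0.1):
    """Learn a linear scorer w so that score(better) > score(worse)
    for every (better, worse) feature-vector pair."""
    w = [0.0] * n_features
    for _ in range(epochs):
        for better, worse in pairs:
            diff = [b - x for b, x in zip(better, worse)]
            margin = sum(wi * di for wi, di in zip(w, diff))
            if margin <= 0:  # pair is misordered: nudge weights toward diff
                w = [wi + lr * di for wi, di in zip(w, diff)]
    return w

def rank(candidates, w):
    """Return candidate feature vectors sorted by descending score."""
    score = lambda f: sum(wi * fi for wi, fi in zip(w, f))
    return sorted(candidates, key=score, reverse=True)
```

At prediction time, the first element of `rank(...)` corresponds to the extracted headline; the crowdsourced "headline-ness" scores supply the better/worse pairs for training.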
GEM-SciDuet-train-51#paper-1089#slide-0
|
1089
|
Categorizing and Inferring the Relationship between the Text and Image of Twitter Posts
|
Text in social media posts is frequently accompanied by images in order to provide content, supply context, or to express feelings. This paper studies how the meaning of the entire tweet is composed through the relationship between its textual content and its image. We build and release a data set of image tweets annotated with four classes which express whether the text or the image provides additional information to the other modality. We show that by combining the text and image information, we can build a machine learning approach that accurately distinguishes between the relationship types. Further, we derive insights into how these relationships are materialized through text and image content analysis and how they are impacted by user demographic traits. These methods can be used in several downstream applications including pre-training image tagging models, collecting distantly supervised data for image captioning, and can be directly used in end-user applications to optimize screen estate.
|
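The four relationship classes mentioned in the abstract arise from combining majority-vote labels of two binary annotation tasks (one about the image's contribution, one about whether the text is represented). A minimal aggregation sketch, with illustrative label strings:

```python
from collections import Counter

def majority(judgments):
    """Aggregate crowdsourced judgments into one label by majority vote."""
    return Counter(judgments).most_common(1)[0][0]

def relationship_label(image_judgments, text_judgments):
    """Combine the two binary task labels into one of the four
    text-image relationship classes."""
    img = majority(image_judgments)   # e.g. "adds" / "does not add"
    txt = majority(text_judgments)    # e.g. "represented" / "not represented"
    return f"Image {img} & Text is {txt}"
```

Aggregating several independent judgments per task before combining them helps remove annotation noise from the final four-way label.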
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200
],
"paper_content_text": [
"Introduction Social media sites have traditionally been centered around publishing textual content.",
"Recently, posting images on social media has become a very popular way of expressing content and feelings especially due to the wide availability of mobile devices and connectivity.",
"Images are currently present in a significant fraction of tweets and tweets with images get double the engagement of those without (Buffer, 2016) .",
"Thus, in addition to text, images have become key components of tweets.",
"However, little is known about how textual content is related to the images with which they appear.",
"For example, concepts or feelings mentioned in text could be illustrated or strengthened by images, text can point to the content of an image or can just provide commentary on the image content.",
"Formalizing and understanding the relationship between the two modalities -text and images -is useful in several areas: a) for NLP and computer vision research, where image and text data from tweets are used to developing data sets and methods for image captioning (Mitchell et al., 2012) or object recognition (Mahajan et al., 2018) ; b) for social scientists and psychologists trying to understand social media use; c) in browsers or apps where images that may not contain additional content in addition to the text would be replaced by a placeholder and displayed if the end-user desires to in order to op-timize screen space (see Figure 2 ).",
"Figure 1 illustrates four different ways in which the text and image of the same tweet can be related: • Figures 1(a,b) show how the image can add to the semantics of the tweet, by either providing more information than the text (Figure 1a ) or by providing the context for understanding the text (Figure 1b ); • In Figures 1(c,d) , the image only illustrates what is expressed through text, without providing any additional information.",
"Hence, in both of these cases, the text alone is sufficient to understanding the tweet's key message; • Figures 1(a,c) show examples of tweets where there is a semantic overlap between the content of the text and image: bike and sign in Figure 1a and tacos in Figure 1c ; • In Figures 1(b,d) , the textual content is not represented in the image, with the text being either a comment on the image's content (Figure 1b) or the image illustrating a feeling related to the text's content.",
"In this paper, we present a comprehensive analysis that focuses on the types of relationships between the text and image in a tweet.",
"Our contributions include: • Defining the types of relationships between the text and the image of a social media post; • Building a data set of tweets annotated with text -image relationship type; 1 • Machine learning methods that use both text and image content to predict the relationship between the two modalities; • An analysis into the author's demographic traits that are related to usage preference of textimage relationship types; • An analysis of the textual features which characterize each relationship type.",
"Related Work Task.",
"The relationship between a text and its associated image was researched in a few prior studies.",
"For general web pages, Marsh and Domas White (2003) propose a taxonomy of 49 relationship grouped in three major categories based on how similar is the image to the text ranging from little relation to going beyond the text, which forms the basis of one of our relationship dimen-sions.",
"Martinec and Salway (2005) aim to categorize text-image relationships in scientific articles from two perspectives: the relative importance of one modality compared to the other and the logico-semantic overlap.",
"Alikhani and Stone (2018) argue that understanding multimodal textimage presentation requires studying the coherence relations that organize the content.",
"Even when a single relationship is used, such as captioning, it can be expressed in multiple forms such as telic, atelic or stative .",
"Wang et al.",
"(2014) use the intuition that text and images from microposts can be associated or not or depend on one another and use this intuition in a topic model that learns topics and image tags jointly.",
"Jas and Parikh (2015) study the concept of image specificity through how similar to each other are multiple descriptions of that image.",
"However, none of these studies propose any predictive methods for text-image relationship types.",
"annotate and train models on a recipe data set (Yagcioglu et al., 2018) for the relationships between instructional text and images around the following dimensions: temporal, logical and incidental detail.",
"Chen et al.",
"(2013) study text-image relationships using social media data focusing on the distinction between images that are overall visually relevant or non-relevant to the textual content.",
"They build models using the text and image content that predict the relationship type (Chen et al., 2015) .",
"We build on this research and define an annotation scheme that focuses on each of the two modalities separately and look at both their semantic overlap and contribution to the meaning of the whole tweet.",
"Applications.",
"Several applications require to be able to automatically predict the semantic textimage relationship in the data.",
"Models for automatically generating image descriptions (Feng and Lapata, 2010; Ordonez et al., 2011; Mitchell et al., 2012; Vinyals et al., 2015; Lu et al., 2017) or predicting tags (Mahajan et al., 2018) are built using large training data sets of noisy imagetext pairs from sources such as tweets.",
"Multimodal named entity disambiguation leverages visual context vectors from social media images to aid named entity disambiguation (Moon et al., 2018) .",
"Multimodal topic labeling focuses on generating candidate labels (text or images) for a given topic and ranks them by relevance (Sorodoc et al., 2017) .",
"Several resources of images paired with descriptive captions are available, which can be used to build similarity metrics and joint semantic spaces for text and images (Young et al., 2014) .",
"However, all these assume that the text an image represent similar concepts which, as we show in this paper, is not true in Twitter.",
"Being able to classify this relationship can be useful for all above-mentioned applications.",
"Categorizing Text-Image Relationships We define the types of semantic relationships that can exist between the content of the text and the image by splitting them into two tasks for simplicity.",
"The first task is centered on the role of the text to the tweet's semantics, while the second focuses on the image's role.",
"The first task -referred to as the text task in the rest of the paper -focuses on identifying if there is semantic overlap between the context of the text and the image.",
"This task is the defined using the following guidelines: 1.",
"Some or all of the content words in the text are represented in the image (Text is represented) 2.",
"None of the content words in the text are represented in the image (Text is not represented): • None of the content words are represented in the image, or • The text is only a comment about the content of the image, or • The text expresses a feeling or emotion about the content of the image, or • The text only makes a reference to something shown in the image, or • The text is unrelated to the image Examples for this task can be seen in Figure 1 by comparing Figures 1(a ,c) (Text is represented) with Figures 1(b,d) (Text is not represented).",
"The second task -referred to as the image task in the rest of the paper -focuses on the role of the image to the semantics of the tweet and aims to identify if the image's content contributes with additional information to the meaning of the tweet beyond the text, as judged by an independent third party.",
"This task is defined and annotated using the following guidelines: 1.",
"Image has additional content that represents the meaning of the text and the image (Image adds): • Image contains other text that adds additional meaning to the text, or • Image depicts something that adds information to the text or • Image contains other entities that are referenced by the text.",
"2.",
"Image does not add additional content that represents the meaning of text+image (Image does not add).",
"Examples for the image task can be seen in Combining the labels of the two binary tasks described above gives rise to four types of text-image relationships (Image+Text Task).",
"All of the four relationship types are exemplified in Figure 1 .",
"Data Set To study the relationship between the text and image in the same social media post, we define a new annotation schema and collect a new annotated corpus.",
"To the best of our knowledge, no such corpus exists in prior research.",
"Data Sampling We use Twitter as the source of our data, as this source contains a high level of expression of thoughts, opinions and emotions (Java et al., 2007; Kouloumpis et al., 2011) .",
"It represents a platform for observing written interactions and conversations between users (Ritter et al., 2010) .",
"The tweets were deliberately randomly sampled tweets from a list of users for which several of their socio-demographic traits are known, introduced in past research .",
"This will enable us to explore if the frequency of posting tweets with a certain text-image relationship is different across socio-demographic groups.",
"We downloaded as many tweets as we could from these users using the Twitter API (up to 3,200 tweets/user per API limits).",
"We decided to annotate only tweets from within the same time range (2016) in order to reduce the influence of potential platform usage changes with time.",
"We filter out tweets that are not written in English using the langid.py tool (Lui and Baldwin, 2012) .",
"In total, 2,263 users (out of the initial 4,132) have posted tweets with at least one image in the year 2016 and were included in our analysis.",
"Our final data set contains 4,471 tweets.",
"Demographic Variables The Twitter users from the data set we sampled have self-reported the following demographic variables through a survey: gender, age, education level and annual income.",
"All users solicited for data collection were from the United States in order to limit cultural variation.",
"• Gender was considered binary 2 and coded with Female -1 and Male -0.",
"All other variables are treated as ordinal variables.",
"• Age is represented as a integer value in the 13-90 year old interval.",
"• Education level is coded as an ordinal variable with 6 values representing the highest degree obtained, with the lowest being 'No high school degree' (coded as 1) and the highest being 'Advanced Degree (e.g., PhD)' (coded as 6).",
"• Income level is coded as on ordinal variable with 8 values representing the annual income of the person, ranging from '< $20,000' to '> $200,000').",
"For a full description of the user recruitment and quality control processes, we refer the interested reader to .",
"Annotation We have collected annotations for text-image pairs from 4,471 tweets using the Figure Eight platform (formerly CrowdFlower).",
"We annotate all tweets containing both text and image using two independent annotation tasks in order to simplify the task and not to prime annotators use the outcome of one task as a indicator for the outcome of the other.",
"For quality control, 10% of annotations were test questions annotated by the authors.",
"Annotators had to maintain a minimum accuracy on test questions of 85% for the image task and 75% for the text task for their annotations to be valid.",
"Inter-annotator agreement is measured using Krippendorf's Alpha.",
"The overall Krippendorfs Alpha is 0.71 for the image task, which is in the upper part of the substantial agreement band (Artstein and Poesio, 2008) .",
"We collect 3 judgments and use majority vote to obtain the final label to further remove noise.",
"For the text task, we collected and aggregated 5 judgments as the Krippendorf's Alpha is 0.46, which is considered moderate agreement (Artstein and Poesio, 2008) .",
"The latter task was more difficult due to requiring specific world knowledge (e.g.",
"a singer mentioned in a text also present in an image) or contained information encoded in hashtags or usernames which the annotators sometimes overlooked.",
"The aggregated judgments for each task were combined to obtain the four class labels.",
"Methods Our goal is to develop methods that are capable of automatically classifying the text-image relationship in tweets.",
"We experiment with several methods which use information of four different types: demographics of the user posting the tweet, metadata from the tweet, the text of the tweet or the image of the tweet; plus a combination of them.",
"The methods we use are described in this section.",
"User Demographics User demographic features are the survey-based demographic information we have available for all users that posted the annotated tweets.",
"The use of these traits is based on the intuition that different demographic groups have different posting preferences (Pennacchiotti and Popescu, 2011; .",
"We use this approach for comparison reasons only, as in practical use cases we would normally not have access to the author's demographic traits.",
"We code the gender, age, education level and income level of the user as features and use them in a logistic regression classifier to classify the textimage relationship.",
"Tweet Metadata We experiment with using the tweet metadata as features.",
"These code if a tweet is a reply, tweet, like or neither.",
"We also add as features the tweet like count, the number of followers, friends and posts of the post's author and include them all in a logistic regression classifier.",
"These features are all available at tweet publishing time and we build a model using them to establish a more solid baseline for content based approaches.",
"Text-based Methods We use the textual content of the tweet alone to build models for predicting the text-image relationship.",
"We expect that certain textual cues will be specific to relationships even without considering the image content.",
"For example, tweets ending in an ellipsis or short comments will likely be predictive of the text not being represented in the image.",
"Surface Features.",
"We first use a range of surface features which capture more of the shallow stylistic content of the tweet.",
"We extract number of tokens, uppercase tokens, exclamations, questions, ellipsis, hashtags, @ mentions, quotes and URLs from the tweet and use them as features in a logistic regression classifier.",
"Bag of Words.",
"The most common approach for building a text-based model is using bag-ofwords features.",
"Here, we extract unigram and bigram features and use them in a logistic regression classifier with elastic net regularization (Zou and Hastie, 2005) .",
"LSTM.",
"Finally, based on recent results in text classification, we also experiment with a neural network approach which uses a Long-Short Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) network.",
"The LSTM network processes the tweet sequentially, where each word is represented by its embedding (E = 200) followed by a dense hidden layer (D = 64) and by a a ReLU activation function and dropout (0.4) The model is trained by minimizing cross entropy using the Adam optimizer (Kingma and Ba, 2014).",
"The network uses in-domain Twitter GloVe embeddings pre-trained on 2 billion tweets (Pennington et al., 2014) .",
"Image-based Methods We use the content of the tweet image alone to build models for predicting the text-image relationship.",
"Similar to text, we expect that certain image content will be predictive of text-image relationships even without considering the text content.",
"For example, images of people may be more likely to have in the text the names of those persons.",
"To analyze image content, we make use of large pre-trained neural networks for the task of object recognition on the ImageNet data set.",
"ImageNet (Deng et al., 2009 ) is a visual database developed for object recognition research and consists of 1000 object types.",
"In particular, we use the popular pre-trained InceptionNet model (Szegedy et al., 2015) , which achieved the best performance at the ImageNet Large Scale Visual Recognition Challenge 2014 to build the following two imagebased models.",
"ImageNet Classes.",
"First, we represent each image in a tweet with the probability distribution over the 1,000 ImageNet classes obtained from Inception-Net.",
"Then, we pass those features to a logistic regression classifier which is trained on our task.",
"In this setup, the network parameters remain fixed, while only the ImageNet class weights are learned in the logistic regression classifier.",
"Tuned InceptionNet.",
"Additionally, we tailored the InceptionNet network to directly predict our tasks by using the multinomial logistic loss with softmax as the final layer for our task to replace the 1,000 ImageNet classes.",
"Then, we loaded the pretrained network from (Szegedy et al., 2015) and fine-tuned the final fully-connected layer with the modified loss layers.",
"We perform this in order to directly predict our task, while also overcoming the necessity of re-extracting the entire model weights from our restricted set of images.",
"The two approaches to classification using image content based on pre-trained model on Im-ageNet have been used successfully in past research (Cinar et al., 2015) .",
"Joint Text-Image Methods Finally, we combine the textual and image information in a single model to classify the text-image relationship type, as we expect both types of content and their interaction to be useful to the task.",
"Ensemble.",
"A simple method for combining the information from both modalities is to build an ensemble classifier.",
"This is done with a logistic regression model with two features: the Bag of Words text model's predicted class probability and the Tuned InceptionNet model's predicted class probability.",
"The parameters of the model are tuned by cross validation on the training data and similar splits as the individual models.",
"LSTM + InceptionNet.",
"We also build a joint approach by concatenating the features from the final layers of our LSTM and InceptionNet models and passing them through a fully-connected (FC) feed forward neural network with one hidden layer (64 nodes).",
"The final output is our text-image relationship type.",
"We use the Adam optimizer to fine tune this network.",
"The LSTM model has the same parameters as in the text-only approach, while the InceptionNet model is initialized with the pre-trained model on the ImageNet data set.",
"Predicting Text-Image Relationship We split our data into a 80% train (3,576 tweets) and 20% test (895 tweets) stratified sample for all of our experiments.",
"Parameters were tuned using 10-fold cross-validation with the training set, and results are reported on the test set.",
"Table 1 presents the weighted F1-scores for the text task, the image task and the image+text task with all the methods described in Section 5.",
"The weighted F1 score is the weighted average of the class-level F1 scores, where the weight is the number of items in each class.",
"The majority baseline always predicts the most frequent class in each task, namely: Image does not add for the image task, Text is not represented for the text task and Image does not add & Text is not represented for the Image + Text task.",
"The models using user demographics and tweet metadata show minor improvements over the majority class baseline for both tasks.",
"When the two tasks are combined, both feature types offer only a slight increase over the baseline.",
"This shows that user factors mildly impact the frequency with which relationship types are used, which will be explored further in the analysis section.",
"The models that use tweet text as features show consistent improvements over the baseline for all three tasks.",
"The two models that use the tweet's topical content (Bag of Words and LSTM) obtain higher predictive performance over the surface features.",
"Both content based models obtain relatively similar performance, with the LSTM performing better on the image task.",
"The models which use information extracted from the image alone also consistently outperform the baseline on all three tasks.",
"Re-tuning the neural network performs substantially better than building a model directly from the ImageNet classes on the image task and narrowly outperforms the other method on the text task.",
"This is somewhat expected, as the retuning is performed on this domain specific task.",
"When comparing text and image based models across tasks, we observe that using image features obtains substantially better performance on the image task, while the text models obtain bet-ter performance on the text task.",
"This is somewhat natural, as the focus of each annotation task is on one modality and methods relying on content from that modality are more predictive alone as to what ultimately represents the text-image relationship type.",
"Our naive ensemble approach does not yield substantially better results than the best performing methods using a single modality.",
"However, by jointly modelling both modalities, we are able to obtain improvements -especially on the image task.",
"This shows that both types of information and their interaction are important to this task.",
"Methods that exploit more heavily the interaction and semantic similarity between the text and the image are left for future work.",
"We also observe that the predictive methods we described are better at classifying the image task.",
"The analysis section below will allow us to uncover more about what type of content characterizes each relationship type.",
"Analysis In this section, we aim to gain a better understanding of the type of content specific of the four textimage relationship types and about user type preferences in their usage.",
"User Analysis Socio-demographic traits of the authors of posts are known to be correlated with several social media behaviors including text (Rao et al., 2010; Pennacchiotti and Popescu, 2011; Schwartz et al., 2013; Volkova et al., 2014; Lampos et al., 2014; Preoţiuc-Pietro et al., 2015a ,b, 2016 and images (Alowibdi et al., 2013; You et al., 2014; Farseev et al., 2015; Skowron et al., 2016; Liu et al., 2016; Guntuku et al., 2017; Samani et al., 2018; Guntuku et al., 2019) .",
"We hypothesize that socio-demographic traits also play a role in the types of text-image relationships employed on Twitter.",
"To measure this, we use partial Pearson correlation where the dependent variables are one of four socio-demographic traits described in Section 4.2.",
"The independent variables indicate the average times with which the user employed a certain relationship type.",
"We code this using six different variables: two representing the two broader tasks -the percentage of tweets where image adds information and the percentage of tweets where the text is represented in the image -and four encoding each combination between the two tasks.",
"In addition, for all analyses we consider gender and age as basic human traits and control for data skew by introducing both variables as controls in partial correlation, as done in prior work (Schwartz et al., 2013; Holgate et al., 2018) .",
"When studying age and gender, we only use the other trait as the control.",
"Because we are running several statistical tests at once (24) without predefined hypotheses, we use Bonferroni correction to counteract the problem of multiple comparisons.",
"The results are presented in Table 2 .",
"We observe that age is the only user demographic trait that is significantly correlated to text-image relationship preference after controlling for multiple comparisons and other demographic traits.",
"The text-image relationship where the text is represented in the image, at least partially, is positively correlated with age (r = 0.117).",
"Further analyzing the four individual text-image relationship types reveals that older users especially prefer tweets where there is a semantic overlap between the concepts present in the text and the image, but the image contributes with additional information to the meaning of the tweet.",
"This is arguably the most conventional usage of images, where they illustrate the text and provide more details than the text could.",
"Younger users prefer most tweets where the image adds information to the meaning of the tweet, but this has no semantic overlap with the text.",
"These are usually tweets where the text represents merely a comment or a feeling expressed with the image providing the context.",
"This represents a more image-centric approach to the meaning of the tweet that is specific to younger users.",
"These correlations are controlled for gender.",
"Education was also correlated with images where the text was represented in the image (r = 0.076, p < .01, Bonferroni corrected), but this correlation did not meet the significance criteria when controlled for age to which education is moderately correlated (r = 0.302).",
"This demonstrates the importance of controlling for such factors in this type of analysis.",
"No effects were found with respect to gender or income.",
"Table 2 : Pearson correlation between user demographic traits and usage of the different text-image relationship types.",
"All correlations in bold are significant at p < .01, two-tailed t-test, Bonferroni corrected for multiple comparisons.",
"Results for gender are controlled for age and vice versa.",
"Results for education and income are controlled for age and gender.",
"Tweet Metadata Analysis We adapt a similar approach to uncover potential relationships between the text-image relationship expressed in the tweet and tweet metadata features described in Section 5.2.",
"However, after controlling for multiple comparisons, we are left with no significant correlations at p < 0.01 level.",
"Hence, we refrain from presenting and discussing any results using this feature group as significant.",
"Text Analysis Finally, we aim to identify the text and image features that characterize the four types of text-image relationship.",
"We use univariate Pearson correlation where the independent variable is each feature's normalized value in a tweet and the dependent variables are two binary indicators for the text and image tasks respectively.",
"When performed using text features, this technique was coined Differential Language Analysis (Schwartz et al., 2013 (Schwartz et al., , 2017 .",
"The results when using unigrams as features are presented in Figure 3 , 4 and 5.",
"Results for the image task (Figure 3) show that the image adds to the meaning of the tweet if words such as this, it, why, needs or want are used.",
"These words can appear in texts with the role of referencing or pointing to an entity which is only present in the image.",
"Conversely, the image does not add to the meaning of the tweet when words indicative of objects that are also described in the image are present (cat, baby, eyes or face), thus resulting in the image not adding to the meaning of the tweet.",
"A special case are tweets with birthday wishes, where a person is mentioned in text and also displayed in an image.",
"Finally, the tbt keyword and hashtag is a popular social media trend where users post nostalgic pictures of their past accompanied by their textual description.",
"The comparison between the two outcomes of the text task is presented in Figure 4 .",
"When the text and image semantically overlap, we observe words indicative of actions (i've), possessions (your) or qualitative statements (congrats, loved, excited, tried), usually about objects or persons also present in the image.",
"We also observe a few nouns (cats, hillary) indicating frequent content that is also depicted in images (NB: the tweets were collected in 2016 when the U.S. presiden-tial elections took place).",
"Analyzing this outcome jointly with the text task, we uncover a prominent theme consisting of words describing first person actions (congrats, thank, i've, saw, tell) present when the image provides facets not covered by text (Figure 5d ).",
"Several keywords from text (cat, game, winter) show types of content which are present in both image and text, but the image is merely an illustrating these concepts without adding additional information (Figure 5a) .",
"In contrast, the text is not represented in the image when it contains words specific of comments (when, lmao), questions (do, was), references (this) or ellipsis ('...'), all often referencing the content of the image as identified through data inspection.",
"References to self, objects and personal states (i, me) and feelings (miss) are also expressed in text about items or things not appearing the image from the same tweet.",
"Further exploring this result though the image task outcome, we see that the latter category of feelings about persons of objects ( Figure 5a ) -miss, happy, lit, like) are specific of when the image does not add additional information.",
"Through manual inspection of these images, they often display a meme (as in Figure 1d ) or unrelated expressions to the text's content.",
"The image adds information when the text is not represented (Figure 5c ) if the latter includes personal feelings, (me, i, i'm, want), comments (lol, lmao) and references (this, it), usually related to the image content as identified through an analysis of the data.",
"Conclusions We defined and analyzed quantitatively and qualitatively the semantic relationships between the text and the image of the same tweet using a novel annotated data set.",
"The frequency of use is influenced by the age of the poster, with younger users employing images with a more prominent role in the tweet, rather than just being redundant to the text or as a means of illustrating it.",
"We studied the correlation between the content in the text and relation with the image, highlighting a differentiation between relationship types, even if only using the text of the tweet alone.",
"We developed models that use both text and image features to classify the text-image relationship, with especially high performance (F1 = 0.81) in identifying if the image is redundant, which is immediately useful for downstream applications that maximize screen es-tate for users.",
"Future work will look deeper into using the similarity between the content of the text and image (Leong and Mihalcea, 2011), as the text task results showed room for improvements.",
"We envision that our data, task and classifiers will be useful as a preprocessing step in collecting data for training large scale models for image captioning (Feng and Lapata, 2010) or tagging (Mahajan et al., 2018) or for improving recommendations (Chen et al., 2016) by filtering out tweets where the text and image have no semantic overlap or can enable new tasks such as identifying tweets that contain creative descriptions for images."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"4.2",
"4.3",
"5",
"5.1",
"5.2",
"5.3",
"5.4",
"5.5",
"6",
"7",
"7.1",
"7.2",
"7.3",
"8"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Categorizing Text-Image Relationships",
"Data Set",
"Data Sampling",
"Demographic Variables",
"Annotation",
"Methods",
"User Demographics",
"Tweet Metadata",
"Text-based Methods",
"Image-based Methods",
"Joint Text-Image Methods",
"Predicting Text-Image Relationship",
"Analysis",
"User Analysis",
"Tweet Metadata Analysis",
"Text Analysis",
"Conclusions"
]
}
|
GEM-SciDuet-train-51#paper-1089#slide-0
|
Motivation
|
Whats the largest difference in 2010 2019
[slide shows screenshots of example tweets; the remaining OCR text is unrecoverable]
2019 Bloomberg Finance L.P. All rights reserved. Engineering
|
Whats the largest difference in 2010 2019
[slide shows screenshots of example tweets; the remaining OCR text is unrecoverable]
2019 Bloomberg Finance L.P. All rights reserved. Engineering
|
[] |
GEM-SciDuet-train-51#paper-1089#slide-1
|
1089
|
Categorizing and Inferring the Relationship between the Text and Image of Twitter Posts
|
Text in social media posts is frequently accompanied by images in order to provide content, supply context, or to express feelings. This paper studies how the meaning of the entire tweet is composed through the relationship between its textual content and its image. We build and release a data set of image tweets annotated with four classes which express whether the text or the image provides additional information to the other modality. We show that by combining the text and image information, we can build a machine learning approach that accurately distinguishes between the relationship types. Further, we derive insights into how these relationships are materialized through text and image content analysis and how they are impacted by user demographic traits. These methods can be used in several downstream applications including pre-training image tagging models, collecting distantly supervised data for image captioning, and can be directly used in end-user applications to optimize screen estate.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200
],
"paper_content_text": [
"Introduction Social media sites have traditionally been centered around publishing textual content.",
"Recently, posting images on social media has become a very popular way of expressing content and feelings especially due to the wide availability of mobile devices and connectivity.",
"Images are currently present in a significant fraction of tweets and tweets with images get double the engagement of those without (Buffer, 2016) .",
"Thus, in addition to text, images have become key components of tweets.",
"However, little is known about how textual content is related to the images with which they appear.",
"For example, concepts or feelings mentioned in text could be illustrated or strengthened by images, text can point to the content of an image or can just provide commentary on the image content.",
"Formalizing and understanding the relationship between the two modalities -text and images -is useful in several areas: a) for NLP and computer vision research, where image and text data from tweets are used to developing data sets and methods for image captioning (Mitchell et al., 2012) or object recognition (Mahajan et al., 2018) ; b) for social scientists and psychologists trying to understand social media use; c) in browsers or apps where images that may not contain additional content in addition to the text would be replaced by a placeholder and displayed if the end-user desires to in order to op-timize screen space (see Figure 2 ).",
"Figure 1 illustrates four different ways in which the text and image of the same tweet can be related: • Figures 1(a,b) show how the image can add to the semantics of the tweet, by either providing more information than the text (Figure 1a ) or by providing the context for understanding the text (Figure 1b ); • In Figures 1(c,d) , the image only illustrates what is expressed through text, without providing any additional information.",
"Hence, in both of these cases, the text alone is sufficient to understanding the tweet's key message; • Figures 1(a,c) show examples of tweets where there is a semantic overlap between the content of the text and image: bike and sign in Figure 1a and tacos in Figure 1c ; • In Figures 1(b,d) , the textual content is not represented in the image, with the text being either a comment on the image's content (Figure 1b) or the image illustrating a feeling related to the text's content.",
"In this paper, we present a comprehensive analysis that focuses on the types of relationships between the text and image in a tweet.",
"Our contributions include: • Defining the types of relationships between the text and the image of a social media post; • Building a data set of tweets annotated with text -image relationship type; 1 • Machine learning methods that use both text and image content to predict the relationship between the two modalities; • An analysis into the author's demographic traits that are related to usage preference of textimage relationship types; • An analysis of the textual features which characterize each relationship type.",
"Related Work Task.",
"The relationship between a text and its associated image was researched in a few prior studies.",
"For general web pages, Marsh and Domas White (2003) propose a taxonomy of 49 relationship grouped in three major categories based on how similar is the image to the text ranging from little relation to going beyond the text, which forms the basis of one of our relationship dimen-sions.",
"Martinec and Salway (2005) aim to categorize text-image relationships in scientific articles from two perspectives: the relative importance of one modality compared to the other and the logico-semantic overlap.",
"Alikhani and Stone (2018) argue that understanding multimodal textimage presentation requires studying the coherence relations that organize the content.",
"Even when a single relationship is used, such as captioning, it can be expressed in multiple forms such as telic, atelic or stative .",
"Wang et al.",
"(2014) use the intuition that text and images from microposts can be associated or not or depend on one another and use this intuition in a topic model that learns topics and image tags jointly.",
"Jas and Parikh (2015) study the concept of image specificity through how similar to each other are multiple descriptions of that image.",
"However, none of these studies propose any predictive methods for text-image relationship types.",
"annotate and train models on a recipe data set (Yagcioglu et al., 2018) for the relationships between instructional text and images around the following dimensions: temporal, logical and incidental detail.",
"Chen et al.",
"(2013) study text-image relationships using social media data focusing on the distinction between images that are overall visually relevant or non-relevant to the textual content.",
"They build models using the text and image content that predict the relationship type (Chen et al., 2015) .",
"We build on this research and define an annotation scheme that focuses on each of the two modalities separately and look at both their semantic overlap and contribution to the meaning of the whole tweet.",
"Applications.",
"Several applications require to be able to automatically predict the semantic textimage relationship in the data.",
"Models for automatically generating image descriptions (Feng and Lapata, 2010; Ordonez et al., 2011; Mitchell et al., 2012; Vinyals et al., 2015; Lu et al., 2017) or predicting tags (Mahajan et al., 2018) are built using large training data sets of noisy imagetext pairs from sources such as tweets.",
"Multimodal named entity disambiguation leverages visual context vectors from social media images to aid named entity disambiguation (Moon et al., 2018) .",
"Multimodal topic labeling focuses on generating candidate labels (text or images) for a given topic and ranks them by relevance (Sorodoc et al., 2017) .",
"Several resources of images paired with descriptive captions are available, which can be used to build similarity metrics and joint semantic spaces for text and images (Young et al., 2014) .",
"However, all these assume that the text an image represent similar concepts which, as we show in this paper, is not true in Twitter.",
"Being able to classify this relationship can be useful for all above-mentioned applications.",
"Categorizing Text-Image Relationships We define the types of semantic relationships that can exist between the content of the text and the image by splitting them into two tasks for simplicity.",
"The first task is centered on the role of the text to the tweet's semantics, while the second focuses on the image's role.",
"The first task -referred to as the text task in the rest of the paper -focuses on identifying if there is semantic overlap between the context of the text and the image.",
"This task is the defined using the following guidelines: 1.",
"Some or all of the content words in the text are represented in the image (Text is represented) 2.",
"None of the content words in the text are represented in the image (Text is not represented): • None of the content words are represented in the image, or • The text is only a comment about the content of the image, or • The text expresses a feeling or emotion about the content of the image, or • The text only makes a reference to something shown in the image, or • The text is unrelated to the image Examples for this task can be seen in Figure 1 by comparing Figures 1(a ,c) (Text is represented) with Figures 1(b,d) (Text is not represented).",
"The second task -referred to as the image task in the rest of the paper -focuses on the role of the image to the semantics of the tweet and aims to identify if the image's content contributes with additional information to the meaning of the tweet beyond the text, as judged by an independent third party.",
"This task is defined and annotated using the following guidelines: 1.",
"Image has additional content that represents the meaning of the text and the image (Image adds): • Image contains other text that adds additional meaning to the text, or • Image depicts something that adds information to the text or • Image contains other entities that are referenced by the text.",
"2.",
"Image does not add additional content that represents the meaning of text+image (Image does not add).",
"Examples for the image task can be seen in Combining the labels of the two binary tasks described above gives rise to four types of text-image relationships (Image+Text Task).",
"All of the four relationship types are exemplified in Figure 1 .",
"Data Set To study the relationship between the text and image in the same social media post, we define a new annotation schema and collect a new annotated corpus.",
"To the best of our knowledge, no such corpus exists in prior research.",
"Data Sampling We use Twitter as the source of our data, as this source contains a high level of expression of thoughts, opinions and emotions (Java et al., 2007; Kouloumpis et al., 2011) .",
"It represents a platform for observing written interactions and conversations between users (Ritter et al., 2010) .",
"The tweets were deliberately randomly sampled tweets from a list of users for which several of their socio-demographic traits are known, introduced in past research .",
"This will enable us to explore if the frequency of posting tweets with a certain text-image relationship is different across socio-demographic groups.",
"We downloaded as many tweets as we could from these users using the Twitter API (up to 3,200 tweets/user per API limits).",
"We decided to annotate only tweets from within the same time range (2016) in order to reduce the influence of potential platform usage changes with time.",
"We filter out tweets that are not written in English using the langid.py tool (Lui and Baldwin, 2012) .",
"In total, 2,263 users (out of the initial 4,132) have posted tweets with at least one image in the year 2016 and were included in our analysis.",
"Our final data set contains 4,471 tweets.",
"Demographic Variables The Twitter users from the data set we sampled have self-reported the following demographic variables through a survey: gender, age, education level and annual income.",
"All users solicited for data collection were from the United States in order to limit cultural variation.",
"• Gender was considered binary 2 and coded with Female -1 and Male -0.",
"All other variables are treated as ordinal variables.",
"• Age is represented as a integer value in the 13-90 year old interval.",
"• Education level is coded as an ordinal variable with 6 values representing the highest degree obtained, with the lowest being 'No high school degree' (coded as 1) and the highest being 'Advanced Degree (e.g., PhD)' (coded as 6).",
"• Income level is coded as on ordinal variable with 8 values representing the annual income of the person, ranging from '< $20,000' to '> $200,000').",
"For a full description of the user recruitment and quality control processes, we refer the interested reader to .",
"Annotation We have collected annotations for text-image pairs from 4,471 tweets using the Figure Eight platform (formerly CrowdFlower).",
"We annotate all tweets containing both text and image using two independent annotation tasks in order to simplify the task and not to prime annotators use the outcome of one task as a indicator for the outcome of the other.",
"For quality control, 10% of annotations were test questions annotated by the authors.",
"Annotators had to maintain a minimum accuracy on test questions of 85% for the image task and 75% for the text task for their annotations to be valid.",
"Inter-annotator agreement is measured using Krippendorf's Alpha.",
"The overall Krippendorfs Alpha is 0.71 for the image task, which is in the upper part of the substantial agreement band (Artstein and Poesio, 2008) .",
"We collect 3 judgments and use majority vote to obtain the final label to further remove noise.",
"For the text task, we collected and aggregated 5 judgments as the Krippendorf's Alpha is 0.46, which is considered moderate agreement (Artstein and Poesio, 2008) .",
"The latter task was more difficult due to requiring specific world knowledge (e.g.",
"a singer mentioned in a text also present in an image) or contained information encoded in hashtags or usernames which the annotators sometimes overlooked.",
"The aggregated judgments for each task were combined to obtain the four class labels.",
"Methods Our goal is to develop methods that are capable of automatically classifying the text-image relationship in tweets.",
"We experiment with several methods which use information of four different types: demographics of the user posting the tweet, metadata from the tweet, the text of the tweet or the image of the tweet; plus a combination of them.",
"The methods we use are described in this section.",
"User Demographics User demographic features are the survey-based demographic information we have available for all users that posted the annotated tweets.",
"The use of these traits is based on the intuition that different demographic groups have different posting preferences (Pennacchiotti and Popescu, 2011; .",
"We use this approach for comparison reasons only, as in practical use cases we would normally not have access to the author's demographic traits.",
"We code the gender, age, education level and income level of the user as features and use them in a logistic regression classifier to classify the textimage relationship.",
"Tweet Metadata We experiment with using the tweet metadata as features.",
"These code if a tweet is a reply, tweet, like or neither.",
"We also add as features the tweet like count, the number of followers, friends and posts of the post's author and include them all in a logistic regression classifier.",
"These features are all available at tweet publishing time and we build a model using them to establish a more solid baseline for content based approaches.",
"Text-based Methods We use the textual content of the tweet alone to build models for predicting the text-image relationship.",
"We expect that certain textual cues will be specific to relationships even without considering the image content.",
"For example, tweets ending in an ellipsis or short comments will likely be predictive of the text not being represented in the image.",
"Surface Features.",
"We first use a range of surface features which capture more of the shallow stylistic content of the tweet.",
"We extract number of tokens, uppercase tokens, exclamations, questions, ellipsis, hashtags, @ mentions, quotes and URLs from the tweet and use them as features in a logistic regression classifier.",
"Bag of Words.",
"The most common approach for building a text-based model is using bag-ofwords features.",
"Here, we extract unigram and bigram features and use them in a logistic regression classifier with elastic net regularization (Zou and Hastie, 2005) .",
"LSTM.",
"Finally, based on recent results in text classification, we also experiment with a neural network approach which uses a Long-Short Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) network.",
"The LSTM network processes the tweet sequentially, where each word is represented by its embedding (E = 200) followed by a dense hidden layer (D = 64) and by a a ReLU activation function and dropout (0.4) The model is trained by minimizing cross entropy using the Adam optimizer (Kingma and Ba, 2014).",
"The network uses in-domain Twitter GloVe embeddings pre-trained on 2 billion tweets (Pennington et al., 2014) .",
"Image-based Methods We use the content of the tweet image alone to build models for predicting the text-image relationship.",
"Similar to text, we expect that certain image content will be predictive of text-image relationships even without considering the text content.",
"For example, images of people may be more likely to have in the text the names of those persons.",
"To analyze image content, we make use of large pre-trained neural networks for the task of object recognition on the ImageNet data set.",
"ImageNet (Deng et al., 2009 ) is a visual database developed for object recognition research and consists of 1000 object types.",
"In particular, we use the popular pre-trained InceptionNet model (Szegedy et al., 2015) , which achieved the best performance at the ImageNet Large Scale Visual Recognition Challenge 2014 to build the following two imagebased models.",
"ImageNet Classes.",
"First, we represent each image in a tweet with the probability distribution over the 1,000 ImageNet classes obtained from Inception-Net.",
"Then, we pass those features to a logistic regression classifier which is trained on our task.",
"In this setup, the network parameters remain fixed, while only the ImageNet class weights are learned in the logistic regression classifier.",
"Tuned InceptionNet.",
"Additionally, we tailored the InceptionNet network to directly predict our tasks by using the multinomial logistic loss with softmax as the final layer for our task to replace the 1,000 ImageNet classes.",
"Then, we loaded the pretrained network from (Szegedy et al., 2015) and fine-tuned the final fully-connected layer with the modified loss layers.",
"We perform this in order to directly predict our task, while also overcoming the necessity of re-extracting the entire model weights from our restricted set of images.",
"The two approaches to classification using image content based on pre-trained model on Im-ageNet have been used successfully in past research (Cinar et al., 2015) .",
"Joint Text-Image Methods Finally, we combine the textual and image information in a single model to classify the text-image relationship type, as we expect both types of content and their interaction to be useful to the task.",
"Ensemble.",
"A simple method for combining the information from both modalities is to build an ensemble classifier.",
"This is done with a logistic regression model with two features: the Bag of Words text model's predicted class probability and the Tuned InceptionNet model's predicted class probability.",
"The parameters of the model are tuned by cross validation on the training data and similar splits as the individual models.",
"LSTM + InceptionNet.",
"We also build a joint approach by concatenating the features from the final layers of our LSTM and InceptionNet models and passing them through a fully-connected (FC) feed forward neural network with one hidden layer (64 nodes).",
"The final output is our text-image relationship type.",
"We use the Adam optimizer to fine tune this network.",
"The LSTM model has the same parameters as in the text-only approach, while the InceptionNet model is initialized with the pre-trained model on the ImageNet data set.",
"Predicting Text-Image Relationship We split our data into a 80% train (3,576 tweets) and 20% test (895 tweets) stratified sample for all of our experiments.",
"Parameters were tuned using 10-fold cross-validation with the training set, and results are reported on the test set.",
"Table 1 presents the weighted F1-scores for the text task, the image task and the image+text task with all the methods described in Section 5.",
"The weighted F1 score is the weighted average of the class-level F1 scores, where the weight is the number of items in each class.",
"The majority baseline always predicts the most frequent class in each task, namely: Image does not add for the image task, Text is not represented for the text task and Image does not add & Text is not represented for the Image + Text task.",
"The models using user demographics and tweet metadata show minor improvements over the majority class baseline for both tasks.",
"When the two tasks are combined, both feature types offer only a slight increase over the baseline.",
"This shows that user factors mildly impact the frequency with which relationship types are used, which will be explored further in the analysis section.",
"The models that use tweet text as features show consistent improvements over the baseline for all three tasks.",
"The two models that use the tweet's topical content (Bag of Words and LSTM) obtain higher predictive performance over the surface features.",
"Both content based models obtain relatively similar performance, with the LSTM performing better on the image task.",
"The models which use information extracted from the image alone also consistently outperform the baseline on all three tasks.",
"Re-tuning the neural network performs substantially better than building a model directly from the ImageNet classes on the image task and narrowly outperforms the other method on the text task.",
"This is somewhat expected, as the retuning is performed on this domain specific task.",
"When comparing text and image based models across tasks, we observe that using image features obtains substantially better performance on the image task, while the text models obtain bet-ter performance on the text task.",
"This is somewhat natural, as the focus of each annotation task is on one modality and methods relying on content from that modality are more predictive alone as to what ultimately represents the text-image relationship type.",
"Our naive ensemble approach does not yield substantially better results than the best performing methods using a single modality.",
"However, by jointly modelling both modalities, we are able to obtain improvements -especially on the image task.",
"This shows that both types of information and their interaction are important to this task.",
"Methods that exploit more heavily the interaction and semantic similarity between the text and the image are left for future work.",
"We also observe that the predictive methods we described are better at classifying the image task.",
"The analysis section below will allow us to uncover more about what type of content characterizes each relationship type.",
"Analysis In this section, we aim to gain a better understanding of the type of content specific of the four textimage relationship types and about user type preferences in their usage.",
"User Analysis Socio-demographic traits of the authors of posts are known to be correlated with several social media behaviors including text (Rao et al., 2010; Pennacchiotti and Popescu, 2011; Schwartz et al., 2013; Volkova et al., 2014; Lampos et al., 2014; Preoţiuc-Pietro et al., 2015a ,b, 2016 and images (Alowibdi et al., 2013; You et al., 2014; Farseev et al., 2015; Skowron et al., 2016; Liu et al., 2016; Guntuku et al., 2017; Samani et al., 2018; Guntuku et al., 2019) .",
"We hypothesize that socio-demographic traits also play a role in the types of text-image relationships employed on Twitter.",
"To measure this, we use partial Pearson correlation where the dependent variables are one of four socio-demographic traits described in Section 4.2.",
"The independent variables indicate the average times with which the user employed a certain relationship type.",
"We code this using six different variables: two representing the two broader tasks -the percentage of tweets where image adds information and the percentage of tweets where the text is represented in the image -and four encoding each combination between the two tasks.",
"In addition, for all analyses we consider gender and age as basic human traits and control for data skew by introducing both variables as controls in partial correlation, as done in prior work (Schwartz et al., 2013; Holgate et al., 2018) .",
"When studying age and gender, we only use the other trait as the control.",
"Because we are running several statistical tests at once (24) without predefined hypotheses, we use Bonferroni correction to counteract the problem of multiple comparisons.",
"The results are presented in Table 2 .",
"We observe that age is the only user demographic trait that is significantly correlated to text-image relationship preference after controlling for multiple comparisons and other demographic traits.",
"The text-image relationship where the text is represented in the image, at least partially, is positively correlated with age (r = 0.117).",
"Further analyzing the four individual text-image relationship types reveals that older users especially prefer tweets where there is a semantic overlap between the concepts present in the text and the image, but the image contributes with additional information to the meaning of the tweet.",
"This is arguably the most conventional usage of images, where they illustrate the text and provide more details than the text could.",
"Younger users prefer most tweets where the image adds information to the meaning of the tweet, but this has no semantic overlap with the text.",
"These are usually tweets where the text represents merely a comment or a feeling expressed with the image providing the context.",
"This represents a more image-centric approach to the meaning of the tweet that is specific to younger users.",
"These correlations are controlled for gender.",
"Education was also correlated with images where the text was represented in the image (r = 0.076, p < .01, Bonferroni corrected), but this correlation did not meet the significance criteria when controlled for age to which education is moderately correlated (r = 0.302).",
"This demonstrates the importance of controlling for such factors in this type of analysis.",
"No effects were found with respect to gender or income.",
"Table 2 : Pearson correlation between user demographic traits and usage of the different text-image relationship types.",
"All correlations in bold are significant at p < .01, two-tailed t-test, Bonferroni corrected for multiple comparisons.",
"Results for gender are controlled for age and vice versa.",
"Results for education and income are controlled for age and gender.",
"Tweet Metadata Analysis We adapt a similar approach to uncover potential relationships between the text-image relationship expressed in the tweet and tweet metadata features described in Section 5.2.",
"However, after controlling for multiple comparisons, we are left with no significant correlations at p < 0.01 level.",
"Hence, we refrain from presenting and discussing any results using this feature group as significant.",
"Text Analysis Finally, we aim to identify the text and image features that characterize the four types of text-image relationship.",
"We use univariate Pearson correlation where the independent variable is each feature's normalized value in a tweet and the dependent variables are two binary indicators for the text and image tasks respectively.",
"When performed using text features, this technique was coined Differential Language Analysis (Schwartz et al., 2013 (Schwartz et al., , 2017 .",
"The results when using unigrams as features are presented in Figure 3 , 4 and 5.",
"Results for the image task (Figure 3) show that the image adds to the meaning of the tweet if words such as this, it, why, needs or want are used.",
"These words can appear in texts with the role of referencing or pointing to an entity which is only present in the image.",
"Conversely, the image does not add to the meaning of the tweet when words indicative of objects that are also described in the image are present (cat, baby, eyes or face), thus resulting in the image not adding to the meaning of the tweet.",
"A special case are tweets with birthday wishes, where a person is mentioned in text and also displayed in an image.",
"Finally, the tbt keyword and hashtag is a popular social media trend where users post nostalgic pictures of their past accompanied by their textual description.",
"The comparison between the two outcomes of the text task is presented in Figure 4 .",
"When the text and image semantically overlap, we observe words indicative of actions (i've), possessions (your) or qualitative statements (congrats, loved, excited, tried), usually about objects or persons also present in the image.",
"We also observe a few nouns (cats, hillary) indicating frequent content that is also depicted in images (NB: the tweets were collected in 2016 when the U.S. presiden-tial elections took place).",
"Analyzing this outcome jointly with the text task, we uncover a prominent theme consisting of words describing first person actions (congrats, thank, i've, saw, tell) present when the image provides facets not covered by text (Figure 5d ).",
"Several keywords from text (cat, game, winter) show types of content which are present in both image and text, but the image is merely an illustrating these concepts without adding additional information (Figure 5a) .",
"In contrast, the text is not represented in the image when it contains words specific of comments (when, lmao), questions (do, was), references (this) or ellipsis ('...'), all often referencing the content of the image as identified through data inspection.",
"References to self, objects and personal states (i, me) and feelings (miss) are also expressed in text about items or things not appearing the image from the same tweet.",
"Further exploring this result though the image task outcome, we see that the latter category of feelings about persons of objects ( Figure 5a ) -miss, happy, lit, like) are specific of when the image does not add additional information.",
"Through manual inspection of these images, they often display a meme (as in Figure 1d ) or unrelated expressions to the text's content.",
"The image adds information when the text is not represented (Figure 5c ) if the latter includes personal feelings, (me, i, i'm, want), comments (lol, lmao) and references (this, it), usually related to the image content as identified through an analysis of the data.",
"Conclusions We defined and analyzed quantitatively and qualitatively the semantic relationships between the text and the image of the same tweet using a novel annotated data set.",
"The frequency of use is influenced by the age of the poster, with younger users employing images with a more prominent role in the tweet, rather than just being redundant to the text or as a means of illustrating it.",
"We studied the correlation between the content in the text and relation with the image, highlighting a differentiation between relationship types, even if only using the text of the tweet alone.",
"We developed models that use both text and image features to classify the text-image relationship, with especially high performance (F1 = 0.81) in identifying if the image is redundant, which is immediately useful for downstream applications that maximize screen es-tate for users.",
"Future work will look deeper into using the similarity between the content of the text and image (Leong and Mihalcea, 2011), as the text task results showed room for improvements.",
"We envision that our data, task and classifiers will be useful as a preprocessing step in collecting data for training large scale models for image captioning (Feng and Lapata, 2010) or tagging (Mahajan et al., 2018) or for improving recommendations (Chen et al., 2016) by filtering out tweets where the text and image have no semantic overlap or can enable new tasks such as identifying tweets that contain creative descriptions for images."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"4.2",
"4.3",
"5",
"5.1",
"5.2",
"5.3",
"5.4",
"5.5",
"6",
"7",
"7.1",
"7.2",
"7.3",
"8"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Categorizing Text-Image Relationships",
"Data Set",
"Data Sampling",
"Demographic Variables",
"Annotation",
"Methods",
"User Demographics",
"Tweet Metadata",
"Text-based Methods",
"Image-based Methods",
"Joint Text-Image Methods",
"Predicting Text-Image Relationship",
"Analysis",
"User Analysis",
"Tweet Metadata Analysis",
"Text Analysis",
"Conclusions"
]
}
|
GEM-SciDuet-train-51#paper-1089#slide-1
|
Applications
|
2019 Bloomberg Finance L.P. All rights reserved.
|
2019 Bloomberg Finance L.P. All rights reserved.
|
[] |
GEM-SciDuet-train-51#paper-1089#slide-2
|
1089
|
Categorizing and Inferring the Relationship between the Text and Image of Twitter Posts
|
Text in social media posts is frequently accompanied by images in order to provide content, supply context, or to express feelings. This paper studies how the meaning of the entire tweet is composed through the relationship between its textual content and its image. We build and release a data set of image tweets annotated with four classes which express whether the text or the image provides additional information to the other modality. We show that by combining the text and image information, we can build a machine learning approach that accurately distinguishes between the relationship types. Further, we derive insights into how these relationships are materialized through text and image content analysis and how they are impacted by user demographic traits. These methods can be used in several downstream applications including pre-training image tagging models, collecting distantly supervised data for image captioning, and can be directly used in end-user applications to optimize screen estate.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200
],
"paper_content_text": [
"Introduction Social media sites have traditionally been centered around publishing textual content.",
"Recently, posting images on social media has become a very popular way of expressing content and feelings especially due to the wide availability of mobile devices and connectivity.",
"Images are currently present in a significant fraction of tweets and tweets with images get double the engagement of those without (Buffer, 2016) .",
"Thus, in addition to text, images have become key components of tweets.",
"However, little is known about how textual content is related to the images with which they appear.",
"For example, concepts or feelings mentioned in text could be illustrated or strengthened by images, text can point to the content of an image or can just provide commentary on the image content.",
"Formalizing and understanding the relationship between the two modalities -text and images -is useful in several areas: a) for NLP and computer vision research, where image and text data from tweets are used to developing data sets and methods for image captioning (Mitchell et al., 2012) or object recognition (Mahajan et al., 2018) ; b) for social scientists and psychologists trying to understand social media use; c) in browsers or apps where images that may not contain additional content in addition to the text would be replaced by a placeholder and displayed if the end-user desires to in order to op-timize screen space (see Figure 2 ).",
"Figure 1 illustrates four different ways in which the text and image of the same tweet can be related: • Figures 1(a,b) show how the image can add to the semantics of the tweet, by either providing more information than the text (Figure 1a ) or by providing the context for understanding the text (Figure 1b ); • In Figures 1(c,d) , the image only illustrates what is expressed through text, without providing any additional information.",
"Hence, in both of these cases, the text alone is sufficient to understanding the tweet's key message; • Figures 1(a,c) show examples of tweets where there is a semantic overlap between the content of the text and image: bike and sign in Figure 1a and tacos in Figure 1c ; • In Figures 1(b,d) , the textual content is not represented in the image, with the text being either a comment on the image's content (Figure 1b) or the image illustrating a feeling related to the text's content.",
"In this paper, we present a comprehensive analysis that focuses on the types of relationships between the text and image in a tweet.",
"Our contributions include: • Defining the types of relationships between the text and the image of a social media post; • Building a data set of tweets annotated with text -image relationship type; 1 • Machine learning methods that use both text and image content to predict the relationship between the two modalities; • An analysis into the author's demographic traits that are related to usage preference of textimage relationship types; • An analysis of the textual features which characterize each relationship type.",
"Related Work Task.",
"The relationship between a text and its associated image was researched in a few prior studies.",
"For general web pages, Marsh and Domas White (2003) propose a taxonomy of 49 relationship grouped in three major categories based on how similar is the image to the text ranging from little relation to going beyond the text, which forms the basis of one of our relationship dimen-sions.",
"Martinec and Salway (2005) aim to categorize text-image relationships in scientific articles from two perspectives: the relative importance of one modality compared to the other and the logico-semantic overlap.",
"Alikhani and Stone (2018) argue that understanding multimodal textimage presentation requires studying the coherence relations that organize the content.",
"Even when a single relationship is used, such as captioning, it can be expressed in multiple forms such as telic, atelic or stative .",
"Wang et al.",
"(2014) use the intuition that text and images from microposts can be associated or not or depend on one another and use this intuition in a topic model that learns topics and image tags jointly.",
"Jas and Parikh (2015) study the concept of image specificity through how similar to each other are multiple descriptions of that image.",
"However, none of these studies propose any predictive methods for text-image relationship types.",
"annotate and train models on a recipe data set (Yagcioglu et al., 2018) for the relationships between instructional text and images around the following dimensions: temporal, logical and incidental detail.",
"Chen et al.",
"(2013) study text-image relationships using social media data focusing on the distinction between images that are overall visually relevant or non-relevant to the textual content.",
"They build models using the text and image content that predict the relationship type (Chen et al., 2015) .",
"We build on this research and define an annotation scheme that focuses on each of the two modalities separately and look at both their semantic overlap and contribution to the meaning of the whole tweet.",
"Applications.",
"Several applications require to be able to automatically predict the semantic textimage relationship in the data.",
"Models for automatically generating image descriptions (Feng and Lapata, 2010; Ordonez et al., 2011; Mitchell et al., 2012; Vinyals et al., 2015; Lu et al., 2017) or predicting tags (Mahajan et al., 2018) are built using large training data sets of noisy imagetext pairs from sources such as tweets.",
"Multimodal named entity disambiguation leverages visual context vectors from social media images to aid named entity disambiguation (Moon et al., 2018) .",
"Multimodal topic labeling focuses on generating candidate labels (text or images) for a given topic and ranks them by relevance (Sorodoc et al., 2017) .",
"Several resources of images paired with descriptive captions are available, which can be used to build similarity metrics and joint semantic spaces for text and images (Young et al., 2014) .",
"However, all these assume that the text an image represent similar concepts which, as we show in this paper, is not true in Twitter.",
"Being able to classify this relationship can be useful for all above-mentioned applications.",
"Categorizing Text-Image Relationships We define the types of semantic relationships that can exist between the content of the text and the image by splitting them into two tasks for simplicity.",
"The first task is centered on the role of the text to the tweet's semantics, while the second focuses on the image's role.",
"The first task -referred to as the text task in the rest of the paper -focuses on identifying if there is semantic overlap between the context of the text and the image.",
"This task is the defined using the following guidelines: 1.",
"Some or all of the content words in the text are represented in the image (Text is represented) 2.",
"None of the content words in the text are represented in the image (Text is not represented): • None of the content words are represented in the image, or • The text is only a comment about the content of the image, or • The text expresses a feeling or emotion about the content of the image, or • The text only makes a reference to something shown in the image, or • The text is unrelated to the image Examples for this task can be seen in Figure 1 by comparing Figures 1(a ,c) (Text is represented) with Figures 1(b,d) (Text is not represented).",
"The second task -referred to as the image task in the rest of the paper -focuses on the role of the image to the semantics of the tweet and aims to identify if the image's content contributes with additional information to the meaning of the tweet beyond the text, as judged by an independent third party.",
"This task is defined and annotated using the following guidelines: 1.",
"Image has additional content that represents the meaning of the text and the image (Image adds): • Image contains other text that adds additional meaning to the text, or • Image depicts something that adds information to the text or • Image contains other entities that are referenced by the text.",
"2.",
"Image does not add additional content that represents the meaning of text+image (Image does not add).",
"Examples for the image task can be seen in Combining the labels of the two binary tasks described above gives rise to four types of text-image relationships (Image+Text Task).",
"All of the four relationship types are exemplified in Figure 1 .",
"Data Set To study the relationship between the text and image in the same social media post, we define a new annotation schema and collect a new annotated corpus.",
"To the best of our knowledge, no such corpus exists in prior research.",
"Data Sampling We use Twitter as the source of our data, as this source contains a high level of expression of thoughts, opinions and emotions (Java et al., 2007; Kouloumpis et al., 2011) .",
"It represents a platform for observing written interactions and conversations between users (Ritter et al., 2010) .",
"The tweets were deliberately randomly sampled tweets from a list of users for which several of their socio-demographic traits are known, introduced in past research .",
"This will enable us to explore if the frequency of posting tweets with a certain text-image relationship is different across socio-demographic groups.",
"We downloaded as many tweets as we could from these users using the Twitter API (up to 3,200 tweets/user per API limits).",
"We decided to annotate only tweets from within the same time range (2016) in order to reduce the influence of potential platform usage changes with time.",
"We filter out tweets that are not written in English using the langid.py tool (Lui and Baldwin, 2012) .",
"In total, 2,263 users (out of the initial 4,132) have posted tweets with at least one image in the year 2016 and were included in our analysis.",
"Our final data set contains 4,471 tweets.",
"Demographic Variables The Twitter users from the data set we sampled have self-reported the following demographic variables through a survey: gender, age, education level and annual income.",
"All users solicited for data collection were from the United States in order to limit cultural variation.",
"• Gender was considered binary 2 and coded with Female -1 and Male -0.",
"All other variables are treated as ordinal variables.",
"• Age is represented as a integer value in the 13-90 year old interval.",
"• Education level is coded as an ordinal variable with 6 values representing the highest degree obtained, with the lowest being 'No high school degree' (coded as 1) and the highest being 'Advanced Degree (e.g., PhD)' (coded as 6).",
"• Income level is coded as on ordinal variable with 8 values representing the annual income of the person, ranging from '< $20,000' to '> $200,000').",
"For a full description of the user recruitment and quality control processes, we refer the interested reader to .",
"Annotation We have collected annotations for text-image pairs from 4,471 tweets using the Figure Eight platform (formerly CrowdFlower).",
"We annotate all tweets containing both text and image using two independent annotation tasks in order to simplify the task and not to prime annotators use the outcome of one task as a indicator for the outcome of the other.",
"For quality control, 10% of annotations were test questions annotated by the authors.",
"Annotators had to maintain a minimum accuracy on test questions of 85% for the image task and 75% for the text task for their annotations to be valid.",
"Inter-annotator agreement is measured using Krippendorf's Alpha.",
"The overall Krippendorfs Alpha is 0.71 for the image task, which is in the upper part of the substantial agreement band (Artstein and Poesio, 2008) .",
"We collect 3 judgments and use majority vote to obtain the final label to further remove noise.",
"For the text task, we collected and aggregated 5 judgments as the Krippendorf's Alpha is 0.46, which is considered moderate agreement (Artstein and Poesio, 2008) .",
"The latter task was more difficult due to requiring specific world knowledge (e.g.",
"a singer mentioned in a text also present in an image) or contained information encoded in hashtags or usernames which the annotators sometimes overlooked.",
"The aggregated judgments for each task were combined to obtain the four class labels.",
"Methods Our goal is to develop methods that are capable of automatically classifying the text-image relationship in tweets.",
"We experiment with several methods which use information of four different types: demographics of the user posting the tweet, metadata from the tweet, the text of the tweet or the image of the tweet; plus a combination of them.",
"The methods we use are described in this section.",
"User Demographics User demographic features are the survey-based demographic information we have available for all users that posted the annotated tweets.",
"The use of these traits is based on the intuition that different demographic groups have different posting preferences (Pennacchiotti and Popescu, 2011; .",
"We use this approach for comparison reasons only, as in practical use cases we would normally not have access to the author's demographic traits.",
"We code the gender, age, education level and income level of the user as features and use them in a logistic regression classifier to classify the textimage relationship.",
"Tweet Metadata We experiment with using the tweet metadata as features.",
"These code if a tweet is a reply, tweet, like or neither.",
"We also add as features the tweet like count, the number of followers, friends and posts of the post's author and include them all in a logistic regression classifier.",
"These features are all available at tweet publishing time and we build a model using them to establish a more solid baseline for content based approaches.",
"Text-based Methods We use the textual content of the tweet alone to build models for predicting the text-image relationship.",
"We expect that certain textual cues will be specific to relationships even without considering the image content.",
"For example, tweets ending in an ellipsis or short comments will likely be predictive of the text not being represented in the image.",
"Surface Features.",
"We first use a range of surface features which capture more of the shallow stylistic content of the tweet.",
"We extract number of tokens, uppercase tokens, exclamations, questions, ellipsis, hashtags, @ mentions, quotes and URLs from the tweet and use them as features in a logistic regression classifier.",
"Bag of Words.",
"The most common approach for building a text-based model is using bag-ofwords features.",
"Here, we extract unigram and bigram features and use them in a logistic regression classifier with elastic net regularization (Zou and Hastie, 2005) .",
"LSTM.",
"Finally, based on recent results in text classification, we also experiment with a neural network approach which uses a Long-Short Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) network.",
"The LSTM network processes the tweet sequentially, where each word is represented by its embedding (E = 200) followed by a dense hidden layer (D = 64) and by a a ReLU activation function and dropout (0.4) The model is trained by minimizing cross entropy using the Adam optimizer (Kingma and Ba, 2014).",
"The network uses in-domain Twitter GloVe embeddings pre-trained on 2 billion tweets (Pennington et al., 2014) .",
"Image-based Methods We use the content of the tweet image alone to build models for predicting the text-image relationship.",
"Similar to text, we expect that certain image content will be predictive of text-image relationships even without considering the text content.",
"For example, images of people may be more likely to have in the text the names of those persons.",
"To analyze image content, we make use of large pre-trained neural networks for the task of object recognition on the ImageNet data set.",
"ImageNet (Deng et al., 2009 ) is a visual database developed for object recognition research and consists of 1000 object types.",
"In particular, we use the popular pre-trained InceptionNet model (Szegedy et al., 2015) , which achieved the best performance at the ImageNet Large Scale Visual Recognition Challenge 2014 to build the following two imagebased models.",
"ImageNet Classes.",
"First, we represent each image in a tweet with the probability distribution over the 1,000 ImageNet classes obtained from Inception-Net.",
"Then, we pass those features to a logistic regression classifier which is trained on our task.",
"In this setup, the network parameters remain fixed, while only the ImageNet class weights are learned in the logistic regression classifier.",
"Tuned InceptionNet.",
"Additionally, we tailored the InceptionNet network to directly predict our tasks by using the multinomial logistic loss with softmax as the final layer for our task to replace the 1,000 ImageNet classes.",
"Then, we loaded the pretrained network from (Szegedy et al., 2015) and fine-tuned the final fully-connected layer with the modified loss layers.",
"We perform this in order to directly predict our task, while also overcoming the necessity of re-extracting the entire model weights from our restricted set of images.",
"The two approaches to classification using image content based on pre-trained model on Im-ageNet have been used successfully in past research (Cinar et al., 2015) .",
"Joint Text-Image Methods Finally, we combine the textual and image information in a single model to classify the text-image relationship type, as we expect both types of content and their interaction to be useful to the task.",
"Ensemble.",
"A simple method for combining the information from both modalities is to build an ensemble classifier.",
"This is done with a logistic regression model with two features: the Bag of Words text model's predicted class probability and the Tuned InceptionNet model's predicted class probability.",
"The parameters of the model are tuned by cross validation on the training data and similar splits as the individual models.",
"LSTM + InceptionNet.",
"We also build a joint approach by concatenating the features from the final layers of our LSTM and InceptionNet models and passing them through a fully-connected (FC) feed forward neural network with one hidden layer (64 nodes).",
"The final output is our text-image relationship type.",
"We use the Adam optimizer to fine tune this network.",
"The LSTM model has the same parameters as in the text-only approach, while the InceptionNet model is initialized with the pre-trained model on the ImageNet data set.",
"Predicting Text-Image Relationship We split our data into an 80% train (3,576 tweets) and 20% test (895 tweets) stratified sample for all of our experiments.",
"Parameters were tuned using 10-fold cross-validation with the training set, and results are reported on the test set.",
"Table 1 presents the weighted F1-scores for the text task, the image task and the image+text task with all the methods described in Section 5.",
"The weighted F1 score is the weighted average of the class-level F1 scores, where the weight is the number of items in each class.",
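Illustrative aside (not from the paper): the support-weighted F1 score described above can be sketched in pure Python. The example labels below are invented for illustration.

```python
from collections import Counter

def weighted_f1(y_true, y_pred):
    """Support-weighted average of per-class F1 scores: each class's F1
    is weighted by the number of items in that class."""
    support = Counter(y_true)
    total = len(y_true)
    score = 0.0
    for cls, n in support.items():
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p == cls)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != cls and p == cls)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p != cls)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        score += (n / total) * f1
    return score

score = weighted_f1(["a", "a", "b", "b"], ["a", "b", "b", "b"])  # ≈ 0.733
```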
"The majority baseline always predicts the most frequent class in each task, namely: Image does not add for the image task, Text is not represented for the text task and Image does not add & Text is not represented for the Image + Text task.",
"The models using user demographics and tweet metadata show minor improvements over the majority class baseline for both tasks.",
"When the two tasks are combined, both feature types offer only a slight increase over the baseline.",
"This shows that user factors mildly impact the frequency with which relationship types are used, which will be explored further in the analysis section.",
"The models that use tweet text as features show consistent improvements over the baseline for all three tasks.",
"The two models that use the tweet's topical content (Bag of Words and LSTM) obtain higher predictive performance over the surface features.",
"Both content based models obtain relatively similar performance, with the LSTM performing better on the image task.",
"The models which use information extracted from the image alone also consistently outperform the baseline on all three tasks.",
"Re-tuning the neural network performs substantially better than building a model directly from the ImageNet classes on the image task and narrowly outperforms the other method on the text task.",
"This is somewhat expected, as the re-tuning is performed on this domain-specific task.",
"When comparing text- and image-based models across tasks, we observe that using image features obtains substantially better performance on the image task, while the text models obtain better performance on the text task.",
"This is somewhat natural, as the focus of each annotation task is on one modality and methods relying on content from that modality are more predictive alone as to what ultimately represents the text-image relationship type.",
"Our naive ensemble approach does not yield substantially better results than the best performing methods using a single modality.",
"However, by jointly modelling both modalities, we are able to obtain improvements -especially on the image task.",
"This shows that both types of information and their interaction are important to this task.",
"Methods that exploit the interaction and semantic similarity between the text and the image more heavily are left for future work.",
"We also observe that the predictive methods we described are better at classifying the image task.",
"The analysis section below will allow us to uncover more about what type of content characterizes each relationship type.",
"Analysis In this section, we aim to gain a better understanding of the type of content specific to each of the four text-image relationship types and of users' preferences in their usage.",
"User Analysis Socio-demographic traits of the authors of posts are known to be correlated with several social media behaviors, including text (Rao et al., 2010; Pennacchiotti and Popescu, 2011; Schwartz et al., 2013; Volkova et al., 2014; Lampos et al., 2014; Preoţiuc-Pietro et al., 2015a,b, 2016) and images (Alowibdi et al., 2013; You et al., 2014; Farseev et al., 2015; Skowron et al., 2016; Liu et al., 2016; Guntuku et al., 2017; Samani et al., 2018; Guntuku et al., 2019).",
"We hypothesize that socio-demographic traits also play a role in the types of text-image relationships employed on Twitter.",
"To measure this, we use partial Pearson correlation where the dependent variables are one of four socio-demographic traits described in Section 4.2.",
"The independent variables indicate how frequently the user employed a certain relationship type.",
"We code this using six different variables: two representing the two broader tasks -the percentage of tweets where image adds information and the percentage of tweets where the text is represented in the image -and four encoding each combination between the two tasks.",
"In addition, for all analyses we consider gender and age as basic human traits and control for data skew by introducing both variables as controls in partial correlation, as done in prior work (Schwartz et al., 2013; Holgate et al., 2018) .",
"When studying age and gender, we only use the other trait as the control.",
"Because we are running several statistical tests at once (24) without predefined hypotheses, we use Bonferroni correction to counteract the problem of multiple comparisons.",
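Illustrative aside (not from the paper): a pure-Python sketch of partial Pearson correlation with a single control variable (regress the control out of both variables, then correlate the residuals) and of the Bonferroni-corrected threshold for 24 tests at an overall alpha of 0.01. The toy data are hypothetical.

```python
def pearson(x, y):
    """Pearson correlation coefficient, pure Python."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def residuals(y, control):
    """Residuals of y after removing a single control variable
    via simple least-squares regression."""
    n = len(y)
    mc, my = sum(control) / n, sum(y) / n
    beta = (sum((c - mc) * (b - my) for c, b in zip(control, y))
            / sum((c - mc) ** 2 for c in control))
    return [b - (my + beta * (c - mc)) for c, b in zip(control, y)]

def partial_pearson(x, y, control):
    """Correlation between x and y after controlling for one covariate."""
    return pearson(residuals(x, control), residuals(y, control))

# Bonferroni correction: with 24 tests and an overall alpha of 0.01,
# each individual test must reach p < 0.01 / 24
bonferroni_alpha = 0.01 / 24

# Hypothetical toy data: x and y remain perfectly related
# even after removing the shared control
r = partial_pearson([1, 2, 3, 4], [2, 4, 6, 8], [1, 1, 2, 2])
```

With several controls one would regress on all of them jointly; the single-control case above is enough to show the residual-correlation idea.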
"The results are presented in Table 2 .",
"We observe that age is the only user demographic trait that is significantly correlated to text-image relationship preference after controlling for multiple comparisons and other demographic traits.",
"The text-image relationship where the text is represented in the image, at least partially, is positively correlated with age (r = 0.117).",
"Further analyzing the four individual text-image relationship types reveals that older users especially prefer tweets where there is a semantic overlap between the concepts present in the text and the image, but the image contributes with additional information to the meaning of the tweet.",
"This is arguably the most conventional usage of images, where they illustrate the text and provide more details than the text could.",
"Younger users most prefer tweets where the image adds information to the meaning of the tweet but has no semantic overlap with the text.",
"These are usually tweets where the text represents merely a comment or a feeling expressed with the image providing the context.",
"This represents a more image-centric approach to the meaning of the tweet that is specific to younger users.",
"These correlations are controlled for gender.",
"Education was also correlated with images where the text was represented in the image (r = 0.076, p < .01, Bonferroni corrected), but this correlation did not meet the significance criteria when controlled for age to which education is moderately correlated (r = 0.302).",
"This demonstrates the importance of controlling for such factors in this type of analysis.",
"No effects were found with respect to gender or income.",
"Table 2 : Pearson correlation between user demographic traits and usage of the different text-image relationship types.",
"All correlations in bold are significant at p < .01, two-tailed t-test, Bonferroni corrected for multiple comparisons.",
"Results for gender are controlled for age and vice versa.",
"Results for education and income are controlled for age and gender.",
"Tweet Metadata Analysis We adopt a similar approach to uncover potential relationships between the text-image relationship expressed in the tweet and the tweet metadata features described in Section 5.2.",
"However, after controlling for multiple comparisons, we are left with no significant correlations at p < 0.01 level.",
"Hence, we refrain from presenting and discussing any results using this feature group as significant.",
"Text Analysis Finally, we aim to identify the text and image features that characterize the four types of text-image relationship.",
"We use univariate Pearson correlation where the independent variable is each feature's normalized value in a tweet and the dependent variables are two binary indicators for the text and image tasks respectively.",
"When performed using text features, this technique was coined Differential Language Analysis (Schwartz et al., 2013, 2017).",
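Illustrative aside (not from the paper): a pure-Python sketch of the univariate analysis described above, correlating each unigram's normalized per-tweet frequency with a binary task outcome. The tiny example tweets and labels are invented for illustration only.

```python
def pearson(x, y):
    """Pearson correlation coefficient, pure Python."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def word_outcome_correlations(tweets, labels):
    """For each unigram, correlate its normalized frequency in each tweet
    with a binary outcome (e.g. 1 = image adds, 0 = image does not add)."""
    vocab = {w for tokens in tweets for w in tokens}
    return {word: pearson([tokens.count(word) / len(tokens) for tokens in tweets],
                          labels)
            for word in vocab}

# Hypothetical mini-corpus: tokenized tweets with invented outcome labels
tweets = [["look", "at", "this"], ["this", "is", "cool"],
          ["my", "cat", "sleeps"], ["cute", "cat", "face"]]
labels = [1, 1, 0, 0]
scores = word_outcome_correlations(tweets, labels)
```

Words like "this" (only in positive examples) come out strongly positive and "cat" (only in negative examples) strongly negative, mirroring the pattern reported in the analysis.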
"The results when using unigrams as features are presented in Figures 3, 4 and 5.",
"Results for the image task (Figure 3) show that the image adds to the meaning of the tweet if words such as this, it, why, needs or want are used.",
"These words can appear in texts with the role of referencing or pointing to an entity which is only present in the image.",
"Conversely, the image does not add to the meaning of the tweet when the text contains words for objects that are also depicted in the image (cat, baby, eyes or face).",
"A special case are tweets with birthday wishes, where a person is mentioned in text and also displayed in an image.",
"Finally, the tbt keyword and hashtag is a popular social media trend where users post nostalgic pictures of their past accompanied by their textual description.",
"The comparison between the two outcomes of the text task is presented in Figure 4 .",
"When the text and image semantically overlap, we observe words indicative of actions (i've), possessions (your) or qualitative statements (congrats, loved, excited, tried), usually about objects or persons also present in the image.",
"We also observe a few nouns (cats, hillary) indicating frequent content that is also depicted in images (NB: the tweets were collected in 2016, when the U.S. presidential elections took place).",
"Analyzing this outcome jointly with the text task, we uncover a prominent theme consisting of words describing first person actions (congrats, thank, i've, saw, tell) present when the image provides facets not covered by text (Figure 5d ).",
"Several keywords from the text (cat, game, winter) show types of content which are present in both image and text, but where the image merely illustrates these concepts without adding additional information (Figure 5a).",
"In contrast, the text is not represented in the image when it contains words specific of comments (when, lmao), questions (do, was), references (this) or ellipsis ('...'), all often referencing the content of the image as identified through data inspection.",
"References to self, objects and personal states (i, me) and feelings (miss) are also expressed in text about items or things not appearing in the image of the same tweet.",
"Further exploring this result through the image task outcome, we see that the latter category of feelings about persons or objects (Figure 5a: miss, happy, lit, like) is specific to cases where the image does not add additional information.",
"Through manual inspection of these images, they often display a meme (as in Figure 1d ) or unrelated expressions to the text's content.",
"The image adds information when the text is not represented (Figure 5c) if the latter includes personal feelings (me, i, i'm, want), comments (lol, lmao) and references (this, it), usually related to the image content, as identified through an analysis of the data.",
"Conclusions We defined and analyzed quantitatively and qualitatively the semantic relationships between the text and the image of the same tweet using a novel annotated data set.",
"The frequency of use is influenced by the age of the poster, with younger users employing images with a more prominent role in the tweet, rather than just being redundant to the text or as a means of illustrating it.",
"We studied the correlation between the content in the text and relation with the image, highlighting a differentiation between relationship types, even if only using the text of the tweet alone.",
"We developed models that use both text and image features to classify the text-image relationship, with especially high performance (F1 = 0.81) in identifying if the image is redundant, which is immediately useful for downstream applications that maximize screen estate for users.",
"Future work will look deeper into using the similarity between the content of the text and image (Leong and Mihalcea, 2011), as the text task results showed room for improvements.",
"We envision that our data, task and classifiers will be useful as a preprocessing step in collecting data for training large scale models for image captioning (Feng and Lapata, 2010) or tagging (Mahajan et al., 2018) or for improving recommendations (Chen et al., 2016) by filtering out tweets where the text and image have no semantic overlap or can enable new tasks such as identifying tweets that contain creative descriptions for images."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"4.2",
"4.3",
"5",
"5.1",
"5.2",
"5.3",
"5.4",
"5.5",
"6",
"7",
"7.1",
"7.2",
"7.3",
"8"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Categorizing Text-Image Relationships",
"Data Set",
"Data Sampling",
"Demographic Variables",
"Annotation",
"Methods",
"User Demographics",
"Tweet Metadata",
"Text-based Methods",
"Image-based Methods",
"Joint Text-Image Methods",
"Predicting Text-Image Relationship",
"Analysis",
"User Analysis",
"Tweet Metadata Analysis",
"Text Analysis",
"Conclusions"
]
}
|
GEM-SciDuet-train-51#paper-1089#slide-2
|
Data Task Definition Text Task
|
2019 Bloomberg Finance L.P. All rights reserved.
|
2019 Bloomberg Finance L.P. All rights reserved.
|
[] |
GEM-SciDuet-train-51#paper-1089#slide-3
|
1089
|
Categorizing and Inferring the Relationship between the Text and Image of Twitter Posts
|
Text in social media posts is frequently accompanied by images in order to provide content, supply context, or to express feelings. This paper studies how the meaning of the entire tweet is composed through the relationship between its textual content and its image. We build and release a data set of image tweets annotated with four classes which express whether the text or the image provides additional information to the other modality. We show that by combining the text and image information, we can build a machine learning approach that accurately distinguishes between the relationship types. Further, we derive insights into how these relationships are materialized through text and image content analysis and how they are impacted by user demographic traits. These methods can be used in several downstream applications including pre-training image tagging models, collecting distantly supervised data for image captioning, and can be directly used in end-user applications to optimize screen estate.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200
],
"paper_content_text": [
"Introduction Social media sites have traditionally been centered around publishing textual content.",
"Recently, posting images on social media has become a very popular way of expressing content and feelings especially due to the wide availability of mobile devices and connectivity.",
"Images are currently present in a significant fraction of tweets and tweets with images get double the engagement of those without (Buffer, 2016) .",
"Thus, in addition to text, images have become key components of tweets.",
"However, little is known about how textual content is related to the images with which they appear.",
"For example, concepts or feelings mentioned in text could be illustrated or strengthened by images, text can point to the content of an image or can just provide commentary on the image content.",
"Formalizing and understanding the relationship between the two modalities -text and images -is useful in several areas: a) for NLP and computer vision research, where image and text data from tweets are used to develop data sets and methods for image captioning (Mitchell et al., 2012) or object recognition (Mahajan et al., 2018); b) for social scientists and psychologists trying to understand social media use; c) in browsers or apps, where images that do not contain content beyond the text could be replaced by a placeholder and displayed only if the end-user desires, in order to optimize screen space (see Figure 2).",
"Figure 1 illustrates four different ways in which the text and image of the same tweet can be related: • Figures 1(a,b) show how the image can add to the semantics of the tweet, by either providing more information than the text (Figure 1a ) or by providing the context for understanding the text (Figure 1b ); • In Figures 1(c,d) , the image only illustrates what is expressed through text, without providing any additional information.",
"Hence, in both of these cases, the text alone is sufficient to understanding the tweet's key message; • Figures 1(a,c) show examples of tweets where there is a semantic overlap between the content of the text and image: bike and sign in Figure 1a and tacos in Figure 1c ; • In Figures 1(b,d) , the textual content is not represented in the image, with the text being either a comment on the image's content (Figure 1b) or the image illustrating a feeling related to the text's content.",
"In this paper, we present a comprehensive analysis that focuses on the types of relationships between the text and image in a tweet.",
"Our contributions include: • Defining the types of relationships between the text and the image of a social media post; • Building a data set of tweets annotated with text -image relationship type; 1 • Machine learning methods that use both text and image content to predict the relationship between the two modalities; • An analysis into the author's demographic traits that are related to usage preference of textimage relationship types; • An analysis of the textual features which characterize each relationship type.",
"Related Work Task.",
"The relationship between a text and its associated image was researched in a few prior studies.",
"For general web pages, Marsh and Domas White (2003) propose a taxonomy of 49 relationships grouped into three major categories based on how similar the image is to the text, ranging from little relation to going beyond the text; this forms the basis of one of our relationship dimensions.",
"Martinec and Salway (2005) aim to categorize text-image relationships in scientific articles from two perspectives: the relative importance of one modality compared to the other and the logico-semantic overlap.",
"Alikhani and Stone (2018) argue that understanding multimodal text-image presentation requires studying the coherence relations that organize the content.",
"Even when a single relationship is used, such as captioning, it can be expressed in multiple forms such as telic, atelic or stative .",
"Wang et al.",
"(2014) use the intuition that text and images from microposts can be associated or not or depend on one another and use this intuition in a topic model that learns topics and image tags jointly.",
"Jas and Parikh (2015) study the concept of image specificity through how similar to each other are multiple descriptions of that image.",
"However, none of these studies propose any predictive methods for text-image relationship types.",
"Other work annotates and trains models on a recipe data set (Yagcioglu et al., 2018) for the relationships between instructional text and images around the following dimensions: temporal, logical and incidental detail.",
"Chen et al.",
"(2013) study text-image relationships using social media data focusing on the distinction between images that are overall visually relevant or non-relevant to the textual content.",
"They build models using the text and image content that predict the relationship type (Chen et al., 2015) .",
"We build on this research and define an annotation scheme that focuses on each of the two modalities separately and look at both their semantic overlap and contribution to the meaning of the whole tweet.",
"Applications.",
"Several applications require to be able to automatically predict the semantic textimage relationship in the data.",
"Models for automatically generating image descriptions (Feng and Lapata, 2010; Ordonez et al., 2011; Mitchell et al., 2012; Vinyals et al., 2015; Lu et al., 2017) or predicting tags (Mahajan et al., 2018) are built using large training data sets of noisy image-text pairs from sources such as tweets.",
"Multimodal named entity disambiguation leverages visual context vectors from social media images to aid named entity disambiguation (Moon et al., 2018) .",
"Multimodal topic labeling focuses on generating candidate labels (text or images) for a given topic and ranks them by relevance (Sorodoc et al., 2017) .",
"Several resources of images paired with descriptive captions are available, which can be used to build similarity metrics and joint semantic spaces for text and images (Young et al., 2014) .",
"However, all these assume that the text and image represent similar concepts which, as we show in this paper, is not true on Twitter.",
"Being able to classify this relationship can be useful for all above-mentioned applications.",
"Categorizing Text-Image Relationships We define the types of semantic relationships that can exist between the content of the text and the image by splitting them into two tasks for simplicity.",
"The first task is centered on the role of the text to the tweet's semantics, while the second focuses on the image's role.",
"The first task -referred to as the text task in the rest of the paper -focuses on identifying if there is semantic overlap between the context of the text and the image.",
"This task is defined using the following guidelines: 1.",
"Some or all of the content words in the text are represented in the image (Text is represented) 2.",
"None of the content words in the text are represented in the image (Text is not represented): • None of the content words are represented in the image, or • The text is only a comment about the content of the image, or • The text expresses a feeling or emotion about the content of the image, or • The text only makes a reference to something shown in the image, or • The text is unrelated to the image Examples for this task can be seen in Figure 1 by comparing Figures 1(a ,c) (Text is represented) with Figures 1(b,d) (Text is not represented).",
"The second task -referred to as the image task in the rest of the paper -focuses on the role of the image to the semantics of the tweet and aims to identify if the image's content contributes with additional information to the meaning of the tweet beyond the text, as judged by an independent third party.",
"This task is defined and annotated using the following guidelines: 1.",
"Image has additional content that represents the meaning of the text and the image (Image adds): • Image contains other text that adds additional meaning to the text, or • Image depicts something that adds information to the text or • Image contains other entities that are referenced by the text.",
"2.",
"Image does not add additional content that represents the meaning of text+image (Image does not add).",
"Examples for the image task can be seen in Figure 1. Combining the labels of the two binary tasks described above gives rise to four types of text-image relationships (Image+Text Task).",
"All of the four relationship types are exemplified in Figure 1 .",
"Data Set To study the relationship between the text and image in the same social media post, we define a new annotation schema and collect a new annotated corpus.",
"To the best of our knowledge, no such corpus exists in prior research.",
"Data Sampling We use Twitter as the source of our data, as this source contains a high level of expression of thoughts, opinions and emotions (Java et al., 2007; Kouloumpis et al., 2011) .",
"It represents a platform for observing written interactions and conversations between users (Ritter et al., 2010) .",
"The tweets were deliberately sampled at random from a list of users for whom several socio-demographic traits are known, introduced in past research.",
"This will enable us to explore if the frequency of posting tweets with a certain text-image relationship is different across socio-demographic groups.",
"We downloaded as many tweets as we could from these users using the Twitter API (up to 3,200 tweets/user per API limits).",
"We decided to annotate only tweets from within the same time range (2016) in order to reduce the influence of potential platform usage changes with time.",
"We filter out tweets that are not written in English using the langid.py tool (Lui and Baldwin, 2012) .",
"In total, 2,263 users (out of the initial 4,132) have posted tweets with at least one image in the year 2016 and were included in our analysis.",
"Our final data set contains 4,471 tweets.",
"Demographic Variables The Twitter users from the data set we sampled have self-reported the following demographic variables through a survey: gender, age, education level and annual income.",
"All users solicited for data collection were from the United States in order to limit cultural variation.",
"• Gender was considered binary and coded with Female -1 and Male -0.",
"All other variables are treated as ordinal variables.",
"• Age is represented as a integer value in the 13-90 year old interval.",
"• Education level is coded as an ordinal variable with 6 values representing the highest degree obtained, with the lowest being 'No high school degree' (coded as 1) and the highest being 'Advanced Degree (e.g., PhD)' (coded as 6).",
"• Income level is coded as an ordinal variable with 8 values representing the annual income of the person, ranging from '< $20,000' to '> $200,000'.",
"For a full description of the user recruitment and quality control processes, we refer the interested reader to .",
"Annotation We have collected annotations for text-image pairs from 4,471 tweets using the Figure Eight platform (formerly CrowdFlower).",
"We annotate all tweets containing both text and image using two independent annotation tasks, in order to simplify the task and to avoid priming annotators to use the outcome of one task as an indicator for the outcome of the other.",
"For quality control, 10% of annotations were test questions annotated by the authors.",
"Annotators had to maintain a minimum accuracy on test questions of 85% for the image task and 75% for the text task for their annotations to be valid.",
"Inter-annotator agreement is measured using Krippendorf's Alpha.",
"The overall Krippendorf's Alpha is 0.71 for the image task, which is in the upper part of the substantial agreement band (Artstein and Poesio, 2008).",
"We collect 3 judgments and use majority vote to obtain the final label to further remove noise.",
"For the text task, we collected and aggregated 5 judgments as the Krippendorf's Alpha is 0.46, which is considered moderate agreement (Artstein and Poesio, 2008) .",
"The latter task was more difficult because it required specific world knowledge (e.g.",
"a singer mentioned in a text also being present in an image) or because information was encoded in hashtags or usernames, which the annotators sometimes overlooked.",
"The aggregated judgments for each task were combined to obtain the four class labels.",
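Illustrative aside (not from the paper): a pure-Python sketch of aggregating crowd judgments by majority vote and combining the two binary task labels into the four relationship types described in Section 3. The judgment values below are hypothetical.

```python
from collections import Counter

def majority(judgments):
    """Majority vote over the crowd judgments collected for one item."""
    return Counter(judgments).most_common(1)[0][0]

def combine(text_label, image_label):
    """Combine the two binary task labels into one of the four
    text-image relationship types."""
    text = "Text is represented" if text_label else "Text is not represented"
    image = "Image adds" if image_label else "Image does not add"
    return f"{image} & {text}"

# 5 judgments for the text task and 3 for the image task, as in the paper;
# the individual votes here are made up
text_label = majority([1, 1, 0, 1, 0])
image_label = majority([0, 0, 1])
relationship = combine(text_label, image_label)
```

Using an odd number of judgments (3 and 5) guarantees the vote cannot tie on a binary label.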
"Methods Our goal is to develop methods that are capable of automatically classifying the text-image relationship in tweets.",
"We experiment with several methods which use information of four different types: demographics of the user posting the tweet, metadata from the tweet, the text of the tweet or the image of the tweet; plus a combination of them.",
"The methods we use are described in this section.",
"User Demographics User demographic features are the survey-based demographic information we have available for all users that posted the annotated tweets.",
"The use of these traits is based on the intuition that different demographic groups have different posting preferences (Pennacchiotti and Popescu, 2011).",
"We use this approach for comparison reasons only, as in practical use cases we would normally not have access to the author's demographic traits.",
"We code the gender, age, education level and income level of the user as features and use them in a logistic regression classifier to classify the textimage relationship.",
"Tweet Metadata We experiment with using the tweet metadata as features.",
"These code if a tweet is a reply, tweet, like or neither.",
"We also add as features the tweet like count, the number of followers, friends and posts of the post's author and include them all in a logistic regression classifier.",
"These features are all available at tweet publishing time and we build a model using them to establish a more solid baseline for content based approaches.",
"Text-based Methods We use the textual content of the tweet alone to build models for predicting the text-image relationship.",
"We expect that certain textual cues will be specific to relationships even without considering the image content.",
"For example, tweets ending in an ellipsis or short comments will likely be predictive of the text not being represented in the image.",
"Surface Features.",
"We first use a range of surface features which capture more of the shallow stylistic content of the tweet.",
"We extract number of tokens, uppercase tokens, exclamations, questions, ellipsis, hashtags, @ mentions, quotes and URLs from the tweet and use them as features in a logistic regression classifier.",
"Bag of Words.",
"The most common approach for building a text-based model is using bag-ofwords features.",
"Here, we extract unigram and bigram features and use them in a logistic regression classifier with elastic net regularization (Zou and Hastie, 2005) .",
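Illustrative aside (not from the paper): a pure-Python sketch of extracting the unigram and bigram count features described above. The paper feeds such features to a logistic regression with elastic net regularization, which is not shown here.

```python
from collections import Counter

def bow_features(tokens):
    """Unigram and bigram count features for one tokenized tweet,
    a minimal sketch of the bag-of-words representation."""
    unigrams = list(tokens)
    bigrams = [f"{a}_{b}" for a, b in zip(tokens, tokens[1:])]
    return Counter(unigrams + bigrams)

# Hypothetical tokenized tweet
feats = bow_features(["look", "at", "this", "cat"])
```

In practice the per-tweet counters would be mapped into a shared sparse vocabulary before training the classifier.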
"LSTM.",
"Finally, based on recent results in text classification, we also experiment with a neural network approach which uses a Long-Short Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) network.",
"The LSTM network processes the tweet sequentially, where each word is represented by its embedding (E = 200), followed by a dense hidden layer (D = 64), a ReLU activation function and dropout (0.4). The model is trained by minimizing cross entropy using the Adam optimizer (Kingma and Ba, 2014).",
"The network uses in-domain Twitter GloVe embeddings pre-trained on 2 billion tweets (Pennington et al., 2014) .",
"Image-based Methods We use the content of the tweet image alone to build models for predicting the text-image relationship.",
"Similar to text, we expect that certain image content will be predictive of text-image relationships even without considering the text content.",
"For example, images of people may be more likely to have in the text the names of those persons.",
"To analyze image content, we make use of large pre-trained neural networks for the task of object recognition on the ImageNet data set.",
"ImageNet (Deng et al., 2009 ) is a visual database developed for object recognition research and consists of 1000 object types.",
"In particular, we use the popular pre-trained InceptionNet model (Szegedy et al., 2015) , which achieved the best performance at the ImageNet Large Scale Visual Recognition Challenge 2014 to build the following two imagebased models.",
"ImageNet Classes.",
"First, we represent each image in a tweet with the probability distribution over the 1,000 ImageNet classes obtained from InceptionNet.",
"Then, we pass those features to a logistic regression classifier which is trained on our task.",
"In this setup, the network parameters remain fixed, while only the ImageNet class weights are learned in the logistic regression classifier.",
"Tuned InceptionNet.",
"Additionally, we tailored the InceptionNet network to directly predict our tasks by using the multinomial logistic loss with softmax as the final layer for our task to replace the 1,000 ImageNet classes.",
"Then, we loaded the pretrained network from (Szegedy et al., 2015) and fine-tuned the final fully-connected layer with the modified loss layers.",
"We perform this in order to directly predict our task, while also overcoming the necessity of re-extracting the entire model weights from our restricted set of images.",
"The two approaches to classification using image content based on pre-trained model on Im-ageNet have been used successfully in past research (Cinar et al., 2015) .",
"Joint Text-Image Methods Finally, we combine the textual and image information in a single model to classify the text-image relationship type, as we expect both types of content and their interaction to be useful to the task.",
"Ensemble.",
"A simple method for combining the information from both modalities is to build an ensemble classifier.",
"This is done with a logistic regression model with two features: the Bag of Words text model's predicted class probability and the Tuned InceptionNet model's predicted class probability.",
"The parameters of the model are tuned by cross validation on the training data and similar splits as the individual models.",
"LSTM + InceptionNet.",
"We also build a joint approach by concatenating the features from the final layers of our LSTM and InceptionNet models and passing them through a fully-connected (FC) feed forward neural network with one hidden layer (64 nodes).",
"The final output is our text-image relationship type.",
"We use the Adam optimizer to fine tune this network.",
"The LSTM model has the same parameters as in the text-only approach, while the InceptionNet model is initialized with the pre-trained model on the ImageNet data set.",
"Predicting Text-Image Relationship We split our data into a 80% train (3,576 tweets) and 20% test (895 tweets) stratified sample for all of our experiments.",
"Parameters were tuned using 10-fold cross-validation with the training set, and results are reported on the test set.",
"Table 1 presents the weighted F1-scores for the text task, the image task and the image+text task with all the methods described in Section 5.",
"The weighted F1 score is the weighted average of the class-level F1 scores, where the weight is the number of items in each class.",
"The majority baseline always predicts the most frequent class in each task, namely: Image does not add for the image task, Text is not represented for the text task and Image does not add & Text is not represented for the Image + Text task.",
"The models using user demographics and tweet metadata show minor improvements over the majority class baseline for both tasks.",
"When the two tasks are combined, both feature types offer only a slight increase over the baseline.",
"This shows that user factors mildly impact the frequency with which relationship types are used, which will be explored further in the analysis section.",
"The models that use tweet text as features show consistent improvements over the baseline for all three tasks.",
"The two models that use the tweet's topical content (Bag of Words and LSTM) obtain higher predictive performance over the surface features.",
"Both content based models obtain relatively similar performance, with the LSTM performing better on the image task.",
"The models which use information extracted from the image alone also consistently outperform the baseline on all three tasks.",
"Re-tuning the neural network performs substantially better than building a model directly from the ImageNet classes on the image task and narrowly outperforms the other method on the text task.",
"This is somewhat expected, as the retuning is performed on this domain specific task.",
"When comparing text and image based models across tasks, we observe that using image features obtains substantially better performance on the image task, while the text models obtain better performance on the text task.",
"This is somewhat natural, as the focus of each annotation task is on one modality and methods relying on content from that modality are more predictive alone as to what ultimately represents the text-image relationship type.",
"Our naive ensemble approach does not yield substantially better results than the best performing methods using a single modality.",
"However, by jointly modelling both modalities, we are able to obtain improvements -especially on the image task.",
"This shows that both types of information and their interaction are important to this task.",
"Methods that exploit more heavily the interaction and semantic similarity between the text and the image are left for future work.",
"We also observe that the predictive methods we described are better at classifying the image task.",
"The analysis section below will allow us to uncover more about what type of content characterizes each relationship type.",
"Analysis In this section, we aim to gain a better understanding of the type of content specific to the four text-image relationship types and of user preferences in their usage.",
"User Analysis Socio-demographic traits of the authors of posts are known to be correlated with several social media behaviors including text (Rao et al., 2010; Pennacchiotti and Popescu, 2011; Schwartz et al., 2013; Volkova et al., 2014; Lampos et al., 2014; Preoţiuc-Pietro et al., 2015a ,b, 2016 and images (Alowibdi et al., 2013; You et al., 2014; Farseev et al., 2015; Skowron et al., 2016; Liu et al., 2016; Guntuku et al., 2017; Samani et al., 2018; Guntuku et al., 2019) .",
"We hypothesize that socio-demographic traits also play a role in the types of text-image relationships employed on Twitter.",
"To measure this, we use partial Pearson correlation where the dependent variables are one of four socio-demographic traits described in Section 4.2.",
"The independent variables indicate the average times with which the user employed a certain relationship type.",
"We code this using six different variables: two representing the two broader tasks -the percentage of tweets where image adds information and the percentage of tweets where the text is represented in the image -and four encoding each combination between the two tasks.",
"In addition, for all analyses we consider gender and age as basic human traits and control for data skew by introducing both variables as controls in partial correlation, as done in prior work (Schwartz et al., 2013; Holgate et al., 2018) .",
"When studying age and gender, we only use the other trait as the control.",
"Because we are running several statistical tests at once (24) without predefined hypotheses, we use Bonferroni correction to counteract the problem of multiple comparisons.",
"The results are presented in Table 2 .",
"We observe that age is the only user demographic trait that is significantly correlated to text-image relationship preference after controlling for multiple comparisons and other demographic traits.",
"The text-image relationship where the text is represented in the image, at least partially, is positively correlated with age (r = 0.117).",
"Further analyzing the four individual text-image relationship types reveals that older users especially prefer tweets where there is a semantic overlap between the concepts present in the text and the image, but the image contributes with additional information to the meaning of the tweet.",
"This is arguably the most conventional usage of images, where they illustrate the text and provide more details than the text could.",
"Younger users prefer most tweets where the image adds information to the meaning of the tweet, but this has no semantic overlap with the text.",
"These are usually tweets where the text represents merely a comment or a feeling expressed with the image providing the context.",
"This represents a more image-centric approach to the meaning of the tweet that is specific to younger users.",
"These correlations are controlled for gender.",
"Education was also correlated with images where the text was represented in the image (r = 0.076, p < .01, Bonferroni corrected), but this correlation did not meet the significance criteria when controlled for age to which education is moderately correlated (r = 0.302).",
"This demonstrates the importance of controlling for such factors in this type of analysis.",
"No effects were found with respect to gender or income.",
"Table 2 : Pearson correlation between user demographic traits and usage of the different text-image relationship types.",
"All correlations in bold are significant at p < .01, two-tailed t-test, Bonferroni corrected for multiple comparisons.",
"Results for gender are controlled for age and vice versa.",
"Results for education and income are controlled for age and gender.",
"Tweet Metadata Analysis We adapt a similar approach to uncover potential relationships between the text-image relationship expressed in the tweet and tweet metadata features described in Section 5.2.",
"However, after controlling for multiple comparisons, we are left with no significant correlations at p < 0.01 level.",
"Hence, we refrain from presenting and discussing any results using this feature group as significant.",
"Text Analysis Finally, we aim to identify the text and image features that characterize the four types of text-image relationship.",
"We use univariate Pearson correlation where the independent variable is each feature's normalized value in a tweet and the dependent variables are two binary indicators for the text and image tasks respectively.",
"When performed using text features, this technique was coined Differential Language Analysis (Schwartz et al., 2013, 2017).",
"The results when using unigrams as features are presented in Figure 3 , 4 and 5.",
"Results for the image task (Figure 3) show that the image adds to the meaning of the tweet if words such as this, it, why, needs or want are used.",
"These words can appear in texts with the role of referencing or pointing to an entity which is only present in the image.",
"Conversely, the image does not add to the meaning of the tweet when words indicative of objects that are also described in the image are present (cat, baby, eyes or face), thus resulting in the image not adding to the meaning of the tweet.",
"A special case are tweets with birthday wishes, where a person is mentioned in text and also displayed in an image.",
"Finally, the tbt keyword and hashtag is a popular social media trend where users post nostalgic pictures of their past accompanied by their textual description.",
"The comparison between the two outcomes of the text task is presented in Figure 4 .",
"When the text and image semantically overlap, we observe words indicative of actions (i've), possessions (your) or qualitative statements (congrats, loved, excited, tried), usually about objects or persons also present in the image.",
"We also observe a few nouns (cats, hillary) indicating frequent content that is also depicted in images (NB: the tweets were collected in 2016 when the U.S. presidential elections took place).",
"Analyzing this outcome jointly with the text task, we uncover a prominent theme consisting of words describing first person actions (congrats, thank, i've, saw, tell) present when the image provides facets not covered by text (Figure 5d ).",
"Several keywords from text (cat, game, winter) show types of content which are present in both image and text, but the image merely illustrates these concepts without adding additional information (Figure 5a).",
"In contrast, the text is not represented in the image when it contains words specific of comments (when, lmao), questions (do, was), references (this) or ellipsis ('...'), all often referencing the content of the image as identified through data inspection.",
"References to self, objects and personal states (i, me) and feelings (miss) are also expressed in text about items or things not appearing in the image from the same tweet.",
"Further exploring this result through the image task outcome, we see that the latter category of feelings about persons or objects (Figure 5a: miss, happy, lit, like) is specific to cases when the image does not add additional information.",
"Through manual inspection of these images, they often display a meme (as in Figure 1d ) or unrelated expressions to the text's content.",
"The image adds information when the text is not represented (Figure 5c ) if the latter includes personal feelings, (me, i, i'm, want), comments (lol, lmao) and references (this, it), usually related to the image content as identified through an analysis of the data.",
"Conclusions We defined and analyzed quantitatively and qualitatively the semantic relationships between the text and the image of the same tweet using a novel annotated data set.",
"The frequency of use is influenced by the age of the poster, with younger users employing images with a more prominent role in the tweet, rather than just being redundant to the text or as a means of illustrating it.",
"We studied the correlation between the content in the text and relation with the image, highlighting a differentiation between relationship types, even if only using the text of the tweet alone.",
"We developed models that use both text and image features to classify the text-image relationship, with especially high performance (F1 = 0.81) in identifying if the image is redundant, which is immediately useful for downstream applications that maximize screen estate for users.",
"Future work will look deeper into using the similarity between the content of the text and image (Leong and Mihalcea, 2011), as the text task results showed room for improvements.",
"We envision that our data, task and classifiers will be useful as a preprocessing step in collecting data for training large scale models for image captioning (Feng and Lapata, 2010) or tagging (Mahajan et al., 2018) or for improving recommendations (Chen et al., 2016) by filtering out tweets where the text and image have no semantic overlap or can enable new tasks such as identifying tweets that contain creative descriptions for images."
]
}
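The results recorded above are reported as weighted F1 scores, defined in the source as the per-class F1 averaged with weights proportional to the number of true items in each class. A minimal pure-Python sketch of that metric follows; the label values are invented for illustration only and are not taken from the paper's data.

```python
from collections import Counter

def weighted_f1(y_true, y_pred):
    """Weighted average of per-class F1 scores, where each class is
    weighted by its number of true instances (the metric used above)."""
    support = Counter(y_true)
    total = 0.0
    for c in set(y_true):
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        total += support[c] * f1
    return total / len(y_true)

# Illustrative labels only: four relationship classes coded 0-3.
print(round(weighted_f1([0, 0, 1, 2, 3, 3, 3], [0, 1, 1, 2, 3, 3, 0]), 3))  # → 0.724
```

In practice the same number is produced by scikit-learn's `f1_score(..., average="weighted")`; the hand-rolled version is shown only to make the class-size weighting explicit.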
|
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"4.2",
"4.3",
"5",
"5.1",
"5.2",
"5.3",
"5.4",
"5.5",
"6",
"7",
"7.1",
"7.2",
"7.3",
"8"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Categorizing Text-Image Relationships",
"Data Set",
"Data Sampling",
"Demographic Variables",
"Annotation",
"Methods",
"User Demographics",
"Tweet Metadata",
"Text-based Methods",
"Image-based Methods",
"Joint Text-Image Methods",
"Predicting Text-Image Relationship",
"Analysis",
"User Analysis",
"Tweet Metadata Analysis",
"Text Analysis",
"Conclusions"
]
}
|
GEM-SciDuet-train-51#paper-1089#slide-3
|
Data Task Definition Image Task
|
2019 Bloomberg Finance L.P. All rights reserved.
|
2019 Bloomberg Finance L.P. All rights reserved.
|
[] |
GEM-SciDuet-train-51#paper-1089#slide-4
|
1089
|
Categorizing and Inferring the Relationship between the Text and Image of Twitter Posts
|
Text in social media posts is frequently accompanied by images in order to provide content, supply context, or to express feelings. This paper studies how the meaning of the entire tweet is composed through the relationship between its textual content and its image. We build and release a data set of image tweets annotated with four classes which express whether the text or the image provides additional information to the other modality. We show that by combining the text and image information, we can build a machine learning approach that accurately distinguishes between the relationship types. Further, we derive insights into how these relationships are materialized through text and image content analysis and how they are impacted by user demographic traits. These methods can be used in several downstream applications including pre-training image tagging models, collecting distantly supervised data for image captioning, and can be directly used in end-user applications to optimize screen estate.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200
],
"paper_content_text": [
"Introduction Social media sites have traditionally been centered around publishing textual content.",
"Recently, posting images on social media has become a very popular way of expressing content and feelings especially due to the wide availability of mobile devices and connectivity.",
"Images are currently present in a significant fraction of tweets and tweets with images get double the engagement of those without (Buffer, 2016) .",
"Thus, in addition to text, images have become key components of tweets.",
"However, little is known about how textual content is related to the images with which they appear.",
"For example, concepts or feelings mentioned in text could be illustrated or strengthened by images, text can point to the content of an image or can just provide commentary on the image content.",
"Formalizing and understanding the relationship between the two modalities -text and images -is useful in several areas: a) for NLP and computer vision research, where image and text data from tweets are used to develop data sets and methods for image captioning (Mitchell et al., 2012) or object recognition (Mahajan et al., 2018); b) for social scientists and psychologists trying to understand social media use; c) in browsers or apps where images that may not contain additional content in addition to the text would be replaced by a placeholder and displayed only if the end-user desires, in order to optimize screen space (see Figure 2).",
"Figure 1 illustrates four different ways in which the text and image of the same tweet can be related: • Figures 1(a,b) show how the image can add to the semantics of the tweet, by either providing more information than the text (Figure 1a ) or by providing the context for understanding the text (Figure 1b ); • In Figures 1(c,d) , the image only illustrates what is expressed through text, without providing any additional information.",
"Hence, in both of these cases, the text alone is sufficient to understanding the tweet's key message; • Figures 1(a,c) show examples of tweets where there is a semantic overlap between the content of the text and image: bike and sign in Figure 1a and tacos in Figure 1c ; • In Figures 1(b,d) , the textual content is not represented in the image, with the text being either a comment on the image's content (Figure 1b) or the image illustrating a feeling related to the text's content.",
"In this paper, we present a comprehensive analysis that focuses on the types of relationships between the text and image in a tweet.",
"Our contributions include: • Defining the types of relationships between the text and the image of a social media post; • Building a data set of tweets annotated with text -image relationship type; 1 • Machine learning methods that use both text and image content to predict the relationship between the two modalities; • An analysis into the author's demographic traits that are related to usage preference of textimage relationship types; • An analysis of the textual features which characterize each relationship type.",
"Related Work Task.",
"The relationship between a text and its associated image was researched in a few prior studies.",
"For general web pages, Marsh and Domas White (2003) propose a taxonomy of 49 relationships grouped in three major categories based on how similar the image is to the text, ranging from little relation to going beyond the text, which forms the basis of one of our relationship dimensions.",
"Martinec and Salway (2005) aim to categorize text-image relationships in scientific articles from two perspectives: the relative importance of one modality compared to the other and the logico-semantic overlap.",
"Alikhani and Stone (2018) argue that understanding multimodal text-image presentation requires studying the coherence relations that organize the content.",
"Even when a single relationship is used, such as captioning, it can be expressed in multiple forms such as telic, atelic or stative .",
"Wang et al.",
"(2014) use the intuition that text and images from microposts can be associated or not or depend on one another and use this intuition in a topic model that learns topics and image tags jointly.",
"Jas and Parikh (2015) study the concept of image specificity through how similar to each other are multiple descriptions of that image.",
"However, none of these studies propose any predictive methods for text-image relationship types.",
"annotate and train models on a recipe data set (Yagcioglu et al., 2018) for the relationships between instructional text and images around the following dimensions: temporal, logical and incidental detail.",
"Chen et al.",
"(2013) study text-image relationships using social media data focusing on the distinction between images that are overall visually relevant or non-relevant to the textual content.",
"They build models using the text and image content that predict the relationship type (Chen et al., 2015) .",
"We build on this research and define an annotation scheme that focuses on each of the two modalities separately and look at both their semantic overlap and contribution to the meaning of the whole tweet.",
"Applications.",
"Several applications require to be able to automatically predict the semantic textimage relationship in the data.",
"Models for automatically generating image descriptions (Feng and Lapata, 2010; Ordonez et al., 2011; Mitchell et al., 2012; Vinyals et al., 2015; Lu et al., 2017) or predicting tags (Mahajan et al., 2018) are built using large training data sets of noisy image-text pairs from sources such as tweets.",
"Multimodal named entity disambiguation leverages visual context vectors from social media images to aid named entity disambiguation (Moon et al., 2018) .",
"Multimodal topic labeling focuses on generating candidate labels (text or images) for a given topic and ranks them by relevance (Sorodoc et al., 2017) .",
"Several resources of images paired with descriptive captions are available, which can be used to build similarity metrics and joint semantic spaces for text and images (Young et al., 2014) .",
"However, all these assume that the text and image represent similar concepts which, as we show in this paper, is not true on Twitter.",
"Being able to classify this relationship can be useful for all above-mentioned applications.",
"Categorizing Text-Image Relationships We define the types of semantic relationships that can exist between the content of the text and the image by splitting them into two tasks for simplicity.",
"The first task is centered on the role of the text to the tweet's semantics, while the second focuses on the image's role.",
"The first task -referred to as the text task in the rest of the paper -focuses on identifying if there is semantic overlap between the content of the text and the image.",
"This task is defined using the following guidelines: 1.",
"Some or all of the content words in the text are represented in the image (Text is represented) 2.",
"None of the content words in the text are represented in the image (Text is not represented): • None of the content words are represented in the image, or • The text is only a comment about the content of the image, or • The text expresses a feeling or emotion about the content of the image, or • The text only makes a reference to something shown in the image, or • The text is unrelated to the image Examples for this task can be seen in Figure 1 by comparing Figures 1(a ,c) (Text is represented) with Figures 1(b,d) (Text is not represented).",
"The second task -referred to as the image task in the rest of the paper -focuses on the role of the image to the semantics of the tweet and aims to identify if the image's content contributes with additional information to the meaning of the tweet beyond the text, as judged by an independent third party.",
"This task is defined and annotated using the following guidelines: 1.",
"Image has additional content that represents the meaning of the text and the image (Image adds): • Image contains other text that adds additional meaning to the text, or • Image depicts something that adds information to the text or • Image contains other entities that are referenced by the text.",
"2.",
"Image does not add additional content that represents the meaning of text+image (Image does not add).",
"Examples for the image task can be seen in Figure 1. Combining the labels of the two binary tasks described above gives rise to four types of text-image relationships (Image+Text Task).",
"All of the four relationship types are exemplified in Figure 1 .",
"Data Set To study the relationship between the text and image in the same social media post, we define a new annotation schema and collect a new annotated corpus.",
"To the best of our knowledge, no such corpus exists in prior research.",
"Data Sampling We use Twitter as the source of our data, as this source contains a high level of expression of thoughts, opinions and emotions (Java et al., 2007; Kouloumpis et al., 2011) .",
"It represents a platform for observing written interactions and conversations between users (Ritter et al., 2010) .",
"The tweets were randomly sampled from a list of users for which several of their socio-demographic traits are known, introduced in past research.",
"This will enable us to explore if the frequency of posting tweets with a certain text-image relationship is different across socio-demographic groups.",
"We downloaded as many tweets as we could from these users using the Twitter API (up to 3,200 tweets/user per API limits).",
"We decided to annotate only tweets from within the same time range (2016) in order to reduce the influence of potential platform usage changes with time.",
"We filter out tweets that are not written in English using the langid.py tool (Lui and Baldwin, 2012) .",
"In total, 2,263 users (out of the initial 4,132) have posted tweets with at least one image in the year 2016 and were included in our analysis.",
"Our final data set contains 4,471 tweets.",
"Demographic Variables The Twitter users from the data set we sampled have self-reported the following demographic variables through a survey: gender, age, education level and annual income.",
"All users solicited for data collection were from the United States in order to limit cultural variation.",
"• Gender was considered binary 2 and coded with Female -1 and Male -0.",
"All other variables are treated as ordinal variables.",
"• Age is represented as a integer value in the 13-90 year old interval.",
"• Education level is coded as an ordinal variable with 6 values representing the highest degree obtained, with the lowest being 'No high school degree' (coded as 1) and the highest being 'Advanced Degree (e.g., PhD)' (coded as 6).",
"• Income level is coded as an ordinal variable with 8 values representing the annual income of the person, ranging from '< $20,000' to '> $200,000'.",
"For a full description of the user recruitment and quality control processes, we refer the interested reader to .",
"Annotation We have collected annotations for text-image pairs from 4,471 tweets using the Figure Eight platform (formerly CrowdFlower).",
"We annotate all tweets containing both text and image using two independent annotation tasks in order to simplify the task and not to prime annotators to use the outcome of one task as an indicator for the outcome of the other.",
"For quality control, 10% of annotations were test questions annotated by the authors.",
"Annotators had to maintain a minimum accuracy on test questions of 85% for the image task and 75% for the text task for their annotations to be valid.",
"Inter-annotator agreement is measured using Krippendorf's Alpha.",
"The overall Krippendorf's Alpha is 0.71 for the image task, which is in the upper part of the substantial agreement band (Artstein and Poesio, 2008).",
"We collect 3 judgments and use majority vote to obtain the final label to further remove noise.",
"For the text task, we collected and aggregated 5 judgments as the Krippendorf's Alpha is 0.46, which is considered moderate agreement (Artstein and Poesio, 2008) .",
"The latter task was more difficult due to requiring specific world knowledge (e.g.",
"a singer mentioned in a text also present in an image) or contained information encoded in hashtags or usernames which the annotators sometimes overlooked.",
"The aggregated judgments for each task were combined to obtain the four class labels.",
"Methods Our goal is to develop methods that are capable of automatically classifying the text-image relationship in tweets.",
"We experiment with several methods which use information of four different types: demographics of the user posting the tweet, metadata from the tweet, the text of the tweet or the image of the tweet; plus a combination of them.",
"The methods we use are described in this section.",
"User Demographics User demographic features are the survey-based demographic information we have available for all users that posted the annotated tweets.",
"The use of these traits is based on the intuition that different demographic groups have different posting preferences (Pennacchiotti and Popescu, 2011).",
"We use this approach for comparison reasons only, as in practical use cases we would normally not have access to the author's demographic traits.",
"We code the gender, age, education level and income level of the user as features and use them in a logistic regression classifier to classify the textimage relationship.",
"Tweet Metadata We experiment with using the tweet metadata as features.",
"These code if a tweet is a reply, tweet, like or neither.",
"We also add as features the tweet like count, the number of followers, friends and posts of the post's author and include them all in a logistic regression classifier.",
"These features are all available at tweet publishing time and we build a model using them to establish a more solid baseline for content based approaches.",
"Text-based Methods We use the textual content of the tweet alone to build models for predicting the text-image relationship.",
"We expect that certain textual cues will be specific to relationships even without considering the image content.",
"For example, tweets ending in an ellipsis or short comments will likely be predictive of the text not being represented in the image.",
"Surface Features.",
"We first use a range of surface features which capture more of the shallow stylistic content of the tweet.",
"We extract number of tokens, uppercase tokens, exclamations, questions, ellipsis, hashtags, @ mentions, quotes and URLs from the tweet and use them as features in a logistic regression classifier.",
"Bag of Words.",
"The most common approach for building a text-based model is using bag-ofwords features.",
"Here, we extract unigram and bigram features and use them in a logistic regression classifier with elastic net regularization (Zou and Hastie, 2005) .",
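As an illustration only (not the authors' released code), the bag-of-words logistic regression with elastic net regularization described above could be sketched as follows; the example texts, labels and hyperparameters are invented:

```python
# Sketch of a unigram+bigram bag-of-words classifier with elastic net
# regularization; data and hyperparameters are placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["congrats on your new bike!", "lmao when you see it ...",
         "tacos for lunch today", "miss this so much"]
labels = [1, 0, 1, 0]  # 1 = text represented in image (toy labels)

model = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),      # unigram + bigram features
    LogisticRegression(penalty="elasticnet",  # elastic net regularization
                       solver="saga", l1_ratio=0.5, max_iter=5000),
)
model.fit(texts, labels)
print(model.predict(["congrats on the tacos"]))
```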
"LSTM.",
"Finally, based on recent results in text classification, we also experiment with a neural network approach which uses a Long-Short Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) network.",
"The LSTM network processes the tweet sequentially, where each word is represented by its embedding (E = 200), followed by a dense hidden layer (D = 64) with a ReLU activation function and dropout (0.4). The model is trained by minimizing cross entropy using the Adam optimizer (Kingma and Ba, 2014).",
"The network uses in-domain Twitter GloVe embeddings pre-trained on 2 billion tweets (Pennington et al., 2014) .",
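A rough PyTorch sketch of the LSTM architecture described above (embedding size E = 200, a 64-unit dense layer with ReLU and dropout 0.4); the vocabulary size, batch shape and class count are invented, and the GloVe initialization and training loop are omitted:

```python
# Illustrative sketch, not the authors' implementation.
import torch
import torch.nn as nn

class TweetLSTM(nn.Module):
    def __init__(self, vocab_size=1000, emb_dim=200, hidden=64, n_classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)   # would load GloVe here
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(hidden, 64), nn.ReLU(), nn.Dropout(0.4),
            nn.Linear(64, n_classes),
        )

    def forward(self, token_ids):
        _, (h, _) = self.lstm(self.emb(token_ids))     # final hidden state
        return self.head(h[-1])

logits = TweetLSTM()(torch.randint(0, 1000, (8, 20)))  # batch of 8 tweets
print(logits.shape)
```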
"Image-based Methods We use the content of the tweet image alone to build models for predicting the text-image relationship.",
"Similar to text, we expect that certain image content will be predictive of text-image relationships even without considering the text content.",
"For example, images of people may be more likely to have in the text the names of those persons.",
"To analyze image content, we make use of large pre-trained neural networks for the task of object recognition on the ImageNet data set.",
"ImageNet (Deng et al., 2009 ) is a visual database developed for object recognition research and consists of 1000 object types.",
"In particular, we use the popular pre-trained InceptionNet model (Szegedy et al., 2015) , which achieved the best performance at the ImageNet Large Scale Visual Recognition Challenge 2014 to build the following two imagebased models.",
"ImageNet Classes.",
"First, we represent each image in a tweet with the probability distribution over the 1,000 ImageNet classes obtained from InceptionNet.",
"Then, we pass those features to a logistic regression classifier which is trained on our task.",
"In this setup, the network parameters remain fixed, while only the ImageNet class weights are learned in the logistic regression classifier.",
"Tuned InceptionNet.",
"Additionally, we tailored the InceptionNet network to directly predict our tasks by replacing the final 1,000-class ImageNet layer with a softmax layer trained with the multinomial logistic loss for our task.",
"Then, we loaded the pretrained network from (Szegedy et al., 2015) and fine-tuned the final fully-connected layer with the modified loss layers.",
"We perform this in order to directly predict our task, while avoiding re-estimating the entire set of model weights from our restricted set of images.",
"The two approaches to classification using image content based on a model pre-trained on ImageNet have been used successfully in past research (Cinar et al., 2015).",
"Joint Text-Image Methods Finally, we combine the textual and image information in a single model to classify the text-image relationship type, as we expect both types of content and their interaction to be useful to the task.",
"Ensemble.",
"A simple method for combining the information from both modalities is to build an ensemble classifier.",
"This is done with a logistic regression model with two features: the Bag of Words text model's predicted class probability and the Tuned InceptionNet model's predicted class probability.",
"The parameters of the model are tuned by cross validation on the training data and similar splits as the individual models.",
"LSTM + InceptionNet.",
"We also build a joint approach by concatenating the features from the final layers of our LSTM and InceptionNet models and passing them through a fully-connected (FC) feed forward neural network with one hidden layer (64 nodes).",
"The final output is our text-image relationship type.",
"We use the Adam optimizer to fine tune this network.",
"The LSTM model has the same parameters as in the text-only approach, while the InceptionNet model is initialized with the pre-trained model on the ImageNet data set.",
"Predicting Text-Image Relationship We split our data into a 80% train (3,576 tweets) and 20% test (895 tweets) stratified sample for all of our experiments.",
"Parameters were tuned using 10-fold cross-validation with the training set, and results are reported on the test set.",
"Table 1 presents the weighted F1-scores for the text task, the image task and the image+text task with all the methods described in Section 5.",
"The weighted F1 score is the weighted average of the class-level F1 scores, where the weight is the number of items in each class.",
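For concreteness, a tiny example of the weighted F1 score defined above, computed with scikit-learn on made-up labels:

```python
from sklearn.metrics import f1_score

# Toy gold and predicted labels for a 3-class task; the weighted F1 is
# the average of per-class F1 scores weighted by class frequency.
y_true = [0, 0, 0, 1, 1, 2]
y_pred = [0, 0, 1, 1, 1, 2]
print(f1_score(y_true, y_pred, average="weighted"))
```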
"The majority baseline always predicts the most frequent class in each task, namely: Image does not add for the image task, Text is not represented for the text task and Image does not add & Text is not represented for the Image + Text task.",
"The models using user demographics and tweet metadata show minor improvements over the majority class baseline for both tasks.",
"When the two tasks are combined, both feature types offer only a slight increase over the baseline.",
"This shows that user factors mildly impact the frequency with which relationship types are used, which will be explored further in the analysis section.",
"The models that use tweet text as features show consistent improvements over the baseline for all three tasks.",
"The two models that use the tweet's topical content (Bag of Words and LSTM) obtain higher predictive performance over the surface features.",
"Both content based models obtain relatively similar performance, with the LSTM performing better on the image task.",
"The models which use information extracted from the image alone also consistently outperform the baseline on all three tasks.",
"Re-tuning the neural network performs substantially better than building a model directly from the ImageNet classes on the image task and narrowly outperforms the other method on the text task.",
"This is somewhat expected, as the retuning is performed on this domain specific task.",
"When comparing text and image based models across tasks, we observe that using image features obtains substantially better performance on the image task, while the text models obtain better performance on the text task.",
"This is somewhat natural, as the focus of each annotation task is on one modality and methods relying on content from that modality are more predictive alone as to what ultimately represents the text-image relationship type.",
"Our naive ensemble approach does not yield substantially better results than the best performing methods using a single modality.",
"However, by jointly modelling both modalities, we are able to obtain improvements -especially on the image task.",
"This shows that both types of information and their interaction are important to this task.",
"Methods that exploit more heavily the interaction and semantic similarity between the text and the image are left for future work.",
"We also observe that the predictive methods we described are better at classifying the image task.",
"The analysis section below will allow us to uncover more about what type of content characterizes each relationship type.",
"Analysis In this section, we aim to gain a better understanding of the type of content specific of the four textimage relationship types and about user type preferences in their usage.",
"User Analysis Socio-demographic traits of the authors of posts are known to be correlated with several social media behaviors including text (Rao et al., 2010; Pennacchiotti and Popescu, 2011; Schwartz et al., 2013; Volkova et al., 2014; Lampos et al., 2014; Preoţiuc-Pietro et al., 2015a,b, 2016) and images (Alowibdi et al., 2013; You et al., 2014; Farseev et al., 2015; Skowron et al., 2016; Liu et al., 2016; Guntuku et al., 2017; Samani et al., 2018; Guntuku et al., 2019).",
"We hypothesize that socio-demographic traits also play a role in the types of text-image relationships employed on Twitter.",
"To measure this, we use partial Pearson correlation where the dependent variables are one of four socio-demographic traits described in Section 4.2.",
"The independent variables indicate the average times with which the user employed a certain relationship type.",
"We code this using six different variables: two representing the two broader tasks -the percentage of tweets where image adds information and the percentage of tweets where the text is represented in the image -and four encoding each combination between the two tasks.",
"In addition, for all analyses we consider gender and age as basic human traits and control for data skew by introducing both variables as controls in partial correlation, as done in prior work (Schwartz et al., 2013; Holgate et al., 2018) .",
"When studying age and gender, we only use the other trait as the control.",
"Because we are running several statistical tests at once (24) without predefined hypotheses, we use Bonferroni correction to counteract the problem of multiple comparisons.",
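The Bonferroni correction mentioned above simply divides the significance threshold by the number of tests; a minimal sketch with the numbers taken from the text:

```python
# 24 statistical tests at a target significance level of p < 0.01:
# each individual test must pass the stricter per-test threshold.
alpha, n_tests = 0.01, 24
per_test_threshold = alpha / n_tests
print(per_test_threshold)
```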
"The results are presented in Table 2 .",
"We observe that age is the only user demographic trait that is significantly correlated to text-image relationship preference after controlling for multiple comparisons and other demographic traits.",
"The text-image relationship where the text is represented in the image, at least partially, is positively correlated with age (r = 0.117).",
"Further analyzing the four individual text-image relationship types reveals that older users especially prefer tweets where there is a semantic overlap between the concepts present in the text and the image, but the image contributes with additional information to the meaning of the tweet.",
"This is arguably the most conventional usage of images, where they illustrate the text and provide more details than the text could.",
"Younger users prefer most tweets where the image adds information to the meaning of the tweet, but this has no semantic overlap with the text.",
"These are usually tweets where the text represents merely a comment or a feeling expressed with the image providing the context.",
"This represents a more image-centric approach to the meaning of the tweet that is specific to younger users.",
"These correlations are controlled for gender.",
"Education was also correlated with images where the text was represented in the image (r = 0.076, p < .01, Bonferroni corrected), but this correlation did not meet the significance criteria when controlled for age to which education is moderately correlated (r = 0.302).",
"This demonstrates the importance of controlling for such factors in this type of analysis.",
"No effects were found with respect to gender or income.",
"Table 2 : Pearson correlation between user demographic traits and usage of the different text-image relationship types.",
"All correlations in bold are significant at p < .01, two-tailed t-test, Bonferroni corrected for multiple comparisons.",
"Results for gender are controlled for age and vice versa.",
"Results for education and income are controlled for age and gender.",
"Tweet Metadata Analysis We adapt a similar approach to uncover potential relationships between the text-image relationship expressed in the tweet and tweet metadata features described in Section 5.2.",
"However, after controlling for multiple comparisons, we are left with no significant correlations at p < 0.01 level.",
"Hence, we refrain from presenting and discussing any results using this feature group as significant.",
"Text Analysis Finally, we aim to identify the text and image features that characterize the four types of text-image relationship.",
"We use univariate Pearson correlation where the independent variable is each feature's normalized value in a tweet and the dependent variables are two binary indicators for the text and image tasks respectively.",
"When performed using text features, this technique was coined Differential Language Analysis (Schwartz et al., 2013, 2017).",
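A minimal sketch of the univariate Pearson correlation analysis described above, using SciPy on invented feature values and task labels:

```python
from scipy.stats import pearsonr

# Toy normalized frequency of one unigram per tweet, and the binary
# indicator of whether the image adds information (both invented).
feature = [0.0, 0.1, 0.0, 0.3, 0.2, 0.0]
label = [0, 1, 0, 1, 1, 0]
r, p = pearsonr(feature, label)
print(r, p)
```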
"The results when using unigrams as features are presented in Figure 3 , 4 and 5.",
"Results for the image task (Figure 3) show that the image adds to the meaning of the tweet if words such as this, it, why, needs or want are used.",
"These words can appear in texts with the role of referencing or pointing to an entity which is only present in the image.",
"Conversely, the image does not add to the meaning of the tweet when words indicative of objects that are also depicted in the image are present (cat, baby, eyes or face).",
"A special case are tweets with birthday wishes, where a person is mentioned in text and also displayed in an image.",
"Finally, the tbt keyword and hashtag is a popular social media trend where users post nostalgic pictures of their past accompanied by their textual description.",
"The comparison between the two outcomes of the text task is presented in Figure 4 .",
"When the text and image semantically overlap, we observe words indicative of actions (i've), possessions (your) or qualitative statements (congrats, loved, excited, tried), usually about objects or persons also present in the image.",
"We also observe a few nouns (cats, hillary) indicating frequent content that is also depicted in images (NB: the tweets were collected in 2016 when the U.S. presidential elections took place).",
"Analyzing this outcome jointly with the text task, we uncover a prominent theme consisting of words describing first person actions (congrats, thank, i've, saw, tell) present when the image provides facets not covered by text (Figure 5d ).",
"Several keywords from text (cat, game, winter) show types of content which are present in both image and text, but the image merely illustrates these concepts without adding additional information (Figure 5a).",
"In contrast, the text is not represented in the image when it contains words specific of comments (when, lmao), questions (do, was), references (this) or ellipsis ('...'), all often referencing the content of the image as identified through data inspection.",
"References to self, objects and personal states (i, me) and feelings (miss) are also expressed in text about items or things not appearing in the image from the same tweet.",
"Further exploring this result through the image task outcome, we see that the latter category of feelings about persons or objects (miss, happy, lit, like; Figure 5a) is specific to when the image does not add additional information.",
"Through manual inspection of these images, they often display a meme (as in Figure 1d ) or unrelated expressions to the text's content.",
"The image adds information when the text is not represented (Figure 5c ) if the latter includes personal feelings, (me, i, i'm, want), comments (lol, lmao) and references (this, it), usually related to the image content as identified through an analysis of the data.",
"Conclusions We defined and analyzed quantitatively and qualitatively the semantic relationships between the text and the image of the same tweet using a novel annotated data set.",
"The frequency of use is influenced by the age of the poster, with younger users employing images with a more prominent role in the tweet, rather than just being redundant to the text or as a means of illustrating it.",
"We studied the correlation between the content in the text and relation with the image, highlighting a differentiation between relationship types, even if only using the text of the tweet alone.",
"We developed models that use both text and image features to classify the text-image relationship, with especially high performance (F1 = 0.81) in identifying if the image is redundant, which is immediately useful for downstream applications that maximize screen estate for users.",
"Future work will look deeper into using the similarity between the content of the text and image (Leong and Mihalcea, 2011), as the text task results showed room for improvements.",
"We envision that our data, task and classifiers will be useful as a preprocessing step in collecting data for training large scale models for image captioning (Feng and Lapata, 2010) or tagging (Mahajan et al., 2018) or for improving recommendations (Chen et al., 2016) by filtering out tweets where the text and image have no semantic overlap or can enable new tasks such as identifying tweets that contain creative descriptions for images."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"4.2",
"4.3",
"5",
"5.1",
"5.2",
"5.3",
"5.4",
"5.5",
"6",
"7",
"7.1",
"7.2",
"7.3",
"8"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Categorizing Text-Image Relationships",
"Data Set",
"Data Sampling",
"Demographic Variables",
"Annotation",
"Methods",
"User Demographics",
"Tweet Metadata",
"Text-based Methods",
"Image-based Methods",
"Joint Text-Image Methods",
"Predicting Text-Image Relationship",
"Analysis",
"User Analysis",
"Tweet Metadata Analysis",
"Text Analysis",
"Conclusions"
]
}
|
GEM-SciDuet-train-51#paper-1089#slide-4
|
Data Annotation
|
2019 Bloomberg Finance L.P. All rights reserved.
|
2019 Bloomberg Finance L.P. All rights reserved.
|
[] |
GEM-SciDuet-train-51#paper-1089#slide-5
|
1089
|
Categorizing and Inferring the Relationship between the Text and Image of Twitter Posts
|
Text in social media posts is frequently accompanied by images in order to provide content, supply context, or to express feelings. This paper studies how the meaning of the entire tweet is composed through the relationship between its textual content and its image. We build and release a data set of image tweets annotated with four classes which express whether the text or the image provides additional information to the other modality. We show that by combining the text and image information, we can build a machine learning approach that accurately distinguishes between the relationship types. Further, we derive insights into how these relationships are materialized through text and image content analysis and how they are impacted by user demographic traits. These methods can be used in several downstream applications including pre-training image tagging models, collecting distantly supervised data for image captioning, and can be directly used in end-user applications to optimize screen estate.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200
],
"paper_content_text": [
"Introduction Social media sites have traditionally been centered around publishing textual content.",
"Recently, posting images on social media has become a very popular way of expressing content and feelings especially due to the wide availability of mobile devices and connectivity.",
"Images are currently present in a significant fraction of tweets and tweets with images get double the engagement of those without (Buffer, 2016) .",
"Thus, in addition to text, images have become key components of tweets.",
"However, little is known about how textual content is related to the images with which they appear.",
"For example, concepts or feelings mentioned in text could be illustrated or strengthened by images, text can point to the content of an image or can just provide commentary on the image content.",
"Formalizing and understanding the relationship between the two modalities -text and images -is useful in several areas: a) for NLP and computer vision research, where image and text data from tweets are used to develop data sets and methods for image captioning (Mitchell et al., 2012) or object recognition (Mahajan et al., 2018); b) for social scientists and psychologists trying to understand social media use; c) in browsers or apps, where images that do not contain content in addition to the text could be replaced by a placeholder and displayed only if the end-user desires, in order to optimize screen space (see Figure 2).",
"Figure 1 illustrates four different ways in which the text and image of the same tweet can be related: • Figures 1(a,b) show how the image can add to the semantics of the tweet, by either providing more information than the text (Figure 1a ) or by providing the context for understanding the text (Figure 1b ); • In Figures 1(c,d) , the image only illustrates what is expressed through text, without providing any additional information.",
"Hence, in both of these cases, the text alone is sufficient to understanding the tweet's key message; • Figures 1(a,c) show examples of tweets where there is a semantic overlap between the content of the text and image: bike and sign in Figure 1a and tacos in Figure 1c ; • In Figures 1(b,d) , the textual content is not represented in the image, with the text being either a comment on the image's content (Figure 1b) or the image illustrating a feeling related to the text's content.",
"In this paper, we present a comprehensive analysis that focuses on the types of relationships between the text and image in a tweet.",
"Our contributions include: • Defining the types of relationships between the text and the image of a social media post; • Building a data set of tweets annotated with text -image relationship type; 1 • Machine learning methods that use both text and image content to predict the relationship between the two modalities; • An analysis into the author's demographic traits that are related to usage preference of textimage relationship types; • An analysis of the textual features which characterize each relationship type.",
"Related Work Task.",
"The relationship between a text and its associated image was researched in a few prior studies.",
"For general web pages, Marsh and Domas White (2003) propose a taxonomy of 49 relationships grouped in three major categories based on how similar the image is to the text, ranging from little relation to going beyond the text, which forms the basis of one of our relationship dimensions.",
"Martinec and Salway (2005) aim to categorize text-image relationships in scientific articles from two perspectives: the relative importance of one modality compared to the other and the logico-semantic overlap.",
"Alikhani and Stone (2018) argue that understanding multimodal text-image presentation requires studying the coherence relations that organize the content.",
"Even when a single relationship is used, such as captioning, it can be expressed in multiple forms such as telic, atelic or stative.",
"Wang et al.",
"(2014) use the intuition that text and images from microposts can be associated or not or depend on one another and use this intuition in a topic model that learns topics and image tags jointly.",
"Jas and Parikh (2015) study the concept of image specificity through how similar to each other are multiple descriptions of that image.",
"However, none of these studies propose any predictive methods for text-image relationship types.",
"Other work annotates and trains models on a recipe data set (Yagcioglu et al., 2018) for the relationships between instructional text and images around the following dimensions: temporal, logical and incidental detail.",
"Chen et al.",
"(2013) study text-image relationships using social media data focusing on the distinction between images that are overall visually relevant or non-relevant to the textual content.",
"They build models using the text and image content that predict the relationship type (Chen et al., 2015) .",
"We build on this research and define an annotation scheme that focuses on each of the two modalities separately and look at both their semantic overlap and contribution to the meaning of the whole tweet.",
"Applications.",
"Several applications require to be able to automatically predict the semantic textimage relationship in the data.",
"Models for automatically generating image descriptions (Feng and Lapata, 2010; Ordonez et al., 2011; Mitchell et al., 2012; Vinyals et al., 2015; Lu et al., 2017) or predicting tags (Mahajan et al., 2018) are built using large training data sets of noisy image-text pairs from sources such as tweets.",
"Multimodal named entity disambiguation leverages visual context vectors from social media images to aid named entity disambiguation (Moon et al., 2018) .",
"Multimodal topic labeling focuses on generating candidate labels (text or images) for a given topic and ranks them by relevance (Sorodoc et al., 2017) .",
"Several resources of images paired with descriptive captions are available, which can be used to build similarity metrics and joint semantic spaces for text and images (Young et al., 2014) .",
"However, all these assume that the text and image represent similar concepts which, as we show in this paper, is not true on Twitter.",
"Being able to classify this relationship can be useful for all above-mentioned applications.",
"Categorizing Text-Image Relationships We define the types of semantic relationships that can exist between the content of the text and the image by splitting them into two tasks for simplicity.",
"The first task is centered on the role of the text to the tweet's semantics, while the second focuses on the image's role.",
"The first task -referred to as the text task in the rest of the paper -focuses on identifying if there is semantic overlap between the context of the text and the image.",
"This task is defined using the following guidelines: 1.",
"Some or all of the content words in the text are represented in the image (Text is represented) 2.",
"None of the content words in the text are represented in the image (Text is not represented): • None of the content words are represented in the image, or • The text is only a comment about the content of the image, or • The text expresses a feeling or emotion about the content of the image, or • The text only makes a reference to something shown in the image, or • The text is unrelated to the image Examples for this task can be seen in Figure 1 by comparing Figures 1(a ,c) (Text is represented) with Figures 1(b,d) (Text is not represented).",
"The second task -referred to as the image task in the rest of the paper -focuses on the role of the image to the semantics of the tweet and aims to identify if the image's content contributes with additional information to the meaning of the tweet beyond the text, as judged by an independent third party.",
"This task is defined and annotated using the following guidelines: 1.",
"Image has additional content that represents the meaning of the text and the image (Image adds): • Image contains other text that adds additional meaning to the text, or • Image depicts something that adds information to the text or • Image contains other entities that are referenced by the text.",
"2.",
"Image does not add additional content that represents the meaning of text+image (Image does not add).",
"Examples for the image task can be seen in Figure 1. Combining the labels of the two binary tasks described above gives rise to four types of text-image relationships (Image+Text Task).",
"All of the four relationship types are exemplified in Figure 1 .",
"Data Set To study the relationship between the text and image in the same social media post, we define a new annotation schema and collect a new annotated corpus.",
"To the best of our knowledge, no such corpus exists in prior research.",
"Data Sampling We use Twitter as the source of our data, as this source contains a high level of expression of thoughts, opinions and emotions (Java et al., 2007; Kouloumpis et al., 2011) .",
"It represents a platform for observing written interactions and conversations between users (Ritter et al., 2010) .",
"The tweets were randomly sampled from a list of users for which several of their socio-demographic traits are known, introduced in past research.",
"This will enable us to explore if the frequency of posting tweets with a certain text-image relationship is different across socio-demographic groups.",
"We downloaded as many tweets as we could from these users using the Twitter API (up to 3,200 tweets/user per API limits).",
"We decided to annotate only tweets from within the same time range (2016) in order to reduce the influence of potential platform usage changes with time.",
"We filter out tweets that are not written in English using the langid.py tool (Lui and Baldwin, 2012) .",
"In total, 2,263 users (out of the initial 4,132) have posted tweets with at least one image in the year 2016 and were included in our analysis.",
"Our final data set contains 4,471 tweets.",
"Demographic Variables The Twitter users from the data set we sampled have self-reported the following demographic variables through a survey: gender, age, education level and annual income.",
"All users solicited for data collection were from the United States in order to limit cultural variation.",
"• Gender was considered binary and coded with Female -1 and Male -0.",
"All other variables are treated as ordinal variables.",
"• Age is represented as an integer value in the 13-90 year old interval.",
"• Education level is coded as an ordinal variable with 6 values representing the highest degree obtained, with the lowest being 'No high school degree' (coded as 1) and the highest being 'Advanced Degree (e.g., PhD)' (coded as 6).",
"• Income level is coded as an ordinal variable with 8 values representing the annual income of the person, ranging from '< $20,000' to '> $200,000'.",
"For a full description of the user recruitment and quality control processes, we refer the interested reader to prior work.",
"Annotation We have collected annotations for text-image pairs from 4,471 tweets using the Figure Eight platform (formerly CrowdFlower).",
"We annotate all tweets containing both text and image using two independent annotation tasks in order to simplify the task and not to prime annotators to use the outcome of one task as an indicator for the outcome of the other.",
"For quality control, 10% of annotations were test questions annotated by the authors.",
"Annotators had to maintain a minimum accuracy on test questions of 85% for the image task and 75% for the text task for their annotations to be valid.",
"Inter-annotator agreement is measured using Krippendorff's Alpha.",
"The overall Krippendorff's Alpha is 0.71 for the image task, which is in the upper part of the substantial agreement band (Artstein and Poesio, 2008).",
"We collect 3 judgments and use majority vote to obtain the final label to further remove noise.",
"For the text task, we collected and aggregated 5 judgments as the Krippendorff's Alpha is 0.46, which is considered moderate agreement (Artstein and Poesio, 2008).",
"The latter task was more difficult due to requiring specific world knowledge (e.g.",
"a singer mentioned in a text also present in an image) or contained information encoded in hashtags or usernames which the annotators sometimes overlooked.",
"The aggregated judgments for each task were combined to obtain the four class labels.",
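The majority-vote aggregation described above can be sketched in a few lines; the judgment values are invented for illustration:

```python
from collections import Counter

# Three annotator judgments for one tweet on the image task; the final
# label is the most frequent judgment.
judgments = ["image_adds", "image_adds", "image_does_not_add"]
label, count = Counter(judgments).most_common(1)[0]
print(label)  # → image_adds
```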
"Methods Our goal is to develop methods that are capable of automatically classifying the text-image relationship in tweets.",
"We experiment with several methods which use information of four different types: demographics of the user posting the tweet, metadata from the tweet, the text of the tweet or the image of the tweet; plus a combination of them.",
"The methods we use are described in this section.",
"User Demographics User demographic features are the survey-based demographic information we have available for all users that posted the annotated tweets.",
"The use of these traits is based on the intuition that different demographic groups have different posting preferences (Pennacchiotti and Popescu, 2011; .",
"We use this approach for comparison reasons only, as in practical use cases we would normally not have access to the author's demographic traits.",
"We code the gender, age, education level and income level of the user as features and use them in a logistic regression classifier to classify the textimage relationship.",
"Tweet Metadata We experiment with using the tweet metadata as features.",
"These code if a tweet is a reply, tweet, like or neither.",
"We also add as features the tweet like count, the number of followers, friends and posts of the post's author and include them all in a logistic regression classifier.",
"These features are all available at tweet publishing time and we build a model using them to establish a more solid baseline for content based approaches.",
"Text-based Methods We use the textual content of the tweet alone to build models for predicting the text-image relationship.",
"We expect that certain textual cues will be specific to relationships even without considering the image content.",
"For example, tweets ending in an ellipsis or short comments will likely be predictive of the text not being represented in the image.",
"Surface Features.",
"We first use a range of surface features which capture more of the shallow stylistic content of the tweet.",
"We extract the number of tokens, uppercase tokens, exclamations, questions, ellipses, hashtags, @ mentions, quotes and URLs from the tweet and use them as features in a logistic regression classifier.",
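The surface features listed above can be extracted with simple pattern matching; this is a sketch with an assumed whitespace tokenization, not the authors' exact extraction code:

```python
import re

def surface_features(tweet):
    """Shallow stylistic counts used as features (a sketch of the
    feature set described in the text)."""
    tokens = tweet.split()
    return {
        "n_tokens": len(tokens),
        "n_upper": sum(t.isupper() for t in tokens),
        "n_exclam": tweet.count("!"),
        "n_quest": tweet.count("?"),
        "n_ellipsis": len(re.findall(r"\.\.\.|…", tweet)),
        "n_hashtags": len(re.findall(r"#\w+", tweet)),
        "n_mentions": len(re.findall(r"@\w+", tweet)),
        "n_quotes": tweet.count('"'),
        "n_urls": len(re.findall(r"https?://\S+", tweet)),
    }

feats = surface_features("WOW look at this... #cats @user http://t.co/x")
```

The resulting dictionary can then be vectorized and fed to a logistic regression classifier.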
"Bag of Words.",
"The most common approach for building a text-based model is using bag-ofwords features.",
"Here, we extract unigram and bigram features and use them in a logistic regression classifier with elastic net regularization (Zou and Hastie, 2005) .",
"LSTM.",
"Finally, based on recent results in text classification, we also experiment with a neural network approach which uses a Long-Short Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) network.",
"The LSTM network processes the tweet sequentially, where each word is represented by its embedding (E = 200), followed by a dense hidden layer (D = 64), a ReLU activation function and dropout (0.4). The model is trained by minimizing cross entropy using the Adam optimizer (Kingma and Ba, 2014).",
"The network uses in-domain Twitter GloVe embeddings pre-trained on 2 billion tweets (Pennington et al., 2014) .",
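As a preprocessing sketch for such an embedding + LSTM model, tweets are typically mapped to fixed-length sequences of word ids that index into the embedding matrix; the toy vocabulary, unknown-token handling and maximum length below are illustrative assumptions, not the paper's exact pipeline:

```python
def texts_to_padded_ids(texts, vocab, max_len=30, pad_id=0, unk_id=1):
    """Map each tweet to a fixed-length sequence of word ids, the usual
    input format for an embedding + LSTM model (a sketch; the paper's
    exact tokenization and sequence length are not specified)."""
    out = []
    for text in texts:
        ids = [vocab.get(tok, unk_id) for tok in text.lower().split()]
        # Truncate to max_len, then right-pad with pad_id.
        ids = ids[:max_len] + [pad_id] * max(0, max_len - len(ids))
        out.append(ids)
    return out

vocab = {"look": 2, "at": 3, "this": 4}  # toy vocabulary
seqs = texts_to_padded_ids(["Look at THIS ..."], vocab, max_len=6)
```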
"Image-based Methods We use the content of the tweet image alone to build models for predicting the text-image relationship.",
"Similar to text, we expect that certain image content will be predictive of text-image relationships even without considering the text content.",
"For example, images of people may be more likely to have in the text the names of those persons.",
"To analyze image content, we make use of large pre-trained neural networks for the task of object recognition on the ImageNet data set.",
"ImageNet (Deng et al., 2009 ) is a visual database developed for object recognition research and consists of 1000 object types.",
"In particular, we use the popular pre-trained InceptionNet model (Szegedy et al., 2015) , which achieved the best performance at the ImageNet Large Scale Visual Recognition Challenge 2014 to build the following two imagebased models.",
"ImageNet Classes.",
"First, we represent each image in a tweet with the probability distribution over the 1,000 ImageNet classes obtained from Inception-Net.",
"Then, we pass those features to a logistic regression classifier which is trained on our task.",
"In this setup, the network parameters remain fixed, while only the ImageNet class weights are learned in the logistic regression classifier.",
"Tuned InceptionNet.",
"Additionally, we tailored the InceptionNet network to directly predict our tasks by replacing the final 1,000-class ImageNet layer with a softmax layer trained with the multinomial logistic loss for our task.",
"Then, we loaded the pretrained network from (Szegedy et al., 2015) and fine-tuned the final fully-connected layer with the modified loss layers.",
"We perform this in order to directly predict our task, while avoiding the need to re-estimate the entire set of model weights from our restricted set of images.",
"Both approaches to classification using image content based on a model pre-trained on ImageNet have been used successfully in past research (Cinar et al., 2015) .",
"Joint Text-Image Methods Finally, we combine the textual and image information in a single model to classify the text-image relationship type, as we expect both types of content and their interaction to be useful to the task.",
"Ensemble.",
"A simple method for combining the information from both modalities is to build an ensemble classifier.",
"This is done with a logistic regression model with two features: the Bag of Words text model's predicted class probability and the Tuned InceptionNet model's predicted class probability.",
"The parameters of the model are tuned by cross validation on the training data and similar splits as the individual models.",
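The ensemble's input, as described, is just two features per tweet: the predicted class probability from each single-modality model. A minimal sketch of building that feature matrix (the helper name and toy probabilities are illustrative):

```python
def stacking_features(text_probs, image_probs):
    """Build the two-feature input for the ensemble classifier: the
    predicted class probability from the text model and from the
    image model, one row per tweet."""
    assert len(text_probs) == len(image_probs)
    return [[t, i] for t, i in zip(text_probs, image_probs)]

# Toy predicted probabilities for two tweets:
X = stacking_features([0.9, 0.2], [0.7, 0.4])
# X can then be passed to a logistic regression (e.g. scikit-learn's
# LogisticRegression), which is omitted to keep the sketch dependency-free.
```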
"LSTM + InceptionNet.",
"We also build a joint approach by concatenating the features from the final layers of our LSTM and InceptionNet models and passing them through a fully-connected (FC) feed forward neural network with one hidden layer (64 nodes).",
"The final output is our text-image relationship type.",
"We use the Adam optimizer to fine-tune this network.",
"The LSTM model has the same parameters as in the text-only approach, while the InceptionNet model is initialized with the pre-trained model on the ImageNet data set.",
"Predicting Text-Image Relationship We split our data into a 80% train (3,576 tweets) and 20% test (895 tweets) stratified sample for all of our experiments.",
"Parameters were tuned using 10-fold cross-validation with the training set, and results are reported on the test set.",
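A stratified split along these lines can be sketched in pure Python; the exact splitting procedure used by the authors is not specified, so this is only an illustration of keeping per-class proportions equal across train and test:

```python
import random
from collections import defaultdict

def stratified_split(items, labels, test_frac=0.2, seed=0):
    """Split items into train/test while preserving per-class
    proportions (a sketch of an 80/20 stratified split)."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for item, lab in zip(items, labels):
        by_class[lab].append(item)
    train, test = [], []
    for lab, group in by_class.items():
        rng.shuffle(group)
        k = int(round(len(group) * test_frac))
        test.extend(group[:k])
        train.extend(group[k:])
    return train, test

items = list(range(100))
labels = ["a"] * 60 + ["b"] * 40
train, test = stratified_split(items, labels)
```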
"Table 1 presents the weighted F1-scores for the text task, the image task and the image+text task with all the methods described in Section 5.",
"The weighted F1 score is the weighted average of the class-level F1 scores, where the weight is the number of items in each class.",
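The weighted F1 score described above can be computed as a support-weighted average of per-class F1 scores; a minimal dependency-free sketch (scikit-learn's `f1_score` with `average='weighted'` computes the same quantity):

```python
from collections import Counter

def f1(tp, fp, fn):
    """Harmonic mean of precision and recall for one class."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

def weighted_f1(y_true, y_pred):
    """Class-level F1 scores averaged with weights proportional to
    each class's support (number of true items in the class)."""
    classes = Counter(y_true)
    total = 0.0
    for c, support in classes.items():
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        total += support * f1(tp, fp, fn)
    return total / len(y_true)

score = weighted_f1(["a", "a", "b"], ["a", "b", "b"])
```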
"The majority baseline always predicts the most frequent class in each task, namely: Image does not add for the image task, Text is not represented for the text task and Image does not add & Text is not represented for the Image + Text task.",
"The models using user demographics and tweet metadata show minor improvements over the majority class baseline for both tasks.",
"When the two tasks are combined, both feature types offer only a slight increase over the baseline.",
"This shows that user factors mildly impact the frequency with which relationship types are used, which will be explored further in the analysis section.",
"The models that use tweet text as features show consistent improvements over the baseline for all three tasks.",
"The two models that use the tweet's topical content (Bag of Words and LSTM) obtain higher predictive performance over the surface features.",
"Both content based models obtain relatively similar performance, with the LSTM performing better on the image task.",
"The models which use information extracted from the image alone also consistently outperform the baseline on all three tasks.",
"Re-tuning the neural network performs substantially better than building a model directly from the ImageNet classes on the image task and narrowly outperforms the other method on the text task.",
"This is somewhat expected, as the retuning is performed on this domain specific task.",
"When comparing text and image based models across tasks, we observe that using image features obtains substantially better performance on the image task, while the text models obtain better performance on the text task.",
"This is somewhat natural: the focus of each annotation task is on one modality, so methods relying on content from that modality are by themselves more predictive of the text-image relationship type.",
"Our naive ensemble approach does not yield substantially better results than the best performing methods using a single modality.",
"However, by jointly modelling both modalities, we are able to obtain improvements -especially on the image task.",
"This shows that both types of information and their interaction are important to this task.",
"Methods that exploit more heavily the interaction and semantic similarity between the text and the image are left for future work.",
"We also observe that the predictive methods we described are better at classifying the image task.",
"The analysis section below will allow us to uncover more about what type of content characterizes each relationship type.",
"Analysis In this section, we aim to gain a better understanding of the type of content specific of the four textimage relationship types and about user type preferences in their usage.",
"User Analysis Socio-demographic traits of the authors of posts are known to be correlated with several social media behaviors, including text (Rao et al., 2010; Pennacchiotti and Popescu, 2011; Schwartz et al., 2013; Volkova et al., 2014; Lampos et al., 2014; Preoţiuc-Pietro et al., 2015a,b, 2016) and images (Alowibdi et al., 2013; You et al., 2014; Farseev et al., 2015; Skowron et al., 2016; Liu et al., 2016; Guntuku et al., 2017; Samani et al., 2018; Guntuku et al., 2019) .",
"We hypothesize that socio-demographic traits also play a role in the types of text-image relationships employed on Twitter.",
"To measure this, we use partial Pearson correlation where the dependent variables are one of four socio-demographic traits described in Section 4.2.",
"The independent variables indicate the proportion of tweets in which the user employed a certain relationship type.",
"We code this using six different variables: two representing the two broader tasks -the percentage of tweets where image adds information and the percentage of tweets where the text is represented in the image -and four encoding each combination between the two tasks.",
"In addition, for all analyses we consider gender and age as basic human traits and control for data skew by introducing both variables as controls in partial correlation, as done in prior work (Schwartz et al., 2013; Holgate et al., 2018) .",
"When studying age and gender, we only use the other trait as the control.",
"Because we are running several statistical tests at once (24) without predefined hypotheses, we use Bonferroni correction to counteract the problem of multiple comparisons.",
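As a sketch of the correlation machinery, plain Pearson correlation and the Bonferroni-corrected significance threshold can be computed as follows (the partial correlation used in the paper additionally regresses out the control variables first, which is omitted here):

```python
import math

def pearson_r(x, y):
    """Plain Pearson correlation coefficient between two sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def bonferroni(alpha, n_tests):
    """Corrected per-test significance threshold for n_tests comparisons."""
    return alpha / n_tests

r = pearson_r([1, 2, 3, 4], [2, 4, 6, 8])  # perfectly correlated toy data
threshold = bonferroni(0.01, 24)           # 24 tests, as in the text
```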
"The results are presented in Table 2 .",
"We observe that age is the only user demographic trait that is significantly correlated to text-image relationship preference after controlling for multiple comparisons and other demographic traits.",
"The text-image relationship where the text is represented in the image, at least partially, is positively correlated with age (r = 0.117).",
"Further analyzing the four individual text-image relationship types reveals that older users especially prefer tweets where there is a semantic overlap between the concepts present in the text and the image, but the image contributes with additional information to the meaning of the tweet.",
"This is arguably the most conventional usage of images, where they illustrate the text and provide more details than the text could.",
"Younger users prefer most tweets where the image adds information to the meaning of the tweet, but this has no semantic overlap with the text.",
"These are usually tweets where the text represents merely a comment or a feeling expressed with the image providing the context.",
"This represents a more image-centric approach to the meaning of the tweet that is specific to younger users.",
"These correlations are controlled for gender.",
"Education was also correlated with images where the text was represented in the image (r = 0.076, p < .01, Bonferroni corrected), but this correlation did not meet the significance criteria when controlled for age to which education is moderately correlated (r = 0.302).",
"This demonstrates the importance of controlling for such factors in this type of analysis.",
"No effects were found with respect to gender or income.",
"Table 2 : Pearson correlation between user demographic traits and usage of the different text-image relationship types.",
"All correlations in bold are significant at p < .01, two-tailed t-test, Bonferroni corrected for multiple comparisons.",
"Results for gender are controlled for age and vice versa.",
"Results for education and income are controlled for age and gender.",
"Tweet Metadata Analysis We adopt a similar approach to uncover potential relationships between the text-image relationship expressed in the tweet and the tweet metadata features described in Section 5.2.",
"However, after controlling for multiple comparisons, we are left with no significant correlations at p < 0.01 level.",
"Hence, we refrain from presenting and discussing any results using this feature group as significant.",
"Text Analysis Finally, we aim to identify the text and image features that characterize the four types of text-image relationship.",
"We use univariate Pearson correlation where the independent variable is each feature's normalized value in a tweet and the dependent variables are two binary indicators for the text and image tasks respectively.",
"When performed using text features, this technique was coined Differential Language Analysis (Schwartz et al., 2013, 2017) .",
"The results when using unigrams as features are presented in Figures 3, 4 and 5.",
"Results for the image task (Figure 3) show that the image adds to the meaning of the tweet if words such as this, it, why, needs or want are used.",
"These words can appear in texts with the role of referencing or pointing to an entity which is only present in the image.",
"Conversely, the image does not add to the meaning of the tweet when the text contains words indicative of objects that are also depicted in the image (cat, baby, eyes or face).",
"A special case are tweets with birthday wishes, where a person is mentioned in text and also displayed in an image.",
"Finally, the tbt keyword and hashtag is a popular social media trend where users post nostalgic pictures of their past accompanied by their textual description.",
"The comparison between the two outcomes of the text task is presented in Figure 4 .",
"When the text and image semantically overlap, we observe words indicative of actions (i've), possessions (your) or qualitative statements (congrats, loved, excited, tried), usually about objects or persons also present in the image.",
"We also observe a few nouns (cats, hillary) indicating frequent content that is also depicted in images (NB: the tweets were collected in 2016, when the U.S. presidential elections took place).",
"Analyzing this outcome jointly with the text task, we uncover a prominent theme consisting of words describing first person actions (congrats, thank, i've, saw, tell) present when the image provides facets not covered by text (Figure 5d ).",
"Several keywords from text (cat, game, winter) show types of content which are present in both image and text, but where the image merely illustrates these concepts without adding additional information (Figure 5a) .",
"In contrast, the text is not represented in the image when it contains words specific of comments (when, lmao), questions (do, was), references (this) or ellipsis ('...'), all often referencing the content of the image as identified through data inspection.",
"References to self, objects and personal states (i, me) and feelings (miss) are also expressed in text about items or things not appearing in the image from the same tweet.",
"Further exploring this result through the image task outcome, we see that the latter category of feelings about persons or objects (miss, happy, lit, like; Figure 5a ) is specific of cases when the image does not add additional information.",
"Through manual inspection of these images, they often display a meme (as in Figure 1d ) or unrelated expressions to the text's content.",
"The image adds information when the text is not represented (Figure 5c ) if the latter includes personal feelings, (me, i, i'm, want), comments (lol, lmao) and references (this, it), usually related to the image content as identified through an analysis of the data.",
"Conclusions We defined and analyzed quantitatively and qualitatively the semantic relationships between the text and the image of the same tweet using a novel annotated data set.",
"The frequency of use is influenced by the age of the poster, with younger users employing images with a more prominent role in the tweet, rather than just being redundant to the text or as a means of illustrating it.",
"We studied the correlation between the content in the text and relation with the image, highlighting a differentiation between relationship types, even if only using the text of the tweet alone.",
"We developed models that use both text and image features to classify the text-image relationship, with especially high performance (F1 = 0.81) in identifying if the image is redundant, which is immediately useful for downstream applications that maximize screen estate for users.",
"Future work will look deeper into using the similarity between the content of the text and image (Leong and Mihalcea, 2011), as the text task results showed room for improvements.",
"We envision that our data, task and classifiers will be useful as a preprocessing step in collecting data for training large scale models for image captioning (Feng and Lapata, 2010) or tagging (Mahajan et al., 2018) or for improving recommendations (Chen et al., 2016) by filtering out tweets where the text and image have no semantic overlap or can enable new tasks such as identifying tweets that contain creative descriptions for images."
]
}
|
{
"paper_header_number": ["1", "2", "3", "4", "4.1", "4.2", "4.3", "5", "5.1", "5.2", "5.3", "5.4", "5.5", "6", "7", "7.1", "7.2", "7.3", "8"],
"paper_header_content": ["Introduction", "Related Work", "Categorizing Text-Image Relationships", "Data Set", "Data Sampling", "Demographic Variables", "Annotation", "Methods", "User Demographics", "Tweet Metadata", "Text-based Methods", "Image-based Methods", "Joint Text-Image Methods", "Predicting Text-Image Relationship", "Analysis", "User Analysis", "Tweet Metadata Analysis", "Text Analysis", "Conclusions"]
}
|
GEM-SciDuet-train-51#paper-1089#slide-5
|
Data Collection
|
2019 Bloomberg Finance L.P. All rights reserved.
|
2019 Bloomberg Finance L.P. All rights reserved.
|
[] |
GEM-SciDuet-train-51#paper-1089#slide-6
|
1089
|
Categorizing and Inferring the Relationship between the Text and Image of Twitter Posts
|
Text in social media posts is frequently accompanied by images in order to provide content, supply context, or to express feelings. This paper studies how the meaning of the entire tweet is composed through the relationship between its textual content and its image. We build and release a data set of image tweets annotated with four classes which express whether the text or the image provides additional information to the other modality. We show that by combining the text and image information, we can build a machine learning approach that accurately distinguishes between the relationship types. Further, we derive insights into how these relationships are materialized through text and image content analysis and how they are impacted by user demographic traits. These methods can be used in several downstream applications including pre-training image tagging models, collecting distantly supervised data for image captioning, and can be directly used in end-user applications to optimize screen estate.
|
{
"paper_content_id": [
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39,
40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59,
60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79,
80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99,
100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119,
120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139,
140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159,
160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179,
180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199,
200
],
"paper_content_text": [
"Introduction Social media sites have traditionally been centered around publishing textual content.",
"Recently, posting images on social media has become a very popular way of expressing content and feelings especially due to the wide availability of mobile devices and connectivity.",
"Images are currently present in a significant fraction of tweets and tweets with images get double the engagement of those without (Buffer, 2016) .",
"Thus, in addition to text, images have become key components of tweets.",
"However, little is known about how textual content is related to the images with which they appear.",
"For example, concepts or feelings mentioned in text could be illustrated or strengthened by images, text can point to the content of an image or can just provide commentary on the image content.",
"Formalizing and understanding the relationship between the two modalities -text and images -is useful in several areas: a) for NLP and computer vision research, where image and text data from tweets are used to develop data sets and methods for image captioning (Mitchell et al., 2012) or object recognition (Mahajan et al., 2018) ; b) for social scientists and psychologists trying to understand social media use; c) in browsers or apps, where images that do not contain content beyond the text could be replaced by a placeholder and displayed only if the end-user desires, in order to optimize screen space (see Figure 2 ).",
"Figure 1 illustrates four different ways in which the text and image of the same tweet can be related: • Figures 1(a,b) show how the image can add to the semantics of the tweet, by either providing more information than the text (Figure 1a ) or by providing the context for understanding the text (Figure 1b ); • In Figures 1(c,d) , the image only illustrates what is expressed through text, without providing any additional information.",
"Hence, in both of these cases, the text alone is sufficient to understanding the tweet's key message; • Figures 1(a,c) show examples of tweets where there is a semantic overlap between the content of the text and image: bike and sign in Figure 1a and tacos in Figure 1c ; • In Figures 1(b,d) , the textual content is not represented in the image, with the text being either a comment on the image's content (Figure 1b) or the image illustrating a feeling related to the text's content.",
"In this paper, we present a comprehensive analysis that focuses on the types of relationships between the text and image in a tweet.",
"Our contributions include: • Defining the types of relationships between the text and the image of a social media post; • Building a data set of tweets annotated with text -image relationship type; 1 • Machine learning methods that use both text and image content to predict the relationship between the two modalities; • An analysis into the author's demographic traits that are related to usage preference of textimage relationship types; • An analysis of the textual features which characterize each relationship type.",
"Related Work Task.",
"The relationship between a text and its associated image was researched in a few prior studies.",
"For general web pages, Marsh and Domas White (2003) propose a taxonomy of 49 relationships grouped in three major categories based on how similar the image is to the text, ranging from little relation to going beyond the text, which forms the basis of one of our relationship dimensions.",
"Martinec and Salway (2005) aim to categorize text-image relationships in scientific articles from two perspectives: the relative importance of one modality compared to the other and the logico-semantic overlap.",
"Alikhani and Stone (2018) argue that understanding multimodal text-image presentation requires studying the coherence relations that organize the content.",
"Even when a single relationship is used, such as captioning, it can be expressed in multiple forms such as telic, atelic or stative .",
"Wang et al.",
"(2014) use the intuition that text and images from microposts can be associated or not or depend on one another and use this intuition in a topic model that learns topics and image tags jointly.",
"Jas and Parikh (2015) study the concept of image specificity through how similar to each other are multiple descriptions of that image.",
"However, none of these studies propose any predictive methods for text-image relationship types.",
"annotate and train models on a recipe data set (Yagcioglu et al., 2018) for the relationships between instructional text and images around the following dimensions: temporal, logical and incidental detail.",
"Chen et al.",
"(2013) study text-image relationships using social media data focusing on the distinction between images that are overall visually relevant or non-relevant to the textual content.",
"They build models using the text and image content that predict the relationship type (Chen et al., 2015) .",
"We build on this research and define an annotation scheme that focuses on each of the two modalities separately and look at both their semantic overlap and contribution to the meaning of the whole tweet.",
"Applications.",
"Several applications require automatically predicting the semantic text-image relationship in the data.",
"Models for automatically generating image descriptions (Feng and Lapata, 2010; Ordonez et al., 2011; Mitchell et al., 2012; Vinyals et al., 2015; Lu et al., 2017) or predicting tags (Mahajan et al., 2018) are built using large training data sets of noisy image-text pairs from sources such as tweets.",
"Multimodal named entity disambiguation leverages visual context vectors from social media images to aid named entity disambiguation (Moon et al., 2018) .",
"Multimodal topic labeling focuses on generating candidate labels (text or images) for a given topic and ranks them by relevance (Sorodoc et al., 2017) .",
"Several resources of images paired with descriptive captions are available, which can be used to build similarity metrics and joint semantic spaces for text and images (Young et al., 2014) .",
"However, all these assume that the text and image represent similar concepts which, as we show in this paper, is not true on Twitter.",
"Being able to classify this relationship can be useful for all above-mentioned applications.",
"Categorizing Text-Image Relationships We define the types of semantic relationships that can exist between the content of the text and the image by splitting them into two tasks for simplicity.",
"The first task is centered on the role of the text to the tweet's semantics, while the second focuses on the image's role.",
"The first task -referred to as the text task in the rest of the paper -focuses on identifying if there is semantic overlap between the content of the text and the image.",
"This task is the defined using the following guidelines: 1.",
"Some or all of the content words in the text are represented in the image (Text is represented) 2.",
"None of the content words in the text are represented in the image (Text is not represented): • None of the content words are represented in the image, or • The text is only a comment about the content of the image, or • The text expresses a feeling or emotion about the content of the image, or • The text only makes a reference to something shown in the image, or • The text is unrelated to the image Examples for this task can be seen in Figure 1 by comparing Figures 1(a ,c) (Text is represented) with Figures 1(b,d) (Text is not represented).",
"The second task -referred to as the image task in the rest of the paper -focuses on the role of the image to the semantics of the tweet and aims to identify if the image's content contributes with additional information to the meaning of the tweet beyond the text, as judged by an independent third party.",
"This task is defined and annotated using the following guidelines: 1.",
"Image has additional content that represents the meaning of the text and the image (Image adds): • Image contains other text that adds additional meaning to the text, or • Image depicts something that adds information to the text or • Image contains other entities that are referenced by the text.",
"2.",
"Image does not add additional content that represents the meaning of text+image (Image does not add).",
"Examples for the image task can also be seen in Figure 1 . Combining the labels of the two binary tasks described above gives rise to four types of text-image relationships (Image+Text Task).",
"All of the four relationship types are exemplified in Figure 1 .",
"Data Set To study the relationship between the text and image in the same social media post, we define a new annotation schema and collect a new annotated corpus.",
"To the best of our knowledge, no such corpus exists in prior research.",
"Data Sampling We use Twitter as the source of our data, as this source contains a high level of expression of thoughts, opinions and emotions (Java et al., 2007; Kouloumpis et al., 2011) .",
"It represents a platform for observing written interactions and conversations between users (Ritter et al., 2010) .",
"The tweets were randomly sampled from a list of users for which several socio-demographic traits are known, introduced in past research .",
"This will enable us to explore if the frequency of posting tweets with a certain text-image relationship is different across socio-demographic groups.",
"We downloaded as many tweets as we could from these users using the Twitter API (up to 3,200 tweets/user per API limits).",
"We decided to annotate only tweets from within the same time range (2016) in order to reduce the influence of potential platform usage changes with time.",
"We filter out tweets that are not written in English using the langid.py tool (Lui and Baldwin, 2012) .",
"In total, 2,263 users (out of the initial 4,132) have posted tweets with at least one image in the year 2016 and were included in our analysis.",
"Our final data set contains 4,471 tweets.",
"Demographic Variables The Twitter users from the data set we sampled have self-reported the following demographic variables through a survey: gender, age, education level and annual income.",
"All users solicited for data collection were from the United States in order to limit cultural variation.",
"• Gender was considered binary and coded with Female -1 and Male -0.",
"All other variables are treated as ordinal variables.",
"• Age is represented as a integer value in the 13-90 year old interval.",
"• Education level is coded as an ordinal variable with 6 values representing the highest degree obtained, with the lowest being 'No high school degree' (coded as 1) and the highest being 'Advanced Degree (e.g., PhD)' (coded as 6).",
"• Income level is coded as an ordinal variable with 8 values representing the annual income of the person, ranging from '< $20,000' to '> $200,000'.",
"For a full description of the user recruitment and quality control processes, we refer the interested reader to .",
"Annotation We have collected annotations for text-image pairs from 4,471 tweets using the Figure Eight platform (formerly CrowdFlower).",
"We annotate all tweets containing both text and image using two independent annotation tasks, in order to simplify the task and not to prime annotators to use the outcome of one task as an indicator for the outcome of the other.",
"For quality control, 10% of annotations were test questions annotated by the authors.",
"Annotators had to maintain a minimum accuracy on test questions of 85% for the image task and 75% for the text task for their annotations to be valid.",
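The per-task accuracy gate described above can be sketched as follows (a hedged illustration, not the authors' released code; function and variable names are hypothetical):

```python
# Hedged sketch: filter annotators by their accuracy on hidden test
# questions, with the per-task thresholds given in the paper
# (85% for the image task, 75% for the text task).

def passes_quality_control(test_answers, gold, threshold):
    """Return True if the annotator's accuracy on the test
    questions meets the task-specific threshold."""
    correct = sum(1 for q, a in test_answers.items() if gold.get(q) == a)
    return correct / len(test_answers) >= threshold

# Hypothetical annotator on the image task (threshold 0.85)
gold = {"q1": "adds", "q2": "does_not_add", "q3": "adds", "q4": "adds"}
answers = {"q1": "adds", "q2": "does_not_add", "q3": "adds", "q4": "does_not_add"}
# 3/4 = 0.75 accuracy: fails the image task gate, passes the text task gate
print(passes_quality_control(answers, gold, 0.85))  # False
print(passes_quality_control(answers, gold, 0.75))  # True
```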
"Inter-annotator agreement is measured using Krippendorf's Alpha.",
"The overall Krippendorf's Alpha is 0.71 for the image task, which is in the upper part of the substantial agreement band (Artstein and Poesio, 2008).",
"We collect 3 judgments and use majority vote to obtain the final label to further remove noise.",
"For the text task, we collected and aggregated 5 judgments as the Krippendorf's Alpha is 0.46, which is considered moderate agreement (Artstein and Poesio, 2008) .",
"The latter task was more difficult due to requiring specific world knowledge (e.g.",
"a singer mentioned in a text also present in an image) or because information was encoded in hashtags or usernames, which the annotators sometimes overlooked.",
"The aggregated judgments for each task were combined to obtain the four class labels.",
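The aggregation described above (majority vote per task, then combining the two binary outcomes into four classes) can be sketched in a few lines; label strings here are illustrative, not the dataset's exact identifiers:

```python
from collections import Counter

# Hedged sketch of the label aggregation: majority vote per item for
# each task, then combine the two binary outcomes into the four
# text-image relationship classes.

def majority_vote(judgments):
    """Return the most frequent label among an item's judgments."""
    return Counter(judgments).most_common(1)[0][0]

def combined_label(image_adds, text_represented):
    image = "Image adds" if image_adds else "Image does not add"
    text = "Text is represented" if text_represented else "Text not represented"
    return f"{image} & {text}"

image_judgments = ["adds", "adds", "does_not_add"]                # 3 judgments (image task)
text_judgments = ["rep", "not_rep", "not_rep", "not_rep", "rep"]  # 5 judgments (text task)

img = majority_vote(image_judgments) == "adds"
txt = majority_vote(text_judgments) == "rep"
print(combined_label(img, txt))  # Image adds & Text not represented
```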
"Methods Our goal is to develop methods that are capable of automatically classifying the text-image relationship in tweets.",
"We experiment with several methods which use information of four different types: demographics of the user posting the tweet, metadata from the tweet, the text of the tweet or the image of the tweet; plus a combination of them.",
"The methods we use are described in this section.",
"User Demographics User demographic features are the survey-based demographic information we have available for all users that posted the annotated tweets.",
"The use of these traits is based on the intuition that different demographic groups have different posting preferences (Pennacchiotti and Popescu, 2011; .",
"We use this approach for comparison reasons only, as in practical use cases we would normally not have access to the author's demographic traits.",
"We code the gender, age, education level and income level of the user as features and use them in a logistic regression classifier to classify the textimage relationship.",
"Tweet Metadata We experiment with using the tweet metadata as features.",
"These features encode whether a tweet is a reply, tweet, like or neither.",
"We also add as features the tweet like count, the number of followers, friends and posts of the post's author and include them all in a logistic regression classifier.",
"These features are all available at tweet publishing time and we build a model using them to establish a more solid baseline for content based approaches.",
"Text-based Methods We use the textual content of the tweet alone to build models for predicting the text-image relationship.",
"We expect that certain textual cues will be specific to relationships even without considering the image content.",
"For example, tweets ending in an ellipsis or short comments will likely be predictive of the text not being represented in the image.",
"Surface Features.",
"We first use a range of surface features which capture more of the shallow stylistic content of the tweet.",
"We extract number of tokens, uppercase tokens, exclamations, questions, ellipsis, hashtags, @ mentions, quotes and URLs from the tweet and use them as features in a logistic regression classifier.",
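The surface feature extraction above can be sketched as follows (a hedged approximation: the paper's exact tokenization and feature definitions may differ):

```python
import re

# Hedged sketch of the surface features listed in the paper:
# counts of tokens, uppercase tokens, exclamations, questions,
# ellipses, hashtags, @-mentions, quotes and URLs.

def surface_features(tweet):
    tokens = tweet.split()
    return {
        "n_tokens": len(tokens),
        "n_uppercase": sum(1 for t in tokens if t.isupper()),
        "n_exclamations": tweet.count("!"),
        "n_questions": tweet.count("?"),
        "n_ellipsis": len(re.findall(r"\.\.\.|…", tweet)),
        "n_hashtags": len(re.findall(r"#\w+", tweet)),
        "n_mentions": len(re.findall(r"@\w+", tweet)),
        "n_quotes": tweet.count('"') // 2,
        "n_urls": len(re.findall(r"https?://\S+", tweet)),
    }

feats = surface_features("WOW this is amazing!!! #tacos @friend https://t.co/abc ...")
print(feats["n_hashtags"], feats["n_mentions"], feats["n_urls"])  # 1 1 1
```

Each tweet is thus mapped to a small fixed-length feature vector that a logistic regression classifier can consume directly.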
"Bag of Words.",
"The most common approach for building a text-based model is using bag-ofwords features.",
"Here, we extract unigram and bigram features and use them in a logistic regression classifier with elastic net regularization (Zou and Hastie, 2005) .",
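A minimal sketch of the unigram and bigram feature extraction is below; in practice these counts would feed a logistic regression with elastic-net regularization (e.g. scikit-learn's LogisticRegression with penalty="elasticnet" and solver="saga"), which is omitted here to keep the example self-contained:

```python
from collections import Counter

# Hedged sketch: unigram and bigram counts as bag-of-words features.
# Only the feature extraction is shown; the classifier itself is a
# regularized logistic regression as described in the paper.

def bow_features(tweet):
    tokens = tweet.lower().split()
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    feats = {("uni", t): c for t, c in unigrams.items()}
    feats.update({("bi",) + b: c for b, c in bigrams.items()})
    return feats

feats = bow_features("look at this cat this cat is great")
print(feats[("uni", "cat")])         # 2
print(feats[("bi", "this", "cat")])  # 2
```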
"LSTM.",
"Finally, based on recent results in text classification, we also experiment with a neural network approach which uses a Long-Short Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) network.",
"The LSTM network processes the tweet sequentially, where each word is represented by its embedding (E = 200), followed by a dense hidden layer (D = 64), a ReLU activation function and dropout (0.4). The model is trained by minimizing cross entropy using the Adam optimizer (Kingma and Ba, 2014).",
"The network uses in-domain Twitter GloVe embeddings pre-trained on 2 billion tweets (Pennington et al., 2014) .",
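To make the recurrence concrete, here is a toy, pure-Python LSTM cell step with scalar states and made-up weights; the paper's model would of course use a library implementation with E = 200 GloVe embeddings and a 64-unit dense layer on top:

```python
import math

# Illustrative single LSTM step (scalar input/state, toy weights),
# showing the gated recurrence the text model relies on.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One LSTM step; w maps each gate to an
    (input-weight, recurrent-weight, bias) triple."""
    i = sigmoid(w["i"][0] * x + w["i"][1] * h_prev + w["i"][2])    # input gate
    f = sigmoid(w["f"][0] * x + w["f"][1] * h_prev + w["f"][2])    # forget gate
    o = sigmoid(w["o"][0] * x + w["o"][1] * h_prev + w["o"][2])    # output gate
    g = math.tanh(w["g"][0] * x + w["g"][1] * h_prev + w["g"][2])  # candidate
    c = f * c_prev + i * g   # new cell state
    h = o * math.tanh(c)     # new hidden state
    return h, c

w = {k: (0.5, 0.1, 0.0) for k in ("i", "f", "o", "g")}  # toy weights
h, c = 0.0, 0.0
for x in [1.0, -0.5, 2.0]:  # a "sequence" of 3 scalar inputs
    h, c = lstm_step(x, h, c, w)
print(round(h, 3))
```

In the actual model, the final hidden state (or sequence of states) is what the dense layer and softmax consume.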
"Image-based Methods We use the content of the tweet image alone to build models for predicting the text-image relationship.",
"Similar to text, we expect that certain image content will be predictive of text-image relationships even without considering the text content.",
"For example, images of people may be more likely to have in the text the names of those persons.",
"To analyze image content, we make use of large pre-trained neural networks for the task of object recognition on the ImageNet data set.",
"ImageNet (Deng et al., 2009 ) is a visual database developed for object recognition research and consists of 1000 object types.",
"In particular, we use the popular pre-trained InceptionNet model (Szegedy et al., 2015), which achieved the best performance at the ImageNet Large Scale Visual Recognition Challenge 2014, to build the following two image-based models.",
"ImageNet Classes.",
"First, we represent each image in a tweet with the probability distribution over the 1,000 ImageNet classes obtained from InceptionNet.",
"Then, we pass those features to a logistic regression classifier which is trained on our task.",
"In this setup, the network parameters remain fixed, while only the ImageNet class weights are learned in the logistic regression classifier.",
"Tuned InceptionNet.",
"Additionally, we tailored the InceptionNet network to directly predict our task by replacing the final 1,000-class ImageNet layer with a multinomial logistic loss with softmax over our relationship classes.",
"Then, we loaded the pretrained network from (Szegedy et al., 2015) and fine-tuned the final fully-connected layer with the modified loss layers.",
"We perform this in order to directly predict our task, while also overcoming the necessity of re-extracting the entire model weights from our restricted set of images.",
"The two approaches to classification using image content based on pre-trained model on Im-ageNet have been used successfully in past research (Cinar et al., 2015) .",
"Joint Text-Image Methods Finally, we combine the textual and image information in a single model to classify the text-image relationship type, as we expect both types of content and their interaction to be useful to the task.",
"Ensemble.",
"A simple method for combining the information from both modalities is to build an ensemble classifier.",
"This is done with a logistic regression model with two features: the Bag of Words text model's predicted class probability and the Tuned InceptionNet model's predicted class probability.",
"The parameters of the model are tuned by cross validation on the training data and similar splits as the individual models.",
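The ensemble is just a logistic regression over two scalar features, which can be sketched end-to-end with plain gradient descent on toy data (hedged: the paper tunes the actual model by cross-validation, and the probabilities below are invented):

```python
import math

# Hedged sketch of the ensemble: logistic regression over two
# features (text model class probability, image model class
# probability), trained here with simple SGD on toy data.

def ensemble_prob(p_text, p_image, w):
    z = w[0] + w[1] * p_text + w[2] * p_image
    return 1.0 / (1.0 + math.exp(-z))

def train(data, lr=0.5, epochs=2000):
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for (pt, pi), y in data:
            err = ensemble_prob(pt, pi, w) - y  # gradient of log loss
            w[0] -= lr * err
            w[1] -= lr * err * pt
            w[2] -= lr * err * pi
    return w

# Toy data: ((text model prob, image model prob), gold label)
data = [((0.9, 0.8), 1), ((0.8, 0.7), 1), ((0.2, 0.3), 0), ((0.1, 0.2), 0)]
w = train(data)
print(ensemble_prob(0.85, 0.75, w) > 0.5)  # True
print(ensemble_prob(0.15, 0.25, w) > 0.5)  # False
```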
"LSTM + InceptionNet.",
"We also build a joint approach by concatenating the features from the final layers of our LSTM and InceptionNet models and passing them through a fully-connected (FC) feed forward neural network with one hidden layer (64 nodes).",
"The final output is our text-image relationship type.",
"We use the Adam optimizer to fine tune this network.",
"The LSTM model has the same parameters as in the text-only approach, while the InceptionNet model is initialized with the pre-trained model on the ImageNet data set.",
"Predicting Text-Image Relationship We split our data into a 80% train (3,576 tweets) and 20% test (895 tweets) stratified sample for all of our experiments.",
"Parameters were tuned using 10-fold cross-validation with the training set, and results are reported on the test set.",
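A stratified split samples the test fraction per class so that class proportions are preserved; a minimal stdlib sketch (the paper presumably used a library utility such as scikit-learn's train_test_split with stratify, so this is illustrative only):

```python
import random
from collections import defaultdict

# Hedged sketch of a stratified 80/20 split: draw 20% of each class
# for the test set, preserving class proportions (the paper's split
# yields 3,576 train / 895 test tweets).

def stratified_split(items, labels, test_frac=0.2, seed=0):
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for item, label in zip(items, labels):
        by_class[label].append(item)
    train, test = [], []
    for group in by_class.values():
        rng.shuffle(group)
        k = int(round(len(group) * test_frac))
        test.extend(group[:k])
        train.extend(group[k:])
    return train, test

items = list(range(100))
labels = ["a"] * 60 + ["b"] * 40   # items 0-59 are class "a"
train, test = stratified_split(items, labels)
print(len(train), len(test))  # 80 20
```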
"Table 1 presents the weighted F1-scores for the text task, the image task and the image+text task with all the methods described in Section 5.",
"The weighted F1 score is the weighted average of the class-level F1 scores, where the weight is the number of items in each class.",
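The weighted F1 definition above can be computed directly (the labels below are toy examples, not results from the paper):

```python
from collections import Counter

# Weighted F1 as defined in the paper: class-level F1 scores
# averaged with weights proportional to each class's support.

def f1(y_true, y_pred, cls):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p == cls)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != cls and p == cls)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p != cls)
    if tp == 0:
        return 0.0
    prec, rec = tp / (tp + fp), tp / (tp + fn)
    return 2 * prec * rec / (prec + rec)

def weighted_f1(y_true, y_pred):
    support = Counter(y_true)
    n = len(y_true)
    return sum(f1(y_true, y_pred, c) * k / n for c, k in support.items())

y_true = ["adds", "adds", "adds", "no", "no"]
y_pred = ["adds", "adds", "no", "no", "adds"]
# F1("adds") = 2/3 with weight 3/5; F1("no") = 1/2 with weight 2/5
print(round(weighted_f1(y_true, y_pred), 3))  # 0.6
```

This matches scikit-learn's f1_score with average="weighted".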
"The majority baseline always predicts the most frequent class in each task, namely: Image does not add for the image task, Text is not represented for the text task and Image does not add & Text is not represented for the Image + Text task.",
"The models using user demographics and tweet metadata show minor improvements over the majority class baseline for both tasks.",
"When the two tasks are combined, both feature types offer only a slight increase over the baseline.",
"This shows that user factors mildly impact the frequency with which relationship types are used, which will be explored further in the analysis section.",
"The models that use tweet text as features show consistent improvements over the baseline for all three tasks.",
"The two models that use the tweet's topical content (Bag of Words and LSTM) obtain higher predictive performance over the surface features.",
"Both content based models obtain relatively similar performance, with the LSTM performing better on the image task.",
"The models which use information extracted from the image alone also consistently outperform the baseline on all three tasks.",
"Re-tuning the neural network performs substantially better than building a model directly from the ImageNet classes on the image task and narrowly outperforms the other method on the text task.",
"This is somewhat expected, as the retuning is performed on this domain specific task.",
"When comparing text and image based models across tasks, we observe that using image features obtains substantially better performance on the image task, while the text models obtain better performance on the text task.",
"This is somewhat natural, as the focus of each annotation task is on one modality and methods relying on content from that modality are more predictive alone as to what ultimately represents the text-image relationship type.",
"Our naive ensemble approach does not yield substantially better results than the best performing methods using a single modality.",
"However, by jointly modelling both modalities, we are able to obtain improvements -especially on the image task.",
"This shows that both types of information and their interaction are important to this task.",
"Methods that exploit more heavily the interaction and semantic similarity between the text and the image are left for future work.",
"We also observe that the predictive methods we described are better at classifying the image task.",
"The analysis section below will allow us to uncover more about what type of content characterizes each relationship type.",
"Analysis In this section, we aim to gain a better understanding of the type of content specific of the four textimage relationship types and about user type preferences in their usage.",
"User Analysis Socio-demographic traits of the authors of posts are known to be correlated with several social media behaviors including text (Rao et al., 2010; Pennacchiotti and Popescu, 2011; Schwartz et al., 2013; Volkova et al., 2014; Lampos et al., 2014; Preoţiuc-Pietro et al., 2015a,b, 2016) and images (Alowibdi et al., 2013; You et al., 2014; Farseev et al., 2015; Skowron et al., 2016; Liu et al., 2016; Guntuku et al., 2017; Samani et al., 2018; Guntuku et al., 2019).",
"We hypothesize that socio-demographic traits also play a role in the types of text-image relationships employed on Twitter.",
"To measure this, we use partial Pearson correlation where the dependent variables are one of four socio-demographic traits described in Section 4.2.",
"The independent variables indicate the average times with which the user employed a certain relationship type.",
"We code this using six different variables: two representing the two broader tasks -the percentage of tweets where image adds information and the percentage of tweets where the text is represented in the image -and four encoding each combination between the two tasks.",
"In addition, for all analyses we consider gender and age as basic human traits and control for data skew by introducing both variables as controls in partial correlation, as done in prior work (Schwartz et al., 2013; Holgate et al., 2018) .",
"When studying age and gender, we only use the other trait as the control.",
"Because we are running several statistical tests at once (24) without predefined hypotheses, we use Bonferroni correction to counteract the problem of multiple comparisons.",
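One standard way to compute a partial Pearson correlation is to residualize both variables on the control(s) and correlate the residuals; a hedged stdlib sketch with a single control (the paper likely used a statistics package, and the data below is invented):

```python
import math

# Hedged sketch: partial Pearson correlation via residualization on
# one control variable. With 24 tests at alpha = 0.01, the
# Bonferroni-corrected per-test threshold is 0.01 / 24.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def residuals(y, z):
    """Residuals of y after a least-squares fit on control z."""
    n = len(y)
    mz, my = sum(z) / n, sum(y) / n
    slope = sum((a - mz) * (b - my) for a, b in zip(z, y)) / \
        sum((a - mz) ** 2 for a in z)
    intercept = my - slope * mz
    return [b - (intercept + slope * a) for a, b in zip(z, y)]

def partial_pearson(x, y, z):
    return pearson(residuals(x, z), residuals(y, z))

alpha_bonferroni = 0.01 / 24        # corrected significance threshold
x = [1.0, 2.0, 4.0, 3.0, 5.0]       # e.g. % of "image adds" tweets (toy)
z = [20.0, 25.0, 30.0, 35.0, 40.0]  # control variable, e.g. age (toy)
print(round(partial_pearson(x, x, z), 6))  # 1.0 (x with itself, given z)
```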
"The results are presented in Table 2 .",
"We observe that age is the only user demographic trait that is significantly correlated to text-image relationship preference after controlling for multiple comparisons and other demographic traits.",
"The text-image relationship where the text is represented in the image, at least partially, is positively correlated with age (r = 0.117).",
"Further analyzing the four individual text-image relationship types reveals that older users especially prefer tweets where there is a semantic overlap between the concepts present in the text and the image, but the image contributes with additional information to the meaning of the tweet.",
"This is arguably the most conventional usage of images, where they illustrate the text and provide more details than the text could.",
"Younger users prefer most tweets where the image adds information to the meaning of the tweet, but this has no semantic overlap with the text.",
"These are usually tweets where the text represents merely a comment or a feeling expressed with the image providing the context.",
"This represents a more image-centric approach to the meaning of the tweet that is specific to younger users.",
"These correlations are controlled for gender.",
"Education was also correlated with images where the text was represented in the image (r = 0.076, p < .01, Bonferroni corrected), but this correlation did not meet the significance criteria when controlled for age to which education is moderately correlated (r = 0.302).",
"This demonstrates the importance of controlling for such factors in this type of analysis.",
"No effects were found with respect to gender or income.",
"Table 2 : Pearson correlation between user demographic traits and usage of the different text-image relationship types.",
"All correlations in bold are significant at p < .01, two-tailed t-test, Bonferroni corrected for multiple comparisons.",
"Results for gender are controlled for age and vice versa.",
"Results for education and income are controlled for age and gender.",
"Tweet Metadata Analysis We adapt a similar approach to uncover potential relationships between the text-image relationship expressed in the tweet and tweet metadata features described in Section 5.2.",
"However, after controlling for multiple comparisons, we are left with no significant correlations at p < 0.01 level.",
"Hence, we refrain from presenting and discussing any results using this feature group as significant.",
"Text Analysis Finally, we aim to identify the text and image features that characterize the four types of text-image relationship.",
"We use univariate Pearson correlation where the independent variable is each feature's normalized value in a tweet and the dependent variables are two binary indicators for the text and image tasks respectively.",
"When performed using text features, this technique was coined Differential Language Analysis (Schwartz et al., 2013, 2017).",
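With a binary outcome, this univariate Pearson correlation reduces to a point-biserial correlation between each word's normalized frequency and the task label; a hedged toy sketch (the tweets and labels below are invented):

```python
import math

# Hedged sketch of the univariate analysis: correlate a word's
# normalized per-tweet frequency with the binary task outcome
# (Pearson correlation with a 0/1 dependent variable).

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def word_correlation(tweets, labels, word):
    freqs = []
    for tweet in tweets:
        tokens = tweet.lower().split()
        freqs.append(tokens.count(word) / len(tokens))
    return pearson(freqs, labels)

tweets = ["look at this", "why is this here", "my cat sleeping", "baby face"]
labels = [1, 1, 0, 0]  # 1 = image adds information (toy labels)
r = word_correlation(tweets, labels, "this")
print(r > 0)  # True: "this" is more frequent when the image adds
```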
"The results when using unigrams as features are presented in Figures 3, 4 and 5.",
"Results for the image task (Figure 3) show that the image adds to the meaning of the tweet if words such as this, it, why, needs or want are used.",
"These words can appear in texts with the role of referencing or pointing to an entity which is only present in the image.",
"Conversely, the image does not add to the meaning of the tweet when words indicative of objects that are also described in the image are present (cat, baby, eyes or face).",
"A special case are tweets with birthday wishes, where a person is mentioned in text and also displayed in an image.",
"Finally, the tbt keyword and hashtag is a popular social media trend where users post nostalgic pictures of their past accompanied by their textual description.",
"The comparison between the two outcomes of the text task is presented in Figure 4 .",
"When the text and image semantically overlap, we observe words indicative of actions (i've), possessions (your) or qualitative statements (congrats, loved, excited, tried), usually about objects or persons also present in the image.",
"We also observe a few nouns (cats, hillary) indicating frequent content that is also depicted in images (NB: the tweets were collected in 2016 when the U.S. presidential elections took place).",
"Analyzing this outcome jointly with the text task, we uncover a prominent theme consisting of words describing first person actions (congrats, thank, i've, saw, tell) present when the image provides facets not covered by text (Figure 5d ).",
"Several keywords from text (cat, game, winter) show types of content which are present in both image and text, but the image is merely illustrating these concepts without adding additional information (Figure 5a).",
"In contrast, the text is not represented in the image when it contains words specific of comments (when, lmao), questions (do, was), references (this) or ellipsis ('...'), all often referencing the content of the image as identified through data inspection.",
"References to self, objects and personal states (i, me) and feelings (miss) are also expressed in text about items or things not appearing in the image from the same tweet.",
"Further exploring this result through the image task outcome, we see that the latter category of feelings about persons or objects (miss, happy, lit, like; Figure 5a ) is specific to tweets where the image does not add additional information.",
"Through manual inspection of these images, they often display a meme (as in Figure 1d ) or unrelated expressions to the text's content.",
"The image adds information when the text is not represented (Figure 5c ) if the latter includes personal feelings (me, i, i'm, want), comments (lol, lmao) and references (this, it), usually related to the image content as identified through an analysis of the data.",
"Conclusions We defined and analyzed quantitatively and qualitatively the semantic relationships between the text and the image of the same tweet using a novel annotated data set.",
"The frequency of use is influenced by the age of the poster, with younger users employing images with a more prominent role in the tweet, rather than just being redundant to the text or as a means of illustrating it.",
"We studied the correlation between the content in the text and relation with the image, highlighting a differentiation between relationship types, even if only using the text of the tweet alone.",
"We developed models that use both text and image features to classify the text-image relationship, with especially high performance (F1 = 0.81) in identifying if the image is redundant, which is immediately useful for downstream applications that maximize screen estate for users.",
"Future work will look deeper into using the similarity between the content of the text and image (Leong and Mihalcea, 2011), as the text task results showed room for improvements.",
"We envision that our data, task and classifiers will be useful as a preprocessing step in collecting data for training large scale models for image captioning (Feng and Lapata, 2010) or tagging (Mahajan et al., 2018) or for improving recommendations (Chen et al., 2016) by filtering out tweets where the text and image have no semantic overlap or can enable new tasks such as identifying tweets that contain creative descriptions for images."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"4.2",
"4.3",
"5",
"5.1",
"5.2",
"5.3",
"5.4",
"5.5",
"6",
"7",
"7.1",
"7.2",
"7.3",
"8"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Categorizing Text-Image Relationships",
"Data Set",
"Data Sampling",
"Demographic Variables",
"Annotation",
"Methods",
"User Demographics",
"Tweet Metadata",
"Text-based Methods",
"Image-based Methods",
"Joint Text-Image Methods",
"Predicting Text-Image Relationship",
"Analysis",
"User Analysis",
"Tweet Metadata Analysis",
"Text Analysis",
"Conclusions"
]
}
|
GEM-SciDuet-train-51#paper-1089#slide-6
|
Data Distribution
|
Image does not add & Text not represented
Image does not add & Some text represented
Image adds & Text not represented
2019 Bloomberg Finance L.P. All rights reserved.
|
Image does not add & Text not represented
Image does not add & Some text represented
Image adds & Text not represented
2019 Bloomberg Finance L.P. All rights reserved.
|
[] |
GEM-SciDuet-train-51#paper-1089#slide-7
|
1089
|
Categorizing and Inferring the Relationship between the Text and Image of Twitter Posts
|
Text in social media posts is frequently accompanied by images in order to provide content, supply context, or to express feelings. This paper studies how the meaning of the entire tweet is composed through the relationship between its textual content and its image. We build and release a data set of image tweets annotated with four classes which express whether the text or the image provides additional information to the other modality. We show that by combining the text and image information, we can build a machine learning approach that accurately distinguishes between the relationship types. Further, we derive insights into how these relationships are materialized through text and image content analysis and how they are impacted by user demographic traits. These methods can be used in several downstream applications including pre-training image tagging models, collecting distantly supervised data for image captioning, and can be directly used in end-user applications to optimize screen estate.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200
],
"paper_content_text": [
"Introduction Social media sites have traditionally been centered around publishing textual content.",
"Recently, posting images on social media has become a very popular way of expressing content and feelings especially due to the wide availability of mobile devices and connectivity.",
"Images are currently present in a significant fraction of tweets and tweets with images get double the engagement of those without (Buffer, 2016) .",
"Thus, in addition to text, images have become key components of tweets.",
"However, little is known about how textual content is related to the images with which they appear.",
"For example, concepts or feelings mentioned in text could be illustrated or strengthened by images, text can point to the content of an image or can just provide commentary on the image content.",
"Formalizing and understanding the relationship between the two modalities -text and images -is useful in several areas: a) for NLP and computer vision research, where image and text data from tweets are used to develop data sets and methods for image captioning (Mitchell et al., 2012) or object recognition (Mahajan et al., 2018); b) for social scientists and psychologists trying to understand social media use; c) in browsers or apps, where images that may not contain additional content beyond the text would be replaced by a placeholder and displayed only if the end-user desires, in order to optimize screen space (see Figure 2 ).",
"Figure 1 illustrates four different ways in which the text and image of the same tweet can be related: • Figures 1(a,b) show how the image can add to the semantics of the tweet, by either providing more information than the text (Figure 1a ) or by providing the context for understanding the text (Figure 1b ); • In Figures 1(c,d) , the image only illustrates what is expressed through text, without providing any additional information.",
"Hence, in both of these cases, the text alone is sufficient to understanding the tweet's key message; • Figures 1(a,c) show examples of tweets where there is a semantic overlap between the content of the text and image: bike and sign in Figure 1a and tacos in Figure 1c ; • In Figures 1(b,d) , the textual content is not represented in the image, with the text being either a comment on the image's content (Figure 1b) or the image illustrating a feeling related to the text's content.",
"In this paper, we present a comprehensive analysis that focuses on the types of relationships between the text and image in a tweet.",
"Our contributions include: • Defining the types of relationships between the text and the image of a social media post; • Building a data set of tweets annotated with text -image relationship type; 1 • Machine learning methods that use both text and image content to predict the relationship between the two modalities; • An analysis into the author's demographic traits that are related to usage preference of textimage relationship types; • An analysis of the textual features which characterize each relationship type.",
"Related Work Task.",
"The relationship between a text and its associated image was researched in a few prior studies.",
"For general web pages, Marsh and Domas White (2003) propose a taxonomy of 49 relationships grouped in three major categories based on how similar the image is to the text, ranging from little relation to going beyond the text, which forms the basis of one of our relationship dimensions.",
"Martinec and Salway (2005) aim to categorize text-image relationships in scientific articles from two perspectives: the relative importance of one modality compared to the other and the logico-semantic overlap.",
"Alikhani and Stone (2018) argue that understanding multimodal text-image presentation requires studying the coherence relations that organize the content.",
"Even when a single relationship is used, such as captioning, it can be expressed in multiple forms such as telic, atelic or stative .",
"Wang et al.",
"(2014) use the intuition that text and images from microposts can be associated or not or depend on one another and use this intuition in a topic model that learns topics and image tags jointly.",
"Jas and Parikh (2015) study the concept of image specificity through how similar to each other are multiple descriptions of that image.",
"However, none of these studies propose any predictive methods for text-image relationship types.",
"annotate and train models on a recipe data set (Yagcioglu et al., 2018) for the relationships between instructional text and images around the following dimensions: temporal, logical and incidental detail.",
"Chen et al.",
"(2013) study text-image relationships using social media data focusing on the distinction between images that are overall visually relevant or non-relevant to the textual content.",
"They build models using the text and image content that predict the relationship type (Chen et al., 2015) .",
"We build on this research and define an annotation scheme that focuses on each of the two modalities separately and look at both their semantic overlap and contribution to the meaning of the whole tweet.",
"Applications.",
"Several applications require being able to automatically predict the semantic text-image relationship in the data.",
"Models for automatically generating image descriptions (Feng and Lapata, 2010; Ordonez et al., 2011; Mitchell et al., 2012; Vinyals et al., 2015; Lu et al., 2017) or predicting tags (Mahajan et al., 2018) are built using large training data sets of noisy image-text pairs from sources such as tweets.",
"Multimodal named entity disambiguation leverages visual context vectors from social media images to aid named entity disambiguation (Moon et al., 2018) .",
"Multimodal topic labeling focuses on generating candidate labels (text or images) for a given topic and ranks them by relevance (Sorodoc et al., 2017) .",
"Several resources of images paired with descriptive captions are available, which can be used to build similarity metrics and joint semantic spaces for text and images (Young et al., 2014) .",
"However, all these assume that the text and image represent similar concepts which, as we show in this paper, is not true on Twitter.",
"Being able to classify this relationship can be useful for all above-mentioned applications.",
"Categorizing Text-Image Relationships We define the types of semantic relationships that can exist between the content of the text and the image by splitting them into two tasks for simplicity.",
"The first task is centered on the role of the text to the tweet's semantics, while the second focuses on the image's role.",
"The first task -referred to as the text task in the rest of the paper -focuses on identifying if there is semantic overlap between the context of the text and the image.",
"This task is the defined using the following guidelines: 1.",
"Some or all of the content words in the text are represented in the image (Text is represented) 2.",
"None of the content words in the text are represented in the image (Text is not represented): • None of the content words are represented in the image, or • The text is only a comment about the content of the image, or • The text expresses a feeling or emotion about the content of the image, or • The text only makes a reference to something shown in the image, or • The text is unrelated to the image Examples for this task can be seen in Figure 1 by comparing Figures 1(a ,c) (Text is represented) with Figures 1(b,d) (Text is not represented).",
"The second task -referred to as the image task in the rest of the paper -focuses on the role of the image to the semantics of the tweet and aims to identify if the image's content contributes with additional information to the meaning of the tweet beyond the text, as judged by an independent third party.",
"This task is defined and annotated using the following guidelines: 1.",
"Image has additional content that represents the meaning of the text and the image (Image adds): • Image contains other text that adds additional meaning to the text, or • Image depicts something that adds information to the text or • Image contains other entities that are referenced by the text.",
"2.",
"Image does not add additional content that represents the meaning of text+image (Image does not add).",
"Examples for the image task can be seen in Figure 1 . Combining the labels of the two binary tasks described above gives rise to four types of text-image relationships (Image+Text Task).",
"All of the four relationship types are exemplified in Figure 1 .",
"Data Set To study the relationship between the text and image in the same social media post, we define a new annotation schema and collect a new annotated corpus.",
"To the best of our knowledge, no such corpus exists in prior research.",
"Data Sampling We use Twitter as the source of our data, as this source contains a high level of expression of thoughts, opinions and emotions (Java et al., 2007; Kouloumpis et al., 2011) .",
"It represents a platform for observing written interactions and conversations between users (Ritter et al., 2010) .",
"The tweets were deliberately randomly sampled tweets from a list of users for which several of their socio-demographic traits are known, introduced in past research .",
"This will enable us to explore if the frequency of posting tweets with a certain text-image relationship is different across socio-demographic groups.",
"We downloaded as many tweets as we could from these users using the Twitter API (up to 3,200 tweets/user per API limits).",
"We decided to annotate only tweets from within the same time range (2016) in order to reduce the influence of potential platform usage changes with time.",
"We filter out tweets that are not written in English using the langid.py tool (Lui and Baldwin, 2012) .",
"In total, 2,263 users (out of the initial 4,132) have posted tweets with at least one image in the year 2016 and were included in our analysis.",
"Our final data set contains 4,471 tweets.",
"Demographic Variables The Twitter users from the data set we sampled have self-reported the following demographic variables through a survey: gender, age, education level and annual income.",
"All users solicited for data collection were from the United States in order to limit cultural variation.",
"• Gender was considered binary 2 and coded with Female -1 and Male -0.",
"All other variables are treated as ordinal variables.",
"• Age is represented as a integer value in the 13-90 year old interval.",
"• Education level is coded as an ordinal variable with 6 values representing the highest degree obtained, with the lowest being 'No high school degree' (coded as 1) and the highest being 'Advanced Degree (e.g., PhD)' (coded as 6).",
"• Income level is coded as on ordinal variable with 8 values representing the annual income of the person, ranging from '< $20,000' to '> $200,000').",
"For a full description of the user recruitment and quality control processes, we refer the interested reader to .",
"Annotation We have collected annotations for text-image pairs from 4,471 tweets using the Figure Eight platform (formerly CrowdFlower).",
"We annotate all tweets containing both text and image using two independent annotation tasks in order to simplify the task and not to prime annotators use the outcome of one task as a indicator for the outcome of the other.",
"For quality control, 10% of annotations were test questions annotated by the authors.",
"Annotators had to maintain a minimum accuracy on test questions of 85% for the image task and 75% for the text task for their annotations to be valid.",
"Inter-annotator agreement is measured using Krippendorf's Alpha.",
"The overall Krippendorfs Alpha is 0.71 for the image task, which is in the upper part of the substantial agreement band (Artstein and Poesio, 2008) .",
"We collect 3 judgments and use majority vote to obtain the final label to further remove noise.",
"For the text task, we collected and aggregated 5 judgments as the Krippendorf's Alpha is 0.46, which is considered moderate agreement (Artstein and Poesio, 2008) .",
"The latter task was more difficult due to requiring specific world knowledge (e.g.",
"a singer mentioned in a text also present in an image) or contained information encoded in hashtags or usernames which the annotators sometimes overlooked.",
"The aggregated judgments for each task were combined to obtain the four class labels.",
"Methods Our goal is to develop methods that are capable of automatically classifying the text-image relationship in tweets.",
"We experiment with several methods which use information of four different types: demographics of the user posting the tweet, metadata from the tweet, the text of the tweet or the image of the tweet; plus a combination of them.",
"The methods we use are described in this section.",
"User Demographics User demographic features are the survey-based demographic information we have available for all users that posted the annotated tweets.",
"The use of these traits is based on the intuition that different demographic groups have different posting preferences (Pennacchiotti and Popescu, 2011; .",
"We use this approach for comparison reasons only, as in practical use cases we would normally not have access to the author's demographic traits.",
"We code the gender, age, education level and income level of the user as features and use them in a logistic regression classifier to classify the textimage relationship.",
"Tweet Metadata We experiment with using the tweet metadata as features.",
"These code if a tweet is a reply, tweet, like or neither.",
"We also add as features the tweet like count, the number of followers, friends and posts of the post's author and include them all in a logistic regression classifier.",
"These features are all available at tweet publishing time and we build a model using them to establish a more solid baseline for content based approaches.",
"Text-based Methods We use the textual content of the tweet alone to build models for predicting the text-image relationship.",
"We expect that certain textual cues will be specific to relationships even without considering the image content.",
"For example, tweets ending in an ellipsis or short comments will likely be predictive of the text not being represented in the image.",
"Surface Features.",
"We first use a range of surface features which capture more of the shallow stylistic content of the tweet.",
"We extract number of tokens, uppercase tokens, exclamations, questions, ellipsis, hashtags, @ mentions, quotes and URLs from the tweet and use them as features in a logistic regression classifier.",
"Bag of Words.",
"The most common approach for building a text-based model is using bag-ofwords features.",
"Here, we extract unigram and bigram features and use them in a logistic regression classifier with elastic net regularization (Zou and Hastie, 2005) .",
"LSTM.",
"Finally, based on recent results in text classification, we also experiment with a neural network approach which uses a Long-Short Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) network.",
"The LSTM network processes the tweet sequentially, where each word is represented by its embedding (E = 200) followed by a dense hidden layer (D = 64) and by a a ReLU activation function and dropout (0.4) The model is trained by minimizing cross entropy using the Adam optimizer (Kingma and Ba, 2014).",
"The network uses in-domain Twitter GloVe embeddings pre-trained on 2 billion tweets (Pennington et al., 2014) .",
"Image-based Methods We use the content of the tweet image alone to build models for predicting the text-image relationship.",
"Similar to text, we expect that certain image content will be predictive of text-image relationships even without considering the text content.",
"For example, images of people may be more likely to have in the text the names of those persons.",
"To analyze image content, we make use of large pre-trained neural networks for the task of object recognition on the ImageNet data set.",
"ImageNet (Deng et al., 2009 ) is a visual database developed for object recognition research and consists of 1000 object types.",
"In particular, we use the popular pre-trained InceptionNet model (Szegedy et al., 2015) , which achieved the best performance at the ImageNet Large Scale Visual Recognition Challenge 2014 to build the following two imagebased models.",
"ImageNet Classes.",
"First, we represent each image in a tweet with the probability distribution over the 1,000 ImageNet classes obtained from Inception-Net.",
"Then, we pass those features to a logistic regression classifier which is trained on our task.",
"In this setup, the network parameters remain fixed, while only the ImageNet class weights are learned in the logistic regression classifier.",
"Tuned InceptionNet.",
"Additionally, we tailored the InceptionNet network to directly predict our tasks by using the multinomial logistic loss with softmax as the final layer for our task to replace the 1,000 ImageNet classes.",
"Then, we loaded the pretrained network from (Szegedy et al., 2015) and fine-tuned the final fully-connected layer with the modified loss layers.",
"We perform this in order to directly predict our task, while also overcoming the necessity of re-extracting the entire model weights from our restricted set of images.",
"The two approaches to classification using image content based on pre-trained model on Im-ageNet have been used successfully in past research (Cinar et al., 2015) .",
"Joint Text-Image Methods Finally, we combine the textual and image information in a single model to classify the text-image relationship type, as we expect both types of content and their interaction to be useful to the task.",
"Ensemble.",
"A simple method for combining the information from both modalities is to build an ensemble classifier.",
"This is done with a logistic regression model with two features: the Bag of Words text model's predicted class probability and the Tuned InceptionNet model's predicted class probability.",
"The parameters of the model are tuned by cross validation on the training data and similar splits as the individual models.",
"LSTM + InceptionNet.",
"We also build a joint approach by concatenating the features from the final layers of our LSTM and InceptionNet models and passing them through a fully-connected (FC) feed forward neural network with one hidden layer (64 nodes).",
"The final output is our text-image relationship type.",
"We use the Adam optimizer to fine tune this network.",
"The LSTM model has the same parameters as in the text-only approach, while the InceptionNet model is initialized with the pre-trained model on the ImageNet data set.",
"Predicting Text-Image Relationship We split our data into a 80% train (3,576 tweets) and 20% test (895 tweets) stratified sample for all of our experiments.",
"Parameters were tuned using 10-fold cross-validation with the training set, and results are reported on the test set.",
"Table 1 presents the weighted F1-scores for the text task, the image task and the image+text task with all the methods described in Section 5.",
"The weighted F1 score is the weighted average of the class-level F1 scores, where the weight is the number of items in each class.",
"The majority baseline always predicts the most frequent class in each task, namely: Image does not add for the image task, Text is not represented for the text task and Image does not add & Text is not represented for the Image + Text task.",
"The models using user demographics and tweet metadata show minor improvements over the majority class baseline for both tasks.",
"When the two tasks are combined, both feature types offer only a slight increase over the baseline.",
"This shows that user factors mildly impact the frequency with which relationship types are used, which will be explored further in the analysis section.",
"The models that use tweet text as features show consistent improvements over the baseline for all three tasks.",
"The two models that use the tweet's topical content (Bag of Words and LSTM) obtain higher predictive performance over the surface features.",
"Both content based models obtain relatively similar performance, with the LSTM performing better on the image task.",
"The models which use information extracted from the image alone also consistently outperform the baseline on all three tasks.",
"Re-tuning the neural network performs substantially better than building a model directly from the ImageNet classes on the image task and narrowly outperforms the other method on the text task.",
"This is somewhat expected, as the retuning is performed on this domain specific task.",
"When comparing text and image based models across tasks, we observe that using image features obtains substantially better performance on the image task, while the text models obtain bet-ter performance on the text task.",
"This is somewhat natural, as the focus of each annotation task is on one modality and methods relying on content from that modality are more predictive alone as to what ultimately represents the text-image relationship type.",
"Our naive ensemble approach does not yield substantially better results than the best performing methods using a single modality.",
"However, by jointly modelling both modalities, we are able to obtain improvements -especially on the image task.",
"This shows that both types of information and their interaction are important to this task.",
"Methods that exploit more heavily the interaction and semantic similarity between the text and the image are left for future work.",
"We also observe that the predictive methods we described are better at classifying the image task.",
"The analysis section below will allow us to uncover more about what type of content characterizes each relationship type.",
"Analysis In this section, we aim to gain a better understanding of the type of content specific of the four textimage relationship types and about user type preferences in their usage.",
"User Analysis Socio-demographic traits of the authors of posts are known to be correlated with several social media behaviors including text (Rao et al., 2010; Pennacchiotti and Popescu, 2011; Schwartz et al., 2013; Volkova et al., 2014; Lampos et al., 2014; Preoţiuc-Pietro et al., 2015a ,b, 2016 and images (Alowibdi et al., 2013; You et al., 2014; Farseev et al., 2015; Skowron et al., 2016; Liu et al., 2016; Guntuku et al., 2017; Samani et al., 2018; Guntuku et al., 2019) .",
"We hypothesize that socio-demographic traits also play a role in the types of text-image relationships employed on Twitter.",
"To measure this, we use partial Pearson correlation where the dependent variables are one of four socio-demographic traits described in Section 4.2.",
"The independent variables indicate the average times with which the user employed a certain relationship type.",
"We code this using six different variables: two representing the two broader tasks -the percentage of tweets where image adds information and the percentage of tweets where the text is represented in the image -and four encoding each combination between the two tasks.",
"In addition, for all analyses we consider gender and age as basic human traits and control for data skew by introducing both variables as controls in partial correlation, as done in prior work (Schwartz et al., 2013; Holgate et al., 2018) .",
"When studying age and gender, we only use the other trait as the control.",
"Because we are running several statistical tests at once (24) without predefined hypotheses, we use Bonferroni correction to counteract the problem of multiple comparisons.",
"The results are presented in Table 2 .",
"We observe that age is the only user demographic trait that is significantly correlated to text-image relationship preference after controlling for multiple comparisons and other demographic traits.",
"The text-image relationship where the text is represented in the image, at least partially, is positively correlated with age (r = 0.117).",
"Further analyzing the four individual text-image relationship types reveals that older users especially prefer tweets where there is a semantic overlap between the concepts present in the text and the image, but the image contributes with additional information to the meaning of the tweet.",
"This is arguably the most conventional usage of images, where they illustrate the text and provide more details than the text could.",
"Younger users prefer most tweets where the image adds information to the meaning of the tweet, but this has no semantic overlap with the text.",
"These are usually tweets where the text represents merely a comment or a feeling expressed with the image providing the context.",
"This represents a more image-centric approach to the meaning of the tweet that is specific to younger users.",
"These correlations are controlled for gender.",
"Education was also correlated with images where the text was represented in the image (r = 0.076, p < .01, Bonferroni corrected), but this correlation did not meet the significance criteria when controlled for age to which education is moderately correlated (r = 0.302).",
"This demonstrates the importance of controlling for such factors in this type of analysis.",
"No effects were found with respect to gender or income.",
"Table 2 : Pearson correlation between user demographic traits and usage of the different text-image relationship types.",
"All correlations in bold are significant at p < .01, two-tailed t-test, Bonferroni corrected for multiple comparisons.",
"Results for gender are controlled for age and vice versa.",
"Results for education and income are controlled for age and gender.",
"Tweet Metadata Analysis We adapt a similar approach to uncover potential relationships between the text-image relationship expressed in the tweet and tweet metadata features described in Section 5.2.",
"However, after controlling for multiple comparisons, we are left with no significant correlations at p < 0.01 level.",
"Hence, we refrain from presenting and discussing any results using this feature group as significant.",
"Text Analysis Finally, we aim to identify the text and image features that characterize the four types of text-image relationship.",
"We use univariate Pearson correlation where the independent variable is each feature's normalized value in a tweet and the dependent variables are two binary indicators for the text and image tasks respectively.",
"When performed using text features, this technique was coined Differential Language Analysis (Schwartz et al., 2013 (Schwartz et al., , 2017 .",
"The results when using unigrams as features are presented in Figure 3 , 4 and 5.",
"Results for the image task (Figure 3) show that the image adds to the meaning of the tweet if words such as this, it, why, needs or want are used.",
"These words can appear in texts with the role of referencing or pointing to an entity which is only present in the image.",
"Conversely, the image does not add to the meaning of the tweet when words indicative of objects that are also described in the image are present (cat, baby, eyes or face), thus resulting in the image not adding to the meaning of the tweet.",
"A special case are tweets with birthday wishes, where a person is mentioned in text and also displayed in an image.",
"Finally, the tbt keyword and hashtag is a popular social media trend where users post nostalgic pictures of their past accompanied by their textual description.",
"The comparison between the two outcomes of the text task is presented in Figure 4 .",
"When the text and image semantically overlap, we observe words indicative of actions (i've), possessions (your) or qualitative statements (congrats, loved, excited, tried), usually about objects or persons also present in the image.",
"We also observe a few nouns (cats, hillary) indicating frequent content that is also depicted in images (NB: the tweets were collected in 2016 when the U.S. presiden-tial elections took place).",
"Analyzing this outcome jointly with the text task, we uncover a prominent theme consisting of words describing first person actions (congrats, thank, i've, saw, tell) present when the image provides facets not covered by text (Figure 5d ).",
"Several keywords from text (cat, game, winter) show types of content which are present in both image and text, but the image is merely an illustrating these concepts without adding additional information (Figure 5a) .",
"In contrast, the text is not represented in the image when it contains words specific of comments (when, lmao), questions (do, was), references (this) or ellipsis ('...'), all often referencing the content of the image as identified through data inspection.",
"References to self, objects and personal states (i, me) and feelings (miss) are also expressed in text about items or things not appearing the image from the same tweet.",
"Further exploring this result though the image task outcome, we see that the latter category of feelings about persons of objects ( Figure 5a ) -miss, happy, lit, like) are specific of when the image does not add additional information.",
"Through manual inspection of these images, they often display a meme (as in Figure 1d ) or unrelated expressions to the text's content.",
"The image adds information when the text is not represented (Figure 5c ) if the latter includes personal feelings, (me, i, i'm, want), comments (lol, lmao) and references (this, it), usually related to the image content as identified through an analysis of the data.",
"Conclusions We defined and analyzed quantitatively and qualitatively the semantic relationships between the text and the image of the same tweet using a novel annotated data set.",
"The frequency of use is influenced by the age of the poster, with younger users employing images with a more prominent role in the tweet, rather than just being redundant to the text or as a means of illustrating it.",
"We studied the correlation between the content in the text and relation with the image, highlighting a differentiation between relationship types, even if only using the text of the tweet alone.",
"We developed models that use both text and image features to classify the text-image relationship, with especially high performance (F1 = 0.81) in identifying if the image is redundant, which is immediately useful for downstream applications that maximize screen es-tate for users.",
"Future work will look deeper into using the similarity between the content of the text and image (Leong and Mihalcea, 2011), as the text task results showed room for improvements.",
"We envision that our data, task and classifiers will be useful as a preprocessing step in collecting data for training large scale models for image captioning (Feng and Lapata, 2010) or tagging (Mahajan et al., 2018) or for improving recommendations (Chen et al., 2016) by filtering out tweets where the text and image have no semantic overlap or can enable new tasks such as identifying tweets that contain creative descriptions for images."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"4.2",
"4.3",
"5",
"5.1",
"5.2",
"5.3",
"5.4",
"5.5",
"6",
"7",
"7.1",
"7.2",
"7.3",
"8"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Categorizing Text-Image Relationships",
"Data Set",
"Data Sampling",
"Demographic Variables",
"Annotation",
"Methods",
"User Demographics",
"Tweet Metadata",
"Text-based Methods",
"Image-based Methods",
"Joint Text-Image Methods",
"Predicting Text-Image Relationship",
"Analysis",
"User Analysis",
"Tweet Metadata Analysis",
"Text Analysis",
"Conclusions"
]
}
|
GEM-SciDuet-train-51#paper-1089#slide-7
|
Analysis Text Task
|
2019 Bloomberg Finance L.P. All rights reserved.
|
[] |
GEM-SciDuet-train-51#paper-1089#slide-8
|
1089
|
Categorizing and Inferring the Relationship between the Text and Image of Twitter Posts
|
Text in social media posts is frequently accompanied by images in order to provide content, supply context, or to express feelings. This paper studies how the meaning of the entire tweet is composed through the relationship between its textual content and its image. We build and release a data set of image tweets annotated with four classes which express whether the text or the image provides additional information to the other modality. We show that by combining the text and image information, we can build a machine learning approach that accurately distinguishes between the relationship types. Further, we derive insights into how these relationships are materialized through text and image content analysis and how they are impacted by user demographic traits. These methods can be used in several downstream applications including pre-training image tagging models, collecting distantly supervised data for image captioning, and can be directly used in end-user applications to optimize screen estate.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200
],
"paper_content_text": [
"Introduction Social media sites have traditionally been centered around publishing textual content.",
"Recently, posting images on social media has become a very popular way of expressing content and feelings especially due to the wide availability of mobile devices and connectivity.",
"Images are currently present in a significant fraction of tweets and tweets with images get double the engagement of those without (Buffer, 2016) .",
"Thus, in addition to text, images have become key components of tweets.",
"However, little is known about how textual content is related to the images with which they appear.",
"For example, concepts or feelings mentioned in text could be illustrated or strengthened by images, text can point to the content of an image or can just provide commentary on the image content.",
"Formalizing and understanding the relationship between the two modalities -text and images -is useful in several areas: a) for NLP and computer vision research, where image and text data from tweets are used to developing data sets and methods for image captioning (Mitchell et al., 2012) or object recognition (Mahajan et al., 2018) ; b) for social scientists and psychologists trying to understand social media use; c) in browsers or apps where images that may not contain additional content in addition to the text would be replaced by a placeholder and displayed if the end-user desires to in order to op-timize screen space (see Figure 2 ).",
"Figure 1 illustrates four different ways in which the text and image of the same tweet can be related: • Figures 1(a,b) show how the image can add to the semantics of the tweet, by either providing more information than the text (Figure 1a ) or by providing the context for understanding the text (Figure 1b ); • In Figures 1(c,d) , the image only illustrates what is expressed through text, without providing any additional information.",
"Hence, in both of these cases, the text alone is sufficient to understanding the tweet's key message; • Figures 1(a,c) show examples of tweets where there is a semantic overlap between the content of the text and image: bike and sign in Figure 1a and tacos in Figure 1c ; • In Figures 1(b,d) , the textual content is not represented in the image, with the text being either a comment on the image's content (Figure 1b) or the image illustrating a feeling related to the text's content.",
"In this paper, we present a comprehensive analysis that focuses on the types of relationships between the text and image in a tweet.",
"Our contributions include: • Defining the types of relationships between the text and the image of a social media post; • Building a data set of tweets annotated with text -image relationship type; 1 • Machine learning methods that use both text and image content to predict the relationship between the two modalities; • An analysis into the author's demographic traits that are related to usage preference of textimage relationship types; • An analysis of the textual features which characterize each relationship type.",
"Related Work Task.",
"The relationship between a text and its associated image was researched in a few prior studies.",
"For general web pages, Marsh and Domas White (2003) propose a taxonomy of 49 relationship grouped in three major categories based on how similar is the image to the text ranging from little relation to going beyond the text, which forms the basis of one of our relationship dimen-sions.",
"Martinec and Salway (2005) aim to categorize text-image relationships in scientific articles from two perspectives: the relative importance of one modality compared to the other and the logico-semantic overlap.",
"Alikhani and Stone (2018) argue that understanding multimodal textimage presentation requires studying the coherence relations that organize the content.",
"Even when a single relationship is used, such as captioning, it can be expressed in multiple forms such as telic, atelic or stative .",
"Wang et al.",
"(2014) use the intuition that text and images from microposts can be associated or not or depend on one another and use this intuition in a topic model that learns topics and image tags jointly.",
"Jas and Parikh (2015) study the concept of image specificity through how similar to each other are multiple descriptions of that image.",
"However, none of these studies propose any predictive methods for text-image relationship types.",
"annotate and train models on a recipe data set (Yagcioglu et al., 2018) for the relationships between instructional text and images around the following dimensions: temporal, logical and incidental detail.",
"Chen et al.",
"(2013) study text-image relationships using social media data focusing on the distinction between images that are overall visually relevant or non-relevant to the textual content.",
"They build models using the text and image content that predict the relationship type (Chen et al., 2015) .",
"We build on this research and define an annotation scheme that focuses on each of the two modalities separately and look at both their semantic overlap and contribution to the meaning of the whole tweet.",
"Applications.",
"Several applications require to be able to automatically predict the semantic textimage relationship in the data.",
"Models for automatically generating image descriptions (Feng and Lapata, 2010; Ordonez et al., 2011; Mitchell et al., 2012; Vinyals et al., 2015; Lu et al., 2017) or predicting tags (Mahajan et al., 2018) are built using large training data sets of noisy imagetext pairs from sources such as tweets.",
"Multimodal named entity disambiguation leverages visual context vectors from social media images to aid named entity disambiguation (Moon et al., 2018) .",
"Multimodal topic labeling focuses on generating candidate labels (text or images) for a given topic and ranks them by relevance (Sorodoc et al., 2017) .",
"Several resources of images paired with descriptive captions are available, which can be used to build similarity metrics and joint semantic spaces for text and images (Young et al., 2014) .",
"However, all these assume that the text an image represent similar concepts which, as we show in this paper, is not true in Twitter.",
"Being able to classify this relationship can be useful for all above-mentioned applications.",
"Categorizing Text-Image Relationships We define the types of semantic relationships that can exist between the content of the text and the image by splitting them into two tasks for simplicity.",
"The first task is centered on the role of the text to the tweet's semantics, while the second focuses on the image's role.",
"The first task -referred to as the text task in the rest of the paper -focuses on identifying if there is semantic overlap between the context of the text and the image.",
"This task is the defined using the following guidelines: 1.",
"Some or all of the content words in the text are represented in the image (Text is represented) 2.",
"None of the content words in the text are represented in the image (Text is not represented): • None of the content words are represented in the image, or • The text is only a comment about the content of the image, or • The text expresses a feeling or emotion about the content of the image, or • The text only makes a reference to something shown in the image, or • The text is unrelated to the image Examples for this task can be seen in Figure 1 by comparing Figures 1(a ,c) (Text is represented) with Figures 1(b,d) (Text is not represented).",
"The second task -referred to as the image task in the rest of the paper -focuses on the role of the image to the semantics of the tweet and aims to identify if the image's content contributes with additional information to the meaning of the tweet beyond the text, as judged by an independent third party.",
"This task is defined and annotated using the following guidelines: 1.",
"Image has additional content that represents the meaning of the text and the image (Image adds): • Image contains other text that adds additional meaning to the text, or • Image depicts something that adds information to the text or • Image contains other entities that are referenced by the text.",
"2.",
"Image does not add additional content that represents the meaning of text+image (Image does not add).",
"Examples for the image task can be seen in Combining the labels of the two binary tasks described above gives rise to four types of text-image relationships (Image+Text Task).",
"All of the four relationship types are exemplified in Figure 1 .",
"Data Set To study the relationship between the text and image in the same social media post, we define a new annotation schema and collect a new annotated corpus.",
"To the best of our knowledge, no such corpus exists in prior research.",
"Data Sampling We use Twitter as the source of our data, as this source contains a high level of expression of thoughts, opinions and emotions (Java et al., 2007; Kouloumpis et al., 2011) .",
"It represents a platform for observing written interactions and conversations between users (Ritter et al., 2010) .",
"The tweets were deliberately randomly sampled tweets from a list of users for which several of their socio-demographic traits are known, introduced in past research .",
"This will enable us to explore if the frequency of posting tweets with a certain text-image relationship is different across socio-demographic groups.",
"We downloaded as many tweets as we could from these users using the Twitter API (up to 3,200 tweets/user per API limits).",
"We decided to annotate only tweets from within the same time range (2016) in order to reduce the influence of potential platform usage changes with time.",
"We filter out tweets that are not written in English using the langid.py tool (Lui and Baldwin, 2012) .",
"In total, 2,263 users (out of the initial 4,132) have posted tweets with at least one image in the year 2016 and were included in our analysis.",
"Our final data set contains 4,471 tweets.",
"Demographic Variables The Twitter users from the data set we sampled have self-reported the following demographic variables through a survey: gender, age, education level and annual income.",
"All users solicited for data collection were from the United States in order to limit cultural variation.",
"• Gender was considered binary 2 and coded with Female -1 and Male -0.",
"All other variables are treated as ordinal variables.",
"• Age is represented as a integer value in the 13-90 year old interval.",
"• Education level is coded as an ordinal variable with 6 values representing the highest degree obtained, with the lowest being 'No high school degree' (coded as 1) and the highest being 'Advanced Degree (e.g., PhD)' (coded as 6).",
"• Income level is coded as on ordinal variable with 8 values representing the annual income of the person, ranging from '< $20,000' to '> $200,000').",
"For a full description of the user recruitment and quality control processes, we refer the interested reader to .",
"Annotation We have collected annotations for text-image pairs from 4,471 tweets using the Figure Eight platform (formerly CrowdFlower).",
"We annotate all tweets containing both text and image using two independent annotation tasks in order to simplify the task and not to prime annotators use the outcome of one task as a indicator for the outcome of the other.",
"For quality control, 10% of annotations were test questions annotated by the authors.",
"Annotators had to maintain a minimum accuracy on test questions of 85% for the image task and 75% for the text task for their annotations to be valid.",
"Inter-annotator agreement is measured using Krippendorf's Alpha.",
"The overall Krippendorfs Alpha is 0.71 for the image task, which is in the upper part of the substantial agreement band (Artstein and Poesio, 2008) .",
"We collect 3 judgments and use majority vote to obtain the final label to further remove noise.",
"For the text task, we collected and aggregated 5 judgments as the Krippendorf's Alpha is 0.46, which is considered moderate agreement (Artstein and Poesio, 2008) .",
"The latter task was more difficult due to requiring specific world knowledge (e.g.",
"a singer mentioned in a text also present in an image) or contained information encoded in hashtags or usernames which the annotators sometimes overlooked.",
"The aggregated judgments for each task were combined to obtain the four class labels.",
"Methods Our goal is to develop methods that are capable of automatically classifying the text-image relationship in tweets.",
"We experiment with several methods which use information of four different types: demographics of the user posting the tweet, metadata from the tweet, the text of the tweet or the image of the tweet; plus a combination of them.",
"The methods we use are described in this section.",
"User Demographics User demographic features are the survey-based demographic information we have available for all users that posted the annotated tweets.",
"The use of these traits is based on the intuition that different demographic groups have different posting preferences (Pennacchiotti and Popescu, 2011; .",
"We use this approach for comparison reasons only, as in practical use cases we would normally not have access to the author's demographic traits.",
"We code the gender, age, education level and income level of the user as features and use them in a logistic regression classifier to classify the textimage relationship.",
"Tweet Metadata We experiment with using the tweet metadata as features.",
"These code if a tweet is a reply, tweet, like or neither.",
"We also add as features the tweet like count, the number of followers, friends and posts of the post's author and include them all in a logistic regression classifier.",
"These features are all available at tweet publishing time and we build a model using them to establish a more solid baseline for content based approaches.",
"Text-based Methods We use the textual content of the tweet alone to build models for predicting the text-image relationship.",
"We expect that certain textual cues will be specific to relationships even without considering the image content.",
"For example, tweets ending in an ellipsis or short comments will likely be predictive of the text not being represented in the image.",
"Surface Features.",
"We first use a range of surface features which capture more of the shallow stylistic content of the tweet.",
"We extract number of tokens, uppercase tokens, exclamations, questions, ellipsis, hashtags, @ mentions, quotes and URLs from the tweet and use them as features in a logistic regression classifier.",
"Bag of Words.",
"The most common approach for building a text-based model is using bag-ofwords features.",
"Here, we extract unigram and bigram features and use them in a logistic regression classifier with elastic net regularization (Zou and Hastie, 2005) .",
"LSTM.",
"Finally, based on recent results in text classification, we also experiment with a neural network approach which uses a Long-Short Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) network.",
"The LSTM network processes the tweet sequentially, where each word is represented by its embedding (E = 200) followed by a dense hidden layer (D = 64) and by a a ReLU activation function and dropout (0.4) The model is trained by minimizing cross entropy using the Adam optimizer (Kingma and Ba, 2014).",
"The network uses in-domain Twitter GloVe embeddings pre-trained on 2 billion tweets (Pennington et al., 2014) .",
"Image-based Methods We use the content of the tweet image alone to build models for predicting the text-image relationship.",
"Similar to text, we expect that certain image content will be predictive of text-image relationships even without considering the text content.",
"For example, images of people may be more likely to have in the text the names of those persons.",
"To analyze image content, we make use of large pre-trained neural networks for the task of object recognition on the ImageNet data set.",
"ImageNet (Deng et al., 2009 ) is a visual database developed for object recognition research and consists of 1000 object types.",
"In particular, we use the popular pre-trained InceptionNet model (Szegedy et al., 2015) , which achieved the best performance at the ImageNet Large Scale Visual Recognition Challenge 2014 to build the following two imagebased models.",
"ImageNet Classes.",
"First, we represent each image in a tweet with the probability distribution over the 1,000 ImageNet classes obtained from Inception-Net.",
"Then, we pass those features to a logistic regression classifier which is trained on our task.",
"In this setup, the network parameters remain fixed, while only the ImageNet class weights are learned in the logistic regression classifier.",
"Tuned InceptionNet.",
"Additionally, we tailored the InceptionNet network to directly predict our tasks by using the multinomial logistic loss with softmax as the final layer for our task to replace the 1,000 ImageNet classes.",
"Then, we loaded the pretrained network from (Szegedy et al., 2015) and fine-tuned the final fully-connected layer with the modified loss layers.",
"We perform this in order to directly predict our task, while also overcoming the necessity of re-extracting the entire model weights from our restricted set of images.",
"The two approaches to classification using image content based on pre-trained model on Im-ageNet have been used successfully in past research (Cinar et al., 2015) .",
"Joint Text-Image Methods Finally, we combine the textual and image information in a single model to classify the text-image relationship type, as we expect both types of content and their interaction to be useful to the task.",
"Ensemble.",
"A simple method for combining the information from both modalities is to build an ensemble classifier.",
"This is done with a logistic regression model with two features: the Bag of Words text model's predicted class probability and the Tuned InceptionNet model's predicted class probability.",
"The parameters of the model are tuned by cross validation on the training data and similar splits as the individual models.",
"LSTM + InceptionNet.",
"We also build a joint approach by concatenating the features from the final layers of our LSTM and InceptionNet models and passing them through a fully-connected (FC) feed forward neural network with one hidden layer (64 nodes).",
"The final output is our text-image relationship type.",
"We use the Adam optimizer to fine tune this network.",
"The LSTM model has the same parameters as in the text-only approach, while the InceptionNet model is initialized with the pre-trained model on the ImageNet data set.",
"Predicting Text-Image Relationship We split our data into a 80% train (3,576 tweets) and 20% test (895 tweets) stratified sample for all of our experiments.",
"Parameters were tuned using 10-fold cross-validation with the training set, and results are reported on the test set.",
"Table 1 presents the weighted F1-scores for the text task, the image task and the image+text task with all the methods described in Section 5.",
"The weighted F1 score is the weighted average of the class-level F1 scores, where the weight is the number of items in each class.",
"The majority baseline always predicts the most frequent class in each task, namely: Image does not add for the image task, Text is not represented for the text task and Image does not add & Text is not represented for the Image + Text task.",
"The models using user demographics and tweet metadata show minor improvements over the majority class baseline for both tasks.",
"When the two tasks are combined, both feature types offer only a slight increase over the baseline.",
"This shows that user factors mildly impact the frequency with which relationship types are used, which will be explored further in the analysis section.",
"The models that use tweet text as features show consistent improvements over the baseline for all three tasks.",
"The two models that use the tweet's topical content (Bag of Words and LSTM) obtain higher predictive performance over the surface features.",
"Both content based models obtain relatively similar performance, with the LSTM performing better on the image task.",
"The models which use information extracted from the image alone also consistently outperform the baseline on all three tasks.",
"Re-tuning the neural network performs substantially better than building a model directly from the ImageNet classes on the image task and narrowly outperforms the other method on the text task.",
"This is somewhat expected, as the retuning is performed on this domain specific task.",
"When comparing text and image based models across tasks, we observe that using image features obtains substantially better performance on the image task, while the text models obtain bet-ter performance on the text task.",
"This is somewhat natural, as the focus of each annotation task is on one modality and methods relying on content from that modality are more predictive alone as to what ultimately represents the text-image relationship type.",
"Our naive ensemble approach does not yield substantially better results than the best performing methods using a single modality.",
"However, by jointly modelling both modalities, we are able to obtain improvements -especially on the image task.",
"This shows that both types of information and their interaction are important to this task.",
"Methods that exploit more heavily the interaction and semantic similarity between the text and the image are left for future work.",
"We also observe that the predictive methods we described are better at classifying the image task.",
"The analysis section below will allow us to uncover more about what type of content characterizes each relationship type.",
"Analysis In this section, we aim to gain a better understanding of the type of content specific of the four textimage relationship types and about user type preferences in their usage.",
"User Analysis Socio-demographic traits of the authors of posts are known to be correlated with several social media behaviors including text (Rao et al., 2010; Pennacchiotti and Popescu, 2011; Schwartz et al., 2013; Volkova et al., 2014; Lampos et al., 2014; Preoţiuc-Pietro et al., 2015a ,b, 2016 and images (Alowibdi et al., 2013; You et al., 2014; Farseev et al., 2015; Skowron et al., 2016; Liu et al., 2016; Guntuku et al., 2017; Samani et al., 2018; Guntuku et al., 2019) .",
"We hypothesize that socio-demographic traits also play a role in the types of text-image relationships employed on Twitter.",
"To measure this, we use partial Pearson correlation where the dependent variables are one of four socio-demographic traits described in Section 4.2.",
"The independent variables indicate the average times with which the user employed a certain relationship type.",
"We code this using six different variables: two representing the two broader tasks -the percentage of tweets where image adds information and the percentage of tweets where the text is represented in the image -and four encoding each combination between the two tasks.",
"In addition, for all analyses we consider gender and age as basic human traits and control for data skew by introducing both variables as controls in partial correlation, as done in prior work (Schwartz et al., 2013; Holgate et al., 2018) .",
"When studying age and gender, we only use the other trait as the control.",
"Because we are running several statistical tests at once (24) without predefined hypotheses, we use Bonferroni correction to counteract the problem of multiple comparisons.",
"The results are presented in Table 2 .",
"We observe that age is the only user demographic trait that is significantly correlated to text-image relationship preference after controlling for multiple comparisons and other demographic traits.",
"The text-image relationship where the text is represented in the image, at least partially, is positively correlated with age (r = 0.117).",
"Further analyzing the four individual text-image relationship types reveals that older users especially prefer tweets where there is a semantic overlap between the concepts present in the text and the image, but the image contributes with additional information to the meaning of the tweet.",
"This is arguably the most conventional usage of images, where they illustrate the text and provide more details than the text could.",
"Younger users prefer most tweets where the image adds information to the meaning of the tweet, but this has no semantic overlap with the text.",
"These are usually tweets where the text represents merely a comment or a feeling expressed with the image providing the context.",
"This represents a more image-centric approach to the meaning of the tweet that is specific to younger users.",
"These correlations are controlled for gender.",
"Education was also correlated with images where the text was represented in the image (r = 0.076, p < .01, Bonferroni corrected), but this correlation did not meet the significance criteria when controlled for age to which education is moderately correlated (r = 0.302).",
"This demonstrates the importance of controlling for such factors in this type of analysis.",
"No effects were found with respect to gender or income.",
"Table 2 : Pearson correlation between user demographic traits and usage of the different text-image relationship types.",
"All correlations in bold are significant at p < .01, two-tailed t-test, Bonferroni corrected for multiple comparisons.",
"Results for gender are controlled for age and vice versa.",
"Results for education and income are controlled for age and gender.",
"Tweet Metadata Analysis We adapt a similar approach to uncover potential relationships between the text-image relationship expressed in the tweet and tweet metadata features described in Section 5.2.",
"However, after controlling for multiple comparisons, we are left with no significant correlations at p < 0.01 level.",
"Hence, we refrain from presenting and discussing any results using this feature group as significant.",
"Text Analysis Finally, we aim to identify the text and image features that characterize the four types of text-image relationship.",
"We use univariate Pearson correlation where the independent variable is each feature's normalized value in a tweet and the dependent variables are two binary indicators for the text and image tasks respectively.",
"When performed using text features, this technique was coined Differential Language Analysis (Schwartz et al., 2013 (Schwartz et al., , 2017 .",
"The results when using unigrams as features are presented in Figure 3 , 4 and 5.",
"Results for the image task (Figure 3) show that the image adds to the meaning of the tweet if words such as this, it, why, needs or want are used.",
"These words can appear in texts with the role of referencing or pointing to an entity which is only present in the image.",
"Conversely, the image does not add to the meaning of the tweet when words indicative of objects that are also described in the image are present (cat, baby, eyes or face), thus resulting in the image not adding to the meaning of the tweet.",
"A special case are tweets with birthday wishes, where a person is mentioned in text and also displayed in an image.",
"Finally, the tbt keyword and hashtag is a popular social media trend where users post nostalgic pictures of their past accompanied by their textual description.",
"The comparison between the two outcomes of the text task is presented in Figure 4 .",
"When the text and image semantically overlap, we observe words indicative of actions (i've), possessions (your) or qualitative statements (congrats, loved, excited, tried), usually about objects or persons also present in the image.",
"We also observe a few nouns (cats, hillary) indicating frequent content that is also depicted in images (NB: the tweets were collected in 2016 when the U.S. presiden-tial elections took place).",
"Analyzing this outcome jointly with the text task, we uncover a prominent theme consisting of words describing first person actions (congrats, thank, i've, saw, tell) present when the image provides facets not covered by text (Figure 5d ).",
"Several keywords from text (cat, game, winter) show types of content which are present in both image and text, but the image is merely an illustrating these concepts without adding additional information (Figure 5a) .",
"In contrast, the text is not represented in the image when it contains words specific of comments (when, lmao), questions (do, was), references (this) or ellipsis ('...'), all often referencing the content of the image as identified through data inspection.",
"References to self, objects and personal states (i, me) and feelings (miss) are also expressed in text about items or things not appearing the image from the same tweet.",
"Further exploring this result though the image task outcome, we see that the latter category of feelings about persons of objects ( Figure 5a ) -miss, happy, lit, like) are specific of when the image does not add additional information.",
"Through manual inspection of these images, they often display a meme (as in Figure 1d ) or unrelated expressions to the text's content.",
"The image adds information when the text is not represented (Figure 5c ) if the latter includes personal feelings, (me, i, i'm, want), comments (lol, lmao) and references (this, it), usually related to the image content as identified through an analysis of the data.",
"Conclusions We defined and analyzed quantitatively and qualitatively the semantic relationships between the text and the image of the same tweet using a novel annotated data set.",
"The frequency of use is influenced by the age of the poster, with younger users employing images with a more prominent role in the tweet, rather than just being redundant to the text or as a means of illustrating it.",
"We studied the correlation between the content in the text and relation with the image, highlighting a differentiation between relationship types, even if only using the text of the tweet alone.",
"We developed models that use both text and image features to classify the text-image relationship, with especially high performance (F1 = 0.81) in identifying if the image is redundant, which is immediately useful for downstream applications that maximize screen es-tate for users.",
"Future work will look deeper into using the similarity between the content of the text and image (Leong and Mihalcea, 2011), as the text task results showed room for improvements.",
"We envision that our data, task and classifiers will be useful as a preprocessing step in collecting data for training large scale models for image captioning (Feng and Lapata, 2010) or tagging (Mahajan et al., 2018) or for improving recommendations (Chen et al., 2016) by filtering out tweets where the text and image have no semantic overlap or can enable new tasks such as identifying tweets that contain creative descriptions for images."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"4.2",
"4.3",
"5",
"5.1",
"5.2",
"5.3",
"5.4",
"5.5",
"6",
"7",
"7.1",
"7.2",
"7.3",
"8"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Categorizing Text-Image Relationships",
"Data Set",
"Data Sampling",
"Demographic Variables",
"Annotation",
"Methods",
"User Demographics",
"Tweet Metadata",
"Text-based Methods",
"Image-based Methods",
"Joint Text-Image Methods",
"Predicting Text-Image Relationship",
"Analysis",
"User Analysis",
"Tweet Metadata Analysis",
"Text Analysis",
"Conclusions"
]
}
|
GEM-SciDuet-train-51#paper-1089#slide-8
|
Analysis Image Task
|
2019 Bloomberg Finance L.P. All rights reserved.
|
2019 Bloomberg Finance L.P. All rights reserved.
|
[] |
GEM-SciDuet-train-51#paper-1089#slide-9
|
1089
|
Categorizing and Inferring the Relationship between the Text and Image of Twitter Posts
|
Text in social media posts is frequently accompanied by images in order to provide content, supply context, or to express feelings. This paper studies how the meaning of the entire tweet is composed through the relationship between its textual content and its image. We build and release a data set of image tweets annotated with four classes which express whether the text or the image provides additional information to the other modality. We show that by combining the text and image information, we can build a machine learning approach that accurately distinguishes between the relationship types. Further, we derive insights into how these relationships are materialized through text and image content analysis and how they are impacted by user demographic traits. These methods can be used in several downstream applications including pre-training image tagging models, collecting distantly supervised data for image captioning, and can be directly used in end-user applications to optimize screen estate.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200
],
"paper_content_text": [
"Introduction Social media sites have traditionally been centered around publishing textual content.",
"Recently, posting images on social media has become a very popular way of expressing content and feelings especially due to the wide availability of mobile devices and connectivity.",
"Images are currently present in a significant fraction of tweets and tweets with images get double the engagement of those without (Buffer, 2016) .",
"Thus, in addition to text, images have become key components of tweets.",
"However, little is known about how textual content is related to the images with which they appear.",
"For example, concepts or feelings mentioned in text could be illustrated or strengthened by images, text can point to the content of an image or can just provide commentary on the image content.",
"Formalizing and understanding the relationship between the two modalities -text and images -is useful in several areas: a) for NLP and computer vision research, where image and text data from tweets are used to developing data sets and methods for image captioning (Mitchell et al., 2012) or object recognition (Mahajan et al., 2018) ; b) for social scientists and psychologists trying to understand social media use; c) in browsers or apps where images that may not contain additional content in addition to the text would be replaced by a placeholder and displayed if the end-user desires to in order to op-timize screen space (see Figure 2 ).",
"Figure 1 illustrates four different ways in which the text and image of the same tweet can be related: • Figures 1(a,b) show how the image can add to the semantics of the tweet, by either providing more information than the text (Figure 1a ) or by providing the context for understanding the text (Figure 1b ); • In Figures 1(c,d) , the image only illustrates what is expressed through text, without providing any additional information.",
"Hence, in both of these cases, the text alone is sufficient to understanding the tweet's key message; • Figures 1(a,c) show examples of tweets where there is a semantic overlap between the content of the text and image: bike and sign in Figure 1a and tacos in Figure 1c ; • In Figures 1(b,d) , the textual content is not represented in the image, with the text being either a comment on the image's content (Figure 1b) or the image illustrating a feeling related to the text's content.",
"In this paper, we present a comprehensive analysis that focuses on the types of relationships between the text and image in a tweet.",
"Our contributions include: • Defining the types of relationships between the text and the image of a social media post; • Building a data set of tweets annotated with text -image relationship type; 1 • Machine learning methods that use both text and image content to predict the relationship between the two modalities; • An analysis into the author's demographic traits that are related to usage preference of textimage relationship types; • An analysis of the textual features which characterize each relationship type.",
"Related Work Task.",
"The relationship between a text and its associated image was researched in a few prior studies.",
"For general web pages, Marsh and Domas White (2003) propose a taxonomy of 49 relationship grouped in three major categories based on how similar is the image to the text ranging from little relation to going beyond the text, which forms the basis of one of our relationship dimen-sions.",
"Martinec and Salway (2005) aim to categorize text-image relationships in scientific articles from two perspectives: the relative importance of one modality compared to the other and the logico-semantic overlap.",
"Alikhani and Stone (2018) argue that understanding multimodal textimage presentation requires studying the coherence relations that organize the content.",
"Even when a single relationship is used, such as captioning, it can be expressed in multiple forms such as telic, atelic or stative .",
"Wang et al.",
"(2014) use the intuition that text and images from microposts can be associated or not or depend on one another and use this intuition in a topic model that learns topics and image tags jointly.",
"Jas and Parikh (2015) study the concept of image specificity through how similar to each other are multiple descriptions of that image.",
"However, none of these studies propose any predictive methods for text-image relationship types.",
"annotate and train models on a recipe data set (Yagcioglu et al., 2018) for the relationships between instructional text and images around the following dimensions: temporal, logical and incidental detail.",
"Chen et al.",
"(2013) study text-image relationships using social media data focusing on the distinction between images that are overall visually relevant or non-relevant to the textual content.",
"They build models using the text and image content that predict the relationship type (Chen et al., 2015) .",
"We build on this research and define an annotation scheme that focuses on each of the two modalities separately and look at both their semantic overlap and contribution to the meaning of the whole tweet.",
"Applications.",
"Several applications require to be able to automatically predict the semantic textimage relationship in the data.",
"Models for automatically generating image descriptions (Feng and Lapata, 2010; Ordonez et al., 2011; Mitchell et al., 2012; Vinyals et al., 2015; Lu et al., 2017) or predicting tags (Mahajan et al., 2018) are built using large training data sets of noisy imagetext pairs from sources such as tweets.",
"Multimodal named entity disambiguation leverages visual context vectors from social media images to aid named entity disambiguation (Moon et al., 2018) .",
"Multimodal topic labeling focuses on generating candidate labels (text or images) for a given topic and ranks them by relevance (Sorodoc et al., 2017) .",
"Several resources of images paired with descriptive captions are available, which can be used to build similarity metrics and joint semantic spaces for text and images (Young et al., 2014) .",
"However, all these assume that the text an image represent similar concepts which, as we show in this paper, is not true in Twitter.",
"Being able to classify this relationship can be useful for all above-mentioned applications.",
"Categorizing Text-Image Relationships We define the types of semantic relationships that can exist between the content of the text and the image by splitting them into two tasks for simplicity.",
"The first task is centered on the role of the text to the tweet's semantics, while the second focuses on the image's role.",
"The first task -referred to as the text task in the rest of the paper -focuses on identifying if there is semantic overlap between the context of the text and the image.",
"This task is the defined using the following guidelines: 1.",
"Some or all of the content words in the text are represented in the image (Text is represented) 2.",
"None of the content words in the text are represented in the image (Text is not represented): • None of the content words are represented in the image, or • The text is only a comment about the content of the image, or • The text expresses a feeling or emotion about the content of the image, or • The text only makes a reference to something shown in the image, or • The text is unrelated to the image Examples for this task can be seen in Figure 1 by comparing Figures 1(a ,c) (Text is represented) with Figures 1(b,d) (Text is not represented).",
"The second task -referred to as the image task in the rest of the paper -focuses on the role of the image to the semantics of the tweet and aims to identify if the image's content contributes with additional information to the meaning of the tweet beyond the text, as judged by an independent third party.",
"This task is defined and annotated using the following guidelines: 1.",
"Image has additional content that represents the meaning of the text and the image (Image adds): • Image contains other text that adds additional meaning to the text, or • Image depicts something that adds information to the text or • Image contains other entities that are referenced by the text.",
"2.",
"Image does not add additional content that represents the meaning of text+image (Image does not add).",
"Examples for the image task can be seen in Combining the labels of the two binary tasks described above gives rise to four types of text-image relationships (Image+Text Task).",
"All of the four relationship types are exemplified in Figure 1 .",
"Data Set To study the relationship between the text and image in the same social media post, we define a new annotation schema and collect a new annotated corpus.",
"To the best of our knowledge, no such corpus exists in prior research.",
"Data Sampling We use Twitter as the source of our data, as this source contains a high level of expression of thoughts, opinions and emotions (Java et al., 2007; Kouloumpis et al., 2011) .",
"It represents a platform for observing written interactions and conversations between users (Ritter et al., 2010) .",
"The tweets were deliberately randomly sampled tweets from a list of users for which several of their socio-demographic traits are known, introduced in past research .",
"This will enable us to explore if the frequency of posting tweets with a certain text-image relationship is different across socio-demographic groups.",
"We downloaded as many tweets as we could from these users using the Twitter API (up to 3,200 tweets/user per API limits).",
"We decided to annotate only tweets from within the same time range (2016) in order to reduce the influence of potential platform usage changes with time.",
"We filter out tweets that are not written in English using the langid.py tool (Lui and Baldwin, 2012) .",
"In total, 2,263 users (out of the initial 4,132) have posted tweets with at least one image in the year 2016 and were included in our analysis.",
"Our final data set contains 4,471 tweets.",
"Demographic Variables The Twitter users from the data set we sampled have self-reported the following demographic variables through a survey: gender, age, education level and annual income.",
"All users solicited for data collection were from the United States in order to limit cultural variation.",
"• Gender was considered binary 2 and coded with Female -1 and Male -0.",
"All other variables are treated as ordinal variables.",
"• Age is represented as a integer value in the 13-90 year old interval.",
"• Education level is coded as an ordinal variable with 6 values representing the highest degree obtained, with the lowest being 'No high school degree' (coded as 1) and the highest being 'Advanced Degree (e.g., PhD)' (coded as 6).",
"• Income level is coded as on ordinal variable with 8 values representing the annual income of the person, ranging from '< $20,000' to '> $200,000').",
"For a full description of the user recruitment and quality control processes, we refer the interested reader to .",
"Annotation We have collected annotations for text-image pairs from 4,471 tweets using the Figure Eight platform (formerly CrowdFlower).",
"We annotate all tweets containing both text and image using two independent annotation tasks in order to simplify the task and not to prime annotators use the outcome of one task as a indicator for the outcome of the other.",
"For quality control, 10% of annotations were test questions annotated by the authors.",
"Annotators had to maintain a minimum accuracy on test questions of 85% for the image task and 75% for the text task for their annotations to be valid.",
"Inter-annotator agreement is measured using Krippendorf's Alpha.",
"The overall Krippendorfs Alpha is 0.71 for the image task, which is in the upper part of the substantial agreement band (Artstein and Poesio, 2008) .",
"We collect 3 judgments and use majority vote to obtain the final label to further remove noise.",
"For the text task, we collected and aggregated 5 judgments as the Krippendorf's Alpha is 0.46, which is considered moderate agreement (Artstein and Poesio, 2008) .",
"The latter task was more difficult due to requiring specific world knowledge (e.g.",
"a singer mentioned in a text also present in an image) or contained information encoded in hashtags or usernames which the annotators sometimes overlooked.",
"The aggregated judgments for each task were combined to obtain the four class labels.",
"Methods Our goal is to develop methods that are capable of automatically classifying the text-image relationship in tweets.",
"We experiment with several methods which use information of four different types: demographics of the user posting the tweet, metadata from the tweet, the text of the tweet or the image of the tweet; plus a combination of them.",
"The methods we use are described in this section.",
"User Demographics User demographic features are the survey-based demographic information we have available for all users that posted the annotated tweets.",
"The use of these traits is based on the intuition that different demographic groups have different posting preferences (Pennacchiotti and Popescu, 2011; .",
"We use this approach for comparison reasons only, as in practical use cases we would normally not have access to the author's demographic traits.",
"We code the gender, age, education level and income level of the user as features and use them in a logistic regression classifier to classify the textimage relationship.",
"Tweet Metadata We experiment with using the tweet metadata as features.",
"These code if a tweet is a reply, tweet, like or neither.",
"We also add as features the tweet like count, the number of followers, friends and posts of the post's author and include them all in a logistic regression classifier.",
"These features are all available at tweet publishing time and we build a model using them to establish a more solid baseline for content based approaches.",
"Text-based Methods We use the textual content of the tweet alone to build models for predicting the text-image relationship.",
"We expect that certain textual cues will be specific to relationships even without considering the image content.",
"For example, tweets ending in an ellipsis or short comments will likely be predictive of the text not being represented in the image.",
"Surface Features.",
"We first use a range of surface features which capture more of the shallow stylistic content of the tweet.",
"We extract number of tokens, uppercase tokens, exclamations, questions, ellipsis, hashtags, @ mentions, quotes and URLs from the tweet and use them as features in a logistic regression classifier.",
"Bag of Words.",
"The most common approach for building a text-based model is using bag-ofwords features.",
"Here, we extract unigram and bigram features and use them in a logistic regression classifier with elastic net regularization (Zou and Hastie, 2005) .",
"LSTM.",
"Finally, based on recent results in text classification, we also experiment with a neural network approach which uses a Long-Short Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) network.",
"The LSTM network processes the tweet sequentially, where each word is represented by its embedding (E = 200) followed by a dense hidden layer (D = 64) and by a a ReLU activation function and dropout (0.4) The model is trained by minimizing cross entropy using the Adam optimizer (Kingma and Ba, 2014).",
"The network uses in-domain Twitter GloVe embeddings pre-trained on 2 billion tweets (Pennington et al., 2014) .",
"Image-based Methods We use the content of the tweet image alone to build models for predicting the text-image relationship.",
"Similar to text, we expect that certain image content will be predictive of text-image relationships even without considering the text content.",
"For example, images of people may be more likely to have in the text the names of those persons.",
"To analyze image content, we make use of large pre-trained neural networks for the task of object recognition on the ImageNet data set.",
"ImageNet (Deng et al., 2009 ) is a visual database developed for object recognition research and consists of 1000 object types.",
"In particular, we use the popular pre-trained InceptionNet model (Szegedy et al., 2015) , which achieved the best performance at the ImageNet Large Scale Visual Recognition Challenge 2014 to build the following two imagebased models.",
"ImageNet Classes.",
"First, we represent each image in a tweet with the probability distribution over the 1,000 ImageNet classes obtained from Inception-Net.",
"Then, we pass those features to a logistic regression classifier which is trained on our task.",
"In this setup, the network parameters remain fixed, while only the ImageNet class weights are learned in the logistic regression classifier.",
"Tuned InceptionNet.",
"Additionally, we tailored the InceptionNet network to directly predict our tasks by using the multinomial logistic loss with softmax as the final layer for our task to replace the 1,000 ImageNet classes.",
"Then, we loaded the pretrained network from (Szegedy et al., 2015) and fine-tuned the final fully-connected layer with the modified loss layers.",
"We perform this in order to directly predict our task, while also overcoming the necessity of re-extracting the entire model weights from our restricted set of images.",
"The two approaches to classification using image content based on pre-trained model on Im-ageNet have been used successfully in past research (Cinar et al., 2015) .",
"Joint Text-Image Methods Finally, we combine the textual and image information in a single model to classify the text-image relationship type, as we expect both types of content and their interaction to be useful to the task.",
"Ensemble.",
"A simple method for combining the information from both modalities is to build an ensemble classifier.",
"This is done with a logistic regression model with two features: the Bag of Words text model's predicted class probability and the Tuned InceptionNet model's predicted class probability.",
"The parameters of the model are tuned by cross validation on the training data and similar splits as the individual models.",
"LSTM + InceptionNet.",
"We also build a joint approach by concatenating the features from the final layers of our LSTM and InceptionNet models and passing them through a fully-connected (FC) feed forward neural network with one hidden layer (64 nodes).",
"The final output is our text-image relationship type.",
"We use the Adam optimizer to fine tune this network.",
"The LSTM model has the same parameters as in the text-only approach, while the InceptionNet model is initialized with the pre-trained model on the ImageNet data set.",
"Predicting Text-Image Relationship We split our data into a 80% train (3,576 tweets) and 20% test (895 tweets) stratified sample for all of our experiments.",
"Parameters were tuned using 10-fold cross-validation with the training set, and results are reported on the test set.",
"Table 1 presents the weighted F1-scores for the text task, the image task and the image+text task with all the methods described in Section 5.",
"The weighted F1 score is the weighted average of the class-level F1 scores, where the weight is the number of items in each class.",
"The majority baseline always predicts the most frequent class in each task, namely: Image does not add for the image task, Text is not represented for the text task and Image does not add & Text is not represented for the Image + Text task.",
"The models using user demographics and tweet metadata show minor improvements over the majority class baseline for both tasks.",
"When the two tasks are combined, both feature types offer only a slight increase over the baseline.",
"This shows that user factors mildly impact the frequency with which relationship types are used, which will be explored further in the analysis section.",
"The models that use tweet text as features show consistent improvements over the baseline for all three tasks.",
"The two models that use the tweet's topical content (Bag of Words and LSTM) obtain higher predictive performance over the surface features.",
"Both content based models obtain relatively similar performance, with the LSTM performing better on the image task.",
"The models which use information extracted from the image alone also consistently outperform the baseline on all three tasks.",
"Re-tuning the neural network performs substantially better than building a model directly from the ImageNet classes on the image task and narrowly outperforms the other method on the text task.",
"This is somewhat expected, as the retuning is performed on this domain specific task.",
"When comparing text and image based models across tasks, we observe that using image features obtains substantially better performance on the image task, while the text models obtain bet-ter performance on the text task.",
"This is somewhat natural, as the focus of each annotation task is on one modality and methods relying on content from that modality are more predictive alone as to what ultimately represents the text-image relationship type.",
"Our naive ensemble approach does not yield substantially better results than the best performing methods using a single modality.",
"However, by jointly modelling both modalities, we are able to obtain improvements -especially on the image task.",
"This shows that both types of information and their interaction are important to this task.",
"Methods that exploit more heavily the interaction and semantic similarity between the text and the image are left for future work.",
"We also observe that the predictive methods we described are better at classifying the image task.",
"The analysis section below will allow us to uncover more about what type of content characterizes each relationship type.",
"Analysis In this section, we aim to gain a better understanding of the type of content specific of the four textimage relationship types and about user type preferences in their usage.",
"User Analysis Socio-demographic traits of the authors of posts are known to be correlated with several social media behaviors including text (Rao et al., 2010; Pennacchiotti and Popescu, 2011; Schwartz et al., 2013; Volkova et al., 2014; Lampos et al., 2014; Preoţiuc-Pietro et al., 2015a ,b, 2016 and images (Alowibdi et al., 2013; You et al., 2014; Farseev et al., 2015; Skowron et al., 2016; Liu et al., 2016; Guntuku et al., 2017; Samani et al., 2018; Guntuku et al., 2019) .",
"We hypothesize that socio-demographic traits also play a role in the types of text-image relationships employed on Twitter.",
"To measure this, we use partial Pearson correlation where the dependent variables are one of four socio-demographic traits described in Section 4.2.",
"The independent variables indicate the average times with which the user employed a certain relationship type.",
"We code this using six different variables: two representing the two broader tasks -the percentage of tweets where image adds information and the percentage of tweets where the text is represented in the image -and four encoding each combination between the two tasks.",
"In addition, for all analyses we consider gender and age as basic human traits and control for data skew by introducing both variables as controls in partial correlation, as done in prior work (Schwartz et al., 2013; Holgate et al., 2018) .",
"When studying age and gender, we only use the other trait as the control.",
"Because we are running several statistical tests at once (24) without predefined hypotheses, we use Bonferroni correction to counteract the problem of multiple comparisons.",
"The results are presented in Table 2 .",
"We observe that age is the only user demographic trait that is significantly correlated to text-image relationship preference after controlling for multiple comparisons and other demographic traits.",
"The text-image relationship where the text is represented in the image, at least partially, is positively correlated with age (r = 0.117).",
"Further analyzing the four individual text-image relationship types reveals that older users especially prefer tweets where there is a semantic overlap between the concepts present in the text and the image, but the image contributes with additional information to the meaning of the tweet.",
"This is arguably the most conventional usage of images, where they illustrate the text and provide more details than the text could.",
"Younger users prefer most tweets where the image adds information to the meaning of the tweet, but this has no semantic overlap with the text.",
"These are usually tweets where the text represents merely a comment or a feeling expressed with the image providing the context.",
"This represents a more image-centric approach to the meaning of the tweet that is specific to younger users.",
"These correlations are controlled for gender.",
"Education was also correlated with images where the text was represented in the image (r = 0.076, p < .01, Bonferroni corrected), but this correlation did not meet the significance criteria when controlled for age to which education is moderately correlated (r = 0.302).",
"This demonstrates the importance of controlling for such factors in this type of analysis.",
"No effects were found with respect to gender or income.",
"Table 2 : Pearson correlation between user demographic traits and usage of the different text-image relationship types.",
"All correlations in bold are significant at p < .01, two-tailed t-test, Bonferroni corrected for multiple comparisons.",
"Results for gender are controlled for age and vice versa.",
"Results for education and income are controlled for age and gender.",
"Tweet Metadata Analysis We adapt a similar approach to uncover potential relationships between the text-image relationship expressed in the tweet and tweet metadata features described in Section 5.2.",
"However, after controlling for multiple comparisons, we are left with no significant correlations at p < 0.01 level.",
"Hence, we refrain from presenting and discussing any results using this feature group as significant.",
"Text Analysis Finally, we aim to identify the text and image features that characterize the four types of text-image relationship.",
"We use univariate Pearson correlation where the independent variable is each feature's normalized value in a tweet and the dependent variables are two binary indicators for the text and image tasks respectively.",
"When performed using text features, this technique was coined Differential Language Analysis (Schwartz et al., 2013 (Schwartz et al., , 2017 .",
"The results when using unigrams as features are presented in Figure 3 , 4 and 5.",
"Results for the image task (Figure 3) show that the image adds to the meaning of the tweet if words such as this, it, why, needs or want are used.",
"These words can appear in texts with the role of referencing or pointing to an entity which is only present in the image.",
"Conversely, the image does not add to the meaning of the tweet when words indicative of objects that are also described in the image are present (cat, baby, eyes or face), thus resulting in the image not adding to the meaning of the tweet.",
"A special case are tweets with birthday wishes, where a person is mentioned in text and also displayed in an image.",
"Finally, the tbt keyword and hashtag is a popular social media trend where users post nostalgic pictures of their past accompanied by their textual description.",
"The comparison between the two outcomes of the text task is presented in Figure 4 .",
"When the text and image semantically overlap, we observe words indicative of actions (i've), possessions (your) or qualitative statements (congrats, loved, excited, tried), usually about objects or persons also present in the image.",
"We also observe a few nouns (cats, hillary) indicating frequent content that is also depicted in images (NB: the tweets were collected in 2016 when the U.S. presiden-tial elections took place).",
"Analyzing this outcome jointly with the text task, we uncover a prominent theme consisting of words describing first person actions (congrats, thank, i've, saw, tell) present when the image provides facets not covered by text (Figure 5d ).",
"Several keywords from text (cat, game, winter) show types of content which are present in both image and text, but the image is merely an illustrating these concepts without adding additional information (Figure 5a) .",
"In contrast, the text is not represented in the image when it contains words specific of comments (when, lmao), questions (do, was), references (this) or ellipsis ('...'), all often referencing the content of the image as identified through data inspection.",
"References to self, objects and personal states (i, me) and feelings (miss) are also expressed in text about items or things not appearing the image from the same tweet.",
"Further exploring this result though the image task outcome, we see that the latter category of feelings about persons of objects ( Figure 5a ) -miss, happy, lit, like) are specific of when the image does not add additional information.",
"Through manual inspection of these images, they often display a meme (as in Figure 1d ) or unrelated expressions to the text's content.",
"The image adds information when the text is not represented (Figure 5c ) if the latter includes personal feelings, (me, i, i'm, want), comments (lol, lmao) and references (this, it), usually related to the image content as identified through an analysis of the data.",
"Conclusions We defined and analyzed quantitatively and qualitatively the semantic relationships between the text and the image of the same tweet using a novel annotated data set.",
"The frequency of use is influenced by the age of the poster, with younger users employing images with a more prominent role in the tweet, rather than just being redundant to the text or as a means of illustrating it.",
"We studied the correlation between the content in the text and relation with the image, highlighting a differentiation between relationship types, even if only using the text of the tweet alone.",
"We developed models that use both text and image features to classify the text-image relationship, with especially high performance (F1 = 0.81) in identifying if the image is redundant, which is immediately useful for downstream applications that maximize screen es-tate for users.",
"Future work will look deeper into using the similarity between the content of the text and image (Leong and Mihalcea, 2011), as the text task results showed room for improvements.",
"We envision that our data, task and classifiers will be useful as a preprocessing step in collecting data for training large scale models for image captioning (Feng and Lapata, 2010) or tagging (Mahajan et al., 2018) or for improving recommendations (Chen et al., 2016) by filtering out tweets where the text and image have no semantic overlap or can enable new tasks such as identifying tweets that contain creative descriptions for images."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"4.2",
"4.3",
"5",
"5.1",
"5.2",
"5.3",
"5.4",
"5.5",
"6",
"7",
"7.1",
"7.2",
"7.3",
"8"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Categorizing Text-Image Relationships",
"Data Set",
"Data Sampling",
"Demographic Variables",
"Annotation",
"Methods",
"User Demographics",
"Tweet Metadata",
"Text-based Methods",
"Image-based Methods",
"Joint Text-Image Methods",
"Predicting Text-Image Relationship",
"Analysis",
"User Analysis",
"Tweet Metadata Analysis",
"Text Analysis",
"Conclusions"
]
}
|
GEM-SciDuet-train-51#paper-1089#slide-9
|
Prediction Methods
|
2019 Bloomberg Finance L.P. All rights reserved.
|
2019 Bloomberg Finance L.P. All rights reserved.
|
[] |
GEM-SciDuet-train-51#paper-1089#slide-10
|
1089
|
Categorizing and Inferring the Relationship between the Text and Image of Twitter Posts
|
Text in social media posts is frequently accompanied by images in order to provide content, supply context, or to express feelings. This paper studies how the meaning of the entire tweet is composed through the relationship between its textual content and its image. We build and release a data set of image tweets annotated with four classes which express whether the text or the image provides additional information to the other modality. We show that by combining the text and image information, we can build a machine learning approach that accurately distinguishes between the relationship types. Further, we derive insights into how these relationships are materialized through text and image content analysis and how they are impacted by user demographic traits. These methods can be used in several downstream applications including pre-training image tagging models, collecting distantly supervised data for image captioning, and can be directly used in end-user applications to optimize screen estate.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200
],
"paper_content_text": [
"Introduction Social media sites have traditionally been centered around publishing textual content.",
"Recently, posting images on social media has become a very popular way of expressing content and feelings especially due to the wide availability of mobile devices and connectivity.",
"Images are currently present in a significant fraction of tweets and tweets with images get double the engagement of those without (Buffer, 2016) .",
"Thus, in addition to text, images have become key components of tweets.",
"However, little is known about how textual content is related to the images with which they appear.",
"For example, concepts or feelings mentioned in text could be illustrated or strengthened by images, text can point to the content of an image or can just provide commentary on the image content.",
"Formalizing and understanding the relationship between the two modalities -text and images -is useful in several areas: a) for NLP and computer vision research, where image and text data from tweets are used to developing data sets and methods for image captioning (Mitchell et al., 2012) or object recognition (Mahajan et al., 2018) ; b) for social scientists and psychologists trying to understand social media use; c) in browsers or apps where images that may not contain additional content in addition to the text would be replaced by a placeholder and displayed if the end-user desires to in order to op-timize screen space (see Figure 2 ).",
"Figure 1 illustrates four different ways in which the text and image of the same tweet can be related: • Figures 1(a,b) show how the image can add to the semantics of the tweet, by either providing more information than the text (Figure 1a ) or by providing the context for understanding the text (Figure 1b ); • In Figures 1(c,d) , the image only illustrates what is expressed through text, without providing any additional information.",
"Hence, in both of these cases, the text alone is sufficient to understanding the tweet's key message; • Figures 1(a,c) show examples of tweets where there is a semantic overlap between the content of the text and image: bike and sign in Figure 1a and tacos in Figure 1c ; • In Figures 1(b,d) , the textual content is not represented in the image, with the text being either a comment on the image's content (Figure 1b) or the image illustrating a feeling related to the text's content.",
"In this paper, we present a comprehensive analysis that focuses on the types of relationships between the text and image in a tweet.",
"Our contributions include: • Defining the types of relationships between the text and the image of a social media post; • Building a data set of tweets annotated with text -image relationship type; 1 • Machine learning methods that use both text and image content to predict the relationship between the two modalities; • An analysis into the author's demographic traits that are related to usage preference of textimage relationship types; • An analysis of the textual features which characterize each relationship type.",
"Related Work Task.",
"The relationship between a text and its associated image was researched in a few prior studies.",
"For general web pages, Marsh and Domas White (2003) propose a taxonomy of 49 relationships grouped in three major categories based on how similar the image is to the text, ranging from little relation to going beyond the text, which forms the basis of one of our relationship dimensions.",
"Martinec and Salway (2005) aim to categorize text-image relationships in scientific articles from two perspectives: the relative importance of one modality compared to the other and the logico-semantic overlap.",
"Alikhani and Stone (2018) argue that understanding multimodal text-image presentation requires studying the coherence relations that organize the content.",
"Even when a single relationship is used, such as captioning, it can be expressed in multiple forms such as telic, atelic or stative .",
"Wang et al.",
"(2014) use the intuition that text and images from microposts can be associated or not or depend on one another and use this intuition in a topic model that learns topics and image tags jointly.",
"Jas and Parikh (2015) study the concept of image specificity through how similar multiple descriptions of the same image are to each other.",
"However, none of these studies propose any predictive methods for text-image relationship types.",
"annotate and train models on a recipe data set (Yagcioglu et al., 2018) for the relationships between instructional text and images around the following dimensions: temporal, logical and incidental detail.",
"Chen et al.",
"(2013) study text-image relationships using social media data focusing on the distinction between images that are overall visually relevant or non-relevant to the textual content.",
"They build models using the text and image content that predict the relationship type (Chen et al., 2015) .",
"We build on this research and define an annotation scheme that focuses on each of the two modalities separately and look at both their semantic overlap and contribution to the meaning of the whole tweet.",
"Applications.",
"Several applications require automatically predicting the semantic text-image relationship in the data.",
"Models for automatically generating image descriptions (Feng and Lapata, 2010; Ordonez et al., 2011; Mitchell et al., 2012; Vinyals et al., 2015; Lu et al., 2017) or predicting tags (Mahajan et al., 2018) are built using large training data sets of noisy image-text pairs from sources such as tweets.",
"Multimodal named entity disambiguation leverages visual context vectors from social media images to aid named entity disambiguation (Moon et al., 2018) .",
"Multimodal topic labeling focuses on generating candidate labels (text or images) for a given topic and ranks them by relevance (Sorodoc et al., 2017) .",
"Several resources of images paired with descriptive captions are available, which can be used to build similarity metrics and joint semantic spaces for text and images (Young et al., 2014) .",
"However, all these assume that the text and image represent similar concepts, which, as we show in this paper, is not true on Twitter.",
"Being able to classify this relationship can be useful for all above-mentioned applications.",
"Categorizing Text-Image Relationships We define the types of semantic relationships that can exist between the content of the text and the image by splitting them into two tasks for simplicity.",
"The first task is centered on the role of the text to the tweet's semantics, while the second focuses on the image's role.",
"The first task -referred to as the text task in the rest of the paper -focuses on identifying if there is semantic overlap between the context of the text and the image.",
"This task is defined using the following guidelines: 1.",
"Some or all of the content words in the text are represented in the image (Text is represented) 2.",
"None of the content words in the text are represented in the image (Text is not represented): • None of the content words are represented in the image, or • The text is only a comment about the content of the image, or • The text expresses a feeling or emotion about the content of the image, or • The text only makes a reference to something shown in the image, or • The text is unrelated to the image Examples for this task can be seen in Figure 1 by comparing Figures 1(a ,c) (Text is represented) with Figures 1(b,d) (Text is not represented).",
"The second task -referred to as the image task in the rest of the paper -focuses on the role of the image to the semantics of the tweet and aims to identify if the image's content contributes with additional information to the meaning of the tweet beyond the text, as judged by an independent third party.",
"This task is defined and annotated using the following guidelines: 1.",
"Image has additional content that represents the meaning of the text and the image (Image adds): • Image contains other text that adds additional meaning to the text, or • Image depicts something that adds information to the text or • Image contains other entities that are referenced by the text.",
"2.",
"Image does not add additional content that represents the meaning of text+image (Image does not add).",
"Examples for the image task can also be seen in Figure 1 . Combining the labels of the two binary tasks described above gives rise to four types of text-image relationships (Image+Text Task).",
"All of the four relationship types are exemplified in Figure 1 .",
"Data Set To study the relationship between the text and image in the same social media post, we define a new annotation schema and collect a new annotated corpus.",
"To the best of our knowledge, no such corpus exists in prior research.",
"Data Sampling We use Twitter as the source of our data, as this source contains a high level of expression of thoughts, opinions and emotions (Java et al., 2007; Kouloumpis et al., 2011) .",
"It represents a platform for observing written interactions and conversations between users (Ritter et al., 2010) .",
"The tweets were randomly sampled from a list of users for which several of their socio-demographic traits are known, introduced in past research.",
"This will enable us to explore if the frequency of posting tweets with a certain text-image relationship is different across socio-demographic groups.",
"We downloaded as many tweets as we could from these users using the Twitter API (up to 3,200 tweets/user per API limits).",
"We decided to annotate only tweets from within the same time range (2016) in order to reduce the influence of potential platform usage changes with time.",
"We filter out tweets that are not written in English using the langid.py tool (Lui and Baldwin, 2012) .",
"In total, 2,263 users (out of the initial 4,132) have posted tweets with at least one image in the year 2016 and were included in our analysis.",
"Our final data set contains 4,471 tweets.",
"Demographic Variables The Twitter users from the data set we sampled have self-reported the following demographic variables through a survey: gender, age, education level and annual income.",
"All users solicited for data collection were from the United States in order to limit cultural variation.",
"• Gender was considered binary 2 and coded with Female -1 and Male -0.",
"All other variables are treated as ordinal variables.",
"• Age is represented as a integer value in the 13-90 year old interval.",
"• Education level is coded as an ordinal variable with 6 values representing the highest degree obtained, with the lowest being 'No high school degree' (coded as 1) and the highest being 'Advanced Degree (e.g., PhD)' (coded as 6).",
"• Income level is coded as an ordinal variable with 8 values representing the annual income of the person, ranging from '< $20,000' to '> $200,000'.",
"For a full description of the user recruitment and quality control processes, we refer the interested reader to .",
"Annotation We have collected annotations for text-image pairs from 4,471 tweets using the Figure Eight platform (formerly CrowdFlower).",
"We annotate all tweets containing both text and image using two independent annotation tasks in order to simplify the task and to avoid priming annotators to use the outcome of one task as an indicator for the outcome of the other.",
"For quality control, 10% of annotations were test questions annotated by the authors.",
"Annotators had to maintain a minimum accuracy on test questions of 85% for the image task and 75% for the text task for their annotations to be valid.",
"Inter-annotator agreement is measured using Krippendorf's Alpha.",
"The overall Krippendorf's Alpha is 0.71 for the image task, which is in the upper part of the substantial agreement band (Artstein and Poesio, 2008) .",
"We collect 3 judgments and use majority vote to obtain the final label to further remove noise.",
"For the text task, we collected and aggregated 5 judgments as the Krippendorf's Alpha is 0.46, which is considered moderate agreement (Artstein and Poesio, 2008) .",
"The latter task was more difficult due to requiring specific world knowledge (e.g.",
"a singer mentioned in a text also present in an image) or contained information encoded in hashtags or usernames which the annotators sometimes overlooked.",
"The aggregated judgments for each task were combined to obtain the four class labels.",
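The combination of the two aggregated binary judgments into the four relationship classes can be sketched as follows (an illustrative helper, not from the paper's released code; the class-label strings follow the wording used in the paper):

```python
# Map the aggregated outcomes of the two binary annotation tasks
# (image task and text task) to one of the four relationship types.
def combine_labels(image_adds: bool, text_represented: bool) -> str:
    image_part = "Image adds" if image_adds else "Image does not add"
    text_part = "Text is represented" if text_represented else "Text is not represented"
    return f"{image_part} & {text_part}"
```

For example, `combine_labels(True, False)` yields the class "Image adds & Text is not represented", as in Figure 1b.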
"Methods Our goal is to develop methods that are capable of automatically classifying the text-image relationship in tweets.",
"We experiment with several methods which use information of four different types: demographics of the user posting the tweet, metadata from the tweet, the text of the tweet or the image of the tweet; plus a combination of them.",
"The methods we use are described in this section.",
"User Demographics User demographic features are the survey-based demographic information we have available for all users that posted the annotated tweets.",
"The use of these traits is based on the intuition that different demographic groups have different posting preferences (Pennacchiotti and Popescu, 2011; .",
"We use this approach for comparison reasons only, as in practical use cases we would normally not have access to the author's demographic traits.",
"We code the gender, age, education level and income level of the user as features and use them in a logistic regression classifier to classify the textimage relationship.",
"Tweet Metadata We experiment with using the tweet metadata as features.",
"These code if a tweet is a reply, tweet, like or neither.",
"We also add as features the tweet like count, the number of followers, friends and posts of the post's author and include them all in a logistic regression classifier.",
"These features are all available at tweet publishing time and we build a model using them to establish a more solid baseline for content based approaches.",
"Text-based Methods We use the textual content of the tweet alone to build models for predicting the text-image relationship.",
"We expect that certain textual cues will be specific to relationships even without considering the image content.",
"For example, tweets ending in an ellipsis or short comments will likely be predictive of the text not being represented in the image.",
"Surface Features.",
"We first use a range of surface features which capture more of the shallow stylistic content of the tweet.",
"We extract number of tokens, uppercase tokens, exclamations, questions, ellipsis, hashtags, @ mentions, quotes and URLs from the tweet and use them as features in a logistic regression classifier.",
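The surface features listed above could be extracted along the following lines (a sketch assuming whitespace tokenization and simple regular expressions; the paper's exact tokenizer and feature definitions may differ):

```python
import re

# Count shallow stylistic cues of a tweet as described above.
def surface_features(tweet: str) -> dict:
    tokens = tweet.split()
    return {
        "n_tokens": len(tokens),
        "n_uppercase": sum(1 for t in tokens if t.isupper()),
        "n_exclamations": tweet.count("!"),
        "n_questions": tweet.count("?"),
        "n_ellipsis": len(re.findall(r"\.\.\.|…", tweet)),
        "n_hashtags": len(re.findall(r"#\w+", tweet)),
        "n_mentions": len(re.findall(r"@\w+", tweet)),
        "n_quotes": tweet.count('"') // 2,
        "n_urls": len(re.findall(r"https?://\S+", tweet)),
    }
```

These counts would then be fed as features into the logistic regression classifier.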
"Bag of Words.",
"The most common approach for building a text-based model is using bag-ofwords features.",
"Here, we extract unigram and bigram features and use them in a logistic regression classifier with elastic net regularization (Zou and Hastie, 2005) .",
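A bag-of-words model of this kind can be sketched with scikit-learn as a stand-in for the paper's implementation (the `l1_ratio` and iteration count below are illustrative, not the paper's tuned values):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Unigram + bigram features fed into a logistic regression
# classifier with elastic net regularization.
def build_bow_classifier():
    return make_pipeline(
        CountVectorizer(ngram_range=(1, 2)),
        LogisticRegression(penalty="elasticnet", solver="saga",
                           l1_ratio=0.5, max_iter=5000),
    )
```

The pipeline is fit on raw tweet texts paired with relationship labels and predicts the label for unseen tweets.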
"LSTM.",
"Finally, based on recent results in text classification, we also experiment with a neural network approach which uses a Long-Short Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) network.",
"The LSTM network processes the tweet sequentially, where each word is represented by its embedding (E = 200), followed by a dense hidden layer (D = 64), a ReLU activation function and dropout (0.4). The model is trained by minimizing cross entropy using the Adam optimizer (Kingma and Ba, 2014).",
"The network uses in-domain Twitter GloVe embeddings pre-trained on 2 billion tweets (Pennington et al., 2014) .",
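The forward pass of the architecture described above (embedding lookup, sequential LSTM, dense layer with ReLU, softmax output) can be sketched in plain numpy; dimensions are scaled down for readability, and training (cross entropy, Adam, dropout) and the pre-trained GloVe embeddings are omitted:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# emb: (V, E) embedding table; Wx: (4H, E); Wh: (4H, H); b: (4H,);
# Wd/bd: dense hidden layer; Wo/bo: output layer over the classes.
def lstm_forward(token_ids, emb, Wx, Wh, b, Wd, bd, Wo, bo):
    H = Wh.shape[1]
    h = np.zeros(H)
    c = np.zeros(H)
    for t in token_ids:                 # process the tweet sequentially
        x = emb[t]
        z = Wx @ x + Wh @ h + b
        i, f, o = sigmoid(z[:H]), sigmoid(z[H:2*H]), sigmoid(z[2*H:3*H])
        g = np.tanh(z[3*H:])
        c = f * c + i * g
        h = o * np.tanh(c)
    d = np.maximum(0.0, Wd @ h + bd)    # dense hidden layer + ReLU
    logits = Wo @ d + bo
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()              # softmax class probabilities
```

In practice such a model would be implemented in a deep learning framework so the parameters can be trained with backpropagation.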
"Image-based Methods We use the content of the tweet image alone to build models for predicting the text-image relationship.",
"Similar to text, we expect that certain image content will be predictive of text-image relationships even without considering the text content.",
"For example, images of people may be more likely to have in the text the names of those persons.",
"To analyze image content, we make use of large pre-trained neural networks for the task of object recognition on the ImageNet data set.",
"ImageNet (Deng et al., 2009 ) is a visual database developed for object recognition research and consists of 1000 object types.",
"In particular, we use the popular pre-trained InceptionNet model (Szegedy et al., 2015) , which achieved the best performance at the ImageNet Large Scale Visual Recognition Challenge 2014, to build the following two image-based models.",
"ImageNet Classes.",
"First, we represent each image in a tweet with the probability distribution over the 1,000 ImageNet classes obtained from InceptionNet.",
"Then, we pass those features to a logistic regression classifier which is trained on our task.",
"In this setup, the network parameters remain fixed, while only the ImageNet class weights are learned in the logistic regression classifier.",
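This setup amounts to training a logistic regression on 1,000-dimensional class-probability vectors; a minimal sketch follows (the random feature matrix in the test merely stands in for the InceptionNet outputs, which require the pre-trained network):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# class_probs: (n_images, 1000) array of ImageNet class probabilities,
# one row per tweet image; labels: the relationship class per tweet.
def train_imagenet_class_model(class_probs, labels):
    clf = LogisticRegression(max_iter=1000)
    clf.fit(class_probs, labels)
    return clf
```

Only the logistic regression weights over the 1,000 classes are learned; the image network itself stays frozen.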
"Tuned InceptionNet.",
"Additionally, we tailored the InceptionNet network to directly predict our tasks by replacing the final 1,000-class ImageNet layer with a softmax layer trained with the multinomial logistic loss for our task.",
"Then, we loaded the pretrained network from (Szegedy et al., 2015) and fine-tuned the final fully-connected layer with the modified loss layers.",
"We perform this in order to directly predict our task, while also overcoming the necessity of re-extracting the entire model weights from our restricted set of images.",
"The two approaches to classification using image content based on pre-trained model on Im-ageNet have been used successfully in past research (Cinar et al., 2015) .",
"Joint Text-Image Methods Finally, we combine the textual and image information in a single model to classify the text-image relationship type, as we expect both types of content and their interaction to be useful to the task.",
"Ensemble.",
"A simple method for combining the information from both modalities is to build an ensemble classifier.",
"This is done with a logistic regression model with two features: the Bag of Words text model's predicted class probability and the Tuned InceptionNet model's predicted class probability.",
"The parameters of the model are tuned by cross validation on the training data and similar splits as the individual models.",
"LSTM + InceptionNet.",
"We also build a joint approach by concatenating the features from the final layers of our LSTM and InceptionNet models and passing them through a fully-connected (FC) feed forward neural network with one hidden layer (64 nodes).",
"The final output is our text-image relationship type.",
"We use the Adam optimizer to fine tune this network.",
"The LSTM model has the same parameters as in the text-only approach, while the InceptionNet model is initialized with the pre-trained model on the ImageNet data set.",
"Predicting Text-Image Relationship We split our data into a 80% train (3,576 tweets) and 20% test (895 tweets) stratified sample for all of our experiments.",
"Parameters were tuned using 10-fold cross-validation with the training set, and results are reported on the test set.",
"Table 1 presents the weighted F1-scores for the text task, the image task and the image+text task with all the methods described in Section 5.",
"The weighted F1 score is the weighted average of the class-level F1 scores, where the weight is the number of items in each class.",
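The weighted F1 score described above reduces to a weighted average, which can be sketched directly:

```python
# Weighted F1: average of per-class F1 scores, each weighted by the
# number of test items in that class.
def weighted_f1(class_f1, class_counts):
    total = sum(class_counts)
    return sum(f * n for f, n in zip(class_f1, class_counts)) / total
```

For instance, two classes with F1 scores of 1.0 and 0.5 and 3 vs. 1 test items give a weighted F1 of 0.875.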
"The majority baseline always predicts the most frequent class in each task, namely: Image does not add for the image task, Text is not represented for the text task and Image does not add & Text is not represented for the Image + Text task.",
"The models using user demographics and tweet metadata show minor improvements over the majority class baseline for both tasks.",
"When the two tasks are combined, both feature types offer only a slight increase over the baseline.",
"This shows that user factors mildly impact the frequency with which relationship types are used, which will be explored further in the analysis section.",
"The models that use tweet text as features show consistent improvements over the baseline for all three tasks.",
"The two models that use the tweet's topical content (Bag of Words and LSTM) obtain higher predictive performance over the surface features.",
"Both content based models obtain relatively similar performance, with the LSTM performing better on the image task.",
"The models which use information extracted from the image alone also consistently outperform the baseline on all three tasks.",
"Re-tuning the neural network performs substantially better than building a model directly from the ImageNet classes on the image task and narrowly outperforms the other method on the text task.",
"This is somewhat expected, as the retuning is performed on this domain specific task.",
"When comparing text and image based models across tasks, we observe that using image features obtains substantially better performance on the image task, while the text models obtain better performance on the text task.",
"This is somewhat natural, as the focus of each annotation task is on one modality and methods relying on content from that modality are more predictive alone as to what ultimately represents the text-image relationship type.",
"Our naive ensemble approach does not yield substantially better results than the best performing methods using a single modality.",
"However, by jointly modelling both modalities, we are able to obtain improvements -especially on the image task.",
"This shows that both types of information and their interaction are important to this task.",
"Methods that exploit more heavily the interaction and semantic similarity between the text and the image are left for future work.",
"We also observe that the predictive methods we described are better at classifying the image task.",
"The analysis section below will allow us to uncover more about what type of content characterizes each relationship type.",
"Analysis In this section, we aim to gain a better understanding of the type of content specific of the four textimage relationship types and about user type preferences in their usage.",
"User Analysis Socio-demographic traits of the authors of posts are known to be correlated with several social media behaviors including text (Rao et al., 2010; Pennacchiotti and Popescu, 2011; Schwartz et al., 2013; Volkova et al., 2014; Lampos et al., 2014; Preoţiuc-Pietro et al., 2015a ,b, 2016 and images (Alowibdi et al., 2013; You et al., 2014; Farseev et al., 2015; Skowron et al., 2016; Liu et al., 2016; Guntuku et al., 2017; Samani et al., 2018; Guntuku et al., 2019) .",
"We hypothesize that socio-demographic traits also play a role in the types of text-image relationships employed on Twitter.",
"To measure this, we use partial Pearson correlation where the dependent variables are one of four socio-demographic traits described in Section 4.2.",
"The independent variables indicate the average times with which the user employed a certain relationship type.",
"We code this using six different variables: two representing the two broader tasks -the percentage of tweets where image adds information and the percentage of tweets where the text is represented in the image -and four encoding each combination between the two tasks.",
"In addition, for all analyses we consider gender and age as basic human traits and control for data skew by introducing both variables as controls in partial correlation, as done in prior work (Schwartz et al., 2013; Holgate et al., 2018) .",
"When studying age and gender, we only use the other trait as the control.",
"Because we are running several statistical tests at once (24) without predefined hypotheses, we use Bonferroni correction to counteract the problem of multiple comparisons.",
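The analysis above can be sketched as follows: a partial Pearson correlation computed by regressing the control variables out of both sides and correlating the residuals, with a Bonferroni-corrected significance threshold (this is one standard way to compute partial correlation, not necessarily the authors' exact implementation):

```python
import numpy as np

# Partial Pearson correlation of x and y controlling for the columns
# of `controls` (e.g. age and gender), via least-squares residuals.
def partial_corr(x, y, controls):
    Z = np.column_stack([np.ones(len(x)), controls])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return float(np.corrcoef(rx, ry)[0, 1])

# Bonferroni correction: divide the significance level by the number
# of tests run (24 in the analysis above).
def bonferroni_alpha(alpha=0.01, n_tests=24):
    return alpha / n_tests
```

A correlation is reported as significant only if its p-value falls below the corrected threshold.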
"The results are presented in Table 2 .",
"We observe that age is the only user demographic trait that is significantly correlated to text-image relationship preference after controlling for multiple comparisons and other demographic traits.",
"The text-image relationship where the text is represented in the image, at least partially, is positively correlated with age (r = 0.117).",
"Further analyzing the four individual text-image relationship types reveals that older users especially prefer tweets where there is a semantic overlap between the concepts present in the text and the image, but the image contributes with additional information to the meaning of the tweet.",
"This is arguably the most conventional usage of images, where they illustrate the text and provide more details than the text could.",
"Younger users most prefer tweets where the image adds information to the meaning of the tweet, but this has no semantic overlap with the text.",
"These are usually tweets where the text represents merely a comment or a feeling expressed with the image providing the context.",
"This represents a more image-centric approach to the meaning of the tweet that is specific to younger users.",
"These correlations are controlled for gender.",
"Education was also correlated with images where the text was represented in the image (r = 0.076, p < .01, Bonferroni corrected), but this correlation did not meet the significance criteria when controlled for age to which education is moderately correlated (r = 0.302).",
"This demonstrates the importance of controlling for such factors in this type of analysis.",
"No effects were found with respect to gender or income.",
"Table 2 : Pearson correlation between user demographic traits and usage of the different text-image relationship types.",
"All correlations in bold are significant at p < .01, two-tailed t-test, Bonferroni corrected for multiple comparisons.",
"Results for gender are controlled for age and vice versa.",
"Results for education and income are controlled for age and gender.",
"Tweet Metadata Analysis We adapt a similar approach to uncover potential relationships between the text-image relationship expressed in the tweet and tweet metadata features described in Section 5.2.",
"However, after controlling for multiple comparisons, we are left with no significant correlations at p < 0.01 level.",
"Hence, we refrain from presenting and discussing any results using this feature group as significant.",
"Text Analysis Finally, we aim to identify the text and image features that characterize the four types of text-image relationship.",
"We use univariate Pearson correlation where the independent variable is each feature's normalized value in a tweet and the dependent variables are two binary indicators for the text and image tasks respectively.",
"When performed using text features, this technique was coined Differential Language Analysis (Schwartz et al., 2013, 2017) .",
"The results when using unigrams as features are presented in Figure 3 , 4 and 5.",
"Results for the image task (Figure 3) show that the image adds to the meaning of the tweet if words such as this, it, why, needs or want are used.",
"These words can appear in texts with the role of referencing or pointing to an entity which is only present in the image.",
"Conversely, the image does not add to the meaning of the tweet when words indicative of objects that are also described in the image are present (cat, baby, eyes or face), thus resulting in the image not adding to the meaning of the tweet.",
"A special case are tweets with birthday wishes, where a person is mentioned in text and also displayed in an image.",
"Finally, the tbt keyword and hashtag is a popular social media trend where users post nostalgic pictures of their past accompanied by their textual description.",
"The comparison between the two outcomes of the text task is presented in Figure 4 .",
"When the text and image semantically overlap, we observe words indicative of actions (i've), possessions (your) or qualitative statements (congrats, loved, excited, tried), usually about objects or persons also present in the image.",
"We also observe a few nouns (cats, hillary) indicating frequent content that is also depicted in images (NB: the tweets were collected in 2016 when the U.S. presidential elections took place).",
"Analyzing this outcome jointly with the text task, we uncover a prominent theme consisting of words describing first person actions (congrats, thank, i've, saw, tell) present when the image provides facets not covered by text (Figure 5d ).",
"Several keywords from text (cat, game, winter) show types of content which are present in both image and text, but the image merely illustrates these concepts without adding additional information (Figure 5a ).",
"In contrast, the text is not represented in the image when it contains words specific of comments (when, lmao), questions (do, was), references (this) or ellipsis ('...'), all often referencing the content of the image as identified through data inspection.",
"References to self, objects and personal states (i, me) and feelings (miss) are also expressed in text about items or things not appearing in the image from the same tweet.",
"Further exploring this result through the image task outcome, we see that the latter category of feelings about persons or objects (Figure 5a : miss, happy, lit, like) is specific to cases where the image does not add additional information.",
"Through manual inspection of these images, they often display a meme (as in Figure 1d ) or unrelated expressions to the text's content.",
"The image adds information when the text is not represented (Figure 5c ) if the latter includes personal feelings (me, i, i'm, want), comments (lol, lmao) and references (this, it), usually related to the image content as identified through an analysis of the data.",
"Conclusions We defined and analyzed quantitatively and qualitatively the semantic relationships between the text and the image of the same tweet using a novel annotated data set.",
"The frequency of use is influenced by the age of the poster, with younger users employing images with a more prominent role in the tweet, rather than just being redundant to the text or as a means of illustrating it.",
"We studied the correlation between the content in the text and relation with the image, highlighting a differentiation between relationship types, even if only using the text of the tweet alone.",
"We developed models that use both text and image features to classify the text-image relationship, with especially high performance (F1 = 0.81) in identifying if the image is redundant, which is immediately useful for downstream applications that maximize screen estate for users.",
"Future work will look deeper into using the similarity between the content of the text and image (Leong and Mihalcea, 2011), as the text task results showed room for improvements.",
"We envision that our data, task and classifiers will be useful as a preprocessing step in collecting data for training large scale models for image captioning (Feng and Lapata, 2010) or tagging (Mahajan et al., 2018) or for improving recommendations (Chen et al., 2016) by filtering out tweets where the text and image have no semantic overlap or can enable new tasks such as identifying tweets that contain creative descriptions for images."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"4.2",
"4.3",
"5",
"5.1",
"5.2",
"5.3",
"5.4",
"5.5",
"6",
"7",
"7.1",
"7.2",
"7.3",
"8"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Categorizing Text-Image Relationships",
"Data Set",
"Data Sampling",
"Demographic Variables",
"Annotation",
"Methods",
"User Demographics",
"Tweet Metadata",
"Text-based Methods",
"Image-based Methods",
"Joint Text-Image Methods",
"Predicting Text-Image Relationship",
"Analysis",
"User Analysis",
"Tweet Metadata Analysis",
"Text Analysis",
"Conclusions"
]
}
|
GEM-SciDuet-train-51#paper-1089#slide-10
|
Prediction Baseline Methods
|
Image Task (Image adds to meaning) Text Task (Text is represented) Image + Text Task
2019 Bloomberg Finance L.P. All rights reserved.
|
Image Task (Image adds to meaning) Text Task (Text is represented) Image + Text Task
2019 Bloomberg Finance L.P. All rights reserved.
|
[] |
GEM-SciDuet-train-51#paper-1089#slide-11
|
1089
|
Categorizing and Inferring the Relationship between the Text and Image of Twitter Posts
|
Text in social media posts is frequently accompanied by images in order to provide content, supply context, or to express feelings. This paper studies how the meaning of the entire tweet is composed through the relationship between its textual content and its image. We build and release a data set of image tweets annotated with four classes which express whether the text or the image provides additional information to the other modality. We show that by combining the text and image information, we can build a machine learning approach that accurately distinguishes between the relationship types. Further, we derive insights into how these relationships are materialized through text and image content analysis and how they are impacted by user demographic traits. These methods can be used in several downstream applications including pre-training image tagging models, collecting distantly supervised data for image captioning, and can be directly used in end-user applications to optimize screen estate.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200
],
"paper_content_text": [
"Introduction Social media sites have traditionally been centered around publishing textual content.",
"Recently, posting images on social media has become a very popular way of expressing content and feelings especially due to the wide availability of mobile devices and connectivity.",
"Images are currently present in a significant fraction of tweets and tweets with images get double the engagement of those without (Buffer, 2016) .",
"Thus, in addition to text, images have become key components of tweets.",
"However, little is known about how textual content is related to the images with which they appear.",
"For example, concepts or feelings mentioned in text could be illustrated or strengthened by images, text can point to the content of an image or can just provide commentary on the image content.",
"Formalizing and understanding the relationship between the two modalities -text and images -is useful in several areas: a) for NLP and computer vision research, where image and text data from tweets are used to develop data sets and methods for image captioning (Mitchell et al., 2012) or object recognition (Mahajan et al., 2018); b) for social scientists and psychologists trying to understand social media use; c) in browsers or apps, where images that do not add content beyond the text could be replaced by a placeholder and displayed only if the end-user desires, in order to optimize screen space (see Figure 2).",
"Figure 1 illustrates four different ways in which the text and image of the same tweet can be related: • Figures 1(a,b) show how the image can add to the semantics of the tweet, by either providing more information than the text (Figure 1a ) or by providing the context for understanding the text (Figure 1b ); • In Figures 1(c,d) , the image only illustrates what is expressed through text, without providing any additional information.",
"Hence, in both of these cases, the text alone is sufficient to understanding the tweet's key message; • Figures 1(a,c) show examples of tweets where there is a semantic overlap between the content of the text and image: bike and sign in Figure 1a and tacos in Figure 1c ; • In Figures 1(b,d) , the textual content is not represented in the image, with the text being either a comment on the image's content (Figure 1b) or the image illustrating a feeling related to the text's content.",
"In this paper, we present a comprehensive analysis that focuses on the types of relationships between the text and image in a tweet.",
"Our contributions include: • Defining the types of relationships between the text and the image of a social media post; • Building a data set of tweets annotated with text -image relationship type; 1 • Machine learning methods that use both text and image content to predict the relationship between the two modalities; • An analysis into the author's demographic traits that are related to usage preference of textimage relationship types; • An analysis of the textual features which characterize each relationship type.",
"Related Work Task.",
"The relationship between a text and its associated image was researched in a few prior studies.",
"For general web pages, Marsh and Domas White (2003) propose a taxonomy of 49 relationships grouped in three major categories based on how similar the image is to the text, ranging from little relation to going beyond the text, which forms the basis of one of our relationship dimensions.",
"Martinec and Salway (2005) aim to categorize text-image relationships in scientific articles from two perspectives: the relative importance of one modality compared to the other and the logico-semantic overlap.",
"Alikhani and Stone (2018) argue that understanding multimodal text-image presentation requires studying the coherence relations that organize the content.",
"Even when a single relationship is used, such as captioning, it can be expressed in multiple forms such as telic, atelic or stative .",
"Wang et al.",
"(2014) use the intuition that text and images from microposts can be associated or not or depend on one another and use this intuition in a topic model that learns topics and image tags jointly.",
"Jas and Parikh (2015) study the concept of image specificity through how similar to each other are multiple descriptions of that image.",
"However, none of these studies propose any predictive methods for text-image relationship types.",
"annotate and train models on a recipe data set (Yagcioglu et al., 2018) for the relationships between instructional text and images around the following dimensions: temporal, logical and incidental detail.",
"Chen et al.",
"(2013) study text-image relationships using social media data focusing on the distinction between images that are overall visually relevant or non-relevant to the textual content.",
"They build models using the text and image content that predict the relationship type (Chen et al., 2015) .",
"We build on this research and define an annotation scheme that focuses on each of the two modalities separately and look at both their semantic overlap and contribution to the meaning of the whole tweet.",
"Applications.",
"Several applications require automatically predicting the semantic text-image relationship in the data.",
"Models for automatically generating image descriptions (Feng and Lapata, 2010; Ordonez et al., 2011; Mitchell et al., 2012; Vinyals et al., 2015; Lu et al., 2017) or predicting tags (Mahajan et al., 2018) are built using large training data sets of noisy image-text pairs from sources such as tweets.",
"Multimodal named entity disambiguation leverages visual context vectors from social media images to aid named entity disambiguation (Moon et al., 2018) .",
"Multimodal topic labeling focuses on generating candidate labels (text or images) for a given topic and ranks them by relevance (Sorodoc et al., 2017) .",
"Several resources of images paired with descriptive captions are available, which can be used to build similarity metrics and joint semantic spaces for text and images (Young et al., 2014) .",
"However, all these assume that the text and image represent similar concepts, which, as we show in this paper, is not true on Twitter.",
"Being able to classify this relationship can be useful for all above-mentioned applications.",
"Categorizing Text-Image Relationships We define the types of semantic relationships that can exist between the content of the text and the image by splitting them into two tasks for simplicity.",
"The first task is centered on the role of the text to the tweet's semantics, while the second focuses on the image's role.",
"The first task -referred to as the text task in the rest of the paper -focuses on identifying if there is semantic overlap between the context of the text and the image.",
"This task is defined using the following guidelines: 1.",
"Some or all of the content words in the text are represented in the image (Text is represented) 2.",
"None of the content words in the text are represented in the image (Text is not represented): • None of the content words are represented in the image, or • The text is only a comment about the content of the image, or • The text expresses a feeling or emotion about the content of the image, or • The text only makes a reference to something shown in the image, or • The text is unrelated to the image Examples for this task can be seen in Figure 1 by comparing Figures 1(a ,c) (Text is represented) with Figures 1(b,d) (Text is not represented).",
"The second task -referred to as the image task in the rest of the paper -focuses on the role of the image to the semantics of the tweet and aims to identify if the image's content contributes with additional information to the meaning of the tweet beyond the text, as judged by an independent third party.",
"This task is defined and annotated using the following guidelines: 1.",
"Image has additional content that represents the meaning of the text and the image (Image adds): • Image contains other text that adds additional meaning to the text, or • Image depicts something that adds information to the text or • Image contains other entities that are referenced by the text.",
"2.",
"Image does not add additional content that represents the meaning of text+image (Image does not add).",
"Examples for the image task can be seen in Figure 1. Combining the labels of the two binary tasks described above gives rise to four types of text-image relationships (Image+Text Task).",
"All of the four relationship types are exemplified in Figure 1 .",
"Data Set To study the relationship between the text and image in the same social media post, we define a new annotation schema and collect a new annotated corpus.",
"To the best of our knowledge, no such corpus exists in prior research.",
"Data Sampling We use Twitter as the source of our data, as this source contains a high level of expression of thoughts, opinions and emotions (Java et al., 2007; Kouloumpis et al., 2011) .",
"It represents a platform for observing written interactions and conversations between users (Ritter et al., 2010) .",
"The tweets were randomly sampled from a list of users for which several of their socio-demographic traits are known, introduced in past research.",
"This will enable us to explore if the frequency of posting tweets with a certain text-image relationship is different across socio-demographic groups.",
"We downloaded as many tweets as we could from these users using the Twitter API (up to 3,200 tweets/user per API limits).",
"We decided to annotate only tweets from within the same time range (2016) in order to reduce the influence of potential platform usage changes with time.",
"We filter out tweets that are not written in English using the langid.py tool (Lui and Baldwin, 2012) .",
"In total, 2,263 users (out of the initial 4,132) have posted tweets with at least one image in the year 2016 and were included in our analysis.",
"Our final data set contains 4,471 tweets.",
"Demographic Variables The Twitter users from the data set we sampled have self-reported the following demographic variables through a survey: gender, age, education level and annual income.",
"All users solicited for data collection were from the United States in order to limit cultural variation.",
"• Gender was considered binary and coded with Female -1 and Male -0.",
"All other variables are treated as ordinal variables.",
"• Age is represented as an integer value in the 13-90 year old interval.",
"• Education level is coded as an ordinal variable with 6 values representing the highest degree obtained, with the lowest being 'No high school degree' (coded as 1) and the highest being 'Advanced Degree (e.g., PhD)' (coded as 6).",
"• Income level is coded as an ordinal variable with 8 values representing the annual income of the person, ranging from '< $20,000' to '> $200,000'.",
"For a full description of the user recruitment and quality control processes, we refer the interested reader to .",
"Annotation We have collected annotations for text-image pairs from 4,471 tweets using the Figure Eight platform (formerly CrowdFlower).",
"We annotate all tweets containing both text and image using two independent annotation tasks in order to simplify the task and not to prime annotators to use the outcome of one task as an indicator for the outcome of the other.",
"For quality control, 10% of annotations were test questions annotated by the authors.",
"Annotators had to maintain a minimum accuracy on test questions of 85% for the image task and 75% for the text task for their annotations to be valid.",
"Inter-annotator agreement is measured using Krippendorff's Alpha.",
"The overall Krippendorff's Alpha is 0.71 for the image task, which is in the upper part of the substantial agreement band (Artstein and Poesio, 2008).",
"We collect 3 judgments and use majority vote to obtain the final label to further remove noise.",
"For the text task, we collected and aggregated 5 judgments, as the Krippendorff's Alpha is 0.46, which is considered moderate agreement (Artstein and Poesio, 2008).",
"The latter task was more difficult due to requiring specific world knowledge (e.g.",
"a singer mentioned in a text also present in an image) or contained information encoded in hashtags or usernames which the annotators sometimes overlooked.",
"The aggregated judgments for each task were combined to obtain the four class labels.",
"Methods Our goal is to develop methods that are capable of automatically classifying the text-image relationship in tweets.",
"We experiment with several methods which use information of four different types: demographics of the user posting the tweet, metadata from the tweet, the text of the tweet or the image of the tweet; plus a combination of them.",
"The methods we use are described in this section.",
"User Demographics User demographic features are the survey-based demographic information we have available for all users that posted the annotated tweets.",
"The use of these traits is based on the intuition that different demographic groups have different posting preferences (Pennacchiotti and Popescu, 2011; .",
"We use this approach for comparison reasons only, as in practical use cases we would normally not have access to the author's demographic traits.",
"We code the gender, age, education level and income level of the user as features and use them in a logistic regression classifier to classify the textimage relationship.",
"Tweet Metadata We experiment with using the tweet metadata as features.",
"These encode whether a tweet is a reply, a retweet, a like or neither.",
"We also add as features the tweet like count, the number of followers, friends and posts of the post's author and include them all in a logistic regression classifier.",
"These features are all available at tweet publishing time and we build a model using them to establish a more solid baseline for content based approaches.",
"Text-based Methods We use the textual content of the tweet alone to build models for predicting the text-image relationship.",
"We expect that certain textual cues will be specific to relationships even without considering the image content.",
"For example, tweets ending in an ellipsis or short comments will likely be predictive of the text not being represented in the image.",
"Surface Features.",
"We first use a range of surface features which capture more of the shallow stylistic content of the tweet.",
"We extract number of tokens, uppercase tokens, exclamations, questions, ellipsis, hashtags, @ mentions, quotes and URLs from the tweet and use them as features in a logistic regression classifier.",
"Bag of Words.",
"The most common approach for building a text-based model is using bag-ofwords features.",
"Here, we extract unigram and bigram features and use them in a logistic regression classifier with elastic net regularization (Zou and Hastie, 2005) .",
"LSTM.",
"Finally, based on recent results in text classification, we also experiment with a neural network approach which uses a Long-Short Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) network.",
"The LSTM network processes the tweet sequentially, where each word is represented by its embedding (E = 200), followed by a dense hidden layer (D = 64), a ReLU activation function and dropout (0.4). The model is trained by minimizing cross entropy using the Adam optimizer (Kingma and Ba, 2014).",
"The network uses in-domain Twitter GloVe embeddings pre-trained on 2 billion tweets (Pennington et al., 2014) .",
"Image-based Methods We use the content of the tweet image alone to build models for predicting the text-image relationship.",
"Similar to text, we expect that certain image content will be predictive of text-image relationships even without considering the text content.",
"For example, images of people may be more likely to have in the text the names of those persons.",
"To analyze image content, we make use of large pre-trained neural networks for the task of object recognition on the ImageNet data set.",
"ImageNet (Deng et al., 2009 ) is a visual database developed for object recognition research and consists of 1000 object types.",
"In particular, we use the popular pre-trained InceptionNet model (Szegedy et al., 2015), which achieved the best performance at the ImageNet Large Scale Visual Recognition Challenge 2014, to build the following two image-based models.",
"ImageNet Classes.",
"First, we represent each image in a tweet with the probability distribution over the 1,000 ImageNet classes obtained from InceptionNet.",
"Then, we pass those features to a logistic regression classifier which is trained on our task.",
"In this setup, the network parameters remain fixed, while only the ImageNet class weights are learned in the logistic regression classifier.",
"Tuned InceptionNet.",
"Additionally, we tailored the InceptionNet network to directly predict our tasks by replacing the final 1,000-class ImageNet layer with a softmax over our task's classes, trained with the multinomial logistic loss.",
"Then, we loaded the pretrained network from (Szegedy et al., 2015) and fine-tuned the final fully-connected layer with the modified loss layers.",
"We perform this in order to directly predict our task, while also overcoming the necessity of re-extracting the entire model weights from our restricted set of images.",
"The two approaches to classification using image content based on pre-trained model on Im-ageNet have been used successfully in past research (Cinar et al., 2015) .",
"Joint Text-Image Methods Finally, we combine the textual and image information in a single model to classify the text-image relationship type, as we expect both types of content and their interaction to be useful to the task.",
"Ensemble.",
"A simple method for combining the information from both modalities is to build an ensemble classifier.",
"This is done with a logistic regression model with two features: the Bag of Words text model's predicted class probability and the Tuned InceptionNet model's predicted class probability.",
"The parameters of the model are tuned by cross validation on the training data and similar splits as the individual models.",
"LSTM + InceptionNet.",
"We also build a joint approach by concatenating the features from the final layers of our LSTM and InceptionNet models and passing them through a fully-connected (FC) feed forward neural network with one hidden layer (64 nodes).",
"The final output is our text-image relationship type.",
"We use the Adam optimizer to fine tune this network.",
"The LSTM model has the same parameters as in the text-only approach, while the InceptionNet model is initialized with the pre-trained model on the ImageNet data set.",
"Predicting Text-Image Relationship We split our data into an 80% train (3,576 tweets) and 20% test (895 tweets) stratified sample for all of our experiments.",
"Parameters were tuned using 10-fold cross-validation with the training set, and results are reported on the test set.",
"Table 1 presents the weighted F1-scores for the text task, the image task and the image+text task with all the methods described in Section 5.",
"The weighted F1 score is the weighted average of the class-level F1 scores, where the weight is the number of items in each class.",
"The majority baseline always predicts the most frequent class in each task, namely: Image does not add for the image task, Text is not represented for the text task and Image does not add & Text is not represented for the Image + Text task.",
"The models using user demographics and tweet metadata show minor improvements over the majority class baseline for both tasks.",
"When the two tasks are combined, both feature types offer only a slight increase over the baseline.",
"This shows that user factors mildly impact the frequency with which relationship types are used, which will be explored further in the analysis section.",
"The models that use tweet text as features show consistent improvements over the baseline for all three tasks.",
"The two models that use the tweet's topical content (Bag of Words and LSTM) obtain higher predictive performance over the surface features.",
"Both content based models obtain relatively similar performance, with the LSTM performing better on the image task.",
"The models which use information extracted from the image alone also consistently outperform the baseline on all three tasks.",
"Re-tuning the neural network performs substantially better than building a model directly from the ImageNet classes on the image task and narrowly outperforms the other method on the text task.",
"This is somewhat expected, as the retuning is performed on this domain specific task.",
"When comparing text and image based models across tasks, we observe that using image features obtains substantially better performance on the image task, while the text models obtain better performance on the text task.",
"This is somewhat natural, as the focus of each annotation task is on one modality and methods relying on content from that modality are more predictive alone as to what ultimately represents the text-image relationship type.",
"Our naive ensemble approach does not yield substantially better results than the best performing methods using a single modality.",
"However, by jointly modelling both modalities, we are able to obtain improvements -especially on the image task.",
"This shows that both types of information and their interaction are important to this task.",
"Methods that exploit more heavily the interaction and semantic similarity between the text and the image are left for future work.",
"We also observe that the predictive methods we described are better at classifying the image task.",
"The analysis section below will allow us to uncover more about what type of content characterizes each relationship type.",
"Analysis In this section, we aim to gain a better understanding of the type of content specific to the four text-image relationship types and of user preferences in their usage.",
"User Analysis Socio-demographic traits of the authors of posts are known to be correlated with several social media behaviors including text (Rao et al., 2010; Pennacchiotti and Popescu, 2011; Schwartz et al., 2013; Volkova et al., 2014; Lampos et al., 2014; Preoţiuc-Pietro et al., 2015a ,b, 2016 and images (Alowibdi et al., 2013; You et al., 2014; Farseev et al., 2015; Skowron et al., 2016; Liu et al., 2016; Guntuku et al., 2017; Samani et al., 2018; Guntuku et al., 2019) .",
"We hypothesize that socio-demographic traits also play a role in the types of text-image relationships employed on Twitter.",
"To measure this, we use partial Pearson correlation where the dependent variables are one of four socio-demographic traits described in Section 4.2.",
"The independent variables indicate the average times with which the user employed a certain relationship type.",
"We code this using six different variables: two representing the two broader tasks -the percentage of tweets where image adds information and the percentage of tweets where the text is represented in the image -and four encoding each combination between the two tasks.",
"In addition, for all analyses we consider gender and age as basic human traits and control for data skew by introducing both variables as controls in partial correlation, as done in prior work (Schwartz et al., 2013; Holgate et al., 2018) .",
"When studying age and gender, we only use the other trait as the control.",
"Because we are running several statistical tests at once (24) without predefined hypotheses, we use Bonferroni correction to counteract the problem of multiple comparisons.",
"The results are presented in Table 2 .",
"We observe that age is the only user demographic trait that is significantly correlated to text-image relationship preference after controlling for multiple comparisons and other demographic traits.",
"The text-image relationship where the text is represented in the image, at least partially, is positively correlated with age (r = 0.117).",
"Further analyzing the four individual text-image relationship types reveals that older users especially prefer tweets where there is a semantic overlap between the concepts present in the text and the image, but the image contributes with additional information to the meaning of the tweet.",
"This is arguably the most conventional usage of images, where they illustrate the text and provide more details than the text could.",
"Younger users prefer most tweets where the image adds information to the meaning of the tweet, but this has no semantic overlap with the text.",
"These are usually tweets where the text represents merely a comment or a feeling expressed with the image providing the context.",
"This represents a more image-centric approach to the meaning of the tweet that is specific to younger users.",
"These correlations are controlled for gender.",
"Education was also correlated with images where the text was represented in the image (r = 0.076, p < .01, Bonferroni corrected), but this correlation did not meet the significance criteria when controlled for age to which education is moderately correlated (r = 0.302).",
"This demonstrates the importance of controlling for such factors in this type of analysis.",
"No effects were found with respect to gender or income.",
"Table 2 : Pearson correlation between user demographic traits and usage of the different text-image relationship types.",
"All correlations in bold are significant at p < .01, two-tailed t-test, Bonferroni corrected for multiple comparisons.",
"Results for gender are controlled for age and vice versa.",
"Results for education and income are controlled for age and gender.",
"Tweet Metadata Analysis We adapt a similar approach to uncover potential relationships between the text-image relationship expressed in the tweet and tweet metadata features described in Section 5.2.",
"However, after controlling for multiple comparisons, we are left with no significant correlations at p < 0.01 level.",
"Hence, we refrain from presenting and discussing any results using this feature group as significant.",
"Text Analysis Finally, we aim to identify the text and image features that characterize the four types of text-image relationship.",
"We use univariate Pearson correlation where the independent variable is each feature's normalized value in a tweet and the dependent variables are two binary indicators for the text and image tasks respectively.",
"When performed using text features, this technique was coined Differential Language Analysis (Schwartz et al., 2013, 2017).",
"The results when using unigrams as features are presented in Figures 3, 4 and 5.",
"Results for the image task (Figure 3) show that the image adds to the meaning of the tweet if words such as this, it, why, needs or want are used.",
"These words can appear in texts with the role of referencing or pointing to an entity which is only present in the image.",
"Conversely, the image does not add to the meaning of the tweet when words indicative of objects that are also described in the image are present (cat, baby, eyes or face), thus resulting in the image not adding to the meaning of the tweet.",
"A special case are tweets with birthday wishes, where a person is mentioned in text and also displayed in an image.",
"Finally, the tbt keyword and hashtag is a popular social media trend where users post nostalgic pictures of their past accompanied by their textual description.",
"The comparison between the two outcomes of the text task is presented in Figure 4 .",
"When the text and image semantically overlap, we observe words indicative of actions (i've), possessions (your) or qualitative statements (congrats, loved, excited, tried), usually about objects or persons also present in the image.",
"We also observe a few nouns (cats, hillary) indicating frequent content that is also depicted in images (NB: the tweets were collected in 2016, when the U.S. presidential elections took place).",
"Analyzing this outcome jointly with the text task, we uncover a prominent theme consisting of words describing first person actions (congrats, thank, i've, saw, tell) present when the image provides facets not covered by text (Figure 5d ).",
"Several keywords from text (cat, game, winter) show types of content which are present in both image and text, but the image merely illustrates these concepts without adding additional information (Figure 5a).",
"In contrast, the text is not represented in the image when it contains words specific of comments (when, lmao), questions (do, was), references (this) or ellipsis ('...'), all often referencing the content of the image as identified through data inspection.",
"References to self, objects and personal states (i, me) and feelings (miss) are also expressed in text about items or things not appearing in the image from the same tweet.",
"Further exploring this result through the image task outcome, we see that the latter category of feelings about persons or objects (Figure 5a: miss, happy, lit, like) is specific to cases where the image does not add additional information.",
"Through manual inspection of these images, they often display a meme (as in Figure 1d ) or unrelated expressions to the text's content.",
"The image adds information when the text is not represented (Figure 5c ) if the latter includes personal feelings, (me, i, i'm, want), comments (lol, lmao) and references (this, it), usually related to the image content as identified through an analysis of the data.",
"Conclusions We defined and analyzed quantitatively and qualitatively the semantic relationships between the text and the image of the same tweet using a novel annotated data set.",
"The frequency of use is influenced by the age of the poster, with younger users employing images with a more prominent role in the tweet, rather than just being redundant to the text or as a means of illustrating it.",
"We studied the correlation between the content in the text and relation with the image, highlighting a differentiation between relationship types, even if only using the text of the tweet alone.",
"We developed models that use both text and image features to classify the text-image relationship, with especially high performance (F1 = 0.81) in identifying if the image is redundant, which is immediately useful for downstream applications that maximize screen estate for users.",
"Future work will look deeper into using the similarity between the content of the text and image (Leong and Mihalcea, 2011), as the text task results showed room for improvements.",
"We envision that our data, task and classifiers will be useful as a preprocessing step in collecting data for training large scale models for image captioning (Feng and Lapata, 2010) or tagging (Mahajan et al., 2018) or for improving recommendations (Chen et al., 2016) by filtering out tweets where the text and image have no semantic overlap or can enable new tasks such as identifying tweets that contain creative descriptions for images."
]
}
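The paper text above (Section 6) evaluates all models with weighted F1, the average of class-level F1 scores weighted by each class's support, against a majority-class baseline. As a minimal, self-contained sketch of that metric (assuming the standard definition, not the authors' exact evaluation code):

```python
from collections import Counter

def weighted_f1(y_true, y_pred):
    """Weighted F1: per-class F1 averaged with weights proportional to
    each class's support in y_true (classes absent from y_true are ignored)."""
    support = Counter(y_true)
    n = len(y_true)
    total = 0.0
    for c in support:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        total += f1 * support[c] / n
    return total

def majority_baseline(train_labels, n_test):
    """Majority baseline as in Table 1: predict the most frequent
    training label for every test item."""
    most_common = Counter(train_labels).most_common(1)[0][0]
    return [most_common] * n_test
```

This matches scikit-learn's `f1_score(..., average='weighted')` on labels that appear in the gold standard.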
|
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"4.2",
"4.3",
"5",
"5.1",
"5.2",
"5.3",
"5.4",
"5.5",
"6",
"7",
"7.1",
"7.2",
"7.3",
"8"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Categorizing Text-Image Relationships",
"Data Set",
"Data Sampling",
"Demographic Variables",
"Annotation",
"Methods",
"User Demographics",
"Tweet Metadata",
"Text-based Methods",
"Image-based Methods",
"Joint Text-Image Methods",
"Predicting Text-Image Relationship",
"Analysis",
"User Analysis",
"Tweet Metadata Analysis",
"Text Analysis",
"Conclusions"
]
}
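The user analysis above (Section 7.1) uses partial Pearson correlation with control variables and Bonferroni correction over 24 tests. A minimal sketch of one common implementation (residualize both variables on the controls via least squares, then correlate the residuals; this is an assumption about the procedure, not the authors' code):

```python
import numpy as np

def partial_pearson(x, y, controls):
    """Pearson r between x and y after regressing out the control
    variables (plus an intercept) from both via least squares."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    Z = np.column_stack([np.ones(len(x))] + [np.asarray(c, float) for c in controls])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return float(np.corrcoef(rx, ry)[0, 1])

def bonferroni_threshold(alpha=0.01, n_tests=24):
    """Bonferroni correction: a test is significant only if its raw
    p-value falls below alpha / n_tests."""
    return alpha / n_tests
```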
|
GEM-SciDuet-train-51#paper-1089#slide-11
|
Prediction Text based Methods
|
Image Task (Image adds to meaning) Text Task (Text is represented) Image + Text Task
2019 Bloomberg Finance L.P. All rights reserved.
|
[] |
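The ensemble in Section 5.5 of the paper text combines the two base models' predicted class probabilities in a logistic regression. A toy binary sketch trained with plain SGD (the feature values below are hypothetical; the paper's actual ensemble was tuned by cross-validation):

```python
import math

def train_ensemble_lr(p_text, p_image, labels, lr=0.5, epochs=2000):
    """Logistic-regression ensemble over two base-model probabilities,
    fit with stochastic gradient descent (binary case only)."""
    w = [0.0, 0.0, 0.0]  # bias, text-model weight, image-model weight
    for _ in range(epochs):
        for pt, pi, y in zip(p_text, p_image, labels):
            z = w[0] + w[1] * pt + w[2] * pi
            pred = 1.0 / (1.0 + math.exp(-z))  # sigmoid
            g = pred - y                        # gradient of cross entropy
            w[0] -= lr * g
            w[1] -= lr * g * pt
            w[2] -= lr * g * pi
    return w

def predict(w, pt, pi):
    """Predict 1 if the combined log-odds are positive, else 0."""
    return 1 if w[0] + w[1] * pt + w[2] * pi > 0 else 0
```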
GEM-SciDuet-train-51#paper-1089#slide-12
|
1089
|
Categorizing and Inferring the Relationship between the Text and Image of Twitter Posts
|
Text in social media posts is frequently accompanied by images in order to provide content, supply context, or to express feelings. This paper studies how the meaning of the entire tweet is composed through the relationship between its textual content and its image. We build and release a data set of image tweets annotated with four classes which express whether the text or the image provides additional information to the other modality. We show that by combining the text and image information, we can build a machine learning approach that accurately distinguishes between the relationship types. Further, we derive insights into how these relationships are materialized through text and image content analysis and how they are impacted by user demographic traits. These methods can be used in several downstream applications including pre-training image tagging models, collecting distantly supervised data for image captioning, and can be directly used in end-user applications to optimize screen estate.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200
],
"paper_content_text": [
"Introduction Social media sites have traditionally been centered around publishing textual content.",
"Recently, posting images on social media has become a very popular way of expressing content and feelings especially due to the wide availability of mobile devices and connectivity.",
"Images are currently present in a significant fraction of tweets and tweets with images get double the engagement of those without (Buffer, 2016) .",
"Thus, in addition to text, images have become key components of tweets.",
"However, little is known about how textual content is related to the images with which they appear.",
"For example, concepts or feelings mentioned in text could be illustrated or strengthened by images, text can point to the content of an image or can just provide commentary on the image content.",
"Formalizing and understanding the relationship between the two modalities -text and images -is useful in several areas: a) for NLP and computer vision research, where image and text data from tweets are used to developing data sets and methods for image captioning (Mitchell et al., 2012) or object recognition (Mahajan et al., 2018) ; b) for social scientists and psychologists trying to understand social media use; c) in browsers or apps where images that may not contain additional content in addition to the text would be replaced by a placeholder and displayed if the end-user desires to in order to op-timize screen space (see Figure 2 ).",
"Figure 1 illustrates four different ways in which the text and image of the same tweet can be related: • Figures 1(a,b) show how the image can add to the semantics of the tweet, by either providing more information than the text (Figure 1a ) or by providing the context for understanding the text (Figure 1b ); • In Figures 1(c,d) , the image only illustrates what is expressed through text, without providing any additional information.",
"Hence, in both of these cases, the text alone is sufficient to understanding the tweet's key message; • Figures 1(a,c) show examples of tweets where there is a semantic overlap between the content of the text and image: bike and sign in Figure 1a and tacos in Figure 1c ; • In Figures 1(b,d) , the textual content is not represented in the image, with the text being either a comment on the image's content (Figure 1b) or the image illustrating a feeling related to the text's content.",
"In this paper, we present a comprehensive analysis that focuses on the types of relationships between the text and image in a tweet.",
"Our contributions include: • Defining the types of relationships between the text and the image of a social media post; • Building a data set of tweets annotated with text -image relationship type; 1 • Machine learning methods that use both text and image content to predict the relationship between the two modalities; • An analysis into the author's demographic traits that are related to usage preference of textimage relationship types; • An analysis of the textual features which characterize each relationship type.",
"Related Work Task.",
"The relationship between a text and its associated image was researched in a few prior studies.",
"For general web pages, Marsh and Domas White (2003) propose a taxonomy of 49 relationship grouped in three major categories based on how similar is the image to the text ranging from little relation to going beyond the text, which forms the basis of one of our relationship dimen-sions.",
"Martinec and Salway (2005) aim to categorize text-image relationships in scientific articles from two perspectives: the relative importance of one modality compared to the other and the logico-semantic overlap.",
"Alikhani and Stone (2018) argue that understanding multimodal textimage presentation requires studying the coherence relations that organize the content.",
"Even when a single relationship is used, such as captioning, it can be expressed in multiple forms such as telic, atelic or stative .",
"Wang et al.",
"(2014) use the intuition that text and images from microposts can be associated or not or depend on one another and use this intuition in a topic model that learns topics and image tags jointly.",
"Jas and Parikh (2015) study the concept of image specificity through how similar to each other are multiple descriptions of that image.",
"However, none of these studies propose any predictive methods for text-image relationship types.",
"annotate and train models on a recipe data set (Yagcioglu et al., 2018) for the relationships between instructional text and images around the following dimensions: temporal, logical and incidental detail.",
"Chen et al.",
"(2013) study text-image relationships using social media data focusing on the distinction between images that are overall visually relevant or non-relevant to the textual content.",
"They build models using the text and image content that predict the relationship type (Chen et al., 2015) .",
"We build on this research and define an annotation scheme that focuses on each of the two modalities separately and look at both their semantic overlap and contribution to the meaning of the whole tweet.",
"Applications.",
"Several applications require to be able to automatically predict the semantic textimage relationship in the data.",
"Models for automatically generating image descriptions (Feng and Lapata, 2010; Ordonez et al., 2011; Mitchell et al., 2012; Vinyals et al., 2015; Lu et al., 2017) or predicting tags (Mahajan et al., 2018) are built using large training data sets of noisy imagetext pairs from sources such as tweets.",
"Multimodal named entity disambiguation leverages visual context vectors from social media images to aid named entity disambiguation (Moon et al., 2018) .",
"Multimodal topic labeling focuses on generating candidate labels (text or images) for a given topic and ranks them by relevance (Sorodoc et al., 2017) .",
"Several resources of images paired with descriptive captions are available, which can be used to build similarity metrics and joint semantic spaces for text and images (Young et al., 2014) .",
"However, all these assume that the text an image represent similar concepts which, as we show in this paper, is not true in Twitter.",
"Being able to classify this relationship can be useful for all above-mentioned applications.",
"Categorizing Text-Image Relationships We define the types of semantic relationships that can exist between the content of the text and the image by splitting them into two tasks for simplicity.",
"The first task is centered on the role of the text to the tweet's semantics, while the second focuses on the image's role.",
"The first task -referred to as the text task in the rest of the paper -focuses on identifying if there is semantic overlap between the context of the text and the image.",
"This task is the defined using the following guidelines: 1.",
"Some or all of the content words in the text are represented in the image (Text is represented) 2.",
"None of the content words in the text are represented in the image (Text is not represented): • None of the content words are represented in the image, or • The text is only a comment about the content of the image, or • The text expresses a feeling or emotion about the content of the image, or • The text only makes a reference to something shown in the image, or • The text is unrelated to the image Examples for this task can be seen in Figure 1 by comparing Figures 1(a ,c) (Text is represented) with Figures 1(b,d) (Text is not represented).",
"The second task -referred to as the image task in the rest of the paper -focuses on the role of the image to the semantics of the tweet and aims to identify if the image's content contributes with additional information to the meaning of the tweet beyond the text, as judged by an independent third party.",
"This task is defined and annotated using the following guidelines: 1.",
"Image has additional content that represents the meaning of the text and the image (Image adds): • Image contains other text that adds additional meaning to the text, or • Image depicts something that adds information to the text or • Image contains other entities that are referenced by the text.",
"2.",
"Image does not add additional content that represents the meaning of text+image (Image does not add).",
"Examples for the image task can be seen in Combining the labels of the two binary tasks described above gives rise to four types of text-image relationships (Image+Text Task).",
"All of the four relationship types are exemplified in Figure 1 .",
"Data Set To study the relationship between the text and image in the same social media post, we define a new annotation schema and collect a new annotated corpus.",
"To the best of our knowledge, no such corpus exists in prior research.",
"Data Sampling We use Twitter as the source of our data, as this source contains a high level of expression of thoughts, opinions and emotions (Java et al., 2007; Kouloumpis et al., 2011) .",
"It represents a platform for observing written interactions and conversations between users (Ritter et al., 2010) .",
"The tweets were deliberately randomly sampled tweets from a list of users for which several of their socio-demographic traits are known, introduced in past research .",
"This will enable us to explore if the frequency of posting tweets with a certain text-image relationship is different across socio-demographic groups.",
"We downloaded as many tweets as we could from these users using the Twitter API (up to 3,200 tweets/user per API limits).",
"We decided to annotate only tweets from within the same time range (2016) in order to reduce the influence of potential platform usage changes with time.",
"We filter out tweets that are not written in English using the langid.py tool (Lui and Baldwin, 2012) .",
"In total, 2,263 users (out of the initial 4,132) have posted tweets with at least one image in the year 2016 and were included in our analysis.",
"Our final data set contains 4,471 tweets.",
"Demographic Variables The Twitter users from the data set we sampled have self-reported the following demographic variables through a survey: gender, age, education level and annual income.",
"All users solicited for data collection were from the United States in order to limit cultural variation.",
"• Gender was considered binary 2 and coded with Female -1 and Male -0.",
"All other variables are treated as ordinal variables.",
"• Age is represented as a integer value in the 13-90 year old interval.",
"• Education level is coded as an ordinal variable with 6 values representing the highest degree obtained, with the lowest being 'No high school degree' (coded as 1) and the highest being 'Advanced Degree (e.g., PhD)' (coded as 6).",
"• Income level is coded as on ordinal variable with 8 values representing the annual income of the person, ranging from '< $20,000' to '> $200,000').",
"For a full description of the user recruitment and quality control processes, we refer the interested reader to .",
"Annotation We have collected annotations for text-image pairs from 4,471 tweets using the Figure Eight platform (formerly CrowdFlower).",
"We annotate all tweets containing both text and image using two independent annotation tasks in order to simplify the task and not to prime annotators use the outcome of one task as a indicator for the outcome of the other.",
"For quality control, 10% of annotations were test questions annotated by the authors.",
"Annotators had to maintain a minimum accuracy on test questions of 85% for the image task and 75% for the text task for their annotations to be valid.",
"Inter-annotator agreement is measured using Krippendorf's Alpha.",
"The overall Krippendorfs Alpha is 0.71 for the image task, which is in the upper part of the substantial agreement band (Artstein and Poesio, 2008) .",
"We collect 3 judgments and use majority vote to obtain the final label to further remove noise.",
"For the text task, we collected and aggregated 5 judgments as the Krippendorf's Alpha is 0.46, which is considered moderate agreement (Artstein and Poesio, 2008) .",
"The latter task was more difficult due to requiring specific world knowledge (e.g.",
"a singer mentioned in a text also present in an image) or contained information encoded in hashtags or usernames which the annotators sometimes overlooked.",
"The aggregated judgments for each task were combined to obtain the four class labels.",
"Methods Our goal is to develop methods that are capable of automatically classifying the text-image relationship in tweets.",
"We experiment with several methods which use information of four different types: demographics of the user posting the tweet, metadata from the tweet, the text of the tweet or the image of the tweet; plus a combination of them.",
"The methods we use are described in this section.",
"User Demographics User demographic features are the survey-based demographic information we have available for all users that posted the annotated tweets.",
"The use of these traits is based on the intuition that different demographic groups have different posting preferences (Pennacchiotti and Popescu, 2011; .",
"We use this approach for comparison reasons only, as in practical use cases we would normally not have access to the author's demographic traits.",
"We code the gender, age, education level and income level of the user as features and use them in a logistic regression classifier to classify the textimage relationship.",
"Tweet Metadata We experiment with using the tweet metadata as features.",
"These code if a tweet is a reply, tweet, like or neither.",
"We also add as features the tweet like count, the number of followers, friends and posts of the post's author and include them all in a logistic regression classifier.",
"These features are all available at tweet publishing time and we build a model using them to establish a more solid baseline for content based approaches.",
"Text-based Methods We use the textual content of the tweet alone to build models for predicting the text-image relationship.",
"We expect that certain textual cues will be specific to relationships even without considering the image content.",
"For example, tweets ending in an ellipsis or short comments will likely be predictive of the text not being represented in the image.",
"Surface Features.",
"We first use a range of surface features which capture more of the shallow stylistic content of the tweet.",
"We extract number of tokens, uppercase tokens, exclamations, questions, ellipsis, hashtags, @ mentions, quotes and URLs from the tweet and use them as features in a logistic regression classifier.",
"Bag of Words.",
"The most common approach for building a text-based model is using bag-ofwords features.",
"Here, we extract unigram and bigram features and use them in a logistic regression classifier with elastic net regularization (Zou and Hastie, 2005) .",
"LSTM.",
"Finally, based on recent results in text classification, we also experiment with a neural network approach which uses a Long-Short Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) network.",
"The LSTM network processes the tweet sequentially, where each word is represented by its embedding (E = 200) followed by a dense hidden layer (D = 64) and by a a ReLU activation function and dropout (0.4) The model is trained by minimizing cross entropy using the Adam optimizer (Kingma and Ba, 2014).",
"The network uses in-domain Twitter GloVe embeddings pre-trained on 2 billion tweets (Pennington et al., 2014) .",
"Image-based Methods We use the content of the tweet image alone to build models for predicting the text-image relationship.",
"Similar to text, we expect that certain image content will be predictive of text-image relationships even without considering the text content.",
"For example, images of people may be more likely to have in the text the names of those persons.",
"To analyze image content, we make use of large pre-trained neural networks for the task of object recognition on the ImageNet data set.",
"ImageNet (Deng et al., 2009 ) is a visual database developed for object recognition research and consists of 1000 object types.",
"In particular, we use the popular pre-trained InceptionNet model (Szegedy et al., 2015) , which achieved the best performance at the ImageNet Large Scale Visual Recognition Challenge 2014 to build the following two imagebased models.",
"ImageNet Classes.",
"First, we represent each image in a tweet with the probability distribution over the 1,000 ImageNet classes obtained from Inception-Net.",
"Then, we pass those features to a logistic regression classifier which is trained on our task.",
"In this setup, the network parameters remain fixed, while only the ImageNet class weights are learned in the logistic regression classifier.",
"Tuned InceptionNet.",
"Additionally, we tailored the InceptionNet network to directly predict our tasks by using the multinomial logistic loss with softmax as the final layer for our task to replace the 1,000 ImageNet classes.",
"Then, we loaded the pretrained network from (Szegedy et al., 2015) and fine-tuned the final fully-connected layer with the modified loss layers.",
"We perform this in order to directly predict our task, while also overcoming the necessity of re-extracting the entire model weights from our restricted set of images.",
"The two approaches to classification using image content based on pre-trained model on Im-ageNet have been used successfully in past research (Cinar et al., 2015) .",
"Joint Text-Image Methods Finally, we combine the textual and image information in a single model to classify the text-image relationship type, as we expect both types of content and their interaction to be useful to the task.",
"Ensemble.",
"A simple method for combining the information from both modalities is to build an ensemble classifier.",
"This is done with a logistic regression model with two features: the Bag of Words text model's predicted class probability and the Tuned InceptionNet model's predicted class probability.",
"The parameters of the model are tuned by cross validation on the training data and similar splits as the individual models.",
"LSTM + InceptionNet.",
"We also build a joint approach by concatenating the features from the final layers of our LSTM and InceptionNet models and passing them through a fully-connected (FC) feed forward neural network with one hidden layer (64 nodes).",
"The final output is our text-image relationship type.",
"We use the Adam optimizer to fine tune this network.",
"The LSTM model has the same parameters as in the text-only approach, while the InceptionNet model is initialized with the pre-trained model on the ImageNet data set.",
"Predicting Text-Image Relationship We split our data into a 80% train (3,576 tweets) and 20% test (895 tweets) stratified sample for all of our experiments.",
"Parameters were tuned using 10-fold cross-validation with the training set, and results are reported on the test set.",
"Table 1 presents the weighted F1-scores for the text task, the image task and the image+text task with all the methods described in Section 5.",
"The weighted F1 score is the weighted average of the class-level F1 scores, where the weight is the number of items in each class.",
"The majority baseline always predicts the most frequent class in each task, namely: Image does not add for the image task, Text is not represented for the text task and Image does not add & Text is not represented for the Image + Text task.",
"The models using user demographics and tweet metadata show minor improvements over the majority class baseline for both tasks.",
"When the two tasks are combined, both feature types offer only a slight increase over the baseline.",
"This shows that user factors mildly impact the frequency with which relationship types are used, which will be explored further in the analysis section.",
"The models that use tweet text as features show consistent improvements over the baseline for all three tasks.",
"The two models that use the tweet's topical content (Bag of Words and LSTM) obtain higher predictive performance over the surface features.",
"Both content based models obtain relatively similar performance, with the LSTM performing better on the image task.",
"The models which use information extracted from the image alone also consistently outperform the baseline on all three tasks.",
"Re-tuning the neural network performs substantially better than building a model directly from the ImageNet classes on the image task and narrowly outperforms the other method on the text task.",
"This is somewhat expected, as the retuning is performed on this domain specific task.",
"When comparing text and image based models across tasks, we observe that using image features obtains substantially better performance on the image task, while the text models obtain bet-ter performance on the text task.",
"This is somewhat natural, as the focus of each annotation task is on one modality and methods relying on content from that modality are more predictive alone as to what ultimately represents the text-image relationship type.",
"Our naive ensemble approach does not yield substantially better results than the best performing methods using a single modality.",
"However, by jointly modelling both modalities, we are able to obtain improvements -especially on the image task.",
"This shows that both types of information and their interaction are important to this task.",
"Methods that exploit more heavily the interaction and semantic similarity between the text and the image are left for future work.",
"We also observe that the predictive methods we described are better at classifying the image task.",
"The analysis section below will allow us to uncover more about what type of content characterizes each relationship type.",
"Analysis In this section, we aim to gain a better understanding of the type of content specific of the four textimage relationship types and about user type preferences in their usage.",
"User Analysis Socio-demographic traits of the authors of posts are known to be correlated with several social media behaviors including text (Rao et al., 2010; Pennacchiotti and Popescu, 2011; Schwartz et al., 2013; Volkova et al., 2014; Lampos et al., 2014; Preoţiuc-Pietro et al., 2015a ,b, 2016 and images (Alowibdi et al., 2013; You et al., 2014; Farseev et al., 2015; Skowron et al., 2016; Liu et al., 2016; Guntuku et al., 2017; Samani et al., 2018; Guntuku et al., 2019) .",
"We hypothesize that socio-demographic traits also play a role in the types of text-image relationships employed on Twitter.",
"To measure this, we use partial Pearson correlation where the dependent variables are one of four socio-demographic traits described in Section 4.2.",
"The independent variables indicate the average times with which the user employed a certain relationship type.",
"We code this using six different variables: two representing the two broader tasks -the percentage of tweets where image adds information and the percentage of tweets where the text is represented in the image -and four encoding each combination between the two tasks.",
"In addition, for all analyses we consider gender and age as basic human traits and control for data skew by introducing both variables as controls in partial correlation, as done in prior work (Schwartz et al., 2013; Holgate et al., 2018) .",
"When studying age and gender, we only use the other trait as the control.",
"Because we are running several statistical tests at once (24) without predefined hypotheses, we use Bonferroni correction to counteract the problem of multiple comparisons.",
"The results are presented in Table 2 .",
"We observe that age is the only user demographic trait that is significantly correlated to text-image relationship preference after controlling for multiple comparisons and other demographic traits.",
"The text-image relationship where the text is represented in the image, at least partially, is positively correlated with age (r = 0.117).",
"Further analyzing the four individual text-image relationship types reveals that older users especially prefer tweets where there is a semantic overlap between the concepts present in the text and the image, but the image contributes with additional information to the meaning of the tweet.",
"This is arguably the most conventional usage of images, where they illustrate the text and provide more details than the text could.",
"Younger users prefer most tweets where the image adds information to the meaning of the tweet, but this has no semantic overlap with the text.",
"These are usually tweets where the text represents merely a comment or a feeling expressed with the image providing the context.",
"This represents a more image-centric approach to the meaning of the tweet that is specific to younger users.",
"These correlations are controlled for gender.",
"Education was also correlated with images where the text was represented in the image (r = 0.076, p < .01, Bonferroni corrected), but this correlation did not meet the significance criteria when controlled for age to which education is moderately correlated (r = 0.302).",
"This demonstrates the importance of controlling for such factors in this type of analysis.",
"No effects were found with respect to gender or income.",
"Table 2 : Pearson correlation between user demographic traits and usage of the different text-image relationship types.",
"All correlations in bold are significant at p < .01, two-tailed t-test, Bonferroni corrected for multiple comparisons.",
"Results for gender are controlled for age and vice versa.",
"Results for education and income are controlled for age and gender.",
"Tweet Metadata Analysis We adapt a similar approach to uncover potential relationships between the text-image relationship expressed in the tweet and tweet metadata features described in Section 5.2.",
"However, after controlling for multiple comparisons, we are left with no significant correlations at p < 0.01 level.",
"Hence, we refrain from presenting and discussing any results using this feature group as significant.",
"Text Analysis Finally, we aim to identify the text and image features that characterize the four types of text-image relationship.",
"We use univariate Pearson correlation where the independent variable is each feature's normalized value in a tweet and the dependent variables are two binary indicators for the text and image tasks respectively.",
"When performed using text features, this technique was coined Differential Language Analysis (Schwartz et al., 2013 (Schwartz et al., , 2017 .",
"The results when using unigrams as features are presented in Figure 3 , 4 and 5.",
"Results for the image task (Figure 3) show that the image adds to the meaning of the tweet if words such as this, it, why, needs or want are used.",
"These words can appear in texts with the role of referencing or pointing to an entity which is only present in the image.",
"Conversely, the image does not add to the meaning of the tweet when words indicative of objects that are also described in the image are present (cat, baby, eyes or face), thus resulting in the image not adding to the meaning of the tweet.",
"A special case are tweets with birthday wishes, where a person is mentioned in text and also displayed in an image.",
"Finally, the tbt keyword and hashtag is a popular social media trend where users post nostalgic pictures of their past accompanied by their textual description.",
"The comparison between the two outcomes of the text task is presented in Figure 4 .",
"When the text and image semantically overlap, we observe words indicative of actions (i've), possessions (your) or qualitative statements (congrats, loved, excited, tried), usually about objects or persons also present in the image.",
"We also observe a few nouns (cats, hillary) indicating frequent content that is also depicted in images (NB: the tweets were collected in 2016 when the U.S. presiden-tial elections took place).",
"Analyzing this outcome jointly with the text task, we uncover a prominent theme consisting of words describing first person actions (congrats, thank, i've, saw, tell) present when the image provides facets not covered by text (Figure 5d ).",
"Several keywords from text (cat, game, winter) show types of content which are present in both image and text, but the image is merely an illustrating these concepts without adding additional information (Figure 5a) .",
"In contrast, the text is not represented in the image when it contains words specific of comments (when, lmao), questions (do, was), references (this) or ellipsis ('...'), all often referencing the content of the image as identified through data inspection.",
"References to self, objects and personal states (i, me) and feelings (miss) are also expressed in text about items or things not appearing the image from the same tweet.",
"Further exploring this result though the image task outcome, we see that the latter category of feelings about persons of objects ( Figure 5a ) -miss, happy, lit, like) are specific of when the image does not add additional information.",
"Through manual inspection of these images, they often display a meme (as in Figure 1d ) or unrelated expressions to the text's content.",
"The image adds information when the text is not represented (Figure 5c ) if the latter includes personal feelings, (me, i, i'm, want), comments (lol, lmao) and references (this, it), usually related to the image content as identified through an analysis of the data.",
"Conclusions We defined and analyzed quantitatively and qualitatively the semantic relationships between the text and the image of the same tweet using a novel annotated data set.",
"The frequency of use is influenced by the age of the poster, with younger users employing images with a more prominent role in the tweet, rather than just being redundant to the text or as a means of illustrating it.",
"We studied the correlation between the content in the text and relation with the image, highlighting a differentiation between relationship types, even if only using the text of the tweet alone.",
"We developed models that use both text and image features to classify the text-image relationship, with especially high performance (F1 = 0.81) in identifying if the image is redundant, which is immediately useful for downstream applications that maximize screen es-tate for users.",
"Future work will look deeper into using the similarity between the content of the text and image (Leong and Mihalcea, 2011), as the text task results showed room for improvements.",
"We envision that our data, task and classifiers will be useful as a preprocessing step in collecting data for training large scale models for image captioning (Feng and Lapata, 2010) or tagging (Mahajan et al., 2018) or for improving recommendations (Chen et al., 2016) by filtering out tweets where the text and image have no semantic overlap or can enable new tasks such as identifying tweets that contain creative descriptions for images."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"4.2",
"4.3",
"5",
"5.1",
"5.2",
"5.3",
"5.4",
"5.5",
"6",
"7",
"7.1",
"7.2",
"7.3",
"8"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Categorizing Text-Image Relationships",
"Data Set",
"Data Sampling",
"Demographic Variables",
"Annotation",
"Methods",
"User Demographics",
"Tweet Metadata",
"Text-based Methods",
"Image-based Methods",
"Joint Text-Image Methods",
"Predicting Text-Image Relationship",
"Analysis",
"User Analysis",
"Tweet Metadata Analysis",
"Text Analysis",
"Conclusions"
]
}
|
GEM-SciDuet-train-51#paper-1089#slide-12
|
Prediction Image based Methods
|
Image Task (Image adds to meaning) Text Task (Text is represented) Image + Text Task
2019 Bloomberg Finance L.P. All rights reserved.
|
Image Task (Image adds to meaning) Text Task (Text is represented) Image + Text Task
2019 Bloomberg Finance L.P. All rights reserved.
|
[] |
GEM-SciDuet-train-51#paper-1089#slide-13
|
1089
|
Categorizing and Inferring the Relationship between the Text and Image of Twitter Posts
|
Text in social media posts is frequently accompanied by images in order to provide content, supply context, or to express feelings. This paper studies how the meaning of the entire tweet is composed through the relationship between its textual content and its image. We build and release a data set of image tweets annotated with four classes which express whether the text or the image provides additional information to the other modality. We show that by combining the text and image information, we can build a machine learning approach that accurately distinguishes between the relationship types. Further, we derive insights into how these relationships are materialized through text and image content analysis and how they are impacted by user demographic traits. These methods can be used in several downstream applications including pre-training image tagging models, collecting distantly supervised data for image captioning, and can be directly used in end-user applications to optimize screen estate.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200
],
"paper_content_text": [
"Introduction Social media sites have traditionally been centered around publishing textual content.",
"Recently, posting images on social media has become a very popular way of expressing content and feelings especially due to the wide availability of mobile devices and connectivity.",
"Images are currently present in a significant fraction of tweets and tweets with images get double the engagement of those without (Buffer, 2016) .",
"Thus, in addition to text, images have become key components of tweets.",
"However, little is known about how textual content is related to the images with which they appear.",
"For example, concepts or feelings mentioned in text could be illustrated or strengthened by images, text can point to the content of an image or can just provide commentary on the image content.",
"Formalizing and understanding the relationship between the two modalities -text and images -is useful in several areas: a) for NLP and computer vision research, where image and text data from tweets are used to developing data sets and methods for image captioning (Mitchell et al., 2012) or object recognition (Mahajan et al., 2018) ; b) for social scientists and psychologists trying to understand social media use; c) in browsers or apps where images that may not contain additional content in addition to the text would be replaced by a placeholder and displayed if the end-user desires to in order to op-timize screen space (see Figure 2 ).",
"Figure 1 illustrates four different ways in which the text and image of the same tweet can be related: • Figures 1(a,b) show how the image can add to the semantics of the tweet, by either providing more information than the text (Figure 1a ) or by providing the context for understanding the text (Figure 1b ); • In Figures 1(c,d) , the image only illustrates what is expressed through text, without providing any additional information.",
"Hence, in both of these cases, the text alone is sufficient to understanding the tweet's key message; • Figures 1(a,c) show examples of tweets where there is a semantic overlap between the content of the text and image: bike and sign in Figure 1a and tacos in Figure 1c ; • In Figures 1(b,d) , the textual content is not represented in the image, with the text being either a comment on the image's content (Figure 1b) or the image illustrating a feeling related to the text's content.",
"In this paper, we present a comprehensive analysis that focuses on the types of relationships between the text and image in a tweet.",
"Our contributions include: • Defining the types of relationships between the text and the image of a social media post; • Building a data set of tweets annotated with text -image relationship type; 1 • Machine learning methods that use both text and image content to predict the relationship between the two modalities; • An analysis into the author's demographic traits that are related to usage preference of textimage relationship types; • An analysis of the textual features which characterize each relationship type.",
"Related Work Task.",
"The relationship between a text and its associated image was researched in a few prior studies.",
"For general web pages, Marsh and Domas White (2003) propose a taxonomy of 49 relationship grouped in three major categories based on how similar is the image to the text ranging from little relation to going beyond the text, which forms the basis of one of our relationship dimen-sions.",
"Martinec and Salway (2005) aim to categorize text-image relationships in scientific articles from two perspectives: the relative importance of one modality compared to the other and the logico-semantic overlap.",
"Alikhani and Stone (2018) argue that understanding multimodal textimage presentation requires studying the coherence relations that organize the content.",
"Even when a single relationship is used, such as captioning, it can be expressed in multiple forms such as telic, atelic or stative .",
"Wang et al.",
"(2014) use the intuition that text and images from microposts can be associated or not or depend on one another and use this intuition in a topic model that learns topics and image tags jointly.",
"Jas and Parikh (2015) study the concept of image specificity through how similar to each other are multiple descriptions of that image.",
"However, none of these studies propose any predictive methods for text-image relationship types.",
"annotate and train models on a recipe data set (Yagcioglu et al., 2018) for the relationships between instructional text and images around the following dimensions: temporal, logical and incidental detail.",
"Chen et al.",
"(2013) study text-image relationships using social media data focusing on the distinction between images that are overall visually relevant or non-relevant to the textual content.",
"They build models using the text and image content that predict the relationship type (Chen et al., 2015) .",
"We build on this research and define an annotation scheme that focuses on each of the two modalities separately and look at both their semantic overlap and contribution to the meaning of the whole tweet.",
"Applications.",
"Several applications require to be able to automatically predict the semantic textimage relationship in the data.",
"Models for automatically generating image descriptions (Feng and Lapata, 2010; Ordonez et al., 2011; Mitchell et al., 2012; Vinyals et al., 2015; Lu et al., 2017) or predicting tags (Mahajan et al., 2018) are built using large training data sets of noisy imagetext pairs from sources such as tweets.",
"Multimodal named entity disambiguation leverages visual context vectors from social media images to aid named entity disambiguation (Moon et al., 2018) .",
"Multimodal topic labeling focuses on generating candidate labels (text or images) for a given topic and ranks them by relevance (Sorodoc et al., 2017) .",
"Several resources of images paired with descriptive captions are available, which can be used to build similarity metrics and joint semantic spaces for text and images (Young et al., 2014) .",
"However, all these assume that the text an image represent similar concepts which, as we show in this paper, is not true in Twitter.",
"Being able to classify this relationship can be useful for all above-mentioned applications.",
"Categorizing Text-Image Relationships We define the types of semantic relationships that can exist between the content of the text and the image by splitting them into two tasks for simplicity.",
"The first task is centered on the role of the text to the tweet's semantics, while the second focuses on the image's role.",
"The first task -referred to as the text task in the rest of the paper -focuses on identifying if there is semantic overlap between the context of the text and the image.",
"This task is the defined using the following guidelines: 1.",
"Some or all of the content words in the text are represented in the image (Text is represented) 2.",
"None of the content words in the text are represented in the image (Text is not represented): • None of the content words are represented in the image, or • The text is only a comment about the content of the image, or • The text expresses a feeling or emotion about the content of the image, or • The text only makes a reference to something shown in the image, or • The text is unrelated to the image Examples for this task can be seen in Figure 1 by comparing Figures 1(a ,c) (Text is represented) with Figures 1(b,d) (Text is not represented).",
"The second task -referred to as the image task in the rest of the paper -focuses on the role of the image to the semantics of the tweet and aims to identify if the image's content contributes with additional information to the meaning of the tweet beyond the text, as judged by an independent third party.",
"This task is defined and annotated using the following guidelines: 1.",
"Image has additional content that represents the meaning of the text and the image (Image adds): • Image contains other text that adds additional meaning to the text, or • Image depicts something that adds information to the text or • Image contains other entities that are referenced by the text.",
"2.",
"Image does not add additional content that represents the meaning of text+image (Image does not add).",
"Examples for the image task can be seen in Combining the labels of the two binary tasks described above gives rise to four types of text-image relationships (Image+Text Task).",
"All of the four relationship types are exemplified in Figure 1 .",
"Data Set To study the relationship between the text and image in the same social media post, we define a new annotation schema and collect a new annotated corpus.",
"To the best of our knowledge, no such corpus exists in prior research.",
"Data Sampling We use Twitter as the source of our data, as this source contains a high level of expression of thoughts, opinions and emotions (Java et al., 2007; Kouloumpis et al., 2011) .",
"It represents a platform for observing written interactions and conversations between users (Ritter et al., 2010) .",
"The tweets were deliberately randomly sampled tweets from a list of users for which several of their socio-demographic traits are known, introduced in past research .",
"This will enable us to explore if the frequency of posting tweets with a certain text-image relationship is different across socio-demographic groups.",
"We downloaded as many tweets as we could from these users using the Twitter API (up to 3,200 tweets/user per API limits).",
"We decided to annotate only tweets from within the same time range (2016) in order to reduce the influence of potential platform usage changes with time.",
"We filter out tweets that are not written in English using the langid.py tool (Lui and Baldwin, 2012) .",
"In total, 2,263 users (out of the initial 4,132) have posted tweets with at least one image in the year 2016 and were included in our analysis.",
"Our final data set contains 4,471 tweets.",
"Demographic Variables The Twitter users from the data set we sampled have self-reported the following demographic variables through a survey: gender, age, education level and annual income.",
"All users solicited for data collection were from the United States in order to limit cultural variation.",
"• Gender was considered binary 2 and coded with Female -1 and Male -0.",
"All other variables are treated as ordinal variables.",
"• Age is represented as a integer value in the 13-90 year old interval.",
"• Education level is coded as an ordinal variable with 6 values representing the highest degree obtained, with the lowest being 'No high school degree' (coded as 1) and the highest being 'Advanced Degree (e.g., PhD)' (coded as 6).",
"• Income level is coded as on ordinal variable with 8 values representing the annual income of the person, ranging from '< $20,000' to '> $200,000').",
"For a full description of the user recruitment and quality control processes, we refer the interested reader to .",
"Annotation We have collected annotations for text-image pairs from 4,471 tweets using the Figure Eight platform (formerly CrowdFlower).",
"We annotate all tweets containing both text and image using two independent annotation tasks in order to simplify the task and not to prime annotators use the outcome of one task as a indicator for the outcome of the other.",
"For quality control, 10% of annotations were test questions annotated by the authors.",
"Annotators had to maintain a minimum accuracy on test questions of 85% for the image task and 75% for the text task for their annotations to be valid.",
"Inter-annotator agreement is measured using Krippendorf's Alpha.",
"The overall Krippendorfs Alpha is 0.71 for the image task, which is in the upper part of the substantial agreement band (Artstein and Poesio, 2008) .",
"We collect 3 judgments and use majority vote to obtain the final label to further remove noise.",
"For the text task, we collected and aggregated 5 judgments as the Krippendorf's Alpha is 0.46, which is considered moderate agreement (Artstein and Poesio, 2008) .",
"The latter task was more difficult due to requiring specific world knowledge (e.g.",
"a singer mentioned in a text also present in an image) or contained information encoded in hashtags or usernames which the annotators sometimes overlooked.",
"The aggregated judgments for each task were combined to obtain the four class labels.",
"Methods Our goal is to develop methods that are capable of automatically classifying the text-image relationship in tweets.",
"We experiment with several methods which use information of four different types: demographics of the user posting the tweet, metadata from the tweet, the text of the tweet or the image of the tweet; plus a combination of them.",
"The methods we use are described in this section.",
"User Demographics User demographic features are the survey-based demographic information we have available for all users that posted the annotated tweets.",
"The use of these traits is based on the intuition that different demographic groups have different posting preferences (Pennacchiotti and Popescu, 2011; .",
"We use this approach for comparison reasons only, as in practical use cases we would normally not have access to the author's demographic traits.",
"We code the gender, age, education level and income level of the user as features and use them in a logistic regression classifier to classify the textimage relationship.",
"Tweet Metadata We experiment with using the tweet metadata as features.",
"These code if a tweet is a reply, tweet, like or neither.",
"We also add as features the tweet like count, the number of followers, friends and posts of the post's author and include them all in a logistic regression classifier.",
"These features are all available at tweet publishing time and we build a model using them to establish a more solid baseline for content based approaches.",
"Text-based Methods We use the textual content of the tweet alone to build models for predicting the text-image relationship.",
"We expect that certain textual cues will be specific to relationships even without considering the image content.",
"For example, tweets ending in an ellipsis or short comments will likely be predictive of the text not being represented in the image.",
"Surface Features.",
"We first use a range of surface features which capture more of the shallow stylistic content of the tweet.",
"We extract number of tokens, uppercase tokens, exclamations, questions, ellipsis, hashtags, @ mentions, quotes and URLs from the tweet and use them as features in a logistic regression classifier.",
"Bag of Words.",
"The most common approach for building a text-based model is using bag-ofwords features.",
"Here, we extract unigram and bigram features and use them in a logistic regression classifier with elastic net regularization (Zou and Hastie, 2005) .",
"LSTM.",
"Finally, based on recent results in text classification, we also experiment with a neural network approach which uses a Long-Short Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) network.",
"The LSTM network processes the tweet sequentially, where each word is represented by its embedding (E = 200) followed by a dense hidden layer (D = 64) and by a a ReLU activation function and dropout (0.4) The model is trained by minimizing cross entropy using the Adam optimizer (Kingma and Ba, 2014).",
"The network uses in-domain Twitter GloVe embeddings pre-trained on 2 billion tweets (Pennington et al., 2014) .",
"Image-based Methods We use the content of the tweet image alone to build models for predicting the text-image relationship.",
"Similar to text, we expect that certain image content will be predictive of text-image relationships even without considering the text content.",
"For example, images of people may be more likely to have in the text the names of those persons.",
"To analyze image content, we make use of large pre-trained neural networks for the task of object recognition on the ImageNet data set.",
"ImageNet (Deng et al., 2009 ) is a visual database developed for object recognition research and consists of 1000 object types.",
"In particular, we use the popular pre-trained InceptionNet model (Szegedy et al., 2015) , which achieved the best performance at the ImageNet Large Scale Visual Recognition Challenge 2014 to build the following two imagebased models.",
"ImageNet Classes.",
"First, we represent each image in a tweet with the probability distribution over the 1,000 ImageNet classes obtained from Inception-Net.",
"Then, we pass those features to a logistic regression classifier which is trained on our task.",
"In this setup, the network parameters remain fixed, while only the ImageNet class weights are learned in the logistic regression classifier.",
"Tuned InceptionNet.",
"Additionally, we tailored the InceptionNet network to directly predict our tasks by using the multinomial logistic loss with softmax as the final layer for our task to replace the 1,000 ImageNet classes.",
"Then, we loaded the pretrained network from (Szegedy et al., 2015) and fine-tuned the final fully-connected layer with the modified loss layers.",
"We perform this in order to directly predict our task, while also overcoming the necessity of re-extracting the entire model weights from our restricted set of images.",
"The two approaches to classification using image content based on pre-trained model on Im-ageNet have been used successfully in past research (Cinar et al., 2015) .",
"Joint Text-Image Methods Finally, we combine the textual and image information in a single model to classify the text-image relationship type, as we expect both types of content and their interaction to be useful to the task.",
"Ensemble.",
"A simple method for combining the information from both modalities is to build an ensemble classifier.",
"This is done with a logistic regression model with two features: the Bag of Words text model's predicted class probability and the Tuned InceptionNet model's predicted class probability.",
"The parameters of the model are tuned by cross validation on the training data and similar splits as the individual models.",
"LSTM + InceptionNet.",
"We also build a joint approach by concatenating the features from the final layers of our LSTM and InceptionNet models and passing them through a fully-connected (FC) feed forward neural network with one hidden layer (64 nodes).",
"The final output is our text-image relationship type.",
"We use the Adam optimizer to fine tune this network.",
"The LSTM model has the same parameters as in the text-only approach, while the InceptionNet model is initialized with the pre-trained model on the ImageNet data set.",
"Predicting Text-Image Relationship We split our data into a 80% train (3,576 tweets) and 20% test (895 tweets) stratified sample for all of our experiments.",
"Parameters were tuned using 10-fold cross-validation with the training set, and results are reported on the test set.",
"Table 1 presents the weighted F1-scores for the text task, the image task and the image+text task with all the methods described in Section 5.",
"The weighted F1 score is the weighted average of the class-level F1 scores, where the weight is the number of items in each class.",
"The majority baseline always predicts the most frequent class in each task, namely: Image does not add for the image task, Text is not represented for the text task and Image does not add & Text is not represented for the Image + Text task.",
"The models using user demographics and tweet metadata show minor improvements over the majority class baseline for both tasks.",
"When the two tasks are combined, both feature types offer only a slight increase over the baseline.",
"This shows that user factors mildly impact the frequency with which relationship types are used, which will be explored further in the analysis section.",
"The models that use tweet text as features show consistent improvements over the baseline for all three tasks.",
"The two models that use the tweet's topical content (Bag of Words and LSTM) obtain higher predictive performance over the surface features.",
"Both content based models obtain relatively similar performance, with the LSTM performing better on the image task.",
"The models which use information extracted from the image alone also consistently outperform the baseline on all three tasks.",
"Re-tuning the neural network performs substantially better than building a model directly from the ImageNet classes on the image task and narrowly outperforms the other method on the text task.",
"This is somewhat expected, as the retuning is performed on this domain specific task.",
"When comparing text and image based models across tasks, we observe that using image features obtains substantially better performance on the image task, while the text models obtain bet-ter performance on the text task.",
"This is somewhat natural, as the focus of each annotation task is on one modality and methods relying on content from that modality are more predictive alone as to what ultimately represents the text-image relationship type.",
"Our naive ensemble approach does not yield substantially better results than the best performing methods using a single modality.",
"However, by jointly modelling both modalities, we are able to obtain improvements -especially on the image task.",
"This shows that both types of information and their interaction are important to this task.",
"Methods that exploit more heavily the interaction and semantic similarity between the text and the image are left for future work.",
"We also observe that the predictive methods we described are better at classifying the image task.",
"The analysis section below will allow us to uncover more about what type of content characterizes each relationship type.",
"Analysis In this section, we aim to gain a better understanding of the type of content specific of the four textimage relationship types and about user type preferences in their usage.",
"User Analysis Socio-demographic traits of the authors of posts are known to be correlated with several social media behaviors including text (Rao et al., 2010; Pennacchiotti and Popescu, 2011; Schwartz et al., 2013; Volkova et al., 2014; Lampos et al., 2014; Preoţiuc-Pietro et al., 2015a ,b, 2016 and images (Alowibdi et al., 2013; You et al., 2014; Farseev et al., 2015; Skowron et al., 2016; Liu et al., 2016; Guntuku et al., 2017; Samani et al., 2018; Guntuku et al., 2019) .",
"We hypothesize that socio-demographic traits also play a role in the types of text-image relationships employed on Twitter.",
"To measure this, we use partial Pearson correlation where the dependent variables are one of four socio-demographic traits described in Section 4.2.",
"The independent variables indicate the average times with which the user employed a certain relationship type.",
"We code this using six different variables: two representing the two broader tasks -the percentage of tweets where image adds information and the percentage of tweets where the text is represented in the image -and four encoding each combination between the two tasks.",
"In addition, for all analyses we consider gender and age as basic human traits and control for data skew by introducing both variables as controls in partial correlation, as done in prior work (Schwartz et al., 2013; Holgate et al., 2018) .",
"When studying age and gender, we only use the other trait as the control.",
"Because we are running several statistical tests at once (24) without predefined hypotheses, we use Bonferroni correction to counteract the problem of multiple comparisons.",
"The results are presented in Table 2 .",
"We observe that age is the only user demographic trait that is significantly correlated to text-image relationship preference after controlling for multiple comparisons and other demographic traits.",
"The text-image relationship where the text is represented in the image, at least partially, is positively correlated with age (r = 0.117).",
"Further analyzing the four individual text-image relationship types reveals that older users especially prefer tweets where there is a semantic overlap between the concepts present in the text and the image, but the image contributes with additional information to the meaning of the tweet.",
"This is arguably the most conventional usage of images, where they illustrate the text and provide more details than the text could.",
"Younger users prefer most tweets where the image adds information to the meaning of the tweet, but this has no semantic overlap with the text.",
"These are usually tweets where the text represents merely a comment or a feeling expressed with the image providing the context.",
"This represents a more image-centric approach to the meaning of the tweet that is specific to younger users.",
"These correlations are controlled for gender.",
"Education was also correlated with images where the text was represented in the image (r = 0.076, p < .01, Bonferroni corrected), but this correlation did not meet the significance criteria when controlled for age to which education is moderately correlated (r = 0.302).",
"This demonstrates the importance of controlling for such factors in this type of analysis.",
"No effects were found with respect to gender or income.",
"Table 2 : Pearson correlation between user demographic traits and usage of the different text-image relationship types.",
"All correlations in bold are significant at p < .01, two-tailed t-test, Bonferroni corrected for multiple comparisons.",
"Results for gender are controlled for age and vice versa.",
"Results for education and income are controlled for age and gender.",
"Tweet Metadata Analysis We adapt a similar approach to uncover potential relationships between the text-image relationship expressed in the tweet and tweet metadata features described in Section 5.2.",
"However, after controlling for multiple comparisons, we are left with no significant correlations at p < 0.01 level.",
"Hence, we refrain from presenting and discussing any results using this feature group as significant.",
"Text Analysis Finally, we aim to identify the text and image features that characterize the four types of text-image relationship.",
"We use univariate Pearson correlation where the independent variable is each feature's normalized value in a tweet and the dependent variables are two binary indicators for the text and image tasks respectively.",
"When performed using text features, this technique was coined Differential Language Analysis (Schwartz et al., 2013 (Schwartz et al., , 2017 .",
"The results when using unigrams as features are presented in Figure 3 , 4 and 5.",
"Results for the image task (Figure 3) show that the image adds to the meaning of the tweet if words such as this, it, why, needs or want are used.",
"These words can appear in texts with the role of referencing or pointing to an entity which is only present in the image.",
"Conversely, the image does not add to the meaning of the tweet when words indicative of objects that are also described in the image are present (cat, baby, eyes or face), thus resulting in the image not adding to the meaning of the tweet.",
"A special case are tweets with birthday wishes, where a person is mentioned in text and also displayed in an image.",
"Finally, the tbt keyword and hashtag is a popular social media trend where users post nostalgic pictures of their past accompanied by their textual description.",
"The comparison between the two outcomes of the text task is presented in Figure 4 .",
"When the text and image semantically overlap, we observe words indicative of actions (i've), possessions (your) or qualitative statements (congrats, loved, excited, tried), usually about objects or persons also present in the image.",
"We also observe a few nouns (cats, hillary) indicating frequent content that is also depicted in images (NB: the tweets were collected in 2016 when the U.S. presiden-tial elections took place).",
"Analyzing this outcome jointly with the text task, we uncover a prominent theme consisting of words describing first person actions (congrats, thank, i've, saw, tell) present when the image provides facets not covered by text (Figure 5d ).",
"Several keywords from text (cat, game, winter) show types of content which are present in both image and text, but the image is merely an illustrating these concepts without adding additional information (Figure 5a) .",
"In contrast, the text is not represented in the image when it contains words specific of comments (when, lmao), questions (do, was), references (this) or ellipsis ('...'), all often referencing the content of the image as identified through data inspection.",
"References to self, objects and personal states (i, me) and feelings (miss) are also expressed in text about items or things not appearing the image from the same tweet.",
"Further exploring this result though the image task outcome, we see that the latter category of feelings about persons of objects ( Figure 5a ) -miss, happy, lit, like) are specific of when the image does not add additional information.",
"Through manual inspection of these images, they often display a meme (as in Figure 1d ) or unrelated expressions to the text's content.",
"The image adds information when the text is not represented (Figure 5c ) if the latter includes personal feelings, (me, i, i'm, want), comments (lol, lmao) and references (this, it), usually related to the image content as identified through an analysis of the data.",
"Conclusions We defined and analyzed quantitatively and qualitatively the semantic relationships between the text and the image of the same tweet using a novel annotated data set.",
"The frequency of use is influenced by the age of the poster, with younger users employing images with a more prominent role in the tweet, rather than just being redundant to the text or as a means of illustrating it.",
"We studied the correlation between the content in the text and relation with the image, highlighting a differentiation between relationship types, even if only using the text of the tweet alone.",
"We developed models that use both text and image features to classify the text-image relationship, with especially high performance (F1 = 0.81) in identifying if the image is redundant, which is immediately useful for downstream applications that maximize screen es-tate for users.",
"Future work will look deeper into using the similarity between the content of the text and image (Leong and Mihalcea, 2011), as the text task results showed room for improvements.",
"We envision that our data, task and classifiers will be useful as a preprocessing step in collecting data for training large scale models for image captioning (Feng and Lapata, 2010) or tagging (Mahajan et al., 2018) or for improving recommendations (Chen et al., 2016) by filtering out tweets where the text and image have no semantic overlap or can enable new tasks such as identifying tweets that contain creative descriptions for images."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"4.2",
"4.3",
"5",
"5.1",
"5.2",
"5.3",
"5.4",
"5.5",
"6",
"7",
"7.1",
"7.2",
"7.3",
"8"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Categorizing Text-Image Relationships",
"Data Set",
"Data Sampling",
"Demographic Variables",
"Annotation",
"Methods",
"User Demographics",
"Tweet Metadata",
"Text-based Methods",
"Image-based Methods",
"Joint Text-Image Methods",
"Predicting Text-Image Relationship",
"Analysis",
"User Analysis",
"Tweet Metadata Analysis",
"Text Analysis",
"Conclusions"
]
}
|
GEM-SciDuet-train-51#paper-1089#slide-13
|
Prediction Joint Text Image Methods
|
Image Task (Image adds to meaning) Text Task (Text is represented) Image + Text Task
2019 Bloomberg Finance L.P. All rights reserved.
|
Image Task (Image adds to meaning) Text Task (Text is represented) Image + Text Task
2019 Bloomberg Finance L.P. All rights reserved.
|
[] |
GEM-SciDuet-train-51#paper-1089#slide-14
|
1089
|
Categorizing and Inferring the Relationship between the Text and Image of Twitter Posts
|
Text in social media posts is frequently accompanied by images in order to provide content, supply context, or to express feelings. This paper studies how the meaning of the entire tweet is composed through the relationship between its textual content and its image. We build and release a data set of image tweets annotated with four classes which express whether the text or the image provides additional information to the other modality. We show that by combining the text and image information, we can build a machine learning approach that accurately distinguishes between the relationship types. Further, we derive insights into how these relationships are materialized through text and image content analysis and how they are impacted by user demographic traits. These methods can be used in several downstream applications including pre-training image tagging models, collecting distantly supervised data for image captioning, and can be directly used in end-user applications to optimize screen estate.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200
],
"paper_content_text": [
"Introduction Social media sites have traditionally been centered around publishing textual content.",
"Recently, posting images on social media has become a very popular way of expressing content and feelings especially due to the wide availability of mobile devices and connectivity.",
"Images are currently present in a significant fraction of tweets and tweets with images get double the engagement of those without (Buffer, 2016) .",
"Thus, in addition to text, images have become key components of tweets.",
"However, little is known about how textual content is related to the images with which they appear.",
"For example, concepts or feelings mentioned in text could be illustrated or strengthened by images, text can point to the content of an image or can just provide commentary on the image content.",
"Formalizing and understanding the relationship between the two modalities -text and images -is useful in several areas: a) for NLP and computer vision research, where image and text data from tweets are used to developing data sets and methods for image captioning (Mitchell et al., 2012) or object recognition (Mahajan et al., 2018) ; b) for social scientists and psychologists trying to understand social media use; c) in browsers or apps where images that may not contain additional content in addition to the text would be replaced by a placeholder and displayed if the end-user desires to in order to op-timize screen space (see Figure 2 ).",
"Figure 1 illustrates four different ways in which the text and image of the same tweet can be related: • Figures 1(a,b) show how the image can add to the semantics of the tweet, by either providing more information than the text (Figure 1a ) or by providing the context for understanding the text (Figure 1b ); • In Figures 1(c,d) , the image only illustrates what is expressed through text, without providing any additional information.",
"Hence, in both of these cases, the text alone is sufficient to understanding the tweet's key message; • Figures 1(a,c) show examples of tweets where there is a semantic overlap between the content of the text and image: bike and sign in Figure 1a and tacos in Figure 1c ; • In Figures 1(b,d) , the textual content is not represented in the image, with the text being either a comment on the image's content (Figure 1b) or the image illustrating a feeling related to the text's content.",
"In this paper, we present a comprehensive analysis that focuses on the types of relationships between the text and image in a tweet.",
"Our contributions include: • Defining the types of relationships between the text and the image of a social media post; • Building a data set of tweets annotated with text -image relationship type; 1 • Machine learning methods that use both text and image content to predict the relationship between the two modalities; • An analysis into the author's demographic traits that are related to usage preference of textimage relationship types; • An analysis of the textual features which characterize each relationship type.",
"Related Work Task.",
"The relationship between a text and its associated image was researched in a few prior studies.",
"For general web pages, Marsh and Domas White (2003) propose a taxonomy of 49 relationship grouped in three major categories based on how similar is the image to the text ranging from little relation to going beyond the text, which forms the basis of one of our relationship dimen-sions.",
"Martinec and Salway (2005) aim to categorize text-image relationships in scientific articles from two perspectives: the relative importance of one modality compared to the other and the logico-semantic overlap.",
"Alikhani and Stone (2018) argue that understanding multimodal textimage presentation requires studying the coherence relations that organize the content.",
"Even when a single relationship is used, such as captioning, it can be expressed in multiple forms such as telic, atelic or stative .",
"Wang et al.",
"(2014) use the intuition that text and images from microposts can be associated or not or depend on one another and use this intuition in a topic model that learns topics and image tags jointly.",
"Jas and Parikh (2015) study the concept of image specificity through how similar to each other are multiple descriptions of that image.",
"However, none of these studies propose any predictive methods for text-image relationship types.",
"annotate and train models on a recipe data set (Yagcioglu et al., 2018) for the relationships between instructional text and images around the following dimensions: temporal, logical and incidental detail.",
"Chen et al.",
"(2013) study text-image relationships using social media data focusing on the distinction between images that are overall visually relevant or non-relevant to the textual content.",
"They build models using the text and image content that predict the relationship type (Chen et al., 2015) .",
"We build on this research and define an annotation scheme that focuses on each of the two modalities separately and look at both their semantic overlap and contribution to the meaning of the whole tweet.",
"Applications.",
"Several applications require to be able to automatically predict the semantic textimage relationship in the data.",
"Models for automatically generating image descriptions (Feng and Lapata, 2010; Ordonez et al., 2011; Mitchell et al., 2012; Vinyals et al., 2015; Lu et al., 2017) or predicting tags (Mahajan et al., 2018) are built using large training data sets of noisy imagetext pairs from sources such as tweets.",
"Multimodal named entity disambiguation leverages visual context vectors from social media images to aid named entity disambiguation (Moon et al., 2018) .",
"Multimodal topic labeling focuses on generating candidate labels (text or images) for a given topic and ranks them by relevance (Sorodoc et al., 2017) .",
"Several resources of images paired with descriptive captions are available, which can be used to build similarity metrics and joint semantic spaces for text and images (Young et al., 2014) .",
"However, all these assume that the text an image represent similar concepts which, as we show in this paper, is not true in Twitter.",
"Being able to classify this relationship can be useful for all above-mentioned applications.",
"Categorizing Text-Image Relationships We define the types of semantic relationships that can exist between the content of the text and the image by splitting them into two tasks for simplicity.",
"The first task is centered on the role of the text to the tweet's semantics, while the second focuses on the image's role.",
"The first task -referred to as the text task in the rest of the paper -focuses on identifying if there is semantic overlap between the context of the text and the image.",
"This task is the defined using the following guidelines: 1.",
"Some or all of the content words in the text are represented in the image (Text is represented) 2.",
"None of the content words in the text are represented in the image (Text is not represented): • None of the content words are represented in the image, or • The text is only a comment about the content of the image, or • The text expresses a feeling or emotion about the content of the image, or • The text only makes a reference to something shown in the image, or • The text is unrelated to the image Examples for this task can be seen in Figure 1 by comparing Figures 1(a ,c) (Text is represented) with Figures 1(b,d) (Text is not represented).",
"The second task -referred to as the image task in the rest of the paper -focuses on the role of the image to the semantics of the tweet and aims to identify if the image's content contributes with additional information to the meaning of the tweet beyond the text, as judged by an independent third party.",
"This task is defined and annotated using the following guidelines: 1.",
"Image has additional content that represents the meaning of the text and the image (Image adds): • Image contains other text that adds additional meaning to the text, or • Image depicts something that adds information to the text or • Image contains other entities that are referenced by the text.",
"2.",
"Image does not add additional content that represents the meaning of text+image (Image does not add).",
"Examples for the image task can be seen in Combining the labels of the two binary tasks described above gives rise to four types of text-image relationships (Image+Text Task).",
"All of the four relationship types are exemplified in Figure 1 .",
"Data Set To study the relationship between the text and image in the same social media post, we define a new annotation schema and collect a new annotated corpus.",
"To the best of our knowledge, no such corpus exists in prior research.",
"Data Sampling We use Twitter as the source of our data, as this source contains a high level of expression of thoughts, opinions and emotions (Java et al., 2007; Kouloumpis et al., 2011) .",
"It represents a platform for observing written interactions and conversations between users (Ritter et al., 2010) .",
"The tweets were deliberately randomly sampled tweets from a list of users for which several of their socio-demographic traits are known, introduced in past research .",
"This will enable us to explore if the frequency of posting tweets with a certain text-image relationship is different across socio-demographic groups.",
"We downloaded as many tweets as we could from these users using the Twitter API (up to 3,200 tweets/user per API limits).",
"We decided to annotate only tweets from within the same time range (2016) in order to reduce the influence of potential platform usage changes with time.",
"We filter out tweets that are not written in English using the langid.py tool (Lui and Baldwin, 2012) .",
"In total, 2,263 users (out of the initial 4,132) have posted tweets with at least one image in the year 2016 and were included in our analysis.",
"Our final data set contains 4,471 tweets.",
"Demographic Variables The Twitter users from the data set we sampled have self-reported the following demographic variables through a survey: gender, age, education level and annual income.",
"All users solicited for data collection were from the United States in order to limit cultural variation.",
"• Gender was considered binary 2 and coded with Female -1 and Male -0.",
"All other variables are treated as ordinal variables.",
"• Age is represented as a integer value in the 13-90 year old interval.",
"• Education level is coded as an ordinal variable with 6 values representing the highest degree obtained, with the lowest being 'No high school degree' (coded as 1) and the highest being 'Advanced Degree (e.g., PhD)' (coded as 6).",
"• Income level is coded as on ordinal variable with 8 values representing the annual income of the person, ranging from '< $20,000' to '> $200,000').",
"For a full description of the user recruitment and quality control processes, we refer the interested reader to .",
"Annotation We have collected annotations for text-image pairs from 4,471 tweets using the Figure Eight platform (formerly CrowdFlower).",
"We annotate all tweets containing both text and image using two independent annotation tasks in order to simplify the task and not to prime annotators use the outcome of one task as a indicator for the outcome of the other.",
"For quality control, 10% of annotations were test questions annotated by the authors.",
"Annotators had to maintain a minimum accuracy on test questions of 85% for the image task and 75% for the text task for their annotations to be valid.",
"Inter-annotator agreement is measured using Krippendorf's Alpha.",
"The overall Krippendorfs Alpha is 0.71 for the image task, which is in the upper part of the substantial agreement band (Artstein and Poesio, 2008) .",
"We collect 3 judgments and use majority vote to obtain the final label to further remove noise.",
"For the text task, we collected and aggregated 5 judgments as the Krippendorf's Alpha is 0.46, which is considered moderate agreement (Artstein and Poesio, 2008) .",
"The latter task was more difficult due to requiring specific world knowledge (e.g.",
"a singer mentioned in a text also present in an image) or contained information encoded in hashtags or usernames which the annotators sometimes overlooked.",
"The aggregated judgments for each task were combined to obtain the four class labels.",
"Methods Our goal is to develop methods that are capable of automatically classifying the text-image relationship in tweets.",
"We experiment with several methods which use information of four different types: demographics of the user posting the tweet, metadata from the tweet, the text of the tweet or the image of the tweet; plus a combination of them.",
"The methods we use are described in this section.",
"User Demographics User demographic features are the survey-based demographic information we have available for all users that posted the annotated tweets.",
"The use of these traits is based on the intuition that different demographic groups have different posting preferences (Pennacchiotti and Popescu, 2011; .",
"We use this approach for comparison reasons only, as in practical use cases we would normally not have access to the author's demographic traits.",
"We code the gender, age, education level and income level of the user as features and use them in a logistic regression classifier to classify the textimage relationship.",
"Tweet Metadata We experiment with using the tweet metadata as features.",
"These code if a tweet is a reply, tweet, like or neither.",
"We also add as features the tweet like count, the number of followers, friends and posts of the post's author and include them all in a logistic regression classifier.",
"These features are all available at tweet publishing time and we build a model using them to establish a more solid baseline for content based approaches.",
"Text-based Methods We use the textual content of the tweet alone to build models for predicting the text-image relationship.",
"We expect that certain textual cues will be specific to relationships even without considering the image content.",
"For example, tweets ending in an ellipsis or short comments will likely be predictive of the text not being represented in the image.",
"Surface Features.",
"We first use a range of surface features which capture more of the shallow stylistic content of the tweet.",
"We extract number of tokens, uppercase tokens, exclamations, questions, ellipsis, hashtags, @ mentions, quotes and URLs from the tweet and use them as features in a logistic regression classifier.",
"Bag of Words.",
"The most common approach for building a text-based model is using bag-ofwords features.",
"Here, we extract unigram and bigram features and use them in a logistic regression classifier with elastic net regularization (Zou and Hastie, 2005) .",
"LSTM.",
"Finally, based on recent results in text classification, we also experiment with a neural network approach which uses a Long-Short Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) network.",
"The LSTM network processes the tweet sequentially, where each word is represented by its embedding (E = 200) followed by a dense hidden layer (D = 64) and by a a ReLU activation function and dropout (0.4) The model is trained by minimizing cross entropy using the Adam optimizer (Kingma and Ba, 2014).",
"The network uses in-domain Twitter GloVe embeddings pre-trained on 2 billion tweets (Pennington et al., 2014) .",
"Image-based Methods We use the content of the tweet image alone to build models for predicting the text-image relationship.",
"Similar to text, we expect that certain image content will be predictive of text-image relationships even without considering the text content.",
"For example, images of people may be more likely to have in the text the names of those persons.",
"To analyze image content, we make use of large pre-trained neural networks for the task of object recognition on the ImageNet data set.",
"ImageNet (Deng et al., 2009 ) is a visual database developed for object recognition research and consists of 1000 object types.",
"In particular, we use the popular pre-trained InceptionNet model (Szegedy et al., 2015) , which achieved the best performance at the ImageNet Large Scale Visual Recognition Challenge 2014 to build the following two imagebased models.",
"ImageNet Classes.",
"First, we represent each image in a tweet with the probability distribution over the 1,000 ImageNet classes obtained from Inception-Net.",
"Then, we pass those features to a logistic regression classifier which is trained on our task.",
"In this setup, the network parameters remain fixed, while only the ImageNet class weights are learned in the logistic regression classifier.",
"Tuned InceptionNet.",
"Additionally, we tailored the InceptionNet network to directly predict our tasks by using the multinomial logistic loss with softmax as the final layer for our task to replace the 1,000 ImageNet classes.",
"Then, we loaded the pretrained network from (Szegedy et al., 2015) and fine-tuned the final fully-connected layer with the modified loss layers.",
"We perform this in order to directly predict our task, while also overcoming the necessity of re-extracting the entire model weights from our restricted set of images.",
"The two approaches to classification using image content based on pre-trained model on Im-ageNet have been used successfully in past research (Cinar et al., 2015) .",
"Joint Text-Image Methods Finally, we combine the textual and image information in a single model to classify the text-image relationship type, as we expect both types of content and their interaction to be useful to the task.",
"Ensemble.",
"A simple method for combining the information from both modalities is to build an ensemble classifier.",
"This is done with a logistic regression model with two features: the Bag of Words text model's predicted class probability and the Tuned InceptionNet model's predicted class probability.",
"The parameters of the model are tuned by cross validation on the training data and similar splits as the individual models.",
"LSTM + InceptionNet.",
"We also build a joint approach by concatenating the features from the final layers of our LSTM and InceptionNet models and passing them through a fully-connected (FC) feed forward neural network with one hidden layer (64 nodes).",
"The final output is our text-image relationship type.",
"We use the Adam optimizer to fine tune this network.",
"The LSTM model has the same parameters as in the text-only approach, while the InceptionNet model is initialized with the pre-trained model on the ImageNet data set.",
"Predicting Text-Image Relationship We split our data into a 80% train (3,576 tweets) and 20% test (895 tweets) stratified sample for all of our experiments.",
"Parameters were tuned using 10-fold cross-validation with the training set, and results are reported on the test set.",
"Table 1 presents the weighted F1-scores for the text task, the image task and the image+text task with all the methods described in Section 5.",
"The weighted F1 score is the weighted average of the class-level F1 scores, where the weight is the number of items in each class.",
"The majority baseline always predicts the most frequent class in each task, namely: Image does not add for the image task, Text is not represented for the text task and Image does not add & Text is not represented for the Image + Text task.",
"The models using user demographics and tweet metadata show minor improvements over the majority class baseline for both tasks.",
"When the two tasks are combined, both feature types offer only a slight increase over the baseline.",
"This shows that user factors mildly impact the frequency with which relationship types are used, which will be explored further in the analysis section.",
"The models that use tweet text as features show consistent improvements over the baseline for all three tasks.",
"The two models that use the tweet's topical content (Bag of Words and LSTM) obtain higher predictive performance over the surface features.",
"Both content based models obtain relatively similar performance, with the LSTM performing better on the image task.",
"The models which use information extracted from the image alone also consistently outperform the baseline on all three tasks.",
"Re-tuning the neural network performs substantially better than building a model directly from the ImageNet classes on the image task and narrowly outperforms the other method on the text task.",
"This is somewhat expected, as the retuning is performed on this domain specific task.",
"When comparing text and image based models across tasks, we observe that using image features obtains substantially better performance on the image task, while the text models obtain bet-ter performance on the text task.",
"This is somewhat natural, as the focus of each annotation task is on one modality and methods relying on content from that modality are more predictive alone as to what ultimately represents the text-image relationship type.",
"Our naive ensemble approach does not yield substantially better results than the best performing methods using a single modality.",
"However, by jointly modelling both modalities, we are able to obtain improvements -especially on the image task.",
"This shows that both types of information and their interaction are important to this task.",
"Methods that exploit more heavily the interaction and semantic similarity between the text and the image are left for future work.",
"We also observe that the predictive methods we described are better at classifying the image task.",
"The analysis section below will allow us to uncover more about what type of content characterizes each relationship type.",
"Analysis In this section, we aim to gain a better understanding of the type of content specific of the four textimage relationship types and about user type preferences in their usage.",
"User Analysis Socio-demographic traits of the authors of posts are known to be correlated with several social media behaviors including text (Rao et al., 2010; Pennacchiotti and Popescu, 2011; Schwartz et al., 2013; Volkova et al., 2014; Lampos et al., 2014; Preoţiuc-Pietro et al., 2015a ,b, 2016 and images (Alowibdi et al., 2013; You et al., 2014; Farseev et al., 2015; Skowron et al., 2016; Liu et al., 2016; Guntuku et al., 2017; Samani et al., 2018; Guntuku et al., 2019) .",
"We hypothesize that socio-demographic traits also play a role in the types of text-image relationships employed on Twitter.",
"To measure this, we use partial Pearson correlation where the dependent variables are one of four socio-demographic traits described in Section 4.2.",
"The independent variables indicate the average times with which the user employed a certain relationship type.",
"We code this using six different variables: two representing the two broader tasks -the percentage of tweets where image adds information and the percentage of tweets where the text is represented in the image -and four encoding each combination between the two tasks.",
"In addition, for all analyses we consider gender and age as basic human traits and control for data skew by introducing both variables as controls in partial correlation, as done in prior work (Schwartz et al., 2013; Holgate et al., 2018) .",
"When studying age and gender, we only use the other trait as the control.",
"Because we are running several statistical tests at once (24) without predefined hypotheses, we use Bonferroni correction to counteract the problem of multiple comparisons.",
"The results are presented in Table 2 .",
"We observe that age is the only user demographic trait that is significantly correlated to text-image relationship preference after controlling for multiple comparisons and other demographic traits.",
"The text-image relationship where the text is represented in the image, at least partially, is positively correlated with age (r = 0.117).",
"Further analyzing the four individual text-image relationship types reveals that older users especially prefer tweets where there is a semantic overlap between the concepts present in the text and the image, but the image contributes with additional information to the meaning of the tweet.",
"This is arguably the most conventional usage of images, where they illustrate the text and provide more details than the text could.",
"Younger users prefer most tweets where the image adds information to the meaning of the tweet, but this has no semantic overlap with the text.",
"These are usually tweets where the text represents merely a comment or a feeling expressed with the image providing the context.",
"This represents a more image-centric approach to the meaning of the tweet that is specific to younger users.",
"These correlations are controlled for gender.",
"Education was also correlated with images where the text was represented in the image (r = 0.076, p < .01, Bonferroni corrected), but this correlation did not meet the significance criteria when controlled for age to which education is moderately correlated (r = 0.302).",
"This demonstrates the importance of controlling for such factors in this type of analysis.",
"No effects were found with respect to gender or income.",
"Table 2 : Pearson correlation between user demographic traits and usage of the different text-image relationship types.",
"All correlations in bold are significant at p < .01, two-tailed t-test, Bonferroni corrected for multiple comparisons.",
"Results for gender are controlled for age and vice versa.",
"Results for education and income are controlled for age and gender.",
"Tweet Metadata Analysis We adapt a similar approach to uncover potential relationships between the text-image relationship expressed in the tweet and tweet metadata features described in Section 5.2.",
"However, after controlling for multiple comparisons, we are left with no significant correlations at p < 0.01 level.",
"Hence, we refrain from presenting and discussing any results using this feature group as significant.",
"Text Analysis Finally, we aim to identify the text and image features that characterize the four types of text-image relationship.",
"We use univariate Pearson correlation where the independent variable is each feature's normalized value in a tweet and the dependent variables are two binary indicators for the text and image tasks respectively.",
"When performed using text features, this technique was coined Differential Language Analysis (Schwartz et al., 2013 (Schwartz et al., , 2017 .",
"The results when using unigrams as features are presented in Figure 3 , 4 and 5.",
"Results for the image task (Figure 3) show that the image adds to the meaning of the tweet if words such as this, it, why, needs or want are used.",
"These words can appear in texts with the role of referencing or pointing to an entity which is only present in the image.",
"Conversely, the image does not add to the meaning of the tweet when words indicative of objects that are also described in the image are present (cat, baby, eyes or face), thus resulting in the image not adding to the meaning of the tweet.",
"A special case are tweets with birthday wishes, where a person is mentioned in text and also displayed in an image.",
"Finally, the tbt keyword and hashtag is a popular social media trend where users post nostalgic pictures of their past accompanied by their textual description.",
"The comparison between the two outcomes of the text task is presented in Figure 4 .",
"When the text and image semantically overlap, we observe words indicative of actions (i've), possessions (your) or qualitative statements (congrats, loved, excited, tried), usually about objects or persons also present in the image.",
"We also observe a few nouns (cats, hillary) indicating frequent content that is also depicted in images (NB: the tweets were collected in 2016 when the U.S. presiden-tial elections took place).",
"Analyzing this outcome jointly with the text task, we uncover a prominent theme consisting of words describing first person actions (congrats, thank, i've, saw, tell) present when the image provides facets not covered by text (Figure 5d ).",
"Several keywords from text (cat, game, winter) show types of content which are present in both image and text, but the image is merely an illustrating these concepts without adding additional information (Figure 5a) .",
"In contrast, the text is not represented in the image when it contains words specific of comments (when, lmao), questions (do, was), references (this) or ellipsis ('...'), all often referencing the content of the image as identified through data inspection.",
"References to self, objects and personal states (i, me) and feelings (miss) are also expressed in text about items or things not appearing the image from the same tweet.",
"Further exploring this result though the image task outcome, we see that the latter category of feelings about persons of objects ( Figure 5a ) -miss, happy, lit, like) are specific of when the image does not add additional information.",
"Through manual inspection of these images, they often display a meme (as in Figure 1d ) or unrelated expressions to the text's content.",
"The image adds information when the text is not represented (Figure 5c ) if the latter includes personal feelings, (me, i, i'm, want), comments (lol, lmao) and references (this, it), usually related to the image content as identified through an analysis of the data.",
"Conclusions We defined and analyzed quantitatively and qualitatively the semantic relationships between the text and the image of the same tweet using a novel annotated data set.",
"The frequency of use is influenced by the age of the poster, with younger users employing images with a more prominent role in the tweet, rather than just being redundant to the text or as a means of illustrating it.",
"We studied the correlation between the content in the text and relation with the image, highlighting a differentiation between relationship types, even if only using the text of the tweet alone.",
"We developed models that use both text and image features to classify the text-image relationship, with especially high performance (F1 = 0.81) in identifying if the image is redundant, which is immediately useful for downstream applications that maximize screen es-tate for users.",
"Future work will look deeper into using the similarity between the content of the text and image (Leong and Mihalcea, 2011), as the text task results showed room for improvements.",
"We envision that our data, task and classifiers will be useful as a preprocessing step in collecting data for training large scale models for image captioning (Feng and Lapata, 2010) or tagging (Mahajan et al., 2018) or for improving recommendations (Chen et al., 2016) by filtering out tweets where the text and image have no semantic overlap or can enable new tasks such as identifying tweets that contain creative descriptions for images."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"4.2",
"4.3",
"5",
"5.1",
"5.2",
"5.3",
"5.4",
"5.5",
"6",
"7",
"7.1",
"7.2",
"7.3",
"8"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Categorizing Text-Image Relationships",
"Data Set",
"Data Sampling",
"Demographic Variables",
"Annotation",
"Methods",
"User Demographics",
"Tweet Metadata",
"Text-based Methods",
"Image-based Methods",
"Joint Text-Image Methods",
"Predicting Text-Image Relationship",
"Analysis",
"User Analysis",
"Tweet Metadata Analysis",
"Text Analysis",
"Conclusions"
]
}
|
GEM-SciDuet-train-51#paper-1089#slide-14
|
Takeaways
|
Text does not always describe the image
The image does not always illustrate text
Best results on each subtask are obtained by methods using different modalities (text or
2019 Bloomberg Finance L.P. All rights reserved.
|
Text does not always describe the image
The image does not always illustrate text
Best results on each subtask are obtained by methods using different modalities (text or
2019 Bloomberg Finance L.P. All rights reserved.
|
[] |
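The record above describes correlating each unigram's normalized per-tweet frequency with a binary task outcome (the approach coined Differential Language Analysis). A minimal sketch of that univariate analysis, with illustrative function names and toy data (not the paper's code or corpus):

```python
import math

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def unigram_correlations(tweets, labels):
    """For each unigram, correlate its relative frequency in a tweet with a
    binary label (e.g. 'image adds to the meaning'). Returns {word: r}."""
    vocab = {w for text in tweets for w in text.lower().split()}
    results = {}
    for word in vocab:
        # independent variable: the feature's normalized value in each tweet
        feats = []
        for text in tweets:
            toks = text.lower().split()
            feats.append(toks.count(word) / len(toks) if toks else 0.0)
        results[word] = pearson(feats, labels)
    return results

# toy data: referencing words ('this') co-occur with label 1, object words don't
tweets = ["look at this cat", "why does it do this", "my cat sleeps", "need this now"]
labels = [1, 1, 0, 1]  # 1 = image adds to the meaning of the tweet
corrs = unigram_correlations(tweets, labels)
print(sorted(corrs.items(), key=lambda kv: -kv[1])[:3])
```

In a real setting one would additionally apply a Bonferroni correction and significance threshold (p < .01 in the paper) before reporting any word.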
GEM-SciDuet-train-52#paper-1090#slide-0
|
1090
|
Bleaching Text: Abstract Features for Cross-lingual Gender Prediction
|
Gender prediction has typically focused on lexical and social network features, yielding good performance, but making systems highly language-, topic-, and platformdependent. Cross-lingual embeddings circumvent some of these limitations, but capture gender-specific style less. We propose an alternative: bleaching text, i.e., transforming lexical strings into more abstract features. This study provides evidence that such features allow for better transfer across languages. Moreover, we present a first study on the ability of humans to perform cross-lingual gender prediction. We find that human predictive power proves similar to that of our bleached models, and both perform better than lexical models.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116
],
"paper_content_text": [
"Introduction Author profiling is the task of discovering latent user attributes disclosed through text, such as gender, age, personality, income, location and occupation (Rao et al., 2010; Burger et al., 2011; Feng et al., 2012; Jurgens, 2013; Bamman et al., 2014; Plank and Hovy, 2015; Flekova et al., 2016) .",
"It is of interest to several applications including personalized machine translation, forensics, and marketing (Mirkin et al., 2015; Rangel et al., 2015) .",
"Early approaches to gender prediction (Koppel et al., 2002; Schler et al., 2006, e.g.)",
"are inspired by pioneering work on authorship attribution (Mosteller and Wallace, 1964) .",
"Such stylometric models typically rely on carefully handselected sets of content-independent features to capture style beyond topic.",
"Recently, open vocabulary approaches (Schwartz et al., 2013) , where the entire linguistic production of an author is used, yielded substantial performance gains in on-line user-attribute prediction (Nguyen et al., 2014; Preoţiuc-Pietro et al., 2015; Emmery et al., 2017) .",
"Indeed, the best performing gender prediction models exploit chiefly lexical information (Rangel et al., 2017; Basile et al., 2017) .",
"Relying heavily on the lexicon though has its limitations, as it results in models with limited portability.",
"Moreover, performance might be overly optimistic due to topic bias (Sarawgi et al., 2011) .",
"Recent work on cross-lingual author profiling has proposed the use of solely language-independent features (Ljubešić et al., 2017) , e.g., specific textual elements (percentage of emojis, URLs, etc) and users' meta-data/network (number of followers, etc), but this information is not always available.",
"We propose a novel approach where the actual text is still used, but bleached out and transformed into more abstract, and potentially better transferable features.",
"One could view this as a method in between the open vocabulary strategy and the stylometric approach.",
"It has the advantage of fading out content in favor of more shallow patterns still based on the original text, without introducing additional processing such as part-of-speech tagging.",
"In particular, we investigate to what extent gender prediction can rely on generic non-lexical features (RQ1), and how predictive such models are when transferred to other languages (RQ2).",
"We also glean insights from human judgments, and investigate how well people can perform cross-lingual gender prediction (RQ3).",
"We focus on gender prediction for Twitter, motivated by data availability.",
"Contributions In this work i) we are the first to study cross-lingual gender prediction without relying on users' meta-data; ii) we propose a novel simple abstract feature representation which is surprisingly effective; and iii) we gauge human ability to perform cross-lingual gender detection, an angle of analysis which has not been studied thus far.",
"Profiling with Abstract Features Can we recover the gender of an author from bleached text, i.e., transformed text were the raw lexical strings are converted into abstract features?",
"We investigate this question by building a series of predictive models to infer the gender of a Twitter user, in absence of additional user-specific metadata.",
"Our approach can be seen as taking advantage of elements from a data-driven open-vocabulary approach, while trying to capture gender-specific style in text beyond topic.",
"To represent utterances in a more language agnostic way, we propose to simply transform the text into alternative textual representations, which deviate from the lexical form to allow for abstraction.",
"We propose the following transformations, exemplified in Table 1 .",
"They are mostly motivated by intuition and inspired by prior work, like the use of shape features from NER and parsing (Petrov and Klein, 2007; Schnabel and Schütze, 2014; Limsopatham and Collier, 2016) : • Frequency Each word is presented as its binned frequency in the training data; bins are sized by orders of magnitude.",
"• Length Number of characters (prefixed by 0 to avoid collision with the next transformation).",
"• PunctC Merges all consecutive alphanumeric characters to one 'W' and leaves all other characters as they are (C for conservative).",
"• PunctA Generalization of PunctC (A for aggressive), converting different types of punctuation to classes: emoticons 1 to 'E' and emojis 2 to 'J', other punctuation to 'P'.",
"• Shape Transforms uppercase characters to 'U', lowercase characters to 'L', digits to 'D' and all other characters to 'X'.",
"Repetitions of transformed characters are condensed to a maximum of 2 for greater generalization.",
"• Vowel-Consonant To approximate vowels, while being able to generalize over (Indo-European) languages, we convert any of the 'aeiou' characters to 'V', other alphabetic character to 'C', and all other characters to 'O'.",
"• AllAbs A combination (concatenation) of all previously described features.",
"Experiments In order to test whether abstract features are effective and transfer across languages, we set up experiments for gender prediction comparing lexicalized and bleached models for both in-and cross-language experiments.",
"We compare them to a model using multilingual embeddings (Ruder, 2017) .",
"Finally, we elicit human judgments both within language and across language.",
"The latter is to check whether a person with no prior knowledge of (the lexicon of) a given language can predict the gender of a user, and how that compares to an in-language setup and the machine.",
"If humans can predict gender cross-lingually, they are likely to rely on aspects beyond lexical information.",
"Data We obtain data from the TWISTY corpus , a multi-lingual collection of Twitter users, for the languages with 500+ users, namely Dutch, French, Portuguese, and Spanish.",
"We complement them with English, using data from a predecessor of TWISTY (Plank and Hovy, 2015) .",
"All datasets contain manually annotated gender information.",
"To simplify interpretation for the cross-language experiments, we balance gender in all datasets by downsampling to the minority class.",
"The datasets' final sizes are given in Table 2 .",
"We use 200 tweets per user, as done by previous work .",
"We leave the data untokenized to exclude any languagedependent processing, because original tokenization could preserve some signal.",
"Apart from mapping usernames to 'USER' and urls to 'URL' we do not perform any further data pre-processing.",
"Lexical vs Bleached Models We use the scikit-learn (Pedregosa et al., 2011) implementation of a linear SVM with default parameters (e.g., L2 regularization).",
"We use 10-fold cross validation for all in-language experiments.",
"For the cross-lingual experiments, we train on all available source language data and test on all target language data.",
"For the lexicalized experiments, we adopt the features from the best performing system at the latest PAN evaluation campaign 3 (Basile et al., 2017) (word 1-2 grams and character 3-6 grams).",
"For the multilingual embeddings model we use the mean embedding representation from the system of (Plank, 2017) and add max, std and coverage features.",
"We create multilingual embeddings by projecting monolingual embeddings to a single multilingual space for all five languages using a recently proposed SVD-based projection method with a pseudo-dictionary (Smith et al., 2017) .",
"The monolingual embeddings are trained on large amounts of in-house Twitter data (as much data as we had access to, i.e., ranging from 30M tweets for French to 1,500M tweets in Dutch, with a word type coverage between 63 and 77%).",
"This results in an embedding space with a vocabulary size of 16M word types.",
"All code is available at https:// github.com/bplank/bleaching-text.",
"For the bleached experiments, we ran models with each feature set separately.",
"In this paper, we report results for the model where all features are combined, as it proved to be the most robust across languages.",
"We tuned the n-gram size of this model through in-language cross-validation, finding that n = 5 performs best.",
"When testing across languages, we report accuracy for two setups: average accuracy over each single-language model (AVG), and accuracy obtained when training on the concatenation of all languages but the target one (ALL).",
"The latter setting is also used for the embeddings model.",
"We report accuracy for all experiments.",
"Results and Analysis Table 2 shows results for both the cross-language and in-language experiments in the lexical and abstract-feature setting.",
"Within language, the lexical features unsurprisingly work the best, achieving an average accuracy of 80.5% over all languages.",
"The abstract features lose some information and score on average 11.8% lower, still beating the majority baseline (50%) by a large margin (68.7%).",
"If we go across language, the lexical approaches break down (overall to 53.7% for LEX AVG/56.3% for ALL), except for Portuguese and Spanish, thanks to their similarities (see Table 3 for pair-wise results).",
"The closelyrelated-language effect is also observed when training on all languages, as scores go up when the classifier has access to the related language.",
"The same holds for the multilingual embeddings model.",
"On average it reaches an accuracy of 59.8%.",
"The closeness effect for Portuguese and Spanish can also be observed in language-to-language experiments, where scores for ES →PT and PT →ES are the highest.",
"Results for the lexical models are generally lower on English, which might be due to smaller amounts of data (see first column in Table 2 providing number of users per language).",
"The abstract features fare surprisingly well and Table 4 , we can see that the use of an emoji (like ) and shape-based features are predictive of female users.",
"Quotes, question marks and length features, for example, appear to be more predictive of male users.",
"Human Evaluation We experimented with three different conditions, one within language and two across language.",
"For the latter, we set up an experiment where native speakers of Dutch were presented with tweets written in Portuguese and were asked to guess the poster's gender.",
"In the other experiment, we asked speakers of French to identify the gender of the writer when reading Dutch tweets.",
"In both cases, the participants declared to have no prior knowledge of the target language.",
"For the in-language experiment, we asked Dutch speakers to identify the gender of a user writing Dutch tweets.",
"The Dutch speakers who participated in the two experiments are distinct individuals.",
"Participants were informed of the experiment's goal.",
"Their identity is anonymized in the data.",
"We selected a random sample of 200 users from the Dutch and Portuguese data, preserving a 50/50 gender distribution.",
"Each user was represented by twenty tweets.",
"The answer key (F/M) order was randomized.",
"For each of the three experiments we had six judges, balanced for gender, and obtained three annotations per target user.",
"Results and Analysis Inter-annotator agreement for the tasks was measured via Fleiss kappa (n = 3, N = 200), and was higher for the in-language experiment (K = 0.40) than for the cross-language tasks (NL →PT: K = 0.25; FR →NL: K = 0.28).",
"Table 5 shows accuracy against the gold labels, comparing humans (average accuracy over three annotators) to lexical and bleached models on the exact same subset of 200 users.",
"Systems were tested under two different conditions regarding the number of tweets per user for the target language: machine and human saw the exact same twenty tweets, or the full set of tweets (200) per user, as done during training (Section 3.1).",
"First of all, our results indicate that in-language performance of humans is 70.5%, which is quite in line with the findings of Flekova et al.",
"(2016) , who report an accuracy of 75% on English.",
"Within language, lexicalized models are superior to humans if exposed to enough information (200 tweets setup).",
"One explanation for this might lie in an observation by Flekova et al.",
"(2016) , according to which people tend to rely too much on stereotypical lexical indicators when assigning gender to the poster of a tweet, while machines model less evident patterns.",
"Lexicalized models are also superior to the bleached ones, as already seen on the full datasets (Table 2) .",
"We can also observe that the amount of information available to represent a user influences system's performance.",
"Training on 200 tweets per user, but testing on 20 tweets only, decreases performance by 12 percentage points.",
"This is likely due to the fact that inputs are sparser, especially since the bleached model is trained on 5-grams.",
"5 The bleached model, when given 200 tweets per user, yields a performance that is slightly higher than human accuracy.",
"In the cross-language setting, the picture is very different.",
"Here, human performance is superior to the lexicalized models, independently of the amount of tweets per user at testing time.",
"This seems to indicate that if humans cannot rely on the lexicon, they might be exploiting some other signal when guessing the gender of a user who tweets in a language unknown to them.",
"Interestingly, the bleached models, which rely on non-lexical features, not only outperform the lexicalized ones in the cross-language experiments, but also neatly match the human scores.",
"Related Work Most existing work on gender prediction exploits shallow lexical information based on the linguistic production of the users.",
"Few studies investigate deeper syntactic information (Koppel et al., 2002; Feng et al., 2012) or non-linguistic input, e.g., language-independent clues such as visual (Alowibdi et al., 2013) or network information (Jurgens, 2013; Plank and Hovy, 2015; Ljubešić et al., 2017) .",
"A related angle is cross-genre profiling.",
"In both settings lexical models have limited portability due to their bias towards the language/genre they have been trained on (Rangel et al., 2016; Busger op Vollenbroek et al., 2016; .",
"Lexical bias has been shown to affect inlanguage human gender prediction, too.",
"Flekova et al.",
"(2016) found that people tend to rely too much on stereotypical lexical indicators, while Nguyen et al.",
"(2014) show that more than 10% of the Twitter users do actually not employ words that the crowd associates with their biological sex.",
"Our features abstract away from such lexical cues while retaining predictive signal.",
"Conclusions Bleaching text into abstract features is surprisingly effective for predicting gender, though lexical infor- 5 We experimented with training on 20 tweets rather than 200, and with different n-gram sizes (e.g., 1-4).",
"Despite slightly better results, we decided to use the trained models as they were to employ the same settings across all experiments (200 tweets per users, n = 5), with no further tuning.",
"mation is still more useful within language (RQ1).",
"However, models based on lexical clues fail when transferred to other languages, or require large amounts of unlabeled data from a similar domain as our experiments with the multilingual embedding model indicate.",
"Instead, our bleached models clearly capture some signal beyond the lexicon, and perform well in a cross-lingual setting (RQ2).",
"We are well aware that we are testing our crosslanguage bleached models in the context of closely related languages.",
"While some features (such as PunctA, or Frequency) might carry over to genetically more distant languages, other features (such as Vowels and Shape) would probably be meaningless.",
"Future work on this will require a sensible setting from a language typology perspective for choosing and testing adequate features.",
"In our novel study on human proficiency for cross-lingual gender prediction, we discovered that people are also abstracting away from the lexicon.",
"Indeed, we observe that they are able to detect gender by looking at tweets in a language they do not know (RQ3) with an accuracy of 60% on average."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"4",
"5"
],
"paper_header_content": [
"Introduction",
"Profiling with Abstract Features",
"Experiments",
"Lexical vs Bleached Models",
"Human Evaluation",
"Related Work",
"Conclusions"
]
}
|
GEM-SciDuet-train-52#paper-1090#slide-0
|
Gender Prediction
|
The task of predicting gender based only on text.
SVM with word/char n-grams performs best!
- Winner PAN 2017 shared task on author profiling:
- Characters: 3-6 grams
|
The task of predicting gender based only on text.
SVM with word/char n-grams performs best!
- Winner PAN 2017 shared task on author profiling:
- Characters: 3-6 grams
|
[] |
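The record above lists the bleaching transformations (Frequency, Length, PunctC, PunctA, Shape, Vowel-Consonant). A minimal sketch of a few of them, following the paper's textual description; the original implementation lives at https://github.com/bplank/bleaching-text, so function names and edge-case handling here are assumptions:

```python
import re

def punct_c(token):
    # PunctC: merge consecutive alphanumeric runs into one 'W', keep the rest
    return re.sub(r"[A-Za-z0-9]+", "W", token)

def shape(token):
    # Shape: uppercase -> 'U', lowercase -> 'L', digit -> 'D', other -> 'X'
    s = "".join("U" if c.isupper() else
                "L" if c.islower() else
                "D" if c.isdigit() else "X" for c in token)
    # condense repetitions of the same transformed character to at most 2
    return re.sub(r"(.)\1{2,}", r"\1\1", s)

def vowel_consonant(token):
    # Vowel-Consonant: 'aeiou' -> 'V', other letters -> 'C', the rest -> 'O'
    return "".join("V" if c.lower() in "aeiou" else
                   "C" if c.isalpha() else "O" for c in token)

def length(token):
    # Length: character count, prefixed by 0 to avoid collisions
    return "0" + str(len(token))

print(shape("McDonalds!!"))  # -> ULULLXX
```

The Frequency transformation would additionally need the training corpus to bin each word by order of magnitude, so it is omitted here.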
GEM-SciDuet-train-52#paper-1090#slide-2
|
1090
|
Bleaching Text: Abstract Features for Cross-lingual Gender Prediction
|
Gender prediction has typically focused on lexical and social network features, yielding good performance, but making systems highly language-, topic-, and platformdependent. Cross-lingual embeddings circumvent some of these limitations, but capture gender-specific style less. We propose an alternative: bleaching text, i.e., transforming lexical strings into more abstract features. This study provides evidence that such features allow for better transfer across languages. Moreover, we present a first study on the ability of humans to perform cross-lingual gender prediction. We find that human predictive power proves similar to that of our bleached models, and both perform better than lexical models.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116
],
"paper_content_text": [
"Introduction Author profiling is the task of discovering latent user attributes disclosed through text, such as gender, age, personality, income, location and occupation (Rao et al., 2010; Burger et al., 2011; Feng et al., 2012; Jurgens, 2013; Bamman et al., 2014; Plank and Hovy, 2015; Flekova et al., 2016) .",
"It is of interest to several applications including personalized machine translation, forensics, and marketing (Mirkin et al., 2015; Rangel et al., 2015) .",
"Early approaches to gender prediction (Koppel et al., 2002; Schler et al., 2006, e.g.)",
"are inspired by pioneering work on authorship attribution (Mosteller and Wallace, 1964) .",
"Such stylometric models typically rely on carefully handselected sets of content-independent features to capture style beyond topic.",
"Recently, open vocabulary approaches (Schwartz et al., 2013) , where the entire linguistic production of an author is used, yielded substantial performance gains in on-line user-attribute prediction (Nguyen et al., 2014; Preoţiuc-Pietro et al., 2015; Emmery et al., 2017) .",
"Indeed, the best performing gender prediction models exploit chiefly lexical information (Rangel et al., 2017; Basile et al., 2017) .",
"Relying heavily on the lexicon though has its limitations, as it results in models with limited portability.",
"Moreover, performance might be overly optimistic due to topic bias (Sarawgi et al., 2011) .",
"Recent work on cross-lingual author profiling has proposed the use of solely language-independent features (Ljubešić et al., 2017) , e.g., specific textual elements (percentage of emojis, URLs, etc) and users' meta-data/network (number of followers, etc), but this information is not always available.",
"We propose a novel approach where the actual text is still used, but bleached out and transformed into more abstract, and potentially better transferable features.",
"One could view this as a method in between the open vocabulary strategy and the stylometric approach.",
"It has the advantage of fading out content in favor of more shallow patterns still based on the original text, without introducing additional processing such as part-of-speech tagging.",
"In particular, we investigate to what extent gender prediction can rely on generic non-lexical features (RQ1), and how predictive such models are when transferred to other languages (RQ2).",
"We also glean insights from human judgments, and investigate how well people can perform cross-lingual gender prediction (RQ3).",
"We focus on gender prediction for Twitter, motivated by data availability.",
"Contributions In this work i) we are the first to study cross-lingual gender prediction without relying on users' meta-data; ii) we propose a novel simple abstract feature representation which is surprisingly effective; and iii) we gauge human ability to perform cross-lingual gender detection, an angle of analysis which has not been studied thus far.",
"Profiling with Abstract Features Can we recover the gender of an author from bleached text, i.e., transformed text were the raw lexical strings are converted into abstract features?",
"We investigate this question by building a series of predictive models to infer the gender of a Twitter user, in absence of additional user-specific metadata.",
"Our approach can be seen as taking advantage of elements from a data-driven open-vocabulary approach, while trying to capture gender-specific style in text beyond topic.",
"To represent utterances in a more language agnostic way, we propose to simply transform the text into alternative textual representations, which deviate from the lexical form to allow for abstraction.",
"We propose the following transformations, exemplified in Table 1 .",
"They are mostly motivated by intuition and inspired by prior work, like the use of shape features from NER and parsing (Petrov and Klein, 2007; Schnabel and Schütze, 2014; Limsopatham and Collier, 2016) : • Frequency Each word is presented as its binned frequency in the training data; bins are sized by orders of magnitude.",
"• Length Number of characters (prefixed by 0 to avoid collision with the next transformation).",
"• PunctC Merges all consecutive alphanumeric characters to one 'W' and leaves all other characters as they are (C for conservative).",
"• PunctA Generalization of PunctC (A for aggressive), converting different types of punctuation to classes: emoticons 1 to 'E' and emojis 2 to 'J', other punctuation to 'P'.",
"• Shape Transforms uppercase characters to 'U', lowercase characters to 'L', digits to 'D' and all other characters to 'X'.",
"Repetitions of transformed characters are condensed to a maximum of 2 for greater generalization.",
"• Vowel-Consonant To approximate vowels, while being able to generalize over (Indo-European) languages, we convert any of the 'aeiou' characters to 'V', other alphabetic character to 'C', and all other characters to 'O'.",
"• AllAbs A combination (concatenation) of all previously described features.",
"Experiments In order to test whether abstract features are effective and transfer across languages, we set up experiments for gender prediction comparing lexicalized and bleached models for both in-and cross-language experiments.",
"We compare them to a model using multilingual embeddings (Ruder, 2017) .",
"Finally, we elicit human judgments both within language and across language.",
"The latter is to check whether a person with no prior knowledge of (the lexicon of) a given language can predict the gender of a user, and how that compares to an in-language setup and the machine.",
"If humans can predict gender cross-lingually, they are likely to rely on aspects beyond lexical information.",
"Data We obtain data from the TWISTY corpus , a multi-lingual collection of Twitter users, for the languages with 500+ users, namely Dutch, French, Portuguese, and Spanish.",
"We complement them with English, using data from a predecessor of TWISTY (Plank and Hovy, 2015) .",
"All datasets contain manually annotated gender information.",
"To simplify interpretation for the cross-language experiments, we balance gender in all datasets by downsampling to the minority class.",
"The datasets' final sizes are given in Table 2 .",
"We use 200 tweets per user, as done by previous work .",
"We leave the data untokenized to exclude any languagedependent processing, because original tokenization could preserve some signal.",
"Apart from mapping usernames to 'USER' and urls to 'URL' we do not perform any further data pre-processing.",
"Lexical vs Bleached Models We use the scikit-learn (Pedregosa et al., 2011) implementation of a linear SVM with default parameters (e.g., L2 regularization).",
"We use 10-fold cross validation for all in-language experiments.",
"For the cross-lingual experiments, we train on all available source language data and test on all target language data.",
"For the lexicalized experiments, we adopt the features from the best performing system at the latest PAN evaluation campaign 3 (Basile et al., 2017) (word 1-2 grams and character 3-6 grams).",
"For the multilingual embeddings model we use the mean embedding representation from the system of (Plank, 2017) and add max, std and coverage features.",
"We create multilingual embeddings by projecting monolingual embeddings to a single multilingual space for all five languages using a recently proposed SVD-based projection method with a pseudo-dictionary (Smith et al., 2017) .",
"The monolingual embeddings are trained on large amounts of in-house Twitter data (as much data as we had access to, i.e., ranging from 30M tweets for French to 1,500M tweets in Dutch, with a word type coverage between 63 and 77%).",
"This results in an embedding space with a vocabulary size of 16M word types.",
"All code is available at https:// github.com/bplank/bleaching-text.",
"For the bleached experiments, we ran models with each feature set separately.",
"In this paper, we report results for the model where all features are combined, as it proved to be the most robust across languages.",
"We tuned the n-gram size of this model through in-language cross-validation, finding that n = 5 performs best.",
"When testing across languages, we report accuracy for two setups: average accuracy over each single-language model (AVG), and accuracy obtained when training on the concatenation of all languages but the target one (ALL).",
"The latter setting is also used for the embeddings model.",
"We report accuracy for all experiments.",
"Results and Analysis Table 2 shows results for both the cross-language and in-language experiments in the lexical and abstract-feature setting.",
"Within language, the lexical features unsurprisingly work the best, achieving an average accuracy of 80.5% over all languages.",
"The abstract features lose some information and score on average 11.8% lower, still beating the majority baseline (50%) by a large margin (68.7%).",
"If we go across language, the lexical approaches break down (overall to 53.7% for LEX AVG/56.3% for ALL), except for Portuguese and Spanish, thanks to their similarities (see Table 3 for pair-wise results).",
"The closelyrelated-language effect is also observed when training on all languages, as scores go up when the classifier has access to the related language.",
"The same holds for the multilingual embeddings model.",
"On average it reaches an accuracy of 59.8%.",
"The closeness effect for Portuguese and Spanish can also be observed in language-to-language experiments, where scores for ES →PT and PT →ES are the highest.",
"Results for the lexical models are generally lower on English, which might be due to smaller amounts of data (see first column in Table 2 providing number of users per language).",
"The abstract features fare surprisingly well and Table 4 , we can see that the use of an emoji (like ) and shape-based features are predictive of female users.",
"Quotes, question marks and length features, for example, appear to be more predictive of male users.",
"Human Evaluation We experimented with three different conditions, one within language and two across language.",
"For the latter, we set up an experiment where native speakers of Dutch were presented with tweets written in Portuguese and were asked to guess the poster's gender.",
"In the other experiment, we asked speakers of French to identify the gender of the writer when reading Dutch tweets.",
"In both cases, the participants declared to have no prior knowledge of the target language.",
"For the in-language experiment, we asked Dutch speakers to identify the gender of a user writing Dutch tweets.",
"The Dutch speakers who participated in the two experiments are distinct individuals.",
"Participants were informed of the experiment's goal.",
"Their identity is anonymized in the data.",
"We selected a random sample of 200 users from the Dutch and Portuguese data, preserving a 50/50 gender distribution.",
"Each user was represented by twenty tweets.",
"The answer key (F/M) order was randomized.",
"For each of the three experiments we had six judges, balanced for gender, and obtained three annotations per target user.",
"Results and Analysis Inter-annotator agreement for the tasks was measured via Fleiss kappa (n = 3, N = 200), and was higher for the in-language experiment (K = 0.40) than for the cross-language tasks (NL →PT: K = 0.25; FR →NL: K = 0.28).",
"Table 5 shows accuracy against the gold labels, comparing humans (average accuracy over three annotators) to lexical and bleached models on the exact same subset of 200 users.",
"Systems were tested under two different conditions regarding the number of tweets per user for the target language: machine and human saw the exact same twenty tweets, or the full set of tweets (200) per user, as done during training (Section 3.1).",
"First of all, our results indicate that in-language performance of humans is 70.5%, which is quite in line with the findings of Flekova et al.",
"(2016) , who report an accuracy of 75% on English.",
"Within language, lexicalized models are superior to humans if exposed to enough information (200 tweets setup).",
"One explanation for this might lie in an observation by Flekova et al.",
"(2016) , according to which people tend to rely too much on stereotypical lexical indicators when assigning gender to the poster of a tweet, while machines model less evident patterns.",
"Lexicalized models are also superior to the bleached ones, as already seen on the full datasets (Table 2) .",
"We can also observe that the amount of information available to represent a user influences system's performance.",
"Training on 200 tweets per user, but testing on 20 tweets only, decreases performance by 12 percentage points.",
"This is likely due to the fact that inputs are sparser, especially since the bleached model is trained on 5-grams.",
"5 The bleached model, when given 200 tweets per user, yields a performance that is slightly higher than human accuracy.",
"In the cross-language setting, the picture is very different.",
"Here, human performance is superior to the lexicalized models, independently of the amount of tweets per user at testing time.",
"This seems to indicate that if humans cannot rely on the lexicon, they might be exploiting some other signal when guessing the gender of a user who tweets in a language unknown to them.",
"Interestingly, the bleached models, which rely on non-lexical features, not only outperform the lexicalized ones in the cross-language experiments, but also neatly match the human scores.",
"Related Work Most existing work on gender prediction exploits shallow lexical information based on the linguistic production of the users.",
"Few studies investigate deeper syntactic information (Koppel et al., 2002; Feng et al., 2012) or non-linguistic input, e.g., language-independent clues such as visual (Alowibdi et al., 2013) or network information (Jurgens, 2013; Plank and Hovy, 2015; Ljubešić et al., 2017) .",
"A related angle is cross-genre profiling.",
"In both settings lexical models have limited portability due to their bias towards the language/genre they have been trained on (Rangel et al., 2016; Busger op Vollenbroek et al., 2016; .",
"Lexical bias has been shown to affect inlanguage human gender prediction, too.",
"Flekova et al.",
"(2016) found that people tend to rely too much on stereotypical lexical indicators, while Nguyen et al.",
"(2014) show that more than 10% of the Twitter users do actually not employ words that the crowd associates with their biological sex.",
"Our features abstract away from such lexical cues while retaining predictive signal.",
"Conclusions Bleaching text into abstract features is surprisingly effective for predicting gender, though lexical infor- 5 We experimented with training on 20 tweets rather than 200, and with different n-gram sizes (e.g., 1-4).",
"Despite slightly better results, we decided to use the trained models as they were to employ the same settings across all experiments (200 tweets per users, n = 5), with no further tuning.",
"mation is still more useful within language (RQ1).",
"However, models based on lexical clues fail when transferred to other languages, or require large amounts of unlabeled data from a similar domain as our experiments with the multilingual embedding model indicate.",
"Instead, our bleached models clearly capture some signal beyond the lexicon, and perform well in a cross-lingual setting (RQ2).",
"We are well aware that we are testing our crosslanguage bleached models in the context of closely related languages.",
"While some features (such as PunctA, or Frequency) might carry over to genetically more distant languages, other features (such as Vowels and Shape) would probably be meaningless.",
"Future work on this will require a sensible setting from a language typology perspective for choosing and testing adequate features.",
"In our novel study on human proficiency for cross-lingual gender prediction, we discovered that people are also abstracting away from the lexicon.",
"Indeed, we observe that they are able to detect gender by looking at tweets in a language they do not know (RQ3) with an accuracy of 60% on average."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"4",
"5"
],
"paper_header_content": [
"Introduction",
"Profiling with Abstract Features",
"Experiments",
"Lexical vs Bleached Models",
"Human Evaluation",
"Related Work",
"Conclusions"
]
}
|
GEM-SciDuet-train-52#paper-1090#slide-2
|
Cross lingual Gender Prediction
|
I Train a model on source language(s) and evaluate on
I Dataset: TwiSty corpus (Verhoeven et al., 2016) +
FR EN NL PT ES Test Language
USER Jaaa moeten we zeker doen
|
I Train a model on source language(s) and evaluate on
I Dataset: TwiSty corpus (Verhoeven et al., 2016) +
FR EN NL PT ES Test Language
USER Jaaa moeten we zeker doen
|
[] |
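The record above quotes the paper's bleaching transformations (Shape, PunctC, Vowel-Consonant, Length). As an illustrative sketch only — the authors' released implementation lives at github.com/bplank/bleaching-text, and all function names below are my own — the transformation descriptions can be coded roughly like this:

```python
import re

def shape(token):
    # Uppercase -> 'U', lowercase -> 'L', digits -> 'D', other -> 'X';
    # runs of the same transformed character are condensed to at most 2.
    s = "".join("U" if c.isupper() else "L" if c.islower()
                else "D" if c.isdigit() else "X" for c in token)
    return re.sub(r"(.)\1{2,}", r"\1\1", s)

def punct_c(token):
    # Conservative variant: collapse each run of alphanumerics to one 'W',
    # leaving punctuation characters untouched.
    return re.sub(r"\w+", "W", token)

def vowel_consonant(token):
    # Approximate vowels: 'aeiou' -> 'V', other letters -> 'C', else 'O'.
    return "".join("V" if c.lower() in "aeiou" else "C" if c.isalpha()
                   else "O" for c in token)

def length(token):
    # Character count, prefixed by '0' to avoid collisions with PunctC.
    return "0" + str(len(token))
```

For example, shape("Doritos") condenses to "ULL" and punct_c("lunch!") gives "W!", mirroring the paper's Table 1 style of output.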
GEM-SciDuet-train-52#paper-1090#slide-3
|
1090
|
Bleaching Text: Abstract Features for Cross-lingual Gender Prediction
|
Gender prediction has typically focused on lexical and social network features, yielding good performance, but making systems highly language-, topic-, and platform-dependent. Cross-lingual embeddings circumvent some of these limitations, but capture gender-specific style less. We propose an alternative: bleaching text, i.e., transforming lexical strings into more abstract features. This study provides evidence that such features allow for better transfer across languages. Moreover, we present a first study on the ability of humans to perform cross-lingual gender prediction. We find that human predictive power proves similar to that of our bleached models, and both perform better than lexical models.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116
],
"paper_content_text": [
"Introduction Author profiling is the task of discovering latent user attributes disclosed through text, such as gender, age, personality, income, location and occupation (Rao et al., 2010; Burger et al., 2011; Feng et al., 2012; Jurgens, 2013; Bamman et al., 2014; Plank and Hovy, 2015; Flekova et al., 2016) .",
"It is of interest to several applications including personalized machine translation, forensics, and marketing (Mirkin et al., 2015; Rangel et al., 2015) .",
"Early approaches to gender prediction (Koppel et al., 2002; Schler et al., 2006, e.g.)",
"are inspired by pioneering work on authorship attribution (Mosteller and Wallace, 1964) .",
"Such stylometric models typically rely on carefully handselected sets of content-independent features to capture style beyond topic.",
"Recently, open vocabulary approaches (Schwartz et al., 2013) , where the entire linguistic production of an author is used, yielded substantial performance gains in on-line user-attribute prediction (Nguyen et al., 2014; Preoţiuc-Pietro et al., 2015; Emmery et al., 2017) .",
"Indeed, the best performing gender prediction models exploit chiefly lexical information (Rangel et al., 2017; Basile et al., 2017) .",
"Relying heavily on the lexicon though has its limitations, as it results in models with limited portability.",
"Moreover, performance might be overly optimistic due to topic bias (Sarawgi et al., 2011) .",
"Recent work on cross-lingual author profiling has proposed the use of solely language-independent features (Ljubešić et al., 2017) , e.g., specific textual elements (percentage of emojis, URLs, etc) and users' meta-data/network (number of followers, etc), but this information is not always available.",
"We propose a novel approach where the actual text is still used, but bleached out and transformed into more abstract, and potentially better transferable features.",
"One could view this as a method in between the open vocabulary strategy and the stylometric approach.",
"It has the advantage of fading out content in favor of more shallow patterns still based on the original text, without introducing additional processing such as part-of-speech tagging.",
"In particular, we investigate to what extent gender prediction can rely on generic non-lexical features (RQ1), and how predictive such models are when transferred to other languages (RQ2).",
"We also glean insights from human judgments, and investigate how well people can perform cross-lingual gender prediction (RQ3).",
"We focus on gender prediction for Twitter, motivated by data availability.",
"Contributions In this work i) we are the first to study cross-lingual gender prediction without relying on users' meta-data; ii) we propose a novel simple abstract feature representation which is surprisingly effective; and iii) we gauge human ability to perform cross-lingual gender detection, an angle of analysis which has not been studied thus far.",
"Profiling with Abstract Features Can we recover the gender of an author from bleached text, i.e., transformed text were the raw lexical strings are converted into abstract features?",
"We investigate this question by building a series of predictive models to infer the gender of a Twitter user, in absence of additional user-specific metadata.",
"Our approach can be seen as taking advantage of elements from a data-driven open-vocabulary approach, while trying to capture gender-specific style in text beyond topic.",
"To represent utterances in a more language agnostic way, we propose to simply transform the text into alternative textual representations, which deviate from the lexical form to allow for abstraction.",
"We propose the following transformations, exemplified in Table 1 .",
"They are mostly motivated by intuition and inspired by prior work, like the use of shape features from NER and parsing (Petrov and Klein, 2007; Schnabel and Schütze, 2014; Limsopatham and Collier, 2016) : • Frequency Each word is presented as its binned frequency in the training data; bins are sized by orders of magnitude.",
"• Length Number of characters (prefixed by 0 to avoid collision with the next transformation).",
"• PunctC Merges all consecutive alphanumeric characters to one 'W' and leaves all other characters as they are (C for conservative).",
"• PunctA Generalization of PunctC (A for aggressive), converting different types of punctuation to classes: emoticons 1 to 'E' and emojis 2 to 'J', other punctuation to 'P'.",
"• Shape Transforms uppercase characters to 'U', lowercase characters to 'L', digits to 'D' and all other characters to 'X'.",
"Repetitions of transformed characters are condensed to a maximum of 2 for greater generalization.",
"• Vowel-Consonant To approximate vowels, while being able to generalize over (Indo-European) languages, we convert any of the 'aeiou' characters to 'V', other alphabetic character to 'C', and all other characters to 'O'.",
"• AllAbs A combination (concatenation) of all previously described features.",
"Experiments In order to test whether abstract features are effective and transfer across languages, we set up experiments for gender prediction comparing lexicalized and bleached models for both in-and cross-language experiments.",
"We compare them to a model using multilingual embeddings (Ruder, 2017) .",
"Finally, we elicit human judgments both within language and across language.",
"The latter is to check whether a person with no prior knowledge of (the lexicon of) a given language can predict the gender of a user, and how that compares to an in-language setup and the machine.",
"If humans can predict gender cross-lingually, they are likely to rely on aspects beyond lexical information.",
"Data We obtain data from the TWISTY corpus , a multi-lingual collection of Twitter users, for the languages with 500+ users, namely Dutch, French, Portuguese, and Spanish.",
"We complement them with English, using data from a predecessor of TWISTY (Plank and Hovy, 2015) .",
"All datasets contain manually annotated gender information.",
"To simplify interpretation for the cross-language experiments, we balance gender in all datasets by downsampling to the minority class.",
"The datasets' final sizes are given in Table 2 .",
"We use 200 tweets per user, as done by previous work .",
"We leave the data untokenized to exclude any languagedependent processing, because original tokenization could preserve some signal.",
"Apart from mapping usernames to 'USER' and urls to 'URL' we do not perform any further data pre-processing.",
"Lexical vs Bleached Models We use the scikit-learn (Pedregosa et al., 2011) implementation of a linear SVM with default parameters (e.g., L2 regularization).",
"We use 10-fold cross validation for all in-language experiments.",
"For the cross-lingual experiments, we train on all available source language data and test on all target language data.",
"For the lexicalized experiments, we adopt the features from the best performing system at the latest PAN evaluation campaign 3 (Basile et al., 2017) (word 1-2 grams and character 3-6 grams).",
"For the multilingual embeddings model we use the mean embedding representation from the system of (Plank, 2017) and add max, std and coverage features.",
"We create multilingual embeddings by projecting monolingual embeddings to a single multilingual space for all five languages using a recently proposed SVD-based projection method with a pseudo-dictionary (Smith et al., 2017) .",
"The monolingual embeddings are trained on large amounts of in-house Twitter data (as much data as we had access to, i.e., ranging from 30M tweets for French to 1,500M tweets in Dutch, with a word type coverage between 63 and 77%).",
"This results in an embedding space with a vocabulary size of 16M word types.",
"All code is available at https:// github.com/bplank/bleaching-text.",
"For the bleached experiments, we ran models with each feature set separately.",
"In this paper, we report results for the model where all features are combined, as it proved to be the most robust across languages.",
"We tuned the n-gram size of this model through in-language cross-validation, finding that n = 5 performs best.",
"When testing across languages, we report accuracy for two setups: average accuracy over each single-language model (AVG), and accuracy obtained when training on the concatenation of all languages but the target one (ALL).",
"The latter setting is also used for the embeddings model.",
"We report accuracy for all experiments.",
"Results and Analysis Table 2 shows results for both the cross-language and in-language experiments in the lexical and abstract-feature setting.",
"Within language, the lexical features unsurprisingly work the best, achieving an average accuracy of 80.5% over all languages.",
"The abstract features lose some information and score on average 11.8% lower, still beating the majority baseline (50%) by a large margin (68.7%).",
"If we go across language, the lexical approaches break down (overall to 53.7% for LEX AVG/56.3% for ALL), except for Portuguese and Spanish, thanks to their similarities (see Table 3 for pair-wise results).",
"The closelyrelated-language effect is also observed when training on all languages, as scores go up when the classifier has access to the related language.",
"The same holds for the multilingual embeddings model.",
"On average it reaches an accuracy of 59.8%.",
"The closeness effect for Portuguese and Spanish can also be observed in language-to-language experiments, where scores for ES →PT and PT →ES are the highest.",
"Results for the lexical models are generally lower on English, which might be due to smaller amounts of data (see first column in Table 2 providing number of users per language).",
"The abstract features fare surprisingly well and Table 4 , we can see that the use of an emoji (like ) and shape-based features are predictive of female users.",
"Quotes, question marks and length features, for example, appear to be more predictive of male users.",
"Human Evaluation We experimented with three different conditions, one within language and two across language.",
"For the latter, we set up an experiment where native speakers of Dutch were presented with tweets written in Portuguese and were asked to guess the poster's gender.",
"In the other experiment, we asked speakers of French to identify the gender of the writer when reading Dutch tweets.",
"In both cases, the participants declared to have no prior knowledge of the target language.",
"For the in-language experiment, we asked Dutch speakers to identify the gender of a user writing Dutch tweets.",
"The Dutch speakers who participated in the two experiments are distinct individuals.",
"Participants were informed of the experiment's goal.",
"Their identity is anonymized in the data.",
"We selected a random sample of 200 users from the Dutch and Portuguese data, preserving a 50/50 gender distribution.",
"Each user was represented by twenty tweets.",
"The answer key (F/M) order was randomized.",
"For each of the three experiments we had six judges, balanced for gender, and obtained three annotations per target user.",
"Results and Analysis Inter-annotator agreement for the tasks was measured via Fleiss kappa (n = 3, N = 200), and was higher for the in-language experiment (K = 0.40) than for the cross-language tasks (NL →PT: K = 0.25; FR →NL: K = 0.28).",
"Table 5 shows accuracy against the gold labels, comparing humans (average accuracy over three annotators) to lexical and bleached models on the exact same subset of 200 users.",
"Systems were tested under two different conditions regarding the number of tweets per user for the target language: machine and human saw the exact same twenty tweets, or the full set of tweets (200) per user, as done during training (Section 3.1).",
"First of all, our results indicate that in-language performance of humans is 70.5%, which is quite in line with the findings of Flekova et al.",
"(2016) , who report an accuracy of 75% on English.",
"Within language, lexicalized models are superior to humans if exposed to enough information (200 tweets setup).",
"One explanation for this might lie in an observation by Flekova et al.",
"(2016) , according to which people tend to rely too much on stereotypical lexical indicators when assigning gender to the poster of a tweet, while machines model less evident patterns.",
"Lexicalized models are also superior to the bleached ones, as already seen on the full datasets (Table 2) .",
"We can also observe that the amount of information available to represent a user influences system's performance.",
"Training on 200 tweets per user, but testing on 20 tweets only, decreases performance by 12 percentage points.",
"This is likely due to the fact that inputs are sparser, especially since the bleached model is trained on 5-grams.",
"5 The bleached model, when given 200 tweets per user, yields a performance that is slightly higher than human accuracy.",
"In the cross-language setting, the picture is very different.",
"Here, human performance is superior to the lexicalized models, independently of the amount of tweets per user at testing time.",
"This seems to indicate that if humans cannot rely on the lexicon, they might be exploiting some other signal when guessing the gender of a user who tweets in a language unknown to them.",
"Interestingly, the bleached models, which rely on non-lexical features, not only outperform the lexicalized ones in the cross-language experiments, but also neatly match the human scores.",
"Related Work Most existing work on gender prediction exploits shallow lexical information based on the linguistic production of the users.",
"Few studies investigate deeper syntactic information (Koppel et al., 2002; Feng et al., 2012) or non-linguistic input, e.g., language-independent clues such as visual (Alowibdi et al., 2013) or network information (Jurgens, 2013; Plank and Hovy, 2015; Ljubešić et al., 2017) .",
"A related angle is cross-genre profiling.",
"In both settings lexical models have limited portability due to their bias towards the language/genre they have been trained on (Rangel et al., 2016; Busger op Vollenbroek et al., 2016; .",
"Lexical bias has been shown to affect inlanguage human gender prediction, too.",
"Flekova et al.",
"(2016) found that people tend to rely too much on stereotypical lexical indicators, while Nguyen et al.",
"(2014) show that more than 10% of the Twitter users do actually not employ words that the crowd associates with their biological sex.",
"Our features abstract away from such lexical cues while retaining predictive signal.",
"Conclusions Bleaching text into abstract features is surprisingly effective for predicting gender, though lexical infor- 5 We experimented with training on 20 tweets rather than 200, and with different n-gram sizes (e.g., 1-4).",
"Despite slightly better results, we decided to use the trained models as they were to employ the same settings across all experiments (200 tweets per users, n = 5), with no further tuning.",
"mation is still more useful within language (RQ1).",
"However, models based on lexical clues fail when transferred to other languages, or require large amounts of unlabeled data from a similar domain as our experiments with the multilingual embedding model indicate.",
"Instead, our bleached models clearly capture some signal beyond the lexicon, and perform well in a cross-lingual setting (RQ2).",
"We are well aware that we are testing our crosslanguage bleached models in the context of closely related languages.",
"While some features (such as PunctA, or Frequency) might carry over to genetically more distant languages, other features (such as Vowels and Shape) would probably be meaningless.",
"Future work on this will require a sensible setting from a language typology perspective for choosing and testing adequate features.",
"In our novel study on human proficiency for cross-lingual gender prediction, we discovered that people are also abstracting away from the lexicon.",
"Indeed, we observe that they are able to detect gender by looking at tweets in a language they do not know (RQ3) with an accuracy of 60% on average."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"4",
"5"
],
"paper_header_content": [
"Introduction",
"Profiling with Abstract Features",
"Experiments",
"Lexical vs Bleached Models",
"Human Evaluation",
"Related Work",
"Conclusions"
]
}
|
GEM-SciDuet-train-52#paper-1090#slide-3
|
Bleaching Text
|
Original Massacred a bag of Doritos for lunch!
PunctC w w w w w w w!
PunctA w w w w w w w w w w w wp jjjj
Shape w w ull w w w w w w w w l ll ll ull w wp ll llx jjjj xx
Vowels w w ull w w w w l ll w w w w ll ull w w ll wp llx cvccvccvc v cvc vc cvcvcvc cvc cvccco
I Replace usernames and URLs
I Use concatenation of the bleached representations
I 5-grams perform best
FR EN NL PT ES Test Language
Trained on all other languages:
EN NL FR PT ES Test Language
W W W W W USER E W W W
W W W W ?
E W W W W
W W, W W W? LL LL LL LL LX
PP W W W W
LL LL LL LL LUU
W W W W JJJ
W W W W &W;W
J W W W W
|
Original Massacred a bag of Doritos for lunch!
PunctC w w w w w w w!
PunctA w w w w w w w w w w w wp jjjj
Shape w w ull w w w w w w w w l ll ll ull w wp ll llx jjjj xx
Vowels w w ull w w w w l ll w w w w ll ull w w ll wp llx cvccvccvc v cvc vc cvcvcvc cvc cvccco
I Replace usernames and URLs
I Use concatenation of the bleached representations
I 5-grams perform best
FR EN NL PT ES Test Language
Trained on all other languages:
EN NL FR PT ES Test Language
W W W W W USER E W W W
W W W W ?
E W W W W
W W, W W W? LL LL LL LL LX
PP W W W W
LL LL LL LL LUU
W W W W JJJ
W W W W &W;W
J W W W W
|
[] |
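The paper's Frequency feature represents each word by its binned training-data frequency, with bins sized by orders of magnitude. A minimal sketch of how such binning could work (my own illustration; the exact bin boundaries used by the authors are in their released code):

```python
import math
from collections import Counter

def frequency_bins(train_tokens):
    """Map each training token to a frequency bin sized by order of
    magnitude: counts 1-9 -> bin 1, 10-99 -> bin 2, 100-999 -> bin 3, ..."""
    counts = Counter(train_tokens)
    return {tok: int(math.log10(c)) + 1 for tok, c in counts.items()}

def bleach_frequency(tokens, bins):
    # Represent each token by its bin; unseen tokens fall back to bin 0.
    return [str(bins.get(t, 0)) for t in tokens]
```

Under this scheme a word seen 250 times in training lands in bin 3, while a rare word seen 3 times lands in bin 1, abstracting away the lexical form entirely.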
GEM-SciDuet-train-52#paper-1090#slide-4
|
1090
|
Bleaching Text: Abstract Features for Cross-lingual Gender Prediction
|
Gender prediction has typically focused on lexical and social network features, yielding good performance, but making systems highly language-, topic-, and platformdependent. Cross-lingual embeddings circumvent some of these limitations, but capture gender-specific style less. We propose an alternative: bleaching text, i.e., transforming lexical strings into more abstract features. This study provides evidence that such features allow for better transfer across languages. Moreover, we present a first study on the ability of humans to perform cross-lingual gender prediction. We find that human predictive power proves similar to that of our bleached models, and both perform better than lexical models.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116
],
"paper_content_text": [
"Introduction Author profiling is the task of discovering latent user attributes disclosed through text, such as gender, age, personality, income, location and occupation (Rao et al., 2010; Burger et al., 2011; Feng et al., 2012; Jurgens, 2013; Bamman et al., 2014; Plank and Hovy, 2015; Flekova et al., 2016) .",
"It is of interest to several applications including personalized machine translation, forensics, and marketing (Mirkin et al., 2015; Rangel et al., 2015) .",
"Early approaches to gender prediction (Koppel et al., 2002; Schler et al., 2006, e.g.)",
"are inspired by pioneering work on authorship attribution (Mosteller and Wallace, 1964) .",
"Such stylometric models typically rely on carefully handselected sets of content-independent features to capture style beyond topic.",
"Recently, open vocabulary approaches (Schwartz et al., 2013) , where the entire linguistic production of an author is used, yielded substantial performance gains in on-line user-attribute prediction (Nguyen et al., 2014; Preoţiuc-Pietro et al., 2015; Emmery et al., 2017) .",
"Indeed, the best performing gender prediction models exploit chiefly lexical information (Rangel et al., 2017; Basile et al., 2017) .",
"Relying heavily on the lexicon though has its limitations, as it results in models with limited portability.",
"Moreover, performance might be overly optimistic due to topic bias (Sarawgi et al., 2011) .",
"Recent work on cross-lingual author profiling has proposed the use of solely language-independent features (Ljubešić et al., 2017) , e.g., specific textual elements (percentage of emojis, URLs, etc) and users' meta-data/network (number of followers, etc), but this information is not always available.",
"We propose a novel approach where the actual text is still used, but bleached out and transformed into more abstract, and potentially better transferable features.",
"One could view this as a method in between the open vocabulary strategy and the stylometric approach.",
"It has the advantage of fading out content in favor of more shallow patterns still based on the original text, without introducing additional processing such as part-of-speech tagging.",
"In particular, we investigate to what extent gender prediction can rely on generic non-lexical features (RQ1), and how predictive such models are when transferred to other languages (RQ2).",
"We also glean insights from human judgments, and investigate how well people can perform cross-lingual gender prediction (RQ3).",
"We focus on gender prediction for Twitter, motivated by data availability.",
"Contributions In this work i) we are the first to study cross-lingual gender prediction without relying on users' meta-data; ii) we propose a novel simple abstract feature representation which is surprisingly effective; and iii) we gauge human ability to perform cross-lingual gender detection, an angle of analysis which has not been studied thus far.",
"Profiling with Abstract Features Can we recover the gender of an author from bleached text, i.e., transformed text were the raw lexical strings are converted into abstract features?",
"We investigate this question by building a series of predictive models to infer the gender of a Twitter user, in absence of additional user-specific metadata.",
"Our approach can be seen as taking advantage of elements from a data-driven open-vocabulary approach, while trying to capture gender-specific style in text beyond topic.",
"To represent utterances in a more language agnostic way, we propose to simply transform the text into alternative textual representations, which deviate from the lexical form to allow for abstraction.",
"We propose the following transformations, exemplified in Table 1 .",
"They are mostly motivated by intuition and inspired by prior work, like the use of shape features from NER and parsing (Petrov and Klein, 2007; Schnabel and Schütze, 2014; Limsopatham and Collier, 2016) : • Frequency Each word is presented as its binned frequency in the training data; bins are sized by orders of magnitude.",
"• Length Number of characters (prefixed by 0 to avoid collision with the next transformation).",
"• PunctC Merges all consecutive alphanumeric characters to one 'W' and leaves all other characters as they are (C for conservative).",
"• PunctA Generalization of PunctC (A for aggressive), converting different types of punctuation to classes: emoticons 1 to 'E' and emojis 2 to 'J', other punctuation to 'P'.",
"• Shape Transforms uppercase characters to 'U', lowercase characters to 'L', digits to 'D' and all other characters to 'X'.",
"Repetitions of transformed characters are condensed to a maximum of 2 for greater generalization.",
"• Vowel-Consonant To approximate vowels, while being able to generalize over (Indo-European) languages, we convert any of the 'aeiou' characters to 'V', other alphabetic character to 'C', and all other characters to 'O'.",
"• AllAbs A combination (concatenation) of all previously described features.",
"Experiments In order to test whether abstract features are effective and transfer across languages, we set up experiments for gender prediction comparing lexicalized and bleached models for both in-and cross-language experiments.",
"We compare them to a model using multilingual embeddings (Ruder, 2017) .",
"Finally, we elicit human judgments both within language and across language.",
"The latter is to check whether a person with no prior knowledge of (the lexicon of) a given language can predict the gender of a user, and how that compares to an in-language setup and the machine.",
"If humans can predict gender cross-lingually, they are likely to rely on aspects beyond lexical information.",
"Data We obtain data from the TWISTY corpus , a multi-lingual collection of Twitter users, for the languages with 500+ users, namely Dutch, French, Portuguese, and Spanish.",
"We complement them with English, using data from a predecessor of TWISTY (Plank and Hovy, 2015) .",
"All datasets contain manually annotated gender information.",
"To simplify interpretation for the cross-language experiments, we balance gender in all datasets by downsampling to the minority class.",
"The datasets' final sizes are given in Table 2 .",
"We use 200 tweets per user, as done by previous work .",
"We leave the data untokenized to exclude any languagedependent processing, because original tokenization could preserve some signal.",
"Apart from mapping usernames to 'USER' and urls to 'URL' we do not perform any further data pre-processing.",
"Lexical vs Bleached Models We use the scikit-learn (Pedregosa et al., 2011) implementation of a linear SVM with default parameters (e.g., L2 regularization).",
"We use 10-fold cross validation for all in-language experiments.",
"For the cross-lingual experiments, we train on all available source language data and test on all target language data.",
"For the lexicalized experiments, we adopt the features from the best performing system at the latest PAN evaluation campaign 3 (Basile et al., 2017) (word 1-2 grams and character 3-6 grams).",
"For the multilingual embeddings model we use the mean embedding representation from the system of (Plank, 2017) and add max, std and coverage features.",
"We create multilingual embeddings by projecting monolingual embeddings to a single multilingual space for all five languages using a recently proposed SVD-based projection method with a pseudo-dictionary (Smith et al., 2017) .",
"The monolingual embeddings are trained on large amounts of in-house Twitter data (as much data as we had access to, i.e., ranging from 30M tweets for French to 1,500M tweets in Dutch, with a word type coverage between 63 and 77%).",
"This results in an embedding space with a vocabulary size of 16M word types.",
"All code is available at https:// github.com/bplank/bleaching-text.",
"For the bleached experiments, we ran models with each feature set separately.",
"In this paper, we report results for the model where all features are combined, as it proved to be the most robust across languages.",
"We tuned the n-gram size of this model through in-language cross-validation, finding that n = 5 performs best.",
"When testing across languages, we report accuracy for two setups: average accuracy over each single-language model (AVG), and accuracy obtained when training on the concatenation of all languages but the target one (ALL).",
"The latter setting is also used for the embeddings model.",
"We report accuracy for all experiments.",
"Results and Analysis Table 2 shows results for both the cross-language and in-language experiments in the lexical and abstract-feature setting.",
"Within language, the lexical features unsurprisingly work the best, achieving an average accuracy of 80.5% over all languages.",
"The abstract features lose some information and score on average 11.8% lower, still beating the majority baseline (50%) by a large margin (68.7%).",
"If we go across language, the lexical approaches break down (overall to 53.7% for LEX AVG/56.3% for ALL), except for Portuguese and Spanish, thanks to their similarities (see Table 3 for pair-wise results).",
"The closelyrelated-language effect is also observed when training on all languages, as scores go up when the classifier has access to the related language.",
"The same holds for the multilingual embeddings model.",
"On average it reaches an accuracy of 59.8%.",
"The closeness effect for Portuguese and Spanish can also be observed in language-to-language experiments, where scores for ES →PT and PT →ES are the highest.",
"Results for the lexical models are generally lower on English, which might be due to smaller amounts of data (see first column in Table 2 providing number of users per language).",
"The abstract features fare surprisingly well and Table 4 , we can see that the use of an emoji (like ) and shape-based features are predictive of female users.",
"Quotes, question marks and length features, for example, appear to be more predictive of male users.",
"Human Evaluation We experimented with three different conditions, one within language and two across language.",
"For the latter, we set up an experiment where native speakers of Dutch were presented with tweets written in Portuguese and were asked to guess the poster's gender.",
"In the other experiment, we asked speakers of French to identify the gender of the writer when reading Dutch tweets.",
"In both cases, the participants declared to have no prior knowledge of the target language.",
"For the in-language experiment, we asked Dutch speakers to identify the gender of a user writing Dutch tweets.",
"The Dutch speakers who participated in the two experiments are distinct individuals.",
"Participants were informed of the experiment's goal.",
"Their identity is anonymized in the data.",
"We selected a random sample of 200 users from the Dutch and Portuguese data, preserving a 50/50 gender distribution.",
"Each user was represented by twenty tweets.",
"The answer key (F/M) order was randomized.",
"For each of the three experiments we had six judges, balanced for gender, and obtained three annotations per target user.",
"Results and Analysis Inter-annotator agreement for the tasks was measured via Fleiss kappa (n = 3, N = 200), and was higher for the in-language experiment (K = 0.40) than for the cross-language tasks (NL →PT: K = 0.25; FR →NL: K = 0.28).",
"Table 5 shows accuracy against the gold labels, comparing humans (average accuracy over three annotators) to lexical and bleached models on the exact same subset of 200 users.",
"Systems were tested under two different conditions regarding the number of tweets per user for the target language: machine and human saw the exact same twenty tweets, or the full set of tweets (200) per user, as done during training (Section 3.1).",
"First of all, our results indicate that in-language performance of humans is 70.5%, which is quite in line with the findings of Flekova et al.",
"(2016) , who report an accuracy of 75% on English.",
"Within language, lexicalized models are superior to humans if exposed to enough information (200 tweets setup).",
"One explanation for this might lie in an observation by Flekova et al.",
"(2016) , according to which people tend to rely too much on stereotypical lexical indicators when assigning gender to the poster of a tweet, while machines model less evident patterns.",
"Lexicalized models are also superior to the bleached ones, as already seen on the full datasets (Table 2) .",
"We can also observe that the amount of information available to represent a user influences system's performance.",
"Training on 200 tweets per user, but testing on 20 tweets only, decreases performance by 12 percentage points.",
"This is likely due to the fact that inputs are sparser, especially since the bleached model is trained on 5-grams.",
"5 The bleached model, when given 200 tweets per user, yields a performance that is slightly higher than human accuracy.",
"In the cross-language setting, the picture is very different.",
"Here, human performance is superior to the lexicalized models, independently of the amount of tweets per user at testing time.",
"This seems to indicate that if humans cannot rely on the lexicon, they might be exploiting some other signal when guessing the gender of a user who tweets in a language unknown to them.",
"Interestingly, the bleached models, which rely on non-lexical features, not only outperform the lexicalized ones in the cross-language experiments, but also neatly match the human scores.",
"Related Work Most existing work on gender prediction exploits shallow lexical information based on the linguistic production of the users.",
"Few studies investigate deeper syntactic information (Koppel et al., 2002; Feng et al., 2012) or non-linguistic input, e.g., language-independent clues such as visual (Alowibdi et al., 2013) or network information (Jurgens, 2013; Plank and Hovy, 2015; Ljubešić et al., 2017) .",
"A related angle is cross-genre profiling.",
"In both settings lexical models have limited portability due to their bias towards the language/genre they have been trained on (Rangel et al., 2016; Busger op Vollenbroek et al., 2016; .",
"Lexical bias has been shown to affect inlanguage human gender prediction, too.",
"Flekova et al.",
"(2016) found that people tend to rely too much on stereotypical lexical indicators, while Nguyen et al.",
"(2014) show that more than 10% of the Twitter users do actually not employ words that the crowd associates with their biological sex.",
"Our features abstract away from such lexical cues while retaining predictive signal.",
"Conclusions Bleaching text into abstract features is surprisingly effective for predicting gender, though lexical infor- 5 We experimented with training on 20 tweets rather than 200, and with different n-gram sizes (e.g., 1-4).",
"Despite slightly better results, we decided to use the trained models as they were to employ the same settings across all experiments (200 tweets per users, n = 5), with no further tuning.",
"mation is still more useful within language (RQ1).",
"However, models based on lexical clues fail when transferred to other languages, or require large amounts of unlabeled data from a similar domain as our experiments with the multilingual embedding model indicate.",
"Instead, our bleached models clearly capture some signal beyond the lexicon, and perform well in a cross-lingual setting (RQ2).",
"We are well aware that we are testing our crosslanguage bleached models in the context of closely related languages.",
"While some features (such as PunctA, or Frequency) might carry over to genetically more distant languages, other features (such as Vowels and Shape) would probably be meaningless.",
"Future work on this will require a sensible setting from a language typology perspective for choosing and testing adequate features.",
"In our novel study on human proficiency for cross-lingual gender prediction, we discovered that people are also abstracting away from the lexicon.",
"Indeed, we observe that they are able to detect gender by looking at tweets in a language they do not know (RQ3) with an accuracy of 60% on average."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"4",
"5"
],
"paper_header_content": [
"Introduction",
"Profiling with Abstract Features",
"Experiments",
"Lexical vs Bleached Models",
"Human Evaluation",
"Related Work",
"Conclusions"
]
}
|
GEM-SciDuet-train-52#paper-1090#slide-4
|
Human Experiments
|
- Are humans able to predict gender based only on text for
- 20 tweets per user (instead of 200)
- 6 annotators per language pair
- Each annotating 100 users
- 200 users per language pair, so 3 predictions per user
[Screenshot of the annotation interface: twenty Portuguese tweets posted by a single user (usernames replaced by USER, urls by URL), followed by the question "Do you think that the poster of these tweets is male or female? (required)" with the options Male / Female and the note "Please use your intuition."]
Language pairs tested: NL→NL, NL→PT, FR→NL
(note that the classifier had access to 200 tweets)
|
- Are humans able to predict gender based only on text for
- 20 tweets per user (instead of 200)
- 6 annotators per language pair
- Each annotating 100 users
- 200 users per language pair, so 3 predictions per user
[Screenshot of the annotation interface: twenty Portuguese tweets posted by a single user (usernames replaced by USER, urls by URL), followed by the question "Do you think that the poster of these tweets is male or female? (required)" with the options Male / Female and the note "Please use your intuition."]
Language pairs tested: NL→NL, NL→PT, FR→NL
(note that the classifier had access to 200 tweets)
|
[] |
GEM-SciDuet-train-52#paper-1090#slide-5
|
1090
|
Bleaching Text: Abstract Features for Cross-lingual Gender Prediction
|
Gender prediction has typically focused on lexical and social network features, yielding good performance, but making systems highly language-, topic-, and platform-dependent. Cross-lingual embeddings circumvent some of these limitations, but capture gender-specific style less. We propose an alternative: bleaching text, i.e., transforming lexical strings into more abstract features. This study provides evidence that such features allow for better transfer across languages. Moreover, we present a first study on the ability of humans to perform cross-lingual gender prediction. We find that human predictive power proves similar to that of our bleached models, and both perform better than lexical models.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116
],
"paper_content_text": [
"Introduction Author profiling is the task of discovering latent user attributes disclosed through text, such as gender, age, personality, income, location and occupation (Rao et al., 2010; Burger et al., 2011; Feng et al., 2012; Jurgens, 2013; Bamman et al., 2014; Plank and Hovy, 2015; Flekova et al., 2016) .",
"It is of interest to several applications including personalized machine translation, forensics, and marketing (Mirkin et al., 2015; Rangel et al., 2015) .",
"Early approaches to gender prediction (Koppel et al., 2002; Schler et al., 2006, e.g.)",
"are inspired by pioneering work on authorship attribution (Mosteller and Wallace, 1964) .",
"Such stylometric models typically rely on carefully handselected sets of content-independent features to capture style beyond topic.",
"Recently, open vocabulary approaches (Schwartz et al., 2013) , where the entire linguistic production of an author is used, yielded substantial performance gains in on-line user-attribute prediction (Nguyen et al., 2014; Preoţiuc-Pietro et al., 2015; Emmery et al., 2017) .",
"Indeed, the best performing gender prediction models exploit chiefly lexical information (Rangel et al., 2017; Basile et al., 2017) .",
"Relying heavily on the lexicon though has its limitations, as it results in models with limited portability.",
"Moreover, performance might be overly optimistic due to topic bias (Sarawgi et al., 2011) .",
"Recent work on cross-lingual author profiling has proposed the use of solely language-independent features (Ljubešić et al., 2017) , e.g., specific textual elements (percentage of emojis, URLs, etc) and users' meta-data/network (number of followers, etc), but this information is not always available.",
"We propose a novel approach where the actual text is still used, but bleached out and transformed into more abstract, and potentially better transferable features.",
"One could view this as a method in between the open vocabulary strategy and the stylometric approach.",
"It has the advantage of fading out content in favor of more shallow patterns still based on the original text, without introducing additional processing such as part-of-speech tagging.",
"In particular, we investigate to what extent gender prediction can rely on generic non-lexical features (RQ1), and how predictive such models are when transferred to other languages (RQ2).",
"We also glean insights from human judgments, and investigate how well people can perform cross-lingual gender prediction (RQ3).",
"We focus on gender prediction for Twitter, motivated by data availability.",
"Contributions In this work i) we are the first to study cross-lingual gender prediction without relying on users' meta-data; ii) we propose a novel simple abstract feature representation which is surprisingly effective; and iii) we gauge human ability to perform cross-lingual gender detection, an angle of analysis which has not been studied thus far.",
"Profiling with Abstract Features Can we recover the gender of an author from bleached text, i.e., transformed text were the raw lexical strings are converted into abstract features?",
"We investigate this question by building a series of predictive models to infer the gender of a Twitter user, in absence of additional user-specific metadata.",
"Our approach can be seen as taking advantage of elements from a data-driven open-vocabulary approach, while trying to capture gender-specific style in text beyond topic.",
"To represent utterances in a more language agnostic way, we propose to simply transform the text into alternative textual representations, which deviate from the lexical form to allow for abstraction.",
"We propose the following transformations, exemplified in Table 1 .",
"They are mostly motivated by intuition and inspired by prior work, like the use of shape features from NER and parsing (Petrov and Klein, 2007; Schnabel and Schütze, 2014; Limsopatham and Collier, 2016) : • Frequency Each word is presented as its binned frequency in the training data; bins are sized by orders of magnitude.",
"• Length Number of characters (prefixed by 0 to avoid collision with the next transformation).",
"• PunctC Merges all consecutive alphanumeric characters to one 'W' and leaves all other characters as they are (C for conservative).",
"• PunctA Generalization of PunctC (A for aggressive), converting different types of punctuation to classes: emoticons 1 to 'E' and emojis 2 to 'J', other punctuation to 'P'.",
"• Shape Transforms uppercase characters to 'U', lowercase characters to 'L', digits to 'D' and all other characters to 'X'.",
"Repetitions of transformed characters are condensed to a maximum of 2 for greater generalization.",
"• Vowel-Consonant To approximate vowels, while being able to generalize over (Indo-European) languages, we convert any of the 'aeiou' characters to 'V', other alphabetic character to 'C', and all other characters to 'O'.",
"• AllAbs A combination (concatenation) of all previously described features.",
"Experiments In order to test whether abstract features are effective and transfer across languages, we set up experiments for gender prediction comparing lexicalized and bleached models for both in-and cross-language experiments.",
"We compare them to a model using multilingual embeddings (Ruder, 2017) .",
"Finally, we elicit human judgments both within language and across language.",
"The latter is to check whether a person with no prior knowledge of (the lexicon of) a given language can predict the gender of a user, and how that compares to an in-language setup and the machine.",
"If humans can predict gender cross-lingually, they are likely to rely on aspects beyond lexical information.",
"Data We obtain data from the TWISTY corpus , a multi-lingual collection of Twitter users, for the languages with 500+ users, namely Dutch, French, Portuguese, and Spanish.",
"We complement them with English, using data from a predecessor of TWISTY (Plank and Hovy, 2015) .",
"All datasets contain manually annotated gender information.",
"To simplify interpretation for the cross-language experiments, we balance gender in all datasets by downsampling to the minority class.",
"The datasets' final sizes are given in Table 2 .",
"We use 200 tweets per user, as done by previous work .",
"We leave the data untokenized to exclude any languagedependent processing, because original tokenization could preserve some signal.",
"Apart from mapping usernames to 'USER' and urls to 'URL' we do not perform any further data pre-processing.",
"Lexical vs Bleached Models We use the scikit-learn (Pedregosa et al., 2011) implementation of a linear SVM with default parameters (e.g., L2 regularization).",
"We use 10-fold cross validation for all in-language experiments.",
"For the cross-lingual experiments, we train on all available source language data and test on all target language data.",
"For the lexicalized experiments, we adopt the features from the best performing system at the latest PAN evaluation campaign 3 (Basile et al., 2017) (word 1-2 grams and character 3-6 grams).",
"For the multilingual embeddings model we use the mean embedding representation from the system of (Plank, 2017) and add max, std and coverage features.",
"We create multilingual embeddings by projecting monolingual embeddings to a single multilingual space for all five languages using a recently proposed SVD-based projection method with a pseudo-dictionary (Smith et al., 2017) .",
"The monolingual embeddings are trained on large amounts of in-house Twitter data (as much data as we had access to, i.e., ranging from 30M tweets for French to 1,500M tweets in Dutch, with a word type coverage between 63 and 77%).",
"This results in an embedding space with a vocabulary size of 16M word types.",
"All code is available at https:// github.com/bplank/bleaching-text.",
"For the bleached experiments, we ran models with each feature set separately.",
"In this paper, we report results for the model where all features are combined, as it proved to be the most robust across languages.",
"We tuned the n-gram size of this model through in-language cross-validation, finding that n = 5 performs best.",
"When testing across languages, we report accuracy for two setups: average accuracy over each single-language model (AVG), and accuracy obtained when training on the concatenation of all languages but the target one (ALL).",
"The latter setting is also used for the embeddings model.",
"We report accuracy for all experiments.",
"Results and Analysis Table 2 shows results for both the cross-language and in-language experiments in the lexical and abstract-feature setting.",
"Within language, the lexical features unsurprisingly work the best, achieving an average accuracy of 80.5% over all languages.",
"The abstract features lose some information and score on average 11.8% lower, still beating the majority baseline (50%) by a large margin (68.7%).",
"If we go across language, the lexical approaches break down (overall to 53.7% for LEX AVG/56.3% for ALL), except for Portuguese and Spanish, thanks to their similarities (see Table 3 for pair-wise results).",
"The closelyrelated-language effect is also observed when training on all languages, as scores go up when the classifier has access to the related language.",
"The same holds for the multilingual embeddings model.",
"On average it reaches an accuracy of 59.8%.",
"The closeness effect for Portuguese and Spanish can also be observed in language-to-language experiments, where scores for ES →PT and PT →ES are the highest.",
"Results for the lexical models are generally lower on English, which might be due to smaller amounts of data (see first column in Table 2 providing number of users per language).",
"The abstract features fare surprisingly well and Table 4 , we can see that the use of an emoji (like ) and shape-based features are predictive of female users.",
"Quotes, question marks and length features, for example, appear to be more predictive of male users.",
"Human Evaluation We experimented with three different conditions, one within language and two across language.",
"For the latter, we set up an experiment where native speakers of Dutch were presented with tweets written in Portuguese and were asked to guess the poster's gender.",
"In the other experiment, we asked speakers of French to identify the gender of the writer when reading Dutch tweets.",
"In both cases, the participants declared to have no prior knowledge of the target language.",
"For the in-language experiment, we asked Dutch speakers to identify the gender of a user writing Dutch tweets.",
"The Dutch speakers who participated in the two experiments are distinct individuals.",
"Participants were informed of the experiment's goal.",
"Their identity is anonymized in the data.",
"We selected a random sample of 200 users from the Dutch and Portuguese data, preserving a 50/50 gender distribution.",
"Each user was represented by twenty tweets.",
"The answer key (F/M) order was randomized.",
"For each of the three experiments we had six judges, balanced for gender, and obtained three annotations per target user.",
"Results and Analysis Inter-annotator agreement for the tasks was measured via Fleiss kappa (n = 3, N = 200), and was higher for the in-language experiment (K = 0.40) than for the cross-language tasks (NL →PT: K = 0.25; FR →NL: K = 0.28).",
"Table 5 shows accuracy against the gold labels, comparing humans (average accuracy over three annotators) to lexical and bleached models on the exact same subset of 200 users.",
"Systems were tested under two different conditions regarding the number of tweets per user for the target language: machine and human saw the exact same twenty tweets, or the full set of tweets (200) per user, as done during training (Section 3.1).",
"First of all, our results indicate that in-language performance of humans is 70.5%, which is quite in line with the findings of Flekova et al.",
"(2016) , who report an accuracy of 75% on English.",
"Within language, lexicalized models are superior to humans if exposed to enough information (200 tweets setup).",
"One explanation for this might lie in an observation by Flekova et al.",
"(2016) , according to which people tend to rely too much on stereotypical lexical indicators when assigning gender to the poster of a tweet, while machines model less evident patterns.",
"Lexicalized models are also superior to the bleached ones, as already seen on the full datasets (Table 2) .",
"We can also observe that the amount of information available to represent a user influences the system's performance.",
"Training on 200 tweets per user, but testing on 20 tweets only, decreases performance by 12 percentage points.",
"This is likely due to the fact that inputs are sparser, especially since the bleached model is trained on 5-grams.",
"5 The bleached model, when given 200 tweets per user, yields a performance that is slightly higher than human accuracy.",
"In the cross-language setting, the picture is very different.",
"Here, human performance is superior to the lexicalized models, independently of the amount of tweets per user at testing time.",
"This seems to indicate that if humans cannot rely on the lexicon, they might be exploiting some other signal when guessing the gender of a user who tweets in a language unknown to them.",
"Interestingly, the bleached models, which rely on non-lexical features, not only outperform the lexicalized ones in the cross-language experiments, but also neatly match the human scores.",
"Related Work Most existing work on gender prediction exploits shallow lexical information based on the linguistic production of the users.",
"Few studies investigate deeper syntactic information (Koppel et al., 2002; Feng et al., 2012) or non-linguistic input, e.g., language-independent clues such as visual (Alowibdi et al., 2013) or network information (Jurgens, 2013; Plank and Hovy, 2015; Ljubešić et al., 2017) .",
"A related angle is cross-genre profiling.",
"In both settings lexical models have limited portability due to their bias towards the language/genre they have been trained on (Rangel et al., 2016; Busger op Vollenbroek et al., 2016).",
"Lexical bias has been shown to affect in-language human gender prediction, too.",
"Flekova et al.",
"(2016) found that people tend to rely too much on stereotypical lexical indicators, while Nguyen et al.",
"(2014) show that more than 10% of the Twitter users actually do not employ words that the crowd associates with their biological sex.",
"Our features abstract away from such lexical cues while retaining predictive signal.",
"Conclusions Bleaching text into abstract features is surprisingly effective for predicting gender, though lexical information is still more useful within language (RQ1).",
"(Footnote 5) We experimented with training on 20 tweets rather than 200, and with different n-gram sizes (e.g., 1-4).",
"Despite slightly better results, we decided to use the trained models as they were, to employ the same settings across all experiments (200 tweets per user, n = 5), with no further tuning.",
"However, models based on lexical clues fail when transferred to other languages, or require large amounts of unlabeled data from a similar domain as our experiments with the multilingual embedding model indicate.",
"Instead, our bleached models clearly capture some signal beyond the lexicon, and perform well in a cross-lingual setting (RQ2).",
"We are well aware that we are testing our crosslanguage bleached models in the context of closely related languages.",
"While some features (such as PunctA, or Frequency) might carry over to genetically more distant languages, other features (such as Vowels and Shape) would probably be meaningless.",
"Future work on this will require a sensible setting from a language typology perspective for choosing and testing adequate features.",
"In our novel study on human proficiency for cross-lingual gender prediction, we discovered that people are also abstracting away from the lexicon.",
"Indeed, we observe that they are able to detect gender by looking at tweets in a language they do not know (RQ3) with an accuracy of 60% on average."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"4",
"5"
],
"paper_header_content": [
"Introduction",
"Profiling with Abstract Features",
"Experiments",
"Lexical vs Bleached Models",
"Human Evaluation",
"Related Work",
"Conclusions"
]
}
|
GEM-SciDuet-train-52#paper-1090#slide-5
|
Conclusions
|
I Lexical models break down when used cross-language
I Bleaching text improves cross-lingual performance
I Human performance is on par with our bleached models
|
I Lexical models break down when used cross-language
I Bleaching text improves cross-lingual performance
I Human performance is on par with our bleached models
|
[] |
GEM-SciDuet-train-52#paper-1090#slide-6
|
1090
|
Bleaching Text: Abstract Features for Cross-lingual Gender Prediction
|
Gender prediction has typically focused on lexical and social network features, yielding good performance, but making systems highly language-, topic-, and platform-dependent. Cross-lingual embeddings circumvent some of these limitations, but capture gender-specific style less. We propose an alternative: bleaching text, i.e., transforming lexical strings into more abstract features. This study provides evidence that such features allow for better transfer across languages. Moreover, we present a first study on the ability of humans to perform cross-lingual gender prediction. We find that human predictive power proves similar to that of our bleached models, and both perform better than lexical models.
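The character-level bleaching transformations the abstract refers to can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' released code (which lives at github.com/bplank/bleaching-text); the function names are ours, and the Frequency feature is omitted because it requires corpus counts.

```python
import re

def shape(token):
    # Uppercase -> 'U', lowercase -> 'L', digit -> 'D', other -> 'X';
    # runs of 3+ identical symbols are condensed to 2 for generalization.
    out = "".join(
        "U" if c.isupper() else
        "L" if c.islower() else
        "D" if c.isdigit() else "X"
        for c in token
    )
    return re.sub(r"(.)\1{2,}", r"\1\1", out)

def vowel_consonant(token):
    # 'aeiou' -> 'V', other letters -> 'C', everything else -> 'O'.
    return "".join(
        "V" if c.lower() in "aeiou" else
        "C" if c.isalpha() else "O"
        for c in token
    )

def punct_c(token):
    # Conservative punctuation feature: collapse each run of
    # alphanumeric characters to a single 'W', keep the rest.
    return re.sub(r"[A-Za-z0-9]+", "W", token)

def length(token):
    # Character count, prefixed with '0' to avoid collisions with
    # the Frequency feature.
    return "0" + str(len(token))

print(shape("McDonald's"))        # -> ULULLXL
print(vowel_consonant("hello!"))  # -> CVCCVO
print(punct_c("hello!!"))         # -> W!!
print(length("hello"))            # -> 05
```

Each tweet is then the concatenation of such per-token abstractions, over which character/word n-grams are extracted exactly as for a lexical model.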
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116
],
"paper_content_text": [
"Introduction Author profiling is the task of discovering latent user attributes disclosed through text, such as gender, age, personality, income, location and occupation (Rao et al., 2010; Burger et al., 2011; Feng et al., 2012; Jurgens, 2013; Bamman et al., 2014; Plank and Hovy, 2015; Flekova et al., 2016) .",
"It is of interest to several applications including personalized machine translation, forensics, and marketing (Mirkin et al., 2015; Rangel et al., 2015) .",
"Early approaches to gender prediction (e.g., Koppel et al., 2002; Schler et al., 2006)",
"are inspired by pioneering work on authorship attribution (Mosteller and Wallace, 1964) .",
"Such stylometric models typically rely on carefully handselected sets of content-independent features to capture style beyond topic.",
"Recently, open vocabulary approaches (Schwartz et al., 2013) , where the entire linguistic production of an author is used, yielded substantial performance gains in on-line user-attribute prediction (Nguyen et al., 2014; Preoţiuc-Pietro et al., 2015; Emmery et al., 2017) .",
"Indeed, the best performing gender prediction models exploit chiefly lexical information (Rangel et al., 2017; Basile et al., 2017) .",
"Relying heavily on the lexicon though has its limitations, as it results in models with limited portability.",
"Moreover, performance might be overly optimistic due to topic bias (Sarawgi et al., 2011) .",
"Recent work on cross-lingual author profiling has proposed the use of solely language-independent features (Ljubešić et al., 2017) , e.g., specific textual elements (percentage of emojis, URLs, etc) and users' meta-data/network (number of followers, etc), but this information is not always available.",
"We propose a novel approach where the actual text is still used, but bleached out and transformed into more abstract, and potentially better transferable features.",
"One could view this as a method in between the open vocabulary strategy and the stylometric approach.",
"It has the advantage of fading out content in favor of more shallow patterns still based on the original text, without introducing additional processing such as part-of-speech tagging.",
"In particular, we investigate to what extent gender prediction can rely on generic non-lexical features (RQ1), and how predictive such models are when transferred to other languages (RQ2).",
"We also glean insights from human judgments, and investigate how well people can perform cross-lingual gender prediction (RQ3).",
"We focus on gender prediction for Twitter, motivated by data availability.",
"Contributions In this work i) we are the first to study cross-lingual gender prediction without relying on users' meta-data; ii) we propose a novel simple abstract feature representation which is surprisingly effective; and iii) we gauge human ability to perform cross-lingual gender detection, an angle of analysis which has not been studied thus far.",
"Profiling with Abstract Features Can we recover the gender of an author from bleached text, i.e., transformed text where the raw lexical strings are converted into abstract features?",
"We investigate this question by building a series of predictive models to infer the gender of a Twitter user, in absence of additional user-specific metadata.",
"Our approach can be seen as taking advantage of elements from a data-driven open-vocabulary approach, while trying to capture gender-specific style in text beyond topic.",
"To represent utterances in a more language agnostic way, we propose to simply transform the text into alternative textual representations, which deviate from the lexical form to allow for abstraction.",
"We propose the following transformations, exemplified in Table 1 .",
"They are mostly motivated by intuition and inspired by prior work, like the use of shape features from NER and parsing (Petrov and Klein, 2007; Schnabel and Schütze, 2014; Limsopatham and Collier, 2016): • Frequency Each word is represented as its binned frequency in the training data; bins are sized by orders of magnitude.",
"• Length Number of characters (prefixed by 0 to avoid collision with the next transformation).",
"• PunctC Merges all consecutive alphanumeric characters to one 'W' and leaves all other characters as they are (C for conservative).",
"• PunctA Generalization of PunctC (A for aggressive), converting different types of punctuation to classes: emoticons to 'E' and emojis to 'J', other punctuation to 'P'.",
"• Shape Transforms uppercase characters to 'U', lowercase characters to 'L', digits to 'D' and all other characters to 'X'.",
"Repetitions of transformed characters are condensed to a maximum of 2 for greater generalization.",
"• Vowel-Consonant To approximate vowels, while being able to generalize over (Indo-European) languages, we convert any of the 'aeiou' characters to 'V', any other alphabetic character to 'C', and all other characters to 'O'.",
"• AllAbs A combination (concatenation) of all previously described features.",
"Experiments In order to test whether abstract features are effective and transfer across languages, we set up experiments for gender prediction comparing lexicalized and bleached models for both in-and cross-language experiments.",
"We compare them to a model using multilingual embeddings (Ruder, 2017) .",
"Finally, we elicit human judgments both within language and across language.",
"The latter is to check whether a person with no prior knowledge of (the lexicon of) a given language can predict the gender of a user, and how that compares to an in-language setup and the machine.",
"If humans can predict gender cross-lingually, they are likely to rely on aspects beyond lexical information.",
"Data We obtain data from the TWISTY corpus, a multi-lingual collection of Twitter users, for the languages with 500+ users, namely Dutch, French, Portuguese, and Spanish.",
"We complement them with English, using data from a predecessor of TWISTY (Plank and Hovy, 2015) .",
"All datasets contain manually annotated gender information.",
"To simplify interpretation for the cross-language experiments, we balance gender in all datasets by downsampling to the minority class.",
"The datasets' final sizes are given in Table 2 .",
"We use 200 tweets per user, as done by previous work.",
"We leave the data untokenized to exclude any language-dependent processing, because original tokenization could preserve some signal.",
"Apart from mapping usernames to 'USER' and urls to 'URL' we do not perform any further data pre-processing.",
"Lexical vs Bleached Models We use the scikit-learn (Pedregosa et al., 2011) implementation of a linear SVM with default parameters (e.g., L2 regularization).",
"We use 10-fold cross validation for all in-language experiments.",
"For the cross-lingual experiments, we train on all available source language data and test on all target language data.",
"For the lexicalized experiments, we adopt the features from the best performing system at the latest PAN evaluation campaign (Basile et al., 2017) (word 1-2 grams and character 3-6 grams).",
"For the multilingual embeddings model we use the mean embedding representation from the system of (Plank, 2017) and add max, std and coverage features.",
"We create multilingual embeddings by projecting monolingual embeddings to a single multilingual space for all five languages using a recently proposed SVD-based projection method with a pseudo-dictionary (Smith et al., 2017) .",
"The monolingual embeddings are trained on large amounts of in-house Twitter data (as much data as we had access to, i.e., ranging from 30M tweets for French to 1,500M tweets in Dutch, with a word type coverage between 63 and 77%).",
"This results in an embedding space with a vocabulary size of 16M word types.",
"All code is available at https://github.com/bplank/bleaching-text.",
"For the bleached experiments, we ran models with each feature set separately.",
"In this paper, we report results for the model where all features are combined, as it proved to be the most robust across languages.",
"We tuned the n-gram size of this model through in-language cross-validation, finding that n = 5 performs best.",
"When testing across languages, we report accuracy for two setups: average accuracy over each single-language model (AVG), and accuracy obtained when training on the concatenation of all languages but the target one (ALL).",
"The latter setting is also used for the embeddings model.",
"We report accuracy for all experiments.",
"Results and Analysis Table 2 shows results for both the cross-language and in-language experiments in the lexical and abstract-feature setting.",
"Within language, the lexical features unsurprisingly work the best, achieving an average accuracy of 80.5% over all languages.",
"The abstract features lose some information and score on average 11.8% lower, still beating the majority baseline (50%) by a large margin (68.7%).",
"If we go across language, the lexical approaches break down (overall to 53.7% for LEX AVG/56.3% for ALL), except for Portuguese and Spanish, thanks to their similarities (see Table 3 for pair-wise results).",
"The closelyrelated-language effect is also observed when training on all languages, as scores go up when the classifier has access to the related language.",
"The same holds for the multilingual embeddings model.",
"On average it reaches an accuracy of 59.8%.",
"The closeness effect for Portuguese and Spanish can also be observed in language-to-language experiments, where scores for ES →PT and PT →ES are the highest.",
"Results for the lexical models are generally lower on English, which might be due to smaller amounts of data (see first column in Table 2 providing number of users per language).",
"The abstract features fare surprisingly well: from Table 4, we can see that the use of emojis and shape-based features are predictive of female users.",
"Quotes, question marks and length features, for example, appear to be more predictive of male users.",
"Human Evaluation We experimented with three different conditions, one within language and two across language.",
"For the latter, we set up an experiment where native speakers of Dutch were presented with tweets written in Portuguese and were asked to guess the poster's gender.",
"In the other experiment, we asked speakers of French to identify the gender of the writer when reading Dutch tweets.",
"In both cases, the participants declared to have no prior knowledge of the target language.",
"For the in-language experiment, we asked Dutch speakers to identify the gender of a user writing Dutch tweets.",
"The Dutch speakers who participated in the two experiments are distinct individuals.",
"Participants were informed of the experiment's goal.",
"Their identity is anonymized in the data.",
"We selected a random sample of 200 users from the Dutch and Portuguese data, preserving a 50/50 gender distribution.",
"Each user was represented by twenty tweets.",
"The answer key (F/M) order was randomized.",
"For each of the three experiments we had six judges, balanced for gender, and obtained three annotations per target user.",
"Results and Analysis Inter-annotator agreement for the tasks was measured via Fleiss kappa (n = 3, N = 200), and was higher for the in-language experiment (K = 0.40) than for the cross-language tasks (NL →PT: K = 0.25; FR →NL: K = 0.28).",
"Table 5 shows accuracy against the gold labels, comparing humans (average accuracy over three annotators) to lexical and bleached models on the exact same subset of 200 users.",
"Systems were tested under two different conditions regarding the number of tweets per user for the target language: machine and human saw the exact same twenty tweets, or the full set of tweets (200) per user, as done during training (Section 3.1).",
"First of all, our results indicate that in-language performance of humans is 70.5%, which is quite in line with the findings of Flekova et al.",
"(2016) , who report an accuracy of 75% on English.",
"Within language, lexicalized models are superior to humans if exposed to enough information (200 tweets setup).",
"One explanation for this might lie in an observation by Flekova et al.",
"(2016) , according to which people tend to rely too much on stereotypical lexical indicators when assigning gender to the poster of a tweet, while machines model less evident patterns.",
"Lexicalized models are also superior to the bleached ones, as already seen on the full datasets (Table 2) .",
"We can also observe that the amount of information available to represent a user influences the system's performance.",
"Training on 200 tweets per user, but testing on 20 tweets only, decreases performance by 12 percentage points.",
"This is likely due to the fact that inputs are sparser, especially since the bleached model is trained on 5-grams.",
"5 The bleached model, when given 200 tweets per user, yields a performance that is slightly higher than human accuracy.",
"In the cross-language setting, the picture is very different.",
"Here, human performance is superior to the lexicalized models, independently of the amount of tweets per user at testing time.",
"This seems to indicate that if humans cannot rely on the lexicon, they might be exploiting some other signal when guessing the gender of a user who tweets in a language unknown to them.",
"Interestingly, the bleached models, which rely on non-lexical features, not only outperform the lexicalized ones in the cross-language experiments, but also neatly match the human scores.",
"Related Work Most existing work on gender prediction exploits shallow lexical information based on the linguistic production of the users.",
"Few studies investigate deeper syntactic information (Koppel et al., 2002; Feng et al., 2012) or non-linguistic input, e.g., language-independent clues such as visual (Alowibdi et al., 2013) or network information (Jurgens, 2013; Plank and Hovy, 2015; Ljubešić et al., 2017) .",
"A related angle is cross-genre profiling.",
"In both settings lexical models have limited portability due to their bias towards the language/genre they have been trained on (Rangel et al., 2016; Busger op Vollenbroek et al., 2016).",
"Lexical bias has been shown to affect in-language human gender prediction, too.",
"Flekova et al.",
"(2016) found that people tend to rely too much on stereotypical lexical indicators, while Nguyen et al.",
"(2014) show that more than 10% of the Twitter users actually do not employ words that the crowd associates with their biological sex.",
"Our features abstract away from such lexical cues while retaining predictive signal.",
"Conclusions Bleaching text into abstract features is surprisingly effective for predicting gender, though lexical information is still more useful within language (RQ1).",
"(Footnote 5) We experimented with training on 20 tweets rather than 200, and with different n-gram sizes (e.g., 1-4).",
"Despite slightly better results, we decided to use the trained models as they were, to employ the same settings across all experiments (200 tweets per user, n = 5), with no further tuning.",
"However, models based on lexical clues fail when transferred to other languages, or require large amounts of unlabeled data from a similar domain as our experiments with the multilingual embedding model indicate.",
"Instead, our bleached models clearly capture some signal beyond the lexicon, and perform well in a cross-lingual setting (RQ2).",
"We are well aware that we are testing our crosslanguage bleached models in the context of closely related languages.",
"While some features (such as PunctA, or Frequency) might carry over to genetically more distant languages, other features (such as Vowels and Shape) would probably be meaningless.",
"Future work on this will require a sensible setting from a language typology perspective for choosing and testing adequate features.",
"In our novel study on human proficiency for cross-lingual gender prediction, we discovered that people are also abstracting away from the lexicon.",
"Indeed, we observe that they are able to detect gender by looking at tweets in a language they do not know (RQ3) with an accuracy of 60% on average."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"4",
"5"
],
"paper_header_content": [
"Introduction",
"Profiling with Abstract Features",
"Experiments",
"Lexical vs Bleached Models",
"Human Evaluation",
"Related Work",
"Conclusions"
]
}
|
GEM-SciDuet-train-52#paper-1090#slide-6
|
Cross-lingual Embeddings
|
EN NL FR PT ES Test Language
|
EN NL FR PT ES Test Language
|
[] |
GEM-SciDuet-train-52#paper-1090#slide-7
|
1090
|
Bleaching Text: Abstract Features for Cross-lingual Gender Prediction
|
Gender prediction has typically focused on lexical and social network features, yielding good performance, but making systems highly language-, topic-, and platform-dependent. Cross-lingual embeddings circumvent some of these limitations, but capture gender-specific style less. We propose an alternative: bleaching text, i.e., transforming lexical strings into more abstract features. This study provides evidence that such features allow for better transfer across languages. Moreover, we present a first study on the ability of humans to perform cross-lingual gender prediction. We find that human predictive power proves similar to that of our bleached models, and both perform better than lexical models.
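The human study the abstract mentions reports inter-annotator agreement as Fleiss' kappa (n = 3 raters, N = 200 users). For reference, the statistic can be computed with a short stdlib-only function. This is an illustrative sketch, not the paper's evaluation code; the input format (per-subject category counts) is an assumption.

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for a list of per-subject category-count rows.

    Each row gives, for one subject, how many raters chose each
    category; every row must sum to the same number of raters n.
    """
    N = len(ratings)        # number of subjects
    n = sum(ratings[0])     # raters per subject
    k = len(ratings[0])     # number of categories
    # Overall proportion of assignments falling in each category.
    p = [sum(row[j] for row in ratings) / (N * n) for j in range(k)]
    # Observed agreement, averaged over subjects.
    P_bar = sum(
        (sum(c * c for c in row) - n) / (n * (n - 1)) for row in ratings
    ) / N
    # Expected agreement by chance.
    P_e = sum(x * x for x in p)
    return (P_bar - P_e) / (1 - P_e)

# Perfect agreement among 3 raters on 2 subjects -> kappa = 1.0
print(fleiss_kappa([[3, 0], [0, 3]]))
```

With real annotations, values such as K = 0.40 (in-language) versus K = 0.25-0.28 (cross-language) indicate only fair-to-moderate agreement, which is consistent with gender guessing being a hard, partly subjective task.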
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116
],
"paper_content_text": [
"Introduction Author profiling is the task of discovering latent user attributes disclosed through text, such as gender, age, personality, income, location and occupation (Rao et al., 2010; Burger et al., 2011; Feng et al., 2012; Jurgens, 2013; Bamman et al., 2014; Plank and Hovy, 2015; Flekova et al., 2016) .",
"It is of interest to several applications including personalized machine translation, forensics, and marketing (Mirkin et al., 2015; Rangel et al., 2015) .",
"Early approaches to gender prediction (e.g., Koppel et al., 2002; Schler et al., 2006)",
"are inspired by pioneering work on authorship attribution (Mosteller and Wallace, 1964) .",
"Such stylometric models typically rely on carefully handselected sets of content-independent features to capture style beyond topic.",
"Recently, open vocabulary approaches (Schwartz et al., 2013) , where the entire linguistic production of an author is used, yielded substantial performance gains in on-line user-attribute prediction (Nguyen et al., 2014; Preoţiuc-Pietro et al., 2015; Emmery et al., 2017) .",
"Indeed, the best performing gender prediction models exploit chiefly lexical information (Rangel et al., 2017; Basile et al., 2017) .",
"Relying heavily on the lexicon though has its limitations, as it results in models with limited portability.",
"Moreover, performance might be overly optimistic due to topic bias (Sarawgi et al., 2011) .",
"Recent work on cross-lingual author profiling has proposed the use of solely language-independent features (Ljubešić et al., 2017) , e.g., specific textual elements (percentage of emojis, URLs, etc) and users' meta-data/network (number of followers, etc), but this information is not always available.",
"We propose a novel approach where the actual text is still used, but bleached out and transformed into more abstract, and potentially better transferable features.",
"One could view this as a method in between the open vocabulary strategy and the stylometric approach.",
"It has the advantage of fading out content in favor of more shallow patterns still based on the original text, without introducing additional processing such as part-of-speech tagging.",
"In particular, we investigate to what extent gender prediction can rely on generic non-lexical features (RQ1), and how predictive such models are when transferred to other languages (RQ2).",
"We also glean insights from human judgments, and investigate how well people can perform cross-lingual gender prediction (RQ3).",
"We focus on gender prediction for Twitter, motivated by data availability.",
"Contributions In this work i) we are the first to study cross-lingual gender prediction without relying on users' meta-data; ii) we propose a novel simple abstract feature representation which is surprisingly effective; and iii) we gauge human ability to perform cross-lingual gender detection, an angle of analysis which has not been studied thus far.",
"Profiling with Abstract Features Can we recover the gender of an author from bleached text, i.e., transformed text where the raw lexical strings are converted into abstract features?",
"We investigate this question by building a series of predictive models to infer the gender of a Twitter user, in absence of additional user-specific metadata.",
"Our approach can be seen as taking advantage of elements from a data-driven open-vocabulary approach, while trying to capture gender-specific style in text beyond topic.",
"To represent utterances in a more language agnostic way, we propose to simply transform the text into alternative textual representations, which deviate from the lexical form to allow for abstraction.",
"We propose the following transformations, exemplified in Table 1 .",
"They are mostly motivated by intuition and inspired by prior work, like the use of shape features from NER and parsing (Petrov and Klein, 2007; Schnabel and Schütze, 2014; Limsopatham and Collier, 2016): • Frequency Each word is represented as its binned frequency in the training data; bins are sized by orders of magnitude.",
"• Length Number of characters (prefixed by 0 to avoid collision with the next transformation).",
"• PunctC Merges all consecutive alphanumeric characters to one 'W' and leaves all other characters as they are (C for conservative).",
"• PunctA Generalization of PunctC (A for aggressive), converting different types of punctuation to classes: emoticons to 'E' and emojis to 'J', other punctuation to 'P'.",
"• Shape Transforms uppercase characters to 'U', lowercase characters to 'L', digits to 'D' and all other characters to 'X'.",
"Repetitions of transformed characters are condensed to a maximum of 2 for greater generalization.",
"• Vowel-Consonant To approximate vowels, while being able to generalize over (Indo-European) languages, we convert any of the 'aeiou' characters to 'V', any other alphabetic character to 'C', and all other characters to 'O'.",
"• AllAbs A combination (concatenation) of all previously described features.",
"Experiments In order to test whether abstract features are effective and transfer across languages, we set up experiments for gender prediction comparing lexicalized and bleached models for both in-and cross-language experiments.",
"We compare them to a model using multilingual embeddings (Ruder, 2017) .",
"Finally, we elicit human judgments both within language and across language.",
"The latter is to check whether a person with no prior knowledge of (the lexicon of) a given language can predict the gender of a user, and how that compares to an in-language setup and the machine.",
"If humans can predict gender cross-lingually, they are likely to rely on aspects beyond lexical information.",
"Data We obtain data from the TWISTY corpus, a multi-lingual collection of Twitter users, for the languages with 500+ users, namely Dutch, French, Portuguese, and Spanish.",
"We complement them with English, using data from a predecessor of TWISTY (Plank and Hovy, 2015) .",
"All datasets contain manually annotated gender information.",
"To simplify interpretation for the cross-language experiments, we balance gender in all datasets by downsampling to the minority class.",
"The datasets' final sizes are given in Table 2 .",
"We use 200 tweets per user, as done by previous work.",
"We leave the data untokenized to exclude any language-dependent processing, because original tokenization could preserve some signal.",
"Apart from mapping usernames to 'USER' and urls to 'URL' we do not perform any further data pre-processing.",
"Lexical vs Bleached Models We use the scikit-learn (Pedregosa et al., 2011) implementation of a linear SVM with default parameters (e.g., L2 regularization).",
"We use 10-fold cross validation for all in-language experiments.",
"For the cross-lingual experiments, we train on all available source language data and test on all target language data.",
"For the lexicalized experiments, we adopt the features from the best performing system at the latest PAN evaluation campaign (Basile et al., 2017) (word 1-2 grams and character 3-6 grams).",
"For the multilingual embeddings model we use the mean embedding representation from the system of (Plank, 2017) and add max, std and coverage features.",
"We create multilingual embeddings by projecting monolingual embeddings to a single multilingual space for all five languages using a recently proposed SVD-based projection method with a pseudo-dictionary (Smith et al., 2017) .",
"The monolingual embeddings are trained on large amounts of in-house Twitter data (as much data as we had access to, i.e., ranging from 30M tweets for French to 1,500M tweets in Dutch, with a word type coverage between 63 and 77%).",
"This results in an embedding space with a vocabulary size of 16M word types.",
"All code is available at https://github.com/bplank/bleaching-text.",
"For the bleached experiments, we ran models with each feature set separately.",
"In this paper, we report results for the model where all features are combined, as it proved to be the most robust across languages.",
"We tuned the n-gram size of this model through in-language cross-validation, finding that n = 5 performs best.",
"When testing across languages, we report accuracy for two setups: average accuracy over each single-language model (AVG), and accuracy obtained when training on the concatenation of all languages but the target one (ALL).",
"The latter setting is also used for the embeddings model.",
"We report accuracy for all experiments.",
"Results and Analysis Table 2 shows results for both the cross-language and in-language experiments in the lexical and abstract-feature setting.",
"Within language, the lexical features unsurprisingly work the best, achieving an average accuracy of 80.5% over all languages.",
"The abstract features lose some information and score on average 11.8% lower, still beating the majority baseline (50%) by a large margin (68.7%).",
"If we go across language, the lexical approaches break down (overall to 53.7% for LEX AVG/56.3% for ALL), except for Portuguese and Spanish, thanks to their similarities (see Table 3 for pair-wise results).",
"The closely-related-language effect is also observed when training on all languages, as scores go up when the classifier has access to the related language.",
"The same holds for the multilingual embeddings model.",
"On average it reaches an accuracy of 59.8%.",
"The closeness effect for Portuguese and Spanish can also be observed in language-to-language experiments, where scores for ES →PT and PT →ES are the highest.",
"Results for the lexical models are generally lower on English, which might be due to smaller amounts of data (see first column in Table 2 providing number of users per language).",
"The abstract features fare surprisingly well; from Table 4 , we can see that the use of an emoji and shape-based features are predictive of female users.",
"Quotes, question marks and length features, for example, appear to be more predictive of male users.",
"Human Evaluation We experimented with three different conditions, one within language and two across language.",
"For the latter, we set up an experiment where native speakers of Dutch were presented with tweets written in Portuguese and were asked to guess the poster's gender.",
"In the other experiment, we asked speakers of French to identify the gender of the writer when reading Dutch tweets.",
"In both cases, the participants declared to have no prior knowledge of the target language.",
"For the in-language experiment, we asked Dutch speakers to identify the gender of a user writing Dutch tweets.",
"The Dutch speakers who participated in the two experiments are distinct individuals.",
"Participants were informed of the experiment's goal.",
"Their identity is anonymized in the data.",
"We selected a random sample of 200 users from the Dutch and Portuguese data, preserving a 50/50 gender distribution.",
"Each user was represented by twenty tweets.",
"The answer key (F/M) order was randomized.",
"For each of the three experiments we had six judges, balanced for gender, and obtained three annotations per target user.",
"Results and Analysis Inter-annotator agreement for the tasks was measured via Fleiss kappa (n = 3, N = 200), and was higher for the in-language experiment (K = 0.40) than for the cross-language tasks (NL →PT: K = 0.25; FR →NL: K = 0.28).",
"Table 5 shows accuracy against the gold labels, comparing humans (average accuracy over three annotators) to lexical and bleached models on the exact same subset of 200 users.",
"Systems were tested under two different conditions regarding the number of tweets per user for the target language: machine and human saw the exact same twenty tweets, or the full set of tweets (200) per user, as done during training (Section 3.1).",
"First of all, our results indicate that in-language performance of humans is 70.5%, which is quite in line with the findings of Flekova et al.",
"(2016) , who report an accuracy of 75% on English.",
"Within language, lexicalized models are superior to humans if exposed to enough information (200 tweets setup).",
"One explanation for this might lie in an observation by Flekova et al.",
"(2016) , according to which people tend to rely too much on stereotypical lexical indicators when assigning gender to the poster of a tweet, while machines model less evident patterns.",
"Lexicalized models are also superior to the bleached ones, as already seen on the full datasets (Table 2) .",
"We can also observe that the amount of information available to represent a user influences the system's performance.",
"Training on 200 tweets per user, but testing on 20 tweets only, decreases performance by 12 percentage points.",
"This is likely due to the fact that inputs are sparser, especially since the bleached model is trained on 5-grams.",
"The bleached model, when given 200 tweets per user, yields a performance that is slightly higher than human accuracy.",
"In the cross-language setting, the picture is very different.",
"Here, human performance is superior to the lexicalized models, independently of the amount of tweets per user at testing time.",
"This seems to indicate that if humans cannot rely on the lexicon, they might be exploiting some other signal when guessing the gender of a user who tweets in a language unknown to them.",
"Interestingly, the bleached models, which rely on non-lexical features, not only outperform the lexicalized ones in the cross-language experiments, but also neatly match the human scores.",
"Related Work Most existing work on gender prediction exploits shallow lexical information based on the linguistic production of the users.",
"Few studies investigate deeper syntactic information (Koppel et al., 2002; Feng et al., 2012) or non-linguistic input, e.g., language-independent clues such as visual (Alowibdi et al., 2013) or network information (Jurgens, 2013; Plank and Hovy, 2015; Ljubešić et al., 2017) .",
"A related angle is cross-genre profiling.",
"In both settings lexical models have limited portability due to their bias towards the language/genre they have been trained on (Rangel et al., 2016; Busger op Vollenbroek et al., 2016; .",
"Lexical bias has been shown to affect in-language human gender prediction, too.",
"Flekova et al.",
"(2016) found that people tend to rely too much on stereotypical lexical indicators, while Nguyen et al.",
"(2014) show that more than 10% of the Twitter users do actually not employ words that the crowd associates with their biological sex.",
"Our features abstract away from such lexical cues while retaining predictive signal.",
"Conclusions Bleaching text into abstract features is surprisingly effective for predicting gender, though lexical information is still more useful within language (RQ1).",
"5 We experimented with training on 20 tweets rather than 200, and with different n-gram sizes (e.g., 1-4).",
"Despite slightly better results, we decided to use the trained models as they were to employ the same settings across all experiments (200 tweets per user, n = 5), with no further tuning.",
"However, models based on lexical clues fail when transferred to other languages, or require large amounts of unlabeled data from a similar domain as our experiments with the multilingual embedding model indicate.",
"Instead, our bleached models clearly capture some signal beyond the lexicon, and perform well in a cross-lingual setting (RQ2).",
"We are well aware that we are testing our crosslanguage bleached models in the context of closely related languages.",
"While some features (such as PunctA, or Frequency) might carry over to genetically more distant languages, other features (such as Vowels and Shape) would probably be meaningless.",
"Future work on this will require a sensible setting from a language typology perspective for choosing and testing adequate features.",
"In our novel study on human proficiency for cross-lingual gender prediction, we discovered that people are also abstracting away from the lexicon.",
"Indeed, we observe that they are able to detect gender by looking at tweets in a language they do not know (RQ3) with an accuracy of 60% on average."
]
}
|
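Inter-annotator agreement in the human-evaluation text above is measured with Fleiss' kappa (n = 3 raters, N = 200 users). A minimal sketch of that statistic, assuming the binary F/M label set; the function name and toy inputs are illustrative, not from the paper:

```python
from collections import Counter

def fleiss_kappa(ratings, categories=("F", "M")):
    """Fleiss' kappa for N items, each labeled by the same number of raters.

    `ratings` is a list of per-item label lists, e.g. [["F", "F", "M"], ...].
    In the paper's setup n = 3 raters and N = 200 users.
    """
    n = len(ratings[0])  # raters per item
    N = len(ratings)     # number of items
    # Expected agreement P_e from overall category proportions.
    totals = Counter(label for item in ratings for label in item)
    p_e = sum((totals[c] / (N * n)) ** 2 for c in categories)
    # Observed agreement: mean per-item pairwise agreement P_i.
    p_bar = sum(
        (sum(c * c for c in Counter(item).values()) - n) / (n * (n - 1))
        for item in ratings
    ) / N
    return (p_bar - p_e) / (1 - p_e)
```

Kappa is 1 under perfect agreement and drops below 0 when raters disagree more than chance would predict, which is why the reported in-language value (K = 0.40) exceeding the cross-language ones (K = 0.25, 0.28) signals a genuinely harder task.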
{
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"4",
"5"
],
"paper_header_content": [
"Introduction",
"Profiling with Abstract Features",
"Experiments",
"Lexical vs Bleached Models",
"Human Evaluation",
"Related Work",
"Conclusions"
]
}
|
GEM-SciDuet-train-52#paper-1090#slide-7
|
Lexicalized Cross language
|
Test EN NL FR PT ES
|
Test EN NL FR PT ES
|
[] |
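The SVD-based projection with a pseudo-dictionary (Smith et al., 2017) cited in the paper text above amounts to solving an orthogonal Procrustes problem between dictionary-aligned embedding matrices. A rough NumPy sketch with illustrative variable names; the pseudo-dictionary is typically the set of word forms shared verbatim by both vocabularies:

```python
import numpy as np

def orthogonal_projection(src, tgt):
    """Orthogonal map W minimizing ||src @ W - tgt||_F (Procrustes).

    src, tgt: (n_pairs, dim) arrays of embeddings for dictionary pairs.
    """
    u, _, vt = np.linalg.svd(src.T @ tgt)
    return u @ vt

# The whole source vocabulary is then projected into the target space:
# projected = source_embeddings @ orthogonal_projection(src, tgt)
```

Because W is orthogonal, the projection preserves distances and angles within the source space, which is what makes a single shared multilingual space for all five languages feasible.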
GEM-SciDuet-train-52#paper-1090#slide-8
|
1090
|
Bleaching Text: Abstract Features for Cross-lingual Gender Prediction
|
Gender prediction has typically focused on lexical and social network features, yielding good performance, but making systems highly language-, topic-, and platformdependent. Cross-lingual embeddings circumvent some of these limitations, but capture gender-specific style less. We propose an alternative: bleaching text, i.e., transforming lexical strings into more abstract features. This study provides evidence that such features allow for better transfer across languages. Moreover, we present a first study on the ability of humans to perform cross-lingual gender prediction. We find that human predictive power proves similar to that of our bleached models, and both perform better than lexical models.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116
],
"paper_content_text": [
"Introduction Author profiling is the task of discovering latent user attributes disclosed through text, such as gender, age, personality, income, location and occupation (Rao et al., 2010; Burger et al., 2011; Feng et al., 2012; Jurgens, 2013; Bamman et al., 2014; Plank and Hovy, 2015; Flekova et al., 2016) .",
"It is of interest to several applications including personalized machine translation, forensics, and marketing (Mirkin et al., 2015; Rangel et al., 2015) .",
"Early approaches to gender prediction (Koppel et al., 2002; Schler et al., 2006, e.g.)",
"are inspired by pioneering work on authorship attribution (Mosteller and Wallace, 1964) .",
"Such stylometric models typically rely on carefully handselected sets of content-independent features to capture style beyond topic.",
"Recently, open vocabulary approaches (Schwartz et al., 2013) , where the entire linguistic production of an author is used, yielded substantial performance gains in on-line user-attribute prediction (Nguyen et al., 2014; Preoţiuc-Pietro et al., 2015; Emmery et al., 2017) .",
"Indeed, the best performing gender prediction models exploit chiefly lexical information (Rangel et al., 2017; Basile et al., 2017) .",
"Relying heavily on the lexicon though has its limitations, as it results in models with limited portability.",
"Moreover, performance might be overly optimistic due to topic bias (Sarawgi et al., 2011) .",
"Recent work on cross-lingual author profiling has proposed the use of solely language-independent features (Ljubešić et al., 2017) , e.g., specific textual elements (percentage of emojis, URLs, etc) and users' meta-data/network (number of followers, etc), but this information is not always available.",
"We propose a novel approach where the actual text is still used, but bleached out and transformed into more abstract, and potentially better transferable features.",
"One could view this as a method in between the open vocabulary strategy and the stylometric approach.",
"It has the advantage of fading out content in favor of more shallow patterns still based on the original text, without introducing additional processing such as part-of-speech tagging.",
"In particular, we investigate to what extent gender prediction can rely on generic non-lexical features (RQ1), and how predictive such models are when transferred to other languages (RQ2).",
"We also glean insights from human judgments, and investigate how well people can perform cross-lingual gender prediction (RQ3).",
"We focus on gender prediction for Twitter, motivated by data availability.",
"Contributions In this work i) we are the first to study cross-lingual gender prediction without relying on users' meta-data; ii) we propose a novel simple abstract feature representation which is surprisingly effective; and iii) we gauge human ability to perform cross-lingual gender detection, an angle of analysis which has not been studied thus far.",
"Profiling with Abstract Features Can we recover the gender of an author from bleached text, i.e., transformed text where the raw lexical strings are converted into abstract features?",
"We investigate this question by building a series of predictive models to infer the gender of a Twitter user, in absence of additional user-specific metadata.",
"Our approach can be seen as taking advantage of elements from a data-driven open-vocabulary approach, while trying to capture gender-specific style in text beyond topic.",
"To represent utterances in a more language agnostic way, we propose to simply transform the text into alternative textual representations, which deviate from the lexical form to allow for abstraction.",
"We propose the following transformations, exemplified in Table 1 .",
"They are mostly motivated by intuition and inspired by prior work, like the use of shape features from NER and parsing (Petrov and Klein, 2007; Schnabel and Schütze, 2014; Limsopatham and Collier, 2016) : • Frequency Each word is represented as its binned frequency in the training data; bins are sized by orders of magnitude.",
"• Length Number of characters (prefixed by 0 to avoid collision with the next transformation).",
"• PunctC Merges all consecutive alphanumeric characters to one 'W' and leaves all other characters as they are (C for conservative).",
"• PunctA Generalization of PunctC (A for aggressive), converting different types of punctuation to classes: emoticons 1 to 'E' and emojis 2 to 'J', other punctuation to 'P'.",
"• Shape Transforms uppercase characters to 'U', lowercase characters to 'L', digits to 'D' and all other characters to 'X'.",
"Repetitions of transformed characters are condensed to a maximum of 2 for greater generalization.",
"• Vowel-Consonant To approximate vowels, while being able to generalize over (Indo-European) languages, we convert any of the 'aeiou' characters to 'V', any other alphabetic character to 'C', and all other characters to 'O'.",
"• AllAbs A combination (concatenation) of all previously described features.",
"Experiments In order to test whether abstract features are effective and transfer across languages, we set up experiments for gender prediction comparing lexicalized and bleached models for both in-and cross-language experiments.",
"We compare them to a model using multilingual embeddings (Ruder, 2017) .",
"Finally, we elicit human judgments both within language and across language.",
"The latter is to check whether a person with no prior knowledge of (the lexicon of) a given language can predict the gender of a user, and how that compares to an in-language setup and the machine.",
"If humans can predict gender cross-lingually, they are likely to rely on aspects beyond lexical information.",
"Data We obtain data from the TWISTY corpus , a multi-lingual collection of Twitter users, for the languages with 500+ users, namely Dutch, French, Portuguese, and Spanish.",
"We complement them with English, using data from a predecessor of TWISTY (Plank and Hovy, 2015) .",
"All datasets contain manually annotated gender information.",
"To simplify interpretation for the cross-language experiments, we balance gender in all datasets by downsampling to the minority class.",
"The datasets' final sizes are given in Table 2 .",
"We use 200 tweets per user, as done by previous work .",
"We leave the data untokenized to exclude any language-dependent processing, because original tokenization could preserve some signal.",
"Apart from mapping usernames to 'USER' and urls to 'URL' we do not perform any further data pre-processing.",
"Lexical vs Bleached Models We use the scikit-learn (Pedregosa et al., 2011) implementation of a linear SVM with default parameters (e.g., L2 regularization).",
"We use 10-fold cross validation for all in-language experiments.",
"For the cross-lingual experiments, we train on all available source language data and test on all target language data.",
"For the lexicalized experiments, we adopt the features from the best performing system at the latest PAN evaluation campaign 3 (Basile et al., 2017) (word 1-2 grams and character 3-6 grams).",
"For the multilingual embeddings model we use the mean embedding representation from the system of Plank (2017) and add max, std and coverage features.",
"We create multilingual embeddings by projecting monolingual embeddings to a single multilingual space for all five languages using a recently proposed SVD-based projection method with a pseudo-dictionary (Smith et al., 2017) .",
"The monolingual embeddings are trained on large amounts of in-house Twitter data (as much data as we had access to, i.e., ranging from 30M tweets for French to 1,500M tweets in Dutch, with a word type coverage between 63 and 77%).",
"This results in an embedding space with a vocabulary size of 16M word types.",
"All code is available at https:// github.com/bplank/bleaching-text.",
"For the bleached experiments, we ran models with each feature set separately.",
"In this paper, we report results for the model where all features are combined, as it proved to be the most robust across languages.",
"We tuned the n-gram size of this model through in-language cross-validation, finding that n = 5 performs best.",
"When testing across languages, we report accuracy for two setups: average accuracy over each single-language model (AVG), and accuracy obtained when training on the concatenation of all languages but the target one (ALL).",
"The latter setting is also used for the embeddings model.",
"We report accuracy for all experiments.",
"Results and Analysis Table 2 shows results for both the cross-language and in-language experiments in the lexical and abstract-feature setting.",
"Within language, the lexical features unsurprisingly work the best, achieving an average accuracy of 80.5% over all languages.",
"The abstract features lose some information and score on average 11.8% lower, still beating the majority baseline (50%) by a large margin (68.7%).",
"If we go across language, the lexical approaches break down (overall to 53.7% for LEX AVG/56.3% for ALL), except for Portuguese and Spanish, thanks to their similarities (see Table 3 for pair-wise results).",
"The closely-related-language effect is also observed when training on all languages, as scores go up when the classifier has access to the related language.",
"The same holds for the multilingual embeddings model.",
"On average it reaches an accuracy of 59.8%.",
"The closeness effect for Portuguese and Spanish can also be observed in language-to-language experiments, where scores for ES →PT and PT →ES are the highest.",
"Results for the lexical models are generally lower on English, which might be due to smaller amounts of data (see first column in Table 2 providing number of users per language).",
"The abstract features fare surprisingly well; from Table 4 , we can see that the use of an emoji and shape-based features are predictive of female users.",
"Quotes, question marks and length features, for example, appear to be more predictive of male users.",
"Human Evaluation We experimented with three different conditions, one within language and two across language.",
"For the latter, we set up an experiment where native speakers of Dutch were presented with tweets written in Portuguese and were asked to guess the poster's gender.",
"In the other experiment, we asked speakers of French to identify the gender of the writer when reading Dutch tweets.",
"In both cases, the participants declared to have no prior knowledge of the target language.",
"For the in-language experiment, we asked Dutch speakers to identify the gender of a user writing Dutch tweets.",
"The Dutch speakers who participated in the two experiments are distinct individuals.",
"Participants were informed of the experiment's goal.",
"Their identity is anonymized in the data.",
"We selected a random sample of 200 users from the Dutch and Portuguese data, preserving a 50/50 gender distribution.",
"Each user was represented by twenty tweets.",
"The answer key (F/M) order was randomized.",
"For each of the three experiments we had six judges, balanced for gender, and obtained three annotations per target user.",
"Results and Analysis Inter-annotator agreement for the tasks was measured via Fleiss kappa (n = 3, N = 200), and was higher for the in-language experiment (K = 0.40) than for the cross-language tasks (NL →PT: K = 0.25; FR →NL: K = 0.28).",
"Table 5 shows accuracy against the gold labels, comparing humans (average accuracy over three annotators) to lexical and bleached models on the exact same subset of 200 users.",
"Systems were tested under two different conditions regarding the number of tweets per user for the target language: machine and human saw the exact same twenty tweets, or the full set of tweets (200) per user, as done during training (Section 3.1).",
"First of all, our results indicate that in-language performance of humans is 70.5%, which is quite in line with the findings of Flekova et al.",
"(2016) , who report an accuracy of 75% on English.",
"Within language, lexicalized models are superior to humans if exposed to enough information (200 tweets setup).",
"One explanation for this might lie in an observation by Flekova et al.",
"(2016) , according to which people tend to rely too much on stereotypical lexical indicators when assigning gender to the poster of a tweet, while machines model less evident patterns.",
"Lexicalized models are also superior to the bleached ones, as already seen on the full datasets (Table 2) .",
"We can also observe that the amount of information available to represent a user influences the system's performance.",
"Training on 200 tweets per user, but testing on 20 tweets only, decreases performance by 12 percentage points.",
"This is likely due to the fact that inputs are sparser, especially since the bleached model is trained on 5-grams.",
"The bleached model, when given 200 tweets per user, yields a performance that is slightly higher than human accuracy.",
"In the cross-language setting, the picture is very different.",
"Here, human performance is superior to the lexicalized models, independently of the amount of tweets per user at testing time.",
"This seems to indicate that if humans cannot rely on the lexicon, they might be exploiting some other signal when guessing the gender of a user who tweets in a language unknown to them.",
"Interestingly, the bleached models, which rely on non-lexical features, not only outperform the lexicalized ones in the cross-language experiments, but also neatly match the human scores.",
"Related Work Most existing work on gender prediction exploits shallow lexical information based on the linguistic production of the users.",
"Few studies investigate deeper syntactic information (Koppel et al., 2002; Feng et al., 2012) or non-linguistic input, e.g., language-independent clues such as visual (Alowibdi et al., 2013) or network information (Jurgens, 2013; Plank and Hovy, 2015; Ljubešić et al., 2017) .",
"A related angle is cross-genre profiling.",
"In both settings lexical models have limited portability due to their bias towards the language/genre they have been trained on (Rangel et al., 2016; Busger op Vollenbroek et al., 2016; .",
"Lexical bias has been shown to affect in-language human gender prediction, too.",
"Flekova et al.",
"(2016) found that people tend to rely too much on stereotypical lexical indicators, while Nguyen et al.",
"(2014) show that more than 10% of the Twitter users do actually not employ words that the crowd associates with their biological sex.",
"Our features abstract away from such lexical cues while retaining predictive signal.",
"Conclusions Bleaching text into abstract features is surprisingly effective for predicting gender, though lexical information is still more useful within language (RQ1).",
"5 We experimented with training on 20 tweets rather than 200, and with different n-gram sizes (e.g., 1-4).",
"Despite slightly better results, we decided to use the trained models as they were to employ the same settings across all experiments (200 tweets per user, n = 5), with no further tuning.",
"However, models based on lexical clues fail when transferred to other languages, or require large amounts of unlabeled data from a similar domain as our experiments with the multilingual embedding model indicate.",
"Instead, our bleached models clearly capture some signal beyond the lexicon, and perform well in a cross-lingual setting (RQ2).",
"We are well aware that we are testing our crosslanguage bleached models in the context of closely related languages.",
"While some features (such as PunctA, or Frequency) might carry over to genetically more distant languages, other features (such as Vowels and Shape) would probably be meaningless.",
"Future work on this will require a sensible setting from a language typology perspective for choosing and testing adequate features.",
"In our novel study on human proficiency for cross-lingual gender prediction, we discovered that people are also abstracting away from the lexicon.",
"Indeed, we observe that they are able to detect gender by looking at tweets in a language they do not know (RQ3) with an accuracy of 60% on average."
]
}
|
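The lexicalized setup described in the paper text above (linear SVM with default parameters, word 1-2 grams plus character 3-6 grams, 10-fold cross-validation) can be sketched with scikit-learn as below. Whether the original system weighted features with tf-idf or raw counts is not stated here, so the choice of `TfidfVectorizer` is an assumption, and the data variables in the commented line are placeholders:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import FeatureUnion, make_pipeline
from sklearn.svm import LinearSVC

# Word 1-2 grams and character 3-6 grams, following Basile et al. (2017).
features = FeatureUnion([
    ("word", TfidfVectorizer(analyzer="word", ngram_range=(1, 2))),
    ("char", TfidfVectorizer(analyzer="char", ngram_range=(3, 6))),
])
model = make_pipeline(features, LinearSVC())  # default L2 regularization

# In-language evaluation, 10-fold cross-validation (placeholder data names):
# from sklearn.model_selection import cross_val_score
# scores = cross_val_score(model, user_texts, gender_labels, cv=10)
```

For the cross-lingual runs the same pipeline is simply fit on all source-language users and applied unchanged to the target-language users.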
{
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"4",
"5"
],
"paper_header_content": [
"Introduction",
"Profiling with Abstract Features",
"Experiments",
"Lexical vs Bleached Models",
"Human Evaluation",
"Related Work",
"Conclusions"
]
}
|
GEM-SciDuet-train-52#paper-1090#slide-8
|
In language performance
|
EN NL FR PT ES Test Language
|
EN NL FR PT ES Test Language
|
[] |
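The bleaching transformations itemized in the paper text above can be sketched as follows. This covers Shape, Vowel-Consonant, and PunctC; the emoticon/emoji classes of PunctA and the corpus-dependent Frequency bins are omitted, and the exact character classes are my reading of the descriptions rather than the released code:

```python
import re

def shape(token):
    """Shape: upper -> U, lower -> L, digit -> D, other -> X;
    runs of one class are condensed to at most two characters."""
    s = "".join("U" if c.isupper() else "L" if c.islower()
                else "D" if c.isdigit() else "X" for c in token)
    return re.sub(r"(.)\1{2,}", r"\1\1", s)

def vowel_consonant(token):
    """Vowel-Consonant: 'aeiou' -> V, other letters -> C, rest -> O."""
    return "".join("V" if c.lower() in "aeiou" else
                   "C" if c.isalpha() else "O" for c in token)

def punct_c(token):
    """PunctC: collapse each run of alphanumeric characters to a single
    'W', leaving all other characters untouched."""
    return re.sub(r"[A-Za-z0-9]+", "W", token)
```

For example, `shape("McDonalds1!")` yields `"ULULLDX"` and `punct_c("hello, world!!")` yields `"W, W!!"`; the concatenation of all such views per token gives the AllAbs representation.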
GEM-SciDuet-train-52#paper-1090#slide-9
|
1090
|
Bleaching Text: Abstract Features for Cross-lingual Gender Prediction
|
Gender prediction has typically focused on lexical and social network features, yielding good performance, but making systems highly language-, topic-, and platformdependent. Cross-lingual embeddings circumvent some of these limitations, but capture gender-specific style less. We propose an alternative: bleaching text, i.e., transforming lexical strings into more abstract features. This study provides evidence that such features allow for better transfer across languages. Moreover, we present a first study on the ability of humans to perform cross-lingual gender prediction. We find that human predictive power proves similar to that of our bleached models, and both perform better than lexical models.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116
],
"paper_content_text": [
"Introduction Author profiling is the task of discovering latent user attributes disclosed through text, such as gender, age, personality, income, location and occupation (Rao et al., 2010; Burger et al., 2011; Feng et al., 2012; Jurgens, 2013; Bamman et al., 2014; Plank and Hovy, 2015; Flekova et al., 2016) .",
"It is of interest to several applications including personalized machine translation, forensics, and marketing (Mirkin et al., 2015; Rangel et al., 2015) .",
"Early approaches to gender prediction (Koppel et al., 2002; Schler et al., 2006, e.g.)",
"are inspired by pioneering work on authorship attribution (Mosteller and Wallace, 1964) .",
"Such stylometric models typically rely on carefully handselected sets of content-independent features to capture style beyond topic.",
"Recently, open vocabulary approaches (Schwartz et al., 2013) , where the entire linguistic production of an author is used, yielded substantial performance gains in on-line user-attribute prediction (Nguyen et al., 2014; Preoţiuc-Pietro et al., 2015; Emmery et al., 2017) .",
"Indeed, the best performing gender prediction models exploit chiefly lexical information (Rangel et al., 2017; Basile et al., 2017) .",
"Relying heavily on the lexicon though has its limitations, as it results in models with limited portability.",
"Moreover, performance might be overly optimistic due to topic bias (Sarawgi et al., 2011) .",
"Recent work on cross-lingual author profiling has proposed the use of solely language-independent features (Ljubešić et al., 2017) , e.g., specific textual elements (percentage of emojis, URLs, etc) and users' meta-data/network (number of followers, etc), but this information is not always available.",
"We propose a novel approach where the actual text is still used, but bleached out and transformed into more abstract, and potentially better transferable features.",
"One could view this as a method in between the open vocabulary strategy and the stylometric approach.",
"It has the advantage of fading out content in favor of more shallow patterns still based on the original text, without introducing additional processing such as part-of-speech tagging.",
"In particular, we investigate to what extent gender prediction can rely on generic non-lexical features (RQ1), and how predictive such models are when transferred to other languages (RQ2).",
"We also glean insights from human judgments, and investigate how well people can perform cross-lingual gender prediction (RQ3).",
"We focus on gender prediction for Twitter, motivated by data availability.",
"Contributions In this work i) we are the first to study cross-lingual gender prediction without relying on users' meta-data; ii) we propose a novel simple abstract feature representation which is surprisingly effective; and iii) we gauge human ability to perform cross-lingual gender detection, an angle of analysis which has not been studied thus far.",
"Profiling with Abstract Features Can we recover the gender of an author from bleached text, i.e., transformed text where the raw lexical strings are converted into abstract features?",
"We investigate this question by building a series of predictive models to infer the gender of a Twitter user, in absence of additional user-specific metadata.",
"Our approach can be seen as taking advantage of elements from a data-driven open-vocabulary approach, while trying to capture gender-specific style in text beyond topic.",
"To represent utterances in a more language agnostic way, we propose to simply transform the text into alternative textual representations, which deviate from the lexical form to allow for abstraction.",
"We propose the following transformations, exemplified in Table 1 .",
"They are mostly motivated by intuition and inspired by prior work, like the use of shape features from NER and parsing (Petrov and Klein, 2007; Schnabel and Schütze, 2014; Limsopatham and Collier, 2016) : • Frequency Each word is represented as its binned frequency in the training data; bins are sized by orders of magnitude.",
"• Length Number of characters (prefixed by 0 to avoid collision with the next transformation).",
"• PunctC Merges all consecutive alphanumeric characters to one 'W' and leaves all other characters as they are (C for conservative).",
"• PunctA Generalization of PunctC (A for aggressive), converting different types of punctuation to classes: emoticons 1 to 'E' and emojis 2 to 'J', other punctuation to 'P'.",
"• Shape Transforms uppercase characters to 'U', lowercase characters to 'L', digits to 'D' and all other characters to 'X'.",
"Repetitions of transformed characters are condensed to a maximum of 2 for greater generalization.",
"• Vowel-Consonant To approximate vowels, while being able to generalize over (Indo-European) languages, we convert any of the 'aeiou' characters to 'V', any other alphabetic character to 'C', and all other characters to 'O'.",
"• AllAbs A combination (concatenation) of all previously described features.",
"Experiments In order to test whether abstract features are effective and transfer across languages, we set up experiments for gender prediction comparing lexicalized and bleached models for both in-and cross-language experiments.",
"We compare them to a model using multilingual embeddings (Ruder, 2017) .",
"Finally, we elicit human judgments both within language and across language.",
"The latter is to check whether a person with no prior knowledge of (the lexicon of) a given language can predict the gender of a user, and how that compares to an in-language setup and the machine.",
"If humans can predict gender cross-lingually, they are likely to rely on aspects beyond lexical information.",
"Data We obtain data from the TWISTY corpus , a multi-lingual collection of Twitter users, for the languages with 500+ users, namely Dutch, French, Portuguese, and Spanish.",
"We complement them with English, using data from a predecessor of TWISTY (Plank and Hovy, 2015) .",
"All datasets contain manually annotated gender information.",
"To simplify interpretation for the cross-language experiments, we balance gender in all datasets by downsampling to the minority class.",
"The datasets' final sizes are given in Table 2 .",
"We use 200 tweets per user, as done by previous work .",
"We leave the data untokenized to exclude any language-dependent processing, because original tokenization could preserve some signal.",
"Apart from mapping usernames to 'USER' and urls to 'URL' we do not perform any further data pre-processing.",
"Lexical vs Bleached Models We use the scikit-learn (Pedregosa et al., 2011) implementation of a linear SVM with default parameters (e.g., L2 regularization).",
"We use 10-fold cross validation for all in-language experiments.",
"For the cross-lingual experiments, we train on all available source language data and test on all target language data.",
"For the lexicalized experiments, we adopt the features from the best performing system at the latest PAN evaluation campaign 3 (Basile et al., 2017) (word 1-2 grams and character 3-6 grams).",
"For the multilingual embeddings model we use the mean embedding representation from the system of (Plank, 2017) and add max, std and coverage features.",
"We create multilingual embeddings by projecting monolingual embeddings to a single multilingual space for all five languages using a recently proposed SVD-based projection method with a pseudo-dictionary (Smith et al., 2017) .",
"The monolingual embeddings are trained on large amounts of in-house Twitter data (as much data as we had access to, i.e., ranging from 30M tweets for French to 1,500M tweets in Dutch, with a word type coverage between 63 and 77%).",
"This results in an embedding space with a vocabulary size of 16M word types.",
"All code is available at https:// github.com/bplank/bleaching-text.",
"For the bleached experiments, we ran models with each feature set separately.",
"In this paper, we report results for the model where all features are combined, as it proved to be the most robust across languages.",
"We tuned the n-gram size of this model through in-language cross-validation, finding that n = 5 performs best.",
"When testing across languages, we report accuracy for two setups: average accuracy over each single-language model (AVG), and accuracy obtained when training on the concatenation of all languages but the target one (ALL).",
"The latter setting is also used for the embeddings model.",
"We report accuracy for all experiments.",
"Results and Analysis Table 2 shows results for both the cross-language and in-language experiments in the lexical and abstract-feature setting.",
"Within language, the lexical features unsurprisingly work the best, achieving an average accuracy of 80.5% over all languages.",
"The abstract features lose some information and score on average 11.8% lower, still beating the majority baseline (50%) by a large margin (68.7%).",
"If we go across language, the lexical approaches break down (overall to 53.7% for LEX AVG/56.3% for ALL), except for Portuguese and Spanish, thanks to their similarities (see Table 3 for pair-wise results).",
"The closely-related-language effect is also observed when training on all languages, as scores go up when the classifier has access to the related language.",
"The same holds for the multilingual embeddings model.",
"On average it reaches an accuracy of 59.8%.",
"The closeness effect for Portuguese and Spanish can also be observed in language-to-language experiments, where scores for ES →PT and PT →ES are the highest.",
"Results for the lexical models are generally lower on English, which might be due to smaller amounts of data (see first column in Table 2 providing number of users per language).",
"The abstract features fare surprisingly well. From Table 4 , we can see that the use of an emoji (like ) and shape-based features are predictive of female users.",
"Quotes, question marks and length features, for example, appear to be more predictive of male users.",
"Human Evaluation We experimented with three different conditions, one within language and two across language.",
"For the latter, we set up an experiment where native speakers of Dutch were presented with tweets written in Portuguese and were asked to guess the poster's gender.",
"In the other experiment, we asked speakers of French to identify the gender of the writer when reading Dutch tweets.",
"In both cases, the participants declared to have no prior knowledge of the target language.",
"For the in-language experiment, we asked Dutch speakers to identify the gender of a user writing Dutch tweets.",
"The Dutch speakers who participated in the two experiments are distinct individuals.",
"Participants were informed of the experiment's goal.",
"Their identity is anonymized in the data.",
"We selected a random sample of 200 users from the Dutch and Portuguese data, preserving a 50/50 gender distribution.",
"Each user was represented by twenty tweets.",
"The answer key (F/M) order was randomized.",
"For each of the three experiments we had six judges, balanced for gender, and obtained three annotations per target user.",
"Results and Analysis Inter-annotator agreement for the tasks was measured via Fleiss kappa (n = 3, N = 200), and was higher for the in-language experiment (K = 0.40) than for the cross-language tasks (NL →PT: K = 0.25; FR →NL: K = 0.28).",
"Table 5 shows accuracy against the gold labels, comparing humans (average accuracy over three annotators) to lexical and bleached models on the exact same subset of 200 users.",
"Systems were tested under two different conditions regarding the number of tweets per user for the target language: machine and human saw the exact same twenty tweets, or the full set of tweets (200) per user, as done during training (Section 3.1).",
"First of all, our results indicate that in-language performance of humans is 70.5%, which is quite in line with the findings of Flekova et al.",
"(2016) , who report an accuracy of 75% on English.",
"Within language, lexicalized models are superior to humans if exposed to enough information (200 tweets setup).",
"One explanation for this might lie in an observation by Flekova et al.",
"(2016) , according to which people tend to rely too much on stereotypical lexical indicators when assigning gender to the poster of a tweet, while machines model less evident patterns.",
"Lexicalized models are also superior to the bleached ones, as already seen on the full datasets (Table 2) .",
"We can also observe that the amount of information available to represent a user influences the system's performance.",
"Training on 200 tweets per user, but testing on 20 tweets only, decreases performance by 12 percentage points.",
"This is likely due to the fact that inputs are sparser, especially since the bleached model is trained on 5-grams.",
"5 The bleached model, when given 200 tweets per user, yields a performance that is slightly higher than human accuracy.",
"In the cross-language setting, the picture is very different.",
"Here, human performance is superior to the lexicalized models, independently of the amount of tweets per user at testing time.",
"This seems to indicate that if humans cannot rely on the lexicon, they might be exploiting some other signal when guessing the gender of a user who tweets in a language unknown to them.",
"Interestingly, the bleached models, which rely on non-lexical features, not only outperform the lexicalized ones in the cross-language experiments, but also neatly match the human scores.",
"Related Work Most existing work on gender prediction exploits shallow lexical information based on the linguistic production of the users.",
"Few studies investigate deeper syntactic information (Koppel et al., 2002; Feng et al., 2012) or non-linguistic input, e.g., language-independent clues such as visual (Alowibdi et al., 2013) or network information (Jurgens, 2013; Plank and Hovy, 2015; Ljubešić et al., 2017) .",
"A related angle is cross-genre profiling.",
"In both settings lexical models have limited portability due to their bias towards the language/genre they have been trained on (Rangel et al., 2016; Busger op Vollenbroek et al., 2016; .",
"Lexical bias has been shown to affect in-language human gender prediction, too.",
"Flekova et al.",
"(2016) found that people tend to rely too much on stereotypical lexical indicators, while Nguyen et al.",
"(2014) show that more than 10% of Twitter users actually do not employ words that the crowd associates with their biological sex.",
"Our features abstract away from such lexical cues while retaining predictive signal.",
"Conclusions Bleaching text into abstract features is surprisingly effective for predicting gender, though lexical information is still more useful within language (RQ1).",
"5 We experimented with training on 20 tweets rather than 200, and with different n-gram sizes (e.g., 1-4).",
"Despite slightly better results, we decided to use the trained models as they were, to employ the same settings across all experiments (200 tweets per user, n = 5), with no further tuning.",
"However, models based on lexical clues fail when transferred to other languages, or require large amounts of unlabeled data from a similar domain as our experiments with the multilingual embedding model indicate.",
"Instead, our bleached models clearly capture some signal beyond the lexicon, and perform well in a cross-lingual setting (RQ2).",
"We are well aware that we are testing our cross-language bleached models in the context of closely related languages.",
"While some features (such as PunctA, or Frequency) might carry over to genetically more distant languages, other features (such as Vowels and Shape) would probably be meaningless.",
"Future work on this will require a sensible setting from a language typology perspective for choosing and testing adequate features.",
"In our novel study on human proficiency for cross-lingual gender prediction, we discovered that people are also abstracting away from the lexicon.",
"Indeed, we observe that they are able to detect gender by looking at tweets in a language they do not know (RQ3) with an accuracy of 60% on average."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"4",
"5"
],
"paper_header_content": [
"Introduction",
"Profiling with Abstract Features",
"Experiments",
"Lexical vs Bleached Models",
"Human Evaluation",
"Related Work",
"Conclusions"
]
}
|
GEM-SciDuet-train-52#paper-1090#slide-9
|
Bleached Lexicalized
|
EN NL FR PT ES
|
EN NL FR PT ES
|
[] |
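The bleaching transformations described in the record above (Shape, Vowel-Consonant, PunctC, Length, Frequency) can be sketched as below. This is a hedged reconstruction from the textual descriptions only: the function names and the exact frequency binning are illustrative assumptions, and the authors' released code at https://github.com/bplank/bleaching-text is authoritative.

```python
import math
import re

def shape(word):
    # Uppercase -> 'U', lowercase -> 'L', digit -> 'D', other -> 'X';
    # runs of the same symbol are condensed to at most two.
    s = "".join("U" if c.isupper() else "L" if c.islower()
                else "D" if c.isdigit() else "X" for c in word)
    return re.sub(r"(.)\1{2,}", r"\1\1", s)

def vowel_consonant(word):
    # 'aeiou' -> 'V', other alphabetic characters -> 'C', everything else -> 'O'
    return "".join("V" if c.lower() in "aeiou" else
                   "C" if c.isalpha() else "O" for c in word)

def punct_c(word):
    # Conservative variant: collapse runs of alphanumerics to one 'W',
    # leave all other characters (punctuation) as they are.
    return re.sub(r"[A-Za-z0-9]+", "W", word)

def length(word):
    # Number of characters, prefixed by '0' to avoid collision with Frequency
    return "0" + str(len(word))

def frequency(word, train_counts):
    # Binned training-data frequency; bins sized by orders of magnitude
    # (the exact bin labels here are an assumption).
    f = train_counts.get(word, 0)
    return str(int(math.log10(f)) + 1) if f > 0 else "0"
```

For example, `shape("It's")` gives `"ULXL"` and `punct_c("It's")` gives `"W'W"`, matching the intent of the paper's Table 1 examples.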
GEM-SciDuet-train-52#paper-1090#slide-10
|
1090
|
Bleaching Text: Abstract Features for Cross-lingual Gender Prediction
|
Gender prediction has typically focused on lexical and social network features, yielding good performance, but making systems highly language-, topic-, and platformdependent. Cross-lingual embeddings circumvent some of these limitations, but capture gender-specific style less. We propose an alternative: bleaching text, i.e., transforming lexical strings into more abstract features. This study provides evidence that such features allow for better transfer across languages. Moreover, we present a first study on the ability of humans to perform cross-lingual gender prediction. We find that human predictive power proves similar to that of our bleached models, and both perform better than lexical models.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116
],
"paper_content_text": [
"Introduction Author profiling is the task of discovering latent user attributes disclosed through text, such as gender, age, personality, income, location and occupation (Rao et al., 2010; Burger et al., 2011; Feng et al., 2012; Jurgens, 2013; Bamman et al., 2014; Plank and Hovy, 2015; Flekova et al., 2016) .",
"It is of interest to several applications including personalized machine translation, forensics, and marketing (Mirkin et al., 2015; Rangel et al., 2015) .",
"Early approaches to gender prediction (Koppel et al., 2002; Schler et al., 2006, e.g.)",
"are inspired by pioneering work on authorship attribution (Mosteller and Wallace, 1964) .",
"Such stylometric models typically rely on carefully handselected sets of content-independent features to capture style beyond topic.",
"Recently, open vocabulary approaches (Schwartz et al., 2013) , where the entire linguistic production of an author is used, yielded substantial performance gains in on-line user-attribute prediction (Nguyen et al., 2014; Preoţiuc-Pietro et al., 2015; Emmery et al., 2017) .",
"Indeed, the best performing gender prediction models exploit chiefly lexical information (Rangel et al., 2017; Basile et al., 2017) .",
"Relying heavily on the lexicon though has its limitations, as it results in models with limited portability.",
"Moreover, performance might be overly optimistic due to topic bias (Sarawgi et al., 2011) .",
"Recent work on cross-lingual author profiling has proposed the use of solely language-independent features (Ljubešić et al., 2017) , e.g., specific textual elements (percentage of emojis, URLs, etc) and users' meta-data/network (number of followers, etc), but this information is not always available.",
"We propose a novel approach where the actual text is still used, but bleached out and transformed into more abstract, and potentially better transferable features.",
"One could view this as a method in between the open vocabulary strategy and the stylometric approach.",
"It has the advantage of fading out content in favor of more shallow patterns still based on the original text, without introducing additional processing such as part-of-speech tagging.",
"In particular, we investigate to what extent gender prediction can rely on generic non-lexical features (RQ1), and how predictive such models are when transferred to other languages (RQ2).",
"We also glean insights from human judgments, and investigate how well people can perform cross-lingual gender prediction (RQ3).",
"We focus on gender prediction for Twitter, motivated by data availability.",
"Contributions In this work i) we are the first to study cross-lingual gender prediction without relying on users' meta-data; ii) we propose a novel simple abstract feature representation which is surprisingly effective; and iii) we gauge human ability to perform cross-lingual gender detection, an angle of analysis which has not been studied thus far.",
"Profiling with Abstract Features Can we recover the gender of an author from bleached text, i.e., transformed text where the raw lexical strings are converted into abstract features?",
"We investigate this question by building a series of predictive models to infer the gender of a Twitter user, in absence of additional user-specific metadata.",
"Our approach can be seen as taking advantage of elements from a data-driven open-vocabulary approach, while trying to capture gender-specific style in text beyond topic.",
"To represent utterances in a more language agnostic way, we propose to simply transform the text into alternative textual representations, which deviate from the lexical form to allow for abstraction.",
"We propose the following transformations, exemplified in Table 1 .",
"They are mostly motivated by intuition and inspired by prior work, like the use of shape features from NER and parsing (Petrov and Klein, 2007; Schnabel and Schütze, 2014; Limsopatham and Collier, 2016) : • Frequency Each word is presented as its binned frequency in the training data; bins are sized by orders of magnitude.",
"• Length Number of characters (prefixed by 0 to avoid collision with the next transformation).",
"• PunctC Merges all consecutive alphanumeric characters to one 'W' and leaves all other characters as they are (C for conservative).",
"• PunctA Generalization of PunctC (A for aggressive), converting different types of punctuation to classes: emoticons 1 to 'E' and emojis 2 to 'J', other punctuation to 'P'.",
"• Shape Transforms uppercase characters to 'U', lowercase characters to 'L', digits to 'D' and all other characters to 'X'.",
"Repetitions of transformed characters are condensed to a maximum of 2 for greater generalization.",
"• Vowel-Consonant To approximate vowels, while being able to generalize over (Indo-European) languages, we convert any of the 'aeiou' characters to 'V', any other alphabetic character to 'C', and all other characters to 'O'.",
"• AllAbs A combination (concatenation) of all previously described features.",
"Experiments In order to test whether abstract features are effective and transfer across languages, we set up experiments for gender prediction comparing lexicalized and bleached models for both in-and cross-language experiments.",
"We compare them to a model using multilingual embeddings (Ruder, 2017) .",
"Finally, we elicit human judgments both within language and across language.",
"The latter is to check whether a person with no prior knowledge of (the lexicon of) a given language can predict the gender of a user, and how that compares to an in-language setup and the machine.",
"If humans can predict gender cross-lingually, they are likely to rely on aspects beyond lexical information.",
"Data We obtain data from the TWISTY corpus , a multi-lingual collection of Twitter users, for the languages with 500+ users, namely Dutch, French, Portuguese, and Spanish.",
"We complement them with English, using data from a predecessor of TWISTY (Plank and Hovy, 2015) .",
"All datasets contain manually annotated gender information.",
"To simplify interpretation for the cross-language experiments, we balance gender in all datasets by downsampling to the minority class.",
"The datasets' final sizes are given in Table 2 .",
"We use 200 tweets per user, as done by previous work .",
"We leave the data untokenized to exclude any language-dependent processing, because original tokenization could preserve some signal.",
"Apart from mapping usernames to 'USER' and urls to 'URL' we do not perform any further data pre-processing.",
"Lexical vs Bleached Models We use the scikit-learn (Pedregosa et al., 2011) implementation of a linear SVM with default parameters (e.g., L2 regularization).",
"We use 10-fold cross validation for all in-language experiments.",
"For the cross-lingual experiments, we train on all available source language data and test on all target language data.",
"For the lexicalized experiments, we adopt the features from the best performing system at the latest PAN evaluation campaign 3 (Basile et al., 2017) (word 1-2 grams and character 3-6 grams).",
"For the multilingual embeddings model we use the mean embedding representation from the system of (Plank, 2017) and add max, std and coverage features.",
"We create multilingual embeddings by projecting monolingual embeddings to a single multilingual space for all five languages using a recently proposed SVD-based projection method with a pseudo-dictionary (Smith et al., 2017) .",
"The monolingual embeddings are trained on large amounts of in-house Twitter data (as much data as we had access to, i.e., ranging from 30M tweets for French to 1,500M tweets in Dutch, with a word type coverage between 63 and 77%).",
"This results in an embedding space with a vocabulary size of 16M word types.",
"All code is available at https:// github.com/bplank/bleaching-text.",
"For the bleached experiments, we ran models with each feature set separately.",
"In this paper, we report results for the model where all features are combined, as it proved to be the most robust across languages.",
"We tuned the n-gram size of this model through in-language cross-validation, finding that n = 5 performs best.",
"When testing across languages, we report accuracy for two setups: average accuracy over each single-language model (AVG), and accuracy obtained when training on the concatenation of all languages but the target one (ALL).",
"The latter setting is also used for the embeddings model.",
"We report accuracy for all experiments.",
"Results and Analysis Table 2 shows results for both the cross-language and in-language experiments in the lexical and abstract-feature setting.",
"Within language, the lexical features unsurprisingly work the best, achieving an average accuracy of 80.5% over all languages.",
"The abstract features lose some information and score on average 11.8% lower, still beating the majority baseline (50%) by a large margin (68.7%).",
"If we go across language, the lexical approaches break down (overall to 53.7% for LEX AVG/56.3% for ALL), except for Portuguese and Spanish, thanks to their similarities (see Table 3 for pair-wise results).",
"The closely-related-language effect is also observed when training on all languages, as scores go up when the classifier has access to the related language.",
"The same holds for the multilingual embeddings model.",
"On average it reaches an accuracy of 59.8%.",
"The closeness effect for Portuguese and Spanish can also be observed in language-to-language experiments, where scores for ES →PT and PT →ES are the highest.",
"Results for the lexical models are generally lower on English, which might be due to smaller amounts of data (see first column in Table 2 providing number of users per language).",
"The abstract features fare surprisingly well. From Table 4 , we can see that the use of an emoji (like ) and shape-based features are predictive of female users.",
"Quotes, question marks and length features, for example, appear to be more predictive of male users.",
"Human Evaluation We experimented with three different conditions, one within language and two across language.",
"For the latter, we set up an experiment where native speakers of Dutch were presented with tweets written in Portuguese and were asked to guess the poster's gender.",
"In the other experiment, we asked speakers of French to identify the gender of the writer when reading Dutch tweets.",
"In both cases, the participants declared to have no prior knowledge of the target language.",
"For the in-language experiment, we asked Dutch speakers to identify the gender of a user writing Dutch tweets.",
"The Dutch speakers who participated in the two experiments are distinct individuals.",
"Participants were informed of the experiment's goal.",
"Their identity is anonymized in the data.",
"We selected a random sample of 200 users from the Dutch and Portuguese data, preserving a 50/50 gender distribution.",
"Each user was represented by twenty tweets.",
"The answer key (F/M) order was randomized.",
"For each of the three experiments we had six judges, balanced for gender, and obtained three annotations per target user.",
"Results and Analysis Inter-annotator agreement for the tasks was measured via Fleiss kappa (n = 3, N = 200), and was higher for the in-language experiment (K = 0.40) than for the cross-language tasks (NL →PT: K = 0.25; FR →NL: K = 0.28).",
"Table 5 shows accuracy against the gold labels, comparing humans (average accuracy over three annotators) to lexical and bleached models on the exact same subset of 200 users.",
"Systems were tested under two different conditions regarding the number of tweets per user for the target language: machine and human saw the exact same twenty tweets, or the full set of tweets (200) per user, as done during training (Section 3.1).",
"First of all, our results indicate that in-language performance of humans is 70.5%, which is quite in line with the findings of Flekova et al.",
"(2016) , who report an accuracy of 75% on English.",
"Within language, lexicalized models are superior to humans if exposed to enough information (200 tweets setup).",
"One explanation for this might lie in an observation by Flekova et al.",
"(2016) , according to which people tend to rely too much on stereotypical lexical indicators when assigning gender to the poster of a tweet, while machines model less evident patterns.",
"Lexicalized models are also superior to the bleached ones, as already seen on the full datasets (Table 2) .",
"We can also observe that the amount of information available to represent a user influences the system's performance.",
"Training on 200 tweets per user, but testing on 20 tweets only, decreases performance by 12 percentage points.",
"This is likely due to the fact that inputs are sparser, especially since the bleached model is trained on 5-grams.",
"5 The bleached model, when given 200 tweets per user, yields a performance that is slightly higher than human accuracy.",
"In the cross-language setting, the picture is very different.",
"Here, human performance is superior to the lexicalized models, independently of the amount of tweets per user at testing time.",
"This seems to indicate that if humans cannot rely on the lexicon, they might be exploiting some other signal when guessing the gender of a user who tweets in a language unknown to them.",
"Interestingly, the bleached models, which rely on non-lexical features, not only outperform the lexicalized ones in the cross-language experiments, but also neatly match the human scores.",
"Related Work Most existing work on gender prediction exploits shallow lexical information based on the linguistic production of the users.",
"Few studies investigate deeper syntactic information (Koppel et al., 2002; Feng et al., 2012) or non-linguistic input, e.g., language-independent clues such as visual (Alowibdi et al., 2013) or network information (Jurgens, 2013; Plank and Hovy, 2015; Ljubešić et al., 2017) .",
"A related angle is cross-genre profiling.",
"In both settings lexical models have limited portability due to their bias towards the language/genre they have been trained on (Rangel et al., 2016; Busger op Vollenbroek et al., 2016; .",
"Lexical bias has been shown to affect in-language human gender prediction, too.",
"Flekova et al.",
"(2016) found that people tend to rely too much on stereotypical lexical indicators, while Nguyen et al.",
"(2014) show that more than 10% of Twitter users actually do not employ words that the crowd associates with their biological sex.",
"Our features abstract away from such lexical cues while retaining predictive signal.",
"Conclusions Bleaching text into abstract features is surprisingly effective for predicting gender, though lexical information is still more useful within language (RQ1).",
"5 We experimented with training on 20 tweets rather than 200, and with different n-gram sizes (e.g., 1-4).",
"Despite slightly better results, we decided to use the trained models as they were, to employ the same settings across all experiments (200 tweets per user, n = 5), with no further tuning.",
"However, models based on lexical clues fail when transferred to other languages, or require large amounts of unlabeled data from a similar domain as our experiments with the multilingual embedding model indicate.",
"Instead, our bleached models clearly capture some signal beyond the lexicon, and perform well in a cross-lingual setting (RQ2).",
"We are well aware that we are testing our cross-language bleached models in the context of closely related languages.",
"While some features (such as PunctA, or Frequency) might carry over to genetically more distant languages, other features (such as Vowels and Shape) would probably be meaningless.",
"Future work on this will require a sensible setting from a language typology perspective for choosing and testing adequate features.",
"In our novel study on human proficiency for cross-lingual gender prediction, we discovered that people are also abstracting away from the lexicon.",
"Indeed, we observe that they are able to detect gender by looking at tweets in a language they do not know (RQ3) with an accuracy of 60% on average."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"4",
"5"
],
"paper_header_content": [
"Introduction",
"Profiling with Abstract Features",
"Experiments",
"Lexical vs Bleached Models",
"Human Evaluation",
"Related Work",
"Conclusions"
]
}
|
GEM-SciDuet-train-52#paper-1090#slide-10
|
Unigrams vs fivegrams
|
EN NL FR PT ES
|
EN NL FR PT ES
|
[] |
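The lexicalized baseline described in these records (a linear SVM with default parameters over word 1-2 grams and character 3-6 grams, following Basile et al., 2017) could be sketched in scikit-learn roughly as follows. The tf-idf weighting and other vectorizer settings are assumptions, not the exact PAN-winning configuration; the toy data below is purely illustrative.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline, make_union
from sklearn.svm import LinearSVC

# Word 1-2 grams plus character 3-6 grams, fed to a linear SVM
features = make_union(
    TfidfVectorizer(analyzer="word", ngram_range=(1, 2)),
    TfidfVectorizer(analyzer="char", ngram_range=(3, 6)),
)
model = make_pipeline(features, LinearSVC())

# Toy usage: one concatenated document of tweets per user, one label per user
X = ["haha love this so much !!", "match stats were solid tbh",
     "love my new shoes haha", "solid stats from the match"]
y = ["F", "M", "F", "M"]
model.fit(X, y)
```

In the paper's setup this would be trained per language (10-fold cross-validation in-language) or on all source languages concatenated for the cross-lingual ALL condition.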
GEM-SciDuet-train-52#paper-1090#slide-12
|
1090
|
Bleaching Text: Abstract Features for Cross-lingual Gender Prediction
|
Gender prediction has typically focused on lexical and social network features, yielding good performance, but making systems highly language-, topic-, and platformdependent. Cross-lingual embeddings circumvent some of these limitations, but capture gender-specific style less. We propose an alternative: bleaching text, i.e., transforming lexical strings into more abstract features. This study provides evidence that such features allow for better transfer across languages. Moreover, we present a first study on the ability of humans to perform cross-lingual gender prediction. We find that human predictive power proves similar to that of our bleached models, and both perform better than lexical models.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116
],
"paper_content_text": [
"Introduction Author profiling is the task of discovering latent user attributes disclosed through text, such as gender, age, personality, income, location and occupation (Rao et al., 2010; Burger et al., 2011; Feng et al., 2012; Jurgens, 2013; Bamman et al., 2014; Plank and Hovy, 2015; Flekova et al., 2016) .",
"It is of interest to several applications including personalized machine translation, forensics, and marketing (Mirkin et al., 2015; Rangel et al., 2015) .",
"Early approaches to gender prediction (Koppel et al., 2002; Schler et al., 2006, e.g.)",
"are inspired by pioneering work on authorship attribution (Mosteller and Wallace, 1964) .",
"Such stylometric models typically rely on carefully handselected sets of content-independent features to capture style beyond topic.",
"Recently, open vocabulary approaches (Schwartz et al., 2013) , where the entire linguistic production of an author is used, yielded substantial performance gains in on-line user-attribute prediction (Nguyen et al., 2014; Preoţiuc-Pietro et al., 2015; Emmery et al., 2017) .",
"Indeed, the best performing gender prediction models exploit chiefly lexical information (Rangel et al., 2017; Basile et al., 2017) .",
"Relying heavily on the lexicon though has its limitations, as it results in models with limited portability.",
"Moreover, performance might be overly optimistic due to topic bias (Sarawgi et al., 2011) .",
"Recent work on cross-lingual author profiling has proposed the use of solely language-independent features (Ljubešić et al., 2017) , e.g., specific textual elements (percentage of emojis, URLs, etc) and users' meta-data/network (number of followers, etc), but this information is not always available.",
"We propose a novel approach where the actual text is still used, but bleached out and transformed into more abstract, and potentially better transferable features.",
"One could view this as a method in between the open vocabulary strategy and the stylometric approach.",
"It has the advantage of fading out content in favor of more shallow patterns still based on the original text, without introducing additional processing such as part-of-speech tagging.",
"In particular, we investigate to what extent gender prediction can rely on generic non-lexical features (RQ1), and how predictive such models are when transferred to other languages (RQ2).",
"We also glean insights from human judgments, and investigate how well people can perform cross-lingual gender prediction (RQ3).",
"We focus on gender prediction for Twitter, motivated by data availability.",
"Contributions In this work i) we are the first to study cross-lingual gender prediction without relying on users' meta-data; ii) we propose a novel simple abstract feature representation which is surprisingly effective; and iii) we gauge human ability to perform cross-lingual gender detection, an angle of analysis which has not been studied thus far.",
"Profiling with Abstract Features Can we recover the gender of an author from bleached text, i.e., transformed text were the raw lexical strings are converted into abstract features?",
"We investigate this question by building a series of predictive models to infer the gender of a Twitter user, in absence of additional user-specific metadata.",
"Our approach can be seen as taking advantage of elements from a data-driven open-vocabulary approach, while trying to capture gender-specific style in text beyond topic.",
"To represent utterances in a more language agnostic way, we propose to simply transform the text into alternative textual representations, which deviate from the lexical form to allow for abstraction.",
"We propose the following transformations, exemplified in Table 1 .",
"They are mostly motivated by intuition and inspired by prior work, like the use of shape features from NER and parsing (Petrov and Klein, 2007; Schnabel and Schütze, 2014; Limsopatham and Collier, 2016) : • Frequency Each word is presented as its binned frequency in the training data; bins are sized by orders of magnitude.",
"• Length Number of characters (prefixed by 0 to avoid collision with the next transformation).",
"• PunctC Merges all consecutive alphanumeric characters to one 'W' and leaves all other characters as they are (C for conservative).",
"• PunctA Generalization of PunctC (A for aggressive), converting different types of punctuation to classes: emoticons 1 to 'E' and emojis 2 to 'J', other punctuation to 'P'.",
"• Shape Transforms uppercase characters to 'U', lowercase characters to 'L', digits to 'D' and all other characters to 'X'.",
"Repetitions of transformed characters are condensed to a maximum of 2 for greater generalization.",
"• Vowel-Consonant To approximate vowels, while being able to generalize over (Indo-European) languages, we convert any of the 'aeiou' characters to 'V', other alphabetic character to 'C', and all other characters to 'O'.",
"• AllAbs A combination (concatenation) of all previously described features.",
"Experiments In order to test whether abstract features are effective and transfer across languages, we set up experiments for gender prediction comparing lexicalized and bleached models for both in-and cross-language experiments.",
"We compare them to a model using multilingual embeddings (Ruder, 2017) .",
"Finally, we elicit human judgments both within language and across language.",
"The latter is to check whether a person with no prior knowledge of (the lexicon of) a given language can predict the gender of a user, and how that compares to an in-language setup and the machine.",
"If humans can predict gender cross-lingually, they are likely to rely on aspects beyond lexical information.",
"Data We obtain data from the TWISTY corpus , a multi-lingual collection of Twitter users, for the languages with 500+ users, namely Dutch, French, Portuguese, and Spanish.",
"We complement them with English, using data from a predecessor of TWISTY (Plank and Hovy, 2015) .",
"All datasets contain manually annotated gender information.",
"To simplify interpretation for the cross-language experiments, we balance gender in all datasets by downsampling to the minority class.",
"The datasets' final sizes are given in Table 2 .",
"We use 200 tweets per user, as done by previous work .",
"We leave the data untokenized to exclude any languagedependent processing, because original tokenization could preserve some signal.",
"Apart from mapping usernames to 'USER' and urls to 'URL' we do not perform any further data pre-processing.",
"Lexical vs Bleached Models We use the scikit-learn (Pedregosa et al., 2011) implementation of a linear SVM with default parameters (e.g., L2 regularization).",
"We use 10-fold cross validation for all in-language experiments.",
"For the cross-lingual experiments, we train on all available source language data and test on all target language data.",
"For the lexicalized experiments, we adopt the features from the best performing system at the latest PAN evaluation campaign 3 (Basile et al., 2017) (word 1-2 grams and character 3-6 grams).",
"For the multilingual embeddings model we use the mean embedding representation from the system of (Plank, 2017) and add max, std and coverage features.",
"We create multilingual embeddings by projecting monolingual embeddings to a single multilingual space for all five languages using a recently proposed SVD-based projection method with a pseudo-dictionary (Smith et al., 2017) .",
"The monolingual embeddings are trained on large amounts of in-house Twitter data (as much data as we had access to, i.e., ranging from 30M tweets for French to 1,500M tweets in Dutch, with a word type coverage between 63 and 77%).",
"This results in an embedding space with a vocabulary size of 16M word types.",
"All code is available at https:// github.com/bplank/bleaching-text.",
"For the bleached experiments, we ran models with each feature set separately.",
"In this paper, we report results for the model where all features are combined, as it proved to be the most robust across languages.",
"We tuned the n-gram size of this model through in-language cross-validation, finding that n = 5 performs best.",
"When testing across languages, we report accuracy for two setups: average accuracy over each single-language model (AVG), and accuracy obtained when training on the concatenation of all languages but the target one (ALL).",
"The latter setting is also used for the embeddings model.",
"We report accuracy for all experiments.",
"Results and Analysis Table 2 shows results for both the cross-language and in-language experiments in the lexical and abstract-feature setting.",
"Within language, the lexical features unsurprisingly work the best, achieving an average accuracy of 80.5% over all languages.",
"The abstract features lose some information and score on average 11.8% lower, still beating the majority baseline (50%) by a large margin (68.7%).",
"If we go across language, the lexical approaches break down (overall to 53.7% for LEX AVG/56.3% for ALL), except for Portuguese and Spanish, thanks to their similarities (see Table 3 for pair-wise results).",
"The closelyrelated-language effect is also observed when training on all languages, as scores go up when the classifier has access to the related language.",
"The same holds for the multilingual embeddings model.",
"On average it reaches an accuracy of 59.8%.",
"The closeness effect for Portuguese and Spanish can also be observed in language-to-language experiments, where scores for ES →PT and PT →ES are the highest.",
"Results for the lexical models are generally lower on English, which might be due to smaller amounts of data (see first column in Table 2 providing number of users per language).",
"The abstract features fare surprisingly well. From Table 4 we can see that the use of an emoji and shape-based features are predictive of female users.",
"Quotes, question marks and length features, for example, appear to be more predictive of male users.",
"Human Evaluation We experimented with three different conditions, one within language and two across language.",
"For the latter, we set up an experiment where native speakers of Dutch were presented with tweets written in Portuguese and were asked to guess the poster's gender.",
"In the other experiment, we asked speakers of French to identify the gender of the writer when reading Dutch tweets.",
"In both cases, the participants declared to have no prior knowledge of the target language.",
"For the in-language experiment, we asked Dutch speakers to identify the gender of a user writing Dutch tweets.",
"The Dutch speakers who participated in the two experiments are distinct individuals.",
"Participants were informed of the experiment's goal.",
"Their identity is anonymized in the data.",
"We selected a random sample of 200 users from the Dutch and Portuguese data, preserving a 50/50 gender distribution.",
"Each user was represented by twenty tweets.",
"The answer key (F/M) order was randomized.",
"For each of the three experiments we had six judges, balanced for gender, and obtained three annotations per target user.",
"Results and Analysis Inter-annotator agreement for the tasks was measured via Fleiss kappa (n = 3, N = 200), and was higher for the in-language experiment (K = 0.40) than for the cross-language tasks (NL →PT: K = 0.25; FR →NL: K = 0.28).",
"Table 5 shows accuracy against the gold labels, comparing humans (average accuracy over three annotators) to lexical and bleached models on the exact same subset of 200 users.",
"Systems were tested under two different conditions regarding the number of tweets per user for the target language: machine and human saw the exact same twenty tweets, or the full set of tweets (200) per user, as done during training (Section 3.1).",
"First of all, our results indicate that in-language performance of humans is 70.5%, which is quite in line with the findings of Flekova et al. (2016), who report an accuracy of 75% on English.",
"Within language, lexicalized models are superior to humans if exposed to enough information (200 tweets setup).",
"One explanation for this might lie in an observation by Flekova et al. (2016), according to which people tend to rely too much on stereotypical lexical indicators when assigning gender to the poster of a tweet, while machines model less evident patterns.",
"Lexicalized models are also superior to the bleached ones, as already seen on the full datasets (Table 2) .",
"We can also observe that the amount of information available to represent a user influences system's performance.",
"Training on 200 tweets per user, but testing on 20 tweets only, decreases performance by 12 percentage points.",
"This is likely due to the fact that inputs are sparser, especially since the bleached model is trained on 5-grams.",
"5 The bleached model, when given 200 tweets per user, yields a performance that is slightly higher than human accuracy.",
"In the cross-language setting, the picture is very different.",
"Here, human performance is superior to the lexicalized models, independently of the amount of tweets per user at testing time.",
"This seems to indicate that if humans cannot rely on the lexicon, they might be exploiting some other signal when guessing the gender of a user who tweets in a language unknown to them.",
"Interestingly, the bleached models, which rely on non-lexical features, not only outperform the lexicalized ones in the cross-language experiments, but also neatly match the human scores.",
"Related Work Most existing work on gender prediction exploits shallow lexical information based on the linguistic production of the users.",
"Few studies investigate deeper syntactic information (Koppel et al., 2002; Feng et al., 2012) or non-linguistic input, e.g., language-independent clues such as visual (Alowibdi et al., 2013) or network information (Jurgens, 2013; Plank and Hovy, 2015; Ljubešić et al., 2017) .",
"A related angle is cross-genre profiling.",
"In both settings lexical models have limited portability due to their bias towards the language/genre they have been trained on (Rangel et al., 2016; Busger op Vollenbroek et al., 2016; .",
"Lexical bias has been shown to affect in-language human gender prediction, too.",
"Flekova et al. (2016) found that people tend to rely too much on stereotypical lexical indicators, while Nguyen et al. (2014) show that more than 10% of the Twitter users do actually not employ words that the crowd associates with their biological sex.",
"Our features abstract away from such lexical cues while retaining predictive signal.",
"Conclusions Bleaching text into abstract features is surprisingly effective for predicting gender, though lexical information is still more useful within language (RQ1).",
"[Footnote 5] We experimented with training on 20 tweets rather than 200, and with different n-gram sizes (e.g., 1-4). Despite slightly better results, we decided to use the trained models as they were to employ the same settings across all experiments (200 tweets per user, n = 5), with no further tuning.",
"However, models based on lexical clues fail when transferred to other languages, or require large amounts of unlabeled data from a similar domain as our experiments with the multilingual embedding model indicate.",
"Instead, our bleached models clearly capture some signal beyond the lexicon, and perform well in a cross-lingual setting (RQ2).",
"We are well aware that we are testing our crosslanguage bleached models in the context of closely related languages.",
"While some features (such as PunctA, or Frequency) might carry over to genetically more distant languages, other features (such as Vowels and Shape) would probably be meaningless.",
"Future work on this will require a sensible setting from a language typology perspective for choosing and testing adequate features.",
"In our novel study on human proficiency for cross-lingual gender prediction, we discovered that people are also abstracting away from the lexicon.",
"Indeed, we observe that they are able to detect gender by looking at tweets in a language they do not know (RQ3) with an accuracy of 60% on average."
]
}
|
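The bleaching transformations described in the record above (Length, PunctC, Shape, Vowel-Consonant) are simple enough to sketch directly. A minimal Python sketch; function names are mine, and the Frequency binning and PunctA emoticon/emoji classes are omitted since they need corpus statistics and emoji tables:

```python
import re

def shape(token):
    # Uppercase -> 'U', lowercase -> 'L', digits -> 'D', other -> 'X';
    # runs of the same class are condensed to at most 2 characters.
    s = ''.join('U' if c.isupper() else
                'L' if c.islower() else
                'D' if c.isdigit() else 'X' for c in token)
    return re.sub(r'(.)\1{2,}', r'\1\1', s)

def vowel_consonant(token):
    # 'aeiou' -> 'V', other alphabetic characters -> 'C', everything else -> 'O'.
    return ''.join('V' if c in 'aeiou' else
                   'C' if c.isalpha() else 'O' for c in token.lower())

def punct_c(text):
    # Merge consecutive alphanumeric characters into one 'W'; keep the rest as-is.
    return re.sub(r'[A-Za-z0-9]+', 'W', text)

def length_feat(token):
    # Character count, prefixed with '0' to avoid collision with PunctC output.
    return '0' + str(len(token))
```

For example, `shape("McDonalds1980!!!")` yields `"ULULLDDXX"`, abstracting away the lexical string while keeping its casing/digit/punctuation pattern.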
{
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"4",
"5"
],
"paper_header_content": [
"Introduction",
"Profiling with Abstract Features",
"Experiments",
"Lexical vs Bleached Models",
"Human Evaluation",
"Related Work",
"Conclusions"
]
}
|
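The multilingual embedding space used in the record above is built by projecting monolingual embeddings into one space with an SVD-based method and a pseudo-dictionary (Smith et al., 2017). A minimal numpy sketch of that kind of orthogonal (Procrustes) projection, with a toy rotation-recovery check; the function name and toy data are mine, not the paper's code:

```python
import numpy as np

def svd_projection(src, tgt):
    # src, tgt: (n, d) embedding matrices for pseudo-dictionary pairs
    # (e.g., identical word strings across languages). Returns an
    # orthogonal W such that src @ W approximates tgt (Procrustes solution).
    u, _, vt = np.linalg.svd(tgt.T @ src)
    return (u @ vt).T

# Toy check: the projection recovers a known rotation of the source space.
rng = np.random.default_rng(0)
x = rng.standard_normal((50, 4))
theta = 0.3
r = np.eye(4)
r[0, 0] = r[1, 1] = np.cos(theta)
r[0, 1], r[1, 0] = -np.sin(theta), np.sin(theta)
w = svd_projection(x, x @ r)
```

Because the learned map is constrained to be orthogonal, it aligns the two spaces without distorting distances within either monolingual space.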
GEM-SciDuet-train-52#paper-1090#slide-12
|
Language to language feature analysis
|
EN NL FR PT ES
Legend: vowels shape punctC punctA length frequency all
|
EN NL FR PT ES
Legend: vowels shape punctC punctA length frequency all
|
[] |
GEM-SciDuet-train-53#paper-1093#slide-0
|
1093
|
Learning Matching Models with Weak Supervision for Response Selection in Retrieval-based Chatbots
|
We propose a method that can leverage unlabeled data to learn a matching model for response selection in retrieval-based chatbots. The method employs a sequence-tosequence architecture (Seq2Seq) model as a weak annotator to judge the matching degree of unlabeled pairs, and then performs learning with both the weak signals and the unlabeled data. Experimental results on two public data sets indicate that matching models get significant improvements when they are learned with the proposed method.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113
],
"paper_content_text": [
"Introduction Recently, more and more attention from both academia and industry is paying to building nontask-oriented chatbots that can naturally converse with humans on any open domain topics.",
"Existing approaches can be categorized into generation-based methods (Shang et al., 2015; Vinyals and Le, 2015; Serban et al., 2016; Sordoni et al., 2015; Serban et al., 2017; Xing et al., 2018), which synthesize a response with natural language generation techniques, and retrieval-based methods (Hu et al., 2014; Lowe et al., 2015; Zhou et al., 2016), which select a response from a pre-built index.",
"In this work, we study response selection for retrieval-based chatbots, not only because retrieval-based methods can return fluent and informative responses, but also because they have been successfully applied to many real products such as the social-bot XiaoIce from Microsoft (Shum et al., 2018) and the E-commerce assistant AliMe Assist from Alibaba Group .",
"* Corresponding Author A key step to response selection is measuring the matching degree between a response candidate and an input which is either a single message (Hu et al., 2014) or a conversational context consisting of multiple utterances .",
"While existing research focuses on how to define a matching model with neural networks, little attention has been paid to how to learn such a model when few labeled data are available.",
"In practice, because human labeling is expensive and exhausting, one cannot have large scale labeled data for model training.",
"Thus, a common practice is to transform the matching problem to a classification problem with human responses as positive examples and randomly sampled ones as negative examples.",
"This strategy, however, oversimplifies the learning problem, as most of the randomly sampled responses are either far from the semantics of the messages or the contexts, or they are false negatives which pollute the training data as noise.",
"As a result, there often exists a significant gap between the performance of a model in training and the same model in practice (Wang et al., 2015; .",
"1 We propose a new method that can effectively leverage unlabeled data for learning matching models.",
"To simulate the real scenario of a retrieval-based chatbot, we construct an unlabeled data set by retrieving response candidates from an index.",
"Then, we employ a weak annotator to provide matching signals for the unlabeled inputresponse pairs, and leverage the signals to supervise the learning of matching models.",
"The weak annotator is pre-trained from large scale humanhuman conversations without any annotations, and thus a Seq2Seq model becomes a natural choice.",
"Our approach is compatible with any matching models, and falls in a teacher-student framework (Hinton et al., 2015) where the Seq2Seq model transfers the knowledge from human-human conversations to the learning process of the matching models.",
"Broadly speaking, both of (Hinton et al., 2015) and our work let a neural network supervise the learning of another network.",
"An advantage of our method is that it turns the hard zero-one labels in the existing learning paradigm to soft (weak) matching scores.",
"Hence, the model can learn a large margin between a true response with a true negative example, and the semantic distance between a true response and a false negative example is short.",
"Furthermore, due to the simulation of real scenario, harder examples can been seen in the training phase that makes the model more robust in the testing.",
"We conduct experiments on two public data sets, and experimental results on both data sets indicate that models learned with our method can significantly outperform their counterparts learned with the random sampling strategy.",
"Our contributions include: (1) proposal of a new method that can leverage unlabeled data to learn matching models for retrieval-based chatbots; and (2) empirical verification of the effectiveness of the method on public data sets.",
"Approach The Existing Learning Approach Given a data set D = {x_i, (y_{i,1}, ..., y_{i,n})}_{i=1}^{N}, with x_i a message or a conversational context and y_{i,j} a response candidate of x_i, we aim to learn a matching model M(·, ·) from D. Thus, for any new pair (x, y), M(x, y) measures the matching degree between x and y.",
"To obtain a matching model, one has to deal with two problems: (1) how to define M(·, ·); and (2) how to perform learning.",
"Existing work focuses on Problem (1) where state-of-the-art methods include dual LSTM (Lowe et al., 2015) , Multi-View LSTM (Zhou et al., 2016) , CNN , and Sequential Matching Network , but adopts a simple strategy for Problem (2): ∀x i , a human response is designated as y i,1 with a label 1, and some randomly sampled responses are treated as (y i,2 , .",
".",
".",
", y i,n ) with labels 0.",
"M(·, ·) is then learned by maximizing the following objective: Σ_{i=1}^{N} Σ_{j=1}^{n} [ r_{i,j} log M(x_i, y_{i,j}) + (1 − r_{i,j}) log(1 − M(x_i, y_{i,j})) ]  (1), where r_{i,j} ∈ {0, 1} is a label.",
"While matching accuracy can be improved by carefully designing M(·, ·) , the bottleneck becomes the learning approach which suffers obvious problems: most of the randomly sampled y i,j are semantically far from x i which may cause an undesired decision boundary at the end of optimization; some y i,j are false negatives.",
"As hard zero-one labels are adopted in Equation (1) , these false negatives may mislead the learning algorithm.",
"The problems remind us that besides good architectures of matching models, we also need a good approach to learn such models from data.",
"A New Learning Method As human labeling is infeasible when training complicated neural networks, we propose a new method that can leverage unlabeled data to learn a matching model.",
"Specifically, instead of random sampling, we construct D by retrieving (y_{i,2}, ..., y_{i,n}) from an index (y_{i,1} is the human response of x_i).",
"By this means, some y i,j are true positives, and some are negatives but semantically close to x i .",
"After that, we employ a weak annotator G(·, ·) to indicate the matching degree of every (x i , y i,j ) in D as weak supervision signals.",
"Let s_{i,j} = G(x_i, y_{i,j}); the learning approach can then be formulated as: argmin_{M(·,·)} Σ_{i=1}^{N} Σ_{j=1}^{n} max(0, M(x_i, y_{i,j}) − M(x_i, y_{i,1}) + s̄_{i,j})  (2), where s̄_{i,j} is a normalized weak signal defined as max(0, s_{i,j}/s_{i,1} − 1).",
"The normalization here eliminates bias from different x i .",
"Objective (2) encourages a large margin between the matching of an input and its human response and the matching of the input and a negative response judged by G(·, ·) (as will be seen later, s_{i,j}/s_{i,1} > 1).",
"The learning approach simulates how we build a matching model in a retrievalbased chatbot: given {x i }, some response candidates are first retrieved from an index.",
"Then human annotators are hired to judge the matching degree of each pair.",
"Finally, both the data and the human labels are fed to an optimization program for model training.",
"Here, we replace the expensive human labels with cheap judgment from G(·, ·).",
"We define G(·, ·) as a sequence-to-sequence architecture (Vinyals and Le, 2015) with an attention mechanism (Bahdanau et al., 2015) , and pre-train it with large amounts of human-human conversa-tion data.",
"The Seq2Seq model can capture the semantic correspondence between an input and a response, and then transfer the knowledge to the learning of a matching model in the optimization of (2).",
"s_{i,j} is then defined as the likelihood of generating y_{i,j} from x_i: s_{i,j} = Σ_k log p(w_{y_{i,j},k} | x_i, w_{y_{i,j},l<k})  (3), where w_{y_{i,j},k} is the k-th word of y_{i,j} and w_{y_{i,j},l<k} is the word sequence before w_{y_{i,j},k}.",
"Since negative examples are retrieved by a search engine, the oversimplification problem of the negative sampling approach can be partially mitigated.",
"We leverage a weak annotator to assign a score for each example to distinguish false negative examples and true negative examples.",
"Equation (2) turns the hard zero-one labels in Equation (1) to soft matching degrees, and thus our method encourages the model to be more confident to classify a response with a high s i,j score as a negative one.",
"In this way, we can avoid false negative examples and true negative examples are treated equally during training, and update the model toward a correct direction.",
"It is noteworthy that although our approach also involves an interaction between a generator and a discriminator, it is different from the GANs (Goodfellow et al., 2014) in principle.",
"GANs try to learn a better generator via an adversarial process, while our approach aims to improve the discriminator with supervision from the generator, which also differentiates it from the recent work on transferring knowledge from a discriminator to a generative visual dialog model (Lu et al., 2017) .",
"Our approach is also different from those semi-supervised approaches in the teacher-student framework (Dehghani et al., 2017a,b) , as there are no labeled data in learning.",
"Experiment We conduct experiments on two public data sets: STC data set (Wang et al., 2013) for single-turn response selection and Douban Conversation Corpus for multi-turn response selection.",
"Note that we do not test the proposed approach on Ubuntu Corpus (Lowe et al., 2015) , because both training and test data in the corpus are constructed by random sampling.",
"Implementation Details We implement our approach with TensorFlow.",
"In both experiments, the same Seq2Seq model is exploited which is trained with 3.3 million inputresponse pairs extracted from the training set of the Douban data.",
"Each input is a concatenation of consecutive utterances in a context, and the response is the next turn ({u <i }, u i ).",
"We set the vocabulary size as 30, 000, the hidden vector size as 1024, and the embedding size as 620.",
"Optimization is conducted with stochastic gradient descent (Bottou, 2010) , and is terminated when perplexity on a validation set (170k pairs) does not decrease in 3 consecutive epochs.",
"In optimization of Objective (2), we initialize M(·, ·) with a model trained under Objective (1) with the (random) negative sampling strategy, and fix word embeddings throughout training.",
"This can stabilize the learning process.",
"The learning rate is fixed as 0.1.",
"Single-turn Response Selection Experiment settings: in the STC (stands for Short Text Conversation) data set, the task is to select a proper response for a post in Weibo 2 .",
"The training set contains 4.8 million post-response (true response) pairs.",
"The test set consists of 422 posts with each one associated with around 30 responses labeled by human annotators in \"good\" and \"bad\".",
"In total, there are 12, 402 labeled pairs in the test data.",
"Following (Wang et al., 2013, 2015), we combine the score from a matching model with TF-IDF based cosine similarity using RankSVM, whose parameters are chosen by 5-fold cross validation.",
"Precision at position 1 (P@1) is employed as an evaluation metric.",
"In addition to the models compared on the data in the existing literatures, we also implement dual LSTM (Lowe et al., 2015) as a baseline.",
"As case studies, we learn a dual LSTM and an CNN (Hu et al., 2014) with the proposed approach, and denote them as LSTM+WS (Weak Supervision) and CNN+WS, respectively.",
"When constructing D, we build an index with the training data using Lucene 3 and retrieve 9 candidates (i.e., {y_{i,2}, ..., y_{i,n}}) for each post with the inline algorithm of the index.",
"We form a validation set by randomly sampling 10 thousand posts associated with the responses from D (human response is positive and others are treated as negative).",
"Results: Table 1 reports the results.",
"Table 1 (Results on STC, P@1): TFIDF (Wang et al., 2013) 0.574; +Translation (Wang et al., 2013) 0.587; +WordEmbedding 0.579; +DeepMatch_topic 0.587; +DeepMatch_tree (Wang et al., 2015) 0.608; +LSTM (Lowe et al., 2015) 0.592; +LSTM+WS 0.616; +CNN (Hu et al., 2014) 0.585; +CNN+WS 0.604.",
"We can see that CNN and LSTM consistently get improved when learned with the proposed approach, and the improvements over the models learned with random sampling are statistically significant (t-test with p-value < 0.01).",
"LSTM+WS even surpasses the best performing model, DeepMatch tree , reported on this data.",
"These results indicate the usefulness of the proposed approach in practice.",
"One can expect improvements to models like DeepMatch tree with the new learning method.",
"We leave the verification as future work.",
"Multi-turn Response Selection Experiment settings: Douban Conversation Corpus contains 0.5 million context-response (true response) pairs for training and 1000 contexts for test.",
"In the test set, every context has 10 response candidates, and each of the response has a label \"good\" or \"bad\" judged by human annotators.",
"Mean average precision (MAP) (Baeza-Yates et al., 1999) , mean reciprocal rank (MRR) (Voorhees, 1999) , and precision at position 1 (P@1) are employed as evaluation metrics.",
"We copy the numbers reported in for the baseline models, and learn LSTM, Multi-View, and SMN with the proposed approach.",
"We build an index with the training data, and retrieve 9 candidates with the method in for each context when constructing D. 10 thousand pairs are sampled from D as a validation set.",
"Results: Table 2 reports the results.",
"Consistent with the results on the STC data, every model (+WS one) gets improved with the new learning approach, and the improvements are statistically significant (t-test with p-value < 0.01).",
"Discussion Ablation studies: we first replace the weak supervision s_{i,j} in Equation (2), keeping everything the same as our approach but replacing D with a set constructed by random sampling, denoted as model+WSrand.",
"Table 2 (MAP, MRR, P@1): 0.488, 0.527, 0.330; LSTM (Lowe et al., 2015) 0.485, 0.527, 0.320; LSTM+WS 0.519, 0.559, 0.359; Multi-View (Zhou et al., 2016) 0.505, 0.543, 0.342; Multi-View+WS 0.534, 0.575, 0.378; SMN 0.526, 0.571, 0.393; SMN+WS 0.565, 0.609, 0.421.",
"Table 3 reports the results.",
"We can conclude that both the weak supervision and the strategy of training data construction are important to the success of the proposed learning approach.",
"Training data construction plays a more crucial role, because it involves more true positives and negatives with different semantic distances to the positives into learning.",
"Does updating the Seq2Seq model help?",
"It is well known that Seq2Seq models suffer from the \"safe response\" (Li et al., 2016a) problem, which may bias the weak supervision signals to high-frequency responses.",
"Therefore, we attempt to iteratively optimize the Seq2Seq model and the matching model and check if the matching model can be further improved.",
"Specifically, we update the Seq2Seq model every 20 mini-batches with the policy-based reinforcement learning approach proposed in (Li et al., 2016b) .",
"The reward is defined as the matching score of a context and a response given by the matching model.",
"Unfortunately, we do not observe significant improvement on the matching model.",
"The result is attributed to two factors: (1) it is difficult to significantly im-prove the Seq2Seq model with a policy gradient based method; and (2) eliminating \"safe response\" for Seq2Seq model cannot help a matching model to learn a better decision boundary.",
"How the number of response candidates affects learning: we vary the number of {y i,j } n j=1 in D in {2, 5, 10, 20} and study how the hyperparameter influences learning.",
"We study with LSTM on the STC data and SMN on the Douban data.",
"Table 4 reports the results.",
"We can see that as the number of candidates increases, the performance of the the learned models becomes better.",
"Even with 2 candidates (one from human and the other from retrieval), our approach can still improve the peformance of matching models.",
"Conclusion and Future Work Previous studies focus on architecture design for retrieval-based chatbots, but neglect the problems brought by random negative sampling in the learning process.",
"In this paper, we propose leveraging a Seq2Seq model as a weak annotator on unlabeled data to learn a matching model for response selection.",
"By this means, we can mine hard instances for matching model and give them scores with a weak annotator.",
"Experimental results on public data sets verify the effectiveness of the new learning approach.",
"In the future, we will investigate how to remove bias from the weak supervisors, and further improve the matching model performance with a semi-supervised approach."
]
}
|
{
"paper_header_number": [
"1",
"2.1",
"2.2",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4"
],
"paper_header_content": [
"Introduction",
"The Existing Learning Approach",
"A New Learning Method",
"Experiment",
"Implementation Details",
"Single-turn Response Selection",
"Multi-turn Response Selection",
"Discussion",
"Conclusion and Future Work"
]
}
|
GEM-SciDuet-train-53#paper-1093#slide-0
|
Task retrieval based chatbots
|
Given a message, find most suitable responses
Large repository of message-response pairs
Take it as a search problem
Retrieval Feature generation Ranking
Context-response matching Learning to rank
|
Given a message, find most suitable responses
Large repository of message-response pairs
Take it as a search problem
Retrieval Feature generation Ranking
Context-response matching Learning to rank
|
[] |
GEM-SciDuet-train-53#paper-1093#slide-1
|
1093
|
Learning Matching Models with Weak Supervision for Response Selection in Retrieval-based Chatbots
|
We propose a method that can leverage unlabeled data to learn a matching model for response selection in retrieval-based chatbots. The method employs a sequence-to-sequence architecture (Seq2Seq) model as a weak annotator to judge the matching degree of unlabeled pairs, and then performs learning with both the weak signals and the unlabeled data. Experimental results on two public data sets indicate that matching models get significant improvements when they are learned with the proposed method.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113
],
"paper_content_text": [
"Introduction Recently, more and more attention from both academia and industry is paying to building nontask-oriented chatbots that can naturally converse with humans on any open domain topics.",
"Existing approaches can be categorized into generationbased methods (Shang et al., 2015; Vinyals and Le, 2015; Serban et al., 2016; Sordoni et al., 2015; Serban et al., 2017; Xing et al., 2018) which synthesize a response with natural language generation techniques, and retrievalbased methods (Hu et al., 2014; Lowe et al., 2015; Zhou et al., 2016; which select a response from a pre-built index.",
"In this work, we study response selection for retrieval-based chatbots, not only because retrieval-based methods can return fluent and informative responses, but also because they have been successfully applied to many real products such as the social-bot XiaoIce from Microsoft (Shum et al., 2018) and the E-commerce assistant AliMe Assist from Alibaba Group .",
"* Corresponding Author A key step to response selection is measuring the matching degree between a response candidate and an input which is either a single message (Hu et al., 2014) or a conversational context consisting of multiple utterances .",
"While existing research focuses on how to define a matching model with neural networks, little attention has been paid to how to learn such a model when few labeled data are available.",
"In practice, because human labeling is expensive and exhausting, one cannot have large scale labeled data for model training.",
"Thus, a common practice is to transform the matching problem to a classification problem with human responses as positive examples and randomly sampled ones as negative examples.",
"This strategy, however, oversimplifies the learning problem, as most of the randomly sampled responses are either far from the semantics of the messages or the contexts, or they are false negatives which pollute the training data as noise.",
"As a result, there often exists a significant gap between the performance of a model in training and the same model in practice (Wang et al., 2015; .",
"1 We propose a new method that can effectively leverage unlabeled data for learning matching models.",
"To simulate the real scenario of a retrieval-based chatbot, we construct an unlabeled data set by retrieving response candidates from an index.",
"Then, we employ a weak annotator to provide matching signals for the unlabeled inputresponse pairs, and leverage the signals to supervise the learning of matching models.",
"The weak annotator is pre-trained from large scale humanhuman conversations without any annotations, and thus a Seq2Seq model becomes a natural choice.",
"Our approach is compatible with any matching models, and falls in a teacher-student framework (Hinton et al., 2015) where the Seq2Seq model transfers the knowledge from human-human conversations to the learning process of the matching models.",
"Broadly speaking, both of (Hinton et al., 2015) and our work let a neural network supervise the learning of another network.",
"An advantage of our method is that it turns the hard zero-one labels in the existing learning paradigm to soft (weak) matching scores.",
"Hence, the model can learn a large margin between a true response with a true negative example, and the semantic distance between a true response and a false negative example is short.",
"Furthermore, due to the simulation of real scenario, harder examples can been seen in the training phase that makes the model more robust in the testing.",
"We conduct experiments on two public data sets, and experimental results on both data sets indicate that models learned with our method can significantly outperform their counterparts learned with the random sampling strategy.",
"Our contributions include: (1) proposal of a new method that can leverage unlabeled data to learn matching models for retrieval-based chatbots; and (2) empirical verification of the effectiveness of the method on public data sets.",
"Approach The Existing Learning Approach Given a data set D = {x i , (y i,1 , .",
".",
".",
", y i,n )} N i=1 with x i a message or a conversational context and y i,j a response candidate of x i , we aim to learn a matching model M(·, ·) from D. Thus, for any new pair (x, y), M(x, y) measures the matching degree between x and y.",
"To obtain a matching model, one has to deal with two problems: (1) how to define M(·, ·); and (2) how to perform learning.",
"Existing work focuses on Problem (1) where state-of-the-art methods include dual LSTM (Lowe et al., 2015) , Multi-View LSTM (Zhou et al., 2016) , CNN , and Sequential Matching Network , but adopts a simple strategy for Problem (2): ∀x i , a human response is designated as y i,1 with a label 1, and some randomly sampled responses are treated as (y i,2 , .",
".",
".",
", y i,n ) with labels 0.",
"M(·, ·) is then learned by maximizing the following objective: N i=1 n j=1 [ri,j log(M(xi, yi,j)) + (1 − ri,j) log(1 − M(xi, yi,j))] , (1) where r i,j ∈ {0, 1} is a label.",
"While matching accuracy can be improved by carefully designing M(·, ·) , the bottleneck becomes the learning approach which suffers obvious problems: most of the randomly sampled y i,j are semantically far from x i which may cause an undesired decision boundary at the end of optimization; some y i,j are false negatives.",
"As hard zero-one labels are adopted in Equation (1) , these false negatives may mislead the learning algorithm.",
"The problems remind us that besides good architectures of matching models, we also need a good approach to learn such models from data.",
"A New Learning Method As human labeling is infeasible when training complicated neural networks, we propose a new method that can leverage unlabeled data to learn a matching model.",
"Specifically, instead of random sampling, we construct D by retrieving (y i,2 , .",
".",
".",
", y i,n ) from an index (y i,1 is the human response of x i ).",
"By this means, some y i,j are true positives, and some are negatives but semantically close to x i .",
"After that, we employ a weak annotator G(·, ·) to indicate the matching degree of every (x i , y i,j ) in D as weak supervision signals.",
"Let s ij = G(x i , y i,j ) , then the learning approach can be formulated as: arg min M(·,·) N i=1 n j=1 max(0, M(xi, yi,j) − M(xi, yi,1) + s i,j ), (2) where s ij is a normalized weak signal defined as max(0, s i,j s i,1 − 1).",
"The normalization here eliminates bias from different x i .",
"Objective (2) encourages a large margin between the matching of an input and its human response and the matching of the input and a negative response judged by G(·, ·) (as will be seen later, s i,j s i,1 > 1).",
"The learning approach simulates how we build a matching model in a retrievalbased chatbot: given {x i }, some response candidates are first retrieved from an index.",
"Then human annotators are hired to judge the matching degree of each pair.",
"Finally, both the data and the human labels are fed to an optimization program for model training.",
"Here, we replace the expensive human labels with cheap judgment from G(·, ·).",
"We define G(·, ·) as a sequence-to-sequence architecture (Vinyals and Le, 2015) with an attention mechanism (Bahdanau et al., 2015) , and pre-train it with large amounts of human-human conversa-tion data.",
"The Seq2Seq model can capture the semantic correspondence between an input and a response, and then transfer the knowledge to the learning of a matching model in the optimization of (2).",
"s ij is then defined as the likelihood of generating y i,j from x i : sij = k log[p(w y i,j ,k , |xi, w y i,j ,l<k )], (3) where w y i,j ,k is the k-th word of y i,j and w y i,j ,l<k is the word sequence before w y i,j ,k .",
"Since negative examples are retrieved by a search engine, the oversimplification problem of the negative sampling approach can be partially mitigated.",
"We leverage a weak annotator to assign a score for each example to distinguish false negative examples and true negative examples.",
"Equation (2) turns the hard zero-one labels in Equation (1) to soft matching degrees, and thus our method encourages the model to be more confident to classify a response with a high s i,j score as a negative one.",
"In this way, we can avoid false negative examples and true negative examples are treated equally during training, and update the model toward a correct direction.",
"It is noteworthy that although our approach also involves an interaction between a generator and a discriminator, it is different from the GANs (Goodfellow et al., 2014) in principle.",
"GANs try to learn a better generator via an adversarial process, while our approach aims to improve the discriminator with supervision from the generator, which also differentiates it from the recent work on transferring knowledge from a discriminator to a generative visual dialog model (Lu et al., 2017) .",
"Our approach is also different from those semi-supervised approaches in the teacher-student framework (Dehghani et al., 2017a,b) , as there are no labeled data in learning.",
"Experiment We conduct experiments on two public data sets: STC data set (Wang et al., 2013) for single-turn response selection and Douban Conversation Corpus for multi-turn response selection.",
"Note that we do not test the proposed approach on Ubuntu Corpus (Lowe et al., 2015) , because both training and test data in the corpus are constructed by random sampling.",
"Implementation Details We implement our approach with TensorFlow.",
"In both experiments, the same Seq2Seq model is exploited which is trained with 3.3 million inputresponse pairs extracted from the training set of the Douban data.",
"Each input is a concatenation of consecutive utterances in a context, and the response is the next turn ({u <i }, u i ).",
"We set the vocabulary size as 30, 000, the hidden vector size as 1024, and the embedding size as 620.",
"Optimization is conducted with stochastic gradient descent (Bottou, 2010) , and is terminated when perplexity on a validation set (170k pairs) does not decrease in 3 consecutive epochs.",
"In optimization of Objective (2), we initialize M(·, ·) with a model trained under Objective (1) with the (random) negative sampling strategy, and fix word embeddings throughout training.",
"This can stabilize the learning process.",
"The learning rate is fixed as 0.1.",
"Single-turn Response Selection Experiment settings: in the STC (stands for Short Text Conversation) data set, the task is to select a proper response for a post in Weibo 2 .",
"The training set contains 4.8 million post-response (true response) pairs.",
"The test set consists of 422 posts with each one associated with around 30 responses labeled by human annotators in \"good\" and \"bad\".",
"In total, there are 12, 402 labeled pairs in the test data.",
"Following (Wang et al., 2013 (Wang et al., , 2015 , we combine the score from a matching model with TF-IDF based cosine similarity using RankSVM whose parameters are chosen by 5-fold cross validation.",
"Precision at position 1 (P@1) is employed as an evaluation metric.",
"In addition to the models compared on the data in the existing literatures, we also implement dual LSTM (Lowe et al., 2015) as a baseline.",
"As case studies, we learn a dual LSTM and an CNN (Hu et al., 2014) with the proposed approach, and denote them as LSTM+WS (Weak Supervision) and CNN+WS, respectively.",
"When constructing D, we build an index with the training data using Lucene 3 and retrieve 9 candidates (i.e., {y i,2 , .",
".",
".",
", y i,n }) for each post with the inline algorithm of the index.",
"We form a validation set by randomly sampling 10 thousand posts associated with the responses from D (human response is positive and others are treated as negative).",
"Results: Table 1 reports the results.",
"We can see P@1 TFIDF (Wang et al., 2013) 0.574 +Translation (Wang et al., 2013) 0.587 +WordEmbedding 0.579 +DeepMatchtopic 0.587 +DeepMatchtree (Wang et al., 2015) 0.608 +LSTM (Lowe et al., 2015) 0.592 +LSTM+WS 0.616 +CNN (Hu et al., 2014) 0.585 +CNN+WS 0.604 Table 1 : Results on STC that CNN and LSTM consistently get improved when learned with the proposed approach, and the improvements over the models learned with random sampling are statistically significant (ttest with p-value < 0.01).",
"LSTM+WS even surpasses the best performing model, DeepMatch tree , reported on this data.",
"These results indicate the usefulness of the proposed approach in practice.",
"One can expect improvements to models like DeepMatch tree with the new learning method.",
"We leave the verification as future work.",
"Multi-turn Response Selection Experiment settings: Douban Conversation Corpus contains 0.5 million context-response (true response) pairs for training and 1000 contexts for test.",
"In the test set, every context has 10 response candidates, and each of the response has a label \"good\" or \"bad\" judged by human annotators.",
"Mean average precision (MAP) (Baeza-Yates et al., 1999) , mean reciprocal rank (MRR) (Voorhees, 1999) , and precision at position 1 (P@1) are employed as evaluation metrics.",
"We copy the numbers reported in for the baseline models, and learn LSTM, Multi-View, and SMN with the proposed approach.",
"We build an index with the training data, and retrieve 9 candidates with the method in for each context when constructing D. 10 thousand pairs are sampled from D as a validation set.",
"Results: Table 2 reports the results.",
"Consistent with the results on the STC data, every model (+WS one) gets improved with the new learning approach, and the improvements are statistically significant (t-test with p-value < 0.01).",
"Discussion Ablation studies: we first replace the weak supervision s i,j in Equation (2) 0.488 0.527 0.330 LSTM (Lowe et al., 2015) 0.485 0.527 0.320 LSTM+WS 0.519 0.559 0.359 Multi-View (Zhou et al., 2016) 0.505 0.543 0.342 Multi-View+WS 0.534 0.575 0.378 SMN 0.526 0.571 0.393 SMN+WS 0.565 0.609 0.421 everything the same as our approach but replace D with a set constructed by random sampling, denoted as model+WSrand.",
"Table 3 reports the results.",
"We can conclude that both the weak supervision and the strategy of training data construction are important to the success of the proposed learning approach.",
"Training data construction plays a more crucial role, because it involves more true positives and negatives with different semantic distances to the positives into learning.",
"Does updating the Seq2Seq model help?",
"It is well known that Seq2Seq models suffer from the \"safe response\" (Li et al., 2016a) problem, which may bias the weak supervision signals to high-frequency responses.",
"Therefore, we attempt to iteratively optimize the Seq2Seq model and the matching model and check if the matching model can be further improved.",
"Specifically, we update the Seq2Seq model every 20 mini-batches with the policy-based reinforcement learning approach proposed in (Li et al., 2016b) .",
"The reward is defined as the matching score of a context and a response given by the matching model.",
"Unfortunately, we do not observe significant improvement on the matching model.",
"The result is attributed to two factors: (1) it is difficult to significantly im-prove the Seq2Seq model with a policy gradient based method; and (2) eliminating \"safe response\" for Seq2Seq model cannot help a matching model to learn a better decision boundary.",
"How the number of response candidates affects learning: we vary the number of {y i,j } n j=1 in D in {2, 5, 10, 20} and study how the hyperparameter influences learning.",
"We study with LSTM on the STC data and SMN on the Douban data.",
"Table 4 reports the results.",
"We can see that as the number of candidates increases, the performance of the the learned models becomes better.",
"Even with 2 candidates (one from human and the other from retrieval), our approach can still improve the peformance of matching models.",
"Conclusion and Future Work Previous studies focus on architecture design for retrieval-based chatbots, but neglect the problems brought by random negative sampling in the learning process.",
"In this paper, we propose leveraging a Seq2Seq model as a weak annotator on unlabeled data to learn a matching model for response selection.",
"By this means, we can mine hard instances for matching model and give them scores with a weak annotator.",
"Experimental results on public data sets verify the effectiveness of the new learning approach.",
"In the future, we will investigate how to remove bias from the weak supervisors, and further improve the matching model performance with a semi-supervised approach."
]
}
|
{
"paper_header_number": [
"1",
"2.1",
"2.2",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4"
],
"paper_header_content": [
"Introduction",
"The Existing Learning Approach",
"A New Learning Method",
"Experiment",
"Implementation Details",
"Single-turn Response Selection",
"Multi-turn Response Selection",
"Discussion",
"Conclusion and Future Work"
]
}
|
GEM-SciDuet-train-53#paper-1093#slide-1
|
Related Work
|
Previous works focus on network architectures.
CNN, RNN, syntactic based neural networks .
CNN, RNN, attention mechanism
These models are data hungry, so they are trained on large-scale negatively sampled datasets.
[Figure: state-of-the-art multi-turn architecture (Wu et al. ACL 2017)]
|
Previous works focus on network architectures.
CNN, RNN, syntactic based neural networks .
CNN, RNN, attention mechanism
These models are data hungry, so they are trained on large-scale negatively sampled datasets.
[Figure: state-of-the-art multi-turn architecture (Wu et al. ACL 2017)]
|
[] |
GEM-SciDuet-train-53#paper-1093#slide-2
|
1093
|
Learning Matching Models with Weak Supervision for Response Selection in Retrieval-based Chatbots
|
We propose a method that can leverage unlabeled data to learn a matching model for response selection in retrieval-based chatbots. The method employs a sequence-to-sequence architecture (Seq2Seq) model as a weak annotator to judge the matching degree of unlabeled pairs, and then performs learning with both the weak signals and the unlabeled data. Experimental results on two public data sets indicate that matching models get significant improvements when they are learned with the proposed method.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113
],
"paper_content_text": [
"Introduction Recently, more and more attention from both academia and industry is paying to building nontask-oriented chatbots that can naturally converse with humans on any open domain topics.",
"Existing approaches can be categorized into generationbased methods (Shang et al., 2015; Vinyals and Le, 2015; Serban et al., 2016; Sordoni et al., 2015; Serban et al., 2017; Xing et al., 2018) which synthesize a response with natural language generation techniques, and retrievalbased methods (Hu et al., 2014; Lowe et al., 2015; Zhou et al., 2016; which select a response from a pre-built index.",
"In this work, we study response selection for retrieval-based chatbots, not only because retrieval-based methods can return fluent and informative responses, but also because they have been successfully applied to many real products such as the social-bot XiaoIce from Microsoft (Shum et al., 2018) and the E-commerce assistant AliMe Assist from Alibaba Group .",
"* Corresponding Author A key step to response selection is measuring the matching degree between a response candidate and an input which is either a single message (Hu et al., 2014) or a conversational context consisting of multiple utterances .",
"While existing research focuses on how to define a matching model with neural networks, little attention has been paid to how to learn such a model when few labeled data are available.",
"In practice, because human labeling is expensive and exhausting, one cannot have large scale labeled data for model training.",
"Thus, a common practice is to transform the matching problem to a classification problem with human responses as positive examples and randomly sampled ones as negative examples.",
"This strategy, however, oversimplifies the learning problem, as most of the randomly sampled responses are either far from the semantics of the messages or the contexts, or they are false negatives which pollute the training data as noise.",
"As a result, there often exists a significant gap between the performance of a model in training and the same model in practice (Wang et al., 2015; .",
"1 We propose a new method that can effectively leverage unlabeled data for learning matching models.",
"To simulate the real scenario of a retrieval-based chatbot, we construct an unlabeled data set by retrieving response candidates from an index.",
"Then, we employ a weak annotator to provide matching signals for the unlabeled inputresponse pairs, and leverage the signals to supervise the learning of matching models.",
"The weak annotator is pre-trained from large scale humanhuman conversations without any annotations, and thus a Seq2Seq model becomes a natural choice.",
"Our approach is compatible with any matching models, and falls in a teacher-student framework (Hinton et al., 2015) where the Seq2Seq model transfers the knowledge from human-human conversations to the learning process of the matching models.",
"Broadly speaking, both of (Hinton et al., 2015) and our work let a neural network supervise the learning of another network.",
"An advantage of our method is that it turns the hard zero-one labels in the existing learning paradigm to soft (weak) matching scores.",
"Hence, the model can learn a large margin between a true response with a true negative example, and the semantic distance between a true response and a false negative example is short.",
"Furthermore, due to the simulation of real scenario, harder examples can been seen in the training phase that makes the model more robust in the testing.",
"We conduct experiments on two public data sets, and experimental results on both data sets indicate that models learned with our method can significantly outperform their counterparts learned with the random sampling strategy.",
"Our contributions include: (1) proposal of a new method that can leverage unlabeled data to learn matching models for retrieval-based chatbots; and (2) empirical verification of the effectiveness of the method on public data sets.",
"Approach The Existing Learning Approach Given a data set D = {x i , (y i,1 , .",
".",
".",
", y i,n )} N i=1 with x i a message or a conversational context and y i,j a response candidate of x i , we aim to learn a matching model M(·, ·) from D. Thus, for any new pair (x, y), M(x, y) measures the matching degree between x and y.",
"To obtain a matching model, one has to deal with two problems: (1) how to define M(·, ·); and (2) how to perform learning.",
"Existing work focuses on Problem (1) where state-of-the-art methods include dual LSTM (Lowe et al., 2015) , Multi-View LSTM (Zhou et al., 2016) , CNN , and Sequential Matching Network , but adopts a simple strategy for Problem (2): ∀x i , a human response is designated as y i,1 with a label 1, and some randomly sampled responses are treated as (y i,2 , .",
".",
".",
", y i,n ) with labels 0.",
"M(·, ·) is then learned by maximizing the following objective: N i=1 n j=1 [ri,j log(M(xi, yi,j)) + (1 − ri,j) log(1 − M(xi, yi,j))] , (1) where r i,j ∈ {0, 1} is a label.",
"While matching accuracy can be improved by carefully designing M(·, ·) , the bottleneck becomes the learning approach which suffers obvious problems: most of the randomly sampled y i,j are semantically far from x i which may cause an undesired decision boundary at the end of optimization; some y i,j are false negatives.",
"As hard zero-one labels are adopted in Equation (1) , these false negatives may mislead the learning algorithm.",
"The problems remind us that besides good architectures of matching models, we also need a good approach to learn such models from data.",
"A New Learning Method As human labeling is infeasible when training complicated neural networks, we propose a new method that can leverage unlabeled data to learn a matching model.",
"Specifically, instead of random sampling, we construct D by retrieving (y i,2 , .",
".",
".",
", y i,n ) from an index (y i,1 is the human response of x i ).",
"By this means, some y i,j are true positives, and some are negatives but semantically close to x i .",
"After that, we employ a weak annotator G(·, ·) to indicate the matching degree of every (x i , y i,j ) in D as weak supervision signals.",
"Let s ij = G(x i , y i,j ) , then the learning approach can be formulated as: arg min M(·,·) N i=1 n j=1 max(0, M(xi, yi,j) − M(xi, yi,1) + s i,j ), (2) where s ij is a normalized weak signal defined as max(0, s i,j s i,1 − 1).",
"The normalization here eliminates bias from different x i .",
"Objective (2) encourages a large margin between the matching of an input and its human response and the matching of the input and a negative response judged by G(·, ·) (as will be seen later, s i,j s i,1 > 1).",
"The learning approach simulates how we build a matching model in a retrievalbased chatbot: given {x i }, some response candidates are first retrieved from an index.",
"Then human annotators are hired to judge the matching degree of each pair.",
"Finally, both the data and the human labels are fed to an optimization program for model training.",
"Here, we replace the expensive human labels with cheap judgment from G(·, ·).",
"We define G(·, ·) as a sequence-to-sequence architecture (Vinyals and Le, 2015) with an attention mechanism (Bahdanau et al., 2015) , and pre-train it with large amounts of human-human conversa-tion data.",
"The Seq2Seq model can capture the semantic correspondence between an input and a response, and then transfer the knowledge to the learning of a matching model in the optimization of (2).",
"s ij is then defined as the likelihood of generating y i,j from x i : sij = k log[p(w y i,j ,k , |xi, w y i,j ,l<k )], (3) where w y i,j ,k is the k-th word of y i,j and w y i,j ,l<k is the word sequence before w y i,j ,k .",
"Since negative examples are retrieved by a search engine, the oversimplification problem of the negative sampling approach can be partially mitigated.",
"We leverage a weak annotator to assign a score for each example to distinguish false negative examples and true negative examples.",
"Equation (2) turns the hard zero-one labels in Equation (1) to soft matching degrees, and thus our method encourages the model to be more confident to classify a response with a high s i,j score as a negative one.",
"In this way, we can avoid false negative examples and true negative examples are treated equally during training, and update the model toward a correct direction.",
"It is noteworthy that although our approach also involves an interaction between a generator and a discriminator, it is different from the GANs (Goodfellow et al., 2014) in principle.",
"GANs try to learn a better generator via an adversarial process, while our approach aims to improve the discriminator with supervision from the generator, which also differentiates it from the recent work on transferring knowledge from a discriminator to a generative visual dialog model (Lu et al., 2017) .",
"Our approach is also different from those semi-supervised approaches in the teacher-student framework (Dehghani et al., 2017a,b) , as there are no labeled data in learning.",
"Experiment We conduct experiments on two public data sets: STC data set (Wang et al., 2013) for single-turn response selection and Douban Conversation Corpus for multi-turn response selection.",
"Note that we do not test the proposed approach on Ubuntu Corpus (Lowe et al., 2015) , because both training and test data in the corpus are constructed by random sampling.",
"Implementation Details We implement our approach with TensorFlow.",
"In both experiments, the same Seq2Seq model is exploited which is trained with 3.3 million inputresponse pairs extracted from the training set of the Douban data.",
"Each input is a concatenation of consecutive utterances in a context, and the response is the next turn ({u <i }, u i ).",
"We set the vocabulary size as 30, 000, the hidden vector size as 1024, and the embedding size as 620.",
"Optimization is conducted with stochastic gradient descent (Bottou, 2010) , and is terminated when perplexity on a validation set (170k pairs) does not decrease in 3 consecutive epochs.",
"In optimization of Objective (2), we initialize M(·, ·) with a model trained under Objective (1) with the (random) negative sampling strategy, and fix word embeddings throughout training.",
"This can stabilize the learning process.",
"The learning rate is fixed as 0.1.",
"Single-turn Response Selection Experiment settings: in the STC (stands for Short Text Conversation) data set, the task is to select a proper response for a post in Weibo 2 .",
"The training set contains 4.8 million post-response (true response) pairs.",
"The test set consists of 422 posts with each one associated with around 30 responses labeled by human annotators in \"good\" and \"bad\".",
"In total, there are 12, 402 labeled pairs in the test data.",
"Following (Wang et al., 2013 (Wang et al., , 2015 , we combine the score from a matching model with TF-IDF based cosine similarity using RankSVM whose parameters are chosen by 5-fold cross validation.",
"Precision at position 1 (P@1) is employed as an evaluation metric.",
"In addition to the models compared on the data in the existing literatures, we also implement dual LSTM (Lowe et al., 2015) as a baseline.",
"As case studies, we learn a dual LSTM and an CNN (Hu et al., 2014) with the proposed approach, and denote them as LSTM+WS (Weak Supervision) and CNN+WS, respectively.",
"When constructing D, we build an index with the training data using Lucene 3 and retrieve 9 candidates (i.e., {y i,2 , .",
".",
".",
", y i,n }) for each post with the inline algorithm of the index.",
"We form a validation set by randomly sampling 10 thousand posts associated with the responses from D (human response is positive and others are treated as negative).",
"Results: Table 1 reports the results.",
"We can see P@1 TFIDF (Wang et al., 2013) 0.574 +Translation (Wang et al., 2013) 0.587 +WordEmbedding 0.579 +DeepMatchtopic 0.587 +DeepMatchtree (Wang et al., 2015) 0.608 +LSTM (Lowe et al., 2015) 0.592 +LSTM+WS 0.616 +CNN (Hu et al., 2014) 0.585 +CNN+WS 0.604 Table 1 : Results on STC that CNN and LSTM consistently get improved when learned with the proposed approach, and the improvements over the models learned with random sampling are statistically significant (ttest with p-value < 0.01).",
"LSTM+WS even surpasses the best performing model, DeepMatch tree , reported on this data.",
"These results indicate the usefulness of the proposed approach in practice.",
"One can expect improvements to models like DeepMatch tree with the new learning method.",
"We leave the verification as future work.",
"Multi-turn Response Selection Experiment settings: Douban Conversation Corpus contains 0.5 million context-response (true response) pairs for training and 1000 contexts for test.",
"In the test set, every context has 10 response candidates, and each of the response has a label \"good\" or \"bad\" judged by human annotators.",
"Mean average precision (MAP) (Baeza-Yates et al., 1999) , mean reciprocal rank (MRR) (Voorhees, 1999) , and precision at position 1 (P@1) are employed as evaluation metrics.",
"We copy the numbers reported in for the baseline models, and learn LSTM, Multi-View, and SMN with the proposed approach.",
"We build an index with the training data, and retrieve 9 candidates with the method in for each context when constructing D. 10 thousand pairs are sampled from D as a validation set.",
"Results: Table 2 reports the results.",
"Consistent with the results on the STC data, every model (+WS one) gets improved with the new learning approach, and the improvements are statistically significant (t-test with p-value < 0.01).",
"Discussion Ablation studies: we first replace the weak supervision s i,j in Equation (2) 0.488 0.527 0.330 LSTM (Lowe et al., 2015) 0.485 0.527 0.320 LSTM+WS 0.519 0.559 0.359 Multi-View (Zhou et al., 2016) 0.505 0.543 0.342 Multi-View+WS 0.534 0.575 0.378 SMN 0.526 0.571 0.393 SMN+WS 0.565 0.609 0.421 everything the same as our approach but replace D with a set constructed by random sampling, denoted as model+WSrand.",
"Table 3 reports the results.",
"We can conclude that both the weak supervision and the strategy of training data construction are important to the success of the proposed learning approach.",
"Training data construction plays a more crucial role, because it involves more true positives and negatives with different semantic distances to the positives into learning.",
"Does updating the Seq2Seq model help?",
"It is well known that Seq2Seq models suffer from the \"safe response\" (Li et al., 2016a) problem, which may bias the weak supervision signals to high-frequency responses.",
"Therefore, we attempt to iteratively optimize the Seq2Seq model and the matching model and check if the matching model can be further improved.",
"Specifically, we update the Seq2Seq model every 20 mini-batches with the policy-based reinforcement learning approach proposed in (Li et al., 2016b) .",
"The reward is defined as the matching score of a context and a response given by the matching model.",
"Unfortunately, we do not observe significant improvement on the matching model.",
"The result is attributed to two factors: (1) it is difficult to significantly im-prove the Seq2Seq model with a policy gradient based method; and (2) eliminating \"safe response\" for Seq2Seq model cannot help a matching model to learn a better decision boundary.",
"How the number of response candidates affects learning: we vary the number of {y i,j } n j=1 in D in {2, 5, 10, 20} and study how the hyperparameter influences learning.",
"We study with LSTM on the STC data and SMN on the Douban data.",
"Table 4 reports the results.",
"We can see that as the number of candidates increases, the performance of the the learned models becomes better.",
"Even with 2 candidates (one from human and the other from retrieval), our approach can still improve the peformance of matching models.",
"Conclusion and Future Work Previous studies focus on architecture design for retrieval-based chatbots, but neglect the problems brought by random negative sampling in the learning process.",
"In this paper, we propose leveraging a Seq2Seq model as a weak annotator on unlabeled data to learn a matching model for response selection.",
"By this means, we can mine hard instances for matching model and give them scores with a weak annotator.",
"Experimental results on public data sets verify the effectiveness of the new learning approach.",
"In the future, we will investigate how to remove bias from the weak supervisors, and further improve the matching model performance with a semi-supervised approach."
]
}
|
{
"paper_header_number": [
"1",
"2.1",
"2.2",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4"
],
"paper_header_content": [
"Introduction",
"The Existing Learning Approach",
"A New Learning Method",
"Experiment",
"Implementation Details",
"Single-turn Response Selection",
"Multi-turn Response Selection",
"Discussion",
"Conclusion and Future Work"
]
}
|
GEM-SciDuet-train-53#paper-1093#slide-2
|
Background Loss Function
|
Cross Entropy Loss (Pointwise loss) Hinge Loss (Pairwise loss)
|
Cross Entropy Loss (Pointwise loss) Hinge Loss (Pairwise loss)
|
[] |
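The slide above contrasts the two standard objectives for response matching: pointwise cross entropy over 0/1 labels versus a pairwise hinge (margin) loss. A minimal numpy sketch of the two losses follows; the function names and toy scores are illustrative, not taken from the paper's code.

```python
import numpy as np

def pointwise_cross_entropy(scores, labels):
    # Pointwise loss: each (input, response) pair is scored independently
    # and pushed toward its hard 0/1 label (Equation (1) style).
    scores = np.clip(scores, 1e-7, 1 - 1e-7)
    return -np.mean(labels * np.log(scores) + (1 - labels) * np.log(1 - scores))

def pairwise_hinge(pos_score, neg_scores, margin=1.0):
    # Pairwise loss: the positive response must outscore each negative
    # by at least `margin`; pairs already separated contribute zero.
    return np.mean(np.maximum(0.0, np.asarray(neg_scores) - pos_score + margin))

scores = np.array([0.9, 0.3, 0.2])   # first entry is the human response
labels = np.array([1.0, 0.0, 0.0])
ce = pointwise_cross_entropy(scores, labels)
hinge = pairwise_hinge(scores[0], scores[1:], margin=1.0)
```

The pairwise form is what the paper's Objective (2) builds on, with the fixed `margin` replaced by a per-example weak signal.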
GEM-SciDuet-train-53#paper-1093#slide-3
|
1093
|
Learning Matching Models with Weak Supervision for Response Selection in Retrieval-based Chatbots
|
We propose a method that can leverage unlabeled data to learn a matching model for response selection in retrieval-based chatbots. The method employs a sequence-tosequence architecture (Seq2Seq) model as a weak annotator to judge the matching degree of unlabeled pairs, and then performs learning with both the weak signals and the unlabeled data. Experimental results on two public data sets indicate that matching models get significant improvements when they are learned with the proposed method.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113
],
"paper_content_text": [
"Introduction Recently, more and more attention from both academia and industry is paying to building nontask-oriented chatbots that can naturally converse with humans on any open domain topics.",
"Existing approaches can be categorized into generationbased methods (Shang et al., 2015; Vinyals and Le, 2015; Serban et al., 2016; Sordoni et al., 2015; Serban et al., 2017; Xing et al., 2018) which synthesize a response with natural language generation techniques, and retrievalbased methods (Hu et al., 2014; Lowe et al., 2015; Zhou et al., 2016; which select a response from a pre-built index.",
"In this work, we study response selection for retrieval-based chatbots, not only because retrieval-based methods can return fluent and informative responses, but also because they have been successfully applied to many real products such as the social-bot XiaoIce from Microsoft (Shum et al., 2018) and the E-commerce assistant AliMe Assist from Alibaba Group .",
"* Corresponding Author A key step to response selection is measuring the matching degree between a response candidate and an input which is either a single message (Hu et al., 2014) or a conversational context consisting of multiple utterances .",
"While existing research focuses on how to define a matching model with neural networks, little attention has been paid to how to learn such a model when few labeled data are available.",
"In practice, because human labeling is expensive and exhausting, one cannot have large scale labeled data for model training.",
"Thus, a common practice is to transform the matching problem to a classification problem with human responses as positive examples and randomly sampled ones as negative examples.",
"This strategy, however, oversimplifies the learning problem, as most of the randomly sampled responses are either far from the semantics of the messages or the contexts, or they are false negatives which pollute the training data as noise.",
"As a result, there often exists a significant gap between the performance of a model in training and the same model in practice (Wang et al., 2015; .",
"1 We propose a new method that can effectively leverage unlabeled data for learning matching models.",
"To simulate the real scenario of a retrieval-based chatbot, we construct an unlabeled data set by retrieving response candidates from an index.",
"Then, we employ a weak annotator to provide matching signals for the unlabeled inputresponse pairs, and leverage the signals to supervise the learning of matching models.",
"The weak annotator is pre-trained from large scale humanhuman conversations without any annotations, and thus a Seq2Seq model becomes a natural choice.",
"Our approach is compatible with any matching models, and falls in a teacher-student framework (Hinton et al., 2015) where the Seq2Seq model transfers the knowledge from human-human conversations to the learning process of the matching models.",
"Broadly speaking, both of (Hinton et al., 2015) and our work let a neural network supervise the learning of another network.",
"An advantage of our method is that it turns the hard zero-one labels in the existing learning paradigm to soft (weak) matching scores.",
"Hence, the model can learn a large margin between a true response with a true negative example, and the semantic distance between a true response and a false negative example is short.",
"Furthermore, due to the simulation of real scenario, harder examples can been seen in the training phase that makes the model more robust in the testing.",
"We conduct experiments on two public data sets, and experimental results on both data sets indicate that models learned with our method can significantly outperform their counterparts learned with the random sampling strategy.",
"Our contributions include: (1) proposal of a new method that can leverage unlabeled data to learn matching models for retrieval-based chatbots; and (2) empirical verification of the effectiveness of the method on public data sets.",
"Approach The Existing Learning Approach Given a data set D = {x i , (y i,1 , .",
".",
".",
", y i,n )} N i=1 with x i a message or a conversational context and y i,j a response candidate of x i , we aim to learn a matching model M(·, ·) from D. Thus, for any new pair (x, y), M(x, y) measures the matching degree between x and y.",
"To obtain a matching model, one has to deal with two problems: (1) how to define M(·, ·); and (2) how to perform learning.",
"Existing work focuses on Problem (1) where state-of-the-art methods include dual LSTM (Lowe et al., 2015) , Multi-View LSTM (Zhou et al., 2016) , CNN , and Sequential Matching Network , but adopts a simple strategy for Problem (2): ∀x i , a human response is designated as y i,1 with a label 1, and some randomly sampled responses are treated as (y i,2 , .",
".",
".",
", y i,n ) with labels 0.",
"M(·, ·) is then learned by maximizing the following objective: N i=1 n j=1 [ri,j log(M(xi, yi,j)) + (1 − ri,j) log(1 − M(xi, yi,j))] , (1) where r i,j ∈ {0, 1} is a label.",
"While matching accuracy can be improved by carefully designing M(·, ·) , the bottleneck becomes the learning approach which suffers obvious problems: most of the randomly sampled y i,j are semantically far from x i which may cause an undesired decision boundary at the end of optimization; some y i,j are false negatives.",
"As hard zero-one labels are adopted in Equation (1) , these false negatives may mislead the learning algorithm.",
"The problems remind us that besides good architectures of matching models, we also need a good approach to learn such models from data.",
"A New Learning Method As human labeling is infeasible when training complicated neural networks, we propose a new method that can leverage unlabeled data to learn a matching model.",
"Specifically, instead of random sampling, we construct D by retrieving (y i,2 , .",
".",
".",
", y i,n ) from an index (y i,1 is the human response of x i ).",
"By this means, some y i,j are true positives, and some are negatives but semantically close to x i .",
"After that, we employ a weak annotator G(·, ·) to indicate the matching degree of every (x i , y i,j ) in D as weak supervision signals.",
"Let s ij = G(x i , y i,j ) , then the learning approach can be formulated as: arg min M(·,·) N i=1 n j=1 max(0, M(xi, yi,j) − M(xi, yi,1) + s i,j ), (2) where s ij is a normalized weak signal defined as max(0, s i,j s i,1 − 1).",
"The normalization here eliminates bias from different x i .",
"Objective (2) encourages a large margin between the matching of an input and its human response and the matching of the input and a negative response judged by G(·, ·) (as will be seen later, s i,j s i,1 > 1).",
"The learning approach simulates how we build a matching model in a retrievalbased chatbot: given {x i }, some response candidates are first retrieved from an index.",
"Then human annotators are hired to judge the matching degree of each pair.",
"Finally, both the data and the human labels are fed to an optimization program for model training.",
"Here, we replace the expensive human labels with cheap judgment from G(·, ·).",
"We define G(·, ·) as a sequence-to-sequence architecture (Vinyals and Le, 2015) with an attention mechanism (Bahdanau et al., 2015) , and pre-train it with large amounts of human-human conversa-tion data.",
"The Seq2Seq model can capture the semantic correspondence between an input and a response, and then transfer the knowledge to the learning of a matching model in the optimization of (2).",
"s ij is then defined as the likelihood of generating y i,j from x i : sij = k log[p(w y i,j ,k , |xi, w y i,j ,l<k )], (3) where w y i,j ,k is the k-th word of y i,j and w y i,j ,l<k is the word sequence before w y i,j ,k .",
"Since negative examples are retrieved by a search engine, the oversimplification problem of the negative sampling approach can be partially mitigated.",
"We leverage a weak annotator to assign a score for each example to distinguish false negative examples and true negative examples.",
"Equation (2) turns the hard zero-one labels in Equation (1) to soft matching degrees, and thus our method encourages the model to be more confident to classify a response with a high s i,j score as a negative one.",
"In this way, we can avoid false negative examples and true negative examples are treated equally during training, and update the model toward a correct direction.",
"It is noteworthy that although our approach also involves an interaction between a generator and a discriminator, it is different from the GANs (Goodfellow et al., 2014) in principle.",
"GANs try to learn a better generator via an adversarial process, while our approach aims to improve the discriminator with supervision from the generator, which also differentiates it from the recent work on transferring knowledge from a discriminator to a generative visual dialog model (Lu et al., 2017) .",
"Our approach is also different from those semi-supervised approaches in the teacher-student framework (Dehghani et al., 2017a,b) , as there are no labeled data in learning.",
"Experiment We conduct experiments on two public data sets: STC data set (Wang et al., 2013) for single-turn response selection and Douban Conversation Corpus for multi-turn response selection.",
"Note that we do not test the proposed approach on Ubuntu Corpus (Lowe et al., 2015) , because both training and test data in the corpus are constructed by random sampling.",
"Implementation Details We implement our approach with TensorFlow.",
"In both experiments, the same Seq2Seq model is exploited which is trained with 3.3 million inputresponse pairs extracted from the training set of the Douban data.",
"Each input is a concatenation of consecutive utterances in a context, and the response is the next turn ({u <i }, u i ).",
"We set the vocabulary size as 30, 000, the hidden vector size as 1024, and the embedding size as 620.",
"Optimization is conducted with stochastic gradient descent (Bottou, 2010) , and is terminated when perplexity on a validation set (170k pairs) does not decrease in 3 consecutive epochs.",
"In optimization of Objective (2), we initialize M(·, ·) with a model trained under Objective (1) with the (random) negative sampling strategy, and fix word embeddings throughout training.",
"This can stabilize the learning process.",
"The learning rate is fixed as 0.1.",
"Single-turn Response Selection Experiment settings: in the STC (stands for Short Text Conversation) data set, the task is to select a proper response for a post in Weibo 2 .",
"The training set contains 4.8 million post-response (true response) pairs.",
"The test set consists of 422 posts with each one associated with around 30 responses labeled by human annotators in \"good\" and \"bad\".",
"In total, there are 12, 402 labeled pairs in the test data.",
"Following (Wang et al., 2013 (Wang et al., , 2015 , we combine the score from a matching model with TF-IDF based cosine similarity using RankSVM whose parameters are chosen by 5-fold cross validation.",
"Precision at position 1 (P@1) is employed as an evaluation metric.",
"In addition to the models compared on the data in the existing literatures, we also implement dual LSTM (Lowe et al., 2015) as a baseline.",
"As case studies, we learn a dual LSTM and an CNN (Hu et al., 2014) with the proposed approach, and denote them as LSTM+WS (Weak Supervision) and CNN+WS, respectively.",
"When constructing D, we build an index with the training data using Lucene 3 and retrieve 9 candidates (i.e., {y i,2 , .",
".",
".",
", y i,n }) for each post with the inline algorithm of the index.",
"We form a validation set by randomly sampling 10 thousand posts associated with the responses from D (human response is positive and others are treated as negative).",
"Results: Table 1 reports the results.",
"We can see P@1 TFIDF (Wang et al., 2013) 0.574 +Translation (Wang et al., 2013) 0.587 +WordEmbedding 0.579 +DeepMatchtopic 0.587 +DeepMatchtree (Wang et al., 2015) 0.608 +LSTM (Lowe et al., 2015) 0.592 +LSTM+WS 0.616 +CNN (Hu et al., 2014) 0.585 +CNN+WS 0.604 Table 1 : Results on STC that CNN and LSTM consistently get improved when learned with the proposed approach, and the improvements over the models learned with random sampling are statistically significant (ttest with p-value < 0.01).",
"LSTM+WS even surpasses the best performing model, DeepMatch tree , reported on this data.",
"These results indicate the usefulness of the proposed approach in practice.",
"One can expect improvements to models like DeepMatch tree with the new learning method.",
"We leave the verification as future work.",
"Multi-turn Response Selection Experiment settings: Douban Conversation Corpus contains 0.5 million context-response (true response) pairs for training and 1000 contexts for test.",
"In the test set, every context has 10 response candidates, and each of the response has a label \"good\" or \"bad\" judged by human annotators.",
"Mean average precision (MAP) (Baeza-Yates et al., 1999) , mean reciprocal rank (MRR) (Voorhees, 1999) , and precision at position 1 (P@1) are employed as evaluation metrics.",
"We copy the numbers reported in for the baseline models, and learn LSTM, Multi-View, and SMN with the proposed approach.",
"We build an index with the training data, and retrieve 9 candidates with the method in for each context when constructing D. 10 thousand pairs are sampled from D as a validation set.",
"Results: Table 2 reports the results.",
"Consistent with the results on the STC data, every model (+WS one) gets improved with the new learning approach, and the improvements are statistically significant (t-test with p-value < 0.01).",
"Discussion Ablation studies: we first replace the weak supervision s i,j in Equation (2) 0.488 0.527 0.330 LSTM (Lowe et al., 2015) 0.485 0.527 0.320 LSTM+WS 0.519 0.559 0.359 Multi-View (Zhou et al., 2016) 0.505 0.543 0.342 Multi-View+WS 0.534 0.575 0.378 SMN 0.526 0.571 0.393 SMN+WS 0.565 0.609 0.421 everything the same as our approach but replace D with a set constructed by random sampling, denoted as model+WSrand.",
"Table 3 reports the results.",
"We can conclude that both the weak supervision and the strategy of training data construction are important to the success of the proposed learning approach.",
"Training data construction plays a more crucial role, because it involves more true positives and negatives with different semantic distances to the positives into learning.",
"Does updating the Seq2Seq model help?",
"It is well known that Seq2Seq models suffer from the \"safe response\" (Li et al., 2016a) problem, which may bias the weak supervision signals to high-frequency responses.",
"Therefore, we attempt to iteratively optimize the Seq2Seq model and the matching model and check if the matching model can be further improved.",
"Specifically, we update the Seq2Seq model every 20 mini-batches with the policy-based reinforcement learning approach proposed in (Li et al., 2016b) .",
"The reward is defined as the matching score of a context and a response given by the matching model.",
"Unfortunately, we do not observe significant improvement on the matching model.",
"The result is attributed to two factors: (1) it is difficult to significantly im-prove the Seq2Seq model with a policy gradient based method; and (2) eliminating \"safe response\" for Seq2Seq model cannot help a matching model to learn a better decision boundary.",
"How the number of response candidates affects learning: we vary the number of {y i,j } n j=1 in D in {2, 5, 10, 20} and study how the hyperparameter influences learning.",
"We study with LSTM on the STC data and SMN on the Douban data.",
"Table 4 reports the results.",
"We can see that as the number of candidates increases, the performance of the the learned models becomes better.",
"Even with 2 candidates (one from human and the other from retrieval), our approach can still improve the peformance of matching models.",
"Conclusion and Future Work Previous studies focus on architecture design for retrieval-based chatbots, but neglect the problems brought by random negative sampling in the learning process.",
"In this paper, we propose leveraging a Seq2Seq model as a weak annotator on unlabeled data to learn a matching model for response selection.",
"By this means, we can mine hard instances for matching model and give them scores with a weak annotator.",
"Experimental results on public data sets verify the effectiveness of the new learning approach.",
"In the future, we will investigate how to remove bias from the weak supervisors, and further improve the matching model performance with a semi-supervised approach."
]
}
|
{
"paper_header_number": [
"1",
"2.1",
"2.2",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4"
],
"paper_header_content": [
"Introduction",
"The Existing Learning Approach",
"A New Learning Method",
"Experiment",
"Implementation Details",
"Single-turn Response Selection",
"Multi-turn Response Selection",
"Discussion",
"Conclusion and Future Work"
]
}
|
GEM-SciDuet-train-53#paper-1093#slide-3
|
Background traditional training method
|
Given a (Q,R) pair, we first randomly sampled N instances
Update the designed model with the use of point-wise cross entropy loss.
Test model on human annotation data.
1. Most of the randomly sampled responses are far from the semantics of the messages or the contexts.
2. Some of randomly sampled responses are false negatives which pollute the training data as noise.
|
Given a (Q,R) pair, we first randomly sampled N instances
Update the designed model with the use of point-wise cross entropy loss.
Test model on human annotation data.
1. Most of the randomly sampled responses are far from the semantics of the messages or the contexts.
2. Some of randomly sampled responses are false negatives which pollute the training data as noise.
|
[] |
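The slide above walks through the traditional training recipe (random negative sampling plus a pointwise loss) and its two failure modes; the paper replaces it with retrieved candidates whose margins come from a weak Seq2Seq annotator (Objective (2)). A minimal sketch of both, assuming hypothetical helper names (`build_training_instances`, `weak_supervised_margin_loss`) that are not from the paper's released code:

```python
import random
import numpy as np

def build_training_instances(query, true_response, response_pool, n_neg=2, rng=random):
    # Traditional strategy: keep the human response with label 1 and attach
    # n_neg randomly sampled responses with label 0. As the slide notes,
    # random negatives may be semantically unrelated, or false negatives.
    candidates = [r for r in rng.sample(response_pool, n_neg + 1) if r != true_response]
    return [(query, true_response, 1)] + [(query, r, 0) for r in candidates[:n_neg]]

def weak_supervised_margin_loss(pos_score, neg_scores, s_pos, s_neg):
    # The paper's alternative: negatives come from retrieval, and each one
    # gets its own soft margin, the normalized Seq2Seq log-likelihood
    # max(0, s_ij / s_i1 - 1), instead of a hard 0 label.
    margins = np.maximum(0.0, np.asarray(s_neg) / s_pos - 1.0)
    return float(np.sum(np.maximum(0.0, np.asarray(neg_scores) - pos_score + margins)))
```

A retrieved candidate the annotator finds plausible (log-likelihood close to the human response's) receives a near-zero margin, so a likely false negative is barely pushed away, while an unlikely candidate earns a large margin.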
GEM-SciDuet-train-53#paper-1093#slide-4
|
1093
|
Learning Matching Models with Weak Supervision for Response Selection in Retrieval-based Chatbots
|
We propose a method that can leverage unlabeled data to learn a matching model for response selection in retrieval-based chatbots. The method employs a sequence-tosequence architecture (Seq2Seq) model as a weak annotator to judge the matching degree of unlabeled pairs, and then performs learning with both the weak signals and the unlabeled data. Experimental results on two public data sets indicate that matching models get significant improvements when they are learned with the proposed method.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113
],
"paper_content_text": [
"Introduction Recently, more and more attention from both academia and industry is paying to building nontask-oriented chatbots that can naturally converse with humans on any open domain topics.",
"Existing approaches can be categorized into generationbased methods (Shang et al., 2015; Vinyals and Le, 2015; Serban et al., 2016; Sordoni et al., 2015; Serban et al., 2017; Xing et al., 2018) which synthesize a response with natural language generation techniques, and retrievalbased methods (Hu et al., 2014; Lowe et al., 2015; Zhou et al., 2016; which select a response from a pre-built index.",
"In this work, we study response selection for retrieval-based chatbots, not only because retrieval-based methods can return fluent and informative responses, but also because they have been successfully applied to many real products such as the social-bot XiaoIce from Microsoft (Shum et al., 2018) and the E-commerce assistant AliMe Assist from Alibaba Group .",
"* Corresponding Author A key step to response selection is measuring the matching degree between a response candidate and an input which is either a single message (Hu et al., 2014) or a conversational context consisting of multiple utterances .",
"While existing research focuses on how to define a matching model with neural networks, little attention has been paid to how to learn such a model when few labeled data are available.",
"In practice, because human labeling is expensive and exhausting, one cannot have large scale labeled data for model training.",
"Thus, a common practice is to transform the matching problem to a classification problem with human responses as positive examples and randomly sampled ones as negative examples.",
"This strategy, however, oversimplifies the learning problem, as most of the randomly sampled responses are either far from the semantics of the messages or the contexts, or they are false negatives which pollute the training data as noise.",
"As a result, there often exists a significant gap between the performance of a model in training and the same model in practice (Wang et al., 2015; .",
"1 We propose a new method that can effectively leverage unlabeled data for learning matching models.",
"To simulate the real scenario of a retrieval-based chatbot, we construct an unlabeled data set by retrieving response candidates from an index.",
"Then, we employ a weak annotator to provide matching signals for the unlabeled inputresponse pairs, and leverage the signals to supervise the learning of matching models.",
"The weak annotator is pre-trained from large scale humanhuman conversations without any annotations, and thus a Seq2Seq model becomes a natural choice.",
"Our approach is compatible with any matching models, and falls in a teacher-student framework (Hinton et al., 2015) where the Seq2Seq model transfers the knowledge from human-human conversations to the learning process of the matching models.",
"Broadly speaking, both of (Hinton et al., 2015) and our work let a neural network supervise the learning of another network.",
"An advantage of our method is that it turns the hard zero-one labels in the existing learning paradigm to soft (weak) matching scores.",
"Hence, the model can learn a large margin between a true response with a true negative example, and the semantic distance between a true response and a false negative example is short.",
"Furthermore, due to the simulation of real scenario, harder examples can been seen in the training phase that makes the model more robust in the testing.",
"We conduct experiments on two public data sets, and experimental results on both data sets indicate that models learned with our method can significantly outperform their counterparts learned with the random sampling strategy.",
"Our contributions include: (1) proposal of a new method that can leverage unlabeled data to learn matching models for retrieval-based chatbots; and (2) empirical verification of the effectiveness of the method on public data sets.",
"Approach The Existing Learning Approach Given a data set D = {x i , (y i,1 , .",
".",
".",
", y i,n )} N i=1 with x i a message or a conversational context and y i,j a response candidate of x i , we aim to learn a matching model M(·, ·) from D. Thus, for any new pair (x, y), M(x, y) measures the matching degree between x and y.",
"To obtain a matching model, one has to deal with two problems: (1) how to define M(·, ·); and (2) how to perform learning.",
"Existing work focuses on Problem (1) where state-of-the-art methods include dual LSTM (Lowe et al., 2015) , Multi-View LSTM (Zhou et al., 2016) , CNN , and Sequential Matching Network , but adopts a simple strategy for Problem (2): ∀x i , a human response is designated as y i,1 with a label 1, and some randomly sampled responses are treated as (y i,2 , .",
".",
".",
", y i,n ) with labels 0.",
"M(·, ·) is then learned by maximizing the following objective: N i=1 n j=1 [ri,j log(M(xi, yi,j)) + (1 − ri,j) log(1 − M(xi, yi,j))] , (1) where r i,j ∈ {0, 1} is a label.",
"While matching accuracy can be improved by carefully designing M(·, ·) , the bottleneck becomes the learning approach which suffers obvious problems: most of the randomly sampled y i,j are semantically far from x i which may cause an undesired decision boundary at the end of optimization; some y i,j are false negatives.",
"As hard zero-one labels are adopted in Equation (1) , these false negatives may mislead the learning algorithm.",
"The problems remind us that besides good architectures of matching models, we also need a good approach to learn such models from data.",
"A New Learning Method As human labeling is infeasible when training complicated neural networks, we propose a new method that can leverage unlabeled data to learn a matching model.",
"Specifically, instead of random sampling, we construct D by retrieving (y i,2 , .",
".",
".",
", y i,n ) from an index (y i,1 is the human response of x i ).",
"By this means, some y i,j are true positives, and some are negatives but semantically close to x i .",
"After that, we employ a weak annotator G(·, ·) to indicate the matching degree of every (x i , y i,j ) in D as weak supervision signals.",
"Let s ij = G(x i , y i,j ) , then the learning approach can be formulated as: arg min M(·,·) N i=1 n j=1 max(0, M(xi, yi,j) − M(xi, yi,1) + s i,j ), (2) where s ij is a normalized weak signal defined as max(0, s i,j s i,1 − 1).",
"The normalization here eliminates bias from different x i .",
"Objective (2) encourages a large margin between the matching of an input and its human response and the matching of the input and a negative response judged by G(·, ·) (as will be seen later, s i,j s i,1 > 1).",
"The learning approach simulates how we build a matching model in a retrievalbased chatbot: given {x i }, some response candidates are first retrieved from an index.",
"Then human annotators are hired to judge the matching degree of each pair.",
"Finally, both the data and the human labels are fed to an optimization program for model training.",
"Here, we replace the expensive human labels with cheap judgment from G(·, ·).",
"We define G(·, ·) as a sequence-to-sequence architecture (Vinyals and Le, 2015) with an attention mechanism (Bahdanau et al., 2015) , and pre-train it with large amounts of human-human conversa-tion data.",
"The Seq2Seq model can capture the semantic correspondence between an input and a response, and then transfer the knowledge to the learning of a matching model in the optimization of (2).",
"s ij is then defined as the likelihood of generating y i,j from x i : sij = k log[p(w y i,j ,k , |xi, w y i,j ,l<k )], (3) where w y i,j ,k is the k-th word of y i,j and w y i,j ,l<k is the word sequence before w y i,j ,k .",
"Since negative examples are retrieved by a search engine, the oversimplification problem of the negative sampling approach can be partially mitigated.",
"We leverage a weak annotator to assign a score for each example to distinguish false negative examples and true negative examples.",
"Equation (2) turns the hard zero-one labels in Equation (1) to soft matching degrees, and thus our method encourages the model to be more confident to classify a response with a high s i,j score as a negative one.",
"In this way, we can avoid false negative examples and true negative examples are treated equally during training, and update the model toward a correct direction.",
"It is noteworthy that although our approach also involves an interaction between a generator and a discriminator, it is different from the GANs (Goodfellow et al., 2014) in principle.",
"GANs try to learn a better generator via an adversarial process, while our approach aims to improve the discriminator with supervision from the generator, which also differentiates it from the recent work on transferring knowledge from a discriminator to a generative visual dialog model (Lu et al., 2017) .",
"Our approach is also different from those semi-supervised approaches in the teacher-student framework (Dehghani et al., 2017a,b) , as there are no labeled data in learning.",
"Experiment We conduct experiments on two public data sets: STC data set (Wang et al., 2013) for single-turn response selection and Douban Conversation Corpus for multi-turn response selection.",
"Note that we do not test the proposed approach on Ubuntu Corpus (Lowe et al., 2015) , because both training and test data in the corpus are constructed by random sampling.",
"Implementation Details We implement our approach with TensorFlow.",
"In both experiments, the same Seq2Seq model is exploited which is trained with 3.3 million inputresponse pairs extracted from the training set of the Douban data.",
"Each input is a concatenation of consecutive utterances in a context, and the response is the next turn ({u <i }, u i ).",
"We set the vocabulary size as 30, 000, the hidden vector size as 1024, and the embedding size as 620.",
"Optimization is conducted with stochastic gradient descent (Bottou, 2010) , and is terminated when perplexity on a validation set (170k pairs) does not decrease in 3 consecutive epochs.",
"In optimization of Objective (2), we initialize M(·, ·) with a model trained under Objective (1) with the (random) negative sampling strategy, and fix word embeddings throughout training.",
"This can stabilize the learning process.",
"The learning rate is fixed as 0.1.",
"Single-turn Response Selection Experiment settings: in the STC (stands for Short Text Conversation) data set, the task is to select a proper response for a post in Weibo 2 .",
"The training set contains 4.8 million post-response (true response) pairs.",
"The test set consists of 422 posts with each one associated with around 30 responses labeled by human annotators in \"good\" and \"bad\".",
"In total, there are 12, 402 labeled pairs in the test data.",
"Following (Wang et al., 2013 (Wang et al., , 2015 , we combine the score from a matching model with TF-IDF based cosine similarity using RankSVM whose parameters are chosen by 5-fold cross validation.",
"Precision at position 1 (P@1) is employed as an evaluation metric.",
"In addition to the models compared on the data in the existing literatures, we also implement dual LSTM (Lowe et al., 2015) as a baseline.",
"As case studies, we learn a dual LSTM and an CNN (Hu et al., 2014) with the proposed approach, and denote them as LSTM+WS (Weak Supervision) and CNN+WS, respectively.",
"When constructing D, we build an index with the training data using Lucene 3 and retrieve 9 candidates (i.e., {y i,2 , .",
".",
".",
", y i,n }) for each post with the inline algorithm of the index.",
"We form a validation set by randomly sampling 10 thousand posts associated with the responses from D (human response is positive and others are treated as negative).",
"Results: Table 1 reports the results.",
"We can see P@1 TFIDF (Wang et al., 2013) 0.574 +Translation (Wang et al., 2013) 0.587 +WordEmbedding 0.579 +DeepMatchtopic 0.587 +DeepMatchtree (Wang et al., 2015) 0.608 +LSTM (Lowe et al., 2015) 0.592 +LSTM+WS 0.616 +CNN (Hu et al., 2014) 0.585 +CNN+WS 0.604 Table 1 : Results on STC that CNN and LSTM consistently get improved when learned with the proposed approach, and the improvements over the models learned with random sampling are statistically significant (ttest with p-value < 0.01).",
"LSTM+WS even surpasses the best performing model, DeepMatch tree , reported on this data.",
"These results indicate the usefulness of the proposed approach in practice.",
"One can expect improvements to models like DeepMatch tree with the new learning method.",
"We leave the verification as future work.",
"Multi-turn Response Selection Experiment settings: Douban Conversation Corpus contains 0.5 million context-response (true response) pairs for training and 1000 contexts for test.",
"In the test set, every context has 10 response candidates, and each of the response has a label \"good\" or \"bad\" judged by human annotators.",
"Mean average precision (MAP) (Baeza-Yates et al., 1999) , mean reciprocal rank (MRR) (Voorhees, 1999) , and precision at position 1 (P@1) are employed as evaluation metrics.",
"We copy the numbers reported in for the baseline models, and learn LSTM, Multi-View, and SMN with the proposed approach.",
"We build an index with the training data, and retrieve 9 candidates with the method in for each context when constructing D. 10 thousand pairs are sampled from D as a validation set.",
"Results: Table 2 reports the results.",
"Consistent with the results on the STC data, every model (+WS one) gets improved with the new learning approach, and the improvements are statistically significant (t-test with p-value < 0.01).",
"Discussion Ablation studies: we first replace the weak supervision s i,j in Equation (2) 0.488 0.527 0.330 LSTM (Lowe et al., 2015) 0.485 0.527 0.320 LSTM+WS 0.519 0.559 0.359 Multi-View (Zhou et al., 2016) 0.505 0.543 0.342 Multi-View+WS 0.534 0.575 0.378 SMN 0.526 0.571 0.393 SMN+WS 0.565 0.609 0.421 everything the same as our approach but replace D with a set constructed by random sampling, denoted as model+WSrand.",
"Table 3 reports the results.",
"We can conclude that both the weak supervision and the strategy of training data construction are important to the success of the proposed learning approach.",
"Training data construction plays a more crucial role, because it involves more true positives and negatives with different semantic distances to the positives into learning.",
"Does updating the Seq2Seq model help?",
"It is well known that Seq2Seq models suffer from the \"safe response\" (Li et al., 2016a) problem, which may bias the weak supervision signals to high-frequency responses.",
"Therefore, we attempt to iteratively optimize the Seq2Seq model and the matching model and check if the matching model can be further improved.",
"Specifically, we update the Seq2Seq model every 20 mini-batches with the policy-based reinforcement learning approach proposed in (Li et al., 2016b) .",
"The reward is defined as the matching score of a context and a response given by the matching model.",
"Unfortunately, we do not observe significant improvement on the matching model.",
"The result is attributed to two factors: (1) it is difficult to significantly im-prove the Seq2Seq model with a policy gradient based method; and (2) eliminating \"safe response\" for Seq2Seq model cannot help a matching model to learn a better decision boundary.",
"How the number of response candidates affects learning: we vary the number of {y i,j } n j=1 in D in {2, 5, 10, 20} and study how the hyperparameter influences learning.",
"We study with LSTM on the STC data and SMN on the Douban data.",
"Table 4 reports the results.",
"We can see that as the number of candidates increases, the performance of the the learned models becomes better.",
"Even with 2 candidates (one from human and the other from retrieval), our approach can still improve the peformance of matching models.",
"Conclusion and Future Work Previous studies focus on architecture design for retrieval-based chatbots, but neglect the problems brought by random negative sampling in the learning process.",
"In this paper, we propose leveraging a Seq2Seq model as a weak annotator on unlabeled data to learn a matching model for response selection.",
"By this means, we can mine hard instances for matching model and give them scores with a weak annotator.",
"Experimental results on public data sets verify the effectiveness of the new learning approach.",
"In the future, we will investigate how to remove bias from the weak supervisors, and further improve the matching model performance with a semi-supervised approach."
]
}
|
{
"paper_header_number": [
"1",
"2.1",
"2.2",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4"
],
"paper_header_content": [
"Introduction",
"The Existing Learning Approach",
"A New Learning Method",
"Experiment",
"Implementation Details",
"Single-turn Response Selection",
"Multi-turn Response Selection",
"Discussion",
"Conclusion and Future Work"
]
}
|
GEM-SciDuet-train-53#paper-1093#slide-4
|
Challenges of Response Selection in Chatbots
|
Negative sampling oversimplifies the response selection task in the training phase.
Train: Given an utterance, positive responses are collected from human conversations, but negative ones are negatively sampled.
Test: Given an utterance, a bunch of responses are returned by a search engine. Human annotators are asked to label these responses.
Human labeling is expensive and exhausting, so one cannot have large-scale labeled data for model training.
|
Negative sampling oversimplifies the response selection task in the training phase.
Train: Given an utterance, positive responses are collected from human conversations, but negative ones are negatively sampled.
Test: Given an utterance, a bunch of responses are returned by a search engine. Human annotators are asked to label these responses.
Human labeling is expensive and exhausting, so one cannot have large-scale labeled data for model training.
|
[] |
GEM-SciDuet-train-53#paper-1093#slide-5
|
1093
|
Learning Matching Models with Weak Supervision for Response Selection in Retrieval-based Chatbots
|
We propose a method that can leverage unlabeled data to learn a matching model for response selection in retrieval-based chatbots. The method employs a sequence-to-sequence (Seq2Seq) model as a weak annotator to judge the matching degree of unlabeled pairs, and then performs learning with both the weak signals and the unlabeled data. Experimental results on two public data sets indicate that matching models get significant improvements when they are learned with the proposed method.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113
],
"paper_content_text": [
"Introduction Recently, more and more attention from both academia and industry is paying to building nontask-oriented chatbots that can naturally converse with humans on any open domain topics.",
"Existing approaches can be categorized into generationbased methods (Shang et al., 2015; Vinyals and Le, 2015; Serban et al., 2016; Sordoni et al., 2015; Serban et al., 2017; Xing et al., 2018) which synthesize a response with natural language generation techniques, and retrievalbased methods (Hu et al., 2014; Lowe et al., 2015; Zhou et al., 2016; which select a response from a pre-built index.",
"In this work, we study response selection for retrieval-based chatbots, not only because retrieval-based methods can return fluent and informative responses, but also because they have been successfully applied to many real products such as the social-bot XiaoIce from Microsoft (Shum et al., 2018) and the E-commerce assistant AliMe Assist from Alibaba Group .",
"* Corresponding Author A key step to response selection is measuring the matching degree between a response candidate and an input which is either a single message (Hu et al., 2014) or a conversational context consisting of multiple utterances .",
"While existing research focuses on how to define a matching model with neural networks, little attention has been paid to how to learn such a model when few labeled data are available.",
"In practice, because human labeling is expensive and exhausting, one cannot have large scale labeled data for model training.",
"Thus, a common practice is to transform the matching problem to a classification problem with human responses as positive examples and randomly sampled ones as negative examples.",
"This strategy, however, oversimplifies the learning problem, as most of the randomly sampled responses are either far from the semantics of the messages or the contexts, or they are false negatives which pollute the training data as noise.",
"As a result, there often exists a significant gap between the performance of a model in training and the same model in practice (Wang et al., 2015; .",
"1 We propose a new method that can effectively leverage unlabeled data for learning matching models.",
"To simulate the real scenario of a retrieval-based chatbot, we construct an unlabeled data set by retrieving response candidates from an index.",
"Then, we employ a weak annotator to provide matching signals for the unlabeled inputresponse pairs, and leverage the signals to supervise the learning of matching models.",
"The weak annotator is pre-trained from large scale humanhuman conversations without any annotations, and thus a Seq2Seq model becomes a natural choice.",
"Our approach is compatible with any matching models, and falls in a teacher-student framework (Hinton et al., 2015) where the Seq2Seq model transfers the knowledge from human-human conversations to the learning process of the matching models.",
"Broadly speaking, both of (Hinton et al., 2015) and our work let a neural network supervise the learning of another network.",
"An advantage of our method is that it turns the hard zero-one labels in the existing learning paradigm to soft (weak) matching scores.",
"Hence, the model can learn a large margin between a true response with a true negative example, and the semantic distance between a true response and a false negative example is short.",
"Furthermore, due to the simulation of real scenario, harder examples can been seen in the training phase that makes the model more robust in the testing.",
"We conduct experiments on two public data sets, and experimental results on both data sets indicate that models learned with our method can significantly outperform their counterparts learned with the random sampling strategy.",
"Our contributions include: (1) proposal of a new method that can leverage unlabeled data to learn matching models for retrieval-based chatbots; and (2) empirical verification of the effectiveness of the method on public data sets.",
"Approach The Existing Learning Approach Given a data set D = {x i , (y i,1 , .",
".",
".",
", y i,n )} N i=1 with x i a message or a conversational context and y i,j a response candidate of x i , we aim to learn a matching model M(·, ·) from D. Thus, for any new pair (x, y), M(x, y) measures the matching degree between x and y.",
"To obtain a matching model, one has to deal with two problems: (1) how to define M(·, ·); and (2) how to perform learning.",
"Existing work focuses on Problem (1) where state-of-the-art methods include dual LSTM (Lowe et al., 2015) , Multi-View LSTM (Zhou et al., 2016) , CNN , and Sequential Matching Network , but adopts a simple strategy for Problem (2): ∀x i , a human response is designated as y i,1 with a label 1, and some randomly sampled responses are treated as (y i,2 , .",
".",
".",
", y i,n ) with labels 0.",
"M(·, ·) is then learned by maximizing the following objective: N i=1 n j=1 [ri,j log(M(xi, yi,j)) + (1 − ri,j) log(1 − M(xi, yi,j))] , (1) where r i,j ∈ {0, 1} is a label.",
"While matching accuracy can be improved by carefully designing M(·, ·) , the bottleneck becomes the learning approach which suffers obvious problems: most of the randomly sampled y i,j are semantically far from x i which may cause an undesired decision boundary at the end of optimization; some y i,j are false negatives.",
"As hard zero-one labels are adopted in Equation (1) , these false negatives may mislead the learning algorithm.",
"The problems remind us that besides good architectures of matching models, we also need a good approach to learn such models from data.",
"A New Learning Method As human labeling is infeasible when training complicated neural networks, we propose a new method that can leverage unlabeled data to learn a matching model.",
"Specifically, instead of random sampling, we construct D by retrieving (y i,2 , .",
".",
".",
", y i,n ) from an index (y i,1 is the human response of x i ).",
"By this means, some y i,j are true positives, and some are negatives but semantically close to x i .",
"After that, we employ a weak annotator G(·, ·) to indicate the matching degree of every (x i , y i,j ) in D as weak supervision signals.",
"Let s ij = G(x i , y i,j ) , then the learning approach can be formulated as: arg min M(·,·) N i=1 n j=1 max(0, M(xi, yi,j) − M(xi, yi,1) + s i,j ), (2) where s ij is a normalized weak signal defined as max(0, s i,j s i,1 − 1).",
"The normalization here eliminates bias from different x i .",
"Objective (2) encourages a large margin between the matching of an input and its human response and the matching of the input and a negative response judged by G(·, ·) (as will be seen later, s i,j s i,1 > 1).",
"The learning approach simulates how we build a matching model in a retrievalbased chatbot: given {x i }, some response candidates are first retrieved from an index.",
"Then human annotators are hired to judge the matching degree of each pair.",
"Finally, both the data and the human labels are fed to an optimization program for model training.",
"Here, we replace the expensive human labels with cheap judgment from G(·, ·).",
"We define G(·, ·) as a sequence-to-sequence architecture (Vinyals and Le, 2015) with an attention mechanism (Bahdanau et al., 2015) , and pre-train it with large amounts of human-human conversa-tion data.",
"The Seq2Seq model can capture the semantic correspondence between an input and a response, and then transfer the knowledge to the learning of a matching model in the optimization of (2).",
"s ij is then defined as the likelihood of generating y i,j from x i : sij = k log[p(w y i,j ,k , |xi, w y i,j ,l<k )], (3) where w y i,j ,k is the k-th word of y i,j and w y i,j ,l<k is the word sequence before w y i,j ,k .",
"Since negative examples are retrieved by a search engine, the oversimplification problem of the negative sampling approach can be partially mitigated.",
"We leverage a weak annotator to assign a score for each example to distinguish false negative examples and true negative examples.",
"Equation (2) turns the hard zero-one labels in Equation (1) to soft matching degrees, and thus our method encourages the model to be more confident to classify a response with a high s i,j score as a negative one.",
"In this way, we can avoid false negative examples and true negative examples are treated equally during training, and update the model toward a correct direction.",
"It is noteworthy that although our approach also involves an interaction between a generator and a discriminator, it is different from the GANs (Goodfellow et al., 2014) in principle.",
"GANs try to learn a better generator via an adversarial process, while our approach aims to improve the discriminator with supervision from the generator, which also differentiates it from the recent work on transferring knowledge from a discriminator to a generative visual dialog model (Lu et al., 2017) .",
"Our approach is also different from those semi-supervised approaches in the teacher-student framework (Dehghani et al., 2017a,b) , as there are no labeled data in learning.",
"Experiment We conduct experiments on two public data sets: STC data set (Wang et al., 2013) for single-turn response selection and Douban Conversation Corpus for multi-turn response selection.",
"Note that we do not test the proposed approach on Ubuntu Corpus (Lowe et al., 2015) , because both training and test data in the corpus are constructed by random sampling.",
"Implementation Details We implement our approach with TensorFlow.",
"In both experiments, the same Seq2Seq model is exploited which is trained with 3.3 million inputresponse pairs extracted from the training set of the Douban data.",
"Each input is a concatenation of consecutive utterances in a context, and the response is the next turn ({u <i }, u i ).",
"We set the vocabulary size as 30, 000, the hidden vector size as 1024, and the embedding size as 620.",
"Optimization is conducted with stochastic gradient descent (Bottou, 2010) , and is terminated when perplexity on a validation set (170k pairs) does not decrease in 3 consecutive epochs.",
"In optimization of Objective (2), we initialize M(·, ·) with a model trained under Objective (1) with the (random) negative sampling strategy, and fix word embeddings throughout training.",
"This can stabilize the learning process.",
"The learning rate is fixed as 0.1.",
"Single-turn Response Selection Experiment settings: in the STC (stands for Short Text Conversation) data set, the task is to select a proper response for a post in Weibo 2 .",
"The training set contains 4.8 million post-response (true response) pairs.",
"The test set consists of 422 posts with each one associated with around 30 responses labeled by human annotators in \"good\" and \"bad\".",
"In total, there are 12, 402 labeled pairs in the test data.",
"Following (Wang et al., 2013 (Wang et al., , 2015 , we combine the score from a matching model with TF-IDF based cosine similarity using RankSVM whose parameters are chosen by 5-fold cross validation.",
"Precision at position 1 (P@1) is employed as an evaluation metric.",
"In addition to the models compared on the data in the existing literatures, we also implement dual LSTM (Lowe et al., 2015) as a baseline.",
"As case studies, we learn a dual LSTM and an CNN (Hu et al., 2014) with the proposed approach, and denote them as LSTM+WS (Weak Supervision) and CNN+WS, respectively.",
"When constructing D, we build an index with the training data using Lucene 3 and retrieve 9 candidates (i.e., {y i,2 , .",
".",
".",
", y i,n }) for each post with the inline algorithm of the index.",
"We form a validation set by randomly sampling 10 thousand posts associated with the responses from D (human response is positive and others are treated as negative).",
"Results: Table 1 reports the results.",
"We can see P@1 TFIDF (Wang et al., 2013) 0.574 +Translation (Wang et al., 2013) 0.587 +WordEmbedding 0.579 +DeepMatchtopic 0.587 +DeepMatchtree (Wang et al., 2015) 0.608 +LSTM (Lowe et al., 2015) 0.592 +LSTM+WS 0.616 +CNN (Hu et al., 2014) 0.585 +CNN+WS 0.604 Table 1 : Results on STC that CNN and LSTM consistently get improved when learned with the proposed approach, and the improvements over the models learned with random sampling are statistically significant (ttest with p-value < 0.01).",
"LSTM+WS even surpasses the best performing model, DeepMatch tree , reported on this data.",
"These results indicate the usefulness of the proposed approach in practice.",
"One can expect improvements to models like DeepMatch tree with the new learning method.",
"We leave the verification as future work.",
"Multi-turn Response Selection Experiment settings: Douban Conversation Corpus contains 0.5 million context-response (true response) pairs for training and 1000 contexts for test.",
"In the test set, every context has 10 response candidates, and each of the response has a label \"good\" or \"bad\" judged by human annotators.",
"Mean average precision (MAP) (Baeza-Yates et al., 1999) , mean reciprocal rank (MRR) (Voorhees, 1999) , and precision at position 1 (P@1) are employed as evaluation metrics.",
"We copy the numbers reported in for the baseline models, and learn LSTM, Multi-View, and SMN with the proposed approach.",
"We build an index with the training data, and retrieve 9 candidates with the method in for each context when constructing D. 10 thousand pairs are sampled from D as a validation set.",
"Results: Table 2 reports the results.",
"Consistent with the results on the STC data, every model (+WS one) gets improved with the new learning approach, and the improvements are statistically significant (t-test with p-value < 0.01).",
"Discussion Ablation studies: we first replace the weak supervision s i,j in Equation (2) 0.488 0.527 0.330 LSTM (Lowe et al., 2015) 0.485 0.527 0.320 LSTM+WS 0.519 0.559 0.359 Multi-View (Zhou et al., 2016) 0.505 0.543 0.342 Multi-View+WS 0.534 0.575 0.378 SMN 0.526 0.571 0.393 SMN+WS 0.565 0.609 0.421 everything the same as our approach but replace D with a set constructed by random sampling, denoted as model+WSrand.",
"Table 3 reports the results.",
"We can conclude that both the weak supervision and the strategy of training data construction are important to the success of the proposed learning approach.",
"Training data construction plays a more crucial role, because it involves more true positives and negatives with different semantic distances to the positives into learning.",
"Does updating the Seq2Seq model help?",
"It is well known that Seq2Seq models suffer from the \"safe response\" (Li et al., 2016a) problem, which may bias the weak supervision signals to high-frequency responses.",
"Therefore, we attempt to iteratively optimize the Seq2Seq model and the matching model and check if the matching model can be further improved.",
"Specifically, we update the Seq2Seq model every 20 mini-batches with the policy-based reinforcement learning approach proposed in (Li et al., 2016b) .",
"The reward is defined as the matching score of a context and a response given by the matching model.",
"Unfortunately, we do not observe significant improvement on the matching model.",
"The result is attributed to two factors: (1) it is difficult to significantly im-prove the Seq2Seq model with a policy gradient based method; and (2) eliminating \"safe response\" for Seq2Seq model cannot help a matching model to learn a better decision boundary.",
"How the number of response candidates affects learning: we vary the number of {y i,j } n j=1 in D in {2, 5, 10, 20} and study how the hyperparameter influences learning.",
"We study with LSTM on the STC data and SMN on the Douban data.",
"Table 4 reports the results.",
"We can see that as the number of candidates increases, the performance of the the learned models becomes better.",
"Even with 2 candidates (one from human and the other from retrieval), our approach can still improve the peformance of matching models.",
"Conclusion and Future Work Previous studies focus on architecture design for retrieval-based chatbots, but neglect the problems brought by random negative sampling in the learning process.",
"In this paper, we propose leveraging a Seq2Seq model as a weak annotator on unlabeled data to learn a matching model for response selection.",
"By this means, we can mine hard instances for matching model and give them scores with a weak annotator.",
"Experimental results on public data sets verify the effectiveness of the new learning approach.",
"In the future, we will investigate how to remove bias from the weak supervisors, and further improve the matching model performance with a semi-supervised approach."
]
}
|
{
"paper_header_number": [
"1",
"2.1",
"2.2",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4"
],
"paper_header_content": [
"Introduction",
"The Existing Learning Approach",
"A New Learning Method",
"Experiment",
"Implementation Details",
"Single-turn Response Selection",
"Multi-turn Response Selection",
"Discussion",
"Conclusion and Future Work"
]
}
|
GEM-SciDuet-train-53#paper-1093#slide-5
|
Our Idea
|
The margin in our loss is dynamic.
R is the ground-truth response, and R_i is a retrieved instance. is a confidence score for each instance. Our method encourages the model to be more confident to classify a response with a high as a negative one.
|
The margin in our loss is dynamic.
R is the ground-truth response, and R_i is a retrieved instance. is a confidence score for each instance. Our method encourages the model to be more confident to classify a response with a high as a negative one.
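The dynamic-margin objective described above can be sketched in a few lines of plain Python (the function and variable names are illustrative assumptions, not the paper's TensorFlow implementation):

```python
# Hedged sketch of a dynamic-margin hinge loss for one training input.
# m_true : matching score M(x, R) for the ground-truth response R.
# scored : list of (m_neg, margin) pairs, where m_neg = M(x, R_i) is the
#          score of a retrieved candidate R_i and margin is its
#          confidence-based dynamic margin.
def dynamic_margin_hinge_loss(m_true, scored):
    # Each retrieved candidate contributes max(0, M(x, R_i) - M(x, R) + margin):
    # the higher the confidence that R_i is a negative, the larger the margin,
    # so the model must push its score further below the true response's score.
    return sum(max(0.0, m_neg - m_true + margin) for m_neg, margin in scored)
```

With a fixed margin this reduces to a standard pairwise hinge loss; the per-instance dynamic margin is what makes the model more confident in classifying a high-confidence retrieved candidate as a negative.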
|
[] |
GEM-SciDuet-train-53#paper-1093#slide-6
|
1093
|
Learning Matching Models with Weak Supervision for Response Selection in Retrieval-based Chatbots
|
We propose a method that can leverage unlabeled data to learn a matching model for response selection in retrieval-based chatbots. The method employs a sequence-tosequence architecture (Seq2Seq) model as a weak annotator to judge the matching degree of unlabeled pairs, and then performs learning with both the weak signals and the unlabeled data. Experimental results on two public data sets indicate that matching models get significant improvements when they are learned with the proposed method.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113
],
"paper_content_text": [
"Introduction Recently, more and more attention from both academia and industry is paying to building nontask-oriented chatbots that can naturally converse with humans on any open domain topics.",
"Existing approaches can be categorized into generationbased methods (Shang et al., 2015; Vinyals and Le, 2015; Serban et al., 2016; Sordoni et al., 2015; Serban et al., 2017; Xing et al., 2018) which synthesize a response with natural language generation techniques, and retrievalbased methods (Hu et al., 2014; Lowe et al., 2015; Zhou et al., 2016; which select a response from a pre-built index.",
"In this work, we study response selection for retrieval-based chatbots, not only because retrieval-based methods can return fluent and informative responses, but also because they have been successfully applied to many real products such as the social-bot XiaoIce from Microsoft (Shum et al., 2018) and the E-commerce assistant AliMe Assist from Alibaba Group .",
"* Corresponding Author A key step to response selection is measuring the matching degree between a response candidate and an input which is either a single message (Hu et al., 2014) or a conversational context consisting of multiple utterances .",
"While existing research focuses on how to define a matching model with neural networks, little attention has been paid to how to learn such a model when few labeled data are available.",
"In practice, because human labeling is expensive and exhausting, one cannot have large scale labeled data for model training.",
"Thus, a common practice is to transform the matching problem to a classification problem with human responses as positive examples and randomly sampled ones as negative examples.",
"This strategy, however, oversimplifies the learning problem, as most of the randomly sampled responses are either far from the semantics of the messages or the contexts, or they are false negatives which pollute the training data as noise.",
"As a result, there often exists a significant gap between the performance of a model in training and the same model in practice (Wang et al., 2015; .",
"1 We propose a new method that can effectively leverage unlabeled data for learning matching models.",
"To simulate the real scenario of a retrieval-based chatbot, we construct an unlabeled data set by retrieving response candidates from an index.",
"Then, we employ a weak annotator to provide matching signals for the unlabeled inputresponse pairs, and leverage the signals to supervise the learning of matching models.",
"The weak annotator is pre-trained from large scale humanhuman conversations without any annotations, and thus a Seq2Seq model becomes a natural choice.",
"Our approach is compatible with any matching models, and falls in a teacher-student framework (Hinton et al., 2015) where the Seq2Seq model transfers the knowledge from human-human conversations to the learning process of the matching models.",
"Broadly speaking, both of (Hinton et al., 2015) and our work let a neural network supervise the learning of another network.",
"An advantage of our method is that it turns the hard zero-one labels in the existing learning paradigm to soft (weak) matching scores.",
"Hence, the model can learn a large margin between a true response with a true negative example, and the semantic distance between a true response and a false negative example is short.",
"Furthermore, due to the simulation of real scenario, harder examples can been seen in the training phase that makes the model more robust in the testing.",
"We conduct experiments on two public data sets, and experimental results on both data sets indicate that models learned with our method can significantly outperform their counterparts learned with the random sampling strategy.",
"Our contributions include: (1) proposal of a new method that can leverage unlabeled data to learn matching models for retrieval-based chatbots; and (2) empirical verification of the effectiveness of the method on public data sets.",
"Approach The Existing Learning Approach Given a data set D = {x i , (y i,1 , .",
".",
".",
", y i,n )} N i=1 with x i a message or a conversational context and y i,j a response candidate of x i , we aim to learn a matching model M(·, ·) from D. Thus, for any new pair (x, y), M(x, y) measures the matching degree between x and y.",
"To obtain a matching model, one has to deal with two problems: (1) how to define M(·, ·); and (2) how to perform learning.",
"Existing work focuses on Problem (1) where state-of-the-art methods include dual LSTM (Lowe et al., 2015) , Multi-View LSTM (Zhou et al., 2016) , CNN , and Sequential Matching Network , but adopts a simple strategy for Problem (2): ∀x i , a human response is designated as y i,1 with a label 1, and some randomly sampled responses are treated as (y i,2 , .",
".",
".",
", y i,n ) with labels 0.",
"M(·, ·) is then learned by maximizing the following objective: N i=1 n j=1 [ri,j log(M(xi, yi,j)) + (1 − ri,j) log(1 − M(xi, yi,j))] , (1) where r i,j ∈ {0, 1} is a label.",
"While matching accuracy can be improved by carefully designing M(·, ·) , the bottleneck becomes the learning approach which suffers obvious problems: most of the randomly sampled y i,j are semantically far from x i which may cause an undesired decision boundary at the end of optimization; some y i,j are false negatives.",
"As hard zero-one labels are adopted in Equation (1) , these false negatives may mislead the learning algorithm.",
"The problems remind us that besides good architectures of matching models, we also need a good approach to learn such models from data.",
"A New Learning Method As human labeling is infeasible when training complicated neural networks, we propose a new method that can leverage unlabeled data to learn a matching model.",
"Specifically, instead of random sampling, we construct D by retrieving (y i,2 , .",
".",
".",
", y i,n ) from an index (y i,1 is the human response of x i ).",
"By this means, some y i,j are true positives, and some are negatives but semantically close to x i .",
"After that, we employ a weak annotator G(·, ·) to indicate the matching degree of every (x i , y i,j ) in D as weak supervision signals.",
"Let s ij = G(x i , y i,j ) , then the learning approach can be formulated as: arg min M(·,·) N i=1 n j=1 max(0, M(xi, yi,j) − M(xi, yi,1) + s i,j ), (2) where s ij is a normalized weak signal defined as max(0, s i,j s i,1 − 1).",
"The normalization here eliminates bias from different x i .",
"Objective (2) encourages a large margin between the matching of an input and its human response and the matching of the input and a negative response judged by G(·, ·) (as will be seen later, s i,j s i,1 > 1).",
"The learning approach simulates how we build a matching model in a retrievalbased chatbot: given {x i }, some response candidates are first retrieved from an index.",
"Then human annotators are hired to judge the matching degree of each pair.",
"Finally, both the data and the human labels are fed to an optimization program for model training.",
"Here, we replace the expensive human labels with cheap judgment from G(·, ·).",
"We define G(·, ·) as a sequence-to-sequence architecture (Vinyals and Le, 2015) with an attention mechanism (Bahdanau et al., 2015) , and pre-train it with large amounts of human-human conversa-tion data.",
"The Seq2Seq model can capture the semantic correspondence between an input and a response, and then transfer the knowledge to the learning of a matching model in the optimization of (2).",
"s ij is then defined as the likelihood of generating y i,j from x i : sij = k log[p(w y i,j ,k , |xi, w y i,j ,l<k )], (3) where w y i,j ,k is the k-th word of y i,j and w y i,j ,l<k is the word sequence before w y i,j ,k .",
"Since negative examples are retrieved by a search engine, the oversimplification problem of the negative sampling approach can be partially mitigated.",
"We leverage a weak annotator to assign a score for each example to distinguish false negative examples and true negative examples.",
"Equation (2) turns the hard zero-one labels in Equation (1) to soft matching degrees, and thus our method encourages the model to be more confident to classify a response with a high s i,j score as a negative one.",
"In this way, we can avoid false negative examples and true negative examples are treated equally during training, and update the model toward a correct direction.",
"It is noteworthy that although our approach also involves an interaction between a generator and a discriminator, it is different from the GANs (Goodfellow et al., 2014) in principle.",
"GANs try to learn a better generator via an adversarial process, while our approach aims to improve the discriminator with supervision from the generator, which also differentiates it from the recent work on transferring knowledge from a discriminator to a generative visual dialog model (Lu et al., 2017) .",
"Our approach is also different from those semi-supervised approaches in the teacher-student framework (Dehghani et al., 2017a,b) , as there are no labeled data in learning.",
"Experiment We conduct experiments on two public data sets: STC data set (Wang et al., 2013) for single-turn response selection and Douban Conversation Corpus for multi-turn response selection.",
"Note that we do not test the proposed approach on Ubuntu Corpus (Lowe et al., 2015) , because both training and test data in the corpus are constructed by random sampling.",
"Implementation Details We implement our approach with TensorFlow.",
"In both experiments, the same Seq2Seq model is exploited which is trained with 3.3 million inputresponse pairs extracted from the training set of the Douban data.",
"Each input is a concatenation of consecutive utterances in a context, and the response is the next turn ({u <i }, u i ).",
"We set the vocabulary size as 30, 000, the hidden vector size as 1024, and the embedding size as 620.",
"Optimization is conducted with stochastic gradient descent (Bottou, 2010) , and is terminated when perplexity on a validation set (170k pairs) does not decrease in 3 consecutive epochs.",
"In optimization of Objective (2), we initialize M(·, ·) with a model trained under Objective (1) with the (random) negative sampling strategy, and fix word embeddings throughout training.",
"This can stabilize the learning process.",
"The learning rate is fixed as 0.1.",
"Single-turn Response Selection Experiment settings: in the STC (stands for Short Text Conversation) data set, the task is to select a proper response for a post in Weibo 2 .",
"The training set contains 4.8 million post-response (true response) pairs.",
"The test set consists of 422 posts with each one associated with around 30 responses labeled by human annotators in \"good\" and \"bad\".",
"In total, there are 12, 402 labeled pairs in the test data.",
"Following (Wang et al., 2013 (Wang et al., , 2015 , we combine the score from a matching model with TF-IDF based cosine similarity using RankSVM whose parameters are chosen by 5-fold cross validation.",
"Precision at position 1 (P@1) is employed as an evaluation metric.",
"In addition to the models compared on the data in the existing literatures, we also implement dual LSTM (Lowe et al., 2015) as a baseline.",
"As case studies, we learn a dual LSTM and an CNN (Hu et al., 2014) with the proposed approach, and denote them as LSTM+WS (Weak Supervision) and CNN+WS, respectively.",
"When constructing D, we build an index with the training data using Lucene 3 and retrieve 9 candidates (i.e., {y i,2 , .",
".",
".",
", y i,n }) for each post with the inline algorithm of the index.",
"We form a validation set by randomly sampling 10 thousand posts associated with the responses from D (human response is positive and others are treated as negative).",
"Results: Table 1 reports the results.",
"We can see P@1 TFIDF (Wang et al., 2013) 0.574 +Translation (Wang et al., 2013) 0.587 +WordEmbedding 0.579 +DeepMatchtopic 0.587 +DeepMatchtree (Wang et al., 2015) 0.608 +LSTM (Lowe et al., 2015) 0.592 +LSTM+WS 0.616 +CNN (Hu et al., 2014) 0.585 +CNN+WS 0.604 Table 1 : Results on STC that CNN and LSTM consistently get improved when learned with the proposed approach, and the improvements over the models learned with random sampling are statistically significant (ttest with p-value < 0.01).",
"LSTM+WS even surpasses the best performing model, DeepMatch tree , reported on this data.",
"These results indicate the usefulness of the proposed approach in practice.",
"One can expect improvements to models like DeepMatch tree with the new learning method.",
"We leave the verification as future work.",
"Multi-turn Response Selection Experiment settings: Douban Conversation Corpus contains 0.5 million context-response (true response) pairs for training and 1000 contexts for test.",
"In the test set, every context has 10 response candidates, and each of the response has a label \"good\" or \"bad\" judged by human annotators.",
"Mean average precision (MAP) (Baeza-Yates et al., 1999) , mean reciprocal rank (MRR) (Voorhees, 1999) , and precision at position 1 (P@1) are employed as evaluation metrics.",
"We copy the numbers reported in for the baseline models, and learn LSTM, Multi-View, and SMN with the proposed approach.",
"We build an index with the training data, and retrieve 9 candidates with the method in for each context when constructing D. 10 thousand pairs are sampled from D as a validation set.",
"Results: Table 2 reports the results.",
"Consistent with the results on the STC data, every model (+WS one) gets improved with the new learning approach, and the improvements are statistically significant (t-test with p-value < 0.01).",
"Discussion Ablation studies: we first replace the weak supervision s i,j in Equation (2) 0.488 0.527 0.330 LSTM (Lowe et al., 2015) 0.485 0.527 0.320 LSTM+WS 0.519 0.559 0.359 Multi-View (Zhou et al., 2016) 0.505 0.543 0.342 Multi-View+WS 0.534 0.575 0.378 SMN 0.526 0.571 0.393 SMN+WS 0.565 0.609 0.421 everything the same as our approach but replace D with a set constructed by random sampling, denoted as model+WSrand.",
"Table 3 reports the results.",
"We can conclude that both the weak supervision and the strategy of training data construction are important to the success of the proposed learning approach.",
"Training data construction plays a more crucial role, because it involves more true positives and negatives with different semantic distances to the positives into learning.",
"Does updating the Seq2Seq model help?",
"It is well known that Seq2Seq models suffer from the \"safe response\" (Li et al., 2016a) problem, which may bias the weak supervision signals to high-frequency responses.",
"Therefore, we attempt to iteratively optimize the Seq2Seq model and the matching model and check if the matching model can be further improved.",
"Specifically, we update the Seq2Seq model every 20 mini-batches with the policy-based reinforcement learning approach proposed in (Li et al., 2016b) .",
"The reward is defined as the matching score of a context and a response given by the matching model.",
"Unfortunately, we do not observe significant improvement on the matching model.",
"The result is attributed to two factors: (1) it is difficult to significantly im-prove the Seq2Seq model with a policy gradient based method; and (2) eliminating \"safe response\" for Seq2Seq model cannot help a matching model to learn a better decision boundary.",
"How the number of response candidates affects learning: we vary the number of {y i,j } n j=1 in D in {2, 5, 10, 20} and study how the hyperparameter influences learning.",
"We study with LSTM on the STC data and SMN on the Douban data.",
"Table 4 reports the results.",
"We can see that as the number of candidates increases, the performance of the the learned models becomes better.",
"Even with 2 candidates (one from human and the other from retrieval), our approach can still improve the peformance of matching models.",
"Conclusion and Future Work Previous studies focus on architecture design for retrieval-based chatbots, but neglect the problems brought by random negative sampling in the learning process.",
"In this paper, we propose leveraging a Seq2Seq model as a weak annotator on unlabeled data to learn a matching model for response selection.",
"By this means, we can mine hard instances for matching model and give them scores with a weak annotator.",
"Experimental results on public data sets verify the effectiveness of the new learning approach.",
"In the future, we will investigate how to remove bias from the weak supervisors, and further improve the matching model performance with a semi-supervised approach."
]
}
|
{
"paper_header_number": [
"1",
"2.1",
"2.2",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4"
],
"paper_header_content": [
"Introduction",
"The Existing Learning Approach",
"A New Learning Method",
"Experiment",
"Implementation Details",
"Single-turn Response Selection",
"Multi-turn Response Selection",
"Discussion",
"Conclusion and Future Work"
]
}
|
GEM-SciDuet-train-53#paper-1093#slide-6
|
How to calculate the dynamic margin
|
We employ a Seq2Seq model to compute
The Seq2Seq model is an unsupervised model.
It can compute a conditional likelihood without human annotation.
|
We employ a Seq2Seq model to compute
The Seq2Seq model is an unsupervised model.
It can compute a conditional likelihood without human annotation.
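A minimal sketch of how the confidence score could be derived from a Seq2Seq model's token probabilities (the toy per-token probabilities and function names are assumptions for illustration; the paper uses a pre-trained attention Seq2Seq model):

```python
import math

# s_{i,j} = sum_k log p(w_k | x, w_{<k}): the Seq2Seq log-likelihood of a
# candidate response, here fed with toy per-token probabilities.
def seq2seq_log_likelihood(token_probs):
    return sum(math.log(p) for p in token_probs)

# Normalize a candidate's score against the ground-truth response's score:
# margin = max(0, s_{i,j} / s_{i,1} - 1).  Log-likelihoods are negative, so
# a less likely candidate yields a ratio > 1 and hence a larger margin.
def normalized_margin(s_ij, s_i1):
    return max(0.0, s_ij / s_i1 - 1.0)
```

The normalization against the ground-truth response's own score is what removes the per-input bias: inputs whose responses are intrinsically hard to generate do not automatically produce large margins.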
|
[] |
GEM-SciDuet-train-53#paper-1093#slide-7
|
1093
|
Learning Matching Models with Weak Supervision for Response Selection in Retrieval-based Chatbots
|
We propose a method that can leverage unlabeled data to learn a matching model for response selection in retrieval-based chatbots. The method employs a sequence-tosequence architecture (Seq2Seq) model as a weak annotator to judge the matching degree of unlabeled pairs, and then performs learning with both the weak signals and the unlabeled data. Experimental results on two public data sets indicate that matching models get significant improvements when they are learned with the proposed method.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113
],
"paper_content_text": [
"Introduction Recently, more and more attention from both academia and industry is paying to building nontask-oriented chatbots that can naturally converse with humans on any open domain topics.",
"Existing approaches can be categorized into generationbased methods (Shang et al., 2015; Vinyals and Le, 2015; Serban et al., 2016; Sordoni et al., 2015; Serban et al., 2017; Xing et al., 2018) which synthesize a response with natural language generation techniques, and retrievalbased methods (Hu et al., 2014; Lowe et al., 2015; Zhou et al., 2016; which select a response from a pre-built index.",
"In this work, we study response selection for retrieval-based chatbots, not only because retrieval-based methods can return fluent and informative responses, but also because they have been successfully applied to many real products such as the social-bot XiaoIce from Microsoft (Shum et al., 2018) and the E-commerce assistant AliMe Assist from Alibaba Group .",
"* Corresponding Author A key step to response selection is measuring the matching degree between a response candidate and an input which is either a single message (Hu et al., 2014) or a conversational context consisting of multiple utterances .",
"While existing research focuses on how to define a matching model with neural networks, little attention has been paid to how to learn such a model when few labeled data are available.",
"In practice, because human labeling is expensive and exhausting, one cannot have large scale labeled data for model training.",
"Thus, a common practice is to transform the matching problem to a classification problem with human responses as positive examples and randomly sampled ones as negative examples.",
"This strategy, however, oversimplifies the learning problem, as most of the randomly sampled responses are either far from the semantics of the messages or the contexts, or they are false negatives which pollute the training data as noise.",
"As a result, there often exists a significant gap between the performance of a model in training and the same model in practice (Wang et al., 2015; .",
"1 We propose a new method that can effectively leverage unlabeled data for learning matching models.",
"To simulate the real scenario of a retrieval-based chatbot, we construct an unlabeled data set by retrieving response candidates from an index.",
"Then, we employ a weak annotator to provide matching signals for the unlabeled inputresponse pairs, and leverage the signals to supervise the learning of matching models.",
"The weak annotator is pre-trained from large scale humanhuman conversations without any annotations, and thus a Seq2Seq model becomes a natural choice.",
"Our approach is compatible with any matching models, and falls in a teacher-student framework (Hinton et al., 2015) where the Seq2Seq model transfers the knowledge from human-human conversations to the learning process of the matching models.",
"Broadly speaking, both of (Hinton et al., 2015) and our work let a neural network supervise the learning of another network.",
"An advantage of our method is that it turns the hard zero-one labels in the existing learning paradigm to soft (weak) matching scores.",
"Hence, the model can learn a large margin between a true response with a true negative example, and the semantic distance between a true response and a false negative example is short.",
"Furthermore, due to the simulation of real scenario, harder examples can been seen in the training phase that makes the model more robust in the testing.",
"We conduct experiments on two public data sets, and experimental results on both data sets indicate that models learned with our method can significantly outperform their counterparts learned with the random sampling strategy.",
"Our contributions include: (1) proposal of a new method that can leverage unlabeled data to learn matching models for retrieval-based chatbots; and (2) empirical verification of the effectiveness of the method on public data sets.",
"Approach The Existing Learning Approach Given a data set D = {x i , (y i,1 , .",
".",
".",
", y i,n )} N i=1 with x i a message or a conversational context and y i,j a response candidate of x i , we aim to learn a matching model M(·, ·) from D. Thus, for any new pair (x, y), M(x, y) measures the matching degree between x and y.",
"To obtain a matching model, one has to deal with two problems: (1) how to define M(·, ·); and (2) how to perform learning.",
"Existing work focuses on Problem (1) where state-of-the-art methods include dual LSTM (Lowe et al., 2015) , Multi-View LSTM (Zhou et al., 2016) , CNN , and Sequential Matching Network , but adopts a simple strategy for Problem (2): ∀x i , a human response is designated as y i,1 with a label 1, and some randomly sampled responses are treated as (y i,2 , .",
".",
".",
", y i,n ) with labels 0.",
"M(·, ·) is then learned by maximizing the following objective: N i=1 n j=1 [ri,j log(M(xi, yi,j)) + (1 − ri,j) log(1 − M(xi, yi,j))] , (1) where r i,j ∈ {0, 1} is a label.",
"While matching accuracy can be improved by carefully designing M(·, ·) , the bottleneck becomes the learning approach which suffers obvious problems: most of the randomly sampled y i,j are semantically far from x i which may cause an undesired decision boundary at the end of optimization; some y i,j are false negatives.",
"As hard zero-one labels are adopted in Equation (1) , these false negatives may mislead the learning algorithm.",
"The problems remind us that besides good architectures of matching models, we also need a good approach to learn such models from data.",
"A New Learning Method As human labeling is infeasible when training complicated neural networks, we propose a new method that can leverage unlabeled data to learn a matching model.",
"Specifically, instead of random sampling, we construct D by retrieving (y i,2 , .",
".",
".",
", y i,n ) from an index (y i,1 is the human response of x i ).",
"By this means, some y i,j are true positives, and some are negatives but semantically close to x i .",
"After that, we employ a weak annotator G(·, ·) to indicate the matching degree of every (x i , y i,j ) in D as weak supervision signals.",
"Let s ij = G(x i , y i,j ) , then the learning approach can be formulated as: arg min M(·,·) N i=1 n j=1 max(0, M(xi, yi,j) − M(xi, yi,1) + s i,j ), (2) where s ij is a normalized weak signal defined as max(0, s i,j s i,1 − 1).",
"The normalization here eliminates bias from different x i .",
"Objective (2) encourages a large margin between the matching of an input and its human response and the matching of the input and a negative response judged by G(·, ·) (as will be seen later, s i,j s i,1 > 1).",
"The learning approach simulates how we build a matching model in a retrievalbased chatbot: given {x i }, some response candidates are first retrieved from an index.",
"Then human annotators are hired to judge the matching degree of each pair.",
"Finally, both the data and the human labels are fed to an optimization program for model training.",
"Here, we replace the expensive human labels with cheap judgment from G(·, ·).",
"We define G(·, ·) as a sequence-to-sequence architecture (Vinyals and Le, 2015) with an attention mechanism (Bahdanau et al., 2015) , and pre-train it with large amounts of human-human conversa-tion data.",
"The Seq2Seq model can capture the semantic correspondence between an input and a response, and then transfer the knowledge to the learning of a matching model in the optimization of (2).",
"s ij is then defined as the likelihood of generating y i,j from x i : sij = k log[p(w y i,j ,k , |xi, w y i,j ,l<k )], (3) where w y i,j ,k is the k-th word of y i,j and w y i,j ,l<k is the word sequence before w y i,j ,k .",
"Since negative examples are retrieved by a search engine, the oversimplification problem of the negative sampling approach can be partially mitigated.",
"We leverage a weak annotator to assign a score for each example to distinguish false negative examples and true negative examples.",
"Equation (2) turns the hard zero-one labels in Equation (1) to soft matching degrees, and thus our method encourages the model to be more confident to classify a response with a high s i,j score as a negative one.",
"In this way, we can avoid false negative examples and true negative examples are treated equally during training, and update the model toward a correct direction.",
"It is noteworthy that although our approach also involves an interaction between a generator and a discriminator, it is different from the GANs (Goodfellow et al., 2014) in principle.",
"GANs try to learn a better generator via an adversarial process, while our approach aims to improve the discriminator with supervision from the generator, which also differentiates it from the recent work on transferring knowledge from a discriminator to a generative visual dialog model (Lu et al., 2017) .",
"Our approach is also different from those semi-supervised approaches in the teacher-student framework (Dehghani et al., 2017a,b) , as there are no labeled data in learning.",
"Experiment We conduct experiments on two public data sets: STC data set (Wang et al., 2013) for single-turn response selection and Douban Conversation Corpus for multi-turn response selection.",
"Note that we do not test the proposed approach on Ubuntu Corpus (Lowe et al., 2015) , because both training and test data in the corpus are constructed by random sampling.",
"Implementation Details We implement our approach with TensorFlow.",
"In both experiments, the same Seq2Seq model is exploited which is trained with 3.3 million inputresponse pairs extracted from the training set of the Douban data.",
"Each input is a concatenation of consecutive utterances in a context, and the response is the next turn ({u <i }, u i ).",
"We set the vocabulary size as 30, 000, the hidden vector size as 1024, and the embedding size as 620.",
"Optimization is conducted with stochastic gradient descent (Bottou, 2010) , and is terminated when perplexity on a validation set (170k pairs) does not decrease in 3 consecutive epochs.",
"In optimization of Objective (2), we initialize M(·, ·) with a model trained under Objective (1) with the (random) negative sampling strategy, and fix word embeddings throughout training.",
"This can stabilize the learning process.",
"The learning rate is fixed as 0.1.",
"Single-turn Response Selection Experiment settings: in the STC (stands for Short Text Conversation) data set, the task is to select a proper response for a post in Weibo 2 .",
"The training set contains 4.8 million post-response (true response) pairs.",
"The test set consists of 422 posts with each one associated with around 30 responses labeled by human annotators in \"good\" and \"bad\".",
"In total, there are 12, 402 labeled pairs in the test data.",
"Following (Wang et al., 2013 (Wang et al., , 2015 , we combine the score from a matching model with TF-IDF based cosine similarity using RankSVM whose parameters are chosen by 5-fold cross validation.",
"Precision at position 1 (P@1) is employed as an evaluation metric.",
"In addition to the models compared on the data in the existing literatures, we also implement dual LSTM (Lowe et al., 2015) as a baseline.",
"As case studies, we learn a dual LSTM and an CNN (Hu et al., 2014) with the proposed approach, and denote them as LSTM+WS (Weak Supervision) and CNN+WS, respectively.",
"When constructing D, we build an index with the training data using Lucene 3 and retrieve 9 candidates (i.e., {y i,2 , .",
".",
".",
", y i,n }) for each post with the inline algorithm of the index.",
"We form a validation set by randomly sampling 10 thousand posts associated with the responses from D (human response is positive and others are treated as negative).",
"Results: Table 1 reports the results.",
"We can see P@1 TFIDF (Wang et al., 2013) 0.574 +Translation (Wang et al., 2013) 0.587 +WordEmbedding 0.579 +DeepMatchtopic 0.587 +DeepMatchtree (Wang et al., 2015) 0.608 +LSTM (Lowe et al., 2015) 0.592 +LSTM+WS 0.616 +CNN (Hu et al., 2014) 0.585 +CNN+WS 0.604 Table 1 : Results on STC that CNN and LSTM consistently get improved when learned with the proposed approach, and the improvements over the models learned with random sampling are statistically significant (ttest with p-value < 0.01).",
"LSTM+WS even surpasses the best performing model, DeepMatch tree , reported on this data.",
"These results indicate the usefulness of the proposed approach in practice.",
"One can expect improvements to models like DeepMatch tree with the new learning method.",
"We leave the verification as future work.",
"Multi-turn Response Selection Experiment settings: Douban Conversation Corpus contains 0.5 million context-response (true response) pairs for training and 1000 contexts for test.",
"In the test set, every context has 10 response candidates, and each of the response has a label \"good\" or \"bad\" judged by human annotators.",
"Mean average precision (MAP) (Baeza-Yates et al., 1999) , mean reciprocal rank (MRR) (Voorhees, 1999) , and precision at position 1 (P@1) are employed as evaluation metrics.",
"We copy the numbers reported in for the baseline models, and learn LSTM, Multi-View, and SMN with the proposed approach.",
"We build an index with the training data, and retrieve 9 candidates with the method in for each context when constructing D. 10 thousand pairs are sampled from D as a validation set.",
"Results: Table 2 reports the results.",
"Consistent with the results on the STC data, every model (+WS one) gets improved with the new learning approach, and the improvements are statistically significant (t-test with p-value < 0.01).",
"Discussion Ablation studies: we first replace the weak supervision s i,j in Equation (2) 0.488 0.527 0.330 LSTM (Lowe et al., 2015) 0.485 0.527 0.320 LSTM+WS 0.519 0.559 0.359 Multi-View (Zhou et al., 2016) 0.505 0.543 0.342 Multi-View+WS 0.534 0.575 0.378 SMN 0.526 0.571 0.393 SMN+WS 0.565 0.609 0.421 everything the same as our approach but replace D with a set constructed by random sampling, denoted as model+WSrand.",
"Table 3 reports the results.",
"We can conclude that both the weak supervision and the strategy of training data construction are important to the success of the proposed learning approach.",
"Training data construction plays a more crucial role, because it involves more true positives and negatives with different semantic distances to the positives into learning.",
"Does updating the Seq2Seq model help?",
"It is well known that Seq2Seq models suffer from the \"safe response\" (Li et al., 2016a) problem, which may bias the weak supervision signals to high-frequency responses.",
"Therefore, we attempt to iteratively optimize the Seq2Seq model and the matching model and check if the matching model can be further improved.",
"Specifically, we update the Seq2Seq model every 20 mini-batches with the policy-based reinforcement learning approach proposed in (Li et al., 2016b) .",
"The reward is defined as the matching score of a context and a response given by the matching model.",
"Unfortunately, we do not observe significant improvement on the matching model.",
"The result is attributed to two factors: (1) it is difficult to significantly im-prove the Seq2Seq model with a policy gradient based method; and (2) eliminating \"safe response\" for Seq2Seq model cannot help a matching model to learn a better decision boundary.",
"How the number of response candidates affects learning: we vary the number of {y i,j } n j=1 in D in {2, 5, 10, 20} and study how the hyperparameter influences learning.",
"We study with LSTM on the STC data and SMN on the Douban data.",
"Table 4 reports the results.",
"We can see that as the number of candidates increases, the performance of the the learned models becomes better.",
"Even with 2 candidates (one from human and the other from retrieval), our approach can still improve the peformance of matching models.",
"Conclusion and Future Work Previous studies focus on architecture design for retrieval-based chatbots, but neglect the problems brought by random negative sampling in the learning process.",
"In this paper, we propose leveraging a Seq2Seq model as a weak annotator on unlabeled data to learn a matching model for response selection.",
"By this means, we can mine hard instances for matching model and give them scores with a weak annotator.",
"Experimental results on public data sets verify the effectiveness of the new learning approach.",
"In the future, we will investigate how to remove bias from the weak supervisors, and further improve the matching model performance with a semi-supervised approach."
]
}
|
{
"paper_header_number": [
"1",
"2.1",
"2.2",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4"
],
"paper_header_content": [
"Introduction",
"The Existing Learning Approach",
"A New Learning Method",
"Experiment",
"Implementation Details",
"Single-turn Response Selection",
"Multi-turn Response Selection",
"Discussion",
"Conclusion and Future Work"
]
}
|
GEM-SciDuet-train-53#paper-1093#slide-7
|
A new training method
|
Pre-train the matching model with negative sampling and cross entropy loss.
Given a (Q,R) pair, retrieve N instances from a pre-defined index.
Update the designed model with the dynamic hinge loss.
Test model on human annotation data.
The pre-training process enables the matching model to distinguish semantically far away responses.
1. Oversimplification problem of the negative sampling approach can be partially mitigated. 2. We can avoid false negative examples, and true negative examples are treated equally during training
|
Pre-train the matching model with negative sampling and cross entropy loss.
Given a (Q,R) pair, retrieve N instances from a pre-defined index.
Update the designed model with the dynamic hinge loss.
Test model on human annotation data.
The pre-training process enables the matching model to distinguish semantically far away responses.
1. Oversimplification problem of the negative sampling approach can be partially mitigated. 2. We can avoid false negative examples, and true negative examples are treated equally during training
|
[] |
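A minimal runnable sketch of the dynamic hinge loss with weak supervision described in the row above (Equation (2) of the paper content). The function and variable names are illustrative assumptions, not the authors' released code; matching scores are plain floats rather than tensors.

```python
def normalized_weak_signal(s_ij, s_i1):
    """Normalize a Seq2Seq log-likelihood score against the human response's
    score, as in Equation (2): s'_{i,j} = max(0, s_{i,j} / s_{i,1} - 1)."""
    return max(0.0, s_ij / s_i1 - 1.0)


def weak_supervision_hinge_loss(pos_score, neg_scores, weak_signals):
    """Sum over retrieved candidates of max(0, M(x, y_j) - M(x, y_1) + s'_{i,j}).

    pos_score:    matching score M(x, y_1) of the human response.
    neg_scores:   matching scores of the retrieved candidates y_2 .. y_n.
    weak_signals: normalized weak signals s'_{i,j} for those candidates.
    """
    return sum(max(0.0, m - pos_score + s)
               for m, s in zip(neg_scores, weak_signals))
```

Because the weak signals are ratios of log-likelihoods, a retrieved candidate that the Seq2Seq annotator judges far from the input gets a large s'_{i,j} and thus a large required margin, while a likely false negative gets a signal near zero and is barely pushed away.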
GEM-SciDuet-train-53#paper-1093#slide-8
|
1093
|
Learning Matching Models with Weak Supervision for Response Selection in Retrieval-based Chatbots
|
We propose a method that can leverage unlabeled data to learn a matching model for response selection in retrieval-based chatbots. The method employs a sequence-tosequence architecture (Seq2Seq) model as a weak annotator to judge the matching degree of unlabeled pairs, and then performs learning with both the weak signals and the unlabeled data. Experimental results on two public data sets indicate that matching models get significant improvements when they are learned with the proposed method.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113
],
"paper_content_text": [
"Introduction Recently, more and more attention from both academia and industry is paying to building nontask-oriented chatbots that can naturally converse with humans on any open domain topics.",
"Existing approaches can be categorized into generationbased methods (Shang et al., 2015; Vinyals and Le, 2015; Serban et al., 2016; Sordoni et al., 2015; Serban et al., 2017; Xing et al., 2018) which synthesize a response with natural language generation techniques, and retrievalbased methods (Hu et al., 2014; Lowe et al., 2015; Zhou et al., 2016; which select a response from a pre-built index.",
"In this work, we study response selection for retrieval-based chatbots, not only because retrieval-based methods can return fluent and informative responses, but also because they have been successfully applied to many real products such as the social-bot XiaoIce from Microsoft (Shum et al., 2018) and the E-commerce assistant AliMe Assist from Alibaba Group .",
"* Corresponding Author A key step to response selection is measuring the matching degree between a response candidate and an input which is either a single message (Hu et al., 2014) or a conversational context consisting of multiple utterances .",
"While existing research focuses on how to define a matching model with neural networks, little attention has been paid to how to learn such a model when few labeled data are available.",
"In practice, because human labeling is expensive and exhausting, one cannot have large scale labeled data for model training.",
"Thus, a common practice is to transform the matching problem to a classification problem with human responses as positive examples and randomly sampled ones as negative examples.",
"This strategy, however, oversimplifies the learning problem, as most of the randomly sampled responses are either far from the semantics of the messages or the contexts, or they are false negatives which pollute the training data as noise.",
"As a result, there often exists a significant gap between the performance of a model in training and the same model in practice (Wang et al., 2015; .",
"1 We propose a new method that can effectively leverage unlabeled data for learning matching models.",
"To simulate the real scenario of a retrieval-based chatbot, we construct an unlabeled data set by retrieving response candidates from an index.",
"Then, we employ a weak annotator to provide matching signals for the unlabeled inputresponse pairs, and leverage the signals to supervise the learning of matching models.",
"The weak annotator is pre-trained from large scale humanhuman conversations without any annotations, and thus a Seq2Seq model becomes a natural choice.",
"Our approach is compatible with any matching models, and falls in a teacher-student framework (Hinton et al., 2015) where the Seq2Seq model transfers the knowledge from human-human conversations to the learning process of the matching models.",
"Broadly speaking, both of (Hinton et al., 2015) and our work let a neural network supervise the learning of another network.",
"An advantage of our method is that it turns the hard zero-one labels in the existing learning paradigm to soft (weak) matching scores.",
"Hence, the model can learn a large margin between a true response with a true negative example, and the semantic distance between a true response and a false negative example is short.",
"Furthermore, due to the simulation of real scenario, harder examples can been seen in the training phase that makes the model more robust in the testing.",
"We conduct experiments on two public data sets, and experimental results on both data sets indicate that models learned with our method can significantly outperform their counterparts learned with the random sampling strategy.",
"Our contributions include: (1) proposal of a new method that can leverage unlabeled data to learn matching models for retrieval-based chatbots; and (2) empirical verification of the effectiveness of the method on public data sets.",
"Approach The Existing Learning Approach Given a data set D = {x i , (y i,1 , .",
".",
".",
", y i,n )} N i=1 with x i a message or a conversational context and y i,j a response candidate of x i , we aim to learn a matching model M(·, ·) from D. Thus, for any new pair (x, y), M(x, y) measures the matching degree between x and y.",
"To obtain a matching model, one has to deal with two problems: (1) how to define M(·, ·); and (2) how to perform learning.",
"Existing work focuses on Problem (1) where state-of-the-art methods include dual LSTM (Lowe et al., 2015) , Multi-View LSTM (Zhou et al., 2016) , CNN , and Sequential Matching Network , but adopts a simple strategy for Problem (2): ∀x i , a human response is designated as y i,1 with a label 1, and some randomly sampled responses are treated as (y i,2 , .",
".",
".",
", y i,n ) with labels 0.",
"M(·, ·) is then learned by maximizing the following objective: N i=1 n j=1 [ri,j log(M(xi, yi,j)) + (1 − ri,j) log(1 − M(xi, yi,j))] , (1) where r i,j ∈ {0, 1} is a label.",
"While matching accuracy can be improved by carefully designing M(·, ·) , the bottleneck becomes the learning approach which suffers obvious problems: most of the randomly sampled y i,j are semantically far from x i which may cause an undesired decision boundary at the end of optimization; some y i,j are false negatives.",
"As hard zero-one labels are adopted in Equation (1) , these false negatives may mislead the learning algorithm.",
"The problems remind us that besides good architectures of matching models, we also need a good approach to learn such models from data.",
"A New Learning Method As human labeling is infeasible when training complicated neural networks, we propose a new method that can leverage unlabeled data to learn a matching model.",
"Specifically, instead of random sampling, we construct D by retrieving (y i,2 , .",
".",
".",
", y i,n ) from an index (y i,1 is the human response of x i ).",
"By this means, some y i,j are true positives, and some are negatives but semantically close to x i .",
"After that, we employ a weak annotator G(·, ·) to indicate the matching degree of every (x i , y i,j ) in D as weak supervision signals.",
"Let s ij = G(x i , y i,j ) , then the learning approach can be formulated as: arg min M(·,·) N i=1 n j=1 max(0, M(xi, yi,j) − M(xi, yi,1) + s i,j ), (2) where s ij is a normalized weak signal defined as max(0, s i,j s i,1 − 1).",
"The normalization here eliminates bias from different x i .",
"Objective (2) encourages a large margin between the matching of an input and its human response and the matching of the input and a negative response judged by G(·, ·) (as will be seen later, s i,j s i,1 > 1).",
"The learning approach simulates how we build a matching model in a retrievalbased chatbot: given {x i }, some response candidates are first retrieved from an index.",
"Then human annotators are hired to judge the matching degree of each pair.",
"Finally, both the data and the human labels are fed to an optimization program for model training.",
"Here, we replace the expensive human labels with cheap judgment from G(·, ·).",
"We define G(·, ·) as a sequence-to-sequence architecture (Vinyals and Le, 2015) with an attention mechanism (Bahdanau et al., 2015) , and pre-train it with large amounts of human-human conversa-tion data.",
"The Seq2Seq model can capture the semantic correspondence between an input and a response, and then transfer the knowledge to the learning of a matching model in the optimization of (2).",
"s ij is then defined as the likelihood of generating y i,j from x i : sij = k log[p(w y i,j ,k , |xi, w y i,j ,l<k )], (3) where w y i,j ,k is the k-th word of y i,j and w y i,j ,l<k is the word sequence before w y i,j ,k .",
"Since negative examples are retrieved by a search engine, the oversimplification problem of the negative sampling approach can be partially mitigated.",
"We leverage a weak annotator to assign a score for each example to distinguish false negative examples and true negative examples.",
"Equation (2) turns the hard zero-one labels in Equation (1) to soft matching degrees, and thus our method encourages the model to be more confident to classify a response with a high s i,j score as a negative one.",
"In this way, we can avoid false negative examples and true negative examples are treated equally during training, and update the model toward a correct direction.",
"It is noteworthy that although our approach also involves an interaction between a generator and a discriminator, it is different from the GANs (Goodfellow et al., 2014) in principle.",
"GANs try to learn a better generator via an adversarial process, while our approach aims to improve the discriminator with supervision from the generator, which also differentiates it from the recent work on transferring knowledge from a discriminator to a generative visual dialog model (Lu et al., 2017) .",
"Our approach is also different from those semi-supervised approaches in the teacher-student framework (Dehghani et al., 2017a,b) , as there are no labeled data in learning.",
"Experiment We conduct experiments on two public data sets: STC data set (Wang et al., 2013) for single-turn response selection and Douban Conversation Corpus for multi-turn response selection.",
"Note that we do not test the proposed approach on Ubuntu Corpus (Lowe et al., 2015) , because both training and test data in the corpus are constructed by random sampling.",
"Implementation Details We implement our approach with TensorFlow.",
"In both experiments, the same Seq2Seq model is exploited which is trained with 3.3 million inputresponse pairs extracted from the training set of the Douban data.",
"Each input is a concatenation of consecutive utterances in a context, and the response is the next turn ({u <i }, u i ).",
"We set the vocabulary size as 30, 000, the hidden vector size as 1024, and the embedding size as 620.",
"Optimization is conducted with stochastic gradient descent (Bottou, 2010) , and is terminated when perplexity on a validation set (170k pairs) does not decrease in 3 consecutive epochs.",
"In optimization of Objective (2), we initialize M(·, ·) with a model trained under Objective (1) with the (random) negative sampling strategy, and fix word embeddings throughout training.",
"This can stabilize the learning process.",
"The learning rate is fixed as 0.1.",
"Single-turn Response Selection Experiment settings: in the STC (stands for Short Text Conversation) data set, the task is to select a proper response for a post in Weibo 2 .",
"The training set contains 4.8 million post-response (true response) pairs.",
"The test set consists of 422 posts with each one associated with around 30 responses labeled by human annotators in \"good\" and \"bad\".",
"In total, there are 12, 402 labeled pairs in the test data.",
"Following (Wang et al., 2013 (Wang et al., , 2015 , we combine the score from a matching model with TF-IDF based cosine similarity using RankSVM whose parameters are chosen by 5-fold cross validation.",
"Precision at position 1 (P@1) is employed as an evaluation metric.",
"In addition to the models compared on the data in the existing literatures, we also implement dual LSTM (Lowe et al., 2015) as a baseline.",
"As case studies, we learn a dual LSTM and an CNN (Hu et al., 2014) with the proposed approach, and denote them as LSTM+WS (Weak Supervision) and CNN+WS, respectively.",
"When constructing D, we build an index with the training data using Lucene 3 and retrieve 9 candidates (i.e., {y i,2 , .",
".",
".",
", y i,n }) for each post with the inline algorithm of the index.",
"We form a validation set by randomly sampling 10 thousand posts associated with the responses from D (human response is positive and others are treated as negative).",
"Results: Table 1 reports the results.",
"We can see P@1 TFIDF (Wang et al., 2013) 0.574 +Translation (Wang et al., 2013) 0.587 +WordEmbedding 0.579 +DeepMatchtopic 0.587 +DeepMatchtree (Wang et al., 2015) 0.608 +LSTM (Lowe et al., 2015) 0.592 +LSTM+WS 0.616 +CNN (Hu et al., 2014) 0.585 +CNN+WS 0.604 Table 1 : Results on STC that CNN and LSTM consistently get improved when learned with the proposed approach, and the improvements over the models learned with random sampling are statistically significant (ttest with p-value < 0.01).",
"LSTM+WS even surpasses the best performing model, DeepMatch tree , reported on this data.",
"These results indicate the usefulness of the proposed approach in practice.",
"One can expect improvements to models like DeepMatch tree with the new learning method.",
"We leave the verification as future work.",
"Multi-turn Response Selection Experiment settings: Douban Conversation Corpus contains 0.5 million context-response (true response) pairs for training and 1000 contexts for test.",
"In the test set, every context has 10 response candidates, and each of the response has a label \"good\" or \"bad\" judged by human annotators.",
"Mean average precision (MAP) (Baeza-Yates et al., 1999) , mean reciprocal rank (MRR) (Voorhees, 1999) , and precision at position 1 (P@1) are employed as evaluation metrics.",
"We copy the numbers reported in for the baseline models, and learn LSTM, Multi-View, and SMN with the proposed approach.",
"We build an index with the training data, and retrieve 9 candidates with the method in for each context when constructing D. 10 thousand pairs are sampled from D as a validation set.",
"Results: Table 2 reports the results.",
"Consistent with the results on the STC data, every model (+WS one) gets improved with the new learning approach, and the improvements are statistically significant (t-test with p-value < 0.01).",
"Discussion Ablation studies: we first replace the weak supervision s i,j in Equation (2) 0.488 0.527 0.330 LSTM (Lowe et al., 2015) 0.485 0.527 0.320 LSTM+WS 0.519 0.559 0.359 Multi-View (Zhou et al., 2016) 0.505 0.543 0.342 Multi-View+WS 0.534 0.575 0.378 SMN 0.526 0.571 0.393 SMN+WS 0.565 0.609 0.421 everything the same as our approach but replace D with a set constructed by random sampling, denoted as model+WSrand.",
"Table 3 reports the results.",
"We can conclude that both the weak supervision and the strategy of training data construction are important to the success of the proposed learning approach.",
"Training data construction plays a more crucial role, because it involves more true positives and negatives with different semantic distances to the positives into learning.",
"Does updating the Seq2Seq model help?",
"It is well known that Seq2Seq models suffer from the \"safe response\" (Li et al., 2016a) problem, which may bias the weak supervision signals to high-frequency responses.",
"Therefore, we attempt to iteratively optimize the Seq2Seq model and the matching model and check if the matching model can be further improved.",
"Specifically, we update the Seq2Seq model every 20 mini-batches with the policy-based reinforcement learning approach proposed in (Li et al., 2016b) .",
"The reward is defined as the matching score of a context and a response given by the matching model.",
"Unfortunately, we do not observe significant improvement on the matching model.",
"The result is attributed to two factors: (1) it is difficult to significantly im-prove the Seq2Seq model with a policy gradient based method; and (2) eliminating \"safe response\" for Seq2Seq model cannot help a matching model to learn a better decision boundary.",
"How the number of response candidates affects learning: we vary the number of {y i,j } n j=1 in D in {2, 5, 10, 20} and study how the hyperparameter influences learning.",
"We study with LSTM on the STC data and SMN on the Douban data.",
"Table 4 reports the results.",
"We can see that as the number of candidates increases, the performance of the the learned models becomes better.",
"Even with 2 candidates (one from human and the other from retrieval), our approach can still improve the peformance of matching models.",
"Conclusion and Future Work Previous studies focus on architecture design for retrieval-based chatbots, but neglect the problems brought by random negative sampling in the learning process.",
"In this paper, we propose leveraging a Seq2Seq model as a weak annotator on unlabeled data to learn a matching model for response selection.",
"By this means, we can mine hard instances for matching model and give them scores with a weak annotator.",
"Experimental results on public data sets verify the effectiveness of the new learning approach.",
"In the future, we will investigate how to remove bias from the weak supervisors, and further improve the matching model performance with a semi-supervised approach."
]
}
|
{
"paper_header_number": [
"1",
"2.1",
"2.2",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4"
],
"paper_header_content": [
"Introduction",
"The Existing Learning Approach",
"A New Learning Method",
"Experiment",
"Implementation Details",
"Single-turn Response Selection",
"Multi-turn Response Selection",
"Discussion",
"Conclusion and Future Work"
]
}
|
GEM-SciDuet-train-53#paper-1093#slide-8
|
Dataset
|
Over 4 million post-response pairs (true response) in Weibo for training.
The test set consists of 422 posts with each one associated with around 30 responses labeled by human annotators in good and bad.
Douban Conversation Corpus (Wu et al., 2017)
0.5 million context-response (true response) pairs for training
In the test set, every context has 10 response candidates, and each response has a label good or bad judged by human annotators.
|
Over 4 million post-response pairs (true response) in Weibo for training.
The test set consists of 422 posts with each one associated with around 30 responses labeled by human annotators in good and bad.
Douban Conversation Corpus (Wu et al., 2017)
0.5 million context-response (true response) pairs for training
In the test set, every context has 10 response candidates, and each response has a label good or bad judged by human annotators.
|
[] |
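The ranking metrics reported on the Douban data above (MAP, MRR, P@1) can be computed from relevance-label lists sorted by model score. This is a generic sketch of the standard metric definitions, not code from the paper; names are illustrative.

```python
def precision_at_1(labels):
    """labels: 0/1 relevance of candidates, ordered by model score (descending)."""
    return 1.0 if labels and labels[0] == 1 else 0.0


def reciprocal_rank(labels):
    """1 / rank of the first relevant candidate, or 0 if none is relevant."""
    for rank, lab in enumerate(labels, start=1):
        if lab == 1:
            return 1.0 / rank
    return 0.0


def average_precision(labels):
    """Mean of precision values at the ranks of relevant candidates."""
    hits, score = 0, 0.0
    for rank, lab in enumerate(labels, start=1):
        if lab == 1:
            hits += 1
            score += hits / rank
    return score / hits if hits else 0.0


def evaluate(ranked_label_lists):
    """Return (MAP, MRR, P@1) averaged over contexts; each inner list holds the
    labels of one context's response candidates sorted by matching score."""
    n = len(ranked_label_lists)
    return (sum(average_precision(l) for l in ranked_label_lists) / n,
            sum(reciprocal_rank(l) for l in ranked_label_lists) / n,
            sum(precision_at_1(l) for l in ranked_label_lists) / n)
```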
GEM-SciDuet-train-53#paper-1093#slide-10
|
1093
|
Learning Matching Models with Weak Supervision for Response Selection in Retrieval-based Chatbots
|
We propose a method that can leverage unlabeled data to learn a matching model for response selection in retrieval-based chatbots. The method employs a sequence-tosequence architecture (Seq2Seq) model as a weak annotator to judge the matching degree of unlabeled pairs, and then performs learning with both the weak signals and the unlabeled data. Experimental results on two public data sets indicate that matching models get significant improvements when they are learned with the proposed method.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113
],
"paper_content_text": [
"Introduction Recently, more and more attention from both academia and industry is paying to building nontask-oriented chatbots that can naturally converse with humans on any open domain topics.",
"Existing approaches can be categorized into generationbased methods (Shang et al., 2015; Vinyals and Le, 2015; Serban et al., 2016; Sordoni et al., 2015; Serban et al., 2017; Xing et al., 2018) which synthesize a response with natural language generation techniques, and retrievalbased methods (Hu et al., 2014; Lowe et al., 2015; Zhou et al., 2016; which select a response from a pre-built index.",
"In this work, we study response selection for retrieval-based chatbots, not only because retrieval-based methods can return fluent and informative responses, but also because they have been successfully applied to many real products such as the social-bot XiaoIce from Microsoft (Shum et al., 2018) and the E-commerce assistant AliMe Assist from Alibaba Group .",
"* Corresponding Author A key step to response selection is measuring the matching degree between a response candidate and an input which is either a single message (Hu et al., 2014) or a conversational context consisting of multiple utterances .",
"While existing research focuses on how to define a matching model with neural networks, little attention has been paid to how to learn such a model when few labeled data are available.",
"In practice, because human labeling is expensive and exhausting, one cannot have large scale labeled data for model training.",
"Thus, a common practice is to transform the matching problem to a classification problem with human responses as positive examples and randomly sampled ones as negative examples.",
"This strategy, however, oversimplifies the learning problem, as most of the randomly sampled responses are either far from the semantics of the messages or the contexts, or they are false negatives which pollute the training data as noise.",
"As a result, there often exists a significant gap between the performance of a model in training and the same model in practice (Wang et al., 2015; .",
"1 We propose a new method that can effectively leverage unlabeled data for learning matching models.",
"To simulate the real scenario of a retrieval-based chatbot, we construct an unlabeled data set by retrieving response candidates from an index.",
"Then, we employ a weak annotator to provide matching signals for the unlabeled inputresponse pairs, and leverage the signals to supervise the learning of matching models.",
"The weak annotator is pre-trained from large scale humanhuman conversations without any annotations, and thus a Seq2Seq model becomes a natural choice.",
"Our approach is compatible with any matching models, and falls in a teacher-student framework (Hinton et al., 2015) where the Seq2Seq model transfers the knowledge from human-human conversations to the learning process of the matching models.",
"Broadly speaking, both of (Hinton et al., 2015) and our work let a neural network supervise the learning of another network.",
"An advantage of our method is that it turns the hard zero-one labels in the existing learning paradigm to soft (weak) matching scores.",
"Hence, the model can learn a large margin between a true response with a true negative example, and the semantic distance between a true response and a false negative example is short.",
"Furthermore, due to the simulation of real scenario, harder examples can been seen in the training phase that makes the model more robust in the testing.",
"We conduct experiments on two public data sets, and experimental results on both data sets indicate that models learned with our method can significantly outperform their counterparts learned with the random sampling strategy.",
"Our contributions include: (1) proposal of a new method that can leverage unlabeled data to learn matching models for retrieval-based chatbots; and (2) empirical verification of the effectiveness of the method on public data sets.",
"Approach The Existing Learning Approach Given a data set D = {x i , (y i,1 , .",
".",
".",
", y i,n )} N i=1 with x i a message or a conversational context and y i,j a response candidate of x i , we aim to learn a matching model M(·, ·) from D. Thus, for any new pair (x, y), M(x, y) measures the matching degree between x and y.",
"To obtain a matching model, one has to deal with two problems: (1) how to define M(·, ·); and (2) how to perform learning.",
"Existing work focuses on Problem (1) where state-of-the-art methods include dual LSTM (Lowe et al., 2015) , Multi-View LSTM (Zhou et al., 2016) , CNN , and Sequential Matching Network , but adopts a simple strategy for Problem (2): ∀x i , a human response is designated as y i,1 with a label 1, and some randomly sampled responses are treated as (y i,2 , .",
".",
".",
", y_{i,n}) with labels 0.",
"M(·, ·) is then learned by maximizing the following objective: \\sum_{i=1}^{N} \\sum_{j=1}^{n} [r_{i,j} \\log(M(x_i, y_{i,j})) + (1 − r_{i,j}) \\log(1 − M(x_i, y_{i,j}))], (1) where r_{i,j} ∈ {0, 1} is a label.",
"While matching accuracy can be improved by carefully designing M(·, ·), the bottleneck becomes the learning approach, which suffers obvious problems: most of the randomly sampled y_{i,j} are semantically far from x_i, which may cause an undesired decision boundary at the end of optimization; and some y_{i,j} are false negatives.",
"As hard zero-one labels are adopted in Equation (1) , these false negatives may mislead the learning algorithm.",
"The problems remind us that besides good architectures of matching models, we also need a good approach to learn such models from data.",
"A New Learning Method As human labeling is infeasible when training complicated neural networks, we propose a new method that can leverage unlabeled data to learn a matching model.",
"Specifically, instead of random sampling, we construct D by retrieving (y_{i,2}, .",
".",
".",
", y_{i,n}) from an index (y_{i,1} is the human response of x_i).",
"By this means, some y_{i,j} are true positives, and some are negatives but semantically close to x_i.",
"After that, we employ a weak annotator G(·, ·) to indicate the matching degree of every (x_i, y_{i,j}) in D as weak supervision signals.",
"Let s_{i,j} = G(x_i, y_{i,j}); the learning approach can then be formulated as: arg min_{M(·,·)} \\sum_{i=1}^{N} \\sum_{j=1}^{n} max(0, M(x_i, y_{i,j}) − M(x_i, y_{i,1}) + s̄_{i,j}), (2) where s̄_{i,j} is a normalized weak signal defined as max(0, s_{i,j}/s_{i,1} − 1).",
"The normalization here eliminates bias from different x_i.",
"Objective (2) encourages a large margin between the matching of an input with its human response and the matching of the input with a negative response judged by G(·, ·) (as will be seen later, s_{i,j}/s_{i,1} > 1).",
"The learning approach simulates how we build a matching model in a retrievalbased chatbot: given {x i }, some response candidates are first retrieved from an index.",
"Then human annotators are hired to judge the matching degree of each pair.",
"Finally, both the data and the human labels are fed to an optimization program for model training.",
"Here, we replace the expensive human labels with cheap judgment from G(·, ·).",
"We define G(·, ·) as a sequence-to-sequence architecture (Vinyals and Le, 2015) with an attention mechanism (Bahdanau et al., 2015) , and pre-train it with large amounts of human-human conversation data.",
"The Seq2Seq model can capture the semantic correspondence between an input and a response, and then transfer the knowledge to the learning of a matching model in the optimization of (2).",
"s_{i,j} is then defined as the log-likelihood of generating y_{i,j} from x_i: s_{i,j} = \\sum_k \\log[p(w_{y_{i,j},k} | x_i, w_{y_{i,j},l<k})], (3) where w_{y_{i,j},k} is the k-th word of y_{i,j} and w_{y_{i,j},l<k} is the word sequence before w_{y_{i,j},k}.",
"Since negative examples are retrieved by a search engine, the oversimplification problem of the negative sampling approach can be partially mitigated.",
"We leverage a weak annotator to assign a score for each example to distinguish false negative examples and true negative examples.",
"Equation (2) turns the hard zero-one labels in Equation (1) into soft matching degrees, and thus our method encourages the model to be more confident in classifying a response with a high s̄_{i,j} score as a negative one.",
"In this way, we avoid treating false negative examples and true negative examples equally during training, and update the model in the correct direction.",
"It is noteworthy that although our approach also involves an interaction between a generator and a discriminator, it is different from the GANs (Goodfellow et al., 2014) in principle.",
"GANs try to learn a better generator via an adversarial process, while our approach aims to improve the discriminator with supervision from the generator, which also differentiates it from the recent work on transferring knowledge from a discriminator to a generative visual dialog model (Lu et al., 2017) .",
"Our approach is also different from those semi-supervised approaches in the teacher-student framework (Dehghani et al., 2017a,b) , as there are no labeled data in learning.",
"Experiment We conduct experiments on two public data sets: STC data set (Wang et al., 2013) for single-turn response selection and Douban Conversation Corpus for multi-turn response selection.",
"Note that we do not test the proposed approach on Ubuntu Corpus (Lowe et al., 2015) , because both training and test data in the corpus are constructed by random sampling.",
"Implementation Details We implement our approach with TensorFlow.",
"In both experiments, the same Seq2Seq model is exploited which is trained with 3.3 million inputresponse pairs extracted from the training set of the Douban data.",
"Each input is a concatenation of consecutive utterances in a context, and the response is the next turn ({u_{<i}}, u_i).",
"We set the vocabulary size as 30,000, the hidden vector size as 1024, and the embedding size as 620.",
"Optimization is conducted with stochastic gradient descent (Bottou, 2010) , and is terminated when perplexity on a validation set (170k pairs) does not decrease in 3 consecutive epochs.",
"In optimization of Objective (2), we initialize M(·, ·) with a model trained under Objective (1) with the (random) negative sampling strategy, and fix word embeddings throughout training.",
"This can stabilize the learning process.",
"The learning rate is fixed as 0.1.",
"Single-turn Response Selection Experiment settings: in the STC (stands for Short Text Conversation) data set, the task is to select a proper response for a post in Weibo 2 .",
"The training set contains 4.8 million post-response (true response) pairs.",
"The test set consists of 422 posts with each one associated with around 30 responses labeled by human annotators in \"good\" and \"bad\".",
"In total, there are 12,402 labeled pairs in the test data.",
"Following (Wang et al., 2013 (Wang et al., , 2015 , we combine the score from a matching model with TF-IDF based cosine similarity using RankSVM whose parameters are chosen by 5-fold cross validation.",
"Precision at position 1 (P@1) is employed as an evaluation metric.",
"In addition to the models compared on the data in the existing literatures, we also implement dual LSTM (Lowe et al., 2015) as a baseline.",
"As case studies, we learn a dual LSTM and a CNN (Hu et al., 2014) with the proposed approach, and denote them as LSTM+WS (Weak Supervision) and CNN+WS, respectively.",
"When constructing D, we build an index with the training data using Lucene 3 and retrieve 9 candidates (i.e., {y_{i,2}, .",
".",
".",
", y_{i,n}}) for each post with the inline algorithm of the index.",
"We form a validation set by randomly sampling 10 thousand posts associated with the responses from D (human response is positive and others are treated as negative).",
"Results: Table 1 reports the results.",
"We can see from Table 1 (Results on STC, P@1: TFIDF (Wang et al., 2013) 0.574; +Translation (Wang et al., 2013) 0.587; +WordEmbedding 0.579; +DeepMatch_topic 0.587; +DeepMatch_tree (Wang et al., 2015) 0.608; +LSTM (Lowe et al., 2015) 0.592; +LSTM+WS 0.616; +CNN (Hu et al., 2014) 0.585; +CNN+WS 0.604) that CNN and LSTM are consistently improved when learned with the proposed approach, and the improvements over the models learned with random sampling are statistically significant (t-test with p-value < 0.01).",
"LSTM+WS even surpasses the best performing model, DeepMatch tree , reported on this data.",
"These results indicate the usefulness of the proposed approach in practice.",
"One can expect improvements to models like DeepMatch tree with the new learning method.",
"We leave the verification as future work.",
"Multi-turn Response Selection Experiment settings: Douban Conversation Corpus contains 0.5 million context-response (true response) pairs for training and 1000 contexts for test.",
"In the test set, every context has 10 response candidates, and each of the response has a label \"good\" or \"bad\" judged by human annotators.",
"Mean average precision (MAP) (Baeza-Yates et al., 1999) , mean reciprocal rank (MRR) (Voorhees, 1999) , and precision at position 1 (P@1) are employed as evaluation metrics.",
"We copy the numbers reported in for the baseline models, and learn LSTM, Multi-View, and SMN with the proposed approach.",
"We build an index with the training data, and retrieve 9 candidates with the method in for each context when constructing D. 10 thousand pairs are sampled from D as a validation set.",
"Results: Table 2 reports the results.",
"Consistent with the results on the STC data, every model (+WS one) gets improved with the new learning approach, and the improvements are statistically significant (t-test with p-value < 0.01).",
"Discussion Ablation studies: we first replace the weak supervision s̄_{i,j} in Equation (2) with a constant margin (denoted as model+const), and then keep everything the same as our approach but replace D with a set constructed by random sampling, denoted as model+WSrand. [Table 2 residue — MAP / MRR / P@1: … 0.488 / 0.527 / 0.330; LSTM (Lowe et al., 2015) 0.485 / 0.527 / 0.320; LSTM+WS 0.519 / 0.559 / 0.359; Multi-View (Zhou et al., 2016) 0.505 / 0.543 / 0.342; Multi-View+WS 0.534 / 0.575 / 0.378; SMN 0.526 / 0.571 / 0.393; SMN+WS 0.565 / 0.609 / 0.421]",
"Table 3 reports the results.",
"We can conclude that both the weak supervision and the strategy of training data construction are important to the success of the proposed learning approach.",
"Training data construction plays a more crucial role, because it involves more true positives and negatives with different semantic distances to the positives into learning.",
"Does updating the Seq2Seq model help?",
"It is well known that Seq2Seq models suffer from the \"safe response\" (Li et al., 2016a) problem, which may bias the weak supervision signals to high-frequency responses.",
"Therefore, we attempt to iteratively optimize the Seq2Seq model and the matching model and check if the matching model can be further improved.",
"Specifically, we update the Seq2Seq model every 20 mini-batches with the policy-based reinforcement learning approach proposed in (Li et al., 2016b) .",
"The reward is defined as the matching score of a context and a response given by the matching model.",
"Unfortunately, we do not observe significant improvement on the matching model.",
"The result is attributed to two factors: (1) it is difficult to significantly improve the Seq2Seq model with a policy gradient based method; and (2) eliminating \"safe responses\" for the Seq2Seq model cannot help a matching model learn a better decision boundary.",
"How the number of response candidates affects learning: we vary the number of {y_{i,j}}_{j=1}^{n} in D over {2, 5, 10, 20} and study how this hyperparameter influences learning.",
"We study with LSTM on the STC data and SMN on the Douban data.",
"Table 4 reports the results.",
"We can see that as the number of candidates increases, the performance of the learned models becomes better.",
"Even with 2 candidates (one from a human and the other from retrieval), our approach can still improve the performance of matching models.",
"Conclusion and Future Work Previous studies focus on architecture design for retrieval-based chatbots, but neglect the problems brought by random negative sampling in the learning process.",
"In this paper, we propose leveraging a Seq2Seq model as a weak annotator on unlabeled data to learn a matching model for response selection.",
"By this means, we can mine hard instances for the matching model and score them with the weak annotator.",
"Experimental results on public data sets verify the effectiveness of the new learning approach.",
"In the future, we will investigate how to remove bias from the weak supervisors, and further improve the matching model performance with a semi-supervised approach."
]
}
|
{
"paper_header_number": [
"1",
"2.1",
"2.2",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4"
],
"paper_header_content": [
"Introduction",
"The Existing Learning Approach",
"A New Learning Method",
"Experiment",
"Implementation Details",
"Single-turn Response Selection",
"Multi-turn Response Selection",
"Discussion",
"Conclusion and Future Work"
]
}
|
GEM-SciDuet-train-53#paper-1093#slide-10
|
Ablation Test
|
+WSrand: negative samples are randomly generated.
+const: the marginal in the loss function is a static number.
+WS: Our full model
|
+WSrand: negative samples are randomly generated.
+const: the marginal in the loss function is a static number.
+WS: Our full model
|
[] |
GEM-SciDuet-train-53#paper-1093#slide-11
|
1093
|
Learning Matching Models with Weak Supervision for Response Selection in Retrieval-based Chatbots
|
We propose a method that can leverage unlabeled data to learn a matching model for response selection in retrieval-based chatbots. The method employs a sequence-tosequence architecture (Seq2Seq) model as a weak annotator to judge the matching degree of unlabeled pairs, and then performs learning with both the weak signals and the unlabeled data. Experimental results on two public data sets indicate that matching models get significant improvements when they are learned with the proposed method.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113
],
"paper_content_text": [
"Introduction Recently, more and more attention from both academia and industry is being paid to building non-task-oriented chatbots that can naturally converse with humans on any open-domain topic.",
"Existing approaches can be categorized into generation-based methods (Shang et al., 2015; Vinyals and Le, 2015; Serban et al., 2016; Sordoni et al., 2015; Serban et al., 2017; Xing et al., 2018) which synthesize a response with natural language generation techniques, and retrieval-based methods (Hu et al., 2014; Lowe et al., 2015; Zhou et al., 2016; which select a response from a pre-built index.",
"In this work, we study response selection for retrieval-based chatbots, not only because retrieval-based methods can return fluent and informative responses, but also because they have been successfully applied to many real products such as the social-bot XiaoIce from Microsoft (Shum et al., 2018) and the E-commerce assistant AliMe Assist from Alibaba Group .",
"* Corresponding Author A key step to response selection is measuring the matching degree between a response candidate and an input which is either a single message (Hu et al., 2014) or a conversational context consisting of multiple utterances .",
"While existing research focuses on how to define a matching model with neural networks, little attention has been paid to how to learn such a model when few labeled data are available.",
"In practice, because human labeling is expensive and exhausting, one cannot have large scale labeled data for model training.",
"Thus, a common practice is to transform the matching problem to a classification problem with human responses as positive examples and randomly sampled ones as negative examples.",
"This strategy, however, oversimplifies the learning problem, as most of the randomly sampled responses are either far from the semantics of the messages or the contexts, or they are false negatives which pollute the training data as noise.",
"As a result, there often exists a significant gap between the performance of a model in training and the same model in practice (Wang et al., 2015; .",
"1 We propose a new method that can effectively leverage unlabeled data for learning matching models.",
"To simulate the real scenario of a retrieval-based chatbot, we construct an unlabeled data set by retrieving response candidates from an index.",
"Then, we employ a weak annotator to provide matching signals for the unlabeled inputresponse pairs, and leverage the signals to supervise the learning of matching models.",
"The weak annotator is pre-trained from large scale human-human conversations without any annotations, and thus a Seq2Seq model becomes a natural choice.",
"Our approach is compatible with any matching models, and falls in a teacher-student framework (Hinton et al., 2015) where the Seq2Seq model transfers the knowledge from human-human conversations to the learning process of the matching models.",
"Broadly speaking, both of (Hinton et al., 2015) and our work let a neural network supervise the learning of another network.",
"An advantage of our method is that it turns the hard zero-one labels in the existing learning paradigm to soft (weak) matching scores.",
"Hence, the model can learn a large margin between a true response and a true negative example, while keeping the semantic distance between a true response and a false negative example short.",
"Furthermore, because the training phase simulates the real scenario, harder examples can be seen during training, which makes the model more robust at test time.",
"We conduct experiments on two public data sets, and experimental results on both data sets indicate that models learned with our method can significantly outperform their counterparts learned with the random sampling strategy.",
"Our contributions include: (1) proposal of a new method that can leverage unlabeled data to learn matching models for retrieval-based chatbots; and (2) empirical verification of the effectiveness of the method on public data sets.",
"Approach The Existing Learning Approach Given a data set D = {x i , (y i,1 , .",
".",
".",
", y i,n )} N i=1 with x i a message or a conversational context and y i,j a response candidate of x i , we aim to learn a matching model M(·, ·) from D. Thus, for any new pair (x, y), M(x, y) measures the matching degree between x and y.",
"To obtain a matching model, one has to deal with two problems: (1) how to define M(·, ·); and (2) how to perform learning.",
"Existing work focuses on Problem (1), where state-of-the-art methods include dual LSTM (Lowe et al., 2015) , Multi-View LSTM (Zhou et al., 2016) , CNN , and Sequential Matching Network , but adopts a simple strategy for Problem (2): ∀x_i, a human response is designated as y_{i,1} with a label 1, and some randomly sampled responses are treated as (y_{i,2}, .",
".",
".",
", y_{i,n}) with labels 0.",
"M(·, ·) is then learned by maximizing the following objective: \\sum_{i=1}^{N} \\sum_{j=1}^{n} [r_{i,j} \\log(M(x_i, y_{i,j})) + (1 − r_{i,j}) \\log(1 − M(x_i, y_{i,j}))], (1) where r_{i,j} ∈ {0, 1} is a label.",
"While matching accuracy can be improved by carefully designing M(·, ·), the bottleneck becomes the learning approach, which suffers obvious problems: most of the randomly sampled y_{i,j} are semantically far from x_i, which may cause an undesired decision boundary at the end of optimization; and some y_{i,j} are false negatives.",
"As hard zero-one labels are adopted in Equation (1) , these false negatives may mislead the learning algorithm.",
"The problems remind us that besides good architectures of matching models, we also need a good approach to learn such models from data.",
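As a concrete sketch, the random-sampling objective of Equation (1) can be written in plain Python; the function name and the scores/labels are hypothetical stand-ins for M(x_i, y_{i,j}) and r_{i,j}, not the paper's implementation:

```python
import math

def objective_random_sampling(match_scores, labels):
    """Log-likelihood objective of Equation (1).

    match_scores[i][j] plays the role of M(x_i, y_{i,j}) in (0, 1);
    labels[i][j] is the hard 0/1 label r_{i,j}.
    """
    total = 0.0
    for scores, r in zip(match_scores, labels):
        for m, r_ij in zip(scores, r):
            total += r_ij * math.log(m) + (1 - r_ij) * math.log(1 - m)
    return total
```

Maximizing this objective pushes the model toward 1 on human responses and 0 on every sampled response, which is exactly where false negatives do damage.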
"A New Learning Method As human labeling is infeasible when training complicated neural networks, we propose a new method that can leverage unlabeled data to learn a matching model.",
"Specifically, instead of random sampling, we construct D by retrieving (y_{i,2}, .",
".",
".",
", y_{i,n}) from an index (y_{i,1} is the human response of x_i).",
"By this means, some y_{i,j} are true positives, and some are negatives but semantically close to x_i.",
"After that, we employ a weak annotator G(·, ·) to indicate the matching degree of every (x_i, y_{i,j}) in D as weak supervision signals.",
"Let s_{i,j} = G(x_i, y_{i,j}); the learning approach can then be formulated as: arg min_{M(·,·)} \\sum_{i=1}^{N} \\sum_{j=1}^{n} max(0, M(x_i, y_{i,j}) − M(x_i, y_{i,1}) + s̄_{i,j}), (2) where s̄_{i,j} is a normalized weak signal defined as max(0, s_{i,j}/s_{i,1} − 1).",
"The normalization here eliminates bias from different x_i.",
"Objective (2) encourages a large margin between the matching of an input with its human response and the matching of the input with a negative response judged by G(·, ·) (as will be seen later, s_{i,j}/s_{i,1} > 1).",
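A minimal sketch of Objective (2) as a weighted hinge loss, assuming each input's candidate list puts the human response y_{i,1} at index 0; the function names and toy scores are illustrative assumptions, not the released code:

```python
def normalized_weak_signal(s_ij, s_i1):
    """Normalized weak signal: max(0, s_{i,j}/s_{i,1} - 1).

    With log-likelihood scores from the weak annotator (both negative),
    a candidate that is harder to generate than the human response
    yields a ratio > 1 and hence a positive required margin.
    """
    return max(0.0, s_ij / s_i1 - 1.0)

def objective_weak_supervision(match_scores, weak_scores):
    """Hinge objective of Equation (2), to be minimized.

    match_scores[i][j] ~ M(x_i, y_{i,j}); weak_scores[i][j] ~ s_{i,j};
    index 0 of each list is the human response y_{i,1}.
    """
    total = 0.0
    for m, s in zip(match_scores, weak_scores):
        for j in range(1, len(m)):
            total += max(0.0, m[j] - m[0] + normalized_weak_signal(s[j], s[0]))
    return total
```

The per-pair margin s̄_{i,j} replaces the single fixed margin of a standard ranking loss, so likely false negatives (s̄ close to 0) are pushed away from the human response only gently.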
"The learning approach simulates how we build a matching model in a retrievalbased chatbot: given {x i }, some response candidates are first retrieved from an index.",
"Then human annotators are hired to judge the matching degree of each pair.",
"Finally, both the data and the human labels are fed to an optimization program for model training.",
"Here, we replace the expensive human labels with cheap judgment from G(·, ·).",
"We define G(·, ·) as a sequence-to-sequence architecture (Vinyals and Le, 2015) with an attention mechanism (Bahdanau et al., 2015) , and pre-train it with large amounts of human-human conversation data.",
"The Seq2Seq model can capture the semantic correspondence between an input and a response, and then transfer the knowledge to the learning of a matching model in the optimization of (2).",
"s_{i,j} is then defined as the log-likelihood of generating y_{i,j} from x_i: s_{i,j} = \\sum_k \\log[p(w_{y_{i,j},k} | x_i, w_{y_{i,j},l<k})], (3) where w_{y_{i,j},k} is the k-th word of y_{i,j} and w_{y_{i,j},l<k} is the word sequence before w_{y_{i,j},k}.",
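Under Equation (3), the weak score is just the sum of per-token conditional log-probabilities from the Seq2Seq annotator. A toy sketch, where the probability list is a hypothetical annotator output rather than a real decoder:

```python
import math

def weak_score(token_probs):
    """Equation (3): sum of log p(w_{y_{i,j},k} | x_i, w_{y_{i,j},l<k}).

    token_probs is a list of the per-token conditional probabilities the
    Seq2Seq model assigns while force-decoding the candidate response.
    """
    return sum(math.log(p) for p in token_probs)
```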
"Since negative examples are retrieved by a search engine, the oversimplification problem of the negative sampling approach can be partially mitigated.",
"We leverage a weak annotator to assign a score for each example to distinguish false negative examples and true negative examples.",
"Equation (2) turns the hard zero-one labels in Equation (1) into soft matching degrees, and thus our method encourages the model to be more confident in classifying a response with a high s̄_{i,j} score as a negative one.",
"In this way, we avoid treating false negative examples and true negative examples equally during training, and update the model in the correct direction.",
"It is noteworthy that although our approach also involves an interaction between a generator and a discriminator, it is different from the GANs (Goodfellow et al., 2014) in principle.",
"GANs try to learn a better generator via an adversarial process, while our approach aims to improve the discriminator with supervision from the generator, which also differentiates it from the recent work on transferring knowledge from a discriminator to a generative visual dialog model (Lu et al., 2017) .",
"Our approach is also different from those semi-supervised approaches in the teacher-student framework (Dehghani et al., 2017a,b) , as there are no labeled data in learning.",
"Experiment We conduct experiments on two public data sets: STC data set (Wang et al., 2013) for single-turn response selection and Douban Conversation Corpus for multi-turn response selection.",
"Note that we do not test the proposed approach on Ubuntu Corpus (Lowe et al., 2015) , because both training and test data in the corpus are constructed by random sampling.",
"Implementation Details We implement our approach with TensorFlow.",
"In both experiments, the same Seq2Seq model is exploited which is trained with 3.3 million inputresponse pairs extracted from the training set of the Douban data.",
"Each input is a concatenation of consecutive utterances in a context, and the response is the next turn ({u_{<i}}, u_i).",
"We set the vocabulary size as 30,000, the hidden vector size as 1024, and the embedding size as 620.",
"Optimization is conducted with stochastic gradient descent (Bottou, 2010) , and is terminated when perplexity on a validation set (170k pairs) does not decrease in 3 consecutive epochs.",
"In optimization of Objective (2), we initialize M(·, ·) with a model trained under Objective (1) with the (random) negative sampling strategy, and fix word embeddings throughout training.",
"This can stabilize the learning process.",
"The learning rate is fixed as 0.1.",
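The stopping criterion described above — terminate when validation perplexity has not decreased for 3 consecutive epochs — can be sketched as a simple patience rule; the function name and perplexity values are illustrative:

```python
def should_stop(perplexities, patience=3):
    """Return True once the last `patience` epochs all failed to improve
    on the best validation perplexity seen before them."""
    if len(perplexities) <= patience:
        return False
    best_before = min(perplexities[:-patience])
    return all(p >= best_before for p in perplexities[-patience:])
```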
"Single-turn Response Selection Experiment settings: in the STC (stands for Short Text Conversation) data set, the task is to select a proper response for a post in Weibo 2 .",
"The training set contains 4.8 million post-response (true response) pairs.",
"The test set consists of 422 posts with each one associated with around 30 responses labeled by human annotators in \"good\" and \"bad\".",
"In total, there are 12,402 labeled pairs in the test data.",
"Following (Wang et al., 2013 (Wang et al., , 2015 , we combine the score from a matching model with TF-IDF based cosine similarity using RankSVM whose parameters are chosen by 5-fold cross validation.",
"Precision at position 1 (P@1) is employed as an evaluation metric.",
"In addition to the models compared on the data in the existing literatures, we also implement dual LSTM (Lowe et al., 2015) as a baseline.",
"As case studies, we learn a dual LSTM and a CNN (Hu et al., 2014) with the proposed approach, and denote them as LSTM+WS (Weak Supervision) and CNN+WS, respectively.",
"When constructing D, we build an index with the training data using Lucene 3 and retrieve 9 candidates (i.e., {y_{i,2}, .",
".",
".",
", y_{i,n}}) for each post with the inline algorithm of the index.",
"We form a validation set by randomly sampling 10 thousand posts associated with the responses from D (human response is positive and others are treated as negative).",
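The construction of D described above (one human response plus candidates retrieved from an index over the training responses) might look as follows, with a toy word-overlap retriever standing in for the Lucene index; the data and the `retrieve` scoring are illustrative assumptions, not the paper's pipeline:

```python
def retrieve(post, index, k):
    """Toy stand-in for the Lucene index: rank indexed responses by
    word overlap with the post and return the top k."""
    post_words = set(post.split())
    ranked = sorted(index,
                    key=lambda r: len(post_words & set(r.split())),
                    reverse=True)
    return ranked[:k]

def build_dataset(pairs, k=9):
    """For each (post, human_response) pair, y_{i,1} is the human response
    and y_{i,2..n} are k candidates retrieved from all training responses."""
    index = [resp for _, resp in pairs]
    data = []
    for post, human in pairs:
        candidates = [r for r in retrieve(post, index, k + 1) if r != human][:k]
        data.append((post, [human] + candidates))
    return data
```

Because the candidates come from retrieval rather than uniform sampling, they are semantically closer to the post, which is what makes the weak-supervision margin informative.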
"Results: Table 1 reports the results.",
"We can see from Table 1 (Results on STC, P@1: TFIDF (Wang et al., 2013) 0.574; +Translation (Wang et al., 2013) 0.587; +WordEmbedding 0.579; +DeepMatch_topic 0.587; +DeepMatch_tree (Wang et al., 2015) 0.608; +LSTM (Lowe et al., 2015) 0.592; +LSTM+WS 0.616; +CNN (Hu et al., 2014) 0.585; +CNN+WS 0.604) that CNN and LSTM are consistently improved when learned with the proposed approach, and the improvements over the models learned with random sampling are statistically significant (t-test with p-value < 0.01).",
"LSTM+WS even surpasses the best performing model, DeepMatch tree , reported on this data.",
"These results indicate the usefulness of the proposed approach in practice.",
"One can expect improvements to models like DeepMatch tree with the new learning method.",
"We leave the verification as future work.",
"Multi-turn Response Selection Experiment settings: Douban Conversation Corpus contains 0.5 million context-response (true response) pairs for training and 1000 contexts for test.",
"In the test set, every context has 10 response candidates, and each of the response has a label \"good\" or \"bad\" judged by human annotators.",
"Mean average precision (MAP) (Baeza-Yates et al., 1999) , mean reciprocal rank (MRR) (Voorhees, 1999) , and precision at position 1 (P@1) are employed as evaluation metrics.",
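The three evaluation metrics can be sketched as follows, where each query is a list of 0/1 relevance labels in ranked order; this is a generic implementation, not the paper's evaluation script:

```python
def precision_at_1(ranked_labels):
    """P@1: fraction of queries whose top-ranked candidate is relevant."""
    return sum(labels[0] for labels in ranked_labels) / len(ranked_labels)

def mean_reciprocal_rank(ranked_labels):
    """MRR: average of 1/rank of the first relevant candidate (0 if none)."""
    total = 0.0
    for labels in ranked_labels:
        for rank, rel in enumerate(labels, start=1):
            if rel:
                total += 1.0 / rank
                break
    return total / len(ranked_labels)

def mean_average_precision(ranked_labels):
    """MAP: mean over queries of the average precision at relevant ranks."""
    total = 0.0
    for labels in ranked_labels:
        hits, ap = 0, 0.0
        for rank, rel in enumerate(labels, start=1):
            if rel:
                hits += 1
                ap += hits / rank
        total += ap / max(hits, 1)
    return total / len(ranked_labels)
```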
"We copy the numbers reported in for the baseline models, and learn LSTM, Multi-View, and SMN with the proposed approach.",
"We build an index with the training data, and retrieve 9 candidates with the method in for each context when constructing D. 10 thousand pairs are sampled from D as a validation set.",
"Results: Table 2 reports the results.",
"Consistent with the results on the STC data, every model (+WS one) gets improved with the new learning approach, and the improvements are statistically significant (t-test with p-value < 0.01).",
"Discussion Ablation studies: we first replace the weak supervision s̄_{i,j} in Equation (2) with a constant margin (denoted as model+const), and then keep everything the same as our approach but replace D with a set constructed by random sampling, denoted as model+WSrand. [Table 2 residue — MAP / MRR / P@1: … 0.488 / 0.527 / 0.330; LSTM (Lowe et al., 2015) 0.485 / 0.527 / 0.320; LSTM+WS 0.519 / 0.559 / 0.359; Multi-View (Zhou et al., 2016) 0.505 / 0.543 / 0.342; Multi-View+WS 0.534 / 0.575 / 0.378; SMN 0.526 / 0.571 / 0.393; SMN+WS 0.565 / 0.609 / 0.421]",
"Table 3 reports the results.",
"We can conclude that both the weak supervision and the strategy of training data construction are important to the success of the proposed learning approach.",
"Training data construction plays a more crucial role, because it involves more true positives and negatives with different semantic distances to the positives into learning.",
"Does updating the Seq2Seq model help?",
"It is well known that Seq2Seq models suffer from the \"safe response\" (Li et al., 2016a) problem, which may bias the weak supervision signals to high-frequency responses.",
"Therefore, we attempt to iteratively optimize the Seq2Seq model and the matching model and check if the matching model can be further improved.",
"Specifically, we update the Seq2Seq model every 20 mini-batches with the policy-based reinforcement learning approach proposed in (Li et al., 2016b) .",
"The reward is defined as the matching score of a context and a response given by the matching model.",
"Unfortunately, we do not observe significant improvement on the matching model.",
"The result is attributed to two factors: (1) it is difficult to significantly improve the Seq2Seq model with a policy gradient based method; and (2) eliminating \"safe responses\" for the Seq2Seq model cannot help a matching model learn a better decision boundary.",
"How the number of response candidates affects learning: we vary the number of {y_{i,j}}_{j=1}^{n} in D over {2, 5, 10, 20} and study how this hyperparameter influences learning.",
"We study with LSTM on the STC data and SMN on the Douban data.",
"Table 4 reports the results.",
"We can see that as the number of candidates increases, the performance of the learned models becomes better.",
"Even with 2 candidates (one from a human and the other from retrieval), our approach can still improve the performance of matching models.",
"Conclusion and Future Work Previous studies focus on architecture design for retrieval-based chatbots, but neglect the problems brought by random negative sampling in the learning process.",
"In this paper, we propose leveraging a Seq2Seq model as a weak annotator on unlabeled data to learn a matching model for response selection.",
"By this means, we can mine hard instances for the matching model and score them with the weak annotator.",
"Experimental results on public data sets verify the effectiveness of the new learning approach.",
"In the future, we will investigate how to remove bias from the weak supervisors, and further improve the matching model performance with a semi-supervised approach."
]
}
|
{
"paper_header_number": [
"1",
"2.1",
"2.2",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4"
],
"paper_header_content": [
"Introduction",
"The Existing Learning Approach",
"A New Learning Method",
"Experiment",
"Implementation Details",
"Single-turn Response Selection",
"Multi-turn Response Selection",
"Discussion",
"Conclusion and Future Work"
]
}
|
GEM-SciDuet-train-53#paper-1093#slide-11
|
More Findings
|
Updating the Seq2Seq model is not beneficial to the discriminator.
The number of negative instances is an important hyper- parameter for our model.
|
Updating the Seq2Seq model is not beneficial to the discriminator.
The number of negative instances is an important hyper- parameter for our model.
|
[] |
GEM-SciDuet-train-53#paper-1093#slide-12
|
1093
|
Learning Matching Models with Weak Supervision for Response Selection in Retrieval-based Chatbots
|
We propose a method that can leverage unlabeled data to learn a matching model for response selection in retrieval-based chatbots. The method employs a sequence-tosequence architecture (Seq2Seq) model as a weak annotator to judge the matching degree of unlabeled pairs, and then performs learning with both the weak signals and the unlabeled data. Experimental results on two public data sets indicate that matching models get significant improvements when they are learned with the proposed method.
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113
],
"paper_content_text": [
"Introduction Recently, more and more attention from both academia and industry is being paid to building non-task-oriented chatbots that can naturally converse with humans on any open-domain topics.",
"Existing approaches can be categorized into generation-based methods (Shang et al., 2015; Vinyals and Le, 2015; Serban et al., 2016; Sordoni et al., 2015; Serban et al., 2017; Xing et al., 2018), which synthesize a response with natural language generation techniques, and retrieval-based methods (Hu et al., 2014; Lowe et al., 2015; Zhou et al., 2016), which select a response from a pre-built index.",
"In this work, we study response selection for retrieval-based chatbots, not only because retrieval-based methods can return fluent and informative responses, but also because they have been successfully applied to many real products such as the social-bot XiaoIce from Microsoft (Shum et al., 2018) and the E-commerce assistant AliMe Assist from Alibaba Group .",
"* Corresponding Author. A key step to response selection is measuring the matching degree between a response candidate and an input which is either a single message (Hu et al., 2014) or a conversational context consisting of multiple utterances.",
"While existing research focuses on how to define a matching model with neural networks, little attention has been paid to how to learn such a model when few labeled data are available.",
"In practice, because human labeling is expensive and exhausting, one cannot have large scale labeled data for model training.",
"Thus, a common practice is to transform the matching problem to a classification problem with human responses as positive examples and randomly sampled ones as negative examples.",
"This strategy, however, oversimplifies the learning problem, as most of the randomly sampled responses are either far from the semantics of the messages or the contexts, or they are false negatives which pollute the training data as noise.",
"As a result, there often exists a significant gap between the performance of a model in training and the same model in practice (Wang et al., 2015; .",
"We propose a new method that can effectively leverage unlabeled data for learning matching models.",
"To simulate the real scenario of a retrieval-based chatbot, we construct an unlabeled data set by retrieving response candidates from an index.",
"Then, we employ a weak annotator to provide matching signals for the unlabeled input-response pairs, and leverage the signals to supervise the learning of matching models.",
"The weak annotator is pre-trained from large scale humanhuman conversations without any annotations, and thus a Seq2Seq model becomes a natural choice.",
"Our approach is compatible with any matching models, and falls in a teacher-student framework (Hinton et al., 2015) where the Seq2Seq model transfers the knowledge from human-human conversations to the learning process of the matching models.",
"Broadly speaking, both (Hinton et al., 2015) and our work let a neural network supervise the learning of another network.",
"An advantage of our method is that it turns the hard zero-one labels in the existing learning paradigm to soft (weak) matching scores.",
"Hence, the model can learn a large margin between a true response and a true negative example, while keeping the semantic distance between a true response and a false negative example short.",
"Furthermore, due to the simulation of the real scenario, harder examples can be seen in the training phase, which makes the model more robust at test time.",
"We conduct experiments on two public data sets, and experimental results on both data sets indicate that models learned with our method can significantly outperform their counterparts learned with the random sampling strategy.",
"Our contributions include: (1) proposal of a new method that can leverage unlabeled data to learn matching models for retrieval-based chatbots; and (2) empirical verification of the effectiveness of the method on public data sets.",
"Approach The Existing Learning Approach Given a data set D = {x_i, (y_{i,1}, .",
".",
".",
", y_{i,n})}_{i=1}^{N} with x_i a message or a conversational context and y_{i,j} a response candidate of x_i, we aim to learn a matching model M(·, ·) from D. Thus, for any new pair (x, y), M(x, y) measures the matching degree between x and y.",
"To obtain a matching model, one has to deal with two problems: (1) how to define M(·, ·); and (2) how to perform learning.",
"Existing work focuses on Problem (1), where state-of-the-art methods include dual LSTM (Lowe et al., 2015), Multi-View LSTM (Zhou et al., 2016), CNN, and Sequential Matching Network, but adopts a simple strategy for Problem (2): ∀x_i, a human response is designated as y_{i,1} with a label 1, and some randomly sampled responses are treated as (y_{i,2}, .",
".",
".",
", y_{i,n}) with labels 0.",
"M(·, ·) is then learned by maximizing the following objective: Σ_{i=1}^{N} Σ_{j=1}^{n} [r_{i,j} log(M(x_i, y_{i,j})) + (1 − r_{i,j}) log(1 − M(x_i, y_{i,j}))], (1) where r_{i,j} ∈ {0, 1} is a label.",
"While matching accuracy can be improved by carefully designing M(·, ·), the bottleneck becomes the learning approach, which suffers from obvious problems: most of the randomly sampled y_{i,j} are semantically far from x_i, which may cause an undesired decision boundary at the end of optimization; some y_{i,j} are false negatives.",
"As hard zero-one labels are adopted in Equation (1) , these false negatives may mislead the learning algorithm.",
"The problems remind us that besides good architectures of matching models, we also need a good approach to learn such models from data.",
"A New Learning Method As human labeling is infeasible when training complicated neural networks, we propose a new method that can leverage unlabeled data to learn a matching model.",
"Specifically, instead of random sampling, we construct D by retrieving (y_{i,2}, .",
".",
".",
", y_{i,n}) from an index (y_{i,1} is the human response of x_i).",
"By this means, some y_{i,j} are true positives, and some are negatives but semantically close to x_i.",
"After that, we employ a weak annotator G(·, ·) to indicate the matching degree of every (x_i, y_{i,j}) in D as a weak supervision signal.",
"Let s_{i,j} = G(x_i, y_{i,j}); then the learning approach can be formulated as: arg min_{M(·,·)} Σ_{i=1}^{N} Σ_{j=1}^{n} max(0, M(x_i, y_{i,j}) − M(x_i, y_{i,1}) + s̄_{i,j}), (2) where s̄_{i,j} is a normalized weak signal defined as max(0, s_{i,j}/s_{i,1} − 1).",
"The normalization here eliminates bias from different x_i.",
"Objective (2) encourages a large margin between the matching of an input and its human response and the matching of the input and a negative response judged by G(·, ·) (as will be seen later, s_{i,j}/s_{i,1} > 1).",
"The learning approach simulates how we build a matching model in a retrievalbased chatbot: given {x i }, some response candidates are first retrieved from an index.",
"Then human annotators are hired to judge the matching degree of each pair.",
"Finally, both the data and the human labels are fed to an optimization program for model training.",
"Here, we replace the expensive human labels with cheap judgment from G(·, ·).",
"We define G(·, ·) as a sequence-to-sequence architecture (Vinyals and Le, 2015) with an attention mechanism (Bahdanau et al., 2015), and pre-train it with large amounts of human-human conversation data.",
"The Seq2Seq model can capture the semantic correspondence between an input and a response, and then transfer the knowledge to the learning of a matching model in the optimization of (2).",
"s_{i,j} is then defined as the log-likelihood of generating y_{i,j} from x_i: s_{i,j} = Σ_k log[p(w_{y_{i,j},k} | x_i, w_{y_{i,j},l<k})], (3) where w_{y_{i,j},k} is the k-th word of y_{i,j} and w_{y_{i,j},l<k} is the word sequence before w_{y_{i,j},k}.",
"Since negative examples are retrieved by a search engine, the oversimplification problem of the negative sampling approach can be partially mitigated.",
"We leverage a weak annotator to assign a score for each example to distinguish false negative examples and true negative examples.",
"Equation (2) turns the hard zero-one labels in Equation (1) into soft matching degrees, and thus our method encourages the model to be more confident in classifying a response with a high s̄_{i,j} score as a negative one.",
"In this way, we can avoid treating false negative examples and true negative examples equally during training, and update the model in a correct direction.",
"It is noteworthy that although our approach also involves an interaction between a generator and a discriminator, it is different from the GANs (Goodfellow et al., 2014) in principle.",
"GANs try to learn a better generator via an adversarial process, while our approach aims to improve the discriminator with supervision from the generator, which also differentiates it from the recent work on transferring knowledge from a discriminator to a generative visual dialog model (Lu et al., 2017) .",
"Our approach is also different from those semi-supervised approaches in the teacher-student framework (Dehghani et al., 2017a,b) , as there are no labeled data in learning.",
"Experiment We conduct experiments on two public data sets: STC data set (Wang et al., 2013) for single-turn response selection and Douban Conversation Corpus for multi-turn response selection.",
"Note that we do not test the proposed approach on Ubuntu Corpus (Lowe et al., 2015) , because both training and test data in the corpus are constructed by random sampling.",
"Implementation Details We implement our approach with TensorFlow.",
"In both experiments, the same Seq2Seq model is exploited, which is trained with 3.3 million input-response pairs extracted from the training set of the Douban data.",
"Each input is a concatenation of consecutive utterances in a context, and the response is the next turn ({u_{<i}}, u_i).",
"We set the vocabulary size to 30,000, the hidden vector size to 1024, and the embedding size to 620.",
"Optimization is conducted with stochastic gradient descent (Bottou, 2010) , and is terminated when perplexity on a validation set (170k pairs) does not decrease in 3 consecutive epochs.",
"In optimization of Objective (2), we initialize M(·, ·) with a model trained under Objective (1) with the (random) negative sampling strategy, and fix word embeddings throughout training.",
"This can stabilize the learning process.",
"The learning rate is fixed as 0.1.",
"Single-turn Response Selection Experiment settings: in the STC (Short Text Conversation) data set, the task is to select a proper response for a post on Weibo.",
"The training set contains 4.8 million post-response (true response) pairs.",
"The test set consists of 422 posts with each one associated with around 30 responses labeled by human annotators in \"good\" and \"bad\".",
"In total, there are 12, 402 labeled pairs in the test data.",
"Following (Wang et al., 2013, 2015), we combine the score from a matching model with TF-IDF based cosine similarity using RankSVM, whose parameters are chosen by 5-fold cross validation.",
"Precision at position 1 (P@1) is employed as an evaluation metric.",
"In addition to the models compared on the data in the existing literatures, we also implement dual LSTM (Lowe et al., 2015) as a baseline.",
"As case studies, we learn a dual LSTM and a CNN (Hu et al., 2014) with the proposed approach, and denote them as LSTM+WS (Weak Supervision) and CNN+WS, respectively.",
"When constructing D, we build an index with the training data using Lucene and retrieve 9 candidates (i.e., {y_{i,2}, .",
".",
".",
", y_{i,n}}) for each post with the inline algorithm of the index.",
"We form a validation set by randomly sampling 10 thousand posts associated with the responses from D (human response is positive and others are treated as negative).",
"Results: Table 1 reports the results.",
"We can see (Table 1: Results on STC; P@1 — TFIDF (Wang et al., 2013) 0.574; +Translation (Wang et al., 2013) 0.587; +WordEmbedding 0.579; +DeepMatch_topic 0.587; +DeepMatch_tree (Wang et al., 2015) 0.608; +LSTM (Lowe et al., 2015) 0.592; +LSTM+WS 0.616; +CNN (Hu et al., 2014) 0.585; +CNN+WS 0.604) that CNN and LSTM consistently get improved when learned with the proposed approach, and the improvements over the models learned with random sampling are statistically significant (t-test with p-value < 0.01).",
"LSTM+WS even surpasses the best performing model, DeepMatch_tree, reported on this data.",
"These results indicate the usefulness of the proposed approach in practice.",
"One can expect improvements to models like DeepMatch tree with the new learning method.",
"We leave the verification as future work.",
"Multi-turn Response Selection Experiment settings: Douban Conversation Corpus contains 0.5 million context-response (true response) pairs for training and 1000 contexts for test.",
"In the test set, every context has 10 response candidates, and each of the response has a label \"good\" or \"bad\" judged by human annotators.",
"Mean average precision (MAP) (Baeza-Yates et al., 1999) , mean reciprocal rank (MRR) (Voorhees, 1999) , and precision at position 1 (P@1) are employed as evaluation metrics.",
"We copy the numbers reported in for the baseline models, and learn LSTM, Multi-View, and SMN with the proposed approach.",
"We build an index with the training data, and retrieve 9 candidates with the method in for each context when constructing D. 10 thousand pairs are sampled from D as a validation set.",
"Results: Table 2 reports the results.",
"Consistent with the results on the STC data, every model (+WS one) gets improved with the new learning approach, and the improvements are statistically significant (t-test with p-value < 0.01).",
"Discussion Ablation studies: we first replace the weak supervision s̄_{i,j} in Equation (2); second, we keep everything the same as our approach but replace D with a set constructed by random sampling, denoted as model+WSrand. (Table 2 — MAP/MRR/P@1: 0.488/0.527/0.330; LSTM (Lowe et al., 2015) 0.485/0.527/0.320; LSTM+WS 0.519/0.559/0.359; Multi-View (Zhou et al., 2016) 0.505/0.543/0.342; Multi-View+WS 0.534/0.575/0.378; SMN 0.526/0.571/0.393; SMN+WS 0.565/0.609/0.421.)",
"Table 3 reports the results.",
"We can conclude that both the weak supervision and the strategy of training data construction are important to the success of the proposed learning approach.",
"Training data construction plays a more crucial role, because it involves more true positives and negatives with different semantic distances to the positives into learning.",
"Does updating the Seq2Seq model help?",
"It is well known that Seq2Seq models suffer from the \"safe response\" (Li et al., 2016a) problem, which may bias the weak supervision signals to high-frequency responses.",
"Therefore, we attempt to iteratively optimize the Seq2Seq model and the matching model and check if the matching model can be further improved.",
"Specifically, we update the Seq2Seq model every 20 mini-batches with the policy-based reinforcement learning approach proposed in (Li et al., 2016b) .",
"The reward is defined as the matching score of a context and a response given by the matching model.",
"Unfortunately, we do not observe significant improvement on the matching model.",
"The result is attributed to two factors: (1) it is difficult to significantly improve the Seq2Seq model with a policy gradient based method; and (2) eliminating \"safe response\" for the Seq2Seq model cannot help a matching model learn a better decision boundary.",
"How the number of response candidates affects learning: we vary the number of {y_{i,j}}_{j=1}^{n} in D in {2, 5, 10, 20} and study how the hyperparameter influences learning.",
"We study with LSTM on the STC data and SMN on the Douban data.",
"Table 4 reports the results.",
"We can see that as the number of candidates increases, the performance of the learned models becomes better.",
"Even with 2 candidates (one from human and the other from retrieval), our approach can still improve the performance of matching models.",
"Conclusion and Future Work Previous studies focus on architecture design for retrieval-based chatbots, but neglect the problems brought by random negative sampling in the learning process.",
"In this paper, we propose leveraging a Seq2Seq model as a weak annotator on unlabeled data to learn a matching model for response selection.",
"By this means, we can mine hard instances for the matching model and score them with a weak annotator.",
"Experimental results on public data sets verify the effectiveness of the new learning approach.",
"In the future, we will investigate how to remove bias from the weak supervisors, and further improve the matching model performance with a semi-supervised approach."
]
}
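A minimal plain-Python sketch of the learning method described in the paper text above: the normalized weak signal, the hinge objective of Equation (2), and the Seq2Seq score of Equation (3). The function names and the list-of-lists data layout are illustrative assumptions for exposition only; the paper's actual implementation uses TensorFlow and is not reproduced here.

```python
def normalized_weak_signal(s_ij, s_i1):
    # Normalized weak signal from the paper: max(0, s_ij / s_i1 - 1).
    # s_i1 is the weak annotator's score for the human response y_{i,1}.
    return max(0.0, s_ij / s_i1 - 1.0)


def seq2seq_score(token_log_probs):
    # Equation (3): s_ij = sum_k log p(w_k | x_i, w_{<k}).
    # token_log_probs is assumed to hold per-token log-probabilities
    # already produced by some pre-trained Seq2Seq model.
    return sum(token_log_probs)


def weak_supervision_loss(match_scores, weak_scores):
    """Hinge objective of Equation (2).

    match_scores[i][j] = M(x_i, y_{i,j}); index j = 0 is the human response.
    weak_scores[i][j]  = s_{i,j} from the weak annotator G.
    """
    total = 0.0
    for m_row, s_row in zip(match_scores, weak_scores):
        m_pos, s_pos = m_row[0], s_row[0]  # human response
        for m_neg, s_neg in zip(m_row[1:], s_row[1:]):  # retrieved candidates
            margin = normalized_weak_signal(s_neg, s_pos)
            total += max(0.0, m_neg - m_pos + margin)
    return total
```

For example, with one input whose human response has matching score 0.9 (Seq2Seq log-likelihood −10) and one retrieved candidate with matching score 0.2 (log-likelihood −20), the normalized margin is max(0, −20/−10 − 1) = 1.0 and the loss is max(0, 0.2 − 0.9 + 1.0) = 0.3.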
|
{
"paper_header_number": [
"1",
"2.1",
"2.2",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4"
],
"paper_header_content": [
"Introduction",
"The Existing Learning Approach",
"A New Learning Method",
"Experiment",
"Implementation Details",
"Single-turn Response Selection",
"Multi-turn Response Selection",
"Discussion",
"Conclusion and Future Work"
]
}
|
GEM-SciDuet-train-53#paper-1093#slide-12
|
Conclusion
|
We study a less explored problem in retrieval-based chatbots.
We propose a new method that can leverage unlabeled data to learn matching models for retrieval-based chatbots.
We empirically verify the effectiveness of the method on public data sets.
|
We study a less explored problem in retrieval-based chatbots.
We propose a new method that can leverage unlabeled data to learn matching models for retrieval-based chatbots.
We empirically verify the effectiveness of the method on public data sets.
|
[] |
GEM-SciDuet-train-54#paper-1096#slide-0
|
1096
|
The Language of Legal and Illegal Activity on the Darknet
|
The non-indexed parts of the Internet (the Darknet) have become a haven for both legal and illegal anonymous activity. Given the magnitude of these networks, scalably monitoring their activity necessarily relies on automated tools, and notably on NLP tools. However, little is known about what characteristics texts communicated through the Darknet have, and how well off-the-shelf NLP tools do on this domain. This paper tackles this gap and performs an in-depth investigation of the characteristics of legal and illegal text in the Darknet, comparing it to a clear net website with similar content as a control condition. Taking drug-related websites as a test case, we find that texts for selling legal and illegal drugs have several linguistic characteristics that distinguish them from one another, as well as from the control condition, among them the distribution of POS tags, and the coverage of their named entities in Wikipedia. 1
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189
],
"paper_content_text": [
"Introduction The term \"Darknet\" refers to the subset of Internet sites and pages that are not indexed by search engines.",
"The Darknet is often associated with the \".onion\" top-level domain, whose websites are referred to as \"Onion sites\", and are reachable via the Tor network anonymously.",
"Under the cloak of anonymity, the Darknet harbors much illegal activity (Moore and Rid, 2016) .",
"Applying NLP tools to text from the Darknet is thus important for effective law enforcement and intelligence.",
"However, little is known about the characteristics of the language used in the Darknet, and specifically on what distinguishes text on websites that conduct legal and illegal activity.",
"In * Equal contribution 1 Our code can be found in https://github.com/ huji-nlp/cyber.",
"Our data is available upon request.",
"fact, the only work we are aware of that classified Darknet texts into legal and illegal activity is Avarikioti et al.",
"(2018) , but they too did not investigate in what ways these two classes differ.",
"This paper addresses this gap, and studies the distinguishing features between legal and illegal texts in Onion sites, taking sites that advertise drugs as a test case.",
"We compare our results to a control condition of texts from eBay 2 pages that advertise products corresponding to drug keywords.",
"We find a number of distinguishing features.",
"First, we confirm the results of Avarikioti et al.",
"(2018) , that text from legal and illegal pages (henceforth, legal and illegal texts) can be distinguished based on the identity of the content words (bag-of-words) in about 90% accuracy over a balanced sample.",
"Second, we find that the distribution of POS tags in the documents is a strong cue for distinguishing between the classes (about 71% accuracy).",
"This indicates that the two classes are different in terms of their syntactic structure.",
"Third, we find that legal and illegal texts are roughly as distinguishable from one another as legal texts and eBay pages are (both in terms of their words and their POS tags).",
"The latter point suggests that legal and illegal texts can be considered distinct domains, which explains why they can be automatically classified, but also implies that applying NLP tools to Darknet texts is likely to face the obstacles of domain adaptation.",
"Indeed, we show that named entities in illegal pages are covered less well by Wikipedia, i.e., Wikification works less well on them.",
"This suggests that for high-performance text understanding, specialized knowledge bases and tools may be needed for processing texts from the Darknet.",
"By experimenting on a different domain in Tor (user-generated content), we show that the legal/illegal distinction generalizes across domains.",
"After discussing previous works in Section 2, we detail the datasets used in Section 3.",
"Differences in the vocabulary and named entities between the classes are analyzed in Section 4, before the presentation of the classification experiments (Section 5).",
"Section 6 presents additional experiments, which explore cross-domain classification.",
"We further analyze and discuss the findings in Section 7.",
"Related Work The detection of illegal activities on the Web is sometimes derived from a more general topic classification.",
"For example, Biryukov et al.",
"(2014) classified the content of Tor hidden services into 18 topical categories, only some of which correlate with illegal activity.",
"Graczyk and Kinningham (2015) combined unsupervised feature selection and an SVM classifier for the classification of drug sales in an anonymous marketplace.",
"While these works classified Tor texts into classes, they did not directly address the legal/illegal distinction.",
"Some works directly addressed a specific type of illegality and a particular communication context.",
"Morris and Hirst (2012) used an SVM classifier to identify sexual predators in chatting message systems.",
"The model includes both lexical features, including emoticons, and behavioral features that correspond to conversational patterns.",
"Another example is the detection of pedophile activity in peer-to-peer networks (Latapy et al., 2013) , where a predefined list of keywords was used to detect child-pornography queries.",
"Besides lexical features, we here consider other general linguistic properties, such as syntactic structure.",
"Al Nabki et al.",
"(2017) presented DUTA (Darknet Usage Text Addresses), the first publicly available Darknet dataset, together with a manual classification into topical categories and subcategories.",
"For some of the categories, legal and illegal activities are distinguished.",
"However, the automatic classification presented in their work focuses on the distinction between different classes of illegal activity, without addressing the distinction between legal and illegal ones, which is the subject of the present paper.",
"Al Nabki et al.",
"(2019) extended the dataset to form DUTA-10K, which we use here.",
"Their results show that 20% of the hidden services correspond to \"suspicious\" activities.",
"The analysis was conducted using the text classifier presented in Al Nabki et al.",
"(2017) and manual verification.",
"Recently, Avarikioti et al.",
"(2018) presented another topic classification of text from Tor together with a first classification into legal and illegal activities.",
"The experiments were performed on a newly crawled corpus obtained by recursive search.",
"The legal/illegal classification was done using an SVM classifier in an active learning setting with bag-of-words features.",
"Legality was assessed in a conservative way where illegality is assigned if the purpose of the content is an obviously illegal action, even if the content might be technically legal.",
"They found that a linear kernel worked best and reported an F1 score of 85% and an accuracy of 89%.",
"Using the dataset of Al Nabki et al.",
"(2019) , and focusing on specific topical categories, we here confirm the importance of content words in the classification, and explore the linguistic dimensions supporting classification into legal and illegal texts.",
"Datasets Used Onion corpus.",
"We experiment with data from Darknet websites containing legal and illegal activity, all from the DUTA-10K corpus (Al Nabki et al., 2019) .",
"We selected the \"drugs\" sub-domain as a test case, as it is a large domain in the corpus, that has a \"legal\" and \"illegal\" sub-categories, and where the distinction between them can be reliably made.",
"These websites advertise and sell drugs, often to international customers.",
"While legal websites often sell pharmaceuticals, illegal ones are often related to substance abuse.",
"These pages are directed by sellers to their customers.",
"eBay corpus.",
"As an additional dataset of similar size and characteristics, but from a clear net source, and of legal nature, we compiled a corpus of eBay pages.",
"eBay is one of the largest hosting sites for retail sellers of various goods.",
"Our corpus contains 118 item descriptions, each consisting of more than one sentence.",
"Item descriptions vary in price, item sold and seller.",
"The descriptions were selected by searching eBay for drug related terms, 3 and selecting search patterns to avoid over-repetition.",
"For example, where many sell the same product, only one example was added to the corpus.",
"Search queries also included filtering for price, so that each query resulted with different items.",
"Either because of advertisement strategies or the geographical dispersion of the sellers, the eBay corpus contains formal as well as informal language, and some item descriptions contain abbreviations and slang.",
"Importantly, eBay websites are assumed to conduct legal activity-even when discussing drug-related material, we find it is never the sale of illegal drugs but rather merchandise, tools, or otherwise related content.",
"Cleaning.",
"As preprocessing for all experiments, we apply some cleaning to the text of web pages in our corpora.",
"HTML markup is already removed in the original datasets, but much non-linguistic content remains, such as buttons, encryption keys, metadata and URLs.",
"We remove such text from the web pages, and join paragraphs to single lines (as newlines are sometimes present in the original dataset for display purposes only).",
"We then remove any duplicate paragraphs, where paragraphs are considered identical if they share all but numbers (to avoid an over-representation of some remaining surrounding text from the websites, e.g.",
"\"Showing all 9 results\").",
"Domain Differences As pointed out by Plank (2011) , there is no common ground as to what constitutes a domain.",
"Domain differences are attributed in some works to differences in vocabulary (Blitzer et al., 2006) and in other works to differences in style, genre and medium (McClosky, 2010) .",
"While here we adopt an existing classification, based on the DUTA-10K corpus, we show in which way and to what extent it translates to distinct properties of the texts.",
"This question bears on the possibility of distinguishing between legal and illegal drug-related websites based on their text alone (i.e., without recourse to additional information, such as meta-data or network structure).",
"We examine two types of domain differences between legal and illegal texts: vocabulary differences and named entities.",
"Vocabulary Differences To quantify the domain differences between texts from legal and illegal texts, we compute the frequency distribution of words in the eBay corpus, the legal and illegal drugs Onion corpora, and the entire Onion drug section (All Onion).",
"Since any two sets of texts are bound to show some disparity between them, we compare the differences between domains to a control setting, where we randomly split each examined corpus into two halves, and compute the frequency distribution of each of them.",
"The inner consistency of each corpus, defined as the similarity of distributions between the two halves, serves as a reference point for the similarity between domains.",
"We refer to this measure as \"self-distance\".",
"Following Plank and van Noord (2011) , we compute the Jensen-Shannon divergence and Variational distance (also known as L1 or Manhattan) as the comparison measures between the word frequency histograms.",
"The self-distance within each of the corpora lies between 0.40 and 0.45 by the Jensen-Shannon divergence, but the distance between each pair is 0.60 to 0.65, with the three approximately forming an equilateral triangle in the space of word distributions.",
"Similar results are obtained using Variational distance, and are omitted for brevity.",
"These results suggest that rather than regarding all drug-related Onion texts as one domain, with legal and illegal texts as sub-domains, they should be treated as distinct domains.",
"Therefore, using Onion data to characterize the differences between illegal and legal linguistic attributes is sensible.",
"In fact, it is more sensible than comparing Illegal Onion to eBay text, as there the legal/illegal distinction may be confounded by the differences between eBay and Onion data.",
"Differences in Named Entities In order to analyze the difference in the distribution of named entities between the domains, we used a Wikification technique (Bunescu and Paşca, 2006) , i.e., linking entities to their corresponding article in Wikipedia.",
"Using spaCy's 4 named entity recognition, we first extract all named entity mentions from all the corpora.",
"5 We then search for relevant Wikipedia entries for each named entity using the DBpedia Ontology API (Daiber et al., 2013) .",
"For each domain we compute the total number of named entities and the percentage with corresponding Wikipedia articles.",
"The results were obtained by averaging the percentage of wikifiable named entities in each site per domain.",
"We also report the standard error for each average.",
"According to our results (Table 2), the Wikification success ratios of eBay and Illegal Onion named entities are comparable and relatively low.",
"(Footnote 4: https://spacy.io. Footnote 5: We use all named entity types provided by spaCy, and not only \"Product\", to get a broader perspective on the differences between the domains in terms of their named entities.",
"For example, the named entity \"Peru\" (of type \"Geopolitical Entity\") appears multiple times in Onion sites and is meant to imply the quality of a drug.)",
"However, sites selling legal drugs on Onion have a much higher Wikification percentage.",
"Presumably the named entities in Onion sites selling legal drugs are more easily found in public databases such as Wikipedia because they are mainly well-known names for legal pharmaceuticals.",
"However, in both Illegal Onion and eBay sites, the list of named entities includes many slang terms for illicit drugs and paraphernalia.",
"These slang terms are usually not well known by the general public, and are therefore less likely to be covered by Wikipedia and similar public databases.",
"In addition to the differences in Wikification ratios between the domains, it seems spaCy had trouble correctly identifying named entities in both Onion and eBay sites, possibly due to the common use of informal language and drugrelated jargon.",
"Eyeballing the results, there were a fair number of false positives (words and phrases that were found by spaCy but were not actually named entities), especially in Illegal Onion sites.",
"In particular, slang terms for drugs, as well as abbreviated drug terms, for example \"kush\" or \"GBL\", were being falsely picked up by spaCy.",
"6 To summarize, results suggest both that (1) legal and illegal texts are different in terms of their named entities and their coverage in Wikipedia, as well as that (2) standard databases and standard NLP tools for named entity recognition (and potentially other text understanding tasks), require considerable adaptation to be fully functional on text related to illegal activity.",
"Classification Experiments Here we detail our experiments in classifying text from different legal and illegal domains using various methods, to find the most important linguistic features distinguishing between the domains.",
"Another goal of the classification task is to confirm our finding that the domains are distinguishable.",
"Experimental setup.",
"We split each subset among {eBay, Legal Onion, Illegal Onion} into training, validation and test.",
"We select 456 training paragraphs, 57 validation paragraphs and 58 test paragraphs for each category (approximately a 80%/10%/10% split), randomly downsampling larger categories for an even division of labels.",
"Model.",
"To classify paragraphs into categories, we experiment with five classifiers: • NB (Naive Bayes) classifier with binary bagof-words features, i.e., indicator feature for each word.",
"This simple classifier features frequently in work on text classification in the Darknet.",
"• SVM (support vector machine) classifier with an RBF kernel, also with BoW features that count the number of words of each type.",
"• BoE (bag-of-embeddings): each word is represented with its 100-dimensional GloVe vector (Pennington et al., 2014) .",
"BoE sum (BoE average ) sums (averages) the embeddings for all words in the paragraph to a single vector, and applies a 100-dimensional fully-connected layer with ReLU nonlinearity and dropout p = 0.2.",
"The word vectors are not updated during training.",
"Vectors for words not found in GloVe are set randomly ∼ N (µ GloVe , σ 2 GloVe ).",
"• seq2vec: same as BoE, but instead of averaging word vectors, we apply a single-layer 100-dimensional BiLSTM to the word vectors, and take the concatenated final hidden vectors from the forward and backward part as the input to a fully-connected layer (same hyper-parameters as above).",
"• attention: we replace the word representations with contextualized pre-trained representations from ELMo .",
"We then apply a self-attentive classification network (McCann et al., 2017) over the contextualized representations.",
"This architecture has proved very effective for classification in recent work (Tutek and Šnajder, 2018; Shen et al., 2018) .",
"For the NB classifier we use BernoulliNB from scikit-learn 7 with α = 1, and for the SVM classifier we use SVC, also from scikit-learn, with γ = scale and tolerance=10 −5 .",
"We use the AllenNLP library 8 to implement the neural network classifiers.",
"7 https://scikit-learn.org 8 https://allennlp.org Data manipulation.",
"In order to isolate what factors contribute to the classifiers' performance, we experiment with four manipulations to the input data (in training, validation and testing).",
"Specifically, we examine the impact of variations in the content words, function words and shallow syntactic structure (represented through POS tags).",
"For this purpose, we consider content words as words whose universal part-of-speech according to spaCy is one of the following: Results when applying these manipulations are compared to the full condition, where all words are available.",
"Settings.",
"We experiment with two settings, classifying paragraphs from different domains: • Training and testing on eBay pages vs. Legal drug-related Onion pages, as a control experiment to identify whether Onion pages differ from clear net pages.",
"• Training and testing on Legal Onion vs.",
"Illegal Onion drugs-related pages, to identify the difference in language between legal and illegal activity on Onion drug-related websites.",
"Results The accuracy scores for the different classifiers and settings are reported in Table 3 .",
"confirmed by the drop in accuracy when content words are removed.",
"However, in this setting (drop cont.",
"), non-trivial performance is still obtained by the SVM classifier, suggesting that the domains are distinguishable (albeit to a lesser extent) based on the function word distribution alone.",
"Surprisingly, the more sophisticated neural classifiers perform worse than Naive Bayes.",
"This is despite using pre-trained word embeddings, and architectures that have proven beneficial for text classification.",
"It is likely that this is due to the small size of the training data, as well as the specialized vocabulary found in this domain, which is unlikely to be supported well by the pre-trained embeddings (see §4.2).",
"Legal vs. illegal drugs.",
"Classifying legal and illegal pages within the drugs domain on Onion proved to be a more difficult task.",
"However, where content words are replaced with their POS tags, the SVM classifier distinguishes between legal and illegal texts with quite a high accuracy (70.7% on a balanced test set).",
"This suggests that the syntactic structure is sufficiently different between the domains, so as to make them distinguishable in terms of their distribution of grammatical categories.",
"Illegality Detection Across Domains To investigate illegality detection across different domains, we perform classification experiments on the \"forums\" category that is also separated into legal and illegal sub-categories in DUTA-10K.",
"The forums contain user-written text in various topics.",
"Legal forums often discuss web design and other technical and non-technical activity on the internet, while illegal ones involve discussions about cyber-crimes and guides on how to commit them, as well as narcotics, racism and other criminal activities.",
"As this domain contains usergenerated content, it is more varied and noisy.",
"Experimental setup We use the cleaning process described in Section 3 and data splitting described in Section 5, with the same number of paragraphs.",
"We experiment with two settings: • Training and testing on Onion legal vs. illegal forums, to evaluate whether the insights observed in the drugs domain generalize to user-generated content.",
"• Training on Onion legal vs. illegal drugsrelated pages, and testing on Onion legal vs. illegal forums.",
"This cross-domain evaluation reveals whether the distinctions learned on the drugs domain generalize directly to the forums domain.",
"Results Accuracy scores are reported in Table 4 .",
"Legal vs. illegal forums.",
"Results when training and testing on forums data are much worse for the neural-based systems, probably due to the much noisier and more varied nature of the data.",
"However, the SVM model achieves an accuracy of 85.3% in the full setting.",
"Good performance is presented by this model even in the cases where the content words are dropped (drop.",
"cont.)",
"or replaced by part-of-speech tags (pos cont.",
"), underscoring the distinguishability of legal in illegal content based on shallow syntactic structure in this domain as well.",
"Cross-domain evaluation.",
"Surprisingly, training on drugs data and evaluating on forums performs much better than in the in-domain setting for four out of five classifiers.",
"This implies that while the forums data is noisy, it can be accurately classified into legal and illegal content when training on the cleaner drugs data.",
"This also shows that illegal texts in Tor share common properties regardless of topical category.",
"The much lower results obtained by the models where content words are dropped (drop cont.)",
"or converted to POS tags (pos cont.",
"), namely less than 70% as opposed to 89.7% when function words are dropped, suggest that some of these properties are lexical.",
"Discussion As shown in Section 4, the Legal Onion and Illegal Onion domains are quite distant in terms of word distribution and named entity Wikification.",
"Moreover, named entity recognition and Wikification work less well for the illegal domain, and so do state-of-the-art neural text classification architectures (Section 5), which present inferior re- sults to simple bag-of-words model.",
"This is likely a result of the different vocabulary and syntax of text from Onion domain, compared to standard domains used for training NLP models and pretrained word embeddings.",
"This conclusion has practical implications: to effectively process text in Onion, considerable domain adaptation should be performed, and effort should be made to annotate data and extend standard knowledge bases to cover this idiosyncratic domain.",
"Another conclusion from the classification experiments is that the Onion Legal and Illegal Onion texts are harder to distinguish than eBay and Legal Onion, meaning that deciding on domain boundaries should consider syntactic structure, and not only lexical differences.",
"Analysis of texts from the datasets.",
"Looking at specific sentences (Figure 1 ) reveals that Legal Onion and Illegal Onion are easy to distinguish based on the identity of certain words, e.g., terms for legal and illegal drugs, respectively.",
"Thus looking at the word forms is already a good solu-tion for tackling this classification problem, which gives further insight as to why modern text classification (e.g., neural networks) do not present an advantage in terms of accuracy.",
"Analysis of manipulated texts.",
"Given that replacing content words with their POS tags substantially lowers performance for classification of legal vs illegal drug-related texts (see \"pos cont.\"",
"in Section 5), we conclude that the distribution of parts of speech alone is not as strong a signal as the word forms for distinguishing between the domains.",
"However, the SVM model does manage to distinguish between the texts even in this setting.",
"Indeed, Figure 2 demonstrates that there are easily identifiable patterns distinguishing between the domains, but that a bag-of-words approach may not be sufficiently expressive to identify them.",
"Analysis of learned feature weights.",
"As the Naive Bayes classifier was the most successful at distinguishing legal from illegal texts in the full setting (without input manipulation), we may conclude that the very occurrence of certain words provides a strong indication that an instance is taken from one class or the other.",
"Table 5 shows the most indicative features learned by the Naive Bayes classifier for the Legal Onion vs.",
"Illegal Onion classification in this setting.",
"Interestingly, many strong features are function words, providing another indication of the different distribution of function words in the two domains.",
"Conclusion In this paper we identified several distinguishing factors between legal and illegal texts, taking a variety of approaches, predictive (text classification), application-based (named entity Wikification), as well as an approach based on raw statistics.",
"Our results revealed that legal and illegal texts on the Darknet are not only distinguishable in terms of their words, but also in terms of their shallow syntactic structure, manifested in their POS tag and function word distributions.",
"Distinguishing features between legal and illegal texts are consistent enough between domains, so that a classifier trained on drug-related websites can be straightforwardly ported to classify legal and illegal texts from another Darknet domain (forums).",
"Our results also show that in terms of vocabulary, legal texts and illegal texts are as distant from each other, as from comparable texts from eBay.",
"We conclude from this investigation that Onion pages provide an attractive testbed for studying distinguishing factors between the text of legal and illegal webpages, as they present challenges to offthe-shelf NLP tools, but at the same time have sufficient self-consistency to allow studies of the linguistic signals that separate these classes."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"4.2",
"5",
"5.1",
"6",
"6.1",
"6.2",
"7",
"8"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Datasets Used",
"Domain Differences",
"Vocabulary Differences",
"Differences in Named Entities",
"Classification Experiments",
"Results",
"Illegality Detection Across Domains",
"Experimental setup",
"Results",
"Discussion",
"Conclusion"
]
}
|
GEM-SciDuet-train-54#paper-1096#slide-0
|
Introduction Darknet
|
Used interchangeably in this work:
Tor network (Tor: an encrypted browser)
Onion network (.onion top-level domain)
Hosts: onion services (hidden services).
|
Used interchangeably in this work:
Tor network (Tor: an encrypted browser)
Onion network (.onion top-level domain)
Hosts: onion services (hidden services).
|
[] |
GEM-SciDuet-train-54#paper-1096#slide-2
|
1096
|
The Language of Legal and Illegal Activity on the Darknet
|
The non-indexed parts of the Internet (the Darknet) have become a haven for both legal and illegal anonymous activity. Given the magnitude of these networks, scalably monitoring their activity necessarily relies on automated tools, and notably on NLP tools. However, little is known about what characteristics texts communicated through the Darknet have, and how well off-the-shelf NLP tools do on this domain. This paper tackles this gap and performs an in-depth investigation of the characteristics of legal and illegal text in the Darknet, comparing it to a clear net website with similar content as a control condition. Taking drug-related websites as a test case, we find that texts for selling legal and illegal drugs have several linguistic characteristics that distinguish them from one another, as well as from the control condition, among them the distribution of POS tags, and the coverage of their named entities in Wikipedia. 1
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189
],
"paper_content_text": [
"Introduction The term \"Darknet\" refers to the subset of Internet sites and pages that are not indexed by search engines.",
"The Darknet is often associated with the \".onion\" top-level domain, whose websites are referred to as \"Onion sites\", and are reachable via the Tor network anonymously.",
"Under the cloak of anonymity, the Darknet harbors much illegal activity (Moore and Rid, 2016) .",
"Applying NLP tools to text from the Darknet is thus important for effective law enforcement and intelligence.",
"However, little is known about the characteristics of the language used in the Darknet, and specifically on what distinguishes text on websites that conduct legal and illegal activity.",
"In * Equal contribution 1 Our code can be found in https://github.com/ huji-nlp/cyber.",
"Our data is available upon request.",
"fact, the only work we are aware of that classified Darknet texts into legal and illegal activity is Avarikioti et al.",
"(2018) , but they too did not investigate in what ways these two classes differ.",
"This paper addresses this gap, and studies the distinguishing features between legal and illegal texts in Onion sites, taking sites that advertise drugs as a test case.",
"We compare our results to a control condition of texts from eBay 2 pages that advertise products corresponding to drug keywords.",
"We find a number of distinguishing features.",
"First, we confirm the results of Avarikioti et al.",
"(2018) , that text from legal and illegal pages (henceforth, legal and illegal texts) can be distinguished based on the identity of the content words (bag-of-words) in about 90% accuracy over a balanced sample.",
"Second, we find that the distribution of POS tags in the documents is a strong cue for distinguishing between the classes (about 71% accuracy).",
"This indicates that the two classes are different in terms of their syntactic structure.",
"Third, we find that legal and illegal texts are roughly as distinguishable from one another as legal texts and eBay pages are (both in terms of their words and their POS tags).",
"The latter point suggests that legal and illegal texts can be considered distinct domains, which explains why they can be automatically classified, but also implies that applying NLP tools to Darknet texts is likely to face the obstacles of domain adaptation.",
"Indeed, we show that named entities in illegal pages are covered less well by Wikipedia, i.e., Wikification works less well on them.",
"This suggests that for high-performance text understanding, specialized knowledge bases and tools may be needed for processing texts from the Darknet.",
"By experimenting on a different domain in Tor (user-generated content), we show that the legal/illegal distinction generalizes across domains.",
"After discussing previous works in Section 2, we detail the datasets used in Section 3.",
"Differences in the vocabulary and named entities between the classes are analyzed in Section 4, before the presentation of the classification experiments (Section 5).",
"Section 6 presents additional experiments, which explore cross-domain classification.",
"We further analyze and discuss the findings in Section 7.",
"Related Work The detection of illegal activities on the Web is sometimes derived from a more general topic classification.",
"For example, Biryukov et al.",
"(2014) classified the content of Tor hidden services into 18 topical categories, only some of which correlate with illegal activity.",
"Graczyk and Kinningham (2015) combined unsupervised feature selection and an SVM classifier for the classification of drug sales in an anonymous marketplace.",
"While these works classified Tor texts into classes, they did not directly address the legal/illegal distinction.",
"Some works directly addressed a specific type of illegality and a particular communication context.",
"Morris and Hirst (2012) used an SVM classifier to identify sexual predators in chatting message systems.",
"The model includes both lexical features, including emoticons, and behavioral features that correspond to conversational patterns.",
"Another example is the detection of pedophile activity in peer-to-peer networks (Latapy et al., 2013) , where a predefined list of keywords was used to detect child-pornography queries.",
"Besides lexical features, we here consider other general linguistic properties, such as syntactic structure.",
"Al Nabki et al.",
"(2017) presented DUTA (Darknet Usage Text Addresses), the first publicly available Darknet dataset, together with a manual classification into topical categories and subcategories.",
"For some of the categories, legal and illegal activities are distinguished.",
"However, the automatic classification presented in their work focuses on the distinction between different classes of illegal activity, without addressing the distinction between legal and illegal ones, which is the subject of the present paper.",
"Al Nabki et al.",
"(2019) extended the dataset to form DUTA-10K, which we use here.",
"Their results show that 20% of the hidden services correspond to \"suspicious\" activities.",
"The analysis was conducted using the text classifier presented in Al Nabki et al.",
"(2017) and manual verification.",
"Recently, Avarikioti et al.",
"(2018) presented another topic classification of text from Tor together with a first classification into legal and illegal activities.",
"The experiments were performed on a newly crawled corpus obtained by recursive search.",
"The legal/illegal classification was done using an SVM classifier in an active learning setting with bag-of-words features.",
"Legality was assessed in a conservative way where illegality is assigned if the purpose of the content is an obviously illegal action, even if the content might be technically legal.",
"They found that a linear kernel worked best and reported an F1 score of 85% and an accuracy of 89%.",
"Using the dataset of Al Nabki et al.",
"(2019) , and focusing on specific topical categories, we here confirm the importance of content words in the classification, and explore the linguistic dimensions supporting classification into legal and illegal texts.",
"Datasets Used Onion corpus.",
"We experiment with data from Darknet websites containing legal and illegal activity, all from the DUTA-10K corpus (Al Nabki et al., 2019) .",
"We selected the \"drugs\" sub-domain as a test case, as it is a large domain in the corpus, that has a \"legal\" and \"illegal\" sub-categories, and where the distinction between them can be reliably made.",
"These websites advertise and sell drugs, often to international customers.",
"While legal websites often sell pharmaceuticals, illegal ones are often related to substance abuse.",
"These pages are directed by sellers to their customers.",
"eBay corpus.",
"As an additional dataset of similar size and characteristics, but from a clear net source, and of legal nature, we compiled a corpus of eBay pages.",
"eBay is one of the largest hosting sites for retail sellers of various goods.",
"Our corpus contains 118 item descriptions, each consisting of more than one sentence.",
"Item descriptions vary in price, item sold and seller.",
"The descriptions were selected by searching eBay for drug related terms, 3 and selecting search patterns to avoid over-repetition.",
"For example, where many sell the same product, only one example was added to the corpus.",
"Search queries also included filtering for price, so that each query resulted with different items.",
"Either because of advertisement strategies or the geographical dispersion of the sellers, the eBay corpus contains formal as well as informal language, and some item descriptions contain abbreviations and slang.",
"Importantly, eBay websites are assumed to conduct legal activity-even when discussing drug-related material, we find it is never the sale of illegal drugs but rather merchandise, tools, or otherwise related content.",
"Cleaning.",
"As preprocessing for all experiments, we apply some cleaning to the text of web pages in our corpora.",
"HTML markup is already removed in the original datasets, but much non-linguistic content remains, such as buttons, encryption keys, metadata and URLs.",
"We remove such text from the web pages, and join paragraphs to single lines (as newlines are sometimes present in the original dataset for display purposes only).",
"We then remove any duplicate paragraphs, where paragraphs are considered identical if they share all but numbers (to avoid an over-representation of some remaining surrounding text from the websites, e.g.",
"\"Showing all 9 results\").",
"Domain Differences As pointed out by Plank (2011) , there is no common ground as to what constitutes a domain.",
"Domain differences are attributed in some works to differences in vocabulary (Blitzer et al., 2006) and in other works to differences in style, genre and medium (McClosky, 2010) .",
"While here we adopt an existing classification, based on the DUTA-10K corpus, we show in which way and to what extent it translates to distinct properties of the texts.",
"This question bears on the possibility of distinguishing between legal and illegal drug-related websites based on their text alone (i.e., without recourse to additional information, such as meta-data or network structure).",
"We examine two types of domain differences between legal and illegal texts: vocabulary differences and named entities.",
"Vocabulary Differences To quantify the domain differences between texts from legal and illegal texts, we compute the frequency distribution of words in the eBay corpus, the legal and illegal drugs Onion corpora, and the entire Onion drug section (All Onion).",
"Since any two sets of texts are bound to show some disparity between them, we compare the differences between domains to a control setting, where we randomly split each examined corpus into two halves, and compute the frequency distribution of each of them.",
"The inner consistency of each corpus, defined as the similarity of distributions between the two halves, serves as a reference point for the similarity between domains.",
"We refer to this measure as \"self-distance\".",
"Following Plank and van Noord (2011) , we compute the Jensen-Shannon divergence and Variational distance (also known as L1 or Manhattan) as the comparison measures between the word frequency histograms.",
"pora lies between 0.40 to 0.45 by the Jensen-Shannon divergence, but the distance between each pair is 0.60 to 0.65, with the three approximately forming an equilateral triangle in the space of word distributions.",
"Similar results are obtained using Variational distance, and are omitted for brevity.",
"These results suggest that rather than regarding all drug-related Onion texts as one domain, with legal and illegal texts as sub-domains, they should be treated as distinct domains.",
"Therefore, using Onion data to characterize the differences between illegal and legal linguistic attributes is sensible.",
"In fact, it is more sensible than comparing Illegal Onion to eBay text, as there the legal/illegal distinction may be confounded by the differences between eBay and Onion data.",
"Differences in Named Entities In order to analyze the difference in the distribution of named entities between the domains, we used a Wikification technique (Bunescu and Paşca, 2006) , i.e., linking entities to their corresponding article in Wikipedia.",
"Using spaCy's 4 named entity recognition, we first extract all named entity mentions from all the corpora.",
"5 We then search for relevant Wikipedia entries for each named entity using the DBpedia Ontology API (Daiber et al., 2013) .",
"For each domain we compute the total number of named entities and the percentage with corresponding Wikipedia articles.",
"The results were obtained by averaging the percentage of wikifiable named entities in each site per domain.",
"We also report the standard error for each average.",
"According to our results (Table 2) , the Wikification success ratios of eBay and Illegal 4 https://spacy.io 5 We use all named entity types provided by spaCy (and not only \"Product\") to get a broader perspective on the differences between the domains in terms of their named entities.",
"For example, the named entity \"Peru\" (of type \"Geopolitical Entity\") appears multiple times in Onion sites and is meant to imply the quality of a drug.",
"Onion named entities is comparable and relatively low.",
"However, sites selling legal drugs on Onion have a much higher Wikification percentage.",
"Presumably the named entities in Onion sites selling legal drugs are more easily found in public databases such as Wikipedia because they are mainly well-known names for legal pharmaceuticals.",
"However, in both Illegal Onion and eBay sites, the list of named entities includes many slang terms for illicit drugs and paraphernalia.",
"These slang terms are usually not well known by the general public, and are therefore less likely to be covered by Wikipedia and similar public databases.",
"In addition to the differences in Wikification ratios between the domains, it seems spaCy had trouble correctly identifying named entities in both Onion and eBay sites, possibly due to the common use of informal language and drugrelated jargon.",
"Eyeballing the results, there were a fair number of false positives (words and phrases that were found by spaCy but were not actually named entities), especially in Illegal Onion sites.",
"In particular, slang terms for drugs, as well as abbreviated drug terms, for example \"kush\" or \"GBL\", were being falsely picked up by spaCy.",
"6 To summarize, results suggest both that (1) legal and illegal texts are different in terms of their named entities and their coverage in Wikipedia, as well as that (2) standard databases and standard NLP tools for named entity recognition (and potentially other text understanding tasks), require considerable adaptation to be fully functional on text related to illegal activity.",
"Classification Experiments Here we detail our experiments in classifying text from different legal and illegal domains using various methods, to find the most important linguistic features distinguishing between the domains.",
"Another goal of the classification task is to confirm our finding that the domains are distinguishable.",
"Experimental setup.",
"We split each subset among {eBay, Legal Onion, Illegal Onion} into training, validation and test.",
"We select 456 training paragraphs, 57 validation paragraphs and 58 test paragraphs for each category (approximately a 80%/10%/10% split), randomly downsampling larger categories for an even division of labels.",
"Model.",
"To classify paragraphs into categories, we experiment with five classifiers: • NB (Naive Bayes) classifier with binary bagof-words features, i.e., indicator feature for each word.",
"This simple classifier features frequently in work on text classification in the Darknet.",
"• SVM (support vector machine) classifier with an RBF kernel, also with BoW features that count the number of words of each type.",
"• BoE (bag-of-embeddings): each word is represented with its 100-dimensional GloVe vector (Pennington et al., 2014) .",
"BoE sum (BoE average ) sums (averages) the embeddings for all words in the paragraph to a single vector, and applies a 100-dimensional fully-connected layer with ReLU nonlinearity and dropout p = 0.2.",
"The word vectors are not updated during training.",
"Vectors for words not found in GloVe are set randomly ∼ N (µ GloVe , σ 2 GloVe ).",
"• seq2vec: same as BoE, but instead of averaging word vectors, we apply a single-layer 100-dimensional BiLSTM to the word vectors, and take the concatenated final hidden vectors from the forward and backward part as the input to a fully-connected layer (same hyper-parameters as above).",
"• attention: we replace the word representations with contextualized pre-trained representations from ELMo .",
"We then apply a self-attentive classification network (McCann et al., 2017) over the contextualized representations.",
"This architecture has proved very effective for classification in recent work (Tutek and Šnajder, 2018; Shen et al., 2018) .",
"For the NB classifier we use BernoulliNB from scikit-learn 7 with α = 1, and for the SVM classifier we use SVC, also from scikit-learn, with γ = scale and tolerance = 10^−5.",
"We use the AllenNLP library 8 to implement the neural network classifiers.",
"7 https://scikit-learn.org 8 https://allennlp.org Data manipulation.",
"In order to isolate what factors contribute to the classifiers' performance, we experiment with four manipulations to the input data (in training, validation and testing).",
"Specifically, we examine the impact of variations in the content words, function words and shallow syntactic structure (represented through POS tags).",
"For this purpose, we consider content words as words whose universal part-of-speech according to spaCy is one of the following: Results when applying these manipulations are compared to the full condition, where all words are available.",
"Settings.",
"We experiment with two settings, classifying paragraphs from different domains: • Training and testing on eBay pages vs. Legal drug-related Onion pages, as a control experiment to identify whether Onion pages differ from clear net pages.",
"• Training and testing on Legal Onion vs.",
"Illegal Onion drugs-related pages, to identify the difference in language between legal and illegal activity on Onion drug-related websites.",
"Results The accuracy scores for the different classifiers and settings are reported in Table 3 .",
"confirmed by the drop in accuracy when content words are removed.",
"However, in this setting (drop cont.",
"), non-trivial performance is still obtained by the SVM classifier, suggesting that the domains are distinguishable (albeit to a lesser extent) based on the function word distribution alone.",
"Surprisingly, the more sophisticated neural classifiers perform worse than Naive Bayes.",
"This is despite using pre-trained word embeddings, and architectures that have proven beneficial for text classification.",
"It is likely that this is due to the small size of the training data, as well as the specialized vocabulary found in this domain, which is unlikely to be supported well by the pre-trained embeddings (see §4.2).",
"Legal vs. illegal drugs.",
"Classifying legal and illegal pages within the drugs domain on Onion proved to be a more difficult task.",
"However, where content words are replaced with their POS tags, the SVM classifier distinguishes between legal and illegal texts with quite a high accuracy (70.7% on a balanced test set).",
"This suggests that the syntactic structure is sufficiently different between the domains, so as to make them distinguishable in terms of their distribution of grammatical categories.",
"Illegality Detection Across Domains To investigate illegality detection across different domains, we perform classification experiments on the \"forums\" category that is also separated into legal and illegal sub-categories in DUTA-10K.",
"The forums contain user-written text in various topics.",
"Legal forums often discuss web design and other technical and non-technical activity on the internet, while illegal ones involve discussions about cyber-crimes and guides on how to commit them, as well as narcotics, racism and other criminal activities.",
"As this domain contains usergenerated content, it is more varied and noisy.",
"Experimental setup We use the cleaning process described in Section 3 and data splitting described in Section 5, with the same number of paragraphs.",
"We experiment with two settings: • Training and testing on Onion legal vs. illegal forums, to evaluate whether the insights observed in the drugs domain generalize to user-generated content.",
"• Training on Onion legal vs. illegal drugsrelated pages, and testing on Onion legal vs. illegal forums.",
"This cross-domain evaluation reveals whether the distinctions learned on the drugs domain generalize directly to the forums domain.",
"Results Accuracy scores are reported in Table 4 .",
"Legal vs. illegal forums.",
"Results when training and testing on forums data are much worse for the neural-based systems, probably due to the much noisier and more varied nature of the data.",
"However, the SVM model achieves an accuracy of 85.3% in the full setting.",
"Good performance is presented by this model even in the cases where the content words are dropped (drop.",
"cont.)",
"or replaced by part-of-speech tags (pos cont.",
"), underscoring the distinguishability of legal and illegal content based on shallow syntactic structure in this domain as well.",
"Cross-domain evaluation.",
"Surprisingly, training on drugs data and evaluating on forums performs much better than in the in-domain setting for four out of five classifiers.",
"This implies that while the forums data is noisy, it can be accurately classified into legal and illegal content when training on the cleaner drugs data.",
"This also shows that illegal texts in Tor share common properties regardless of topical category.",
"The much lower results obtained by the models where content words are dropped (drop cont.)",
"or converted to POS tags (pos cont.",
"), namely less than 70% as opposed to 89.7% when function words are dropped, suggest that some of these properties are lexical.",
"Discussion As shown in Section 4, the Legal Onion and Illegal Onion domains are quite distant in terms of word distribution and named entity Wikification.",
"Moreover, named entity recognition and Wikification work less well for the illegal domain, and so do state-of-the-art neural text classification architectures (Section 5), which present inferior results to a simple bag-of-words model.",
"This is likely a result of the different vocabulary and syntax of text from Onion domain, compared to standard domains used for training NLP models and pretrained word embeddings.",
"This conclusion has practical implications: to effectively process text in Onion, considerable domain adaptation should be performed, and effort should be made to annotate data and extend standard knowledge bases to cover this idiosyncratic domain.",
"Another conclusion from the classification experiments is that the Onion Legal and Illegal Onion texts are harder to distinguish than eBay and Legal Onion, meaning that deciding on domain boundaries should consider syntactic structure, and not only lexical differences.",
"Analysis of texts from the datasets.",
"Looking at specific sentences (Figure 1 ) reveals that Legal Onion and Illegal Onion are easy to distinguish based on the identity of certain words, e.g., terms for legal and illegal drugs, respectively.",
"Thus looking at the word forms is already a good solu-tion for tackling this classification problem, which gives further insight as to why modern text classification (e.g., neural networks) do not present an advantage in terms of accuracy.",
"Analysis of manipulated texts.",
"Given that replacing content words with their POS tags substantially lowers performance for classification of legal vs illegal drug-related texts (see \"pos cont.\"",
"in Section 5), we conclude that the distribution of parts of speech alone is not as strong a signal as the word forms for distinguishing between the domains.",
"However, the SVM model does manage to distinguish between the texts even in this setting.",
"Indeed, Figure 2 demonstrates that there are easily identifiable patterns distinguishing between the domains, but that a bag-of-words approach may not be sufficiently expressive to identify them.",
"Analysis of learned feature weights.",
"As the Naive Bayes classifier was the most successful at distinguishing legal from illegal texts in the full setting (without input manipulation), we may conclude that the very occurrence of certain words provides a strong indication that an instance is taken from one class or the other.",
"Table 5 shows the most indicative features learned by the Naive Bayes classifier for the Legal Onion vs.",
"Illegal Onion classification in this setting.",
"Interestingly, many strong features are function words, providing another indication of the different distribution of function words in the two domains.",
"Conclusion In this paper we identified several distinguishing factors between legal and illegal texts, taking a variety of approaches, predictive (text classification), application-based (named entity Wikification), as well as an approach based on raw statistics.",
"Our results revealed that legal and illegal texts on the Darknet are not only distinguishable in terms of their words, but also in terms of their shallow syntactic structure, manifested in their POS tag and function word distributions.",
"Distinguishing features between legal and illegal texts are consistent enough between domains, so that a classifier trained on drug-related websites can be straightforwardly ported to classify legal and illegal texts from another Darknet domain (forums).",
"Our results also show that in terms of vocabulary, legal texts and illegal texts are as distant from each other, as from comparable texts from eBay.",
"We conclude from this investigation that Onion pages provide an attractive testbed for studying distinguishing factors between the text of legal and illegal webpages, as they present challenges to offthe-shelf NLP tools, but at the same time have sufficient self-consistency to allow studies of the linguistic signals that separate these classes."
]
}
|
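The record above describes a Naive Bayes classifier over binary bag-of-words features with Laplace smoothing α = 1 (scikit-learn's BernoulliNB). A minimal pure-Python sketch of that setup — not the paper's actual implementation, and the toy documents and labels below are invented for illustration:

```python
import math

def train_bernoulli_nb(docs, labels, alpha=1.0):
    """Fit a Bernoulli Naive Bayes over binary bag-of-words features.

    alpha is the Laplace smoothing constant (alpha = 1, matching the
    BernoulliNB setting described in the paper)."""
    vocab = sorted({w for doc in docs for w in doc})
    classes = sorted(set(labels))
    priors, cond = {}, {}
    for c in classes:
        class_docs = [set(d) for d, l in zip(docs, labels) if l == c]
        n = len(class_docs)
        priors[c] = n / len(docs)
        # Smoothed probability that word w occurs in a class-c document.
        cond[c] = {w: (sum(w in d for d in class_docs) + alpha) / (n + 2 * alpha)
                   for w in vocab}
    return vocab, priors, cond

def predict(doc, vocab, priors, cond):
    """Return the most likely class for a document (a list of tokens)."""
    present = set(doc)
    best, best_lp = None, -math.inf
    for c in priors:
        lp = math.log(priors[c])
        for w in vocab:
            p = cond[c][w]
            lp += math.log(p if w in present else 1.0 - p)
        if lp > best_lp:
            best, best_lp = c, lp
    return best

# Hypothetical toy data, loosely mimicking legal vs. illegal drug pages.
docs = [["buy", "aspirin", "online"], ["aspirin", "pharmacy", "prices"],
        ["stealth", "kush", "shipping"], ["cheap", "kush", "stealth"]]
labels = ["legal", "legal", "illegal", "illegal"]
model = train_bernoulli_nb(docs, labels, alpha=1.0)
```

On this toy data, `predict(["aspirin", "online"], *model)` returns `"legal"`: the indicator features alone separate the two classes, which is the behavior the paper reports for its binary bag-of-words NB baseline.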
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"4.2",
"5",
"5.1",
"6",
"6.1",
"6.2",
"7",
"8"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Datasets Used",
"Domain Differences",
"Vocabulary Differences",
"Differences in Named Entities",
"Classification Experiments",
"Results",
"Illegality Detection Across Domains",
"Experimental setup",
"Results",
"Discussion",
"Conclusion"
]
}
|
GEM-SciDuet-train-54#paper-1096#slide-2
|
Introduction Drugs
|
Finest organic cannabis grown by proffessional growers in the netherlands.
We double seal all packages for odor less delivery.
Shipping within 24 hours!
EUR = X Buy now
5g Banana Kush 45 EUR = 0.075 X Buy now
|
Finest organic cannabis grown by proffessional growers in the netherlands.
We double seal all packages for odor less delivery.
Shipping within 24 hours!
EUR = X Buy now
5g Banana Kush 45 EUR = 0.075 X Buy now
|
[] |
GEM-SciDuet-train-54#paper-1096#slide-3
|
1096
|
The Language of Legal and Illegal Activity on the Darknet
|
The non-indexed parts of the Internet (the Darknet) have become a haven for both legal and illegal anonymous activity. Given the magnitude of these networks, scalably monitoring their activity necessarily relies on automated tools, and notably on NLP tools. However, little is known about what characteristics texts communicated through the Darknet have, and how well off-the-shelf NLP tools do on this domain. This paper tackles this gap and performs an in-depth investigation of the characteristics of legal and illegal text in the Darknet, comparing it to a clear net website with similar content as a control condition. Taking drug-related websites as a test case, we find that texts for selling legal and illegal drugs have several linguistic characteristics that distinguish them from one another, as well as from the control condition, among them the distribution of POS tags, and the coverage of their named entities in Wikipedia. 1
|
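The paper's domain-distance analysis (Section 4.1 in the content below) compares word-frequency distributions via the Jensen-Shannon divergence and the Variational (L1/Manhattan) distance, finding self-distances of roughly 0.40–0.45 versus cross-domain distances of 0.60–0.65. A minimal pure-Python sketch of these two measures; the base-2 logarithm is an assumption (the paper does not state its log base):

```python
import math
from collections import Counter

def word_dist(tokens):
    """Normalized word-frequency distribution of a token list."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def jensen_shannon(p, q):
    """Jensen-Shannon divergence between two word distributions
    (base-2 log, so the value lies in [0, 1])."""
    words = set(p) | set(q)
    m = {w: 0.5 * (p.get(w, 0.0) + q.get(w, 0.0)) for w in words}

    def kl(d):  # KL(d || m); terms with d(w) = 0 contribute nothing
        return sum(d[w] * math.log2(d[w] / m[w]) for w in d if d[w] > 0)

    return 0.5 * kl(p) + 0.5 * kl(q)

def variational(p, q):
    """Variational (L1 / Manhattan) distance between two distributions."""
    words = set(p) | set(q)
    return sum(abs(p.get(w, 0.0) - q.get(w, 0.0)) for w in words)
```

Identical distributions give a divergence of 0, and fully disjoint vocabularies give 1.0 (JS) and 2.0 (L1), so the reported 0.60–0.65 pairwise divergences sit well above the 0.40–0.45 self-distance baseline but far from complete vocabulary disjointness.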
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189
],
"paper_content_text": [
"Introduction The term \"Darknet\" refers to the subset of Internet sites and pages that are not indexed by search engines.",
"The Darknet is often associated with the \".onion\" top-level domain, whose websites are referred to as \"Onion sites\", and are reachable via the Tor network anonymously.",
"Under the cloak of anonymity, the Darknet harbors much illegal activity (Moore and Rid, 2016) .",
"Applying NLP tools to text from the Darknet is thus important for effective law enforcement and intelligence.",
"However, little is known about the characteristics of the language used in the Darknet, and specifically on what distinguishes text on websites that conduct legal and illegal activity.",
"In * Equal contribution 1 Our code can be found in https://github.com/huji-nlp/cyber.",
"Our data is available upon request.",
"fact, the only work we are aware of that classified Darknet texts into legal and illegal activity is Avarikioti et al.",
"(2018) , but they too did not investigate in what ways these two classes differ.",
"This paper addresses this gap, and studies the distinguishing features between legal and illegal texts in Onion sites, taking sites that advertise drugs as a test case.",
"We compare our results to a control condition of texts from eBay 2 pages that advertise products corresponding to drug keywords.",
"We find a number of distinguishing features.",
"First, we confirm the results of Avarikioti et al.",
"(2018) , that text from legal and illegal pages (henceforth, legal and illegal texts) can be distinguished based on the identity of the content words (bag-of-words) in about 90% accuracy over a balanced sample.",
"Second, we find that the distribution of POS tags in the documents is a strong cue for distinguishing between the classes (about 71% accuracy).",
"This indicates that the two classes are different in terms of their syntactic structure.",
"Third, we find that legal and illegal texts are roughly as distinguishable from one another as legal texts and eBay pages are (both in terms of their words and their POS tags).",
"The latter point suggests that legal and illegal texts can be considered distinct domains, which explains why they can be automatically classified, but also implies that applying NLP tools to Darknet texts is likely to face the obstacles of domain adaptation.",
"Indeed, we show that named entities in illegal pages are covered less well by Wikipedia, i.e., Wikification works less well on them.",
"This suggests that for high-performance text understanding, specialized knowledge bases and tools may be needed for processing texts from the Darknet.",
"By experimenting on a different domain in Tor (user-generated content), we show that the legal/illegal distinction generalizes across domains.",
"After discussing previous works in Section 2, we detail the datasets used in Section 3.",
"Differences in the vocabulary and named entities between the classes are analyzed in Section 4, before the presentation of the classification experiments (Section 5).",
"Section 6 presents additional experiments, which explore cross-domain classification.",
"We further analyze and discuss the findings in Section 7.",
"Related Work The detection of illegal activities on the Web is sometimes derived from a more general topic classification.",
"For example, Biryukov et al.",
"(2014) classified the content of Tor hidden services into 18 topical categories, only some of which correlate with illegal activity.",
"Graczyk and Kinningham (2015) combined unsupervised feature selection and an SVM classifier for the classification of drug sales in an anonymous marketplace.",
"While these works classified Tor texts into classes, they did not directly address the legal/illegal distinction.",
"Some works directly addressed a specific type of illegality and a particular communication context.",
"Morris and Hirst (2012) used an SVM classifier to identify sexual predators in chatting message systems.",
"The model includes both lexical features, including emoticons, and behavioral features that correspond to conversational patterns.",
"Another example is the detection of pedophile activity in peer-to-peer networks (Latapy et al., 2013) , where a predefined list of keywords was used to detect child-pornography queries.",
"Besides lexical features, we here consider other general linguistic properties, such as syntactic structure.",
"Al Nabki et al.",
"(2017) presented DUTA (Darknet Usage Text Addresses), the first publicly available Darknet dataset, together with a manual classification into topical categories and subcategories.",
"For some of the categories, legal and illegal activities are distinguished.",
"However, the automatic classification presented in their work focuses on the distinction between different classes of illegal activity, without addressing the distinction between legal and illegal ones, which is the subject of the present paper.",
"Al Nabki et al.",
"(2019) extended the dataset to form DUTA-10K, which we use here.",
"Their results show that 20% of the hidden services correspond to \"suspicious\" activities.",
"The analysis was conducted using the text classifier presented in Al Nabki et al.",
"(2017) and manual verification.",
"Recently, Avarikioti et al.",
"(2018) presented another topic classification of text from Tor together with a first classification into legal and illegal activities.",
"The experiments were performed on a newly crawled corpus obtained by recursive search.",
"The legal/illegal classification was done using an SVM classifier in an active learning setting with bag-of-words features.",
"Legality was assessed in a conservative way where illegality is assigned if the purpose of the content is an obviously illegal action, even if the content might be technically legal.",
"They found that a linear kernel worked best and reported an F1 score of 85% and an accuracy of 89%.",
"Using the dataset of Al Nabki et al.",
"(2019) , and focusing on specific topical categories, we here confirm the importance of content words in the classification, and explore the linguistic dimensions supporting classification into legal and illegal texts.",
"Datasets Used Onion corpus.",
"We experiment with data from Darknet websites containing legal and illegal activity, all from the DUTA-10K corpus (Al Nabki et al., 2019) .",
"We selected the \"drugs\" sub-domain as a test case, as it is a large domain in the corpus, that has a \"legal\" and \"illegal\" sub-categories, and where the distinction between them can be reliably made.",
"These websites advertise and sell drugs, often to international customers.",
"While legal websites often sell pharmaceuticals, illegal ones are often related to substance abuse.",
"These pages are directed by sellers to their customers.",
"eBay corpus.",
"As an additional dataset of similar size and characteristics, but from a clear net source, and of legal nature, we compiled a corpus of eBay pages.",
"eBay is one of the largest hosting sites for retail sellers of various goods.",
"Our corpus contains 118 item descriptions, each consisting of more than one sentence.",
"Item descriptions vary in price, item sold and seller.",
"The descriptions were selected by searching eBay for drug related terms, 3 and selecting search patterns to avoid over-repetition.",
"For example, where many sell the same product, only one example was added to the corpus.",
"Search queries also included filtering for price, so that each query resulted with different items.",
"Either because of advertisement strategies or the geographical dispersion of the sellers, the eBay corpus contains formal as well as informal language, and some item descriptions contain abbreviations and slang.",
"Importantly, eBay websites are assumed to conduct legal activity-even when discussing drug-related material, we find it is never the sale of illegal drugs but rather merchandise, tools, or otherwise related content.",
"Cleaning.",
"As preprocessing for all experiments, we apply some cleaning to the text of web pages in our corpora.",
"HTML markup is already removed in the original datasets, but much non-linguistic content remains, such as buttons, encryption keys, metadata and URLs.",
"We remove such text from the web pages, and join paragraphs to single lines (as newlines are sometimes present in the original dataset for display purposes only).",
"We then remove any duplicate paragraphs, where paragraphs are considered identical if they share all but numbers (to avoid an over-representation of some remaining surrounding text from the websites, e.g.",
"\"Showing all 9 results\").",
"Domain Differences As pointed out by Plank (2011) , there is no common ground as to what constitutes a domain.",
"Domain differences are attributed in some works to differences in vocabulary (Blitzer et al., 2006) and in other works to differences in style, genre and medium (McClosky, 2010) .",
"While here we adopt an existing classification, based on the DUTA-10K corpus, we show in which way and to what extent it translates to distinct properties of the texts.",
"This question bears on the possibility of distinguishing between legal and illegal drug-related websites based on their text alone (i.e., without recourse to additional information, such as meta-data or network structure).",
"We examine two types of domain differences between legal and illegal texts: vocabulary differences and named entities.",
"Vocabulary Differences To quantify the domain differences between texts from legal and illegal texts, we compute the frequency distribution of words in the eBay corpus, the legal and illegal drugs Onion corpora, and the entire Onion drug section (All Onion).",
"Since any two sets of texts are bound to show some disparity between them, we compare the differences between domains to a control setting, where we randomly split each examined corpus into two halves, and compute the frequency distribution of each of them.",
"The inner consistency of each corpus, defined as the similarity of distributions between the two halves, serves as a reference point for the similarity between domains.",
"We refer to this measure as \"self-distance\".",
"Following Plank and van Noord (2011) , we compute the Jensen-Shannon divergence and Variational distance (also known as L1 or Manhattan) as the comparison measures between the word frequency histograms.",
"The self-distance of the three corpora lies between 0.40 and 0.45 by the Jensen-Shannon divergence, but the distance between each pair is 0.60 to 0.65, with the three approximately forming an equilateral triangle in the space of word distributions.",
"Similar results are obtained using Variational distance, and are omitted for brevity.",
"These results suggest that rather than regarding all drug-related Onion texts as one domain, with legal and illegal texts as sub-domains, they should be treated as distinct domains.",
"Therefore, using Onion data to characterize the differences between illegal and legal linguistic attributes is sensible.",
"In fact, it is more sensible than comparing Illegal Onion to eBay text, as there the legal/illegal distinction may be confounded by the differences between eBay and Onion data.",
"Differences in Named Entities In order to analyze the difference in the distribution of named entities between the domains, we used a Wikification technique (Bunescu and Paşca, 2006) , i.e., linking entities to their corresponding article in Wikipedia.",
"Using spaCy's 4 named entity recognition, we first extract all named entity mentions from all the corpora.",
"5 We then search for relevant Wikipedia entries for each named entity using the DBpedia Ontology API (Daiber et al., 2013) .",
"For each domain we compute the total number of named entities and the percentage with corresponding Wikipedia articles.",
"The results were obtained by averaging the percentage of wikifiable named entities in each site per domain.",
"We also report the standard error for each average.",
"According to our results (Table 2) , the Wikification success ratios of eBay and Illegal 4 https://spacy.io 5 We use all named entity types provided by spaCy (and not only \"Product\") to get a broader perspective on the differences between the domains in terms of their named entities.",
"For example, the named entity \"Peru\" (of type \"Geopolitical Entity\") appears multiple times in Onion sites and is meant to imply the quality of a drug.",
"Onion named entities is comparable and relatively low.",
"However, sites selling legal drugs on Onion have a much higher Wikification percentage.",
"Presumably the named entities in Onion sites selling legal drugs are more easily found in public databases such as Wikipedia because they are mainly well-known names for legal pharmaceuticals.",
"However, in both Illegal Onion and eBay sites, the list of named entities includes many slang terms for illicit drugs and paraphernalia.",
"These slang terms are usually not well known by the general public, and are therefore less likely to be covered by Wikipedia and similar public databases.",
"In addition to the differences in Wikification ratios between the domains, it seems spaCy had trouble correctly identifying named entities in both Onion and eBay sites, possibly due to the common use of informal language and drugrelated jargon.",
"Eyeballing the results, there were a fair number of false positives (words and phrases that were found by spaCy but were not actually named entities), especially in Illegal Onion sites.",
"In particular, slang terms for drugs, as well as abbreviated drug terms, for example \"kush\" or \"GBL\", were being falsely picked up by spaCy.",
"To summarize, results suggest both that (1) legal and illegal texts are different in terms of their named entities and their coverage in Wikipedia, as well as that (2) standard databases and standard NLP tools for named entity recognition (and potentially other text understanding tasks), require considerable adaptation to be fully functional on text related to illegal activity.",
"Classification Experiments Here we detail our experiments in classifying text from different legal and illegal domains using various methods, to find the most important linguistic features distinguishing between the domains.",
"Another goal of the classification task is to confirm our finding that the domains are distinguishable.",
"Experimental setup.",
"We split each subset among {eBay, Legal Onion, Illegal Onion} into training, validation and test.",
"We select 456 training paragraphs, 57 validation paragraphs and 58 test paragraphs for each category (approximately an 80%/10%/10% split), randomly downsampling larger categories for an even division of labels.",
"Model.",
"To classify paragraphs into categories, we experiment with five classifiers: • NB (Naive Bayes) classifier with binary bagof-words features, i.e., indicator feature for each word.",
"This simple classifier features frequently in work on text classification in the Darknet.",
"• SVM (support vector machine) classifier with an RBF kernel, also with BoW features that count the number of words of each type.",
"• BoE (bag-of-embeddings): each word is represented with its 100-dimensional GloVe vector (Pennington et al., 2014) .",
"BoE sum (BoE average ) sums (averages) the embeddings for all words in the paragraph to a single vector, and applies a 100-dimensional fully-connected layer with ReLU nonlinearity and dropout p = 0.2.",
"The word vectors are not updated during training.",
"Vectors for words not found in GloVe are set randomly ∼ N(µ_GloVe, σ²_GloVe).",
"• seq2vec: same as BoE, but instead of averaging word vectors, we apply a single-layer 100-dimensional BiLSTM to the word vectors, and take the concatenated final hidden vectors from the forward and backward part as the input to a fully-connected layer (same hyper-parameters as above).",
"• attention: we replace the word representations with contextualized pre-trained representations from ELMo .",
"We then apply a self-attentive classification network (McCann et al., 2017) over the contextualized representations.",
"This architecture has proved very effective for classification in recent work (Tutek and Šnajder, 2018; Shen et al., 2018) .",
"For the NB classifier we use BernoulliNB from scikit-learn 7 with α = 1, and for the SVM classifier we use SVC, also from scikit-learn, with γ = scale and tolerance = 10^−5.",
"We use the AllenNLP library 8 to implement the neural network classifiers.",
"7 https://scikit-learn.org 8 https://allennlp.org Data manipulation.",
"In order to isolate what factors contribute to the classifiers' performance, we experiment with four manipulations to the input data (in training, validation and testing).",
"Specifically, we examine the impact of variations in the content words, function words and shallow syntactic structure (represented through POS tags).",
"For this purpose, we consider content words as words whose universal part-of-speech according to spaCy is one of the following: Results when applying these manipulations are compared to the full condition, where all words are available.",
"Settings.",
"We experiment with two settings, classifying paragraphs from different domains: • Training and testing on eBay pages vs. Legal drug-related Onion pages, as a control experiment to identify whether Onion pages differ from clear net pages.",
"• Training and testing on Legal Onion vs.",
"Illegal Onion drugs-related pages, to identify the difference in language between legal and illegal activity on Onion drug-related websites.",
"Results The accuracy scores for the different classifiers and settings are reported in Table 3 .",
"confirmed by the drop in accuracy when content words are removed.",
"However, in this setting (drop cont.",
"), non-trivial performance is still obtained by the SVM classifier, suggesting that the domains are distinguishable (albeit to a lesser extent) based on the function word distribution alone.",
"Surprisingly, the more sophisticated neural classifiers perform worse than Naive Bayes.",
"This is despite using pre-trained word embeddings, and architectures that have proven beneficial for text classification.",
"It is likely that this is due to the small size of the training data, as well as the specialized vocabulary found in this domain, which is unlikely to be supported well by the pre-trained embeddings (see §4.2).",
"Legal vs. illegal drugs.",
"Classifying legal and illegal pages within the drugs domain on Onion proved to be a more difficult task.",
"However, where content words are replaced with their POS tags, the SVM classifier distinguishes between legal and illegal texts with quite a high accuracy (70.7% on a balanced test set).",
"This suggests that the syntactic structure is sufficiently different between the domains, so as to make them distinguishable in terms of their distribution of grammatical categories.",
"Illegality Detection Across Domains To investigate illegality detection across different domains, we perform classification experiments on the \"forums\" category that is also separated into legal and illegal sub-categories in DUTA-10K.",
"The forums contain user-written text in various topics.",
"Legal forums often discuss web design and other technical and non-technical activity on the internet, while illegal ones involve discussions about cyber-crimes and guides on how to commit them, as well as narcotics, racism and other criminal activities.",
"As this domain contains usergenerated content, it is more varied and noisy.",
"Experimental setup We use the cleaning process described in Section 3 and data splitting described in Section 5, with the same number of paragraphs.",
"We experiment with two settings: • Training and testing on Onion legal vs. illegal forums, to evaluate whether the insights observed in the drugs domain generalize to user-generated content.",
"• Training on Onion legal vs. illegal drugsrelated pages, and testing on Onion legal vs. illegal forums.",
"This cross-domain evaluation reveals whether the distinctions learned on the drugs domain generalize directly to the forums domain.",
"Results Accuracy scores are reported in Table 4 .",
"Legal vs. illegal forums.",
"Results when training and testing on forums data are much worse for the neural-based systems, probably due to the much noisier and more varied nature of the data.",
"However, the SVM model achieves an accuracy of 85.3% in the full setting.",
"Good performance is presented by this model even in the cases where the content words are dropped (drop.",
"cont.)",
"or replaced by part-of-speech tags (pos cont.",
"), underscoring the distinguishability of legal in illegal content based on shallow syntactic structure in this domain as well.",
"Cross-domain evaluation.",
"Surprisingly, training on drugs data and evaluating on forums performs much better than in the in-domain setting for four out of five classifiers.",
"This implies that while the forums data is noisy, it can be accurately classified into legal and illegal content when training on the cleaner drugs data.",
"This also shows that illegal texts in Tor share common properties regardless of topical category.",
"The much lower results obtained by the models where content words are dropped (drop cont.)",
"or converted to POS tags (pos cont.",
"), namely less than 70% as opposed to 89.7% when function words are dropped, suggest that some of these properties are lexical.",
"Discussion As shown in Section 4, the Legal Onion and Illegal Onion domains are quite distant in terms of word distribution and named entity Wikification.",
"Moreover, named entity recognition and Wikification work less well for the illegal domain, and so do state-of-the-art neural text classification architectures (Section 5), which present inferior re- sults to simple bag-of-words model.",
"This is likely a result of the different vocabulary and syntax of text from Onion domain, compared to standard domains used for training NLP models and pretrained word embeddings.",
"This conclusion has practical implications: to effectively process text in Onion, considerable domain adaptation should be performed, and effort should be made to annotate data and extend standard knowledge bases to cover this idiosyncratic domain.",
"Another conclusion from the classification experiments is that the Onion Legal and Illegal Onion texts are harder to distinguish than eBay and Legal Onion, meaning that deciding on domain boundaries should consider syntactic structure, and not only lexical differences.",
"Analysis of texts from the datasets.",
"Looking at specific sentences (Figure 1 ) reveals that Legal Onion and Illegal Onion are easy to distinguish based on the identity of certain words, e.g., terms for legal and illegal drugs, respectively.",
"Thus looking at the word forms is already a good solu-tion for tackling this classification problem, which gives further insight as to why modern text classification (e.g., neural networks) do not present an advantage in terms of accuracy.",
"Analysis of manipulated texts.",
"Given that replacing content words with their POS tags substantially lowers performance for classification of legal vs illegal drug-related texts (see \"pos cont.\"",
"in Section 5), we conclude that the distribution of parts of speech alone is not as strong a signal as the word forms for distinguishing between the domains.",
"However, the SVM model does manage to distinguish between the texts even in this setting.",
"Indeed, Figure 2 demonstrates that there are easily identifiable patterns distinguishing between the domains, but that a bag-of-words approach may not be sufficiently expressive to identify them.",
"Analysis of learned feature weights.",
"As the Naive Bayes classifier was the most successful at distinguishing legal from illegal texts in the full setting (without input manipulation), we may conclude that the very occurrence of certain words provides a strong indication that an instance is taken from one class or the other.",
"Table 5 shows the most indicative features learned by the Naive Bayes classifier for the Legal Onion vs.",
"Illegal Onion classification in this setting.",
"Interestingly, many strong features are function words, providing another indication of the different distribution of function words in the two domains.",
"Conclusion In this paper we identified several distinguishing factors between legal and illegal texts, taking a variety of approaches, predictive (text classification), application-based (named entity Wikification), as well as an approach based on raw statistics.",
"Our results revealed that legal and illegal texts on the Darknet are not only distinguishable in terms of their words, but also in terms of their shallow syntactic structure, manifested in their POS tag and function word distributions.",
"Distinguishing features between legal and illegal texts are consistent enough between domains, so that a classifier trained on drug-related websites can be straightforwardly ported to classify legal and illegal texts from another Darknet domain (forums).",
"Our results also show that in terms of vocabulary, legal texts and illegal texts are as distant from each other, as from comparable texts from eBay.",
"We conclude from this investigation that Onion pages provide an attractive testbed for studying distinguishing factors between the text of legal and illegal webpages, as they present challenges to offthe-shelf NLP tools, but at the same time have sufficient self-consistency to allow studies of the linguistic signals that separate these classes."
]
}
|
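The record above reports that a Naive Bayes classifier over binary bag-of-words features was the strongest model for separating legal from illegal pages in the full setting. As a hedged illustration only (this is not the authors' code — the paper uses scikit-learn's BernoulliNB with α = 1, and the toy documents below are invented), a Bernoulli Naive Bayes text classifier can be sketched in pure Python:

```python
import math
from collections import defaultdict

def train_bernoulli_nb(docs, labels, alpha=1.0):
    """Bernoulli Naive Bayes with binary bag-of-words features and Laplace smoothing."""
    vocab = sorted({w for d in docs for w in d.split()})
    classes = sorted(set(labels))
    n_c = {c: labels.count(c) for c in classes}
    # Document frequency of each word within each class (binary occurrence).
    df = {c: defaultdict(int) for c in classes}
    for d, y in zip(docs, labels):
        for w in set(d.split()):
            df[y][w] += 1
    prior = {c: math.log(n_c[c] / len(docs)) for c in classes}

    def predict(doc):
        present = set(doc.split())
        scores = {}
        for c in classes:
            s = prior[c]
            for w in vocab:
                p = (df[c][w] + alpha) / (n_c[c] + 2 * alpha)
                s += math.log(p if w in present else 1.0 - p)
            scores[c] = s
        return max(scores, key=scores.get)

    return predict

# Toy, invented data: word identity alone separates the classes,
# mirroring the paper's finding that bag-of-words is a strong signal.
docs = ["buy cheap meds online", "pharmacy ships pills",
        "pure cocaine stealth shipping", "mdma vendor escrow"]
labels = ["legal", "legal", "illegal", "illegal"]
clf = train_bernoulli_nb(docs, labels)
print(clf("stealth cocaine shipping"))  # prints: illegal
```

With binary features and smoothing α = 1 this matches the Bernoulli event model the paper cites, up to implementation details.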
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"4.2",
"5",
"5.1",
"6",
"6.1",
"6.2",
"7",
"8"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Datasets Used",
"Domain Differences",
"Vocabulary Differences",
"Differences in Named Entities",
"Classification Experiments",
"Results",
"Illegality Detection Across Domains",
"Experimental setup",
"Results",
"Discussion",
"Conclusion"
]
}
|
GEM-SciDuet-train-54#paper-1096#slide-3
|
Introduction Language of the Darknet
|
How well do NLP tools work on Darknet text?
Can we automatically identify illegal activity?
Disclaimer: variations among legal systems, societies and groups.
|
How well do NLP tools work on Darknet text?
Can we automatically identify illegal activity?
Disclaimer: variations among legal systems, societies and groups.
|
[] |
GEM-SciDuet-train-54#paper-1096#slide-4
|
1096
|
The Language of Legal and Illegal Activity on the Darknet
|
The non-indexed parts of the Internet (the Darknet) have become a haven for both legal and illegal anonymous activity. Given the magnitude of these networks, scalably monitoring their activity necessarily relies on automated tools, and notably on NLP tools. However, little is known about what characteristics texts communicated through the Darknet have, and how well off-the-shelf NLP tools do on this domain. This paper tackles this gap and performs an in-depth investigation of the characteristics of legal and illegal text in the Darknet, comparing it to a clear net website with similar content as a control condition. Taking drug-related websites as a test case, we find that texts for selling legal and illegal drugs have several linguistic characteristics that distinguish them from one another, as well as from the control condition, among them the distribution of POS tags, and the coverage of their named entities in Wikipedia. 1
|
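The abstract above attributes the legal/illegal distinction to distributional differences; in the paper body (Section 4.1), word-frequency histograms are compared with the Jensen-Shannon divergence, following Plank and van Noord (2011). A minimal pure-Python sketch of JSD over two token lists (the example tokens below are invented, not taken from the corpora):

```python
import math
from collections import Counter

def jensen_shannon(tokens_a, tokens_b):
    """Jensen-Shannon divergence (base 2) between two word-frequency distributions."""
    ca, cb = Counter(tokens_a), Counter(tokens_b)
    na, nb = sum(ca.values()), sum(cb.values())
    vocab = set(ca) | set(cb)
    P = {w: ca[w] / na for w in vocab}
    Q = {w: cb[w] / nb for w in vocab}
    M = {w: 0.5 * (P[w] + Q[w]) for w in vocab}  # mixture distribution

    def kl(p, m):  # KL(p || m), skipping zero-probability terms
        return sum(p[w] * math.log2(p[w] / m[w]) for w in vocab if p[w] > 0)

    return 0.5 * kl(P, M) + 0.5 * kl(Q, M)

# Identical texts have divergence 0; fully disjoint vocabularies reach the maximum of 1 bit.
print(jensen_shannon("a b a".split(), "a b a".split()))  # prints: 0.0
```

Because JSD is bounded in [0, 1] bits, the paper's reported self-distances (0.40 to 0.45) and cross-domain distances (0.60 to 0.65) are directly comparable on this scale.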
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189
],
"paper_content_text": [
"Introduction The term \"Darknet\" refers to the subset of Internet sites and pages that are not indexed by search engines.",
"The Darknet is often associated with the \".onion\" top-level domain, whose websites are referred to as \"Onion sites\", and are reachable via the Tor network anonymously.",
"Under the cloak of anonymity, the Darknet harbors much illegal activity (Moore and Rid, 2016) .",
"Applying NLP tools to text from the Darknet is thus important for effective law enforcement and intelligence.",
"However, little is known about the characteristics of the language used in the Darknet, and specifically on what distinguishes text on websites that conduct legal and illegal activity.",
"In * Equal contribution 1 Our code can be found in https://github.com/ huji-nlp/cyber.",
"Our data is available upon request.",
"fact, the only work we are aware of that classified Darknet texts into legal and illegal activity is Avarikioti et al.",
"(2018) , but they too did not investigate in what ways these two classes differ.",
"This paper addresses this gap, and studies the distinguishing features between legal and illegal texts in Onion sites, taking sites that advertise drugs as a test case.",
"We compare our results to a control condition of texts from eBay 2 pages that advertise products corresponding to drug keywords.",
"We find a number of distinguishing features.",
"First, we confirm the results of Avarikioti et al.",
"(2018) , that text from legal and illegal pages (henceforth, legal and illegal texts) can be distinguished based on the identity of the content words (bag-of-words) in about 90% accuracy over a balanced sample.",
"Second, we find that the distribution of POS tags in the documents is a strong cue for distinguishing between the classes (about 71% accuracy).",
"This indicates that the two classes are different in terms of their syntactic structure.",
"Third, we find that legal and illegal texts are roughly as distinguishable from one another as legal texts and eBay pages are (both in terms of their words and their POS tags).",
"The latter point suggests that legal and illegal texts can be considered distinct domains, which explains why they can be automatically classified, but also implies that applying NLP tools to Darknet texts is likely to face the obstacles of domain adaptation.",
"Indeed, we show that named entities in illegal pages are covered less well by Wikipedia, i.e., Wikification works less well on them.",
"This suggests that for high-performance text understanding, specialized knowledge bases and tools may be needed for processing texts from the Darknet.",
"By experimenting on a different domain in Tor (user-generated content), we show that the legal/illegal distinction generalizes across domains.",
"After discussing previous works in Section 2, we detail the datasets used in Section 3.",
"Differences in the vocabulary and named entities between the classes are analyzed in Section 4, before the presentation of the classification experiments (Section 5).",
"Section 6 presents additional experiments, which explore cross-domain classification.",
"We further analyze and discuss the findings in Section 7.",
"Related Work The detection of illegal activities on the Web is sometimes derived from a more general topic classification.",
"For example, Biryukov et al.",
"(2014) classified the content of Tor hidden services into 18 topical categories, only some of which correlate with illegal activity.",
"Graczyk and Kinningham (2015) combined unsupervised feature selection and an SVM classifier for the classification of drug sales in an anonymous marketplace.",
"While these works classified Tor texts into classes, they did not directly address the legal/illegal distinction.",
"Some works directly addressed a specific type of illegality and a particular communication context.",
"Morris and Hirst (2012) used an SVM classifier to identify sexual predators in chatting message systems.",
"The model includes both lexical features, including emoticons, and behavioral features that correspond to conversational patterns.",
"Another example is the detection of pedophile activity in peer-to-peer networks (Latapy et al., 2013) , where a predefined list of keywords was used to detect child-pornography queries.",
"Besides lexical features, we here consider other general linguistic properties, such as syntactic structure.",
"Al Nabki et al.",
"(2017) presented DUTA (Darknet Usage Text Addresses), the first publicly available Darknet dataset, together with a manual classification into topical categories and subcategories.",
"For some of the categories, legal and illegal activities are distinguished.",
"However, the automatic classification presented in their work focuses on the distinction between different classes of illegal activity, without addressing the distinction between legal and illegal ones, which is the subject of the present paper.",
"Al Nabki et al.",
"(2019) extended the dataset to form DUTA-10K, which we use here.",
"Their results show that 20% of the hidden services correspond to \"suspicious\" activities.",
"The analysis was conducted using the text classifier presented in Al Nabki et al.",
"(2017) and manual verification.",
"Recently, Avarikioti et al.",
"(2018) presented another topic classification of text from Tor together with a first classification into legal and illegal activities.",
"The experiments were performed on a newly crawled corpus obtained by recursive search.",
"The legal/illegal classification was done using an SVM classifier in an active learning setting with bag-of-words features.",
"Legality was assessed in a conservative way where illegality is assigned if the purpose of the content is an obviously illegal action, even if the content might be technically legal.",
"They found that a linear kernel worked best and reported an F1 score of 85% and an accuracy of 89%.",
"Using the dataset of Al Nabki et al.",
"(2019) , and focusing on specific topical categories, we here confirm the importance of content words in the classification, and explore the linguistic dimensions supporting classification into legal and illegal texts.",
"Datasets Used Onion corpus.",
"We experiment with data from Darknet websites containing legal and illegal activity, all from the DUTA-10K corpus (Al Nabki et al., 2019) .",
"We selected the \"drugs\" sub-domain as a test case, as it is a large domain in the corpus, that has a \"legal\" and \"illegal\" sub-categories, and where the distinction between them can be reliably made.",
"These websites advertise and sell drugs, often to international customers.",
"While legal websites often sell pharmaceuticals, illegal ones are often related to substance abuse.",
"These pages are directed by sellers to their customers.",
"eBay corpus.",
"As an additional dataset of similar size and characteristics, but from a clear net source, and of legal nature, we compiled a corpus of eBay pages.",
"eBay is one of the largest hosting sites for retail sellers of various goods.",
"Our corpus contains 118 item descriptions, each consisting of more than one sentence.",
"Item descriptions vary in price, item sold and seller.",
"The descriptions were selected by searching eBay for drug related terms, 3 and selecting search patterns to avoid over-repetition.",
"For example, where many sell the same product, only one example was added to the corpus.",
"Search queries also included filtering for price, so that each query resulted with different items.",
"Either because of advertisement strategies or the geographical dispersion of the sellers, the eBay corpus contains formal as well as informal language, and some item descriptions contain abbreviations and slang.",
"Importantly, eBay websites are assumed to conduct legal activity-even when discussing drug-related material, we find it is never the sale of illegal drugs but rather merchandise, tools, or otherwise related content.",
"Cleaning.",
"As preprocessing for all experiments, we apply some cleaning to the text of web pages in our corpora.",
"HTML markup is already removed in the original datasets, but much non-linguistic content remains, such as buttons, encryption keys, metadata and URLs.",
"We remove such text from the web pages, and join paragraphs to single lines (as newlines are sometimes present in the original dataset for display purposes only).",
"We then remove any duplicate paragraphs, where paragraphs are considered identical if they share all but numbers (to avoid an over-representation of some remaining surrounding text from the websites, e.g.",
"\"Showing all 9 results\").",
"Domain Differences As pointed out by Plank (2011) , there is no common ground as to what constitutes a domain.",
"Domain differences are attributed in some works to differences in vocabulary (Blitzer et al., 2006) and in other works to differences in style, genre and medium (McClosky, 2010) .",
"While here we adopt an existing classification, based on the DUTA-10K corpus, we show in which way and to what extent it translates to distinct properties of the texts.",
"This question bears on the possibility of distinguishing between legal and illegal drug-related websites based on their text alone (i.e., without recourse to additional information, such as meta-data or network structure).",
"We examine two types of domain differences between legal and illegal texts: vocabulary differences and named entities.",
"Vocabulary Differences To quantify the domain differences between texts from legal and illegal texts, we compute the frequency distribution of words in the eBay corpus, the legal and illegal drugs Onion corpora, and the entire Onion drug section (All Onion).",
"Since any two sets of texts are bound to show some disparity between them, we compare the differences between domains to a control setting, where we randomly split each examined corpus into two halves, and compute the frequency distribution of each of them.",
"The inner consistency of each corpus, defined as the similarity of distributions between the two halves, serves as a reference point for the similarity between domains.",
"We refer to this measure as \"self-distance\".",
"Following Plank and van Noord (2011) , we compute the Jensen-Shannon divergence and Variational distance (also known as L1 or Manhattan) as the comparison measures between the word frequency histograms.",
"pora lies between 0.40 to 0.45 by the Jensen-Shannon divergence, but the distance between each pair is 0.60 to 0.65, with the three approximately forming an equilateral triangle in the space of word distributions.",
"Similar results are obtained using Variational distance, and are omitted for brevity.",
"These results suggest that rather than regarding all drug-related Onion texts as one domain, with legal and illegal texts as sub-domains, they should be treated as distinct domains.",
"Therefore, using Onion data to characterize the differences between illegal and legal linguistic attributes is sensible.",
"In fact, it is more sensible than comparing Illegal Onion to eBay text, as there the legal/illegal distinction may be confounded by the differences between eBay and Onion data.",
"Differences in Named Entities In order to analyze the difference in the distribution of named entities between the domains, we used a Wikification technique (Bunescu and Paşca, 2006) , i.e., linking entities to their corresponding article in Wikipedia.",
"Using spaCy's 4 named entity recognition, we first extract all named entity mentions from all the corpora.",
"5 We then search for relevant Wikipedia entries for each named entity using the DBpedia Ontology API (Daiber et al., 2013) .",
"For each domain we compute the total number of named entities and the percentage with corresponding Wikipedia articles.",
"The results were obtained by averaging the percentage of wikifiable named entities in each site per domain.",
"We also report the standard error for each average.",
"According to our results (Table 2) , the Wikification success ratios of eBay and Illegal 4 https://spacy.io 5 We use all named entity types provided by spaCy (and not only \"Product\") to get a broader perspective on the differences between the domains in terms of their named entities.",
"For example, the named entity \"Peru\" (of type \"Geopolitical Entity\") appears multiple times in Onion sites and is meant to imply the quality of a drug.",
"Onion named entities is comparable and relatively low.",
"However, sites selling legal drugs on Onion have a much higher Wikification percentage.",
"Presumably the named entities in Onion sites selling legal drugs are more easily found in public databases such as Wikipedia because they are mainly well-known names for legal pharmaceuticals.",
"However, in both Illegal Onion and eBay sites, the list of named entities includes many slang terms for illicit drugs and paraphernalia.",
"These slang terms are usually not well known by the general public, and are therefore less likely to be covered by Wikipedia and similar public databases.",
"In addition to the differences in Wikification ratios between the domains, it seems spaCy had trouble correctly identifying named entities in both Onion and eBay sites, possibly due to the common use of informal language and drugrelated jargon.",
"Eyeballing the results, there were a fair number of false positives (words and phrases that were found by spaCy but were not actually named entities), especially in Illegal Onion sites.",
"In particular, slang terms for drugs, as well as abbreviated drug terms, for example \"kush\" or \"GBL\", were being falsely picked up by spaCy.",
"6 To summarize, results suggest both that (1) legal and illegal texts are different in terms of their named entities and their coverage in Wikipedia, as well as that (2) standard databases and standard NLP tools for named entity recognition (and potentially other text understanding tasks), require considerable adaptation to be fully functional on text related to illegal activity.",
"Classification Experiments Here we detail our experiments in classifying text from different legal and illegal domains using various methods, to find the most important linguistic features distinguishing between the domains.",
"Another goal of the classification task is to confirm our finding that the domains are distinguishable.",
"Experimental setup.",
"We split each subset among {eBay, Legal Onion, Illegal Onion} into training, validation and test.",
"We select 456 training paragraphs, 57 validation paragraphs and 58 test paragraphs for each category (approximately a 80%/10%/10% split), randomly downsampling larger categories for an even division of labels.",
"Model.",
"To classify paragraphs into categories, we experiment with five classifiers: • NB (Naive Bayes) classifier with binary bagof-words features, i.e., indicator feature for each word.",
"This simple classifier features frequently in work on text classification in the Darknet.",
"• SVM (support vector machine) classifier with an RBF kernel, also with BoW features that count the number of words of each type.",
"• BoE (bag-of-embeddings): each word is represented with its 100-dimensional GloVe vector (Pennington et al., 2014) .",
"BoE sum (BoE average ) sums (averages) the embeddings for all words in the paragraph to a single vector, and applies a 100-dimensional fully-connected layer with ReLU nonlinearity and dropout p = 0.2.",
"The word vectors are not updated during training.",
"Vectors for words not found in GloVe are set randomly ∼ N (µ GloVe , σ 2 GloVe ).",
"• seq2vec: same as BoE, but instead of averaging word vectors, we apply a single-layer 100-dimensional BiLSTM to the word vectors, and take the concatenated final hidden vectors from the forward and backward part as the input to a fully-connected layer (same hyper-parameters as above).",
"• attention: we replace the word representations with contextualized pre-trained representations from ELMo .",
"We then apply a self-attentive classification network (McCann et al., 2017) over the contextualized representations.",
"This architecture has proved very effective for classification in recent work (Tutek and Šnajder, 2018; Shen et al., 2018) .",
"For the NB classifier we use BernoulliNB from scikit-learn 7 with α = 1, and for the SVM classifier we use SVC, also from scikit-learn, with γ = scale and tolerance=10 −5 .",
"We use the AllenNLP library 8 to implement the neural network classifiers.",
"7 https://scikit-learn.org 8 https://allennlp.org Data manipulation.",
"In order to isolate what factors contribute to the classifiers' performance, we experiment with four manipulations to the input data (in training, validation and testing).",
"Specifically, we examine the impact of variations in the content words, function words and shallow syntactic structure (represented through POS tags).",
"For this purpose, we consider content words as words whose universal part-of-speech according to spaCy is one of the following: Results when applying these manipulations are compared to the full condition, where all words are available.",
"Settings.",
"We experiment with two settings, classifying paragraphs from different domains: • Training and testing on eBay pages vs. Legal drug-related Onion pages, as a control experiment to identify whether Onion pages differ from clear net pages.",
"• Training and testing on Legal Onion vs.",
"Illegal Onion drugs-related pages, to identify the difference in language between legal and illegal activity on Onion drug-related websites.",
"Results The accuracy scores for the different classifiers and settings are reported in Table 3 .",
"confirmed by the drop in accuracy when content words are removed.",
"However, in this setting (drop cont.",
"), non-trivial performance is still obtained by the SVM classifier, suggesting that the domains are distinguishable (albeit to a lesser extent) based on the function word distribution alone.",
"Surprisingly, the more sophisticated neural classifiers perform worse than Naive Bayes.",
"This is despite using pre-trained word embeddings, and architectures that have proven beneficial for text classification.",
"It is likely that this is due to the small size of the training data, as well as the specialized vocabulary found in this domain, which is unlikely to be supported well by the pre-trained embeddings (see §4.2).",
"Legal vs. illegal drugs.",
"Classifying legal and illegal pages within the drugs domain on Onion proved to be a more difficult task.",
"However, where content words are replaced with their POS tags, the SVM classifier distinguishes between legal and illegal texts with quite a high accuracy (70.7% on a balanced test set).",
"This suggests that the syntactic structure is sufficiently different between the domains, so as to make them distinguishable in terms of their distribution of grammatical categories.",
"Illegality Detection Across Domains To investigate illegality detection across different domains, we perform classification experiments on the \"forums\" category that is also separated into legal and illegal sub-categories in DUTA-10K.",
"The forums contain user-written text in various topics.",
"Legal forums often discuss web design and other technical and non-technical activity on the internet, while illegal ones involve discussions about cyber-crimes and guides on how to commit them, as well as narcotics, racism and other criminal activities.",
"As this domain contains usergenerated content, it is more varied and noisy.",
"Experimental setup We use the cleaning process described in Section 3 and data splitting described in Section 5, with the same number of paragraphs.",
"We experiment with two settings: • Training and testing on Onion legal vs. illegal forums, to evaluate whether the insights observed in the drugs domain generalize to user-generated content.",
"• Training on Onion legal vs. illegal drugsrelated pages, and testing on Onion legal vs. illegal forums.",
"This cross-domain evaluation reveals whether the distinctions learned on the drugs domain generalize directly to the forums domain.",
"Results Accuracy scores are reported in Table 4 .",
"Legal vs. illegal forums.",
"Results when training and testing on forums data are much worse for the neural-based systems, probably due to the much noisier and more varied nature of the data.",
"However, the SVM model achieves an accuracy of 85.3% in the full setting.",
"Good performance is presented by this model even in the cases where the content words are dropped (drop.",
"cont.)",
"or replaced by part-of-speech tags (pos cont.",
"), underscoring the distinguishability of legal in illegal content based on shallow syntactic structure in this domain as well.",
"Cross-domain evaluation.",
"Surprisingly, training on drugs data and evaluating on forums performs much better than in the in-domain setting for four out of five classifiers.",
"This implies that while the forums data is noisy, it can be accurately classified into legal and illegal content when training on the cleaner drugs data.",
"This also shows that illegal texts in Tor share common properties regardless of topical category.",
"The much lower results obtained by the models where content words are dropped (drop cont.)",
"or converted to POS tags (pos cont.",
"), namely less than 70% as opposed to 89.7% when function words are dropped, suggest that some of these properties are lexical.",
"Discussion As shown in Section 4, the Legal Onion and Illegal Onion domains are quite distant in terms of word distribution and named entity Wikification.",
"Moreover, named entity recognition and Wikification work less well for the illegal domain, and so do state-of-the-art neural text classification architectures (Section 5), which present inferior re- sults to simple bag-of-words model.",
"This is likely a result of the different vocabulary and syntax of text from Onion domain, compared to standard domains used for training NLP models and pretrained word embeddings.",
"This conclusion has practical implications: to effectively process text in Onion, considerable domain adaptation should be performed, and effort should be made to annotate data and extend standard knowledge bases to cover this idiosyncratic domain.",
"Another conclusion from the classification experiments is that the Onion Legal and Illegal Onion texts are harder to distinguish than eBay and Legal Onion, meaning that deciding on domain boundaries should consider syntactic structure, and not only lexical differences.",
"Analysis of texts from the datasets.",
"Looking at specific sentences (Figure 1 ) reveals that Legal Onion and Illegal Onion are easy to distinguish based on the identity of certain words, e.g., terms for legal and illegal drugs, respectively.",
"Thus looking at the word forms is already a good solu-tion for tackling this classification problem, which gives further insight as to why modern text classification (e.g., neural networks) do not present an advantage in terms of accuracy.",
"Analysis of manipulated texts.",
"Given that replacing content words with their POS tags substantially lowers performance for classification of legal vs illegal drug-related texts (see \"pos cont.\"",
"in Section 5), we conclude that the distribution of parts of speech alone is not as strong a signal as the word forms for distinguishing between the domains.",
"However, the SVM model does manage to distinguish between the texts even in this setting.",
"Indeed, Figure 2 demonstrates that there are easily identifiable patterns distinguishing between the domains, but that a bag-of-words approach may not be sufficiently expressive to identify them.",
"Analysis of learned feature weights.",
"As the Naive Bayes classifier was the most successful at distinguishing legal from illegal texts in the full setting (without input manipulation), we may conclude that the very occurrence of certain words provides a strong indication that an instance is taken from one class or the other.",
"Table 5 shows the most indicative features learned by the Naive Bayes classifier for the Legal Onion vs.",
"Illegal Onion classification in this setting.",
"Interestingly, many strong features are function words, providing another indication of the different distribution of function words in the two domains.",
"Conclusion In this paper we identified several distinguishing factors between legal and illegal texts, taking a variety of approaches, predictive (text classification), application-based (named entity Wikification), as well as an approach based on raw statistics.",
"Our results revealed that legal and illegal texts on the Darknet are not only distinguishable in terms of their words, but also in terms of their shallow syntactic structure, manifested in their POS tag and function word distributions.",
"Distinguishing features between legal and illegal texts are consistent enough between domains, so that a classifier trained on drug-related websites can be straightforwardly ported to classify legal and illegal texts from another Darknet domain (forums).",
"Our results also show that in terms of vocabulary, legal texts and illegal texts are as distant from each other, as from comparable texts from eBay.",
"We conclude from this investigation that Onion pages provide an attractive testbed for studying distinguishing factors between the text of legal and illegal webpages, as they present challenges to off-the-shelf NLP tools, but at the same time have sufficient self-consistency to allow studies of the linguistic signals that separate these classes."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"4.2",
"5",
"5.1",
"6",
"6.1",
"6.2",
"7",
"8"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Datasets Used",
"Domain Differences",
"Vocabulary Differences",
"Differences in Named Entities",
"Classification Experiments",
"Results",
"Illegality Detection Across Domains",
"Experimental setup",
"Results",
"Discussion",
"Conclusion"
]
}
|
GEM-SciDuet-train-54#paper-1096#slide-4
|
Data DUTA 10K
|
Dataset of 10367 Onion Services text pages [Al Nabki et al., 2019].
Classified by needs of Spanish law enforcement agencies.
20% categorized as illegal and 48% as legal (32% unavailable).
Of the illegal websites, 23% concern illegal drugs.
|
Dataset of 10367 Onion Services text pages [Al Nabki et al., 2019].
Classified by needs of Spanish law enforcement agencies.
20% categorized as illegal and 48% as legal (32% unavailable).
Of the illegal websites, 23% concern illegal drugs.
|
[] |
GEM-SciDuet-train-54#paper-1096#slide-5
|
1096
|
The Language of Legal and Illegal Activity on the Darknet
|
The non-indexed parts of the Internet (the Darknet) have become a haven for both legal and illegal anonymous activity. Given the magnitude of these networks, scalably monitoring their activity necessarily relies on automated tools, and notably on NLP tools. However, little is known about what characteristics texts communicated through the Darknet have, and how well off-the-shelf NLP tools do on this domain. This paper tackles this gap and performs an in-depth investigation of the characteristics of legal and illegal text in the Darknet, comparing it to a clear net website with similar content as a control condition. Taking drug-related websites as a test case, we find that texts for selling legal and illegal drugs have several linguistic characteristics that distinguish them from one another, as well as from the control condition, among them the distribution of POS tags, and the coverage of their named entities in Wikipedia. 1
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189
],
"paper_content_text": [
"Introduction The term \"Darknet\" refers to the subset of Internet sites and pages that are not indexed by search engines.",
"The Darknet is often associated with the \".onion\" top-level domain, whose websites are referred to as \"Onion sites\", and are reachable via the Tor network anonymously.",
"Under the cloak of anonymity, the Darknet harbors much illegal activity (Moore and Rid, 2016) .",
"Applying NLP tools to text from the Darknet is thus important for effective law enforcement and intelligence.",
"However, little is known about the characteristics of the language used in the Darknet, and specifically on what distinguishes text on websites that conduct legal and illegal activity.",
"Our code can be found in https://github.com/huji-nlp/cyber.",
"Our data is available upon request.",
"In fact, the only work we are aware of that classified Darknet texts into legal and illegal activity is Avarikioti et al.",
"(2018) , but they too did not investigate in what ways these two classes differ.",
"This paper addresses this gap, and studies the distinguishing features between legal and illegal texts in Onion sites, taking sites that advertise drugs as a test case.",
"We compare our results to a control condition of texts from eBay 2 pages that advertise products corresponding to drug keywords.",
"We find a number of distinguishing features.",
"First, we confirm the results of Avarikioti et al.",
"(2018) , that text from legal and illegal pages (henceforth, legal and illegal texts) can be distinguished based on the identity of the content words (bag-of-words) in about 90% accuracy over a balanced sample.",
"Second, we find that the distribution of POS tags in the documents is a strong cue for distinguishing between the classes (about 71% accuracy).",
"This indicates that the two classes are different in terms of their syntactic structure.",
"Third, we find that legal and illegal texts are roughly as distinguishable from one another as legal texts and eBay pages are (both in terms of their words and their POS tags).",
"The latter point suggests that legal and illegal texts can be considered distinct domains, which explains why they can be automatically classified, but also implies that applying NLP tools to Darknet texts is likely to face the obstacles of domain adaptation.",
"Indeed, we show that named entities in illegal pages are covered less well by Wikipedia, i.e., Wikification works less well on them.",
"This suggests that for high-performance text understanding, specialized knowledge bases and tools may be needed for processing texts from the Darknet.",
"By experimenting on a different domain in Tor (user-generated content), we show that the legal/illegal distinction generalizes across domains.",
"After discussing previous works in Section 2, we detail the datasets used in Section 3.",
"Differences in the vocabulary and named entities between the classes are analyzed in Section 4, before the presentation of the classification experiments (Section 5).",
"Section 6 presents additional experiments, which explore cross-domain classification.",
"We further analyze and discuss the findings in Section 7.",
"Related Work The detection of illegal activities on the Web is sometimes derived from a more general topic classification.",
"For example, Biryukov et al.",
"(2014) classified the content of Tor hidden services into 18 topical categories, only some of which correlate with illegal activity.",
"Graczyk and Kinningham (2015) combined unsupervised feature selection and an SVM classifier for the classification of drug sales in an anonymous marketplace.",
"While these works classified Tor texts into classes, they did not directly address the legal/illegal distinction.",
"Some works directly addressed a specific type of illegality and a particular communication context.",
"Morris and Hirst (2012) used an SVM classifier to identify sexual predators in chatting message systems.",
"The model includes both lexical features, including emoticons, and behavioral features that correspond to conversational patterns.",
"Another example is the detection of pedophile activity in peer-to-peer networks (Latapy et al., 2013) , where a predefined list of keywords was used to detect child-pornography queries.",
"Besides lexical features, we here consider other general linguistic properties, such as syntactic structure.",
"Al Nabki et al.",
"(2017) presented DUTA (Darknet Usage Text Addresses), the first publicly available Darknet dataset, together with a manual classification into topical categories and subcategories.",
"For some of the categories, legal and illegal activities are distinguished.",
"However, the automatic classification presented in their work focuses on the distinction between different classes of illegal activity, without addressing the distinction between legal and illegal ones, which is the subject of the present paper.",
"Al Nabki et al.",
"(2019) extended the dataset to form DUTA-10K, which we use here.",
"Their results show that 20% of the hidden services correspond to \"suspicious\" activities.",
"The analysis was conducted using the text classifier presented in Al Nabki et al.",
"(2017) and manual verification.",
"Recently, Avarikioti et al.",
"(2018) presented another topic classification of text from Tor together with a first classification into legal and illegal activities.",
"The experiments were performed on a newly crawled corpus obtained by recursive search.",
"The legal/illegal classification was done using an SVM classifier in an active learning setting with bag-of-words features.",
"Legality was assessed in a conservative way where illegality is assigned if the purpose of the content is an obviously illegal action, even if the content might be technically legal.",
"They found that a linear kernel worked best and reported an F1 score of 85% and an accuracy of 89%.",
"Using the dataset of Al Nabki et al.",
"(2019) , and focusing on specific topical categories, we here confirm the importance of content words in the classification, and explore the linguistic dimensions supporting classification into legal and illegal texts.",
"Datasets Used Onion corpus.",
"We experiment with data from Darknet websites containing legal and illegal activity, all from the DUTA-10K corpus (Al Nabki et al., 2019) .",
"We selected the \"drugs\" sub-domain as a test case, as it is a large domain in the corpus, that has a \"legal\" and \"illegal\" sub-categories, and where the distinction between them can be reliably made.",
"These websites advertise and sell drugs, often to international customers.",
"While legal websites often sell pharmaceuticals, illegal ones are often related to substance abuse.",
"These pages are directed by sellers to their customers.",
"eBay corpus.",
"As an additional dataset of similar size and characteristics, but from a clear net source, and of legal nature, we compiled a corpus of eBay pages.",
"eBay is one of the largest hosting sites for retail sellers of various goods.",
"Our corpus contains 118 item descriptions, each consisting of more than one sentence.",
"Item descriptions vary in price, item sold and seller.",
"The descriptions were selected by searching eBay for drug related terms, 3 and selecting search patterns to avoid over-repetition.",
"For example, where many sell the same product, only one example was added to the corpus.",
"Search queries also included filtering for price, so that each query resulted with different items.",
"Either because of advertisement strategies or the geographical dispersion of the sellers, the eBay corpus contains formal as well as informal language, and some item descriptions contain abbreviations and slang.",
"Importantly, eBay websites are assumed to conduct legal activity-even when discussing drug-related material, we find it is never the sale of illegal drugs but rather merchandise, tools, or otherwise related content.",
"Cleaning.",
"As preprocessing for all experiments, we apply some cleaning to the text of web pages in our corpora.",
"HTML markup is already removed in the original datasets, but much non-linguistic content remains, such as buttons, encryption keys, metadata and URLs.",
"We remove such text from the web pages, and join paragraphs to single lines (as newlines are sometimes present in the original dataset for display purposes only).",
"We then remove any duplicate paragraphs, where paragraphs are considered identical if they share all but numbers (to avoid an over-representation of some remaining surrounding text from the websites, e.g.",
"\"Showing all 9 results\").",
"Domain Differences As pointed out by Plank (2011) , there is no common ground as to what constitutes a domain.",
"Domain differences are attributed in some works to differences in vocabulary (Blitzer et al., 2006) and in other works to differences in style, genre and medium (McClosky, 2010) .",
"While here we adopt an existing classification, based on the DUTA-10K corpus, we show in which way and to what extent it translates to distinct properties of the texts.",
"This question bears on the possibility of distinguishing between legal and illegal drug-related websites based on their text alone (i.e., without recourse to additional information, such as meta-data or network structure).",
"We examine two types of domain differences between legal and illegal texts: vocabulary differences and named entities.",
"Vocabulary Differences To quantify the domain differences between texts from legal and illegal texts, we compute the frequency distribution of words in the eBay corpus, the legal and illegal drugs Onion corpora, and the entire Onion drug section (All Onion).",
"Since any two sets of texts are bound to show some disparity between them, we compare the differences between domains to a control setting, where we randomly split each examined corpus into two halves, and compute the frequency distribution of each of them.",
"The inner consistency of each corpus, defined as the similarity of distributions between the two halves, serves as a reference point for the similarity between domains.",
"We refer to this measure as \"self-distance\".",
"Following Plank and van Noord (2011) , we compute the Jensen-Shannon divergence and Variational distance (also known as L1 or Manhattan) as the comparison measures between the word frequency histograms.",
"The self-distance of each of the three corpora lies between 0.40 and 0.45 by the Jensen-Shannon divergence, but the distance between each pair is 0.60 to 0.65, with the three approximately forming an equilateral triangle in the space of word distributions.",
"Similar results are obtained using Variational distance, and are omitted for brevity.",
"These results suggest that rather than regarding all drug-related Onion texts as one domain, with legal and illegal texts as sub-domains, they should be treated as distinct domains.",
"Therefore, using Onion data to characterize the differences between illegal and legal linguistic attributes is sensible.",
"In fact, it is more sensible than comparing Illegal Onion to eBay text, as there the legal/illegal distinction may be confounded by the differences between eBay and Onion data.",
"Differences in Named Entities In order to analyze the difference in the distribution of named entities between the domains, we used a Wikification technique (Bunescu and Paşca, 2006) , i.e., linking entities to their corresponding article in Wikipedia.",
"Using spaCy's 4 named entity recognition, we first extract all named entity mentions from all the corpora.",
"5 We then search for relevant Wikipedia entries for each named entity using the DBpedia Ontology API (Daiber et al., 2013) .",
"For each domain we compute the total number of named entities and the percentage with corresponding Wikipedia articles.",
"The results were obtained by averaging the percentage of wikifiable named entities in each site per domain.",
"We also report the standard error for each average.",
"We use all named entity types provided by spaCy (and not only \"Product\") to get a broader perspective on the differences between the domains in terms of their named entities.",
"For example, the named entity \"Peru\" (of type \"Geopolitical Entity\") appears multiple times in Onion sites and is meant to imply the quality of a drug.",
"According to our results (Table 2), the Wikification success ratios of eBay and Illegal Onion named entities are comparable and relatively low.",
"However, sites selling legal drugs on Onion have a much higher Wikification percentage.",
"Presumably the named entities in Onion sites selling legal drugs are more easily found in public databases such as Wikipedia because they are mainly well-known names for legal pharmaceuticals.",
"However, in both Illegal Onion and eBay sites, the list of named entities includes many slang terms for illicit drugs and paraphernalia.",
"These slang terms are usually not well known by the general public, and are therefore less likely to be covered by Wikipedia and similar public databases.",
"In addition to the differences in Wikification ratios between the domains, it seems spaCy had trouble correctly identifying named entities in both Onion and eBay sites, possibly due to the common use of informal language and drug-related jargon.",
"Eyeballing the results, there were a fair number of false positives (words and phrases that were found by spaCy but were not actually named entities), especially in Illegal Onion sites.",
"In particular, slang terms for drugs, as well as abbreviated drug terms, for example \"kush\" or \"GBL\", were being falsely picked up by spaCy.",
"6 To summarize, results suggest both that (1) legal and illegal texts are different in terms of their named entities and their coverage in Wikipedia, as well as that (2) standard databases and standard NLP tools for named entity recognition (and potentially other text understanding tasks), require considerable adaptation to be fully functional on text related to illegal activity.",
"Classification Experiments Here we detail our experiments in classifying text from different legal and illegal domains using various methods, to find the most important linguistic features distinguishing between the domains.",
"Another goal of the classification task is to confirm our finding that the domains are distinguishable.",
"Experimental setup.",
"We split each subset among {eBay, Legal Onion, Illegal Onion} into training, validation and test.",
"We select 456 training paragraphs, 57 validation paragraphs and 58 test paragraphs for each category (approximately an 80%/10%/10% split), randomly downsampling larger categories for an even division of labels.",
"Model.",
"To classify paragraphs into categories, we experiment with five classifiers: • NB (Naive Bayes) classifier with binary bagof-words features, i.e., indicator feature for each word.",
"This simple classifier features frequently in work on text classification in the Darknet.",
"• SVM (support vector machine) classifier with an RBF kernel, also with BoW features that count the number of words of each type.",
"• BoE (bag-of-embeddings): each word is represented with its 100-dimensional GloVe vector (Pennington et al., 2014) .",
"BoE_sum (BoE_average) sums (averages) the embeddings for all words in the paragraph to a single vector, and applies a 100-dimensional fully-connected layer with ReLU nonlinearity and dropout p = 0.2.",
"The word vectors are not updated during training.",
"Vectors for words not found in GloVe are set randomly ∼ N(µ_GloVe, σ²_GloVe).",
"• seq2vec: same as BoE, but instead of averaging word vectors, we apply a single-layer 100-dimensional BiLSTM to the word vectors, and take the concatenated final hidden vectors from the forward and backward part as the input to a fully-connected layer (same hyper-parameters as above).",
"• attention: we replace the word representations with contextualized pre-trained representations from ELMo .",
"We then apply a self-attentive classification network (McCann et al., 2017) over the contextualized representations.",
"This architecture has proved very effective for classification in recent work (Tutek and Šnajder, 2018; Shen et al., 2018) .",
"For the NB classifier we use BernoulliNB from scikit-learn with α = 1, and for the SVM classifier we use SVC, also from scikit-learn, with γ = scale and tolerance = 10⁻⁵.",
"We use the AllenNLP library (https://allennlp.org) to implement the neural network classifiers, and scikit-learn (https://scikit-learn.org) for the NB and SVM classifiers.",
"Data manipulation.",
"In order to isolate what factors contribute to the classifiers' performance, we experiment with four manipulations to the input data (in training, validation and testing).",
"Specifically, we examine the impact of variations in the content words, function words and shallow syntactic structure (represented through POS tags).",
"For this purpose, we consider content words as words whose universal part-of-speech according to spaCy is one of the following: Results when applying these manipulations are compared to the full condition, where all words are available.",
"Settings.",
"We experiment with two settings, classifying paragraphs from different domains: • Training and testing on eBay pages vs. Legal drug-related Onion pages, as a control experiment to identify whether Onion pages differ from clear net pages.",
"• Training and testing on Legal Onion vs.",
"Illegal Onion drugs-related pages, to identify the difference in language between legal and illegal activity on Onion drug-related websites.",
"Results The accuracy scores for the different classifiers and settings are reported in Table 3 .",
"confirmed by the drop in accuracy when content words are removed.",
"However, in this setting (drop cont.",
"), non-trivial performance is still obtained by the SVM classifier, suggesting that the domains are distinguishable (albeit to a lesser extent) based on the function word distribution alone.",
"Surprisingly, the more sophisticated neural classifiers perform worse than Naive Bayes.",
"This is despite using pre-trained word embeddings, and architectures that have proven beneficial for text classification.",
"It is likely that this is due to the small size of the training data, as well as the specialized vocabulary found in this domain, which is unlikely to be supported well by the pre-trained embeddings (see §4.2).",
"Legal vs. illegal drugs.",
"Classifying legal and illegal pages within the drugs domain on Onion proved to be a more difficult task.",
"However, where content words are replaced with their POS tags, the SVM classifier distinguishes between legal and illegal texts with quite a high accuracy (70.7% on a balanced test set).",
"This suggests that the syntactic structure is sufficiently different between the domains, so as to make them distinguishable in terms of their distribution of grammatical categories.",
"Illegality Detection Across Domains To investigate illegality detection across different domains, we perform classification experiments on the \"forums\" category that is also separated into legal and illegal sub-categories in DUTA-10K.",
"The forums contain user-written text in various topics.",
"Legal forums often discuss web design and other technical and non-technical activity on the internet, while illegal ones involve discussions about cyber-crimes and guides on how to commit them, as well as narcotics, racism and other criminal activities.",
"As this domain contains user-generated content, it is more varied and noisy.",
"Experimental setup We use the cleaning process described in Section 3 and data splitting described in Section 5, with the same number of paragraphs.",
"We experiment with two settings: • Training and testing on Onion legal vs. illegal forums, to evaluate whether the insights observed in the drugs domain generalize to user-generated content.",
"• Training on Onion legal vs. illegal drugs-related pages, and testing on Onion legal vs. illegal forums.",
"This cross-domain evaluation reveals whether the distinctions learned on the drugs domain generalize directly to the forums domain.",
"Results Accuracy scores are reported in Table 4 .",
"Legal vs. illegal forums.",
"Results when training and testing on forums data are much worse for the neural-based systems, probably due to the much noisier and more varied nature of the data.",
"However, the SVM model achieves an accuracy of 85.3% in the full setting.",
"Good performance is presented by this model even in the cases where the content words are dropped (drop.",
"cont.)",
"or replaced by part-of-speech tags (pos cont.",
"), underscoring the distinguishability of legal in illegal content based on shallow syntactic structure in this domain as well.",
"Cross-domain evaluation.",
"Surprisingly, training on drugs data and evaluating on forums performs much better than in the in-domain setting for four out of five classifiers.",
"This implies that while the forums data is noisy, it can be accurately classified into legal and illegal content when training on the cleaner drugs data.",
"This also shows that illegal texts in Tor share common properties regardless of topical category.",
"The much lower results obtained by the models where content words are dropped (drop cont.)",
"or converted to POS tags (pos cont.",
"), namely less than 70% as opposed to 89.7% when function words are dropped, suggest that some of these properties are lexical.",
"Discussion As shown in Section 4, the Legal Onion and Illegal Onion domains are quite distant in terms of word distribution and named entity Wikification.",
"Moreover, named entity recognition and Wikification work less well for the illegal domain, and so do state-of-the-art neural text classification architectures (Section 5), which present inferior results to simple bag-of-words model.",
"This is likely a result of the different vocabulary and syntax of text from Onion domain, compared to standard domains used for training NLP models and pretrained word embeddings.",
"This conclusion has practical implications: to effectively process text in Onion, considerable domain adaptation should be performed, and effort should be made to annotate data and extend standard knowledge bases to cover this idiosyncratic domain.",
"Another conclusion from the classification experiments is that the Legal Onion and Illegal Onion texts are harder to distinguish than eBay and Legal Onion, meaning that deciding on domain boundaries should consider syntactic structure, and not only lexical differences.",
"Analysis of texts from the datasets.",
"Looking at specific sentences (Figure 1 ) reveals that Legal Onion and Illegal Onion are easy to distinguish based on the identity of certain words, e.g., terms for legal and illegal drugs, respectively.",
"Thus looking at the word forms is already a good solution for tackling this classification problem, which gives further insight as to why modern text classification (e.g., neural networks) do not present an advantage in terms of accuracy.",
"Analysis of manipulated texts.",
"Given that replacing content words with their POS tags substantially lowers performance for classification of legal vs illegal drug-related texts (see \"pos cont.\"",
"in Section 5), we conclude that the distribution of parts of speech alone is not as strong a signal as the word forms for distinguishing between the domains.",
"However, the SVM model does manage to distinguish between the texts even in this setting.",
"Indeed, Figure 2 demonstrates that there are easily identifiable patterns distinguishing between the domains, but that a bag-of-words approach may not be sufficiently expressive to identify them.",
"Analysis of learned feature weights.",
"As the Naive Bayes classifier was the most successful at distinguishing legal from illegal texts in the full setting (without input manipulation), we may conclude that the very occurrence of certain words provides a strong indication that an instance is taken from one class or the other.",
"Table 5 shows the most indicative features learned by the Naive Bayes classifier for the Legal Onion vs.",
"Illegal Onion classification in this setting.",
"Interestingly, many strong features are function words, providing another indication of the different distribution of function words in the two domains.",
"Conclusion In this paper we identified several distinguishing factors between legal and illegal texts, taking a variety of approaches, predictive (text classification), application-based (named entity Wikification), as well as an approach based on raw statistics.",
"Our results revealed that legal and illegal texts on the Darknet are not only distinguishable in terms of their words, but also in terms of their shallow syntactic structure, manifested in their POS tag and function word distributions.",
"Distinguishing features between legal and illegal texts are consistent enough between domains, so that a classifier trained on drug-related websites can be straightforwardly ported to classify legal and illegal texts from another Darknet domain (forums).",
"Our results also show that in terms of vocabulary, legal texts and illegal texts are as distant from each other, as from comparable texts from eBay.",
"We conclude from this investigation that Onion pages provide an attractive testbed for studying distinguishing factors between the text of legal and illegal webpages, as they present challenges to off-the-shelf NLP tools, but at the same time have sufficient self-consistency to allow studies of the linguistic signals that separate these classes."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"4.2",
"5",
"5.1",
"6",
"6.1",
"6.2",
"7",
"8"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Datasets Used",
"Domain Differences",
"Vocabulary Differences",
"Differences in Named Entities",
"Classification Experiments",
"Results",
"Illegality Detection Across Domains",
"Experimental setup",
"Results",
"Discussion",
"Conclusion"
]
}
|
GEM-SciDuet-train-54#paper-1096#slide-5
|
Data Control Data eBay
|
Product descriptions acquired by searching drug-related terms.
Do not sell actual drugs, but rather drug-related products.
3 Layers Chip Style Herb Herbal Tobacco Grinder Weed Grinders
Type : Tobacco Crusher
|
Product descriptions acquired by searching drug-related terms.
Do not sell actual drugs, but rather drug-related products.
3 Layers Chip Style Herb Herbal Tobacco Grinder Weed Grinders
Type : Tobacco Crusher
|
[] |
GEM-SciDuet-train-54#paper-1096#slide-6
|
1096
|
The Language of Legal and Illegal Activity on the Darknet
|
The non-indexed parts of the Internet (the Darknet) have become a haven for both legal and illegal anonymous activity. Given the magnitude of these networks, scalably monitoring their activity necessarily relies on automated tools, and notably on NLP tools. However, little is known about what characteristics texts communicated through the Darknet have, and how well off-the-shelf NLP tools do on this domain. This paper tackles this gap and performs an in-depth investigation of the characteristics of legal and illegal text in the Darknet, comparing it to a clear net website with similar content as a control condition. Taking drug-related websites as a test case, we find that texts for selling legal and illegal drugs have several linguistic characteristics that distinguish them from one another, as well as from the control condition, among them the distribution of POS tags, and the coverage of their named entities in Wikipedia. 1
|
{
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189
],
"paper_content_text": [
"Introduction The term \"Darknet\" refers to the subset of Internet sites and pages that are not indexed by search engines.",
"The Darknet is often associated with the \".onion\" top-level domain, whose websites are referred to as \"Onion sites\", and are reachable via the Tor network anonymously.",
"Under the cloak of anonymity, the Darknet harbors much illegal activity (Moore and Rid, 2016) .",
"Applying NLP tools to text from the Darknet is thus important for effective law enforcement and intelligence.",
"However, little is known about the characteristics of the language used in the Darknet, and specifically on what distinguishes text on websites that conduct legal and illegal activity.",
"In * Equal contribution 1 Our code can be found in https://github.com/ huji-nlp/cyber.",
"Our data is available upon request.",
"fact, the only work we are aware of that classified Darknet texts into legal and illegal activity is Avarikioti et al.",
"(2018) , but they too did not investigate in what ways these two classes differ.",
"This paper addresses this gap, and studies the distinguishing features between legal and illegal texts in Onion sites, taking sites that advertise drugs as a test case.",
"We compare our results to a control condition of texts from eBay 2 pages that advertise products corresponding to drug keywords.",
"We find a number of distinguishing features.",
"First, we confirm the results of Avarikioti et al.",
"(2018) , that text from legal and illegal pages (henceforth, legal and illegal texts) can be distinguished based on the identity of the content words (bag-of-words) in about 90% accuracy over a balanced sample.",
"Second, we find that the distribution of POS tags in the documents is a strong cue for distinguishing between the classes (about 71% accuracy).",
"This indicates that the two classes are different in terms of their syntactic structure.",
"Third, we find that legal and illegal texts are roughly as distinguishable from one another as legal texts and eBay pages are (both in terms of their words and their POS tags).",
"The latter point suggests that legal and illegal texts can be considered distinct domains, which explains why they can be automatically classified, but also implies that applying NLP tools to Darknet texts is likely to face the obstacles of domain adaptation.",
"Indeed, we show that named entities in illegal pages are covered less well by Wikipedia, i.e., Wikification works less well on them.",
"This suggests that for high-performance text understanding, specialized knowledge bases and tools may be needed for processing texts from the Darknet.",
"By experimenting on a different domain in Tor (user-generated content), we show that the legal/illegal distinction generalizes across domains.",
"After discussing previous works in Section 2, we detail the datasets used in Section 3.",
"Differences in the vocabulary and named entities between the classes are analyzed in Section 4, before the presentation of the classification experiments (Section 5).",
"Section 6 presents additional experiments, which explore cross-domain classification.",
"We further analyze and discuss the findings in Section 7.",
"Related Work The detection of illegal activities on the Web is sometimes derived from a more general topic classification.",
"For example, Biryukov et al.",
"(2014) classified the content of Tor hidden services into 18 topical categories, only some of which correlate with illegal activity.",
"Graczyk and Kinningham (2015) combined unsupervised feature selection and an SVM classifier for the classification of drug sales in an anonymous marketplace.",
"While these works classified Tor texts into classes, they did not directly address the legal/illegal distinction.",
"Some works directly addressed a specific type of illegality and a particular communication context.",
"Morris and Hirst (2012) used an SVM classifier to identify sexual predators in chatting message systems.",
"The model includes both lexical features, including emoticons, and behavioral features that correspond to conversational patterns.",
"Another example is the detection of pedophile activity in peer-to-peer networks (Latapy et al., 2013) , where a predefined list of keywords was used to detect child-pornography queries.",
"Besides lexical features, we here consider other general linguistic properties, such as syntactic structure.",
"Al Nabki et al.",
"(2017) presented DUTA (Darknet Usage Text Addresses), the first publicly available Darknet dataset, together with a manual classification into topical categories and subcategories.",
"For some of the categories, legal and illegal activities are distinguished.",
"However, the automatic classification presented in their work focuses on the distinction between different classes of illegal activity, without addressing the distinction between legal and illegal ones, which is the subject of the present paper.",
"Al Nabki et al.",
"(2019) extended the dataset to form DUTA-10K, which we use here.",
"Their results show that 20% of the hidden services correspond to \"suspicious\" activities.",
"The analysis was conducted using the text classifier presented in Al Nabki et al.",
"(2017) and manual verification.",
"Recently, Avarikioti et al.",
"(2018) presented another topic classification of text from Tor together with a first classification into legal and illegal activities.",
"The experiments were performed on a newly crawled corpus obtained by recursive search.",
"The legal/illegal classification was done using an SVM classifier in an active learning setting with bag-of-words features.",
"Legality was assessed in a conservative way where illegality is assigned if the purpose of the content is an obviously illegal action, even if the content might be technically legal.",
"They found that a linear kernel worked best and reported an F1 score of 85% and an accuracy of 89%.",
"Using the dataset of Al Nabki et al.",
"(2019) , and focusing on specific topical categories, we here confirm the importance of content words in the classification, and explore the linguistic dimensions supporting classification into legal and illegal texts.",
"Datasets Used Onion corpus.",
"We experiment with data from Darknet websites containing legal and illegal activity, all from the DUTA-10K corpus (Al Nabki et al., 2019) .",
"We selected the \"drugs\" sub-domain as a test case, as it is a large domain in the corpus, that has a \"legal\" and \"illegal\" sub-categories, and where the distinction between them can be reliably made.",
"These websites advertise and sell drugs, often to international customers.",
"While legal websites often sell pharmaceuticals, illegal ones are often related to substance abuse.",
"These pages are directed by sellers to their customers.",
"eBay corpus.",
"As an additional dataset of similar size and characteristics, but from a clear net source, and of legal nature, we compiled a corpus of eBay pages.",
"eBay is one of the largest hosting sites for retail sellers of various goods.",
"Our corpus contains 118 item descriptions, each consisting of more than one sentence.",
"Item descriptions vary in price, item sold and seller.",
"The descriptions were selected by searching eBay for drug related terms, 3 and selecting search patterns to avoid over-repetition.",
"For example, where many sell the same product, only one example was added to the corpus.",
"Search queries also included filtering for price, so that each query resulted with different items.",
"Either because of advertisement strategies or the geographical dispersion of the sellers, the eBay corpus contains formal as well as informal language, and some item descriptions contain abbreviations and slang.",
"Importantly, eBay websites are assumed to conduct legal activity-even when discussing drug-related material, we find it is never the sale of illegal drugs but rather merchandise, tools, or otherwise related content.",
"Cleaning.",
"As preprocessing for all experiments, we apply some cleaning to the text of web pages in our corpora.",
"HTML markup is already removed in the original datasets, but much non-linguistic content remains, such as buttons, encryption keys, metadata and URLs.",
"We remove such text from the web pages, and join paragraphs to single lines (as newlines are sometimes present in the original dataset for display purposes only).",
"We then remove any duplicate paragraphs, where paragraphs are considered identical if they share all but numbers (to avoid an over-representation of some remaining surrounding text from the websites, e.g.",
"\"Showing all 9 results\").",
"Domain Differences As pointed out by Plank (2011) , there is no common ground as to what constitutes a domain.",
"Domain differences are attributed in some works to differences in vocabulary (Blitzer et al., 2006) and in other works to differences in style, genre and medium (McClosky, 2010) .",
"While here we adopt an existing classification, based on the DUTA-10K corpus, we show in which way and to what extent it translates to distinct properties of the texts.",
"This question bears on the possibility of distinguishing between legal and illegal drug-related websites based on their text alone (i.e., without recourse to additional information, such as meta-data or network structure).",
"We examine two types of domain differences between legal and illegal texts: vocabulary differences and named entities.",
"Vocabulary Differences To quantify the domain differences between texts from legal and illegal texts, we compute the frequency distribution of words in the eBay corpus, the legal and illegal drugs Onion corpora, and the entire Onion drug section (All Onion).",
"Since any two sets of texts are bound to show some disparity between them, we compare the differences between domains to a control setting, where we randomly split each examined corpus into two halves, and compute the frequency distribution of each of them.",
"The inner consistency of each corpus, defined as the similarity of distributions between the two halves, serves as a reference point for the similarity between domains.",
"We refer to this measure as \"self-distance\".",
"Following Plank and van Noord (2011) , we compute the Jensen-Shannon divergence and Variational distance (also known as L1 or Manhattan) as the comparison measures between the word frequency histograms.",
"The self-distance of the three corpora lies between 0.40 and 0.45 by the Jensen-Shannon divergence, but the distance between each pair is 0.60 to 0.65, with the three approximately forming an equilateral triangle in the space of word distributions.",
"Similar results are obtained using Variational distance, and are omitted for brevity.",
"These results suggest that rather than regarding all drug-related Onion texts as one domain, with legal and illegal texts as sub-domains, they should be treated as distinct domains.",
"Therefore, using Onion data to characterize the differences between illegal and legal linguistic attributes is sensible.",
"In fact, it is more sensible than comparing Illegal Onion to eBay text, as there the legal/illegal distinction may be confounded by the differences between eBay and Onion data.",
"Differences in Named Entities In order to analyze the difference in the distribution of named entities between the domains, we used a Wikification technique (Bunescu and Paşca, 2006) , i.e., linking entities to their corresponding article in Wikipedia.",
"Using spaCy's 4 named entity recognition, we first extract all named entity mentions from all the corpora.",
"5 We then search for relevant Wikipedia entries for each named entity using the DBpedia Ontology API (Daiber et al., 2013) .",
"For each domain we compute the total number of named entities and the percentage with corresponding Wikipedia articles.",
"The results were obtained by averaging the percentage of wikifiable named entities in each site per domain.",
"We also report the standard error for each average.",
"According to our results (Table 2) , the Wikification success ratios of eBay and Illegal 4 https://spacy.io 5 We use all named entity types provided by spaCy (and not only \"Product\") to get a broader perspective on the differences between the domains in terms of their named entities.",
"For example, the named entity \"Peru\" (of type \"Geopolitical Entity\") appears multiple times in Onion sites and is meant to imply the quality of a drug.",
"Onion named entities is comparable and relatively low.",
"However, sites selling legal drugs on Onion have a much higher Wikification percentage.",
"Presumably the named entities in Onion sites selling legal drugs are more easily found in public databases such as Wikipedia because they are mainly well-known names for legal pharmaceuticals.",
"However, in both Illegal Onion and eBay sites, the list of named entities includes many slang terms for illicit drugs and paraphernalia.",
"These slang terms are usually not well known by the general public, and are therefore less likely to be covered by Wikipedia and similar public databases.",
"In addition to the differences in Wikification ratios between the domains, it seems spaCy had trouble correctly identifying named entities in both Onion and eBay sites, possibly due to the common use of informal language and drugrelated jargon.",
"Eyeballing the results, there were a fair number of false positives (words and phrases that were found by spaCy but were not actually named entities), especially in Illegal Onion sites.",
"In particular, slang terms for drugs, as well as abbreviated drug terms, for example \"kush\" or \"GBL\", were being falsely picked up by spaCy.",
"6 To summarize, results suggest both that (1) legal and illegal texts are different in terms of their named entities and their coverage in Wikipedia, as well as that (2) standard databases and standard NLP tools for named entity recognition (and potentially other text understanding tasks), require considerable adaptation to be fully functional on text related to illegal activity.",
"Classification Experiments Here we detail our experiments in classifying text from different legal and illegal domains using various methods, to find the most important linguistic features distinguishing between the domains.",
"Another goal of the classification task is to confirm our finding that the domains are distinguishable.",
"Experimental setup.",
"We split each subset among {eBay, Legal Onion, Illegal Onion} into training, validation and test.",
"We select 456 training paragraphs, 57 validation paragraphs and 58 test paragraphs for each category (approximately a 80%/10%/10% split), randomly downsampling larger categories for an even division of labels.",
"Model.",
"To classify paragraphs into categories, we experiment with five classifiers: • NB (Naive Bayes) classifier with binary bagof-words features, i.e., indicator feature for each word.",
"This simple classifier features frequently in work on text classification in the Darknet.",
"• SVM (support vector machine) classifier with an RBF kernel, also with BoW features that count the number of words of each type.",
"• BoE (bag-of-embeddings): each word is represented with its 100-dimensional GloVe vector (Pennington et al., 2014) .",
"BoE sum (BoE average ) sums (averages) the embeddings for all words in the paragraph to a single vector, and applies a 100-dimensional fully-connected layer with ReLU nonlinearity and dropout p = 0.2.",
"The word vectors are not updated during training.",
"Vectors for words not found in GloVe are set randomly ∼ N (µ GloVe , σ 2 GloVe ).",
"• seq2vec: same as BoE, but instead of averaging word vectors, we apply a single-layer 100-dimensional BiLSTM to the word vectors, and take the concatenated final hidden vectors from the forward and backward part as the input to a fully-connected layer (same hyper-parameters as above).",
"• attention: we replace the word representations with contextualized pre-trained representations from ELMo .",
"We then apply a self-attentive classification network (McCann et al., 2017) over the contextualized representations.",
"This architecture has proved very effective for classification in recent work (Tutek and Šnajder, 2018; Shen et al., 2018) .",
"For the NB classifier we use BernoulliNB from scikit-learn 7 with α = 1, and for the SVM classifier we use SVC, also from scikit-learn, with γ = scale and tolerance=10 −5 .",
"We use the AllenNLP library 8 to implement the neural network classifiers.",
"7 https://scikit-learn.org 8 https://allennlp.org Data manipulation.",
"In order to isolate what factors contribute to the classifiers' performance, we experiment with four manipulations to the input data (in training, validation and testing).",
"Specifically, we examine the impact of variations in the content words, function words and shallow syntactic structure (represented through POS tags).",
"For this purpose, we consider content words as words whose universal part-of-speech according to spaCy is one of the following: Results when applying these manipulations are compared to the full condition, where all words are available.",
"Settings.",
"We experiment with two settings, classifying paragraphs from different domains: • Training and testing on eBay pages vs. Legal drug-related Onion pages, as a control experiment to identify whether Onion pages differ from clear net pages.",
"• Training and testing on Legal Onion vs.",
"Illegal Onion drugs-related pages, to identify the difference in language between legal and illegal activity on Onion drug-related websites.",
"Results The accuracy scores for the different classifiers and settings are reported in Table 3 .",
"confirmed by the drop in accuracy when content words are removed.",
"However, in this setting (drop cont.",
"), non-trivial performance is still obtained by the SVM classifier, suggesting that the domains are distinguishable (albeit to a lesser extent) based on the function word distribution alone.",
"Surprisingly, the more sophisticated neural classifiers perform worse than Naive Bayes.",
"This is despite using pre-trained word embeddings, and architectures that have proven beneficial for text classification.",
"It is likely that this is due to the small size of the training data, as well as the specialized vocabulary found in this domain, which is unlikely to be supported well by the pre-trained embeddings (see §4.2).",
"Legal vs. illegal drugs.",
"Classifying legal and illegal pages within the drugs domain on Onion proved to be a more difficult task.",
"However, where content words are replaced with their POS tags, the SVM classifier distinguishes between legal and illegal texts with quite a high accuracy (70.7% on a balanced test set).",
"This suggests that the syntactic structure is sufficiently different between the domains, so as to make them distinguishable in terms of their distribution of grammatical categories.",
"Illegality Detection Across Domains To investigate illegality detection across different domains, we perform classification experiments on the \"forums\" category that is also separated into legal and illegal sub-categories in DUTA-10K.",
"The forums contain user-written text in various topics.",
"Legal forums often discuss web design and other technical and non-technical activity on the internet, while illegal ones involve discussions about cyber-crimes and guides on how to commit them, as well as narcotics, racism and other criminal activities.",
"As this domain contains usergenerated content, it is more varied and noisy.",
"Experimental setup We use the cleaning process described in Section 3 and data splitting described in Section 5, with the same number of paragraphs.",
"We experiment with two settings: • Training and testing on Onion legal vs. illegal forums, to evaluate whether the insights observed in the drugs domain generalize to user-generated content.",
"• Training on Onion legal vs. illegal drugsrelated pages, and testing on Onion legal vs. illegal forums.",
"This cross-domain evaluation reveals whether the distinctions learned on the drugs domain generalize directly to the forums domain.",
"Results Accuracy scores are reported in Table 4 .",
"Legal vs. illegal forums.",
"Results when training and testing on forums data are much worse for the neural-based systems, probably due to the much noisier and more varied nature of the data.",
"However, the SVM model achieves an accuracy of 85.3% in the full setting.",
"Good performance is presented by this model even in the cases where the content words are dropped (drop.",
"cont.)",
"or replaced by part-of-speech tags (pos cont.",
"), underscoring the distinguishability of legal and illegal content based on shallow syntactic structure in this domain as well.",
"Cross-domain evaluation.",
"Surprisingly, training on drugs data and evaluating on forums performs much better than in the in-domain setting for four out of five classifiers.",
"This implies that while the forums data is noisy, it can be accurately classified into legal and illegal content when training on the cleaner drugs data.",
"This also shows that illegal texts in Tor share common properties regardless of topical category.",
"The much lower results obtained by the models where content words are dropped (drop cont.)",
"or converted to POS tags (pos cont.",
"), namely less than 70% as opposed to 89.7% when function words are dropped, suggest that some of these properties are lexical.",
"Discussion As shown in Section 4, the Legal Onion and Illegal Onion domains are quite distant in terms of word distribution and named entity Wikification.",
"Moreover, named entity recognition and Wikification work less well for the illegal domain, and so do state-of-the-art neural text classification architectures (Section 5), which present inferior results to a simple bag-of-words model.",
"This is likely a result of the different vocabulary and syntax of text from Onion domain, compared to standard domains used for training NLP models and pretrained word embeddings.",
"This conclusion has practical implications: to effectively process text in Onion, considerable domain adaptation should be performed, and effort should be made to annotate data and extend standard knowledge bases to cover this idiosyncratic domain.",
"Another conclusion from the classification experiments is that the Onion Legal and Illegal Onion texts are harder to distinguish than eBay and Legal Onion, meaning that deciding on domain boundaries should consider syntactic structure, and not only lexical differences.",
"Analysis of texts from the datasets.",
"Looking at specific sentences (Figure 1 ) reveals that Legal Onion and Illegal Onion are easy to distinguish based on the identity of certain words, e.g., terms for legal and illegal drugs, respectively.",
"Thus looking at the word forms is already a good solution for tackling this classification problem, which gives further insight as to why modern text classifiers (e.g., neural networks) do not present an advantage in terms of accuracy.",
"Analysis of manipulated texts.",
"Given that replacing content words with their POS tags substantially lowers performance for classification of legal vs illegal drug-related texts (see \"pos cont.\"",
"in Section 5), we conclude that the distribution of parts of speech alone is not as strong a signal as the word forms for distinguishing between the domains.",
"However, the SVM model does manage to distinguish between the texts even in this setting.",
"Indeed, Figure 2 demonstrates that there are easily identifiable patterns distinguishing between the domains, but that a bag-of-words approach may not be sufficiently expressive to identify them.",
"Analysis of learned feature weights.",
"As the Naive Bayes classifier was the most successful at distinguishing legal from illegal texts in the full setting (without input manipulation), we may conclude that the very occurrence of certain words provides a strong indication that an instance is taken from one class or the other.",
"Table 5 shows the most indicative features learned by the Naive Bayes classifier for the Legal Onion vs.",
"Illegal Onion classification in this setting.",
"Interestingly, many strong features are function words, providing another indication of the different distribution of function words in the two domains.",
"Conclusion In this paper we identified several distinguishing factors between legal and illegal texts, taking a variety of approaches, predictive (text classification), application-based (named entity Wikification), as well as an approach based on raw statistics.",
"Our results revealed that legal and illegal texts on the Darknet are not only distinguishable in terms of their words, but also in terms of their shallow syntactic structure, manifested in their POS tag and function word distributions.",
"Distinguishing features between legal and illegal texts are consistent enough between domains, so that a classifier trained on drug-related websites can be straightforwardly ported to classify legal and illegal texts from another Darknet domain (forums).",
"Our results also show that in terms of vocabulary, legal texts and illegal texts are as distant from each other, as from comparable texts from eBay.",
"We conclude from this investigation that Onion pages provide an attractive testbed for studying distinguishing factors between the text of legal and illegal webpages, as they present challenges to offthe-shelf NLP tools, but at the same time have sufficient self-consistency to allow studies of the linguistic signals that separate these classes."
]
}
|
{
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"4.2",
"5",
"5.1",
"6",
"6.1",
"6.2",
"7",
"8"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Datasets Used",
"Domain Differences",
"Vocabulary Differences",
"Differences in Named Entities",
"Classification Experiments",
"Results",
"Illegality Detection Across Domains",
"Experimental setup",
"Results",
"Discussion",
"Conclusion"
]
}
|
GEM-SciDuet-train-54#paper-1096#slide-6
|
Data
|
Public Web Dark Web
|
Public Web Dark Web
|
[] |