{
"paper_id": "P11-1034",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:47:16.648019Z"
},
"title": "A Pilot Study of Opinion Summarization in Conversations",
"authors": [
{
"first": "Dong",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Texas at Dallas",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Texas at Dallas",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents a pilot study of opinion summarization on conversations. We create a corpus containing extractive and abstractive summaries of speaker's opinion towards a given topic using 88 telephone conversations. We adopt two methods to perform extractive summarization. The first one is a sentence-ranking method that linearly combines scores measured from different aspects including topic relevance, subjectivity, and sentence importance. The second one is a graph-based method, which incorporates topic and sentiment information, as well as additional information about sentence-to-sentence relations extracted based on dialogue structure. Our evaluation results show that both methods significantly outperform the baseline approach that extracts the longest utterances. In particular, we find that incorporating dialogue structure in the graph-based method contributes to the improved system performance.",
"pdf_parse": {
"paper_id": "P11-1034",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents a pilot study of opinion summarization on conversations. We create a corpus containing extractive and abstractive summaries of speaker's opinion towards a given topic using 88 telephone conversations. We adopt two methods to perform extractive summarization. The first one is a sentence-ranking method that linearly combines scores measured from different aspects including topic relevance, subjectivity, and sentence importance. The second one is a graph-based method, which incorporates topic and sentiment information, as well as additional information about sentence-to-sentence relations extracted based on dialogue structure. Our evaluation results show that both methods significantly outperform the baseline approach that extracts the longest utterances. In particular, we find that incorporating dialogue structure in the graph-based method contributes to the improved system performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Both sentiment analysis (opinion recognition) and summarization have been well studied in recent years in the natural language processing (NLP) community. Most of the previous work on sentiment analysis has been conducted on reviews. Summarization has been applied to different genres, such as news articles, scientific articles, and speech domains including broadcast news, meetings, conversations and lectures. However, opinion summarization has not been explored much. This can be useful for many domains, especially for processing the increasing amount of conversation recordings (telephone conversations, customer service, round-table discussions or interviews in broadcast programs) where we often need to find a person's opinion or attitude, for example, \"how does the speaker think about capital punishment and why?\". This kind of questions can be treated as a topic-oriented opinion summarization task. Opinion summarization was run as a pilot task in Text Analysis Conference (TAC) in 2008. The task was to produce summaries of opinions on specified targets from a set of blog documents. In this study, we investigate this problem using spontaneous conversations. The problem is defined as, given a conversation and a topic, a summarization system needs to generate a summary of the speaker's opinion towards the topic. This task is built upon opinion recognition and topic or query based summarization. However, this problem is challenging in that: (a) Summarization in spontaneous speech is more difficult than well structured text (Mckeown et al., 2005) , because speech is always less organized and has recognition errors when using speech recognition output; (b) Sentiment analysis in dialogues is also much harder because of the genre difference compared to other domains like product reviews or news resources, as reported in (Raaijmakers et al., 2008) ; (c) In conversational speech, information density is low and there are often off topic discussions, therefore presenting a need to identify utterances that are relevant to the topic.",
"cite_spans": [
{
"start": 1544,
"end": 1566,
"text": "(Mckeown et al., 2005)",
"ref_id": "BIBREF13"
},
{
"start": 1843,
"end": 1869,
"text": "(Raaijmakers et al., 2008)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we perform an exploratory study on opinion summarization in conversations. We compare two unsupervised methods that have been widely used in extractive summarization: sentenceranking and graph-based methods. Our system attempts to incorporate more information about topic relevancy and sentiment scores. Furthermore, in the graph-based method, we propose to better incorporate the dialogue structure information in the graph in order to select salient summary utterances. We have created a corpus of reasonable size in this study. Our experimental results show that both methods achieve better results compared to the baseline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of this paper is organized as follows. Section 2 briefly discusses related work. Section 3 describes the corpus and annotation scheme we used. We explain our opinion-oriented conversation summarization system in Section 4 and present experimental results and analysis in Section 5. Section 6 concludes the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Research in document summarization has been well established over the past decades. Many tasks have been defined such as single-document summarization, multi-document summarization, and querybased summarization. Previous studies have used various domains, including news articles, scientific articles, web documents, reviews. Recently there is an increasing research interest in speech summarization, such as conversational telephone speech (Zhu and Penn, 2006; Zechner, 2002) , broadcast news Lin et al., 2009) , lectures (Zhang et al., 2007; Furui et al., 2004) , meetings (Murray et al., 2005; Xie and Liu, 2010) , voice mails (Koumpis and Renals, 2005) . In general speech domains seem to be more difficult than well written text for summarization. In previous work, unsupervised methods like Maximal Marginal Relevance (MMR), Latent Semantic Analysis (LSA), and supervised methods that cast the extraction problem as a binary classification task have been adopted. Prior research has also explored using speech specific information, including prosodic features, dialog structure, and speech recognition confidence.",
"cite_spans": [
{
"start": 441,
"end": 461,
"text": "(Zhu and Penn, 2006;",
"ref_id": "BIBREF31"
},
{
"start": 462,
"end": 476,
"text": "Zechner, 2002)",
"ref_id": "BIBREF28"
},
{
"start": 494,
"end": 511,
"text": "Lin et al., 2009)",
"ref_id": "BIBREF30"
},
{
"start": 523,
"end": 543,
"text": "(Zhang et al., 2007;",
"ref_id": "BIBREF29"
},
{
"start": 544,
"end": 563,
"text": "Furui et al., 2004)",
"ref_id": "BIBREF2"
},
{
"start": 575,
"end": 596,
"text": "(Murray et al., 2005;",
"ref_id": "BIBREF15"
},
{
"start": 597,
"end": 615,
"text": "Xie and Liu, 2010)",
"ref_id": "BIBREF27"
},
{
"start": 630,
"end": 656,
"text": "(Koumpis and Renals, 2005)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In order to provide a summary over opinions, we need to find out which utterances in the conversation contain opinion. Most previous work in senti-ment analysis has focused on reviews (Pang and Lee, 2004; Popescu and Etzioni, 2005; Ng et al., 2006) and news resources (Wiebe and Riloff, 2005) . Many kinds of features are explored, such as lexical features (unigram, bigram and trigram), part-of-speech tags, dependency relations. Most of prior work used classification methods such as naive Bayes or SVMs to perform the polarity classification or opinion detection. Only a handful studies have used conversational speech for opinion recognition (Murray and Carenini, 2009; Raaijmakers et al., 2008) , in which some domain-specific features are utilized such as structural features and prosodic features.",
"cite_spans": [
{
"start": 184,
"end": 204,
"text": "(Pang and Lee, 2004;",
"ref_id": "BIBREF18"
},
{
"start": 205,
"end": 231,
"text": "Popescu and Etzioni, 2005;",
"ref_id": "BIBREF20"
},
{
"start": 232,
"end": 248,
"text": "Ng et al., 2006)",
"ref_id": "BIBREF16"
},
{
"start": 268,
"end": 292,
"text": "(Wiebe and Riloff, 2005)",
"ref_id": "BIBREF23"
},
{
"start": 646,
"end": 673,
"text": "(Murray and Carenini, 2009;",
"ref_id": "BIBREF14"
},
{
"start": 674,
"end": 699,
"text": "Raaijmakers et al., 2008)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our work is also related to question answering (QA), especially opinion question answering. (Stoyanov et al., 2005 ) applies a subjectivity filter based on traditional QA systems to generate opinionated answers. (Balahur et al., 2010) answers some specific opinion questions like \"Why do people criticize Richard Branson?\" by retrieving candidate sentences using traditional QA methods and selecting the ones with the same polarity as the question. Our work is different in that we are not going to answer specific opinion questions, instead, we provide a summary on the speaker's opinion towards a given topic.",
"cite_spans": [
{
"start": 92,
"end": 114,
"text": "(Stoyanov et al., 2005",
"ref_id": "BIBREF22"
},
{
"start": 212,
"end": 234,
"text": "(Balahur et al., 2010)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "There exists some work on opinion summarization. For example, (Hu and Liu, 2004; Nishikawa et al., 2010) have explored opinion summarization in review domain, and (Paul et al., 2010) summarizes contrastive viewpoints in opinionated text. However, opinion summarization in spontaneous conversation is seldom studied.",
"cite_spans": [
{
"start": 62,
"end": 80,
"text": "(Hu and Liu, 2004;",
"ref_id": "BIBREF7"
},
{
"start": 81,
"end": 104,
"text": "Nishikawa et al., 2010)",
"ref_id": "BIBREF17"
},
{
"start": 163,
"end": 182,
"text": "(Paul et al., 2010)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Though there are many annotated data sets for the research of speech summarization and sentiment analysis, there is no corpus available for opinion summarization on spontaneous speech. Thus for this study, we create a new pilot data set using a subset of the Switchboard corpus (Godfrey and Holliman, 1997) . 1 These are conversational telephone speech between two strangers that were assigned a topic to talk about for around 5 minutes. They were told to find the opinions of the other person. There are 70 topics in total. From the Switchboard cor-pus, we selected 88 conversations from 6 topics for this study. Table 1 lists the number of conversations in each topic, their average length (measured in the unit of dialogue acts (DA)) and standard deviation of length. We recruited 3 annotators that are all undergraduate computer science students. From the 88 conversations, we selected 18 (3 from each topic) and let all three annotators label them in order to study inter-annotator agreement. The rest of the conversations has only one annotation.",
"cite_spans": [
{
"start": 278,
"end": 306,
"text": "(Godfrey and Holliman, 1997)",
"ref_id": "BIBREF5"
},
{
"start": 309,
"end": 310,
"text": "1",
"ref_id": null
}
],
"ref_spans": [
{
"start": 614,
"end": 621,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Corpus Creation",
"sec_num": "3"
},
{
"text": "The annotators have access to both conversation transcripts and audio files. For each conversation, the annotator writes an abstractive summary of up to 100 words for each speaker about his/her opinion or attitude on the given topic. They were told to use the words in the original transcripts if possible. Then the annotator selects up to 15 DAs (no minimum limit) in the transcripts for each speaker, from which their abstractive summary is derived. The selected DAs are used as the human generated extractive summary. In addition, the annotator is asked to select an overall opinion towards the topic for each speaker among five categories: strongly support, somewhat support, neutral, somewhat against, strongly against. Therefore for each conversation, we have an abstractive summary, an extractive summary, and an overall opinion for each speaker. The following shows an example of such annotation for speaker B in a dialogue about \"capital punishment\":",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Creation",
"sec_num": "3"
},
{
"text": "[Extractive Summary] I think I've seen some statistics that say that, uh, it's more expensive to kill somebody than to keep them in prison for life.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Creation",
"sec_num": "3"
},
{
"text": "committing them mostly is, you know, either crimes of passion or at the moment or they think they're not going to get caught but you also have to think whether it's worthwhile on the individual basis, for example, someone like, uh, jeffrey dahlmer, by putting him in prison for life, there is still a possibility that he will get out again. I don't think he could ever redeem himself, but if you look at who gets accused and who are the ones who actually get executed, it's very racially related -and ethnically related",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Creation",
"sec_num": "3"
},
{
"text": "[Abstractive Summary] B is against capital punishment except under certain circumstances. B finds that crimes deserving of capital punishment are \"crimes of the moment\" and as a result feels that capital punishment is not an effective deterrent. however, B also recognizes that on an individual basis some criminals can never \"redeem\" themselves.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Creation",
"sec_num": "3"
},
{
"text": "[Overall Opinion] Somewhat against Table 2 shows the compression ratio of the extractive summaries and abstractive summaries as well as their standard deviation. Because in conversations, utterance length varies a lot, we use words as units when calculating the compression ratio.",
"cite_spans": [],
"ref_spans": [
{
"start": 35,
"end": 42,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Corpus Creation",
"sec_num": "3"
},
{
"text": "avg ratio stdev extractive summaries 0.26 0.13 abstractive summaries 0.13 0.06 Table 2 : Compression ratio and standard deviation of extractive and abstractive summaries.",
"cite_spans": [],
"ref_spans": [
{
"start": 79,
"end": 86,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Corpus Creation",
"sec_num": "3"
},
{
"text": "We measured the inter-annotator agreement among the three annotators for the 18 conversations (each has two speakers, thus 36 \"documents\" in total). Results are shown in Table 3 . For the extractive or abstractive summaries, we use ROUGE scores (Lin, 2004) , a metric used to evaluate automatic summarization performance, to measure the pairwise agreement of summaries from different annotators. ROUGE F-scores are shown in the table for different matches, unigram (R-1), bigram (R-2), and longest subsequence (R-L). For the overall opinion category, since it is a multiclass label (not binary decision), we use Krippendorff's \u03b1 coefficient to measure human agreement, and the difference function for interval data: \u03b4 2 ck = (c \u2212 k) 2 (where c, k are the interval values, on a scale of 1 to 5 corresponding to the five categories for the overall opinion).",
"cite_spans": [
{
"start": 245,
"end": 256,
"text": "(Lin, 2004)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 170,
"end": 177,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Corpus Creation",
"sec_num": "3"
},
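{
"text": "To make the \u03b1 computation concrete, the following is a minimal Python sketch of Krippendorff's \u03b1 for interval data with the difference function above (an illustration with our own function names, assuming complete ratings with no missing values, not the toolkit actually used):\n\ndef krippendorff_alpha_interval(units):\n    # units: one list of numeric labels (1-5) per rated item,\n    # e.g. 36 'documents' (18 conversations x 2 speakers), 3 annotators each\n    values = [v for unit in units for v in unit]\n    n = len(values)\n    # observed disagreement: squared differences within each unit,\n    # normalized by the number of pairable values in the unit\n    d_o = sum(\n        sum((c - k) ** 2 for i, c in enumerate(unit)\n            for j, k in enumerate(unit) if i != j) / (len(unit) - 1)\n        for unit in units if len(unit) > 1) / n\n    # expected disagreement: squared differences over all pairs of values\n    d_e = sum((c - k) ** 2 for i, c in enumerate(values)\n              for j, k in enumerate(values) if i != j) / (n * (n - 1))\n    return 1.0 - d_o / d_e\n\n# alpha = krippendorff_alpha_interval([[4, 4, 5], [2, 1, 2], ...])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Creation",
"sec_num": "3"
},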
{
"text": "We notice that the inter-annotator agreement for extractive summaries is comparable to other speech extractive summaries R-1 0.61 R-2 0.52 R-L 0.61 abstractive summaries R-1 0.32 R-2 0.13 R-L 0.25 overall opinion \u03b1 = 0.79 Table 3 : Inter-annotator agreement for extractive and abstractive summaries, and overall opinion. summary annotation (Liu and Liu, 2008) . The agreement on abstractive summaries is much lower than extractive summaries, which is as expected.",
"cite_spans": [
{
"start": 340,
"end": 359,
"text": "(Liu and Liu, 2008)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 222,
"end": 229,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Corpus Creation",
"sec_num": "3"
},
{
"text": "Even for the same opinion or sentence, annotators use different words in the abstractive summaries. The agreement for the overall opinion annotation is similar to other opinion/emotion studies (Wilson, 2008b) , but slightly lower than the level recommended by Krippendorff for reliable data (\u03b1 = 0.8) (Hayes and Krippendorff, 2007) , which shows it is even difficult for humans to determine what opinion a person holds (support or against something). Often human annotators have different interpretations about the same sentence, and a speaker's opinion/attitude is sometimes ambiguous. Therefore this also demonstrates that it is more appropriate to provide a summary rather than a simple opinion category to answer questions about a person's opinion towards something.",
"cite_spans": [
{
"start": 193,
"end": 208,
"text": "(Wilson, 2008b)",
"ref_id": "BIBREF26"
},
{
"start": 301,
"end": 331,
"text": "(Hayes and Krippendorff, 2007)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Creation",
"sec_num": "3"
},
{
"text": "Automatic summarization can be divided into extractive summarization and abstractive summarization. Extractive summarization selects sentences from the original documents to form a summary; whereas abstractive summarization requires generation of new sentences that represent the most salient content in the original documents like humans do.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Opinion Summarization Methods",
"sec_num": "4"
},
{
"text": "Often extractive summarization is used as the first step to generate abstractive summary. As a pilot study for the problem of opinion summarization in conversations, we treat this problem as an extractive summarization task. This section describes two approaches we have explored in generating extractive summaries. The first one is a sentence-ranking method, in which we measure the salience of each sentence according to a linear com-bination of scores from several dimensions. The second one is a graph-based method, which incorporates the dialogue structure in ranking. We choose to investigate these two methods since they have been widely used in text and speech summarization, and perform competitively. In addition, they do not require a large labeled data set for modeling training, as needed in some classification or feature based summarization approaches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Opinion Summarization Methods",
"sec_num": "4"
},
{
"text": "In this method, we use Equation 1 to assign a score to each DA s, and select the most highly ranked ones until the length constriction is satisfied.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Ranking",
"sec_num": "4.1"
},
{
"text": "score(s) = \u03bb sim sim(s, D) + \u03bb rel REL(s, topic) +\u03bb sent sentiment(s) + \u03bb len length(s) i \u03bb i = 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Ranking",
"sec_num": "4.1"
},
{
"text": "(1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Ranking",
"sec_num": "4.1"
},
{
"text": "\u2022 sim(s, D) is the cosine similarity between DA s and all the utterances in the dialogue from the same speaker, D. It measures the relevancy of s to the entire dialogue from the target speaker. This score is used to represent the salience of the DA. It has been shown to be an important indicator in summarization for various domains. For cosine similarity measure, we use TF*IDF (term frequency, inverse document frequency) term weighting. The IDF values are obtained using the entire Switchboard corpus, treating each conversation as a document.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Ranking",
"sec_num": "4.1"
},
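{
"text": "A minimal sketch of this component (illustrative only; function and variable names are ours, and the IDF table is assumed precomputed from the Switchboard corpus, one conversation per document):\n\nimport math\nfrom collections import Counter\n\ndef tfidf_cosine(da_words, dialogue_words, idf):\n    # TF*IDF vectors for the DA s and the speaker's entire dialogue side D\n    vec_s = {w: tf * idf.get(w, 0.0) for w, tf in Counter(da_words).items()}\n    vec_d = {w: tf * idf.get(w, 0.0) for w, tf in Counter(dialogue_words).items()}\n    dot = sum(x * vec_d.get(w, 0.0) for w, x in vec_s.items())\n    norm = (math.sqrt(sum(x * x for x in vec_s.values()))\n            * math.sqrt(sum(x * x for x in vec_d.values())))\n    return dot / norm if norm else 0.0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Ranking",
"sec_num": "4.1"
},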
{
"text": "\u2022 REL(s, topic) measures the topic relevance of DA s. It is the sum of the topic relevance of all the words in the DA. We only consider the content words for this measure. They are identified using TreeTagger toolkit. 2 To measure the relevance of a word to a topic, we use Pairwise Mutual Information (PMI):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Ranking",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P M I(w, topic) = log 2 p(w&topic) p(w)p(topic)",
"eq_num": "(2)"
}
],
"section": "Sentence Ranking",
"sec_num": "4.1"
},
{
"text": "where all the statistics are collected from the Switchboard corpus: p(w&topic) denotes the probability that word w appears in a dialogue of topic t, and p(w) is the probability of w appearing in a dialogue of any topic. Since our goal is to rank DAs in the same dialog, and the topic is the same for all the DAs, we drop p(topic) when calculating PMI scores. Because the value of P M I(w, topic) is negative, we transform it into a positive one (denoted by P M I + (w, topic)) by adding the absolute value of the minimum value. The final relevance score of each sentence is normalized to [0, 1] using linear normalization:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Ranking",
"sec_num": "4.1"
},
{
"text": "REL orig (s, topic) = w\u2208s P M I + (w, topic) REL(s, topic) = REL orig (s, topic) \u2212 M in M ax \u2212 M in",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Ranking",
"sec_num": "4.1"
},
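{
"text": "A sketch of the relevance computation (illustrative; document frequencies are assumed collected from the Switchboard corpus, one conversation per document):\n\nimport math\n\ndef shifted_pmi(df_word_topic, df_word):\n    # df_word_topic[w]: number of dialogues of the given topic containing w\n    # df_word[w]: number of dialogues of any topic containing w\n    # p(topic) is dropped since it is constant within one dialogue, so the\n    # score reduces to log2(p(w & topic) / p(w)), which is non-positive\n    pmi = {w: math.log2(df_word_topic[w] / df_word[w]) for w in df_word_topic}\n    shift = abs(min(pmi.values()))  # make all values non-negative\n    return {w: v + shift for w, v in pmi.items()}\n\ndef rel_scores(das, pmi_plus):\n    # das: one list of content words per dialogue act\n    raw = [sum(pmi_plus.get(w, 0.0) for w in da) for da in das]\n    lo, hi = min(raw), max(raw)\n    # linear min-max normalization to [0, 1]\n    return [(r - lo) / (hi - lo) if hi > lo else 0.0 for r in raw]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Ranking",
"sec_num": "4.1"
},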
{
"text": "\u2022 sentiment(s) indicates the probability that utterance s contains opinion. To obtain this, we trained a maximum entropy classifier with a bag-of-words model using a combination of data sets from several domains, including movie data (Pang and Lee, 2004) , news articles from MPQA corpus (Wilson and Wiebe, 2003) , and meeting transcripts from AMI corpus (Wilson, 2008a ). Each sentence (or DA) in these corpora is annotated as \"subjective\" or \"objective\". We use each utterance's probability of being \"subjective\" predicted by the classifier as its sentiment score.",
"cite_spans": [
{
"start": 234,
"end": 254,
"text": "(Pang and Lee, 2004)",
"ref_id": "BIBREF18"
},
{
"start": 288,
"end": 312,
"text": "(Wilson and Wiebe, 2003)",
"ref_id": "BIBREF24"
},
{
"start": 355,
"end": 369,
"text": "(Wilson, 2008a",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Ranking",
"sec_num": "4.1"
},
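{
"text": "The sentiment component can be sketched with a bag-of-words logistic regression (maximum entropy) classifier; we use scikit-learn here purely for illustration, and the original classifier and training setup may differ:\n\nfrom sklearn.feature_extraction.text import CountVectorizer\nfrom sklearn.linear_model import LogisticRegression\n\ndef train_subjectivity_classifier(sentences, labels):\n    # labels: 1 = 'subjective', 0 = 'objective'\n    vectorizer = CountVectorizer()  # bag-of-words features\n    X = vectorizer.fit_transform(sentences)\n    clf = LogisticRegression(max_iter=1000)  # maxent = multinomial logistic regression\n    clf.fit(X, labels)\n    return vectorizer, clf\n\ndef sentiment_score(vectorizer, clf, da_text):\n    # probability that the DA is 'subjective', used directly as sentiment(s)\n    return clf.predict_proba(vectorizer.transform([da_text]))[0, 1]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Ranking",
"sec_num": "4.1"
},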
{
"text": "\u2022 length(s) is the length of the utterance. This score can effectively penalize the short sentences which typically do not contain much important content, especially the backchannels that appear frequently in dialogues. We also perform linear normalization such that the final value lies in [0, 1].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Ranking",
"sec_num": "4.1"
},
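{
"text": "Putting the four components together, the ranking and selection step can be sketched as follows (the names and the greedy stopping rule are our own reading of the method; the weights are tuned on the development set):\n\ndef rank_and_select(das, scores, weights, word_budget):\n    # das: DA texts; scores: dict of per-DA lists 'sim', 'rel', 'sent', 'len' in [0, 1]\n    # weights: (lambda_sim, lambda_rel, lambda_sent, lambda_len), summing to 1\n    l_sim, l_rel, l_sent, l_len = weights\n    total = [l_sim * scores['sim'][i] + l_rel * scores['rel'][i]\n             + l_sent * scores['sent'][i] + l_len * scores['len'][i]\n             for i in range(len(das))]\n    # take the most highly ranked DAs until the length constraint is reached\n    chosen, used = [], 0\n    for i in sorted(range(len(das)), key=lambda i: total[i], reverse=True):\n        n_words = len(das[i].split())\n        if used + n_words > word_budget:\n            break\n        chosen.append(i)\n        used += n_words\n    return sorted(chosen)  # restore original dialogue order",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Ranking",
"sec_num": "4.1"
},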
{
"text": "Graph-based methods have been widely used in document summarization. In this approach, a document is modeled as an adjacency matrix, where each node represents a sentence, and the weight of the edge between each pair of sentences is their similarity (cosine similarity is typically used). An iterative process is used until the scores for the nodes converge. Previous studies (Erkan and Radev, 2004) showed that this method can effectively extract important sentences from documents. The basic framework we use in this study is similar to the query-based graph summarization system in (Zhao et al., 2009) . We also consider sentiment and topic relevance information, and propose to incorporate information obtained from dialog structure in this framework. The score for a DA s is based on its content similarity with all other DAs in the dialogue, the connection with other DAs based on the dialogue structure, the topic relevance, and its subjectivity, that is:",
"cite_spans": [
{
"start": 376,
"end": 399,
"text": "(Erkan and Radev, 2004)",
"ref_id": "BIBREF1"
},
{
"start": 585,
"end": 604,
"text": "(Zhao et al., 2009)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-based Summarization",
"sec_num": "4.2"
},
{
"text": "score(s) = \u03bb sim v\u2208C sim(s, v) z\u2208C sim(z, v) score(v) +\u03bb rel REL(s, topic) z\u2208C REL(z, topic) +\u03bb sent sentiment(s) z\u2208C sentiment(z) +\u03bb adj v\u2208C ADJ(s, v) z\u2208C ADJ(z, v) score(v) i \u03bb i = 1 (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-based Summarization",
"sec_num": "4.2"
},
{
"text": "where C is the set of all DAs in the dialogue; REL(s, topic) and sentiment(s) are the same as those in the above sentence ranking method; sim(s, v) is the cosine similarity between two DAs s and v. In addition to the standard connection between two DAs with an edge weight sim(s, v), we introduce new connections ADJ(s, v) to model dialog structure. It is a directed edge from s to v, defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-based Summarization",
"sec_num": "4.2"
},
{
"text": "\u2022 If s and v are from the same speaker and within the same turn, there is an edge from s to v and an edge from v to s with weight 1/dis(s, v)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-based Summarization",
"sec_num": "4.2"
},
{
"text": "(ADJ(s, v) = ADJ(v, s) = 1/dis(s, v)),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-based Summarization",
"sec_num": "4.2"
},
{
"text": "where dis(s, v) is the distance between s and v, measured based on their DA indices. This way the DAs in the same turn can reinforce each other. For example, if we consider that one DA is important, then the other DAs in the same turn are also important.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-based Summarization",
"sec_num": "4.2"
},
{
"text": "\u2022 If s and v are from the same speaker, and separated only by one DA from another speaker with length less than 3 words (usually backchannel), there is an edge from s to v as well as an edge from v to s with weight 1 (ADJ(s, v) = ADJ(v, s) = 1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-based Summarization",
"sec_num": "4.2"
},
{
"text": "\u2022 If s and v form a question-answer pair from two speakers, then there is an edge from question s to answer v with weight 1 (ADJ(s, v) = 1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-based Summarization",
"sec_num": "4.2"
},
{
"text": "We use a simple rule-based method to determine question-answer pairs -sentence s has question marks or contains \"wh-word\" (i.e., \"what, how, why\"), and sentence v is the immediately following one. The motivation for adding this connection is, if the score of a question sentence is high, then the answer's score is also boosted.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-based Summarization",
"sec_num": "4.2"
},
{
"text": "\u2022 If s and v form an agreement or disagreement pair, then there is an edge from v to s with weight 1 (ADJ(v, s) = 1). This is also determined by simple rules: sentence v contains the word \"agree\" or \"disagree\", s is the previous sentence, and from a different speaker. The reason for adding this is similar to the above question-answer pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-based Summarization",
"sec_num": "4.2"
},
{
"text": "\u2022 If there are multiple edges generated from the above steps between two nodes, then we use the highest weight.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-based Summarization",
"sec_num": "4.2"
},
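{
"text": "The rules above can be sketched as follows (illustrative; the DA record fields and the exact tie-breaking are our assumptions):\n\ndef build_adj(das):\n    # das: list of records with .speaker, .turn, .text, in dialogue order\n    n = len(das)\n    adj = [[0.0] * n for _ in range(n)]\n    wh = ('what', 'how', 'why')\n    for i in range(n):\n        for j in range(i + 1, n):\n            s, v = das[i], das[j]\n            # same speaker and same turn: mutual edges weighted by 1/distance\n            if s.speaker == v.speaker and s.turn == v.turn:\n                w = 1.0 / (j - i)\n                adj[i][j] = max(adj[i][j], w)\n                adj[j][i] = max(adj[j][i], w)\n            # same speaker, separated only by one short DA (< 3 words,\n            # usually a backchannel) from the other speaker: mutual weight 1\n            elif (s.speaker == v.speaker and j - i == 2\n                  and das[i + 1].speaker != s.speaker\n                  and len(das[i + 1].text.split()) < 3):\n                adj[i][j] = max(adj[i][j], 1.0)\n                adj[j][i] = max(adj[j][i], 1.0)\n            # question-answer pair: directed edge from question to answer\n            if (j == i + 1 and s.speaker != v.speaker\n                    and ('?' in s.text or any(t in s.text.lower().split() for t in wh))):\n                adj[i][j] = max(adj[i][j], 1.0)\n            # (dis)agreement: directed edge from the (dis)agreeing DA back to s\n            if j == i + 1 and s.speaker != v.speaker and 'agree' in v.text.lower():\n                adj[j][i] = max(adj[j][i], 1.0)\n    return adj",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-based Summarization",
"sec_num": "4.2"
},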
{
"text": "Since we are using a directed graph for the sentence connections to model dialog structure, the resulting adjacency matrix is asymmetric. This is different from the widely used graph methods for summarization. Also note that in the first sentence ranking method or the basic graph methods, summarization is conducted for each speaker separately. Utterances from one speaker have no influence on the summary decision for the other speaker. Here in our proposed graph-based method, we introduce connections between the two speakers, so that the adjacency pairs between them can be utilized to extract salient utterances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-based Summarization",
"sec_num": "4.2"
},
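{
"text": "Given these connections, the scores in Equation 3 can be computed iteratively until convergence (a sketch using the column normalization shown in the equation; the initialization, iteration cap, and tolerance are our assumptions):\n\ndef graph_rank(sim, adj, rel, sent, lambdas, max_iter=100, tol=1e-6):\n    # sim, adj: n x n edge-weight matrices; rel, sent: per-DA scores\n    n = len(rel)\n    l_sim, l_rel, l_sent, l_adj = lambdas\n    def col_norm(m):\n        # normalize so that the weights into each node v sum to 1\n        sums = [sum(m[z][v] for z in range(n)) for v in range(n)]\n        return [[m[s][v] / sums[v] if sums[v] else 0.0 for v in range(n)]\n                for s in range(n)]\n    sim_n, adj_n = col_norm(sim), col_norm(adj)\n    rel_n = [r / (sum(rel) or 1.0) for r in rel]\n    sent_n = [x / (sum(sent) or 1.0) for x in sent]\n    score = [1.0 / n] * n\n    for _ in range(max_iter):\n        new = [l_sim * sum(sim_n[s][v] * score[v] for v in range(n))\n               + l_rel * rel_n[s] + l_sent * sent_n[s]\n               + l_adj * sum(adj_n[s][v] * score[v] for v in range(n))\n               for s in range(n)]\n        if max(abs(a - b) for a, b in zip(new, score)) < tol:\n            return new\n        score = new\n    return score",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-based Summarization",
"sec_num": "4.2"
},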
{
"text": "The 18 conversations annotated by all 3 annotators are used as test set, and the rest of 70 conversations are used as development set to tune the parameters (determining the best combination weights). In preprocessing we applied word stemming. We perform extractive summarization using different word compression ratios (ranging from 10% to 25%). We use human annotated dialogue acts (DA) as the extraction units. The system-generated summaries are compared to human annotated extractive and abstractive summaries. We use ROUGE as the evaluation metrics for summarization performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5.1"
},
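{
"text": "For reference, ROUGE-1 F-score reduces to clipped unigram overlap between a system summary and a reference (a simplified sketch; the official ROUGE toolkit additionally handles stemming, multiple references, and other options):\n\nfrom collections import Counter\n\ndef rouge_1_f(system_words, reference_words):\n    sys_counts = Counter(system_words)\n    ref_counts = Counter(reference_words)\n    overlap = sum(min(c, ref_counts[w]) for w, c in sys_counts.items())\n    p = overlap / max(len(system_words), 1)    # unigram precision\n    r = overlap / max(len(reference_words), 1)  # unigram recall\n    return 2 * p * r / (p + r) if p + r else 0.0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5.1"
},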
{
"text": "We compare our methods to two systems. The first one is a baseline system, where we select the longest utterances for each speaker. This has been shown to be a relatively strong baseline for speech summarization (Gillick et al., 2009) . The second one is human performance. We treat each annotator's extractive summary as a system summary, and compare to the other two annotators' extractive and abstractive summaries. This can be considered as the upper bound of our system performance.",
"cite_spans": [
{
"start": 212,
"end": 234,
"text": "(Gillick et al., 2009)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5.1"
},
{
"text": "From the development set, we used the grid search method to obtain the best combination weights for the two summarization methods. In the sentenceranking method, the best parameters found on the development set are \u03bb sim = 0, \u03bb rel = 0.3, \u03bb sent = 0.3, \u03bb len = 0.4. It is surprising to see that the similarity score is not useful for this task. The possible reason is, in Switchboard conversations, what people talk about is diverse and in many cases only topic words (except stopwords) appear more than once. In addition, REL score is already able to catch the topic relevancy of the sentence. Thus, the similarity score is redundant here.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2"
},
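{
"text": "The grid search over combination weights can be sketched as follows (illustrative; the step size and the development-set scoring callback are our assumptions):\n\nfrom itertools import product\n\ndef grid_search_weights(evaluate, step=0.1):\n    # evaluate(weights) -> average ROUGE score on the development set\n    # enumerate (l_sim, l_rel, l_sent, l_len) on a grid, constrained to sum to 1\n    ticks = [i * step for i in range(int(round(1 / step)) + 1)]\n    best_score, best_w = float('-inf'), None\n    for l_sim, l_rel, l_sent in product(ticks, repeat=3):\n        l_len = 1.0 - l_sim - l_rel - l_sent\n        if l_len < -1e-9:\n            continue  # weights must sum to 1 with all components non-negative\n        w = (l_sim, l_rel, l_sent, max(l_len, 0.0))\n        score = evaluate(w)\n        if score > best_score:\n            best_score, best_w = score, w\n    return best_w",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2"
},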
{
"text": "In the graph-based method, the best parameters are \u03bb sim = 0, \u03bb adj = 0.3, \u03bb rel = 0.4, \u03bb sent = 0.3. The similarity between each pair of utterances is also not useful, which can be explained with similar reasons as in the sentence-ranking method. This is different from graph-based summarization systems for text domains. A similar finding has also been shown in (Garg et al., 2009) , where similarity be- tween utterances does not perform well in conversation summarization. Figure 1 shows the ROUGE-1 F-scores comparing to human extractive and abstractive summaries for different compression ratios. Similar patterns are observed for other ROUGE scores such as ROUGE-2 or ROUGE-L, therefore they are not shown here. Both methods improve significantly over the baseline approach. There is relatively less improvement using a higher compression ratio, compared to a lower one. This is reasonable because when the compression ratio is low, the most salient utterances are not necessarily the longest ones, thus using more information sources helps better identify important sentences; but when the compression ratio is higher, longer utterances are more likely to be selected since they contain more content.",
"cite_spans": [
{
"start": 364,
"end": 383,
"text": "(Garg et al., 2009)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 477,
"end": 485,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2"
},
{
"text": "There is no significant difference between the two methods. When compared to extractive reference summaries, sentence-ranking is slightly better except for the compression ratio of 0.1. When compared to abstractive reference summaries, the graphbased method is slightly better. The two systems share the same topic relevance score (REL) and sentiment score, but the sentence-ranking method prefers longer DAs and the graph-based method prefers DAs that are emphasized by the ADJ matrix, such as the DA in the middle of a cluster of utterances from the same speaker, the answer to a question, etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2"
},
{
"text": "To analyze the effect of dialogue structure we introduce in the graph-based summarization method, we compare two configurations: \u03bb adj = 0 (only using REL score and sentiment score in ranking) and \u03bb adj = 0.3. We generate summaries using these two setups and compare with human selected sentences. Table 4 shows the number of false positive instances (selected by system but not by human) and false negative ones (selected by human but not by system). We use all three annotators' annotation as reference, and consider an utterance as positive if one annotator selects it. This results in a large number of reference summary DAs (because of low human agreement), and thus the number of false negatives in the system output is very high. As expected, a smaller compression ratio (fewer selected DAs in the system output) yields a higher false negative rate and a lower false positive rate. From the results, we can see that generally adding adjacency matrix information is able to reduce both types of errors except when the compression ratio is 0.15.",
"cite_spans": [],
"ref_spans": [
{
"start": 298,
"end": 305,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.3"
},
{
"text": "The following shows an example, where the third DA is selected by the system with \u03bb adj = 0.3, but not by \u03bb adj = 0. This is partly because the weight of the second DA is enhanced by the the question-\u03bb adj = 0 \u03bb adj = 0.3 ratio FP FN FP FN 0.1 37 588 33 581 0.15 60 542 61 546 0.2 100 516 90 511 0.25 137 489 131 482 Table 4 : The number of false positive (FP) and false negative (FN) instances using the graph-based method with \u03bb adj = 0 and \u03bb adj = 0.3 for different compression ratios.",
"cite_spans": [],
"ref_spans": [
{
"start": 317,
"end": 324,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.3"
},
{
"text": "answer pair (the first and the second DA), and thus subsequently boosting the score of the third DA.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.3"
},
{
"text": "A: Well what do you think? B: Well, I don't know, I'm thinking about from one to ten what my no would be.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.3"
},
{
"text": "B: It would probably be somewhere closer to, uh, less control because I don't see, -",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.3"
},
{
"text": "We also examined the system output and human annotation and found some reasons for the system errors:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.3"
},
{
"text": "(a) Topic relevance measure. We use the statistics from the Switchboard corpus to measure the relevance of each word to a given topic (PMI score), therefore only when people use the same word in different conversations of the topic, the PMI score of this word and the topic is high. However, since the size of the corpus is small, some topics only contain a few conversations, and some words only appear in one conversation even though they are topicrelevant. Therefore the current PMI measure cannot properly measure a word's and a sentence's topic relevance. This problem leads to many false negative errors (relevant sentences are not captured by our system).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.3"
},
{
"text": "(b) Extraction units. We used DA segments as units for extractive summarization, which can be problematic. In conversational speech, sometimes a DA segment is not a complete sentence because of overlaps and interruptions. We notice that annotators tend to select consecutive DAs that constitute a complete sentence, however, since each individual DA is not quite meaningful by itself, they are often not selected by the system. The following segment is extracted from a dialogue about \"universal health insurance\". The two DAs from speaker B are not selected by our system but selected by human anno-tators, causing false negative errors. B: and it just can devastate -A: and your constantly, -B: -your budget, you know.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.3"
},
{
"text": "This paper investigates two unsupervised methods in opinion summarization on spontaneous conversations by incorporating topic score and sentiment score in existing summarization techniques. In the sentence-ranking method, we linearly combine several scores in different aspects to select sentences with the highest scores. In the graph-based method, we use an adjacency matrix to model the dialogue structure and utilize it to find salient utterances in conversations. Our experiments show that both methods are able to improve the baseline approach, and we find that the cosine similarity between utterances or between an utterance and the whole document is not as useful as in other document summarization tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "In future work, we will address some issues identified from our error analysis. First, we will investigate ways to represent a sentence's topic relevance. Second, we will evaluate using other extraction units, such as applying preprocessing to remove disfluencies and concatenate incomplete sentence segments together. In addition, it would be interesting to test our system on speech recognition output and automatically generated DA boundaries to see how robust it is.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "Please contact the authors to obtain the data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.ims.uni-stuttgart.de/projekte/corplex/TreeTagger/De cisionTreeTagger.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors thank Julia Hirschberg and Ani Nenkova for useful discussions. This research is supported by NSF awards CNS-1059226 and IIS-0939966.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": "7"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Going beyond traditional QA systems: challenges and keys in opinion question answering",
"authors": [
{
"first": "Alexandra",
"middle": [],
"last": "Balahur",
"suffix": ""
},
{
"first": "Ester",
"middle": [],
"last": "Boldrini",
"suffix": ""
},
{
"first": "Andr\u00e9s",
"middle": [],
"last": "Montoyo",
"suffix": ""
},
{
"first": "Patricio",
"middle": [],
"last": "Mart\u00ednez-Barco",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexandra Balahur, Ester Boldrini, Andr\u00e9s Montoyo, and Patricio Mart\u00ednez-Barco. 2010. Going beyond tra- ditional QA systems: challenges and keys in opinion question answering. In Proceedings of COLING.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "LexRank: graph-based lexical centrality as salience in text summarization",
"authors": [
{
"first": "G\u00fcnes",
"middle": [],
"last": "Erkan",
"suffix": ""
},
{
"first": "Dragomir",
"middle": [
"R"
],
"last": "Radev",
"suffix": ""
}
],
"year": 2004,
"venue": "Journal of Artificial Intelligence Research",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G\u00fcnes Erkan and Dragomir R. Radev. 2004. LexRank: graph-based lexical centrality as salience in text sum- marization. Journal of Artificial Intelligence Re- search.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Speech-to-text and speech-tospeech summarization of spontaneous speech",
"authors": [
{
"first": "Sadaoki",
"middle": [],
"last": "Furui",
"suffix": ""
},
{
"first": "Tomonori",
"middle": [],
"last": "Kikuchi",
"suffix": ""
},
{
"first": "Yousuke",
"middle": [],
"last": "Shinnaka",
"suffix": ""
},
{
"first": "Chior",
"middle": [],
"last": "Hori",
"suffix": ""
}
],
"year": 2004,
"venue": "IEEE Transactions on Audio, Speech & Language Processing",
"volume": "12",
"issue": "4",
"pages": "401--408",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sadaoki Furui, Tomonori Kikuchi, Yousuke Shinnaka, and Chior i Hori. 2004. Speech-to-text and speech-to- speech summarization of spontaneous speech. IEEE Transactions on Audio, Speech & Language Process- ing, 12(4):401-408.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "ClusterRank: a graph based method for meeting summarization",
"authors": [
{
"first": "Nikhil",
"middle": [],
"last": "Garg",
"suffix": ""
},
{
"first": "Benoit",
"middle": [],
"last": "Favre",
"suffix": ""
},
{
"first": "Korbinian",
"middle": [],
"last": "Riedhammer",
"suffix": ""
},
{
"first": "Dilek",
"middle": [],
"last": "Hakkani-T\u00fcr",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of Interspeech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikhil Garg, Benoit Favre, Korbinian Reidhammer, and Dilek Hakkani T\u00fcr. 2009. ClusterRank: a graph based method for meeting summarization. In Proceed- ings of Interspeech.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A global optimization framework for meeting summarization",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Gillick",
"suffix": ""
},
{
"first": "Korbinian",
"middle": [],
"last": "Riedhammer",
"suffix": ""
},
{
"first": "Dilek",
"middle": [],
"last": "Benoit Favre",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hakkani-Tur",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of ICASSP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Gillick, Korbinian Riedhammer, Benoit Favre, and Dilek Hakkani-Tur. 2009. A global optimization framework for meeting summarization. In Proceed- ings of ICASSP.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Switchboard-1 Release 2",
"authors": [
{
"first": "John",
"middle": [
"J"
],
"last": "Godfrey",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Holliman",
"suffix": ""
}
],
"year": 1997,
"venue": "Linguistic Data Consortium",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John J. Godfrey and Edward Holliman. 1997. Switchboard-1 Release 2. In Linguistic Data Consor- tium, Philadelphia.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Answering the call for a standard reliability measure for coding data",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Hayes",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Krippendorff",
"suffix": ""
}
],
"year": 2007,
"venue": "Journal of Communication Methods and Measures",
"volume": "1",
"issue": "",
"pages": "77--89",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Hayes and Klaus Krippendorff. 2007. Answer- ing the call for a standard reliability measure for cod- ing data. Journal of Communication Methods and Measures, 1:77-89.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Mining and summarizing customer reviews",
"authors": [
{
"first": "Minqing",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of ACM SIGKDD",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minqing Hu and Bing Liu. 2004. Mining and sum- marizing customer reviews. In Proceedings of ACM SIGKDD.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Automatic summarization of voicemail messages using lexical and prosodic features",
"authors": [
{
"first": "Konstantinos",
"middle": [],
"last": "Koumpis",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Renals",
"suffix": ""
}
],
"year": 2005,
"venue": "ACM -Transactions on Speech and Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Konstantinos Koumpis and Steve Renals. 2005. Auto- matic summarization of voicemail messages using lex- ical and prosodic features. ACM -Transactions on Speech and Language Processing.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A comparative study of probabilistic ranking models for chinese spoken document summarization",
"authors": [
{
"first": "Shih-Hsiang",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Berlin",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Hsin-Min",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2009,
"venue": "ACM Transactions on Asian Language Information Processing",
"volume": "8",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shih Hsiang Lin, Berlin Chen, and Hsin min Wang. 2009. A comparative study of probabilistic ranking models for chinese spoken document summarization. ACM Transactions on Asian Language Information Processing, 8(1).",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "ROUGE: a package for automatic evaluation of summaries",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of ACL workshop on Text Summarization Branches Out",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin. 2004. ROUGE: a package for auto- matic evaluation of summaries. In Proceedings of ACL workshop on Text Summarization Branches Out.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "What are meeting summaries? An analysis of human extractive summaries in meeting corpus",
"authors": [
{
"first": "Fei",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of SIGDial",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fei Liu and Yang Liu. 2008. What are meeting sum- maries? An analysis of human extractive summaries in meeting corpus. In Proceedings of SIGDial.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Comparing lexical, acoustic/prosodic, structural and discourse features for speech summarization",
"authors": [
{
"first": "Sameer",
"middle": [],
"last": "Maskey",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Hirschberg",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of Interspeech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sameer Maskey and Julia Hirschberg. 2005. Com- paring lexical, acoustic/prosodic, structural and dis- course features for speech summarization. In Pro- ceedings of Interspeech.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "From text to speech summarization",
"authors": [
{
"first": "Kathleen",
"middle": [],
"last": "Mckeown",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Hirschberg",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Maskey",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of ICASSP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kathleen Mckeown, Julia Hirschberg, Michel Galley, and Sameer Maskey. 2005. From text to speech summa- rization. In Proceedings of ICASSP.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Detecting subjectivity in multiparty speech",
"authors": [
{
"first": "Gabriel",
"middle": [],
"last": "Murray",
"suffix": ""
},
{
"first": "Giuseppe",
"middle": [],
"last": "Carenini",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of Interspeech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gabriel Murray and Giuseppe Carenini. 2009. Detecting subjectivity in multiparty speech. In Proceedings of Interspeech.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Extractive summarization of meeting recordings",
"authors": [
{
"first": "Gabriel",
"middle": [],
"last": "Murray",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Renals",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Carletta",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of EUROSPEECH",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gabriel Murray, Steve Renals, and Jean Carletta. 2005. Extractive summarization of meeting recordings. In Proceedings of EUROSPEECH.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Examining the role of linguistic knowledge sources in the automatic identification and classification of reviews",
"authors": [
{
"first": "Vincent",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Sajib",
"middle": [],
"last": "Dasgupta",
"suffix": ""
},
{
"first": "S",
"middle": [
"M"
],
"last": "",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the COLING/ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vincent Ng, Sajib Dasgupta, and S.M.Niaz Arifin. 2006. Examining the role of linguistic knowledge sources in the automatic identification and classification of re- views. In Proceedings of the COLING/ACL.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Opinion summarization with integer linear programming formulation for sentence extraction and ordering",
"authors": [
{
"first": "Hitoshi",
"middle": [],
"last": "Nishikawa",
"suffix": ""
},
{
"first": "Takaaki",
"middle": [],
"last": "Hasegawa",
"suffix": ""
},
{
"first": "Yoshihiro",
"middle": [],
"last": "Matsuo",
"suffix": ""
},
{
"first": "Genichiro",
"middle": [],
"last": "Kikui",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hitoshi Nishikawa, Takaaki Hasegawa, Yoshihiro Mat- suo, and Genichiro Kikui. 2010. Opinion summariza- tion with integer linear programming formulation for sentence extraction and ordering. In Proceedings of COLING.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A sentiment education: sentiment analysis using subjectivity summarization based on minimum cuts",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "Lilian",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Pang and Lilian Lee. 2004. A sentiment educa- tion: sentiment analysis using subjectivity summariza- tion based on minimum cuts. In Proceedings of ACL.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Summarizing contrastive viewpoints in opinionated text",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Paul",
"suffix": ""
},
{
"first": "Chengxiang",
"middle": [],
"last": "Zhai",
"suffix": ""
},
{
"first": "Roxana",
"middle": [],
"last": "Girju",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Paul, ChengXiang Zhai, and Roxana Girju. 2010. Summarizing contrastive viewpoints in opinion- ated text. In Proceedings of EMNLP.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Extracting product features and opinions from reviews",
"authors": [
{
"first": "Ana-Maria",
"middle": [],
"last": "Popescu",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of HLT-EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ana-Maria Popescu and Oren Etzioni. 2005. Extracting product features and opinions from reviews. In Pro- ceedings of HLT-EMNLP.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Multimodal subjectivity analysis of multiparty conversation",
"authors": [
{
"first": "Stephan",
"middle": [],
"last": "Raaijmakers",
"suffix": ""
},
{
"first": "Khiet",
"middle": [],
"last": "Truong",
"suffix": ""
},
{
"first": "Theresa",
"middle": [],
"last": "Wilson",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephan Raaijmakers, Khiet Truong, and Theresa Wilson. 2008. Multimodal subjectivity analysis of multiparty conversation. In Proceedings of EMNLP.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Multi-perspective question answering using the OpQA corpus",
"authors": [
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of EMNLP/HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Veselin Stoyanov, Claire Cardie, and Janyce Wiebe. 2005. Multi-perspective question answering using the OpQA corpus. In Proceedings of EMNLP/HLT.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Creating subjective and objective sentence classifiers from unannotated texts",
"authors": [
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
},
{
"first": "Ellen",
"middle": [],
"last": "Riloff",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of CICLing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Janyce Wiebe and Ellen Riloff. 2005. Creating sub- jective and objective sentence classifiers from unan- notated texts. In Proceedings of CICLing.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Annotating opinions in the world press",
"authors": [
{
"first": "Theresa",
"middle": [],
"last": "Wilson",
"suffix": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of SIG-Dial",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Theresa Wilson and Janyce Wiebe. 2003. Annotating opinions in the world press. In Proceedings of SIG- Dial.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Annotating subjective content in meetings",
"authors": [
{
"first": "Theresa",
"middle": [],
"last": "Wilson",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Theresa Wilson. 2008a. Annotating subjective content in meetings. In Proceedings of LREC.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Fine-grained subjectivity and sentiment analysis: recognizing the intensity, polarity, and attitudes of private states",
"authors": [
{
"first": "Theresa",
"middle": [],
"last": "Wilson",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Theresa Wilson. 2008b. Fine-grained subjectivity and sentiment analysis: recognizing the intensity, polarity, and attitudes of private states. Ph.D. thesis, University of Pittsburgh.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Improving supervised learning for meeting summarization using sampling and regression",
"authors": [
{
"first": "Shasha",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2010,
"venue": "Computer Speech and Language",
"volume": "24",
"issue": "",
"pages": "495--514",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shasha Xie and Yang Liu. 2010. Improving super- vised learning for meeting summarization using sam- pling and regression. Computer Speech and Lan- guage, 24:495-514.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Automatic summarization of open-domain multiparty dialogues in dive rse genres",
"authors": [
{
"first": "Klaus",
"middle": [],
"last": "Zechner",
"suffix": ""
}
],
"year": 2002,
"venue": "Computational Linguistics",
"volume": "28",
"issue": "",
"pages": "447--485",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Klaus Zechner. 2002. Automatic summarization of open-domain multiparty dialogues in dive rse genres. Computational Linguistics, 28:447-485.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Improving lecture speech summarization using rhetorical information",
"authors": [
{
"first": "Justin",
"middle": [
"Jian"
],
"last": "Zhang",
"suffix": ""
},
{
"first": "Ho Yin",
"middle": [],
"last": "Chan",
"suffix": ""
},
{
"first": "Pascale",
"middle": [],
"last": "Fung",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of Biannual IEEE Workshop on ASRU",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Justin Jian Zhang, Ho Yin Chan, and Pascale Fung. 2007. Improving lecture speech summarization using rhetor- ical information. In Proceedings of Biannual IEEE Workshop on ASRU.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Using query expansion in graph-based approach for queryfocused multi-document summarization",
"authors": [
{
"first": "Lin",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Lide",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2009,
"venue": "Journal of Information Processing and Management",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lin Zhao, Lide Wu, and Xuanjing Huang. 2009. Using query expansion in graph-based approach for query- focused multi-document summarization. Journal of Information Processing and Management.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Summarization of spontaneous conversations",
"authors": [
{
"first": "Xiaodan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Gerald",
"middle": [],
"last": "Penn",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of Interspeech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaodan Zhu and Gerald Penn. 2006. Summarization of spontaneous conversations. In Proceedings of Inter- speech.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"text": "compare to reference abstractive summaryFigure 1: ROUGE-1 F-scores compared to extractive and abstractive reference summaries for different systems: max-length, sentence-ranking method, graphbased method, and human performance.",
"uris": null
},
"TABREF1": {
"num": null,
"content": "<table><tr><td>: Corpus statistics: topic description, number of</td></tr><tr><td>conversations in each topic, average length (number of</td></tr><tr><td>dialog acts), and standard deviation.</td></tr></table>",
"type_str": "table",
"html": null,
"text": ""
}
}
}
}