{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:36:15.467660Z"
},
"title": "Extractive Research Slide Generation Using Windowed Labeling Ranking",
"authors": [
{
"first": "Athar",
"middle": [],
"last": "Sefid",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Pennsylvania State University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Jian",
"middle": [],
"last": "Wu",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Prasenjit",
"middle": [],
"last": "Mitra",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Pennsylvania State University",
"location": {}
},
"email": ""
},
{
"first": "C",
"middle": [
"Lee"
],
"last": "Giles",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Pennsylvania State University",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Presentation slides describing the content of scientific and technical papers are an efficient and effective way to present that work. However, manually generating presentation slides is labor intensive. We propose a method to automatically generate slides for scientific papers based on a corpus of 5000 paper-slide pairs compiled from conference proceedings websites. The sentence labeling module of our method is based on SummaRuNNer, a neural sequence model for extractive summarization. Instead of ranking sentences based on semantic similarities in the whole document, our algorithm measures importance and novelty of sentences by combining semantic and lexical features within a sentence window. Our method outperforms several baseline methods including SummaRuNNer by a significant margin in terms of ROUGE score.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Presentation slides describing the content of scientific and technical papers are an efficient and effective way to present that work. However, manually generating presentation slides is labor intensive. We propose a method to automatically generate slides for scientific papers based on a corpus of 5000 paper-slide pairs compiled from conference proceedings websites. The sentence labeling module of our method is based on SummaRuNNer, a neural sequence model for extractive summarization. Instead of ranking sentences based on semantic similarities in the whole document, our algorithm measures importance and novelty of sentences by combining semantic and lexical features within a sentence window. Our method outperforms several baseline methods including SummaRuNNer by a significant margin in terms of ROUGE score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "It has become common practice for researchers to use slides as a visual aid in presenting research findings and innovations. Such slides usually contain bullet points that the researchers believe to be important to show. These bullet points serve both as a reminder to the speaker (when he/she is presenting) and summaries for audiences to understand. Manually creating a set of high-quality slides from an academic paper is time-consuming. We propose a method that automatically selects salient sentences that could be included into the slides, with the purpose of reducing the time and effort for slide generation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The main challenge for solving this problem is to accurately extract the main points from an academic paper. This is due to the limitations of existing methods to fully encode semantics of sentences and the implicit relations between sentences. Here, we propose an extractive summarizer that identifies the best sentence in a set of consecutive sentence windows. The selection process depends on importance and novelty of the sentence that is modeled by the neural networks. The selected sentences and their frequent noun phrases are then structured in a layered format to make the bullet points of the slides.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Presentation slides are usually created with multiple bullet points organized in a multi-level hierarchical structure, usually with phrases summarizing high level topics at the first level and bullets at the second and other levels for further clarification or details. Statistical analysis on our training data set shows that more than 92% of the bullets are in the first and second level and only 8% are in the third layer. Therefore, we built our presentations in two level bullet points only.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our contribution is threefold.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Propose a system that utilizes sentences with high rankings for generating presentation slides for research papers and is used as a starting point in the slide generation process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Create and provide PS5K, a corpus of 5000 paper-slide pairs in the field of computer and information science. To the best of our knowledge, this is the largest paper-slide dataset and can be used for training and evaluating slide generation models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Propose a novel method to rank sentences within a sentence window, which improved an existing state-of-the-art text-summarization method by a significant margin.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Summarizing scholarly articles in presentation slides is different from standard text summarization (Xiao and Carenini, 2019) , which focuses on generating a paragraph of free text summary out of a longer document. Automatic slide generation can be done by first extracting salient sentences in a hierarchical order and grouping them into slides that are sequentially aligned with the original paper.",
"cite_spans": [
{
"start": 100,
"end": 125,
"text": "(Xiao and Carenini, 2019)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "PPSGen (Hu and Wan, 2014 ) was a framework that automatically generated presentation slides from scientific papers. They applied Support Vector Regressor and Integer Linear Programming (ILP) to rank and select important sentences. Wang et al. (2017) generate slides by extracting phrases from papers and learning the hierarchical relationship between pairs of phrases to build the structure of bullet points. Their model is trained on a small set of 175 paper-slide pairs. The slideSeer (Kan, 2007) project crawled more than 10,000 paper-slide pairs using the Google APIs to search for the slide of papers using their title as a search query. The full set of data is not publicly available (only 20 pairs are available). Compared with previous works, our model is trained and tested on a relatively large set of 5000 paper-slide pairs and the dataset will be publicly available for future works. There had been some work on the alignment of presentations slides to the article sections (Hayama et al., 2005; Kan, 2007; Beamer and Girju, 2009) .",
"cite_spans": [
{
"start": 7,
"end": 24,
"text": "(Hu and Wan, 2014",
"ref_id": "BIBREF5"
},
{
"start": 231,
"end": 249,
"text": "Wang et al. (2017)",
"ref_id": "BIBREF17"
},
{
"start": 487,
"end": 498,
"text": "(Kan, 2007)",
"ref_id": "BIBREF6"
},
{
"start": 986,
"end": 1007,
"text": "(Hayama et al., 2005;",
"ref_id": "BIBREF3"
},
{
"start": 1008,
"end": 1018,
"text": "Kan, 2007;",
"ref_id": "BIBREF6"
},
{
"start": 1019,
"end": 1042,
"text": "Beamer and Girju, 2009)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "SummaRuNNer (Nallapati et al., 2017 ) is a neural extractive summarizer that treats the summarization task as a sequence labeling problem. Sum-maRuNNer was evaluated on CNN/Daily Mail corpus, which contains news articles that are shorter than research papers. We improve upon the Sum-maRuNNer model for the summarization of scientific papers.",
"cite_spans": [
{
"start": 12,
"end": 35,
"text": "(Nallapati et al., 2017",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Producing a large dataset for summarization of scientific documents is challenging and requires domain experts to make the summaries. The latest CL-Scisumm 2018 summarization task contains only 40 NLP papers with human-annotated reference summaries. Recently, ScisummNet (Yasunaga et al., 2019) expanded the CL-Scisumm to 1000 scientific articles. TalkSum (Lev et al., 2019) summarizes scientific articles based on the transcripts of the presentation talks at conferences.",
"cite_spans": [
{
"start": 271,
"end": 294,
"text": "(Yasunaga et al., 2019)",
"ref_id": "BIBREF20"
},
{
"start": 356,
"end": 374,
"text": "(Lev et al., 2019)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "Using presentation slides made by the authors is promising for the training of deep neural summarization models as more conferences are providing slides with papers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "We crawled more than 5,000 paper-slide pairs from a manually curated list of websites, e.g., usenix.org and aclweb.org. GROBID (Lopez, 2009) is used to get metadata and the body of the text from scientific papers in PDF format. Presentations are transformed form PDF or PPT format to XML by Apache Tika 1 . The Tika XML files are divided into pages and the text is extracted using Optical Character Recognition (OCR) tools. Most venues of papers in our dataset are in computational linguistics, system, and system security. In our dataset, there are on average 35 pages of slide per presentation and 8 lines of text per slide page. The majority (75%) of papers are published between 2013 and 2019. We used this dataset (called PS5K) to train summarization models to identify important parts of the input document at the sentence level.",
"cite_spans": [
{
"start": 127,
"end": 140,
"text": "(Lopez, 2009)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "Generating slides requires identifying important sentences of the input scientific article and consists of three main steps. The first is to label salient sentences in the paper that are literally similar to corresponding slides. The second is to train the model to rank sentences and the final step selects salient sentences based on the predicted scores, size of the summary and the length of the sentences. Afterwards, frequent noun phrases are extracted from the selected sentences to shape the hierarchical structure of the bullet points. The architecture of our model is shown in Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 586,
"end": 594,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Method",
"sec_num": "4"
},
{
"text": "The text in manually generated slides may not be directly extracted from the original paper. Instead, text can be truncated, summarized, or rephrased. Therefore, we need to generate extractive labels for sentences of the input document. The sentence labeling process attempts to identify salient sentences that are semantically similar to the corresponding slides. This generates an extractive summary, which will be used as the ground truth for training and evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Labeling",
"sec_num": "4.1"
},
{
"text": "The problem is formalized below: A research paper can be represented as a sequence of n sentences D = {s 1 , s 2 , ...s n }, each having a label y i \u2208 {0, 1}, the system predicts p(y i = 1), probability of including sentence i to the summary. SummaRuNNer treats the summarization task as a sequence labeling problem, if adding the sentence to the summary improves the ROUGE score, the sentence is labeled with 1, otherwise it is labeled with 0. This method is suitable for news articles such as CNN/DailyMail (Nallapati et al., 2016) where the first couple of sentences in articles usually cover the main content. Scholarly papers usually contain a hierarchical structure of sections. Each section should have its own summary as a part of the summary of the entire paper. Therefore, the labeling process should be adapted to distribute positive labels across all sections of the paper. However, accurately parsing sections of open domain scholarly papers is non-trivial. Therefore, we propose a windowed labeling approach, in which ranking is performed only within a series of non-overlapping text windows, each of which contains w consecutive sentences. A sentence is labeled as 1 if adding the current sentence increases the ROUGE-1 index. The best window size is determined empirically by trying different widow sizes and calculating the ROUGE score between selected sentences and the presentation slides. Section 5 elaborates on the experiments performed to select the best window size.",
"cite_spans": [
{
"start": 509,
"end": 533,
"text": "(Nallapati et al., 2016)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Labeling",
"sec_num": "4.1"
},
{
"text": "The ranking of sentences depends on their salience, novelty, and content similarity to the ground truth. To quantify these characteristics, a document is represented into a vector. We explore two methods to build the embedding for the whole document.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence and Document Embedding",
"sec_num": "4.2"
},
{
"text": "Simple Document Embedding A simple document embedding can be obtained by calculating the average of sentence encodings generated by a Bidirectional Long Short-Term Memory (BiLSTM) (Hochreiter and Schmidhuber, 1997) . A sentence s i can be encoded as",
"cite_spans": [
{
"start": 180,
"end": 214,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence and Document Embedding",
"sec_num": "4.2"
},
{
"text": "E s i = [ h i , h i ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence and Document Embedding",
"sec_num": "4.2"
},
{
"text": "in which E s i is a concatenation of forward ( h i ) and backward ( h i ) hidden states of the last token in sentence s i . The embedding for document D with n sentences is the average of all sentence embeddings:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence and Document Embedding",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "E D = ReLU (W \u00d7 1 n n i=1 E s i + b)",
"eq_num": "(1)"
}
],
"section": "Sentence and Document Embedding",
"sec_num": "4.2"
},
{
"text": "in which ReLU is the activation function, W and b are parameters to be learned.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence and Document Embedding",
"sec_num": "4.2"
},
{
"text": "Hierarchical Self Attention Document Embedding This model embeds a document by applying the attention mechanism at both word and sentence levels (Al-Sabahi et al., 2018; Yang et al., 2016) .",
"cite_spans": [
{
"start": 145,
"end": 169,
"text": "(Al-Sabahi et al., 2018;",
"ref_id": "BIBREF0"
},
{
"start": 170,
"end": 188,
"text": "Yang et al., 2016)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence and Document Embedding",
"sec_num": "4.2"
},
{
"text": "Sentence embeddings are obtained by encoding word-level tokens of a sentence using BiLSTM and then aggregating hidden layers using an attention mechanism. Formally, considering a sentence s i with m words, the sentence encoding h s i is obtained as a concatenation of all m hidden states of word-level tokens (h",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence and Document Embedding",
"sec_num": "4.2"
},
{
"text": "s i = [h 1 , h 2 , ..., h m ])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence and Document Embedding",
"sec_num": "4.2"
},
{
"text": "where h s i \u2208 R m\u00d72d and d is the embedding dimension for each word. The attention weights are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence and Document Embedding",
"sec_num": "4.2"
},
{
"text": "a word = sof tmax W attn \u00d7 h T s i (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence and Document Embedding",
"sec_num": "4.2"
},
{
"text": "where W attn \u2208 R k\u00d72d is the model matrix to be learned. Then a word \u2208 R k\u00d7m and the embedding for sentence s i is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence and Document Embedding",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "E s i = average k (a word \u00d7 h s i )",
"eq_num": "(3)"
}
],
"section": "Sentence and Document Embedding",
"sec_num": "4.2"
},
{
"text": "where E s i \u2208 R 1\u00d72d and k is the attention dimension which is set to 100 in our experiments. Document embeddings (E D ) are generated using sentence embeddings (E s i ) built in the previous step. A similar attention layer is applied on top of sentence embeddings to build the document embedding. The sentence level attention works as the weights to emphasize important sentences in document embedding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence and Document Embedding",
"sec_num": "4.2"
},
{
"text": "The rank of a sentence depends on its position in the paper, salience, and novelty with respect to the previously selected sentences, calculated below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Ranking",
"sec_num": "4.3"
},
{
"text": "pos = position \u00d7 W pos content = E s i \u00d7 W content salience = E D \u00d7 W salience \u00d7 E T s i novelty = summary i \u00d7 W novelty \u00d7 E T s i p(y i = 1) = \u03c3(pos + content + novelty+ salience) (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Ranking",
"sec_num": "4.3"
},
{
"text": "where W pos \u2208 R 2d\u00d71 ,where W content \u2208 R 2d\u00d71 W salience \u2208 R 2d\u00d72d , and W novelty \u2208 R 2d\u00d72d are parameters to be learned. The position is the position of the sentence in the document specified by a Embedding lookup function, \u03c3 is the sigmoid activation function, and pos is its positional embedding. The salience estimates the importance of a sentence. The novelty represents the novelty of a sentence with respect to the current summery. The summary embedding is the weighted sum of the previous sentences added to summary until sentence i:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Ranking",
"sec_num": "4.3"
},
{
"text": "summary i = i\u22121 j=0 p(y i = 1) \u00d7 E s i (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Ranking",
"sec_num": "4.3"
},
{
"text": "The higher chance of adding the sentence to the summary gives it a bigger portion in the summary embedding. Figure 2 shows the architecture for predicting the score for the third sentence in a document. Figure 2 : Score prediction for sentence 3 depends on document embedding (E D ), sentence embedding, the embedding of the summary built until step 3 (Sum 3 ), and position of the sentence which is 3. The summary is the weighted sum of the embeddings of the first and second sentences.",
"cite_spans": [],
"ref_spans": [
{
"start": 108,
"end": 116,
"text": "Figure 2",
"ref_id": null
},
{
"start": 203,
"end": 211,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Sentence Ranking",
"sec_num": "4.3"
},
{
"text": "With windowed labeling, the positive labels are sparse. To deal with the imbalanced positive labels, the following weighted cross-entropy loss is adopted. The setting of w 1 = \u221285 and w 2 = \u22122 results in the highest ROUGE score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Ranking",
"sec_num": "4.3"
},
{
"text": "\u2212 n i=0 w 1 y i \u00d7 log (p(y i = 1)) +w 2 (1 \u2212 y i ) \u00d7 log (1 \u2212 p(y i = 1)) (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Ranking",
"sec_num": "4.3"
},
{
"text": "To select the sentences for the slide we tried 1) the greedy approach that sequentially adds sentences with highest scores until the maximum limit is hit and 2) the ILP method that selects the sentences by optimizing the following function using IBM CPLEX Optimizer 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Selection",
"sec_num": "4.4"
},
{
"text": "max i\u2208Ns l i x i \u00d7 p(y i = 1) i l i x i < maxLen, \u2200i, x i \u2208 {0, 1} (7)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Selection",
"sec_num": "4.4"
},
{
"text": "where p(y i = 1) is the score of the sentence predicted by the model, x i is a binary variable showing whether sentence i is selected for the summary or not, l i is the length of sentence i and penalizes short sentences, and maxLen is the maximum length of the summary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Selection",
"sec_num": "4.4"
},
{
"text": "A typical presentation slide includes a limited number of bullet points as the first-level, which are usually phrases or shortened sentences. Some slides may contain second-level bullet points for further breakdowns. Table 2 shows that less than 8% of the content of the presentations in the ground truth corpus is covered in third-level bullets. We generate slides containing up to 2 bullet levels. Table 2 also shows that a slide title on average contains 4 words and either Level 1 or Level 2 bullets contains on average 8 words. Each slide consists of on average 36 words in 5 bullets and each level-1 bullet includes 2 second-level bullets.",
"cite_spans": [],
"ref_spans": [
{
"start": 217,
"end": 224,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 400,
"end": 407,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Slide Generation",
"sec_num": "4.5"
},
{
"text": "Sentences selected are treated as the second-level bullets. The first-level bullets are the noun phrases extracted from the sentences. Noun phrases are removed if they contain more than 10 words or just 1 word. Noun phrases with a document frequency greater than 10 are excluded (e.g. \"the model\"). The section, which the first sentence of a slide is in, is found and its heading is used as the slide title. (Sefid et al., 2019) 36.33 8.73 17.02 -TextRank (Barrios et al., 2016) 38 The heading is truncated to the first 5 tokens. We limit a maximum of 4 sentences per slide. If a topic has more than 4 related sentences, the slide is split into two distinct ones.",
"cite_spans": [
{
"start": 408,
"end": 428,
"text": "(Sefid et al., 2019)",
"ref_id": "BIBREF16"
},
{
"start": 456,
"end": 478,
"text": "(Barrios et al., 2016)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Slide Generation",
"sec_num": "4.5"
},
{
"text": "We estimated the parameters of our model on PS5K. We split the dataset into training, validation, and testing set, each consisting of 4500, 250, and 250 pairs, respectively. We experimented with different window sizes and found that a window size of w = 10 gives the best ROUGE-1 recall (Table 3) and is adapted for our model. The Stanford CoreNLP ) is used to tokenize and lemmatize sentences to the constituent tokens and to extract noun phrases. GloVe (Pennington et al., 2014) 50-dimensional vectors are used to initialize the word embeddings. With the AdaDelta optimizer and a learning rate of 0.1, we trained for 50 epochs. The sentences are truncated or padded to have 50 tokens (only 8% sentences consist of more than 50 tokens). Similarly, we adopt a fixed document size of 500 sentences (only 3.5% of documents in our dataset have more than 500 sentences). We used the standard ROUGE score (Lin, 2004) to evaluate the summaries. The ROUGE scores for summaries are tabulated in Table 1. The summary size can not exceed 20% of the size of the input document in words. TextRank (Mihalcea and Tarau, 2004 ) is a graph based summarizer that applies the Google PageRank (Page et al., 1999) algorithm to rank the sentences. Sefid et al. (Sefid et al., 2019) rank the sentences by combining surface features, semantic and contextual embeddings. The windowed SummaRuNNer+ILP model outperforms the base SummaRuNNer by at least 3 points in ROUGE-1 recall. Adding attention layer to the model does not improve the ROUGE score while it increases the training time considerably as there are more parameters to be trained.",
"cite_spans": [
{
"start": 455,
"end": 480,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF15"
},
{
"start": 900,
"end": 911,
"text": "(Lin, 2004)",
"ref_id": "BIBREF8"
},
{
"start": 1085,
"end": 1110,
"text": "(Mihalcea and Tarau, 2004",
"ref_id": "BIBREF11"
},
{
"start": 1174,
"end": 1193,
"text": "(Page et al., 1999)",
"ref_id": "BIBREF14"
},
{
"start": 1240,
"end": 1260,
"text": "(Sefid et al., 2019)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 287,
"end": 296,
"text": "(Table 3)",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "5"
},
{
"text": "We create and make available PS5K, which is a large slide-paper dataset consisting of 5,000 scientific articles and corresponding manually made slides. This dataset can be used for scientific document summarization and slide generation. We used state of the art extractive summarization methods to summarize scientific articles. Our results show that distributing the positive labels across all sections of a scientific paper, in contrast to summarization methods for news articles, considerably improves performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "https://tika.apache.org/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.ibm.com/products/ilog-cplex-optimizationstudio",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A hierarchical structured selfattentive model for extractive document summarization (hssas)",
"authors": [
{
"first": "Kamal",
"middle": [],
"last": "Al-Sabahi",
"suffix": ""
},
{
"first": "Zhang",
"middle": [],
"last": "Zuping",
"suffix": ""
},
{
"first": "Mohammed",
"middle": [],
"last": "Nadher",
"suffix": ""
}
],
"year": 2018,
"venue": "IEEE Access",
"volume": "6",
"issue": "",
"pages": "24205--24212",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kamal Al-Sabahi, Zhang Zuping, and Mohammed Nadher. 2018. A hierarchical structured self- attentive model for extractive document summariza- tion (hssas). IEEE Access, 6:24205-24212.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Variations of the similarity function of textrank for automated summarization",
"authors": [
{
"first": "Federico",
"middle": [],
"last": "Barrios",
"suffix": ""
},
{
"first": "Federico",
"middle": [],
"last": "L\u00f3pez",
"suffix": ""
},
{
"first": "Luis",
"middle": [],
"last": "Argerich",
"suffix": ""
},
{
"first": "Rosa",
"middle": [],
"last": "Wachenchauzer",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Federico Barrios, Federico L\u00f3pez, Luis Argerich, and Rosa Wachenchauzer. 2016. Variations of the simi- larity function of textrank for automated summariza- tion. CoRR, abs/1602.03606.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Investigating automatic alignment methods for slide generation from academic papers",
"authors": [
{
"first": "Brandon",
"middle": [],
"last": "Beamer",
"suffix": ""
},
{
"first": "Roxana",
"middle": [],
"last": "Girju",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Thirteenth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "111--119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brandon Beamer and Roxana Girju. 2009. Investi- gating automatic alignment methods for slide gen- eration from academic papers. In Proceedings of the Thirteenth Conference on Computational Natu- ral Language Learning, pages 111-119. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Alignment between a technical paper and presentation sheets using a hidden markov model",
"authors": [
{
"first": "Tessai",
"middle": [],
"last": "Hayama",
"suffix": ""
},
{
"first": "Hidetsugu",
"middle": [],
"last": "Nanba",
"suffix": ""
},
{
"first": "Susumu",
"middle": [],
"last": "Kunifuji",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 2005 International Conference on Active Media Technology",
"volume": "",
"issue": "",
"pages": "102--106",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tessai Hayama, Hidetsugu Nanba, and Susumu Kuni- fuji. 2005. Alignment between a technical paper and presentation sheets using a hidden markov model. In Proceedings of the 2005 International Conference on Active Media Technology, 2005.(AMT 2005)., pages 102-106. IEEE.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Ppsgen: Learningbased presentation slides generation for academic papers",
"authors": [
{
"first": "Yue",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Xiaojun",
"middle": [],
"last": "Wan",
"suffix": ""
}
],
"year": 2014,
"venue": "IEEE transactions on knowledge and data engineering",
"volume": "27",
"issue": "4",
"pages": "1085--1097",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yue Hu and Xiaojun Wan. 2014. Ppsgen: Learning- based presentation slides generation for academic papers. IEEE transactions on knowledge and data engineering, 27(4):1085-1097.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Slideseer: A digital library of aligned document and presentation pairs",
"authors": [
{
"first": "Min-Yen",
"middle": [],
"last": "Kan",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 7th ACM/IEEE-CS joint conference on Digital libraries",
"volume": "",
"issue": "",
"pages": "81--90",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Min-Yen Kan. 2007. Slideseer: A digital library of aligned document and presentation pairs. In Pro- ceedings of the 7th ACM/IEEE-CS joint conference on Digital libraries, pages 81-90. ACM.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Talksumm: A dataset and scalable annotation method for scientific paper summarization based on conference talks",
"authors": [
{
"first": "Guy",
"middle": [],
"last": "Lev",
"suffix": ""
},
{
"first": "Michal",
"middle": [],
"last": "Shmueli-Scheuer",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Herzig",
"suffix": ""
},
{
"first": "Achiya",
"middle": [],
"last": "Jerbi",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Konopnicki",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1906.01351"
]
},
"num": null,
"urls": [],
"raw_text": "Guy Lev, Michal Shmueli-Scheuer, Jonathan Herzig, Achiya Jerbi, and David Konopnicki. 2019. Talk- summ: A dataset and scalable annotation method for scientific paper summarization based on conference talks. arXiv preprint arXiv:1906.01351.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Rouge: A package for automatic evaluation of summaries. Text Summarization Branches Out",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin. 2004. Rouge: A package for auto- matic evaluation of summaries. Text Summarization Branches Out.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Grobid: Combining automatic bibliographic data recognition and term extraction for scholarship publications",
"authors": [
{
"first": "Patrice",
"middle": [],
"last": "Lopez",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 13th European Conference on Research and Advanced Technology for Digital Libraries, ECDL'09",
"volume": "",
"issue": "",
"pages": "473--474",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patrice Lopez. 2009. Grobid: Combining automatic bibliographic data recognition and term extraction for scholarship publications. In Proceedings of the 13th European Conference on Research and Ad- vanced Technology for Digital Libraries, ECDL'09, pages 473-474.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The Stanford CoreNLP natural language processing toolkit",
"authors": [
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "Jenny",
"middle": [],
"last": "Finkel",
"suffix": ""
},
{
"first": "Steven",
"middle": [
"J"
],
"last": "Bethard",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mc-Closky",
"suffix": ""
}
],
"year": 2014,
"venue": "Association for Computational Linguistics (ACL) System Demonstrations",
"volume": "",
"issue": "",
"pages": "55--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David Mc- Closky. 2014. The Stanford CoreNLP natural lan- guage processing toolkit. In Association for Compu- tational Linguistics (ACL) System Demonstrations, pages 55-60.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Textrank: Bringing order into text",
"authors": [
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Tarau",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 2004 conference on empirical methods in natural language processing",
"volume": "",
"issue": "",
"pages": "404--411",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rada Mihalcea and Paul Tarau. 2004. Textrank: Bring- ing order into text. In Proceedings of the 2004 con- ference on empirical methods in natural language processing, pages 404-411.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Summarunner: A recurrent neural network based sequence model for extractive summarization of documents",
"authors": [
{
"first": "Ramesh",
"middle": [],
"last": "Nallapati",
"suffix": ""
},
{
"first": "Feifei",
"middle": [],
"last": "Zhai",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2017,
"venue": "Thirty-First AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017. Summarunner: A recurrent neural network based se- quence model for extractive summarization of docu- ments. In Thirty-First AAAI Conference on Artificial Intelligence.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Abstractive text summarization using sequence-to-sequence rnns and beyond",
"authors": [
{
"first": "Ramesh",
"middle": [],
"last": "Nallapati",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xiang",
"suffix": ""
}
],
"year": 2016,
"venue": "The SIGNLL Conference on Computational Natural Language Learning (CoNLL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ramesh Nallapati, Bowen Zhou, Caglar Gulcehre, Bing Xiang, et al. 2016. Abstractive text summariza- tion using sequence-to-sequence rnns and beyond. The SIGNLL Conference on Computational Natural Language Learning (CoNLL), 2016.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The pagerank citation ranking: Bringing order to the web",
"authors": [
{
"first": "Lawrence",
"middle": [],
"last": "Page",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Brin",
"suffix": ""
},
{
"first": "Rajeev",
"middle": [],
"last": "Motwani",
"suffix": ""
},
{
"first": "Terry",
"middle": [],
"last": "Winograd",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. 1999. The pagerank citation rank- ing: Bringing order to the web. Technical report, Stanford InfoLab.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word rep- resentation. In Proceedings of the 2014 conference on empirical methods in natural language process- ing (EMNLP), pages 1532-1543.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Automatic slide generation for scientific papers",
"authors": [
{
"first": "Athar",
"middle": [],
"last": "Sefid",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Prasenjit",
"middle": [],
"last": "Mitra",
"suffix": ""
},
{
"first": "C Lee",
"middle": [],
"last": "Giles",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Athar Sefid, Jian Wu, Prasenjit Mitra, and C Lee Giles. 2019. Automatic slide generation for scientific pa- pers.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Phrase-based presentation slides generation for academic papers",
"authors": [
{
"first": "Sida",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Xiaojun",
"middle": [],
"last": "Wan",
"suffix": ""
},
{
"first": "Shikang",
"middle": [],
"last": "Du",
"suffix": ""
}
],
"year": 2017,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sida Wang, Xiaojun Wan, and Shikang Du. 2017. Phrase-based presentation slides generation for aca- demic papers. In AAAI.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Extractive summarization of long documents by combining global and local context",
"authors": [
{
"first": "Wen",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Giuseppe",
"middle": [],
"last": "Carenini",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "3009--3019",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1298"
]
},
"num": null,
"urls": [],
"raw_text": "Wen Xiao and Giuseppe Carenini. 2019. Extractive summarization of long documents by combining global and local context. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, Novem- ber 3-7, 2019, pages 3009-3019. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Hierarchical attention networks for document classification",
"authors": [
{
"first": "Zichao",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Diyi",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Smola",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 conference of the North American chapter of the association for computational linguistics: human language technologies",
"volume": "",
"issue": "",
"pages": "1480--1489",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchi- cal attention networks for document classification. In Proceedings of the 2016 conference of the North American chapter of the association for computa- tional linguistics: human language technologies, pages 1480-1489.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Scisummnet: A large annotated corpus and content-impact models for scientific paper summarization with citation networks",
"authors": [
{
"first": "Michihiro",
"middle": [],
"last": "Yasunaga",
"suffix": ""
},
{
"first": "Jungo",
"middle": [],
"last": "Kasai",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"R"
],
"last": "Fabbri",
"suffix": ""
},
{
"first": "Irene",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Friedman",
"suffix": ""
},
{
"first": "Dragomir",
"middle": [
"R"
],
"last": "Radev",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "33",
"issue": "",
"pages": "7386--7393",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michihiro Yasunaga, Jungo Kasai, Rui Zhang, Alexan- der R Fabbri, Irene Li, Dan Friedman, and Dragomir R Radev. 2019. Scisummnet: A large an- notated corpus and content-impact models for scien- tific paper summarization with citation networks. In Proceedings of the AAAI Conference on Artificial In- telligence, volume 33, pages 7386-7393.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Main components of the model for summarizing the paper and building the slides.",
"num": null,
"uris": null,
"type_str": "figure"
},
"TABREF0": {
"type_str": "table",
"num": null,
"html": null,
"text": "ROUGE scores for different models. Oracle and TextRank are unsupervised and do not need training. T tr standards for training time in hours based on Nvidia GTX 2080 Ti GPU. SRNN stands for SummaRuNNer.",
"content": "<table><tr><td>Models</td><td colspan=\"4\">ROUGE-1 ROUGE-2 ROUGE-L T tr</td></tr><tr><td>Oracle (window=10)</td><td>57.12</td><td>16.53</td><td>27.62</td><td>-</td></tr><tr><td>Sefid et al.</td><td/><td/><td/><td/></tr></table>"
},
"TABREF2": {
"type_str": "table",
"num": null,
"html": null,
"text": "Bullet points statistics.",
"content": "<table><tr><td colspan=\"3\">Bullet-Point Fraction Avg Word Count</td></tr><tr><td>Title</td><td>-</td><td>3.7</td></tr><tr><td>Level 1</td><td>56.5%</td><td>7.38</td></tr><tr><td>Level 2</td><td>35.5%</td><td>7.22</td></tr><tr><td>Level 3</td><td>7.9%</td><td>6.7</td></tr></table>"
},
"TABREF3": {
"type_str": "table",
"num": null,
"html": null,
"text": "ROUGE scores for oracle summaries generated with different window sizes.",
"content": "<table><tr><td colspan=\"2\">Window Size R-1</td><td>R-2</td><td>R-L</td></tr><tr><td>3</td><td colspan=\"3\">42.95 11.13 21.59</td></tr><tr><td>5</td><td colspan=\"3\">44.34 11.43 22.35</td></tr><tr><td>7</td><td colspan=\"3\">44.88 11.64 22.47</td></tr><tr><td>10</td><td colspan=\"3\">45.93 12.00 22.75</td></tr><tr><td>15</td><td colspan=\"3\">45.52 11.84 22.68</td></tr></table>"
}
}
}
}