{
"paper_id": "S14-1017",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:32:37.552312Z"
},
"title": "Using Text Segmentation Algorithms for the Automatic Generation of E-Learning Courses",
"authors": [
{
"first": "Can",
"middle": [],
"last": "Beck",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Fraunhofer IOSB",
"location": {
"settlement": "Karlsruhe",
"country": "Germany"
}
},
"email": "[email protected]"
},
{
"first": "Alexander",
"middle": [],
"last": "Streicher",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Fraunhofer IOSB",
"location": {
"settlement": "Karlsruhe",
"country": "Germany"
}
},
"email": "[email protected]"
},
{
"first": "Andrea",
"middle": [],
"last": "Zielinski",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Fraunhofer IOSB",
"location": {
"settlement": "Karlsruhe",
"country": "Germany"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "With the advent of e-learning, there is a strong demand for tools that help to create e-learning courses in an automatic or semi-automatic way. While resources for new courses are often freely available, they are generally not properly structured into easy-to-handle units. In this paper, we investigate how state-of-the-art text segmentation algorithms can be applied to automatically transform unstructured text into coherent pieces appropriate for e-learning courses. The feasibility for course generation is validated on a test corpus specifically tailored to this scenario. We also introduce a more generic training and testing method for text segmentation algorithms based on a Latent Dirichlet Allocation (LDA) topic model. In addition, we introduce a scalable random text segmentation algorithm in order to establish lower and upper bounds against which segmentation results can be evaluated on a common basis.",
"pdf_parse": {
"paper_id": "S14-1017",
"_pdf_hash": "",
"abstract": [
{
"text": "With the advent of e-learning, there is a strong demand for tools that help to create e-learning courses in an automatic or semi-automatic way. While resources for new courses are often freely available, they are generally not properly structured into easy-to-handle units. In this paper, we investigate how state-of-the-art text segmentation algorithms can be applied to automatically transform unstructured text into coherent pieces appropriate for e-learning courses. The feasibility for course generation is validated on a test corpus specifically tailored to this scenario. We also introduce a more generic training and testing method for text segmentation algorithms based on a Latent Dirichlet Allocation (LDA) topic model. In addition, we introduce a scalable random text segmentation algorithm in order to establish lower and upper bounds against which segmentation results can be evaluated on a common basis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The creation of e-learning courses is generally a time-consuming effort. However, separating text into topically cohesive segments can help to reduce this effort whenever textual content is already available but not properly structured according to e-learning standards. Since these segments textually describe the content of learning units, automatic pedagogical annotation algorithms could be applied to categorize them into introductions, descriptions, explanations, examples and other pedagogically meaningful concepts (K.Sathiyamurthy & T.V.Geetha, 2011) .",
"cite_spans": [
{
"start": 523,
"end": 559,
"text": "(K.Sathiyamurthy & T.V.Geetha, 2011)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Course designers generally assume that learning content is composed of small inseparable learning objects at the micro level which in turn are wrapped into Concept Containers (CCs) at the macro level. This approach is followed, e.g., in the Web-Didactic approach by Swertz et al. (2013) where CCs correspond to chapters in a book and Knowledge Objects (KOs) correspond to course pages. To automate the partition of an unstructured text source into appropriate segments for the macro and micro level we applied different text segmentation algorithms (segmenters) on each level.",
"cite_spans": [
{
"start": 266,
"end": 286,
"text": "Swertz et al. (2013)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To evaluate the segmenters in the described scenario, we created a test corpus based on featured Wikipedia articles. For the macro level we exploit sections from different articles and the corresponding micro structure consists of subsequent paragraphs from these sections. On the macro level the segmenter TopicTiling (TT) by Riedl and Biemann (2012) is used. It is based on an LDA topic model which we train based on the articles from Wikipedia to extract a predefined number of different topics. On the micro level, the segmenter BayesSeg (BS) is applied (Eisenstein & Barzilay, 2008) .",
"cite_spans": [
{
"start": 327,
"end": 351,
"text": "Riedl and Biemann (2012)",
"ref_id": "BIBREF19"
},
{
"start": 558,
"end": 587,
"text": "(Eisenstein & Barzilay, 2008)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We achieved overall good results, measured with three different metrics against a baseline approach (a scalable random segmenter), which indicate that text segmentation algorithms are ready to be applied to facilitate the creation of e-learning courses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper is organized as follows: Section 2 gives an overview of related work on automatic course generation as well as on text segmentation applications. In the main Sections 3 and 4 we describe our approach and the evaluation results on our corpus. In the last section we summarize the presented findings and give an outlook on further research needed for the automatic generation of e-learning courses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Automatic course generation can roughly be divided into two different areas. One is concerned with generation from existing courses and is mainly focused on adaptation to the learner or instructional plans; see Lin et al. (2009), Capuno et al. (2009) and Tan et al. (2010). The other area is the course creation itself, on which we focus in this paper.",
"cite_spans": [
{
"start": 211,
"end": 228,
"text": "Lin et al. (2009)",
"ref_id": "BIBREF16"
},
{
"start": 230,
"end": 250,
"text": "Capuno et al. (2009)",
"ref_id": null
},
{
"start": 255,
"end": 272,
"text": "Tan et al. (2010)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Since the publication of the segmenter TextTiling by Hearst (1997), at least a dozen different segmenters have been developed. They can be divided into linear and hierarchical segmenters. Linear segmenters process the text sequentially, sentence by sentence. Hierarchical segmenters first process the whole text and extract topics of varying granularity. These topics are then agglomerated based on a predefined criterion.",
"cite_spans": [
{
"start": 53,
"end": 66,
"text": "Hearst (1997)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Linear segmenters have been developed by Kan et al. (1998) and Galley et al. (2003) . One of the first probabilistic algorithms has been introduced by Utiyama and Isahara (2001) . LDA-based approaches were first described by Sun et al. (2008) and improved by Misra et al. (2009) . The newest LDA-based segmenter is TT. It performs linear text segmentation based on a pretrained LDA topic model and calculates the similarity between segments (adjacent sentences) to measure text coherence on the basis of a topic vector representation using cosine similarity. For reasons of efficiency, only the most frequent topic ID is assigned to each word in the sentence, using Gibbs sampling.",
"cite_spans": [
{
"start": 41,
"end": 58,
"text": "Kan et al. (1998)",
"ref_id": "BIBREF13"
},
{
"start": 63,
"end": 83,
"text": "Galley et al. (2003)",
"ref_id": "BIBREF8"
},
{
"start": 151,
"end": 177,
"text": "Utiyama and Isahara (2001)",
"ref_id": "BIBREF25"
},
{
"start": 225,
"end": 242,
"text": "Sun et al. (2008)",
"ref_id": "BIBREF21"
},
{
"start": 259,
"end": 278,
"text": "Misra et al. (2009)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
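The coherence computation that TT performs at each candidate boundary can be sketched in a few lines — a minimal illustration, not the authors' implementation; the per-sentence topic-ID lists, the `window` size, and the minimum-score boundary rule are simplifying assumptions:

```python
from collections import Counter
from math import sqrt

def cosine(u, v):
    # Cosine similarity between two sparse count vectors (Counters).
    keys = set(u) | set(v)
    dot = sum(u[k] * v[k] for k in keys)
    nu = sqrt(sum(x * x for x in u.values()))
    nv = sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def coherence_scores(sent_topic_ids, window=2):
    # For each gap between sentences, compare the topic-count vectors of
    # the `window` sentences to the left and to the right of the gap.
    scores = []
    for gap in range(1, len(sent_topic_ids)):
        left = Counter(t for s in sent_topic_ids[max(0, gap - window):gap] for t in s)
        right = Counter(t for s in sent_topic_ids[gap:gap + window] for t in s)
        scores.append(cosine(left, right))
    return scores

# Toy input: one topic-ID list per sentence; the topic mix shifts after sentence 2.
sents = [[0, 0, 1], [0, 1, 0], [5, 5, 4], [5, 4, 5]]
scores = coherence_scores(sents)
boundary = scores.index(min(scores)) + 1  # lowest coherence -> likely boundary
```

In the full algorithm, depth scores over the coherence curve rather than a single minimum select the boundaries; the sketch only shows why a topic shift produces a dip in the cosine-similarity curve.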
{
"text": "Hierarchical text segmentation algorithms were first introduced by Yaari (1997) . The latest approach by Eisenstein (2008) uses BS, a generative Bayesian model for text segmentation, assuming a) that topic shifts are likely to occur at points marked by cue phrases and b) that the discourse structure is linear. Each sentence in the document is modeled by a language model associated with a segment. The algorithm then calculates the maximum likelihood estimates of observing the whole sequence of sentences at selected topic boundaries.",
"cite_spans": [
{
"start": 67,
"end": 79,
"text": "Yaari (1997)",
"ref_id": "BIBREF26"
},
{
"start": 105,
"end": 122,
"text": "Eisenstein (2008)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The applications of text segmentation algorithms range from information retrieval (Huang, et al., 2002) to topic tracking and segmentation of multi-party conversations (Galley, et al., 2003) .",
"cite_spans": [
{
"start": 82,
"end": 103,
"text": "(Huang, et al., 2002)",
"ref_id": "BIBREF11"
},
{
"start": 168,
"end": 190,
"text": "(Galley, et al., 2003)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Similar to our work, Sathiyamurthy and Geetha (2011) showed how LDA-based text segmentation algorithms combined with a hierarchical domain ontology and a pedagogical ontology can be applied to content generation for e-learning courses. They focused on the segmentation of existing e-learning material in the domain of computer science and introduced new metrics to measure the segmentation results with respect to concepts from the ontologies. Our work focuses on the appropriate segmentation of unstructured text instead of existing e-learning material. Although the use of domain models is an interesting approach, the availability of such models is very domain-dependent. We rely on the LDA model parameters and training to accomplish a word-to-topic assignment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Rather than introducing new aspects such as pedagogical concepts, we investigated the general usability of segmentation algorithms with a focus on the macro and micro structure that is characteristic of most e-learning content.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The main objective is to provide e-learning course designers with a tool to efficiently organize existing textual content for new e-learning courses. This can be done by applying text segmenters that automatically generate the basic structure of the course. The intended web-didactic-conform two-level structure differentiates between the macro and the micro level. The levels have different requirements with respect to thematic coherence: the CCs are thematically rather independent, while the KOs within each CC need to be intrinsically coherent but still separable. We chose the linear LDA-based segmenter TT to find the boundaries between CCs. The LDA-based topic model can be trained on content which is topically related to the target course. This approach gives the course creator flexibility in the generation of the macro-level structure by either adjusting the training documents or by changing the number and size of topics that should be extracted for the topic model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Generation of E-Learning Courses",
"sec_num": "3"
},
{
"text": "On the micro level we did not use TT. The training of an appropriate LDA model would have to be done for every CC separately, since the CCs are thematically relatively unrelated. Apart from that, the boundaries between the KOs should form an optimal division for a given number of expected boundaries, because the length of the KOs should be adapted to the intended skill and background of the learners. This is why we decided to use the hierarchical segmenter BS.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Generation of E-Learning Courses",
"sec_num": "3"
},
{
"text": "To evaluate segmenters many different corpora have been created. The most commonly used corpus was introduced by Choi (2000) . It is based on the Brown Corpus and contains 700 samples, each containing a fixed number of sentences from 10 different news texts, which are randomly chosen from the Brown Corpus. Two other widely tested corpora were introduced by Galley et al. (2003) . Both contain 500 samples, one with concatenated texts from the Wall Street Journal (WSJ) and the other with concatenated texts from the Topic Detection and Tracking (TDT) corpus (Strassel, et al., 2000) . A standard for the segmentation of speech is the corpus from the International Computer Science Institute (ICSI) by Janin et al. (2003) . A medical text book has been used by Eisenstein and Barzilay (2008) . The approaches to evaluate segmenters are always similar: they have to find the boundaries in artificially concatenated texts.",
"cite_spans": [
{
"start": 113,
"end": 124,
"text": "Choi (2000)",
"ref_id": "BIBREF3"
},
{
"start": 359,
"end": 379,
"text": "Galley et al. (2003)",
"ref_id": "BIBREF8"
},
{
"start": 560,
"end": 584,
"text": "(Strassel, et al., 2000)",
"ref_id": "BIBREF20"
},
{
"start": 703,
"end": 722,
"text": "Janin et al. (2003)",
"ref_id": "BIBREF12"
},
{
"start": 762,
"end": 792,
"text": "Eisenstein and Barzilay (2008)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Application Setting and Corpus",
"sec_num": "3.1"
},
{
"text": "We developed our own dataset because we wanted to use text that potentially could be used as a basis for creating e-learning courses. We therefore need samples which, on the one hand, have relatively clear topic boundaries on the macro level and, on the other hand, resemble the differences in the number of topics and inter-topic cohesion on the micro level.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Application Setting and Corpus",
"sec_num": "3.1"
},
{
"text": "We based our corpus on 530 featured articles from 6 different categories of the English Wikipedia. It can be assumed that Wikipedia articles are often the source material for learning courses. We used featured articles because their content structure is very consistent and clear, i.e., sections and paragraphs are well defined.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Application Setting and Corpus",
"sec_num": "3.1"
},
{
"text": "The corpus is divided into a macro and a micro dataset in the following manner: The macro dataset contains 1200 samples. Each sample is a concatenation of paragraphs from 6-8 different sections of featured articles. Each topic in a sample consists of 3-6 subsequent paragraphs from a randomly selected section. We propose that one paragraph describes one KO. One CC contains all KOs that are from the same section in the article. Thus, one sample from the macro dataset contains 6-8 CCs, each containing 3-6 KOs. The segmentation task is to find the topic boundaries between the CCs. The macro dataset is quite similar in structure to the Choi-Corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Application Setting and Corpus",
"sec_num": "3.1"
},
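The macro-sample construction described above can be sketched as follows — an illustrative reimplementation under stated assumptions (the `sections` pool, its paragraph lists, and all names are hypothetical; the actual corpus was built from the featured Wikipedia articles):

```python
import random

def build_macro_sample(sections, rng=None):
    # `sections` maps a section id to its ordered list of paragraphs
    # (a hypothetical structure; one paragraph corresponds to one KO).
    rng = rng or random.Random(0)
    n_cc = rng.randint(6, 8)                       # 6-8 CCs per sample
    sample, boundaries, pos = [], [], 0
    for sec in rng.sample(sorted(sections), n_cc):
        paras = sections[sec]
        n_ko = min(rng.randint(3, 6), len(paras))  # 3-6 KOs per CC
        start = rng.randint(0, len(paras) - n_ko)  # subsequent paragraphs
        sample.extend(paras[start:start + n_ko])
        pos += n_ko
        boundaries.append(pos)                     # CC boundary after last KO
    return sample, boundaries[:-1]                 # no boundary after the end

pool = {i: ["s%d_p%d" % (i, j) for j in range(6)] for i in range(20)}
sample, gold = build_macro_sample(pool)
```

The returned boundary positions serve as the gold segmentation the algorithms are evaluated against.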
{
"text": "The micro dataset is extracted from the macro dataset. It contains 8231 samples, where each sample contains all KOs from one CC of the macro dataset. The segmentation task is to find the topic boundaries between the KOs, i.e., subsequent paragraphs of one section. All texts in our corpus are stemmed and stopwords are removed with the NLP-Toolkit for Python (Bird, et al., 2009) using an adapted variant of the keyword extraction method by Kim et al. (2013) . The macro and micro dataset themselves are divided into multiple subsets to evaluate the stability of the segmenters when the number of sentences per topic or the number of topics per sample have changed. The detailed configuration is shown in Table 1 and 2. Each subset is identified by the number of CCs per sample and the number of KOs per CC (the subset is denoted as #CC_#KO). Subsets of the micro dataset are identified by a single value which is the number of KOs per sample (#KO). In Table 1 the identifier R means that the number of CCs or KOs is not the same for all samples; it is chosen randomly from the set depicted by curly brackets.",
"cite_spans": [
{
"start": 359,
"end": 379,
"text": "(Bird, et al., 2009)",
"ref_id": "BIBREF1"
},
{
"start": 441,
"end": 458,
"text": "Kim et al. (2013)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 705,
"end": 712,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 953,
"end": 960,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Application Setting and Corpus",
"sec_num": "3.1"
},
{
"text": "The important difference between the macro and micro dataset is that every subset of the macro dataset contains a constant number of topics which differ in the number of sentences per topic between 20 and 40, except for the subset R_R, which contains a random number of topics between 6 and 8. In contrast, each micro-level subset differs in the number of topics but not significantly in the number of sentences per topic. This difference between the datasets allows us to focus on the different level-specific aspects. On the macro dataset we can evaluate the stability of TT over topics with highly varying lengths, and on the micro dataset we can evaluate BS when the number of strongly coherent topics changes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ID",
"sec_num": null
},
{
"text": "The performance of a segmenter cannot simply be measured by false positive and false negative boundaries compared to the true boundaries because, if the predicted boundary is only one sentence away from the true boundary this could still be very close, e.g., if the next true topic boundary is 30 sentences away. Thus, the relative proximity to true boundaries should also be considered. There is an ongoing discussion about what kind of metric is appropriate to measure the performance of segmenters (Fournier & Inkpen, 2012) . Most prominent and widely used are WindowDiff wd (Pevzner & Hearst, 2002) and the probabilistic metric pk (Beeferman, et al., 1999) . The basic principle is to slide a window of fixed size over the segmented text, i.e., fixed number of words or sentences, and assess whether the sentences on the edges are correctly segmented with respect to each other. Both metrics wd and pk are penalty metrics, therefore lower values indicate better segmentations. The problem with these metrics is that they strongly depend on the arbitrarily defined window size parameter and do not penalize all error types equally, e.g., pk penalizes false negatives more than false positives and wd penalizes false positive and negative boundaries more at the beginning and end of the text (Lamprier, et al., 2007) . Because of that we also used a rather new metric called BoundarySimilarity b. This metric is parameter-independent and has been developed by Fournier and Inkpen (2013) to solve the mentioned deficiencies. Since b measures the similarity between the boundaries, higher values indicate better segmentations. We used the implementations of wd, pk and b by Fournier (wd and pk with default parameters).",
"cite_spans": [
{
"start": 501,
"end": 526,
"text": "(Fournier & Inkpen, 2012)",
"ref_id": "BIBREF7"
},
{
"start": 578,
"end": 602,
"text": "(Pevzner & Hearst, 2002)",
"ref_id": "BIBREF18"
},
{
"start": 635,
"end": 660,
"text": "(Beeferman, et al., 1999)",
"ref_id": "BIBREF0"
},
{
"start": 1294,
"end": 1318,
"text": "(Lamprier, et al., 2007)",
"ref_id": "BIBREF15"
},
{
"start": 1462,
"end": 1488,
"text": "Fournier and Inkpen (2013)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Text Segmentation Metrics",
"sec_num": "3.2"
},
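The sliding-window principle behind wd can be illustrated with a short sketch — a simplified version for intuition, not the segeval implementation used in the evaluation; the 0/1 boundary encoding and the fixed `k` are assumptions of the sketch:

```python
def window_diff(ref, hyp, k):
    # ref/hyp are 0/1 lists; a 1 at index i marks a boundary after sentence i.
    # Slide a window of k positions and count windows in which the number of
    # boundaries in the reference and the hypothesis disagree.
    n = len(ref)
    errors = sum(1 for i in range(n - k) if sum(ref[i:i + k]) != sum(hyp[i:i + k]))
    return errors / (n - k)

ref = [0, 0, 1, 0, 0, 0, 1, 0, 0]
near = [0, 1, 0, 0, 0, 0, 1, 0, 0]   # first boundary off by one sentence
none = [0] * 9                        # predicts no boundaries at all
```

A perfect segmentation scores 0, and a near miss is penalized far less than predicting no boundaries at all, which is exactly the proximity-awareness that plain false-positive/false-negative counting lacks.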
{
"text": "Riedl and Biemann evaluated TT on the Choi-Corpus based on a 10-fold cross-validation. Thus, the LDA topic model was generated with 90% of the samples and TT was then tested on the remaining 10% of the samples. Since the 700 samples in the Choi-Corpus are only concatenations of 1111 different excerpts from the Brown Corpus and each sample contains 10 of these excerpts, it is clear that there are simply not enough excerpts to make sure that the samples in the training set do not contain any excerpt that is also part of some samples in the testing set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LDA Topic Model Training",
"sec_num": "3.3"
},
{
"text": "That is one reason why we do not use the same approach: we want to make sure that training and testing sets are truly disjoint when evaluating TT on the macro dataset. The other reason is that the topic structure generated by TT should be based on an LDA topic model with topics extracted from documents which are thematically related to certain parts of the course that is to be created, without using its text source.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LDA Topic Model Training",
"sec_num": "3.3"
},
{
"text": "We train the LDA topic model to extract topics from the real Wikipedia articles. This model is then used to evaluate TT on the macro dataset, not on the Wikipedia articles. This approach has consequences for the LDA topic model training and respective TT testing sets, since the LDA training set contains real articles and the TT test set contains the samples from the macro dataset. Because training and testing set should truly be disjoint we cannot train with any article that is part of a sample from the test set. Because each test sample from the macro dataset contains parts of 6 to 8 articles, the training set is reduced by a large factor, even with little test set size, which is shown for different number of folds (k) for cross-validation in Table 3 . If we truly separate training and testing sets and train the LDA topic model with real articles, a 10-fold cross-validation leads to very small training sets (only 26% of all articles are used), which is why we also used higher folds to evaluate the results of TT on the macro dataset.",
"cite_spans": [],
"ref_spans": [
{
"start": 754,
"end": 761,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "LDA Topic Model Training",
"sec_num": "3.3"
},
{
"text": "We evaluated TT on the macro dataset without providing the number of boundaries. On the micro dataset we evaluated BS with the expected number of boundaries provided. We also implemented a scalable random segmenter (RS) to compare TT and BS against an algorithm with interpretable performance; without a comparison to another segmenter, the values of any metric, even across different metrics, are very difficult to interpret. For every true boundary in a document, RS predicts a boundary drawn from a normal distribution centered on the true boundary with scalable standard deviation \u03c3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Results",
"sec_num": "4"
},
{
"text": "Thus, smaller values for \u03c3 result in better segmentations because the probability of selecting the true boundary increases; e.g., for \u03c3 = 2, more than 68% of all predicted boundaries are at most 2 sentences away from the true boundary and more than 99% of all predicted boundaries are located within a range of 6 sentences from it. But whether 6 sentences is a large or a small distance should depend on the average topic size. We therefore relate the performance of RS to the mean number of sentences per topic by defining \u03c3 in percentages of that number, as shown in the table below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Results",
"sec_num": "4"
},
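The random segmenter just described can be sketched compactly — an illustrative reimplementation, not the code used in the experiments; the clamping to the valid sentence range and the deduplication of collided predictions are our assumptions:

```python
import random

def random_segmenter(true_boundaries, num_sentences, sigma, seed=None):
    # For every true boundary, draw a predicted boundary from a normal
    # distribution centered on it and clamp it to the valid range.
    rng = random.Random(seed)
    predicted = (round(rng.gauss(b, sigma)) for b in true_boundaries)
    return sorted({min(max(p, 1), num_sentences - 1) for p in predicted})

# Sigma given as a percentage of the mean topic length, as in the evaluation:
mean_topic_len = 40
sigma = 0.15 * mean_topic_len   # 15% regime -> sigma = 6 sentences
preds = random_segmenter([40, 80, 120], 160, sigma, seed=0)
```

With sigma = 0 the segmenter reproduces the true boundaries exactly, so scaling sigma up sweeps RS from a perfect to an increasingly noisy baseline.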
{
"text": "Distance from true boundary: very close for \u03c3 = 0%-5%, close for \u03c3 = 5%-15%, large for \u03c3 = 15%-30%. Table 4 : Defined performance of RS for different standard deviations \u03c3, given in percentage of the mean number of sentences per topic.",
"cite_spans": [],
"ref_spans": [
{
"start": 100,
"end": 107,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Distance from True Boundary:",
"sec_num": null
},
{
"text": "To give an example, the subset 7_6 of the macro dataset has an average of 40 sentences per topic, therefore RS with \u03c3=15% means that \u03c3 is set to 6, which is 15% of 40. This is defined as a medium performance in Table 4 because 68% of the boundaries predicted are within a range of 6 sentences from the true boundaries and 99% within 18 sentences. One important difference between the macro and micro dataset is that all subsets of the macro dataset have 7 topics, differing in length, except for subset R_R where this number is only slightly varied (Table 1) . In contrast, all subsets of the micro dataset have roughly the same number of sentences but highly differ in the number of topics (Table 2) . We therefore do not compare the performance of BS and TT since they are evaluated on quite different datasets designed for testing different types of segmentation tasks relevant to course generation, as explained earlier.",
"cite_spans": [],
"ref_spans": [
{
"start": 211,
"end": 218,
"text": "Table 4",
"ref_id": null
},
{
"start": 549,
"end": 558,
"text": "(Table 1)",
"ref_id": "TABREF1"
},
{
"start": 691,
"end": 700,
"text": "(Table 2)",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Distance from True Boundary:",
"sec_num": null
},
{
"text": "We compare both to RS for different standard deviations \u03c3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distance from True Boundary:",
"sec_num": null
},
{
"text": "For the LDA topic model training we used the following default parameters: alpha=0.5, beta=0.1,ntopics=100,niters=1000, twords=20,savestep=100, for details we refer to (Griffiths & Steyvers, 2004) . To compare TT's performance for different folds of the macro dataset we optimized the window parameter which has to be set for TT; it specifies the number of sentences to the left and to the right of the current position p between two sentences that are used to calculate the coherence score between these sentences (Riedl & Biemann, 2012) . The performance for TT has been best with window sizes between 9 and 11 for all metrics as shown in Figure 2 . As expected, higher folds increase TT's overall performance especially with respect to metric b (Figure 3 ). This is due to the larger training set sizes of the LDA topic model. In general smaller window sizes increase the number of predicted boundaries. The optimal window size is between 9 and 11 and we would expect the measures for 5 and 15 to be similar ( Figure 2 ). This is only the case for metric b; the metrics wd and pk seem to penalize false positives more than false negatives. This would be a contradiction to the findings of Lamprier et al. (2007) since they actually found the opposite to be true. This behaviour is explained by the nonlinear relation between the window parameter and number of predicted boundaries by TT as shown in Figure 4 . Another important finding is the stability of TT's performance over different window sizes (from 9 to 11). This is important since a very sensitive behaviour would be very difficult to handle for course creators because they would have to estimate this parameter in advance. For the following detailed evaluation TT window size is set to 9 because of the best overall results with respect to metric b and 30-fold cross-validation. The detailed performance with respect to metric wd, pk and b of TT compared to RS with different standard deviations \u03c3 is shown in Figure 5 i), ii) and iii).",
"cite_spans": [
{
"start": 168,
"end": 196,
"text": "(Griffiths & Steyvers, 2004)",
"ref_id": "BIBREF9"
},
{
"start": 515,
"end": 538,
"text": "(Riedl & Biemann, 2012)",
"ref_id": "BIBREF19"
},
{
"start": 1192,
"end": 1214,
"text": "Lamprier et al. (2007)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 641,
"end": 649,
"text": "Figure 2",
"ref_id": "FIGREF2"
},
{
"start": 748,
"end": 757,
"text": "(Figure 3",
"ref_id": "FIGREF3"
},
{
"start": 1013,
"end": 1021,
"text": "Figure 2",
"ref_id": "FIGREF2"
},
{
"start": 1402,
"end": 1410,
"text": "Figure 4",
"ref_id": "FIGREF4"
},
{
"start": 1975,
"end": 1983,
"text": "Figure 5",
"ref_id": "FIGREF6"
}
],
"eq_spans": [],
"section": "Results for TopicTiling on the Macro Dataset",
"sec_num": "4.1"
},
{
"text": "Figure 5: i. TT measured with metric b. ii. TT measured with metric wd. iii. TT measured with metric pk.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results for TopicTiling on the Macro Dataset",
"sec_num": "4.1"
},
{
"text": "First of all we want to point out that the graphs of RS for different values of \u03c3 are ordered as expected by all metrics. Lower percentages indicate better results. And with respect to metric wd and pk the performance for each \u03c3 is nearly constant over all subsets, which indicates that the metrics correctly consider the relative distance of a predicted boundary from the true boundary by using the mean number of sentences per topic. In metric b only the RS with \u03c3=30%, 15% and 5% are constant. For \u03c3=5% there is a strong decrease in performance for subsets with more sentences per topic. The overall performance of TT is between that of RS for \u03c3=1% and \u03c3=15%, except for subset 7_6 with respect to metric wd. With respect to metric b TT even predicts very close boundaries. In all metrics TT has the worst results on subset 7_6, which has the largest number of sentences per topic (see Table 1 ). This is due to TT's window parameter which influences the number of predicted boundaries as shown in Figure 4 ",
"cite_spans": [],
"ref_spans": [
{
"start": 889,
"end": 896,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 1001,
"end": 1009,
"text": "Figure 4",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Results for TopicTiling on the Macro Dataset",
"sec_num": "4.1"
},
{
"text": "BS does not need any training or parameter fitting, since it is provided with the number of expected segments. We therefore used the default parameter settings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results for BayesSeg on the Micro Dataset",
"sec_num": "4.2"
},
{
"text": "Figure 6: i. BS measured with metric b. ii. BS measured with metric wd. iii. BS measured with metric pk.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results for BayesSeg on the Micro Dataset",
"sec_num": "4.2"
},
{
"text": "As expected, the performance of RS is decreasing for higher values of \u03c3 in all metrics ( Figure 6 i), ii), iii)). For metric wd and pk the increasing mean number of topics leads to slightly increasing penalties for constant values of \u03c3, which clearly indicates that the metrics do not treat all errors equally, as repeatedly pointed out. Metric b treats errors equally over increasing number of topics for RS. With respect to all metrics, BS predicts close boundaries, since it is better than RS with \u03c3=15% except on subset 6 (Table 4) . With an increasing number of topics BS is getting worse in all metrics.",
"cite_spans": [],
"ref_spans": [
{
"start": 89,
"end": 97,
"text": "Figure 6",
"ref_id": "FIGREF8"
},
{
"start": 526,
"end": 535,
"text": "(Table 4)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results for BayesSeg on the Micro Dataset",
"sec_num": "4.2"
},
{
"text": "Comparing the values of metric b on the macro and micro datasets, the metric seems to handle an increasing number of topics better than an increasing size of topics. On the micro dataset the results with respect to all metrics are far more similar than those on the macro dataset, where the differences are very large. Since we are only interested in comparative measures of the performance of the segmenters and RS, which has proven to be a very useful approach for interpreting segmentation results, we leave a detailed explanation of the metrics' behaviour to further research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results for BayesSeg on the Micro Dataset",
"sec_num": "4.2"
},
{
"text": "We demonstrated that text segmentation algorithms can be applied to the generation of e-learning courses. We use a web-didactic approach that is based on a flat two-level hierarchical structure. A new corpus has been compiled from featured articles of the English Wikipedia that reflects this kind of course structure. On the broader macro level we applied the linear LDA-based text segmentation algorithm TopicTiling without providing the expected number of boundaries. The LDA topic model is usually trained on concatenated texts from the very dataset TopicTiling is tested on. We showed that it is very difficult to ensure that the two sets are truly disjoint, because concatenated texts normally share identical parts. We solve this problem by applying a different training and testing method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "The more fine-grained micro level was segmented using BayesSeg, a hierarchical algorithm, which we provided with the expected number of boundaries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "We used three different evaluation metrics and presented a scalable random segmentation algorithm to establish upper and lower bounds for baseline comparison. The results, especially on the macro level, demonstrate that text segmentation algorithms have evolved enough to be used for the automatic generation of e-learning courses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "An interesting direction for future research is the application to, and creation of, real e-learning content. Based on the textual segments, summarization and question generation algorithms, as well as the automatic replacement of text with appropriate pictures and videos, could be used to finally evaluate an automatically generated e-learning course with real learners.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Regarding text segmentation in general, future research especially needs to address the difficult task of measuring the performance of segmentation algorithms transparently and consistently. Our results, in particular those from the random segmentation algorithm, indicate that there are still unsolved issues regarding the penalization of false positives and false negatives when the number of topics or the number of sentences per topic changes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "http://en.wikipedia.org/wiki/Wikipedia:Featured_articles",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://gist.github.com/alexbowe/879414",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/cfournie/segmentation.evaluation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Statistical Models for Text Segmentation",
"authors": [
{
"first": "D",
"middle": [],
"last": "Beeferman",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Berger",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Lafferty",
"suffix": ""
}
],
"year": 1999,
"venue": "Mach. Learn",
"volume": "34",
"issue": "1-3",
"pages": "177--210",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Beeferman, D., Berger, A. & Lafferty, J., 1999. Statistical Models for Text Segmentation. Mach. Learn., Feb., 34(1-3), pp. 177-210.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Natural Language Processing with Python. s.l",
"authors": [
{
"first": "S",
"middle": [],
"last": "Bird",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Loper",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bird, S., Klein, E. & Loper, E., 2009. Natural Language Processing with Python. s.l.:O'Reilly Media.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "LIA: an intelligent advisor for e-learning",
"authors": [
{
"first": "N",
"middle": [],
"last": "Capuano",
"suffix": ""
}
],
"year": 2009,
"venue": "Interactive Learning Environments",
"volume": "17",
"issue": "3",
"pages": "221--239",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Capuano, N. et al., 2009. LIA: an intelligent advisor for e-learning. Interactive Learning Environments, 17(3), pp. 221-239.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Advances in Domain Independent Linear Text Segmentation",
"authors": [
{
"first": "F",
"middle": [
"Y Y"
],
"last": "Choi",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Choi, F. Y. Y., 2000. Advances in Domain Independent Linear Text Segmentation.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Bayesian Unsupervised Topic Segmentation",
"authors": [
{
"first": "J",
"middle": [],
"last": "Eisenstein",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Barzilay",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "334--343",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eisenstein, J. & Barzilay, R., 2008. Bayesian Unsupervised Topic Segmentation. Honolulu, Hawaii, Association for Computational Linguistics, pp. 334-343.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Evaluating Text Segmentation using Boundary Edit Distance",
"authors": [
{
"first": "C",
"middle": [],
"last": "Fournier",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fournier, C., 2013. Evaluating Text Segmentation using Boundary Edit Distance. Stroudsburg, PA, USA, Association for Computational Linguistics, to appear.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Segmentation Similarity and Agreement",
"authors": [
{
"first": "C",
"middle": [],
"last": "Fournier",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Inkpen",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "152--161",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fournier, C. & Inkpen, D., 2012. Segmentation Similarity and Agreement. Montreal, Canada, Association for Computational Linguistics, pp. 152-161.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Discourse Segmentation of Multi-party Conversation",
"authors": [
{
"first": "M",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Mckeown",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Fosler-Lussier",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Jing",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "562--569",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Galley, M., McKeown, K., Fosler-Lussier, E. & Jing, H., 2003. Discourse Segmentation of Multi-party Conversation. Stroudsburg, PA, USA, Association for Computational Linguistics, pp. 562-569.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Finding scientific topics",
"authors": [
{
"first": "T",
"middle": [
"L"
],
"last": "Griffiths",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Steyvers",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the National Academy of Sciences",
"volume": "101",
"issue": "",
"pages": "5228--5235",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Griffiths, T. L. & Steyvers, M., 2004. Finding scientific topics. Proceedings of the National Academy of Sciences, April, 101(Suppl. 1), pp. 5228-5235.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "TextTiling: Segmenting Text into Multi-paragraph Subtopic Passages",
"authors": [
{
"first": "M",
"middle": [
"A"
],
"last": "Hearst",
"suffix": ""
}
],
"year": 1997,
"venue": "Comput. Linguist.",
"volume": "23",
"issue": "",
"pages": "33--64",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hearst, M. A., 1997. TextTiling: Segmenting Text into Multi-paragraph Subtopic Passages. Comput. Linguist., Mar., 23(1), pp. 33-64.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Applying Machine Learning to Text Segmentation for Information Retrieval",
"authors": [
{
"first": "X",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huang, X. et al., 2002. Applying Machine Learning to Text Segmentation for Information Retrieval. s.l.:s.n.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The ICSI Meeting Corpus. s.l., s.n., pp. I-364--I-367",
"authors": [
{
"first": "A",
"middle": [],
"last": "Janin",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Janin, A. et al., 2003. The ICSI Meeting Corpus. s.l., s.n., pp. I-364--I-367 vol.1.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Linear Segmentation and Segment Significance. s.l., s.n",
"authors": [
{
"first": "M.-Y",
"middle": [],
"last": "Kan",
"suffix": ""
},
{
"first": "J",
"middle": [
"L"
],
"last": "Klavans",
"suffix": ""
},
{
"first": "K",
"middle": [
"R"
],
"last": "Mckeown",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "197--205",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kan, M.-Y., Klavans, J. L. & McKeown, K. R., 1998. Linear Segmentation and Segment Significance. s.l., s.n., pp. 197-205.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Automatic keyphrase extraction from scientific articles",
"authors": [
{
"first": "S",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Medelyan",
"suffix": ""
},
{
"first": "M.-Y",
"middle": [],
"last": "Kan",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2013,
"venue": "Language Resources and Evaluation",
"volume": "47",
"issue": "3",
"pages": "723--742",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kim, S., Medelyan, O., Kan, M.-Y. & Baldwin, T., 2013. Automatic keyphrase extraction from scientific articles. Language Resources and Evaluation, 47(3), pp. 723-742.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "On Evaluation Methodologies for Text Segmentation Algorithms",
"authors": [
{
"first": "S",
"middle": [],
"last": "Lamprier",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Amghar",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Levrat",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Saubion",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "19--26",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lamprier, S., Amghar, T., Levrat, B. & Saubion, F., 2007. On Evaluation Methodologies for Text Segmentation Algorithms. s.l., s.n., pp. 19-26.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "An Automatic Course Generation System for Organizing Existent Learning Objects Using Particle Swarm Optimization",
"authors": [
{
"first": "Y.-T",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "S.-C",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "J.-T",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Y.-M. ; M",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "565--570",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lin, Y.-T., Cheng, S.-C., Yang, J.-T. & Huang, Y.- M., 2009. An Automatic Course Generation System for Organizing Existent Learning Objects Using Particle Swarm Optimization. In: M. Chang, et al. Hrsg. Learning by Playing. Game-based Education System Design and Development. s.l.:Springer Berlin Heidelberg, pp. 565-570.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Text Segmentation via Topic Modeling: An Analytical Study",
"authors": [
{
"first": "H",
"middle": [],
"last": "Misra",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Yvon",
"suffix": ""
},
{
"first": "J",
"middle": [
"M"
],
"last": "Jose",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Cappe",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "1553--1556",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Misra, H., Yvon, F., Jose, J. M. & Cappe, O., 2009. Text Segmentation via Topic Modeling: An Analytical Study. New York, NY, USA, ACM, pp. 1553-1556.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A Critique and Improvement of an Evaluation Metric for Text Segmentation",
"authors": [
{
"first": "L",
"middle": [],
"last": "Pevzner",
"suffix": ""
},
{
"first": "M",
"middle": [
"A"
],
"last": "Hearst",
"suffix": ""
}
],
"year": 2002,
"venue": "Comput. Linguist.",
"volume": "28",
"issue": "",
"pages": "19--36",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pevzner, L. & Hearst, M. A., 2002. A Critique and Improvement of an Evaluation Metric for Text Segmentation. Comput. Linguist., Mar., 28(1), pp. 19-36.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "TopicTiling: A Text Segmentation Algorithm Based on LDA",
"authors": [
{
"first": "M",
"middle": [],
"last": "Riedl",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Biemann",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "37--42",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Riedl, M. & Biemann, C., 2012. TopicTiling: A Text Segmentation Algorithm Based on LDA. Stroudsburg, PA, USA, Association for Computational Linguistics, pp. 37-42.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Quality Control in Large Annotation Projects Involving Multiple Judges: The Case of the TDT Corpora",
"authors": [
{
"first": "S",
"middle": [],
"last": "Strassel",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Graff",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Martey",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Cieri",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Strassel, S., Graff, D., Martey, N. & Cieri, C., 2000. Quality Control in Large Annotation Projects Involving Multiple Judges: The Case of the TDT Corpora. s.l., s.n.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Text Segmentation with LDA-based Fisher Kernel",
"authors": [
{
"first": "Q",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "269--272",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sun, Q., Li, R., Luo, D. & Wu, X., 2008. Text Segmentation with LDA-based Fisher Kernel. Stroudsburg, PA, USA, Association for Computational Linguistics, pp. 269-272.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "A Pedagogical Ontology as a Playground in Adaptive Elearning Environments",
"authors": [
{
"first": "C",
"middle": [],
"last": "Swertz",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Swertz, C. et al., 2013. A Pedagogical Ontology as a Playground in Adaptive Elearning Environments.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "GI",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "1955--1960",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "s.l., GI, pp. 1955-1960.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "The Design and Application of an Automatic Course Generation System for Large-Scale Education",
"authors": [
{
"first": "X",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Ullrich",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Shen",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "607--609",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tan, X., Ullrich, C., Wang, Y. & Shen, R., 2010. The Design and Application of an Automatic Course Generation System for Large-Scale Education. s.l., s.n., pp. 607-609.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "A Statistical Model for Domain-independent Text Segmentation",
"authors": [
{
"first": "M",
"middle": [],
"last": "Utiyama",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Isahara",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "499--506",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Utiyama, M. & Isahara, H., 2001. A Statistical Model for Domain-independent Text Segmentation. Stroudsburg, PA, USA, Association for Computational Linguistics, pp. 499-506.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Segmentation of Expository Texts by Hierarchical Agglomerative Clustering. s.l.:s.n",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Yaari",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yaari, Y., 1997. Segmentation of Expository Texts by Hierarchical Agglomerative Clustering. s.l.:s.n.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Figure 1."
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Schema for corpus samples: left and right Wikipedia articles with sections and paragraphs, in the middle three samples, dashed rectangle is a macro sample and dashed circles are micro samples. Filled squares indicate topic boundaries in the macro sample and filled circles in the micro samples."
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "TT performance for different window sizes with 30-fold cross validation."
},
"FIGREF3": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "TT performance for different folds and window size set to 9."
},
"FIGREF4": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Mean number of predicted boundaries by TT for different window sizes and an LDA topic model trained with 30 folds."
},
"FIGREF6": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Performance of TT on the macro dataset."
},
"FIGREF7": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "."
},
"FIGREF8": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Performance of BS on the micro dataset."
},
"TABREF1": {
"type_str": "table",
"content": "<table><tr><td>ID</td><td>KOs per sample</td><td>mean sentences per KO</td></tr><tr><td>3</td><td>3</td><td>9</td></tr><tr><td>4</td><td>4</td><td>8</td></tr><tr><td>5</td><td>5</td><td>7</td></tr><tr><td>6</td><td>6</td><td>7</td></tr></table>",
"num": null,
"html": null,
"text": "Macro dataset and its subsets each with 200 samples."
},
"TABREF2": {
"type_str": "table",
"content": "<table/>",
"num": null,
"html": null,
"text": ""
},
"TABREF4": {
"type_str": "table",
"content": "<table/>",
"num": null,
"html": null,
"text": "Mean size and standard deviation of truly disjunctive LDA training and respective TT testing set."
}
}
}
}