id (stringlengths 7–12) | sentence1 (stringlengths 6–1.27k) | sentence2 (stringlengths 6–926) | label (stringclasses, 4 values) |
---|---|---|---|
train_6900 | If π is greater than or equal to 1, the refining procedure fixes the whole best solution and the residual problem is then empty. | π is set to π_min if a better solution is found in order to challenge G and improve this solution. | contrasting |
train_6901 | Because the recording and the post-recording annotation process are expensive tasks, the recording length of such a corpus has to be as short as possible. | in order to train a domain-specific dependency parser, the covering of POS sequences may be useful for increasing the diversity of syntax patterns. | contrasting |
train_6902 | This seems to contradict the idea that the performance of LamSCP improves as the search area becomes wider. | the distributions of the units to cover in Gutenberg and Le-Monde are different, and the variation on the length of the sentences in Le-Monde is very high, which may account for this slight difference in terms of gain. | contrasting |
train_6903 | As previously observed for both algorithms, their computation times grow when the number of required covering features increases. | the ratio between the computation time of LamSCP and ASA does not behave as in Experiment 2 (see Table 7): For the k-covering of 1-POS, this ratio increases from 290 (±11) to 657 (±43) when k goes from 1 to 5, and for the k-covering of 2-POS, it increases from 75 (±5) to 108 (±6). | contrasting |
train_6904 | Sporleder and Lascarides (2005) included other features (e.g., words and their stems, Part-of-Speech [POS] tags, positions, segment lengths) in a boosting-based classifier (i.e., BoosTexter [Schapire and Singer 2000]) to further improve relation classification accuracy. | these studies evaluated classification performance on the instances where rhetorical relations were originally signaled (i.e., the discourse cues were artificially removed), and did not verify how well this approach performs on the instances that are not originally signaled. | contrasting |
train_6905 | That approach has the advantages of making the parsing process easier, and the model gets more data to learn from. | for a solution like ours, which tries to capture the interdependencies between constituents, this would be problematic with respect to scalability and inappropriate because of two modeling issues. | contrasting |
train_6906 | Similarly, the relative distributions of Background, Contrast, Cause, and Explanation are different in the two parsing scenarios. | different kinds of features are applicable and informative for intra- versus multi-sentential parsing. | contrasting |
train_6907 | Algorithm 1 describes how CODRA generates the unit sequences at different levels of the candidate DTs for a given number of EDUs in a sentence. | specifically, in doing so to compute the probability of a DT constituent, it may generate some duplicate sequences. | contrasting |
train_6908 | Discourse cues (e.g., because, but), when present, signal rhetorical relations between two text segments, and have been used as a primary source of information in earlier studies (Knott and Dale 1994;Marcu 2000a). | recent studies (Hernault et al. | contrasting |
train_6909 | For example, the root of the rhetorical sub-tree spanning over EDUs e_{1:2} in Figure 9b is Elaboration-NS. | extraction of these features assumes the presence of labels for the sub-trees, which is not the case when we apply the parser to a new text (sentence or document) in order to build its DT in a non-greedy fashion. | contrasting |
train_6910 | In addition to these features, we also experimented with other features including WordNet-based lexical semantics, subjectivity, and TF.IDF-based cosine similarity. | because such features did not improve parsing performance on our development set, they were excluded from our final set of features. | contrasting |
train_6911 | For example, we observe that over 12% of the sentences in the instructional corpus of Subba and Di-Eugenio (2009) have leaky boundaries. | we notice that in most cases where a DT structure violates sentence boundaries, its units are merged with the units of its adjacent sentences, as in Figure 13b. | contrasting |
train_6912 | If the unlexicalized version is also found to be rare, other variations of the production, depending on whether they include the lexical heads and how many non-terminals (one or two) they consider before and after the potential boundary, are examined one after another (see Fisher and Roark [2007] for details). | we compute the maximum likelihood estimates for a primary production (feature) and its other variations, and use those directly as features with/without binarizing the values. | contrasting |
train_6913 | In RST-DT, by our count, 7,321 out of 7,673 sentences in the training set, 951 out of 991 sentences in the test set, and 1,114 out of 1,208 sentences in the doubly-annotated set have a well-formed DT. | 3,032 out of 3,430 sentences in the instructional corpus have a well-formed DT. | contrasting |
train_6914 | This demonstrates the potential of TSP SW for data sets with even more leaky boundaries, e.g., the Dutch (Vliet and Redeker 2011) and the German Potsdam (Stede 2004) corpora. | it would be interesting to see how other heuristics to do consolidation in the cross condition (Section 4.3. | contrasting |
train_6915 | Some other measures, namely, all α from Krippendorff, and the new γ introduced in this article, are computed from observed and expected disagreements (instead of agreements), denoted here respectively D_o and D_e, and they define the final agreement by Equation (2). | (a_o − a_e) / (1 − a_e) (1) the way the expected value is computed is the only difference between many coefficients (κ, S, π, and their generalizations), and is a controversial question. | contrasting |
train_6916 | In zone (1) of the left part of Figure 4, one annotator has created two units (of the same category), and the other annotator has created only one unit covering the same space. | once the continua are discretized, the two annotators seem to agree on this zone (with the same four atoms), as we can see in the right part of the figure. | contrasting |
train_6917 | Three examples of split and permutation are shown in the right part of the figure, for split positions of, respectively, 15, 24, and 38, all coming from the same real continuum, with units that are no longer aligned (except by chance). | we have to address the fact that some units may intersect with themselves, generating some part of agreement beyond chance. | contrasting |
train_6918 | A consequence of this approach is illustrated in Figure 15, where two annotators perfectly agree both on positions and categories in the experiment on the left, and still perfectly agree on position but slightly diverge concerning categories in the experiment on the right (1/2, 6/7, and 8/9 are assumed to be close categories). | u_α drops from 1 in the left experiment to -0.34 (a negative value means worse than random) in the right experiment, despite, in the latter, the positions being all correct, and the categories being quite good, since c|u_α = 0.85. | contrasting |
train_6919 | In the current version of the CST, the false positive error type creates some overlapping (new units may overlap), and it is the reason why u_α and κ_d were discarded from this experiment. | we have kept c|u_α because it behaves quite well despite overlapping units. | contrasting |
train_6920 | Admittedly, c|u_α was not designed to handle these configurations (and so should not be included in this experiment), but surprisingly it seems to perform in rather the same way as it does with no overlapping; this must be investigated further, but judging from this preliminary observation, it seems this coefficient could still be operational and useful in such cases. | u_α does not handle this experiment correctly and so was not included in the graph. | contrasting |
train_6921 | Due to the inherent difficulty in obtaining the true value of h* from a text, however, these arguments are based only on indirect clues with respect to convergence. | Hilberg conjectured a decrease in the human conditional entropy, as follows (Hilberg 1990): he obtained this through an examination of Shannon's original experimental data and suggested that β ≈ 0.5. | contrasting |
train_6922 | For corpus RongoA-c, we consider a character inclusive of all adjoining parts (i.e., including accents and ornamental parts). | for corpus RongoB-c, we separate parts as reasonably as possible, among multiple possible separation methods. | contrasting |
train_6923 | Whether a value for h* is reached asymptotically and also whether h* > 0 remain important questions requiring separate, more extensive mathematical and empirical studies. | h_2 (or Yule's K, bottom graphs) showed convergence, already at the level of 10^5 tokens, for both words and characters. | contrasting |
train_6924 | Given this result, it is doubtful that V is convergent across languages. | h_1 is mathematically proven to be convergent given infinite-length randomized data, but to larger values than those of the original texts, as mentioned in Section 3.2. | contrasting |
train_6925 | Typically, statistical machine translation (SMT) systems (Chiang 2007;Koehn 2010) perform generation into the target language as part of an integrated system, which avoids the high computational complexity of independent word ordering. | performing word ordering separately in a pipeline has many potential advantages. | contrasting |
train_6926 | In our previous papers (Zhang and Clark 2011;Zhang, Blackwood, and Clark 2012), we applied a set of beams to this structure, which makes it similar to the data structure used for phrase-based MT decoding (Koehn 2010). | we will show later that this structure is unnecessary when the model has more discriminative power, and a conceptually simpler single beam can be used. | contrasting |
train_6927 | For CKY decoding, the model is used to compare hypotheses within each chart cell, which cover the same input words. | for the best-first search decoder, the model is used to order hypotheses on the agenda, which can cover different numbers of words. | contrasting |
train_6928 | In our previous papers we observed empirical convergence of online learning using this linear model, and obtained competitive results. | as explained in Section 2, only positive examples were expanded during training, and the expansion of negative examples led to non-convergence and made online training infeasible. | contrasting |
train_6929 | In both the first and second case, a gold-standard edge is pruned as the result of the expansion of a negative example. | in order for the gold-standard goal edge to be constructed, all gold-standard edges that have been expanded must remain in the chart. | contrasting |
train_6930 | However, SEARN is more oriented to greedy search, optimizing local decisions. | our framework is oriented to best-first search, optimizing global structures. | contrasting |
train_6931 | In recent years, many studies have been published on data collected from social media, especially microblogs such as Twitter. | rather few of these studies have considered evaluation methodologies that take into account the statistically dependent nature of such data, which breaks the theoretical conditions for using cross-validation. | contrasting |
train_6932 | This is what most previous studies have done. | this result is overoptimistic. | contrasting |
train_6933 | Recent years have seen a growing interest in computational modeling of metaphor, with many new statistical techniques opening routes for improving system accuracy and robustness. | the lack of a common task definition, shared data set, and evaluation strategy makes the methods hard to compare, and thus hampers our progress as a community in this area. | contrasting |
train_6934 | These include, most notably, the comparison view, formulated in the Structure-Mapping Theory of Gentner (1983), and the interaction view (Black 1962;Hesse 1966). | the principles of CMT have inspired and influenced much of the computational work on metaphor, thus becoming more central to this paper. | contrasting |
train_6935 | The answer to this question most likely depends on the NLP application in mind. | generally speaking, real-world NLP applications are unlikely to be concerned with historical aspects of metaphor, but rather with the identification of figurative language that needs to be interpreted differently from the literal language. | contrasting |
train_6936 | This means it needs to be either data-driven and be able to automatically acquire the knowledge it needs from text corpora, or rely only on large-scale, general-domain lexical resources (that are already in existence and do not need to be created in a costly manner). | it would be an advantage if no such resource is required and the system can dynamically induce meanings in context. | contrasting |
train_6937 | The resource has been criticized for the lack of clear structuring principles of the mapping ontology (Lönneker-Rodman 2008). | to date MML is the most comprehensive resource for conceptual metaphor in the linguistic literature, and the examples from the list have been used by computational approaches (Mason 2004;Krishnakumaran and Zhu 2007;Li, Zhu, and Wang 2013), both for development and evaluation purposes. | contrasting |
train_6938 | The improvement over Turney's evaluation set-up was the annotation of complete sentences rather than isolated phrases. | it should be noted that the system was evaluated on selected examples rather than continuous text. | contrasting |
train_6939 | They evaluated the method using 10-fold cross-validation and report an F-score of 0.75. | they did not evaluate their system on metaphorical language independently. | contrasting |
train_6940 | The results are encouraging and show that porting coarse-grained semantic knowledge across languages is feasible. | it should be noted that the generalization to coarse semantic features inevitably only captures shallow behavior of metaphorical expressions in the data and bypasses conceptual information. | contrasting |
train_6941 | The authors presented some interesting examples of conceptual metaphors the system extracted, which they claim may foster critical thinking in social science. | they did not carry out any quantitative evaluation. | contrasting |
train_6942 | Shutova (2010) tested her system only on metaphors expressed by a verb and reports an accuracy of 0.81, as evaluated on top-ranked paraphrases produced by the system. | she used WordNet for supervision, which limits the number and range of paraphrases that can be identified by her method. | contrasting |
train_6943 | On one hand, such violations are indicative of any kind of non-literalness (i.e., not only metaphor, but also, for instance, metonymy) or anomaly in language and the approach is likely to overgenerate. | in the case of most conventional metaphors that are highly frequent, no statistically significant violation can be detected in the data, and the approach would bypass many such metaphors. | contrasting |
train_6944 | Such a technique attained a precision of 0.17 and a recall of 0.55, suggesting that the selectional preference violation hypothesis does not port well beyond handcrafted descriptions to large-scale, data-driven techniques. | other, "non-violation" applications of selectional preferences have been fruitful in metaphor modeling. | contrasting |
train_6945 | Exploiting the wider topical structure of text is a promising avenue for metaphor processing. | one needs to keep in mind that distributional similarity-based methods risk assigning frequent metaphors to target domains (as is the case for other semantic violation-based methods). | contrasting |
train_6946 | Based on the results of these experiments, concreteness is likely to be a practically useful feature for metaphor processing. | it should be noted that Turney's hypothesis (that target words tend to be abstract and source words tend to be concrete) explains only a fraction of metaphors and does not always hold. | contrasting |
train_6947 | Another benefit of this type of evaluation is that it allows one to assess both the precision and the recall of the system. | only two of the presented approaches (Dunn 2013b; Shutova 2013) conducted this type of evaluation, as shown in Table 7. | contrasting |
train_6948 | That is, mentioning Iraq in the second sentence is not necessary (for a human being) to understand the meaning of the text. | making both references explicit, as shown in Example 2, would be redundant and could lead to the perception that the text is merely a concatenation of two independent sentences, rather than a set of adjacent sentences that form a meaningful, or coherent, discourse. | contrasting |
train_6949 | Each of these tasks focuses, however, on a different level of linguistic analysis from ours: Following the definitions embraced by Recasens and Vila (2010), "paraphrasing" is a relation between two lexical units that have the same meaning, whereas "coreference" indicates that two referential expressions point to the same referent in discourse. | to work on paraphrasing, we are specifically interested in pairs of text fragments that involve implicit arguments, which can only be resolved in context. | contrasting |
train_6950 | The difficulty of this task can also be seen in the results for the Greedy model, which only achieves an F_1-score of 17.2%. | we observe that the majority of all sure alignments can be retrieved by applying the LemmaId model (60.3% recall). | contrasting |
train_6951 | There are two main reasons for this: On the one hand, their model makes use of much larger resources to compute alignments, including a paraphrasing database that contains over 7 million rewriting rules; on the other hand, their model is supervised and makes use of additional data to learn weights for each of their features. | full and EMNLP'12 only make use of a small development data set to determine a threshold for graph construction. | contrasting |
train_6952 | We found that annotators made use of the full rating scale, with the extremes indicating either a strong preference for the text on the left-hand side or the righthand side, respectively. | most ratings were concentrated more towards the center of the scale (i.e., around zero). | contrasting |
train_6953 | 2009; Levy and Goldberg 2014) that models learning from input informed by dependency parsing, rather than simple running-text input, yield improved similarity estimation and, specifically, clearer distinction between similarity and association. | we find no evidence for a related hypothesis (Agirre et al. | contrasting |
train_6954 | Using the output of these two models as input to a logistic regression classifier, Turney predicts whether two concepts are associated, similar, or both, with 61% accuracy. | in the absence of a gold standard covering the full range of similarity ratings (rather than a list of pairs identified as being similar or not) Turney cannot confirm directly that the similarity-focused model does indeed effectively quantify similarity. | contrasting |
train_6955 | We have established the validity of similarity as a notion understood by human raters and distinct from association. | much theoretical semantics focuses on relations between words or concepts that are finer-grained than similarity and association. | contrasting |
train_6956 | Hyper/hyponym pairs that are separated by fewer levels in the WordNet hierarchy are both more strongly associated and rated as more similar. | there are also interesting discrepancies between similarity and association. | contrasting |
train_6957 | (2012) model to focus on association rather than similarity. | the true explanation may be less simple, since the Huang et al. | contrasting |
train_6958 | While the NLM is the strongest performer on WS-353, SVD is the strongest performer on MEN. | the NLM model performs notably better than the alternatives at modeling similarity, as measured by SimLex-999. | contrasting |
train_6959 | This aligns with the (also unexpected) observation that humans rate the similarity of adjectives more consistently and with more agreement than other parts of speech (see the dashed lines). | the parallels between human raters and the models do not extend to verbs and nouns; verb similarity is rated more consistently than noun similarity by humans, but models estimate these ratings more accurately for nouns than for verbs. | contrasting |
train_6960 | In particular, for models to learn high-quality representations for all linguistic concepts, we believe that future work must uncover ways to explicitly or implicitly infer "deeper," more general, conceptual properties such as intentionality, polarity, subjectivity, or concreteness (Gershman and Dyer 2014). | although improving corpusbased models in this direction is certainly realistic, models that learn exclusively via the linguistic modality may never reach human-level performance on evaluations such as SimLex-999. | contrasting |
train_6961 | 2013) or GloVe (Pennington, Socher, and Manning 2014). | this is not actually building Deep Learning models, and I hope in the future that more people focus on the strongly linguistic question of whether we can build meaning composition functions in Deep Learning systems. | contrasting |
train_6962 | Shortly after the release of Chinese-English and English-Chinese translation services, we also released translation services between Chinese and Japanese, Korean, and other daily-used foreign languages. | with translation directions expanded, users' expectations for the translation between the resource-poor languages became higher and higher. | contrasting |
train_6963 | There are also researchers who regard query reformulation as the translation from the original query to the rewritten one (Riezler and Liu 2010). | what interests me the most is the encounter between translation technology and Chinese traditional culture. | contrasting |
train_6964 | Other people all thought it was impossible and laughed at him. | Yugong said to the people calmly: "Even if I die, I have children; and my children would have children in the future. | contrasting |
train_6965 | Machine translation (MT) has long been both one of the most promising applications of natural language processing technology and one of the most elusive. | over approximately the past decade, huge gains in translation accuracy have been achieved (Graham et al. | contrasting |
train_6966 | With regard to the first distinction, local features, such as phrase translation probabilities, do not require additional contexts from other partial derivations, and they are computed independently from one another. | when features for a particular phrase pair or synchronous rule cannot be computed independently from other pairs, they are called non-local features. | contrasting |
train_6967 | Dense features are generally easier to optimize, both from a computational point of view because the smaller number of features reduces computational and memory requirements, and because the smaller number of parameters reduces the risk of overfitting. | sparse features allow for more flexibility, as their parameters can be directly optimized to increase translation accuracy, so if optimization is performed well they have the potential to greatly increase translation accuracy. | contrasting |
train_6968 | More formally, we can cast the problem as minimizing the expectation of ℓ(•), or risk minimization: ŵ = argmin_w E_{Pr(F,E)}[ℓ(F, E, w)]. Here, Pr(F, E) is the true joint distribution over all sets of input and output sentences that we are likely to be required to translate. | in reality we will not know the true distribution over all sets of sentences a user may ask us to translate. | contrasting |
train_6969 | Intuitively, if λ is set to a small value, optimization will attempt to learn a w that effectively minimizes loss on the training data, but there is a risk of overfitting reducing generalization capability. | if λ is set to a larger value, optimization will be less aggressive in minimizing loss on the training data, reducing over-fitting, but also possibly failing to capture useful information that could be used to improve accuracy. | contrasting |
train_6970 | Converting this to a loss function that is dependent on the model parameters, we obtain the following loss expressing the error over the 1-best results obtained by decoding in Equation 1: Error has the advantage of being simple, easy to explain, and directly related to translation performance, and these features make it perhaps the most commonly used loss in current machine translation systems. | it also has a large disadvantage in that the loss function expressed in Equation 17 is not convex, and most MT evaluation measures used in the calculation of the error function error(•) are not continuously differentiable. | contrasting |
train_6971 | If this error function can be composed as the sum of sentence-level errors, such as BLEU+1, choosing the oracle is simple; we simply need to find the set of candidates that have the lowest error independently sentence by sentence. | [pseudocode of the ORACLE(F, E, C) procedure omitted] when using a corpus-level error function we need a slightly more sophisticated method, such as the greedy method of Venugopal and Vogel (2005). | contrasting |
train_6972 | In standard approaches to batch learning, for every training example f^(i), e^(i) we enumerate every translation and derivation in the respective sets E(f^(i)) and D(f^(i)), and attempt to adjust the parameters so we can achieve the translations with the lowest error for the entire data. | as mentioned previously, the entire space of derivations is too large to handle in practice. | contrasting |
train_6973 | One of the major advantages of online methods is that updates are performed on a much more fine-grained basis-it is often the case that online methods converge faster than batch methods, particularly on larger data sets. | online methods have the disadvantage of being harder to implement (they often must be implemented inside the decoder, whereas batch methods can be separate), and also generally being less stable (with sensitivity to the order in which the training data is processed or other factors). | contrasting |
train_6974 | We then decode each source sentence f̃^(j) of the mini-batch and generate a k-best list (line 7), which is used in optimization (line 9). | to the batch learning algorithm in Figure 3, we do not merge the k-bests from previous iterations. | contrasting |
train_6975 | In the entirety of this article, we have assumed that optimization for MT aims to reduce MT error defined using an evaluation measure, generally BLEU. | as mentioned in Section 2.5, evaluation of MT is an active research field, and there are many alternatives in addition to BLEU. | contrasting |
train_6976 | From these statistics we can see that even after over ten years, MERT is still the dominant optimization algorithm. | starting in WMT 2013, we can see a move to systems based on MIRA, and to a lesser extent ranking, particularly in the most competitive systems. | contrasting |
train_6977 | The fact that algorithms other than MERT are seeing adoption in competitive systems for shared tasks is a welcome sign for the future of MT optimization research. | there are still many open questions in the field, a few of which can be outlined here: Stable Training with Millions of Features: At the moment, there is still no stable training recipe that has been widely proven to effectively optimize millions of features. | contrasting |
train_6978 | From a theoretical point of view, it can be seen as incorporating discriminative training techniques when working with a generative model by optimizing for segmentation performance rather than maximum a posteriori probability. | only the hyperparameters are optimized in this fashion, whereas the lexicon parameters are still learned within the generative model framework. | contrasting |
train_6979 | Because acquiring the posteriors analytically is intractable, inference is performed utilizing Markov chain Monte Carlo algorithms to obtain samples from the posterior distributions of interest (Johnson 2008;Johnson and Goldwater 2009;Johnson and Demuth 2010;Sirts and Goldwater 2013). | as sampling-based models are costly to train on large amounts of data, we adopt the parsing-based method proposed in Sirts and Goldwater (2013) to use the trained AG model inductively on test data. | contrasting |
train_6980 | Both generative and discriminative models can be extended to utilize annotated as well as unannotated data in a semi-supervised manner. | the applicable techniques differ. | contrasting |
train_6981 | The log-linear model presented by Poon, Cherry, and Toutanova (2009) is omitted because it does not have a freely available implementation. | the model has been compared in the semi-supervised learning setting on Arabic and Hebrew with CRFs and Morfessor previously by Ruokolainen et al. | contrasting |
train_6982 | Interpreting precision and recall requires some care as it is always possible to reduce over-segmentation errors by segmenting less and, conversely, to reduce undersegmentation errors by segmenting more. | if this is taken into account, the error categorization can be quite informative. | contrasting |
train_6983 | These two grammar versions have no difference when trained transductively. | when training an inductive model, it may be beneficial to store the subtrees corresponding to whole words because these trees can be used to parse the words in the test set that were seen during training with a single rule. | contrasting |
train_6984 | Morfessor appears to favor precision over recall (see Finnish) in the event a trade-off takes place. | the AG heavily favors recall (see English). | contrasting |
train_6985 | Intuitively, WORDS should yield zero recall. | when applying macro averaging, a word having a gold standard analysis with no boundaries yields a zero denominator and is therefore undefined. | contrasting |
train_6986 | Meanwhile, the second extension AG SELECT (SSV) also results in overall higher precision by reducing over-segmentation of STEM segments substantially, although for Finnish, SUFFIX is oversegmented compared with AG SELECT (USV). | whereas both AG (SSV) and AG SELECT (SSV) improve recall on Finnish compared to AG (USV), only AG (SSV) succeeds in improving recall for English. | contrasting |
train_6987 | This result again supports the intuition that in order to learn the open categories, one is required to utilize large amounts of word forms for learning. | it appears that the necessary information can be extracted from unannotated word forms. | contrasting |
train_6988 | This approach was taken by Poon, Cherry, and Toutanova (2009), Spiegler and Flach (2010), and Sirts and Goldwater (2013). | as discussed in Section 3.3.1, for the Morfessor family the fixing approach was outperformed by the weighted objective function (Kohonen, Virpioja, and Lagus 2010). | contrasting |
train_6989 | The online learning algorithms that discard each new training sample after updating the learner are also referred to as incremental learning algorithms by some authors (see Anthony and Biggs [1992]). | this constraint can be relaxed by using mini-batches (small sets of samples). | contrasting |
train_6990 | The RRR measure reflects how frequently unseen n-grams are repeated in the corpus to be translated. | such unseen n-grams constitute only a fraction of the in-domain corpus. | contrasting |
train_6991 | Plots were obtained for the XRCE and Europarl training corpora and the three translation directions (from English to Spanish, French, and German). | in the figure only the XRCE English to Spanish (Figure 2a) and the Europarl English to Spanish (Figure 2b) results are reported (very similar results were obtained for the other language pairs). | contrasting |
train_6992 | It is expected that updating the system in a sentence-wise manner will produce the best results. | this updating strategy poses efficiency problems because of the necessity of executing model updates in real time. | contrasting |
train_6993 | EM convergence experiments provided in Section 4.4 showed that the log-likelihood of HMM-based word alignment models using the incremental version of the EM algorithm is competitive with that obtained by using the conventional version. | it is still unclear if the use of online learning will cause a degradation in the quality of the translations with respect to the use of batch learning. | contrasting |
train_6994 | Hopefully, this modification will allow the system to provide more accurate predictions for similar samples. | modifying parameters may also produce lateral effects. | contrasting |
train_6995 | As seen in Table 6, online learning allowed us to obtain around one point of improvement in the three measures under consideration with respect to the conventional system (without retraining). | the improvements were not statistically significant in some cases (WER for English to French, BLEU and WER for English to German). | contrasting |
train_6996 | Online learning has been a main topic of research in the field of machine learning. | in the SMT framework, the vast majority of the work has been devoted to the study of the batch-learning setting. | contrasting |
train_6997 | As we can see, interlaced updates increased the training time with respect to that of basic online updates (this was the expected outcome since five samples were processed at each trial instead of one). | the time costs of interlaced updates were still affordable for both corpora (worst case times of a few seconds, median times less than 1 second for the different values of R). | contrasting |
train_6998 | This example suggests a simple division of reordering patterns into long range, or global, and short range, or local. | other language pairs display more complex, hierarchical patterns. | contrasting |
train_6999 | This happens indirectly, through the scoring of target word n-grams, which are generated by translating the source positions in different orders. | the fixed-size context of language models used in SMT (typically four or five words) makes them largely insensitive to global reordering phenomena. | contrasting |
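Below is a minimal sketch of how rows like those above can be loaded and inspected with the Hugging Face `datasets` library. The repository id `example/contrast-pairs` is a hypothetical placeholder (the actual dataset path is not shown on this page); the column names follow the schema in the table header.

```python
from datasets import load_dataset  # pip install datasets

# Hypothetical repository id; substitute the real dataset path.
ds = load_dataset("example/contrast-pairs", split="train")

# Each row carries the four columns shown above:
# id, sentence1, sentence2, and label (e.g., "contrasting").
for row in ds.select(range(3)):
    print(row["id"], "->", row["label"])
    print("  s1:", row["sentence1"][:80])
    print("  s2:", row["sentence2"][:80])
```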