id (string, 7-12 chars) | sentence1 (string, 6-1.27k chars) | sentence2 (string, 6-926 chars) | label (4 classes) |
---|---|---|---|
train_7200 | The HDP model for the root nodes (on the right) illustrates the generation of words yelling, talking, repairs. | to the DPs in the child nodes, in the HDP model, the stems/suffixes are shared across all root HDPs. | contrasting |
train_7201 | Although we assume a stem+suffix segmentation, other types of segmentation, such as prefix+stem, are also covered. | stem alterations and infixation are not covered in our model. | contrasting |
train_7202 | We say yes to the constant flow of peer-review journal publications and their impact; (...) All of us are in this game, too. | we maintain that this cannot be all. | contrasting |
train_7203 | As a result, other formalisms that were considerably more expressive than TAG have laid claim to the Joshian mantle of Mild Context Sensitivity, showing that it has become a highly influential meme in the field. | it has since been shown that the artificial permutation-complete language MIX₃, consisting of all permutations over the strings of the TAL aⁿbⁿcⁿ, is a Multiple Context Free Language (MCFL). | contrasting |
train_7204 | I think the surveyed papers support this use of BLEU, since most of the system-level BLEU-human correlations for MT reported in the survey are Medium or High (Figure 5). | the evidence does not support using BLEU to evaluate other types of NLP systems (outside of MT), and it does not support using BLEU to evaluate individual texts rather than NLP systems. | contrasting |
train_7205 | Ensemble methods using multiple classifiers have proven to be among the most successful approaches for the task of Native Language Identification (NLI), achieving the current state of the art. | a systematic examination of ensemble methods for NLI has yet to be conducted. | contrasting |
train_7206 | Random forests provide the biggest boost, improving performance by almost 10%. | this is still lower than our LDA meta-classifier. | contrasting |
train_7207 | Ionescu, Popescu, and Cahill (2016) do carry out experiments on raw documents from the Norwegian ASK corpus, and reach accuracies up to 68%. | their set-up is rather different, following Pepper (2012). | contrasting |
train_7208 | For instance, in applications such as parsing or translation, we work with a fixed grammar, so it might seem that the universal recognition problem is of little practical relevance. | it is worth remembering that for these applications, we are primarily interested in the structural descriptions that the grammar assigns to a generated sentence, not in the membership of the sentence per se. | contrasting |
train_7209 | A derivation for this sentence is shown in the upper half of Figure 3. | the schema cannot be universally active in English, as this would cause the grammar to also accept strings such as *Kahn blocked skillfully a powerful by Rivaldo shot, which is witnessed by the derivation in the lower half of Figure 3 (a dagger † marks the problematic step). | contrasting |
train_7210 | In part 1 of the reduction (Section 3.1.2), this allows us to "guess" arbitrary truth assignments for the variables in the clause φ. | the possibility to write grammars with lexical ambiguity is an essential feature of all interesting formalisms for natural language syntax, including also TAG. | contrasting |
train_7211 | When the machine is in an existential state, it accepts the input if there is at least one transition that eventually leads to an accepting state. | when the machine is in a universal state, it accepts input only if every possible transition eventually leads to an accepting state. | contrasting |
train_7212 | These elementary pieces are specially designed to satisfy two useful properties: First, each elementary piece can be stored using an amount of space that does not depend on the length of w. Second, elementary pieces can be shared among different derivations of w under G. The algorithm then uses dynamic programming to construct and store in a multidimensional parsing table all possible elementary pieces pertaining to the derivations of w under G. From such a table one can directly check whether w ∈ L(G). | despite the fact that the number of derivations of w under G can grow exponentially with the length of w, the two properties of elementary pieces allow the algorithm to run in time polynomial in the length of w. the runtime is not bounded by a polynomial function in the size of G, as should be expected from the hardness results reported in Section 3.1. | contrasting |
train_7213 | The computational effect of grammar structure and grammar size on the parsing problem is rather well understood for several formalisms currently used in computational linguistics, including context-free grammar and TAG. | to the best of our knowledge, this problem has not been investigated before for VW-CCG or other versions of CCG; see, for instance, Kuhlmann and Satta (2014) for discussion. | contrasting |
train_7214 | From the computational perspective, ε-entries represent the boundary between the results in Section 3 and Section 4. | because we do not know whether the classes NP and EXPTIME can be separated, we cannot draw any precise conclusion about the role of ε-entries in the parsing problem. | contrasting |
train_7215 | Schabes (1990) reports that the universal recognition problem for TAG can be decided in time O(|G|²|w|⁶), where |G| is the size of the input grammar G and |w| is the length of the input sentence w. One could hope then to efficiently solve the universal recognition problem for VW-CCG by translating an input VW-CCG G into an equivalent TAG G′, and then applying to G′ and the input string any standard recognition method for the latter class. | the part of the equivalence proof by Vijay-Shanker and Weir (1994) showing how to translate VW-CCG to TAG requires the instantiation of a number of elementary trees in G′ that is exponential in |G|. | contrasting |
train_7216 | Because we can also translate any TAG into an equivalent VW-CCG without blowing up the size of the grammar, following the construction by Vijay-Shanker and Weir (1994), we conclude that VW-CCG is more succinct than TAG. | the price we have to pay for this gain in expressivity is the extra parsing complexity of VW-CCG. | contrasting |
train_7217 | We want to emphasize that our method works in an unsupervised fashion and is not restricted to certain POS classes. | most of the competitive methods require POS filtering as a pre-processing step in order to do their statistics. | contrasting |
train_7218 | Because of comparison reasons, the first evaluation that uses POS filtering (see Section 3.6) is restricted to noun compounds. | the remaining experiments in Sections 3.7 and 3.8 are not restricted to any particular POS. | contrasting |
train_7219 | Only for the P@100, can the word2vec-based method beat the t-test and frequency baselines. | for all other measures, the performance is similar to these baselines or even inferior, and significantly worse than using DRUID with JoBimText. | contrasting |
train_7220 | Figure 2, using solely the DRUID method or the combined variation with the logfrequency lead to the best ranking for the first 1,000 ranked candidates. | both methods are outperformed beyond the first 1,000 ranked candidates by the MFinformed DRUID variations. | contrasting |
train_7221 | and the t-test with stopword filtering, the DRUID method yields the best scores for 6 out of the 32 languages. | if we multiply the logarithmic frequency by the DRUID measure, we gain the best performance for 30 languages. | contrasting |
train_7222 | Extracting the most similar terms that are nested in w results in the first split candidate set, called similar candidate units. | only for few terms do we observe nested candidates in the most similar words. | contrasting |
train_7223 | We have examined schemes of priority ordering for integrating information from different candidate sets-for example, using the similar candidate units first and only applying the other candidate sets if no split was found. | preliminary experiments revealed that it was always beneficial to generate splits based on all three candidate sets and use the geometric mean scoring as outlined above to select the best split as decomposition of a word. | contrasting |
train_7224 | As observed in Table 17, the highest precision using the JoBimText similarities is achieved with the similar candidate units. | the recall is lowest because for many words no information is available. | contrasting |
train_7225 | Interestingly, we observe an opposite trend for word2vec. | the best overall performance is achieved with the generated dictionary, which yields an F1 measure of 0.9583 using JoBimText and 0.9627 using word2vec. | contrasting |
train_7226 | We tried to resolve this issue by recursively splitting words with nested compounds also contained in the data set. | recall changed only marginally. | contrasting |
train_7227 | Comparing two methods for the generation of distributional semantic models within SECOS, we obtain the best results for German, Dutch, and Afrikaans using word2vec. | for Finnish the best results are achieved with JoBimText. | contrasting |
train_7228 | Furthermore, the feature-based hash sampling included only contextual features (in the form of n-gram co-occurrence information), and did not consider orthographic features. | our log-linear model integrates both type-level orthographic features and token-level bigram frequencies. | contrasting |
train_7229 | To address this challenge, we apply IMH sampling, which relies on a proposal distribution and does not require normalization. | finding an appropriate proposal distribution can sometimes be challenging, as it needs to be close to the true distribution for faster mixing and must be easy to sample from. | contrasting |
train_7230 | For the forced expectation, one possibility is to use the bigram language model p(e 1 e 2 ) as a proposal distribution. | the bigram language model did not work well in practice. | contrasting |
train_7231 | Accept the new sample with the probability (...). The IMH sampling reduces the complexity of the forced expectation estimation to O(N_F N), which is significantly less than the complexity of O(N_F V N) in the case of Gibbs sampling. | we could not apply IMH while estimating the full expectation, as finding a suitable proposal distribution is more complicated. | contrasting |
train_7232 | At a high level, given an observed French sentence, it samples a hidden English sequence according to p(e|f) in order to estimate the forced expectation term of the update, and then samples a French sentence according to p(f|e) to estimate the full expectation, as shown in Algorithm 1. | because the individual English words are not independent, due to the bigram language model, the sampling of p(e|f) is itself broken down into a sequence of Gibbs sampling steps, sampling one word at a time while holding the others fixed, as shown in Algorithm 2. | contrasting |
train_7233 | For example, the Spanish word "madre" means "mother" in English, but our model gave the highest score to the English word "made" due to the high orthographic similarity. | such error cases are rare compared with the improvement. | contrasting |
train_7234 | The accuracies reported here are significantly lower than those achieved by modern supervised methods (and unsupervised methods with large corpora). | our results required no more than 1,000 lines of data from each language, and preserved accuracy with as little as 100 lines of data. | contrasting |
train_7235 | One can argue that the antecedents in such cases (i.e., this chapter and this section) are big chunks of text and therefore non-nominal. | though these are certainly interesting cases, we do not focus on them in this article. | contrasting |
train_7236 | Instead, it can only refer to the snake here. | in Example (28b), reference to the situation by that is possible, and due to that prior mention, subsequent reference by it is also possible. | contrasting |
train_7237 | (NYT) Here, the distance between the anaphor and the antecedent is small: The antecedent of this fact occurs in the preceding clause. | in Example (31), the antecedent of this question occurs four sentences away from the anaphor sentence. | contrasting |
train_7238 | For instance, replacing either (d) or (e) in Example (34) with the (constructed) sentence in (e ) will violate the right-frontier constraint, because that in (e ) accesses the closed-off information about House A. | it seems to be a fairly natural continuation of the conversation, especially if it were uttered in the course of a spontaneous conversation that had not been prepared in detail in advance. | contrasting |
train_7239 | Thus the referring function states that that refers to the event of Engine 1 getting to Avon, which takes two hours. | to Eckert and Strube (2000), who present a system design that was not implemented, Byron (2002, 2004) presents an implemented system. | contrasting |
train_7240 | They use surface-based features and information that is readily available and easy to gather automatically. | knowledge-poor methods do not tend to be particularly effective, as is evidenced by relatively low recall in general. | contrasting |
train_7241 | Because unification can pass information across unbounded structures, this can be thought of as reducing Merge to Move. | Generalized Phrase Structure Grammar (Gazdar 1981), Tree Adjoining Grammar (TAG; Joshi and Levy 1982), and Combinatory Categorial Grammar (CCG; Ades and Steedman, 1982) sought to reduce Move to various forms of local merger. | contrasting |
train_7242 | The class of languages characterizable by these formalisms fell within the requirements of what Joshi (1988) called "mild context sensitivity" (MCS), which proposed as a criterion for what could count as a computationally "reasonable" theory of natural languages-informally speaking, that they are polynomially recognizable, and exhibit constant growth and some limit on crossing dependencies. | the MCS class is much much larger than the CCG/TAG languages, including the multiple context free languages and even (under certain further assumptions) the languages of Chomskian minimalism, so it seems appropriate to distinguish TAG and CCG as "slightly noncontext-free" (SNCF, with apologies to the French railroad company). | contrasting |
train_7243 | Also, we do not see the switchboard as a competitor to WebLicht or the LAPPS Grid. | each new predefined workflow in WebLicht can be advertised by and directly invoked from the switchboard, hence directing more user traffic to WebLicht. | contrasting |
train_7244 | If the author did not send the data and/or source code (nor replied that it was not possible to send the requested information), we sent a second and final e-mail on 24 October 2017. | to the first e-mail, this second e-mail was sent to all authors of the paper, and the deadline for sending the information was extended to 19 November 2017. | contrasting |
train_7245 | Grice makes the important assumptions that participants in a discourse are rational agents and that they are governed by cooperative principles. | in some cases involving non-literal readings or negotiation, agents do not always have rational communicative behavior. | contrasting |
train_7246 | For example, "I love that game!" | might be followed by "I love it too," which has a similar structure and proposition as the first utterance: (1) I love that game / I love it too. the second response might be "I hate that game," in which the contrast between love and hate shows the disalignment. | contrasting |
train_7247 | For example, if you are talking about food, and you say how yummy the french fries are, then the stance focus is the fries and they are evaluated positively. | a focus can also be an act. | contrasting |
train_7248 | Moreover, this ALIGNMENT must have a previous utterance with which to align or disalign. | aFFECT is a comment-internal relationship. | contrasting |
train_7249 | 2015b), public opinions and user behavior understanding on societal issues (Pak and Paroubek 2010;Popescu and Pennacchiotti 2010;Kouloumpis, Wilson, and Moore 2011), and so forth. | the explosive growth of microblog data far outpaces human beings' speed of reading and understanding. | contrasting |
train_7250 | In addition to "topic" modeling, it has also inspired discourse (Crook, Granell, and Pulman 2009;Ritter, Cherry, and Dolan 2010;Joty, Carenini, and Lin 2011) detection without supervision or with weak supervision. | none of the aforementioned work jointly infers discourse and topics on microblog conversations, which is a gap the present article fills. | contrasting |
train_7251 | 2015) combines short text aggregation and topic induction into a unified model. | in SATM, no prior knowledge is given to ensure the quality of text aggregation, which will further affect the performance of topic inference. | contrasting |
train_7252 | For example, to the best of our knowledge, there currently exists no high-quality word embedding corpus for Chinese social media. | to these prior methods, our model does not have the prerequisite to an external resource, whose general applicability in cold-start scenarios is therefore ensured. | contrasting |
train_7253 | 2000;Cohen, Carvalho, and Mitchell 2004;Bangalore, Fabbrizio, and Stent 2006). | dA definition is generally domain-specific and usually involves the manual designs from experts. | contrasting |
train_7254 | Most of these approaches have considered utterances in isolation. | even humans have difficulty sometimes in recognizing sarcastic intent when considering an utterance in isolation (Wallace et al. | contrasting |
train_7255 | For the neural networks models, similar to the results on the IAC v2 data set, the LSTM models that read both the context and the current turn outperform the LSTM model that reads only the current turn (LSTM ct ). | unlike the IAC v2 corpus, for Twitter, we observe that for the LSTM without attention, the single LSTM architecture (i.e., LSTM ct+pt ) performs better, that is, 72% F1 between the sarcastic and non-sarcastic category (average), which is around 4 percentage points better than the multiple LSTMs (i.e., LSTM ct +LSTM pt ). | contrasting |
train_7256 | (2014) showed that by providing additional conversation context, humans could identify sarcastic utterances that they were unable to identify without the context. | it will be useful to understand whether a specific part of the conversation context triggers the sarcastic reply. | contrasting |
train_7257 | Also, the attention model selects (i.e., second highest weight) sentence S2 from the P TURN ("he died because of it"), which also shows that the model captures opposite sentiment between the conversation context and the sarcastic post. | from Figure 8, we notice that some of the Turkers choose the third sentence S3 ("sure russia fuels the conflict, but he didnt have to go there") in addition to sentence S1 from the context P TURN. | contrasting |
train_7258 | For this task, the agreement between the attention weights of the models and humans (using majority voting) is lower than for the previous task. | the IAA between Turkers is also just moderate (α between 0.66 and 0.72), which shows that this is inherently a difficult task. | contrasting |
train_7259 | We show that even if the training data in Reddit is ten times larger, it did not make much impact in our experiments. | the Reddit corpus consists of several subreddits, so it might be interesting in the future to experiment with training data from a particular genre of subreddit (e.g., political forums). | contrasting |
train_7260 | The typology provides valuable insights into the linguistic realization of irony that could improve its automatic detection (e.g., the correlation between irony markers and irony activation types). | given the complexity of identifying such pragmatic devices as demonstrated by the inter-annotator agreement study, it is not clear to which extent it would be computationally feasible to detect the irony categories they propose. | contrasting |
train_7261 | As shown in Table 11, keeping only content words discards pronouns, determiners, and so forth, and makes the targets more likely to yield many tweets. | it also discards elements that are crucial for the semantics of a target, such as numbers and figures. | contrasting |
train_7262 | We can conclude that applying sentiment analysis to the tweets is a viable method to define the prototypical sentiment related to the situations (yielding an accuracy of up to 72.20%). | our targets being very specific and restricted by the limited search space when using the Twitter Search API, we were only able to collect tweets for fewer than half of the targets. | contrasting |
train_7263 | As can be inferred from Figure 7, collecting more tweets seems to have a moderate effect on the overall sentiment analysis performance (91% vs. 94%). | the increase seems to stagnate at 750 tweets and the scores even decline as the number of tweets further increases. | contrasting |
train_7264 | First, it should be noted that a contrast feature (based on explicit polarity words) was already included as part of the sentiment lexicon features (see Section 3). | the feature is not included in the final irony classifier, as the experiments revealed that combining lexical, semantic, and syntactic features without sentiment features works best. | contrasting |
train_7265 | Furthermore, while the annotation guidelines distinguish between different irony types, the present research approaches irony detection as a binary classification task and hence provides insights into the feasibility of irony detection in general. | fine-grained irony classification might be worthwhile in the future, to be able to detect specifically ironic tweets in which a polarity inversion takes place. | contrasting |
train_7266 | Networks with ReLU have a lower run time and tend to show better convergence (Talathi and Vartak 2015). | reLU has the disadvantage of dying cells (the dying ReLU problem), but this can be overcome by using a variant called Leaky ReLU. | contrasting |
train_7267 | The incorrect decisions are underlined and marked with red color. | unlike synchronous conversations (e.g., meeting, phone), modeling conversational dependencies between sentences in an asynchronous conversation is challenging, especially when the thread structure (e.g., "reply-to" links between comments) is missing, which is also our case. | contrasting |
train_7268 | 2006;Ravi and Kim 2007;Bhatia, Biyani, and Mitra 2014) tackle the task at the comment level, and use task-specific tagsets. | in this work we are interested in identifying speech acts at the sentence level, and also using a standard tagset like the ones defined in SWBD or MRDA. | contrasting |
train_7269 | In this case, because the out-of-domain labeled data set (MRDA) is much larger, it overwhelms the model, inducing features that are not relevant for the task in the target domain. | when we provide the model with some labeled in-domain examples (...). [Figure caption: Confusion matrices for (a) MLP conv-glove and (b) B-LSTM conv-glove on the test sets of QC3, TA, and BC3.] | contrasting |
train_7270 | One interesting future work would be to learn the underlying conversational structure automatically. | we believe that in order to learn an effective model, this would require more labeled data. | contrasting |
train_7271 | 11 These values could also be due to the collection of fewer answers per compound for some of the data sets. | there is no clear tendency in the variation of the standard deviation of the answers and the number of participants n. The values of σ are quite homogeneous, ranging from 1.05 for EN-comp 90 (head) to 1.27 for EN-comp Ext (head). | contrasting |
train_7272 | Table 4 reports the best configurations for the EN-comp, FR-comp, and PT-comp data sets. | to determine whether the Spearman scores obtained are robust and generalizable, in this section we report evaluation using cross-validation. | contrasting |
train_7273 | In EN-comp, all differences between the two highest Spearman values for each DSM were significant, according to Wilcoxon's signrank test, except for PPMI-thresh, whereas in FR-comp and PT-comp they were significant only for PPMI-TopK and lexvec. | note that the top two results are often both obtained on representations based on lemmas. | contrasting |
train_7274 | As in Figure 9, Figure 10 shows a random distribution of improvements. | the outliers have the opposite pattern, indicating that large reclassifications due to pc geom tend to favor idiomatic instead of compositional compounds. | contrasting |
train_7275 | We also calculated the correlation between PMI and the human-prediction difference (diff), to determine if DSMs build less precise vectors for less conventionalized compounds (approximated as those with lower PMI). | no statistically significant correlation was found for most DSMs (Table 10, column ρ[diff, PMI]). | contrasting |
train_7276 | For example, given the declarative sentence NVIDIA was founded by Jen-Hsun Huang and Chris Malachowsky, the distant supervision approach creates the utterance NVIDIA was founded by Jen-Hsun Huang and blank and the corresponding denotation Chris Malachowsky. | the actual question users may ask is Who founded NVIDIA together with Jen-Hsun Huang. | contrasting |
train_7277 | Different from the surface syntax, GR graphs are not constrained to trees. | the tree structure is a fundamental consideration of the design for the majority of existing parsing algorithms. | contrasting |
train_7278 | On the one hand, each subgraph is simple enough to allow efficient construction. | the combination of all subgraphs enables the whole target GR structure to be produced. | contrasting |
train_7279 | However, in the particular neural parsing architecture employed here, beam search performs significantly worse than the dynamic oracle strategy. | do note that beam search and structured learning may be very helpful for neural parsing models of other architectures (Weiss et al. | contrasting |
train_7280 | Second, it reflects a concrete technique by which regular sound (...) [Figure: Greek zugon, thuga(ter-), -nu-(os), eruth(rós). Caption: Regular sound correspondences across four Indo-European languages, illustrated with help of alignments along the lines of Anttila (1972, page 246).] | to the original illustration, lost sounds are displayed with help of the dash "-" as a gap symbol, while missing words (where no reflex in Gothic or Latin could be found) are represented by the "∅" symbol. | contrasting |
train_7281 | If we had reflexes for all languages under investigation in all cognate sets, the compatibility would not be a problem, since we could simply group all identical sites with each other, and the task could be considered as solved. | since it is rather the exception than the norm to have all reflexes for all cognate sets in all languages, we will always find possible alternative groupings for the alignment sites. | contrasting |
train_7282 | What may come as a surprise is that the reduction of the data by 25% and 50% does not seem to influence the accuracy of prediction in all data sets. | in the Chinese and the Polynesian data sets, we find even slightly higher accuracy scores for the larger data reduction. | contrasting |
train_7283 | This is because semantics of longer contexts is more difficult to capture than that of shorter contexts. | the performance of all models improved when the length of contexts increases from (5, 10] to (10, ). | contrasting |
train_7284 | The current method requires a huge amount of training data (i.e., context-response pairs) to learn a matching model. | it is too expensive to obtain large-scale (e.g., millions of) human labeled pairs in practice. | contrasting |
train_7285 | Fourth, a P&P parameter typically controls a very small set of features. | if our linguistic parameter is turned on, it more or less modifies all the feature generation probabilities. | contrasting |
train_7286 | In contrast, if our linguistic parameter is turned on, it more or less modifies all the feature generation probabilities. | we can expect a small number of linguistic parameters to dominate the probabilities because weights are drawn from a heavy-tailed distribution, as we describe in Section 4.1. | contrasting |
train_7287 | We decided not to follow the TimeML classification because aspectual types and syntax do not have a primary importance for historians in the interpretation of texts. | the types and subtypes defined in ACE, ERE, and Event Nugget are too limited and do not allow us to identify events comprehensively. | contrasting |
train_7288 | Moreover, adjectives are underrepresented in WordNet supersenses with only three possible classes: adj.ppl, adj.all, and adj.pert. | adjectives have an important role in our annotation in the context of copular constructions. | contrasting |
train_7289 | The historical nature of the texts and the diversity of topics covered by the news make them particularly interesting for annotation. | travel narratives are not much explored in computational linguistics. | contrasting |
train_7290 | The best F1 (29.27%) is given by a context of [±1]. | as already noted with the experiments on features, precision and recall are not balanced, showing a difference of 11.73 points. | contrasting |
train_7291 | In the experiments reported in Reimers and Gurevych (2017a), discarding character embeddings is never the best option. | in our case the highest precision, recall, and F1 are achieved without using them. | contrasting |
train_7292 | As for GloVe, there is not much difference between the two dimensions (300 and 100). | the overall results are almost 3 points lower for event detection and 4 points lower for classification compared to the model employing the Komninos and Manandhar embeddings. | contrasting |
train_7293 | Compared to the results of inter-annotator agreement, we notice that three of the classes with higher F1 had also a perfect agreement between human annotators: This means that COMMUNICATION, PHYSICAL SENSATIONS, and LIFE-HEALTH are the less ambiguous classes to be identified. | other classes with perfect IAA have very low scores or even an F1 equal to zero: this is the case for FOOD-FARMING, EDUCATION, and ECONOMY. | contrasting |
train_7294 | As a result in the English-to-Japanese translation task, the proposed model obtained better RIBES and BLEU scores than the sequence-to-sequence NMT model both with and without reversed input with the same number of parameters. | the same significant improvement is observed only over the sequence-to-sequence NMT model with reversed inputs in the Chinese-to-Japanese translation task. | contrasting |
train_7295 | Inflecting number names incorrectly in, say, Russian does lead to a lack of naturalness, but the result would be usable in most cases. | producing number names that are not value-preserving renders the result worse than unusable, since the user would be grossly misinformed. | contrasting |
train_7296 | To a first approximation, one can handle all of these cases using hand-written grammars-as in Sproat (1997) and Ebden and Sproat (2014). | different aspects of the problem are better treated using different approaches. | contrasting |
train_7297 | Note that the amount of training dataabout 10 million tokens-is perhaps an order of magnitude less data than is commonly used for translation between high-resource languages. | our task is significantly easier than real translation. | contrasting |
train_7298 | In the previous sections, we have assumed that the input sequence is pre-segmented into tokens. | in real-world applications the normalization model needs to be fed with segmented output produced by a segmenter. | contrasting |
train_7299 | The grammars used in a largely hand-crafted text normalization system tend to be quite complex because they need to produce the contextually appropriate rendition of a particular token. | covering grammars-henceforth CGs-are lightweight grammars, which enumerate the reasonable possible verbalizations for an input, rather than trying to specify the correct version for a specific instance. | contrasting |
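Each row above follows the same pipe-delimited shape: `id | sentence1 | sentence2 | label |`. A minimal sketch of how such a row could be parsed into a record — the `parse_row` helper and the field handling are assumptions for illustration, not part of the dataset's official tooling (the sample row is copied from `train_7216` above):

```python
# Parse one pipe-delimited dataset row of the form
#   id | sentence1 | sentence2 | label |
# into a dictionary keyed by column name.

COLUMNS = ["id", "sentence1", "sentence2", "label"]

def parse_row(line: str) -> dict:
    """Split a table row on the pipe delimiter and name its four fields."""
    # Drop trailing whitespace and the trailing pipe, then split and strip.
    parts = [p.strip() for p in line.rstrip().rstrip("|").split("|")]
    if len(parts) != len(COLUMNS):
        raise ValueError(f"expected {len(COLUMNS)} fields, got {len(parts)}")
    return dict(zip(COLUMNS, parts))

sample = ("train_7216 | Because we can also translate any TAG into an "
          "equivalent VW-CCG without blowing up the size of the grammar, "
          "following the construction by Vijay-Shanker and Weir (1994), "
          "we conclude that VW-CCG is more succinct than TAG. | "
          "the price we have to pay for this gain in expressivity is the "
          "extra parsing complexity of VW-CCG. | contrasting |")

row = parse_row(sample)
print(row["id"], row["label"])  # train_7216 contrasting
```

Note that this simple split assumes the sentences themselves contain no literal `|` characters, which holds for the rows shown here; a quoting-aware CSV dialect would be needed otherwise.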