id (stringlengths 7-12) | sentence1 (stringlengths 6-1.27k) | sentence2 (stringlengths 6-926) | label (stringclasses, 4 values) |
---|---|---|---|
train_16300 | Formally, The final task-specific embedding for subtask C is formed as One simple way to exploit the interdependencies between the subtask-specific embeddings ( is to precompute the predictions for some subtasks (A and B), and then to use the predictions as features for the other subtask (C). | as shown later in Section 6, such a pipeline approach propagates errors from one subtask to the subsequent ones. | contrasting |
train_16301 | Differences among the question styles: The biggest advantage to the answer extraction style is its ease in generating questions, which enables us to produce large-scale datasets. | a disadvantage to this style is that it rarely demands meta/whole and math/logic skills, which can require answers not contained in the context. | contrasting |
train_16302 | Much progress has been made in reasoning-based MRC-QA on the bAbI dataset (Weston et al., 2016), which contains questions that require the combination of multiple disjoint pieces of evidence in the context. | due to its synthetic nature, bAbI evidences have smaller lexicons and simpler passage structures when compared to humangenerated text. | contrasting |
train_16303 | More recent datasets such as QAngaroo (Welbl et al., 2018) have prompted a strong focus on multi-hop reasoning in very long texts. | qAngaroo is an extractive dataset where answers are guaranteed to be spans within the context; hence, this is more focused on fact finding and linking, and does not require models to synthesize and generate new information. | contrasting |
train_16304 | A similar trend holds for GN-LF. | gN-EF with text improves over the KB-only approach in all settings. | contrasting |
train_16305 | Recently, there has been a surge of interest in reading comprehension-based (RC) question answering (QA). | current approaches suffer from an impractical assumption that every question has a valid answer in the associated passage. | contrasting |
train_16306 | Increasing Nil F1 scores also help to improve the overall F1 scores. | the overall F1 score degrades with increasing length of the associated passage. | contrasting |
train_16307 | Advanced neural machine translation (NMT) models generally implement encoder and decoder as multiple layers, which allows systems to model complex functions and capture complicated linguistic structures. | only the top layers of encoder and decoder are leveraged in the subsequent process, which misses the opportunity to exploit the useful information embedded in other layers. | contrasting |
train_16308 | In addition to two sub-layers in each decoder layer, the decoder inserts a third sublayer D l d to perform attention over the output of the encoder stack H L e : where Multi-layer network can be considered as a strong feature extractor with extended receptive fields capable of linking salient features from the entire sequence . | one potential problem about the vanilla Transformer, as shown in Figure 1a, is that both the encoder and decoder stack layers in sequence and only utilize the information in the top layer. | contrasting |
train_16309 | They conclude that CNNs are better than RNNs for sequence modeling. | their CNN models perform much worse than the state-of-art LSTM models on some sequence modeling tasks, as they themselves state in the appendix. | contrasting |
train_16310 | In ATR, the role of q t is similar to that ofh t in GRU (see Equation 4). | we completely wipe out the recurrent non-linearity Figure 2: Visualization of the difference between σ(x + y) (the input gate) and σ(x − y) (the forget gate). | contrasting |
train_16311 | This is reasonable since the Transformer can be trained in full parallelization. | deepRNNSearch is the slowest system. | contrasting |
train_16312 | (2015) (and many others) only used the top-30k frequent words as target vocabulary, and replaced others with UNK. | the final normalization operation still brought high computation complexity for forward calculations. | contrasting |
train_16313 | Translating characters instead of word fragments avoids these problems, and gives the system access to all available information about source and target sequences. | it presents significant modeling and computational challenges. | contrasting |
train_16314 | Strictly speaking, the story generation task requires systems to generate a story from scratch without any external materials. | for simplification, many existing story generation models rely on their given materials, such as short text descriptions (Harrison et al., 2017;Jain et al., 2017), visual images (Charles et al., 2001;Huang et al., 2016), and so on. | contrasting |
train_16315 | A good skeleton is expected to contain all key information and ignore other information. | the skeletons that contain too much detailed information or lack necessary information are considered as bad skeletons and should be punished. | contrasting |
train_16316 | When the input is short and the model often sees the input, the generated story tends to have high coherence. | when the length of input increases and the model is not familiar with the input, the coherence goes down. | contrasting |
train_16317 | Experimental results show that our model significantly improves the quality of generated stories, especially in coherence. | even with the best human evaluation results, the error analysis shows that there are still many challenges in narrative story generation, which we would like to explore in the future. | contrasting |
train_16318 | It has also been applied in dialogue generation (Serban et al., 2017;. | all the above methods stick to the MLE objective function and do not optimize with respect to the mutual information. | contrasting |
train_16319 | Coherence models that purely rely on the recurrent neural networks process words sequentially within a text. | in such models, long-distance dependencies between words cannot be captured effectively due to the limits of the memorization capability of recurrent networks. | contrasting |
train_16320 | However, Content Model requires manual feature engineering that costs great human efforts. | the self-attention mechanism used in ATTOrder-Net directly captures the global dependences for the whole text while requiring no linguistic knowledge anymore and enables ATTOrderNet to further improve tau score to 0.92 on the same dataset. | contrasting |
train_16321 | We present the weight distribution in four heads as an instance. | it is interesting to see that all of them showing significant higher attention weights on the true second sentence "text use changes ..." than other sentences in the text. | contrasting |
train_16322 | data (Dyen et al., 1992;Greenhill et al., 2008). | 3 if a tree is given a priori, phylogenetic models can also be used to estimate the parameters of a TRM, which controls how languages change their feature values over time. | contrasting |
train_16323 | A popular approach (Greenhill et al., 2010;Dunn et al., 2011) is to construct a time-tree with absolute (calendar) dates, using binary-coded lexical cognate data, and then to fit each trait of interest independently on the time-tree. | 4 cognate data are available only for a handful of language families such as Indo-European, Austronesian and Niger-Congo (or its mammoth Bantu branch). | contrasting |
train_16324 | We address these problems by including geographic information via retrofitting (Faruqui et al., 2015;Hovy and Fornaciari, 2018): we use administrative region boundaries to modify the city embeddings, and evaluate the resulting vectors in a clustering approach to discover larger dialect regions. | to most dialectometric approaches (Nerbonne et al., 1999;Prokić and Nerbonne, 2008), and in line with common NLP practice (Doyle, 2014;Grieve, 2016;Huang et al., 2016;Rahimi et al., 2017a), we also evaluate the clustered dialect areas quantitatively. | contrasting |
train_16325 | This heavily contributes to vulgar word volatility across different functions and higher ambiguity in context. | this type of usage can allow computational approaches that model the immediate context around a word to generalize across words to functions. | contrasting |
train_16326 | This highlights both the challenges in modeling vulgar word functions and the opportunity of using the function to improve practical applications. | table 5 shows the vulgar words which are most likely to be used with each of the six functions. | contrasting |
train_16327 | Activation functions have been characterized by a variety of properties deemed important for successful learning, such as ones relating to their derivatives, monotonicity, and whether their range is finite or not. | in recent work, Ramachandran et al. | contrasting |
train_16328 | Initially, we do not consider the more popular LSTMs here for reasons indicated below. | we include a comparison after discussing the RNN performance. | contrasting |
train_16329 | Note that as LSTM-Shuttle proceeds not only forwards but also backwards, the output shuttle size is 2K + 1: K forward, K backward, and 1 for stopping. | to LSTM-Jump (Yu et al., 2017), when going back, our shuttle step is counted before reading sequentially. | contrasting |
train_16330 | We compute θ R via backpropagation directly by minimizing J 1 (θ R ), the cross entropy loss, which is differentiable over θ R and is the target objective function of the classification task. | this does not work for θ U . | contrasting |
train_16331 | For bi-directional LSTM-Jump, since it applies LSTM-Jump twice, it predicts better than the original. | we doubt whether it is worth sacrificing so much efficiency for such a small increase in prediction accuracy (+0.2%). | contrasting |
train_16332 | A smaller N means LSTM-Shuttle shuttles less often, so the model tends to read through as much as possible, making for a lower backward ratio. | lSTM-Shuttle can shuttle many times so it is willing to go back to correct misunderstandings. | contrasting |
train_16333 | Some fields such as Invoice Number are relatively easy to detect for a model that operates on serialized text, as discriminative keywords are commonly preceding the words to be extracted. | more complex extraction tasks (e.g. | contrasting |
train_16334 | The complexity of this step is O(L L is the sequence length and B is the batch size. | the complexity of LSTM is O(L because of the hidden-to-hidden multiplications (e.g. | contrasting |
train_16335 | As expected, the performance of the stacked model declines when increasing the stack depth. | the performance of CSRAN improves by adding additional layers. | contrasting |
train_16336 | As a result, residual strategies have often been employed Srivastava et al., 2015;Huang et al., 2017). | to the best of our knowledge, this work presents a new way of residual connections, leveraging on the fact that pairwise formulation of the text matching task. | contrasting |
train_16337 | 2 In this way, the model initially learns to use the latent code but is then regularized towards the prior as training progresses. | this trick is not sufficient to avert KL collapse in all scenarios, particularly 2 Reweighting the KL term is also used in methods like β-VAE (Higgins et al., 2017) and InfoVAE (Zhao et al., 2017b). | contrasting |
train_16338 | 6 Results Experimental results 7 are shown in Table 4. | with NVRNN, the NVDM fully relies on the power of latent code to predict the word distribution, so we never observe a KL collapse, yet vMF still achieves better performance than Gaussian. | contrasting |
train_16339 | Brittleness of Learning κ Throughout this work, we have treated κ as a fixed parameter. | we can treat κ in the same way as σ in the Gaussian case and learn it on a per-instance basis. | contrasting |
train_16340 | The most widely used method is to employ an encoder-decoder architecture with recurrent neural networks (RNN) to predict the original input sentence or surrounding sentences given an input sentence Ba et al., 2016;Hill et al., 2016;Gan et al., 2017). | the RNN becomes time consuming when the sequence is long. | contrasting |
train_16341 | The max pooling takes the maximum value over the sequence, which tries to capture the most salient property while filtering out less informative local values. | the mean pooling does not make sharp choices on which part of the sequence is more important than others, and so it captures general information while not focusing too much on specific features. | contrasting |
train_16342 | Surprisingly, these two methods often achieve similar or better performance than PV-DBOW and PV-DM, which may be because of the high-quality pre-trained word embeddings. | doc2VecC achieves much better testing accuracy than these previous methods on two datasets (20NEWS, and RECIPE_L). | contrasting |
train_16343 | The former primarily generate general purpose and domain independent embeddings of word sequences (Socher et al., 2011;Kiros et al., 2015;Arora et al., 2017); many unsupervised training research efforts have focused on either training an auto-encoder to learn the latent structure of a sentence (Socher et al., 2013), a paragraph, or document (Li et al., 2015); or generalizing Word2Vec models to predict words in a paragraph (Le and Mikolov, 2014;Chen, 2017) or in neighboring sentences (Kiros et al., 2015). | some important information could be lost in the resulting document representation without considering the word order. | contrasting |
train_16344 | This "domino-toppling" technique could have in principle a quadratic complexity in the number of crosslingual clusters. | we have verified that in practice it converges very fast, and in our evaluation dataset only 1% of the crosslingual updates result in topples. | contrasting |
train_16345 | Multi-task learning in text classification leverages implicit correlations among related tasks to extract common features and yield performance gains. | a large body of previous work treats labels of each task as independent and meaningless one-hot vectors, which cause a loss of potential label information. | contrasting |
train_16346 | where ⊕ denotes vector concatenation. | we apply the above equations to implement L I with hidden size m. it is inappropriate to apply a BiLSTM for L L , as most labels contain only one or two words. | contrasting |
train_16347 | Zero Update achieves competitive performances in Case 1 (90.9 for IMDB) and Case 2 (86.7 for Kitchen), as tasks from these two cases all belong to sentiment datasets of different cardinalities or domains that contain rich semantic correlations with each other. | the result for IMDB in Case 3 is only 74.2, as sentiment shares less relevance with topic and question type, thus leading to poor transferring performances. | contrasting |
train_16348 | With the development of Deep Learning, neural methods are applied to this task and achieved improvements (Zhang and Zhou, 2006;Nam et al., 2013;Benites and Sapozhnikova, 2015). | these methods cannot model the internal correlations among labels. | contrasting |
train_16349 | We can produce low-dimensional variables using, for example, text classifiers, and then run our causal analysis. | this straightforward integration belies several potential issues. | contrasting |
train_16350 | There is a conceptually related line of work in the NLP community on inferring causal relationships expressed in text (Girju, 2003;Kaplan and Berry-Rogghe, 1991). | our work is fundamentally different. | contrasting |
train_16351 | Currently, state-of-the-art approaches to disfluency detection depend heavily on hand-crafted pattern match features, specifically designed to find such "rough copies" (Zayats et al., 2016;Jamshid Lou and Johnson, 2017). | to many other sequence tagging tasks (Plank et al., 2016;Yu et al., 2017), "vanilla" convolutional neural networks (CNNs) and long shortterm memory (LSTM) models operating only on words or characters are surprisingly poor at disfluency detection (Zayats et al., 2016). | contrasting |
train_16352 | For the ACNN, we considered a range of possible binary functions f (u, v) to compare the input vector u = x t with the input vector v = x t in the auto-correlational layer. | in initial experiments we found that the Hadamard or elementwise product (i.e. | contrasting |
train_16353 | In many NLP applications including language modeling, the input vector is a dense word embedding which is shared across all contexts for a given word in a dataset. | the context vector is highly contextualized by the current sequence. | contrasting |
train_16354 | With grouped linear and pyramidal transformations, PRUs learn rich representations at very high dimensional space while learning fewer parameters. | lSTMs overfit to the training data at such high dimensions and learn 1.4× to 1.8× more parameters than PRUs. | contrasting |
train_16355 | (2015) empirically concludes that syntax tree-based sentence modeling are effective for tasks requiring relative long-term context features. | some works propose to abandon the syntax tree but to adopt the latent tree for sentence modeling (Choi et al., 2018;Maillard et al., 2017;Williams et al., 2018). | contrasting |
train_16356 | Despiting the linear structured left-branching and rightbranching tree encoders, we find that, tree-based encoders generally perform better than Bi-LSTMs on tasks of sentence relation and sentence generation, which may require relatively more long term context features for obtaining better performances. | the improvements of tree encoders on NLI and Para are relatively small, which may be caused by that sentences of the two tasks are shorter than others, and the tree encoder does not get enough advantages to capture long-term context in short sentences. | contrasting |
train_16357 | The latent tree is really competitive on some tasks, as its structure is directly tuned by the corresponding tasks. | it only beats the binary balanced tree by very small margins on NLI and ARP. | contrasting |
train_16358 | To generate labelspecific topics, several supervised topic models which adopt likelihood-driven objective functions have been proposed. | it is hard for them to get a precise estimation on both topic discovery and supervised learning. | contrasting |
train_16359 | This is because the supervision of labels is incorporated into topic modeling for SLTM. | the mapping of topics to labels is unconstrained for most existing supervised topic models, which renders many coherent topics being generated outside labels. | contrasting |
train_16360 | The most relevant work to SLTM is the supervised Neural Topic Model (sNTM) for both classification and regression tasks (Cao et al., 2015), which constructed two hidden layers to generate the ngram topic and document-topic representations. | different from our SLTM using bagof-words methods, sNTM adopted fixed embeddings trained on external resources (Mikolov et al., 2013). | contrasting |
train_16361 | On one hand, a label-specific word embedding is introduced for predicting labels in SLTM according to Equation 10. | other supervised topic models for both categorical and real-valued label prediction tasks infer labels for unlabeled documents by topic distributions directly, in which, topic distributions of unlabeled documents are learned without the supervision of labels. | contrasting |
train_16362 | (2009) showed that the Dirichlet prior is important to producing interpretable topics. | it is hard to apply the Dirichlet prior to AEVB directly. | contrasting |
train_16363 | BTM models the word co-occurrence explicitly by directly counting the word-pairs in a text window. | using one-hot encoding for biterms may lose the transitive co-relations. | contrasting |
train_16364 | So aggregating documents in AVITM can not enhance the input feature. | our model uses GCN to capture the transitivity of biterms and can benefit from the sampling strategy a lot. | contrasting |
train_16365 | 1 Typical domain adaptation methods are designed to transfer supervision from a single source domain. | in many practical applications, we have access to multiple sources. | contrasting |
train_16366 | 2 To derive α, we first define a point-to-set Mahalanobis distance metric between an example x and a set S: where µ S is the mean encoding of S. In its original form, the matrix M S played the role of the inverse covariance matrix. | computing the inverse of the covariance matrix is both time consuming and numerically unstable in practice. | contrasting |
train_16367 | Consequently, if x is either far away from S (i.e., x is not in the manifold of S) or near the classification boundary, we will get a small e(x, S) indicating a low confidence to the corresponding prediction. | if x is much closer to a specific category of S than other categories, the classifier will get a higher confidence. | contrasting |
train_16368 | As shown in Table 6, the increase of the Twitter data does not benefit the unified multi-source model (uni-MS-A), and even amplifies negative transfer for the Answers and Reviews domains. | the performance of our MoE (MoE-A) model stays stable, consistently increasing with more Twitter, showing robustness in handling negative transfer. | contrasting |
train_16369 | The number of correctlylabeled examples is tallied up and reported. | we hypothesize that it may be worthwhile to use difficulty when evaluating DNNs. | contrasting |
train_16370 | If a model does well on hard examples and poor on easy examples, then can we say that it has really learned anything? | if a model does well on easy items, because a dataset is all easy, have we really "solved" anything? | contrasting |
train_16371 | The traditional assumption that the test data is drawn from the same distribution as the training data, makes it difficult to understand how a model will perform in settings where that assumption does not hold. | if the difficulty of test set data is known, we can better understand what kind of examples a given model performs well on, and specific instances where a model underperforms (e.g. | contrasting |
train_16372 | The key parameter in the FOFE method is the forgetting factor, which is responsible for determining the degree of sensitivity of the encoding with respect to the past context. | the choice of a good value for the forgetting factor could be tricky since both small and large forgetting factors are offering different benefits. | contrasting |
train_16373 | In the original FOFE with just a single forgetting factor, we would have to determine the best trade-off between these two benefits. | the dual-FOFE does not face such issues since it is composed of two FOFE codes: the half of the dual-FOFE code using a smaller forgetting factor is solely optimized and responsible for representing the positional information of all words in the sequence; meanwhile the other half of the dual-FOFE code using a larger forgetting factor is optimized and responsible for maintaining the long-term dependency of past context. | contrasting |
train_16374 | Recent work has shown that recurrent neural networks (RNNs) can implicitly capture and exploit hierarchical information when trained to solve common natural language processing tasks (Blevins et al., 2018) such as language modeling (Linzen et al., 2016; Gulordava et al., 2018) and neural machine translation (Shi et al., 2016). | the ability to model structured data with non-recurrent neural networks has received little attention despite their success in many NLP tasks (Gehring et al., 2017; Vaswani et al., 2017). | contrasting |
train_16375 | The works of (Yang and Cardie, 2013) and (Li et al., 2010) are similar to (Jin et al., 2009). | these works only take pre-defined features into account and can not find new features. | contrasting |
train_16376 | Yet to date, textbased empathy prediction has the following major limitations: It underestimates the psychological complexity of the phenomenon, adheres to a weak notion of ground truth where empathic states are ascribed by third parties, and lacks a shared corpus. | this contribution presents the first publicly available gold standard for empathy prediction. | contrasting |
train_16377 | The above contributions, in addition to emoji similarity datasets (Barbieri et al., 2016;Wijeratne et al., 2017) or emoji sentiment lexicons (Novak et al., 2015;Wijeratne et al., 2016;Kimura and Katsurai, 2017;Rodrigues et al., 2018), have paved the way for better understanding the semantics of emojis. | our understanding of what exactly the neural models for emoji prediction are capturing is currently very limited. | contrasting |
train_16378 | Attentive architectures in NLP, in fact, have recently received substantial interest, mostly for sequenceto-sequence models (which are useful for machine translation, summarization or language modeling), and a myriad of modifications have been proposed, including additive (Bahdanau et al., 2015), multiplicative (Luong et al., 2015) or self (Lin et al., 2017) attention mechanisms. | standard attention mechanisms only tell us which text fragments are considered impor- Figure 1: A classic attention network (top), and our attentive label-wise network (bottom), with a specific attention module for each label. | contrasting |
train_16379 | This is often done via a standard RNN decoder that operates on a linearized target tree structure. | it is an open question of what specific linguistic formalism, if any, is the best structural representation for NMT. | contrasting |
train_16380 | ther through syntactic encoders Eriguchi et al., 2016), multi-task learning objectives (Chen et al., 2017;Eriguchi et al., 2017), or direct addition of syntactic tokens to the target sequence (Nadejde et al., 2017;Aharoni and Goldberg, 2017). | these syntax-aware models only employ the standard decoding process of seq2seq models, i.e. | contrasting |
train_16381 | Ideally, this distribution will be focused around zero, indicating that the MT system is generating translations about the same length as the reference. | the distribution of TrDec-con is more spread out than TrDec-binary, which indicates that it is more difficult for TrDec-con to generate sentences with appropriate target length. | contrasting |
train_16382 | Currently, NMT models are usually trained with the word-level loss (i.e., cross-entropy) under the teacher forcing algorithm (Williams and Zipser, *Corresponding Author 1989), which forces the model to generate translation strictly matching the ground-truth at the word level. | in practice it is impossible to generate translation totally the same as ground truth. | contrasting |
train_16383 | Greedy search together with the word-level loss is very similar with the scheduled sampling(SS). | ss is inconsistent with the word-level loss since the word-level loss requires strict alignment between hypothesis and reference, which can only be accomplished by the teacher forcing algorithm. | contrasting |
train_16384 | compare crowdsourced to "expert" ratings on machine translations from WMT 2012, concluding that, with proper quality control, "machine translation systems can indeed be evaluated by the crowd alone." | it is unclear whether this finding carries over to translations produced by NMT systems where, due to increased fluency, errors are more difficult to identify (Castilho et al., 2017a), and concurrent work by Toral et al. | contrasting |
train_16385 | For the following systems we observe a very small increase in APT score for each of the two weight settings we consider, when alignment heuristics are applied: UU-HARDMEIER (+0.8), ITS2 (+0.8), BASELINE (+0.8), YANDEX (+0.8), and NYU (+0.4). | these small improvements are not sufficient to affect the system rankings. | contrasting |
train_16386 | Some of these issues could be addressed by incorporating knowledge of pronoun function in the source language, of pronoun antecedents, and of the wider context of the translation surrounding the pronoun. | whilst we might be able to derive language-specific rules for some scenarios, it would be difficult to come up with more general or language-independent rules. | contrasting |
train_16387 | From the results, we observe that integrating few-shot learning methods into CNN significantly outperforms CNN/PCNN with finetune or kNN, which means adapting fewshot learning methods for RC is promising. | there are still huge gaps between their performance and humans', which means our dataset is a challenging testbed for both relation classification and few-shot learning. | contrasting |
train_16388 | We thereby consider ALT as the baseline for multitask learning in our work. | we argue that this baseline is not effective enough to transfer the knowledge from the WSD dataset to ED in our case. | contrasting |
train_16389 | One the one hand, this representation matching schema helps the two models to communicate to each other so the knowledge from one model can be passed to the other one. | the use of two separate models leaves a flexibility for the models to induce the task-specific structures. | contrasting |
train_16390 | This difficulty makes the co-reference resolution model either prediction a wrong antecedent mention, or cannot find any co-reference. | with ASL, the model learns the semantics of pronouns with an attention to words in other sentences. | contrasting |
train_16391 | Word embedding models have become a fundamental component in a wide range of Natural Language Processing (NLP) applications. | embeddings trained on human-generated corpora have been demonstrated to inherit strong gender stereotypes that reflect social constructs. | contrasting |
train_16392 | The standard cross-entropy loss penalizes the model whenever it fails to produce the exact word from the ground truth data used for training. | in many NLP tasks that deal with generating text from semantic representation, recovering the exact word is not necessarily optimal, and often generating a near-synonym or just a semantically close word is nearly as good or even better from the point of view of model performance. | contrasting |
train_16393 | These works replace mean estimates of embeddings with Gaussian distributions, similar to our proposal here. | they arrive at this differently; Vilnis and McCallum (2017) from the energy-based learning (LeCun et al., 2006), and Brazinskas et al. | contrasting |
train_16394 | So too, the largest DBE drift ("uk") is insignificant once you take into account the covariance structure. | cosine Similarity vs. KL Divergence to cosine distance, our proposed method allows computation of the KLD between two vectors that takes into account their covariance. | contrasting |
train_16395 | Then, a morpheme dictionary of Sanskrit words is used with other heuristics to remove infeasible word split combinations. | none of the approaches address the fundamental problem of identifying the location of the split before applying the rules, which will significantly reduce the number of rules that can be applied, hence resulting in more accurate splits. | contrasting |
train_16396 | It is to be noted that B-RNN-A is the same as DD-RNN without the location decoder. | the accuracy of DD-RNN is 14.7% more than that the B-RNN-A and consistently outperforms B-RNN-A on almost all word lengths ( Figure 5). | contrasting |
train_16397 | Presence of such compound words will increase the vocabulary size exponentially and hinder the translation process. | as a pre-processing step, if all the compound words are split before training a translation model, the number of unique words in the vocabulary reduces which will ease the learning process. | contrasting |
train_16398 | For generative approaches, typical statistical models includes Hidden Markov Model (HMM) (Chen et al., 2014), Hierarchical Dirichlet Process (HDP) (Goldwater et al., 2009) and Nested Pitman-Yor Process (NPY) (Mochihashi et al., 2009). | none of them can be easily extended into a neural model. | contrasting |
train_16399 | Figure 1 illustrates how SLMs work with a candidate segmentation. | in unsupervised scheme, the given sentences are not segmented. | contrasting |
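
Below is a minimal sketch of how rows with the schema above (id, sentence1, sentence2, label) could be loaded and inspected with the `datasets` library. The repository name `your-namespace/contrastive-pairs` is a placeholder assumption, not the actual dataset identifier.

```python
# A minimal sketch, assuming the rows above are published as a Hugging Face
# dataset with columns: id, sentence1, sentence2, label (4 classes).
# The repository name "your-namespace/contrastive-pairs" is a placeholder.
from collections import Counter

from datasets import load_dataset

dataset = load_dataset("your-namespace/contrastive-pairs", split="train")

# Inspect one row: each example pairs two sentences with a discourse label
# such as "contrasting".
example = dataset[0]
print(example["id"], example["label"])
print(example["sentence1"])
print(example["sentence2"])

# Tally the label distribution across the 4 classes.
label_counts = Counter(dataset["label"])
print(label_counts)

# Keep only the "contrasting" pairs, as in the rows shown above.
contrasting = dataset.filter(lambda row: row["label"] == "contrasting")
print(len(contrasting))
```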