Column statistics from the dataset viewer: id is a string of 7–12 characters; sentence1 is a string of 6–1.27k characters; sentence2 is a string of 6–926 characters; label is one of 4 classes.

id | sentence1 | sentence2 | label
---|---|---|---|
train_2900 | Spearman's correlation (Hogg and Craig, 1994) and Kendall's tau (Kendall, 1938) have been widely used in many regression problems in NLP (Albrecht and Hwa, 2007;Yogatama et al., 2011;, and here we use them to measure the quality of predicted values ŷ by comparing to the vector of ground truth y. | to Pearson's correlation, Spearman's correlation has no assumptions on the relationship of the two measured variables. | contrasting |
train_2901 | The benefit of a semiparametric model is that here we are not interested in performing completely nonparametric estimations, where the infinite dimensional parameters might bring intractability. | by considering the semiparametric case, we not only obtain some expressiveness from the nonparametric models, but also reduce the complexity of the task: we are only interested in the finite-dimensional components Σ in the Gaussian copula with O(n log n) complexity, which is not as computationally difficult as the completely nonparametric cases. | contrasting |
train_2902 | Topic models, an unsupervised technique for inferring translation domains improve machine translation quality. | previous work uses only the source language and completely ignores the target language, which can disambiguate domains. | contrasting |
train_2903 | As we review in Section 2, topic models are a promising solution for automatically discovering domains in machine translation corpora. | past work either relies solely on monolingual source-side models (Eidelman et al., 2012;Hasler et al., 2012;Su et al., 2012), or limited modeling of the target side (Xiao et al., 2012). | contrasting |
train_2904 | (2011) combine these approaches by directly optimizing genre and collection features by computing separate translation tables for each domain. | these approaches treat domains as hand-labeled, constant, and known a priori. | contrasting |
train_2905 | These models take advantage of word or document alignment information and infer more robust topics from the aligned dataset. | lexical information can induce topics from multilingual corpora. | contrasting |
train_2906 | Document-level Alignments: Lexical resources connect languages and help guide the topics. | these resources are sometimes brittle and may not cover the whole vocabulary. | contrasting |
train_2907 | Generative Process As in LDA, each word token is associated with a topic. | tree-based topic models introduce an additional step of selecting a concept in a topic responsible for generating each word token. | contrasting |
train_2908 | With 1.6M NIST training sentences, gibbs takes nearly a week to run 1000 iterations. | the parallelized variational and variational-hybrid approaches, which we implement in MapReduce (Dean and Ghemawat, 2004;Wolfe et al., 2008;Zhai et al., 2012), take less than a day to converge. | contrasting |
train_2909 | Each tool is typically trained on hand-annotated data, thus placing SRL at the end of a very high-resource NLP pipeline. | richly annotated data such as that provided in parsing treebanks is expensive to produce, and may be tied to specific domains (e.g., newswire). | contrasting |
train_2910 | Some work has also attempted to automatically derive logical meaning representations directly from syntactic CCG parses (Bos, 2005;Lewis and Steedman, 2013). | these approaches to semantics do not ground the text to beliefs in a knowledge base. | contrasting |
train_2911 | The output of their model is a list of hypernyms for a given entity (left panel, Figure 1). | there usually also exist hypernym-hyponym relations among these hypernyms. | contrasting |
train_2912 | Lenci and Benotto (2012) propose another measure focusing on the contexts that hypernyms do not share with their hyponyms. | broader semantics may not always infer broader contexts. | contrasting |
train_2913 | (2013) was unable to scale to long sentences and was only tested on the relatively short sentences in the Microsoft video description corpus used for STS (Agirre et al., 2012). | inference in PSL reduces to a linear programming problem, which is theoretically and practically much more efficient. | contrasting |
train_2914 | For example, for T: "A man is driving" and H: "A man is driving a car", if we use the standard PSL formula for conjunction, the output value is zero because there is no evidence for a car and max(0, X + 0 − 1) = 0 for any truth value 0 ≤ X ≤ 1. | humans find these sentences to be quite similar. | contrasting |
train_2915 | p(x) ∧ q(x) → t() and only one piece of evidence p(C) there are no relevant groundings because there is no evidence for q(C), and therefore, for normal PSL, I(p(C) ∧ q(C)) = 0 which does not affect I(t()). | when using averaging with the same evidence, we need to generate the grounding p(C)∧q(C) because I(p(C)∧q(C)) = 0.5 which does affect I(t()). | contrasting |
train_2916 | One way to solve this problem is to eliminate lazy grounding and generate all possible groundings. | this produces an intractably large network. | contrasting |
train_2917 | For the new nodes or unconnected nodes, we draw an edge with a weight of 1. | when two already connected nodes are added (merged), the weight of their connection is increased by 1. | contrasting |
train_2918 | A word graph, as described above, may contain many sequences connecting start and end. | it is likely that most of the paths are not readable. | contrasting |
train_2919 | Considering this marginal improvement and relatively high results of pure extractive systems, we can infer that the Biased LexRank extracted summaries do not carry much query relevant content. | the significant improvement of our model over the extractive methods demonstrates the success of our approach in presenting the query related content in generated abstracts. | contrasting |
train_2920 | The chat dataset results demonstrate the highest scores: 73% of the sentences generated by our phrasal query abstraction model are grammatically correct and 24% of the generated sentences are almost correct with only one grammatical error, while only 3% of the abstract sentences are grammatically incorrect. | the results vary moving to other datasets. | contrasting |
train_2921 | There is evidently a mismatch between the rules and the test-set; the content selection rules are based on heuristics provided by a L&T Expert rather than by the same pool of lecturers that created the test-set. | the RL is trained to optimise the selected content and not to replicate the existing lecturer summaries, hence there is a difference in accuracy. | contrasting |
train_2922 | In our study, the human ratings correlate well to the average scores achieved by the reward function. | the human ratings do not correlate well to the accuracy scores. | contrasting |
train_2923 | In addition, they serve to establish connectivity for the dependency structure z since commodity can only originate in one location-at the pseudo-token ROOT which has no incoming commodity variables. | in order to enforce these properties on the output dependency structure, this acyclic, connected commodity structure must constrain the activation of the z variables. | contrasting |
train_2924 | For example, consider a typical comment on a YouTube review video about a Motorola Xoom tablet: this guy really puts a negative spin on this , and I 'm not sure why , this seems crazy fast , and I 'm not entirely sure why his pinch to zoom his laggy all the other xoom reviews The comment contains a product name xoom and some negative expressions, thus, a bag-of-words model would derive a negative polarity for this product. | the opinion towards the product is neutral as the negative sentiment is expressed towards the video. | contrasting |
train_2925 | This would strongly bias the FVEC sentiment classifier to assign a positive label to the comment. | the STRUCT model relies on the fact that the negative word, destroy, refers to the PRODUCT (xoom) since they form a verbal phrase (VP). | contrasting |
train_2926 | We conjecture that sentiment prediction for AUTO category is largely driven by one-shot phrases and statements where it is hard to improve upon the bag-of-words and sentiment lexicon features. | comments from TABLETS category tend to be more elaborated and well-argumented, thus, benefiting from the expressiveness of the structural representations. | contrasting |
train_2927 | Many existing approaches have a separate, heuristic module for extracting candidate keyphrases prior to keyphrase ranking/extraction. | tomokiyo and Hurst (2003) propose an approach (henceforth LMA) that combines these two steps. | contrasting |
train_2928 | As discussed before, the relationship between two candidates is traditionally established using co-occurrence information. | using cooccurrence windows has its shortcomings. | contrasting |
train_2929 | Most notably, Tratz (2011) achieved a result of 88.4 percent accuracy and Srikumar and Roth (2013) achieved a similar result. | litkowski (2013b) showed that these results did not extend to other corpora, concluding that the FrameNet-based corpus may not have been representative, with a reduction of accuracy to 39.4 percent using a corpus developed by Oxford. | contrasting |
train_2930 | Using PDEP, we found that FrameNet feature values for the governor accounted for 264 of these instances (95 percent), all of which were related to the frame elements Contents or Stuff. | in the TPP corpus, only 3 out of 750 instances were identified for this sense (0.4 percent). | contrasting |
train_2931 | Since dictionary publishers have not previously devoted much effort in analyzing preposition behavior, we believe PDEP may serve an important role, particularly for various NLP applications in which semantic role labeling is important. | pDEP as described in this paper is only in its initial stages. | contrasting |
train_2932 | The main work in bilingual lexicon extraction from comparable corpora is based on the implicit hypothesis that corpora are balanced. | the historical context-based projection method dedicated to this task is relatively insensitive to the sizes of each part of the comparable corpus. | contrasting |
train_2933 | While prior efforts in NLP have incorporated games for performing annotation and validation (Siorpaes and Hepp, 2008b;Herdagdelen and Baroni, 2012;Poesio et al., 2013), these games have largely been text-based, adding game-like features such as high-scores on top of an existing annotation task. | we introduce two video games with graphical 2D gameplay that is similar to what game players are familiar with. | contrasting |
train_2934 | First, we demonstrate effective video gamebased methods for both validating and extending semantic networks, using two games that operate on complementary sources of information: semantic relations and sense-image mappings. | to previous work, the annotation quality is determined in a fully automatic way. | contrasting |
train_2935 | While the computer can potentially act as a second player, such a simulated player is often limited to using preexisting knowledge or responses, which makes it difficult to validate new types of entities or create novel answers. | we drop this requirement thanks to a new strategy for assigning confidence scores to the annotations based on negative associations. | contrasting |
train_2936 | A further analysis revealed differences in the annotators' thresholds for determining association, with one annotator permitting more abstract relations. | the adjudication process resolved these disputes, resulting in substantial agreement by all annotators on the final gold annotations. | contrasting |
train_2937 | One of the ten questions in a task used an item from N c , resulting in a task mixture of 90% annotation questions and 10% quality-check questions. | we note that both of our video games use data that is 50% annotation, 50% quality-check. | contrasting |
train_2938 | The paid and free versions of TKT had similar numbers of players, while the paid version of Infection attracted nearly twice the players compared to the free version, shown in Table 1, Column 1. | both versions created approximately the same number of annotations, shown in Column 2. | contrasting |
train_2939 | Second, the type of incentive did not change the percentage of items from N that players correctly reject, shown for all players as N -accuracy in Table 1 Column 3 and per-player in Figure 3. | players were much more accurate at rejecting items from N in TKT than in Infection. | contrasting |
train_2940 | The images used by TKT provide concrete examples of a concept, which can be easily compared with the game's current concept; in addition, TKT allows players to inspect items as long as a player prefers. | concept-concept associations require more background knowledge to determine if a relation exists; furthermore, Infection gives players limited time to decide (due to board length) and also contains cognitive distractors (zombies). | contrasting |
train_2941 | 3 For images, crowdsourcing workers have a higher IAA than game players; however, this increased agreement is due to adversarial workers consistently selecting the same, incorrect answer. | both video games contain mechanisms for limiting such behavior. | contrasting |
train_2942 | Our current work is inspired by the shallow analysis-based approach of Yoon and Bhat (2012) and operates under the same assumptions of capturing the range and sophistication of grammatical constructions at each score level. | the approaches differ in the way in which a spoken response is assigned to a score group. | contrasting |
train_2943 | Indeed there is work in the literature that shows that various topic models, latent or otherwise, can be useful for improving language model perplexity and word error rate (Khudanpur and Wu, 1999;Chen, 2009;Naptali et al., 2012). | given the preponderance of highly frequent non-content words in the computation of a corpus' WER, it's not clear that a 1-2% improvement in WER would translate into an improvement in term detection. | contrasting |
train_2944 | Work by Wei and Croft (2006) and Chen (2009) take a language model-based approach to information retrieval, and again, interpolate latent topic models with N-grams to improve retrieval performance. | in many text retrieval tasks, queries are often tens or hundreds of words in length rather than short spoken phrases. | contrasting |
train_2945 | Yet, visually, the relationship in Figure 3a is clearly not linear. | the AP English data exhibits a correlation of ρ = 0.93 (Church and Gale, 1999). | contrasting |
train_2946 | We would discount the adaptation factor when DF w is low and we are unsure of it. This approach shows a significant improvement (0.7% absolute) over the baseline. | considering this estimate in light of the two classes of words in Figure 5, there are clearly words in Class B with high burstiness that will be ignored by trying to compensate for the high adaptation variability in the low-frequency range. | contrasting |
train_2947 | Several supervised dependency parsing algorithms (Nivre and Scholz, 2004;McDonald et al., 2005a;McDonald et al., 2005b;McDonald and Pereira, 2006;Carreras, 2007;Koo and Collins, 2010;Ma and Zhao, 2012; have been proposed and achieved high parsing accuracies on several treebanks, due in large part to the availability of dependency treebanks in a number of languages (McDonald et al., 2013). | the manually annotated treebanks that these parsers rely on are highly expensive to create, in particular when we want to build treebanks for resource-poor languages. | contrasting |
train_2948 | the log-likelihood) is given by: Maximum likelihood training chooses parameters such that the log-likelihood L(λ) is maximized. | in our scenario we have no labeled training data for target languages but we have some parallel and unlabeled data plus an English dependency parser. | contrasting |
train_2949 | The prior works showed that these models help to find some segmentations tailored for SMT, since the bilingual word occurrence feature can be captured by the character-based alignment (Och and Ney, 2003). | these models tend to miss out other linguistic segmentation patterns as monolingual supervised models, and suffer from the negative effects of erroneously alignments to word segmentation. | contrasting |
train_2950 | This does not include the cost of n-gram creation or cached lookups, which amount to ∼0.03 seconds per source word in our current implementation. | the n-grams created for the NNJM can be shared with the Kneser-Ney LM, which reduces the cost of that feature. | contrasting |
train_2951 | Le's model does obtain an impressive +1.7 BLEU gain on top of a baseline without an NNLM (25.8 vs. 27.5). | when compared to the strongest baseline which includes an NNLM, Le's best models (S2T + T2S) only obtain an +0.6 BLEU improvement (26.9 vs. 27.5). | contrasting |
train_2952 | While this method learns to map word combinations into vectors, it builds on existing word-level vector representations. | we represent words as vectors in a manner that is directly optimized for parsing. | contrasting |
train_2953 | The objective as stated is not jointly convex with respect to U , V and W due to our explicit representation of the low-rank tensor. | if we fix any two sets of parameters, for example, if we fix V and W , then the combined score S γ (x, y) will be a linear function of both θ and U . | contrasting |
train_2954 | In reality other translations might also be acceptable (e.g., both street and road for Straße). | tS1000 accepts more than one correct translation. | contrasting |
train_2955 | 12, in which the edge type of each step is important. | this will increase the memory needed for calculation. | contrasting |
train_2956 | Irrespective of their relatively high performance on various semantic tasks, it is debatable whether models that have no access to visual and perceptual information can capture the holistic, grounded knowledge that humans have about concepts. | a possibly even more serious pitfall of vector models is lack of reference: natural language is, fundamentally, a means to communicate, and thus our words must be able to refer to objects, properties and events in the outside world (Abbott, 2010). | contrasting |
train_2957 | However, from a cognitive angle, it relies on strong, unrealistic assumptions: The learner is asked to establish a link between a new object and a word for which they possess a full-fledged text-based vector extracted from a billion-word corpus. | the first time a learner is exposed to a new object, the linguistic information available is likely also very limited. | contrasting |
train_2958 | Surprisingly, the very simple lin method outperforms both CCA and SVD. | nn, an architecture that can capture more complex, non-linear relations in features across modalities, emerges as the best performing model, confirming on a larger scale the recent findings of Socher et al. | contrasting |
train_2959 | Moreover, once the learner observes a new object, she can easily construct a full visual representation for it (and the acquisition literature has shown that humans are wired for good object segmentation and recognition (Spelke, 1994)) -the more challenging task is to scan the ongoing and very ambiguous linguistic communication for contexts that might be relevant and informative about the new object. | fast mapping is often described in the psychological literature as the opposite task: The learner is exposed to a new word in context and has to search for the right object referring to it. | contrasting |
train_2960 | Many existing paraphrase models introduce latent variables to describe the derivation of c from x, e.g., with transformations (Heilman and Smith, 2010;Stern and Dagan, 2011) or alignments (Haghighi et al., 2005;Das and Smith, 2009;Chang et al., 2010). | we opt for a simpler paraphrase model without latent variables in the interest of efficiency. | contrasting |
train_2961 | The lexicon allows for effective parsing, contributing to only 2% of the overall errors. | context is more challenging. | contrasting |
train_2962 | (2013) trained their model from word alignments produced by traditional unsupervised probabilistic models. | with this approach, errors induced by probabilistic models are learned as correct alignments; thus, generalization capabilities are limited. | contrasting |
train_2963 | Word embedding x t is integrated as new input information in recurrent neural networks for each prediction, but in recursive neural networks, no additional input information is used except the two representation vectors of the child nodes. | some global information, which cannot be generated by the child representations, is crucial for SMT performance, such as language model score and distortion model score. | contrasting |
train_2964 | The first uses a logistic regression model that primarily incorporates high level information about threads and posts. | forum threads have structure which is not leveraged our initial model. | contrasting |
train_2965 | The daily life of Chinese people heavily depends on Chinese input method engine (IME), no matter whether one is composing an E-mail, writing an article, or sending a text message. | every Chinese word inputted into computer or cellphone cannot be typed through one-to-one mapping of key-to-letter inputting directly, but has to go through an IME as there are thousands of Chinese characters for inputting while only 26 letter keys are available in the keyboard. | contrasting |
train_2966 | A first approach to solving smart selection is to select an entity, noun phrase, or concept that subsumes the user selection. | no single approach alone can cover the entire smart selection problem. | contrasting |
train_2967 | We would then find perhaps that Southern California is a more reasonable smart selection than of Southern California. | precisely defining such a relevance function and designing the guidelines for a user study is non-trivial and left for future work. | contrasting |
train_2968 | As we will see below, S1, S2, and S3 are error metrics, so lower scores imply better performance. | p C is a correlation metric, so higher correlation implies better performance. | contrasting |
train_2969 | Note that each of the c i values can be tuned independently because a c i value that is optimal for predicting scores for p i essays with respect to any of the error performance measures is necessarily also the optimal c i when measuring that error on essays from all prompts. | this is not case with Pearson's correlation coefficient, as the P C value for essays from all 13 prompts cannot be simplified as a weighted sum of the P C values obtained on each individual prompt. | contrasting |
train_2970 | In order to obtain an optimal result as measured by P C, we jointly tune the c i parameters to optimize the P C value achieved by our system on the same held-out validation data. | an exact solution to this optimization problem is computationally expensive, as there are too many (7 13 ) possible combinations of c values to exhaustively search. | contrasting |
train_2971 | The training and development sets were made available in full to task participants. | we were unable to download all the training and development sets because some tweets were deleted or not available due to modified authorization status. | contrasting |
train_2972 | The first relies on the judgements of human annotators (Mukherjee et al., 2012). | recent studies show that deceptive opinion spam is not easily identified by human readers (Ott et al., 2011). | contrasting |
train_2973 | One contribution of the work presented here is the creation of the cross-domain (i.e., Hotel, Restaurant and Doctor) gold-standard dataset. | to existing work (Ott et al., 2011;Li et al., 2013b), our new gold standard includes three types of reviews: domain expert deceptive opinion spam (Employee), crowdsourced deceptive opinion spam (Turker), and truthful Customer reviews (Customer). | contrasting |
train_2974 | A similar pattern can also be observed when comparing Figure 2. First-Person Singular Pronouns: The literature also associates deception with decreased usage of first-person singular pronouns, an effect attributed to psychological distancing, whereby deceivers talk less about themselves due either to a lack of personal experience, or to detach themselves from the lie (Newman et al., 2003;Zhou et al., 2004;Knapp and Comaden, 1979). | according to our findings, we find the opposite to hold. | contrasting |
train_2975 | By soliciting fake reviews from participants, including crowd workers and domain experts, we have found that is possible to detect fake reviews with above-chance accuracy, and have used our models to explore several psychological theories of deception. | it is still very difficult to estimate the practical impact of such methods, as it is very challenging to obtain gold-standard data in the real world. | contrasting |
train_2976 | The bottom-up hypothesis holds that infants converge on the linguistic units of their language through a similarity-based distributional analysis of their input (Maye et al., 2002;Vallabha et al., 2007). | the top-down hypothesis emphasizes the role of higher level linguistic structures in order to learn the lower level units (Feldman et al., 2013;Martin et al., 2013). | contrasting |
train_2977 | The lexical cue assumes that a pair of words differing in the first or last segment (like [kanaX] and [kanaK]) is more likely to be the result of a phonological process triggered by adjacent sounds, than a true semantic minimal pair. | this strategy clearly gives rise to false alarms in the (albeit relatively rare) case of true minimal pairs like [kanaX] ("duck") and [kanal] ("canal"), where ([X], [l]) will be mistakenly labeled as allophonic. | contrasting |
train_2978 | This fact suggests that a dialog system should be also capable of conducting multi-topic conversations with users to provide them a more natural interaction with the system. | the majority of previous work on dialog interfaces has focused on dealing with only a single target task. | contrasting |
train_2979 | We have not tested the correctness of this variation in the scoring package used for the i2b2 shared task. | it turns out that the CEAF metric (Luo, 2005) was always intended to work seamlessly on predicted mentions, and so has been the case with the B 3 metric. | contrasting |
train_2980 | We found that the overall system ranking remained largely unchanged for both shared tasks, except for some of the lower ranking systems that changed one or two places. | there was a considerable drop in the magnitude of all B 3 scores owing to the combination of two things: (i) mention manipulation, as proposed by Cai and Strube (2010), adds singletons to account for twinless mentions; and (ii) the B 3 metric allows an entity to be used more than once as pointed out by Luo (2005). | contrasting |
train_2981 | It may be thought that inter-annotator agreement (IAA) provides implicit annotation: the higher the agreement, the easier the piece of text is for sentiment annotation. | in case of multiple expert annotators, this agreement is expected to be high for most sentences, due to the expertise. | contrasting |
train_2982 | For example, all five annotators agree with the label for 60% sentences in our data set. | the duration for these sentences has a mean of 0.38 seconds and a standard deviation of 0.27 seconds. | contrasting |
train_2983 | The sentence "it is messy , uncouth , incomprehensible , vicious and absurd" has a SAC of 3.3. | the SAC for the sarcastic sentence "it's like an all-star salute to disney's cheesy commercialism." | contrasting |
train_2984 | Of course the risk in approaching the problem as domain adaptation is that the domains are so different that the representation of a positive instance of a movie or product review, for example, will not coincide with that of a positive scientific citation. | because there is a limited amount of annotated citation data available, by leveraging large amounts of annotated polarity data we could potentially even improve citation classification. | contrasting |
train_2985 | We would like to do as little feature engineering as possible to ensure that the features we use are meaningful across domains. | we do still want features that somehow capture the inherent positivity or negativity of our labeled instances, i.e., citations or Amazon product reviews. | contrasting |
train_2986 | Semi-supervised text classification algorithms proposed in (Nigam et al., 2000), (Joachims, 1999), (Zhu and Ghahramani, 2002) and (Blum and Mitchell, 1998) are a few examples of this type. | these algorithms are sensitive to initial labeled documents and hyper-parameters of the algorithm. | contrasting |
train_2987 | first topic in Table 3), then it may affect the performance of the resulting classifier. | in our approach the annotator can assign multiple class labels to a topic, hence our approach is more flexible for the annotator to encode her domain knowledge efficiently. | contrasting |
train_2988 | (2012) and Khashabi (2013) use pre-trained word embeddings as input for Matrix-Vector Recursive Neural Networks (MV-RNN) to learn compositional structures for RE. | none of these works evaluate word embeddings for domain adaptation of RE which is our main focus in this paper. | contrasting |
train_2989 | In terms of word embeddings for DA, recently, Xiao and Guo (2013) present a log-bilinear language adaptation framework for sequential labeling tasks. | these methods assume some labeled data in target domains and are thus not applicable in our setting of unsupervised DA. | contrasting |
train_2990 | The result suggests that word embeddings seem to capture different information from word clusters and their combination would be effective to generalize relation extractors across domains. | in domain cts, the improvement that word embeddings provide for word clusters is modest. | contrasting |
train_2991 | It may seem that the value of DPK is strictly in its ability to evaluate all paths, which is not explicitly accounted for by other kernels. | another view of the DPK is possible by thinking of it as cheaply calculating rule production similarity by taking advantage of relatively strict English word ordering. | contrasting |
train_2992 | The 'TargetedOnly' scores are somewhat closer to correct, with 12% and 28% of entities with incorrect final polarities. | the 'TargetedOnly' method naturally suffers from a very low recall, with only 19% and 38% of entities covered in the financial and medical domains, respectively. | contrasting |
train_2993 | Given a list of known cognates, our approach does not require any other linguistic information. | it can be customized to integrate historical information regarding language evolution. | contrasting |
train_2994 | Turkish sentences have an unmarked SOV order. | depending on the discourse, constituents can be scrambled to emphasize, topicalize and focus certain elements. | contrasting |
train_2995 | Indeed, it is very easy to construct pairs of translated sentences which involve operations outside our restricted set when transformed into each other. | we use the following method to alleviate the restrictions of the small set of operations. | contrasting |
train_2996 | The present phrase-based setting is simpler because sentences are constructed from left to right, so prefix information is unnecessary. | phrase-based translation implements reordering by allowing hypotheses that translate discontiguous words in the source sentence. | contrasting |
train_2997 | A main contribution in this paper is efficiently ignoring coverage when evaluating the language model. | syntactic machine translation hypotheses correspond to contiguous spans in the source sentence, so in prior work we simply ran the search algorithm in every span. | contrasting |
train_2998 | Continuing the example from Figure 2, we could split ( , ) into three boundary pairs: (country, ), (nations, ), and (countries, ). | it is somewhat inefficient to separately consider the low-scoring child (countries, ). | contrasting |
train_2999 | On one hand, there have been multiple reports (mainly from groups with a long history of building T2S systems) stating that systems using source-side syntax greatly outperform phrase-based systems (Mi et al., 2008;Zhang et al., 2011;Tamura et al., 2013). | there have also been multiple reports noting the exact opposite result that source-side syntax systems perform worse than Hiero, S2T, PBMT, or PBMT with pre-ordering (Ambati and Lavie, 2008;Xie et al., 2011;Kaljahi et al., 2012). | contrasting |
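The rows above follow a simple four-column schema (id, sentence1, sentence2, label). A minimal sketch of consuming rows with this schema in plain Python; no dataset name or loading path is assumed, so two rows are inlined verbatim from the table, and the range checks mirror the column statistics reported by the viewer:

```python
# Two rows copied from the table above; the field names match the header.
rows = [
    {
        "id": "train_2902",
        "sentence1": "Topic models, an unsupervised technique for inferring "
                     "translation domains improve machine translation quality.",
        "sentence2": "previous work uses only the source language and completely "
                     "ignores the target language, which can disambiguate domains.",
        "label": "contrasting",
    },
    {
        "id": "train_2929",
        "sentence1": "Most notably, Tratz (2011) achieved a result of 88.4 percent "
                     "accuracy and Srikumar and Roth (2013) achieved a similar result.",
        "sentence2": "litkowski (2013b) showed that these results did not extend "
                     "to other corpora.",
        "label": "contrasting",
    },
]

# Sanity checks against the viewer's column statistics:
# id is 7-12 characters, sentence lengths fall in the stated ranges.
assert all(7 <= len(r["id"]) <= 12 for r in rows)
assert all(6 <= len(r["sentence1"]) <= 1270 for r in rows)
assert all(6 <= len(r["sentence2"]) <= 926 for r in rows)

# Group sentence pairs by label; every row shown here is "contrasting".
by_label = {}
for r in rows:
    by_label.setdefault(r["label"], []).append((r["sentence1"], r["sentence2"]))

print(len(by_label["contrasting"]))  # prints 2
```

The same grouping works unchanged on the full split, since the label column is one of only 4 classes.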