| id (stringlengths 7-12) | sentence1 (stringlengths 6-1.27k) | sentence2 (stringlengths 6-926) | label (stringclasses: 4 values) |
|---|---|---|---|
train_2600 | A very interesting work of Mohammad and Turney (2010) uses Mechanical Turk in order to build the lexicon of emotions evoked by words. | we present an automatic approach that infers the general connotation of words. | contrasting |
train_2601 | We presented an Egyptian to English MT system. | to previous work, we used an automatic conversion method to map Egyptian close to MSA. | contrasting |
train_2602 | Recent work (Zhao and Gildea, 2010) described an extension to the HMM with a fertility model, using MCMC techniques for parameter estimation. | they do not have an efficient means of MAP inference, which is necessary in many applications such as machine translation. | contrasting |
train_2603 | The structure of the fertility model violates the Markov assumptions used in this dynamic programming method. | we may empirically estimate the posterior distribution using Markov chain Monte Carlo methods such as Gibbs sampling (Zhao and Gildea, 2010). | contrasting |
train_2604 | 1 Once Upon a Time... For years, the standard way to do statistical machine translation parameter tuning has been to use minimum error-rate training, or MERT (Och, 2003). | as researchers started using models with thousands of parameters, new scalable optimization algorithms such as MIRA (Watanabe et al., 2007;Chiang et al., 2008) and PRO (Hopkins and May, 2011) have emerged. | contrasting |
train_2605 | Ideally, the learning algorithm should be able to recover from overshooting. | once monsters are encountered, they quickly start dominating, with no chance for PRO to recover since it accumulates n-best lists, and thus also monsters, over iterations. | contrasting |
train_2606 | We can see in Table 3 that all selection mechanisms considerably improve BLEU compared to the baseline PRO, by 2-3 BLEU points. | not every selection alternative gets rid of monsters, which can be seen by the large lengths and low BLEU+1 for the negative examples (in bold). | contrasting |
train_2607 | Large variance between the results obtained with MIRA has also been reported (Simianer et al., 2012). | none of this work has focused on monsters. | contrasting |
train_2608 | For the first day ( ), the model functions the same as a standard DPM, i.e., all the topics use the same base measure, ( ). | for later days ( ), besides the base measure, ( ), we make use of topics learned from previous days as priors. | contrasting |
train_2609 | When available, using in-project training data proves significantly more successful than using out-of-project data. | we find that when using out-of-project data, a dataset based on more words than code performs consistently better. | contrasting |
train_2610 | as information source to produce or derive interpretations based on them. | existing uncertainty cues are ineffective in social media context because of its specific characteristics. | contrasting |
train_2611 | Only 'hat' refers unambiguously. | the language and scenario facilitate scalar implicature (Horn, 1972;Harnish, 1979;Gazdar, 1979). | contrasting |
train_2612 | show that their agents' linguistic behavior is broadly Gricean. | their agents' language is too simple to reveal implicatures. | contrasting |
train_2613 | We are not the first researchers to use lexicalized features for coreference resolution. | previous work has evaluated the benefit of lexical features only for broad-coverage data sets. | contrasting |
train_2614 | It is typically applicable in the text generation field, both for concept-to-text generation and text-to-text generation (Lapata, 2003), such as multiple document summarization (MDS), question answering and so on. | ordering a set of sentences into a coherent text is still a hard and challenging problem for computers. | contrasting |
train_2615 | Nahnsen (2009) employed features which were based on discourse entities, shallow syntactic analysis, and temporal precedence relations retrieved from VerbOcean. | the model does not perform well on datasets describing the consequences of events. | contrasting |
train_2616 | Like sentences (3) and (4) in Figure 1, they have the highest cosine similarity of 0.2240 and the most overlapping words, "Israel" and "nuclear". | the similarity or overlap of the two sentences does not help to decide which sentence should be arranged before another. | contrasting |
train_2617 | This can be attributed to two factors: (i) the PCTB annotates non-local dependencies using traces, and (ii) Chinese syntax generates more traces than English syntax (Guo et al., 2007). | for parsers that do not return traces they are a benign error. | contrasting |
train_2618 | These methods usually rely heavily on the manually annotated treebanks for training the dependency models. | annotating syntac… [Figure 1: Different grammar formalisms of syntactic structures between CTB (upper) and CDT (below).] | contrasting |
train_2619 | They also justified that the popular "early update" (Collins and Roark, 2004) is valid for the systems that do not exhibit spurious ambiguity. | for the easy-first algorithm or, more generally, systems that exhibit spurious ambiguity, even "early update" could fail to ensure validity of update (see the example in Figure 1). | contrasting |
train_2620 | Work on opinion and sentiment tends to focus on explicit expressions of opinions. | many attitudes are conveyed implicitly, and benefactive/malefactive events are important for inferring implicit attitudes. | contrasting |
train_2621 | We report the results on 2011 only, because even when the same team participated in more than one year, the metrics submitted were different and the 2011 results represent the best effort of these teams. | as we saw in Table 1, in 2011 there were very few significant differences between the top summarization systems. | contrasting |
train_2622 | Because of its dependence on strings, it performs better with larger sets of model summaries. | to ROUGE, pyramid scoring is robust with as few as four or five model summaries (Nenkova and Passonneau, 2004). | contrasting |
train_2623 | The TDRM approach captures topics that are centered around a specific information need, often with a limited vocabulary, which favors word co-occurrence. | topics learned on entire collections are coarser than ours, which leads to lower coherence scores. | contrasting |
train_2624 | For that purpose, numerous procedures have been proposed (Milligan and Cooper, 1985). | none of the listed methods were effective or adaptable to our specific problem. | contrasting |
train_2625 | Ideally, each query subtopic should be represented by a unique cluster containing all the relevant Web pages inside. | this task is far from being achievable. | contrasting |
train_2626 | First results are provided in Table 1 and evidence that the best configurations for different ⟨p, K, S(W_ik, W_jl)⟩ tuples are obtained for high values of p, K ranging from 4 to 6 clusters, and PMI steadily improving over SCP. | such a fuzzy configuration is not satisfactory. | contrasting |
train_2627 | Although supervised CWS models (Xue, 2003;Zhao et al., 2006;Zhang and Clark, 2007;Sun, 2011) proposed in the past years showed some reasonably accurate results, the outstanding problem is that they rely heavily on a large amount of labeled data. | the production of segmented Chinese texts is time-consuming and expensive, since hand-labeling individual words and word boundaries is very hard (Jiao et al., 2006). | contrasting |
train_2628 | We note that when the number of texts changes from 0 to 50,000, both the f-score and OOV improve. | when the unlabeled data grows to 200,000, the performance decreases slightly, while still remaining better than not using unlabeled data. | contrasting |
train_2629 | These supervised methods show good results, however, are unable to incorporate information from new domain, where OOV problem is a big challenge for the research community. | unsupervised word segmentation Peng and Schuurmans (2001); Goldwater et al. | contrasting |
train_2630 | Their approach is an offline one, which focuses on creating dictionaries by extracting new words from large corpora separately before WS. | offline approaches have limitations unless the lexicon is constantly updated. | contrasting |
train_2631 | This is why the result of +LM-S+LM-P is not shown for Chinese. | replacing LM-S with LM-P improved the performance significantly. | contrasting |
train_2632 | We found positive changes such as * 欧 麦/尔 萨 利 赫 oumai/ersalihe to 欧 麦 尔/萨 利 赫 oumaier/salihe "Umar Saleh" and * 领导/人 曼德拉 lingdao/renmandela to 领导人/曼德拉 lingdaoren/mandela "Leader Mandela". | considering the overall F-measure increase and proper noun F-measure decrease suggests that the effect of LM projection is not limited to proper nouns but also promoted finer granularity because we observed proper noun recall increase. | contrasting |
train_2633 | For the automated processing of spoken communication in these scenarios, a speech recognition system must be able to handle code switches. | the components of speech recognition systems are usually trained on monolingual data. | contrasting |
train_2634 | Furthermore, the integration of additional features as input is rather straightforward due to their structure. | factored language models (FLMs) have been used successfully for languages with rich morphology due to their ability to process syntactical features, such as word stems or part-of-speech tags (Bilmes and Kirchhoff, 2003;El-Desoky et al., 2010). | contrasting |
train_2635 | For the FLM, it leads to no improvement to add the language identifier as feature. | clustering the words into their languages on the output layer of the RNNLM leads to lower perplexities. | contrasting |
train_2636 | For all k and k′, the k-th column vector in W_S is aligned with the k′-th column vector in W_T. | we cannot measure similarity between the topic proportions because we do not have any language resources such as a dictionary. | contrasting |
train_2637 | To minimize the objective, gradient descent can be used. | but since that is not a convex function, we only obtained a local optimum. | contrasting |
train_2638 | This is reasonable because we can assume that both languages share the same latent concept. | we cannot quantify the similarity between the topics because we do not have any external language resources such as a dictionary. | contrasting |
train_2639 | (2011) carefully explored review-related features based on content and sentiment, training a semi-supervised classifier for opinion spam detection. | the disadvantages of standard supervised learning methods are obvious. | contrasting |
train_2640 | Note that this is only a limitation of our inference procedure, not the model, and future work will look at other ways (e.g., Gibbs sampling) to perform inference. | generating Y and Z given X, such that the joke is funny, is still a formidable challenge that a lot of humans are not able to perform successfully (cf. | contrasting |
train_2641 | However, intermediate distances beyond the n-gram model limits can be very useful and should not be discarded. | distant-bigram models and distance-dependent trigger models make use of both distance and co-occurrence information up to window sizes of ten to twenty. | contrasting |
train_2642 | If this is not the case, then the text is likely to make little sense. | if this is the case, then the taboo meaning is potentially expanded to the phrase level. | contrasting |
train_2643 | Several other studies report results only on light verb constructions formed with certain light verbs (Stevenson et al., 2004;Tan et al., 2006;Tu and Roth, 2011). | we aimed to identify all kinds of LVCs, i.e. | contrasting |
train_2644 | In case of a large number of participating systems each assessor ranks only a subset of MT outputs. | a fair overall ranking cannot be always derived from such partial rankings (Callison-Burch et al., 2012). | contrasting |
train_2645 | (2010) proposed a methodology for classifying more details than was possible in the study by Sammons et al. | these studies were based only on English data sets. | contrasting |
train_2646 | The above studies rely on the high coverage of the original bilingual knowledge and a specific data source together with the translation vocabularies, co-occurrence information and language links. | the severest problem is that they cannot understand semantic information. | contrasting |
train_2647 | There is a large body of work around WSD and translation selection. | many of these approaches require lexical resources or large bilingual corpora with rich information fields and annotations, as reviewed in section 2. | contrasting |
train_2648 | However, aligned corpora can be difficult to obtain for under-resourced language pairs, and are expensive to construct. | documents in a comparable corpus comprise bilingual or multilingual text of a similar nature, and need not even be exact translations of each other. | contrasting |
train_2649 | For the precision metric, wiki-lsi scored 0.650 when all 80 input sentences are tested, while the base-freq baseline scored 0.550. goog-tr has the highest precision at 0.797. | if only the Chinese and Malay inputs - which have less presence on the Internet and are 'less resource-rich' than English - were tested (since goog-tr cannot accept Iban inputs), wiki-lsi and goog-tr actually perform equally well at 0.690 precision. | contrasting |
train_2650 | An example is given in Figure 2, where the English word seems aligns to two Hindi words hE and lagawA. | from the small amount of labeled training data (i.e., a set of hand-corrected tree pairs), we can learn what kind of source words are likely to align to multiple target words, and which target word is likely to be the head. | contrasting |
train_2651 | Sign languages (SL), the vernaculars of deaf people, are complete, rich, standalone communication systems which have evolved in parallel with oral languages (Valli and Lucas, 2000). | in contrast to the latter, research in automatic SL processing has not yet managed to build a complete, formal definition oriented to their automatic recognition (Cuxac and Dalle, 2007). | contrasting |
train_2652 | As the table shows, increasing the number of folds results in higher BLEU scores. | doing so will generally lead to higher variance among base learners. | contrasting |
train_2653 | Although the translation is correct in this situation, translating the Chinese word "lingyu" to "waters" appears very few times, since the common translations are "areas" or "fields". | simply filtering out this kind of sentence pairs may lead to some loss of native English expressions, making the translation performance unstable since both non-parallel sentence pairs and non-literal but parallel sentence pairs are filtered. | contrasting |
train_2654 | Although the grammar may have rules to translate these two phrases, they can be safely pruned for this particular sentence pair. | to chart pruning for monolingual parsing, our pruning decisions are based on the source context, its target translation and the mapping between the two. | contrasting |
train_2655 | Moreover, for the frame "攻 占" in the input sentence, the MEANT-tuned system has correctly translated the ARG0 "哈玛斯好战 份子" into "Hamas militants" and the ARG1 "加 萨 走 廊" into "Gaza". | the TER-tuned system has dropped the predicate "施行" so that the corresponding arguments "The Palestinian Authority" and "into a state of emergency" have all been incorrectly associated with the predicate "攻 占 /seized". | contrasting |
train_2656 | In the answer ranking phase, TI-QA considers the probabilities of different answer types as well: on one hand, TD-QA can achieve relatively high ranking precision, as using a unique answer type greatly reduces the size of the candidate list for ranking. | as the answer-typing model is far from perfect, if prediction errors happen, TD-QA can no longer give correct answers at all. | contrasting |
train_2657 | However, as the answer-typing model is far from perfect, if prediction errors happen, TD-QA can no longer give correct answers at all. | tI-QA can provide higher answer coverage, as it can extract answer candidates with multiple answer types. | contrasting |
train_2658 | Since question-answer pairs are usually short, the word mismatching problem is especially important. | due to the lexical gap between questions and answers as well as spam typically existing in user-generated content, filtering and ranking answers is very challenging. | contrasting |
train_2659 | For example, they convert a 1,000,000-dimensional vector of word space into a 1000 × 1000 matrix. | in our model, a document is still represented by a vector. | contrasting |
train_2660 | For instance, in the British National Corpus, time and see are more frequent than thing or may and man is more frequent than part. | it seems intuitively right to say that time, see and man are more 'precise' concepts than thing, may and part respectively. | contrasting |
train_2661 | In Section 2, we defined semantic content as a notion encompassing various referential properties, including a basic concept of extension in cases where it is applicable. | we do not know of a dataset providing human judgements over the general informativeness of lexical items. | contrasting |
train_2662 | In biased-SVM, it is necessary to run SVM many times, as we searched "c" and "j". | mCDC does not require such parameter tuning. | contrasting |
train_2663 | Antonio Roque (2012) has challenged Ramsay's claim, and certainly there has been successful work done in the computational analysis and modeling of narratives, as we will review in the next section. | we believe that most previous work (except possibly (Elsner, 2012)) has failed to directly address the root cause of Ramsay's skepticism: can computers extract the emotions encoded in a narrative? | contrasting |
train_2664 | Not surprisingly, Claudius draws the most negative sentiment from Hamlet, receiving a score of -27. | gertrude is very well liked by Hamlet (+24), which is unexpected (at least to us) since Hamlet suspects that his mother was involved in murdering King Hamlet. | contrasting |
train_2665 | In the case of deep neural embeddings, for example, training time can number in days. | learned embeddings are becoming more abundant, as much research and computing effort is being invested in learning word representations using large-scale deep architectures trained on web-scale corpora. | contrasting |
train_2666 | min (φ̂ᵢφ̂ⱼ − φᵢφⱼ)² ∀i, j (where, with some abuse of notation, φ and φ̂ are the source and target embeddings respectively). | this objective is no longer convex in the embeddings. | contrasting |
train_2667 | Adj + NP It is common practice to extract any NP modified by a sentiment adjective. | this simple extraction rule suffers from precision problems. | contrasting |
train_2668 | Quotes are used in news articles as evidence of a person's opinion, and thus are a useful target for opinion mining. | labelling each quote with a polarity score directed at a textually-anchored target can ignore the broader issue that the speaker is commenting on. | contrasting |
train_2669 | In sentiment analysis over product reviews, polarity labels are commonly used because the target, the product, is clearly identified. | for quotes on topics of debate, the target and meaning of polarity labels is less clear. | contrasting |
train_2670 | These approaches assume that each document has a single source (the document's author), whose communicative goal is to evaluate a well-defined target, such as a product or a movie. | this does not hold in news articles, where the goal of the journalist is to present the viewpoints of potentially many people. | contrasting |
train_2671 | Bag-of-words (BOW) is now the most popular way to model text in machine learning based sentiment classification. | the performance of such approach sometimes remains rather limited due to some fundamental deficiencies of the BOW model. | contrasting |
train_2672 | 2012;Shi et al., 2010;Prettenhofer and Stein 2010). | the binary classification task is very different from the regression task studied in this paper, and the proposed methods in the above previous works cannot be directly applied. | contrasting |
train_2673 | REG_STC: It combines REG_S and REG_T by averaging their prediction values. | the above regression methods do not perform very well due to the unsatisfactory machine translation quality and the various language expressions. | contrasting |
train_2674 | Thus, each example in the unlabeled set is required to be checked by training a new regression model utilizing the example. | the model training process is usually very time-consuming for many regression algorithms, which significantly limits the use of the work in (Zhou and Li, 2005). | contrasting |
train_2675 | Suppose we had sense-tagged data; p(S|w, c) could then have been computed from counts such as #(S, w, c). But since a sense-tagged corpus is not available, we cannot find #(S, w, c) from the corpus directly. | we can estimate it using the comparable corpus in another language. | contrasting |
train_2676 | Therefore, combining additional information with the LMs could reduce recognition errors. | direct integration of such information in the decoder is difficult. | contrasting |
train_2677 | An available treebank is a major resource for syntactic parsing. | it is often a key bottleneck to acquire credible treebanks. | contrasting |
train_2678 | Future systems might use the same or a similar feature set to ours, but in an architecture that does not include any generative parser. | some systems might indeed incorporate this generative model's score. | contrasting |
train_2679 | But, Φ_phrase+deps+gen is significantly better than Φ_phrase+gen only on F1, but not on UAS or LAS. | on the out-of-domain BROWN tests, we find that adding Φ_deps always adds considerably, and in a statistically significant way, to both LAS and UAS. | contrasting |
train_2680 | This result is perhaps counter-intuitive, in the sense that one might have supposed that higher-order dependency features, being highly specific by nature, might only have served to over-fit the training material. | this result shows otherwise. | contrasting |
train_2681 | While closed-form solutions have been developed for some specialized components (Martins et al., 2011), this problem is in general more difficult than the one arising in the subgradient algorithm. | the following result, proved in Martins et al. | contrasting |
train_2682 | We present an unsupervised approach to part-of-speech tagging based on projections of tags in a word-aligned bilingual parallel corpus. | to the existing state-of-the-art approach of Das and Petrov, we have developed a substantially simpler method by automatically identifying "good" training sentences from the parallel corpus and applying self-training. | contrasting |
train_2683 | (2012) build supervised POS taggers for 22 languages using the TNT tagger (Brants, 2000), with an average accuracy of 95.2%. | many widelyspoken languages -including Bengali, Javanese, and Lahnda -have little data manually labelled for POS, limiting supervised approaches to POS tagging for these languages. | contrasting |
train_2684 | Algorithm 2 describes this process of self training and revision, and assumes that the parallel source-target corpus has been word aligned, with many-to-one alignments removed, and that the sentences are sorted by alignment score. | to Algorithm 1, all sentences are used, not just the 60k sentences with the highest alignment scores. | contrasting |
train_2685 | Finally, the keywords S ⊆ t, with |S| ≤ k, are chosen by maximizing the cumulative reward function R(S) over all the topics. Since R(S) is submodular, the greedy algorithm for maximizing R(S) is shown as Algorithm 1 on the next page, with r_{w},z being similar to r_S,z with S = {w}. | if λ = 1, the reward function is linear and only measures the topical similarity of words with the main topics of t. When 0 < λ < 1, as soon as a word is selected from a topic, other words from the same topic start having diminishing gains. | contrasting |
train_2686 | They find that these features only contribute less than 2% to precision. | in our approach linguistic features are quite useful. | contrasting |
train_2687 | Relatively low perplexity has made modified Kneser-Ney smoothing (Kneser and Ney, 1995;Chen and Goodman, 1998) a popular choice for language modeling. | existing estimation methods require either large amounts of RAM (Stolcke, 2002) or machines (Brants et al., 2007). | contrasting |
train_2688 | We have found that, even with mostlysequential access, memory mapping is slower because the kernel does not explicitly know where to read ahead or write behind. | we use dedicated threads for reading and writing. | contrasting |
train_2689 | We note that interpolation (Equation 2) used the different backoff b(w_1^{n−1}) and so b(w_1^n) is not immediately available. | the backoff values were saved in suffix order (§3.3) and interpolation produces probabilities in suffix order. | contrasting |
train_2690 | To avoid the need for hard decisions about domain membership, some have used topic modeling to improve SMT performance, e.g., using latent semantic analysis (Tam et al., 2007) or 'biTAM' (Zhao and Xing, 2006). | to our source language approach, these authors use both source and target information. | contrasting |
train_2691 | For instance, in the task of tagging mail addresses, a feature of "5 consecutive digits" is highly indicative of a POSTCODE. | in the alignment model, it does not make sense to design features based on a hard-coded state, say, a feature of "source word lemma matching target word lemma" fires for state index 6. | contrasting |
train_2692 | While the participants in that study were younger (7 to 9+ years old), the study is relevant because the challenges those young participants face are faced again when readers of any age encounter new and complicated texts that present words they do not know, and ideas they have never considered. | there is ample work on the basic algorithm to place a sequence of words in a typesetting area with a certain width, commonly known as the optimal line breaking problem (e.g., Knuth and Plass (1981)). | contrasting |
train_2693 | It should be noted that when a text is typeset into an area whose width is a certain number of characters, an erroneous break need not necessarily lead to an actual break in the final output; that is, an error may not be too bad. | a missed break, while not hurting the readability of the text, may actually lead to a long segment that may eventually worsen raggedness in the final typesetting. | contrasting |
train_2694 | Each example has more than one annotation and we need to determine a single sense tag for each example. | if we assign senses by majority voting, we need a backoff strategy in case of ties. | contrasting |
train_2695 | Columns V and B show respectively whether the underspecified senses are a result of majority voting or backoff. | to volunteers, turkers disprefer the underspecified option and most of the English underspecified senses are assigned by backoff. | contrasting |
train_2696 | Syntax-based vector spaces are used widely in lexical semantics and are more versatile than word-based spaces (Baroni and Lenci, 2010). | they are also sparse, with resulting reliability and coverage problems. | contrasting |
train_2697 | The sentence Karen threw her arms round my neck, spilling champagne everywhere contains the LU throw.v evoking the frame BODY MOVEMENT. | throw.v is ambiguous and may also evoke CAUSE MOTION. | contrasting |
train_2698 | The evaluation of whole-sentence semantic structures plays an important role in semantic parsing and large-scale semantic structure annotation. | there is no widely-used metric to evaluate wholesentence semantic structures. | contrasting |
train_2699 | Evaluating such structures is necessary for semantic parsing tasks, as well as semantic annotation tasks which create linguistic resources for semantic parsing. | there is no widely-used evaluation method for whole-sentence semantic structures. | contrasting |
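
The rows above follow a simple four-column schema: `id`, `sentence1`, `sentence2`, and `label` (4 classes; every row in this preview is `contrasting`). Below is a minimal sketch of how one might load and inspect a local export of this table. The filename `contrasting_pairs.tsv`, the tab-separated layout, and the absence of a header row are assumptions for illustration only; adjust them to match your own copy of the data.

```python
import csv
from collections import Counter

# Hypothetical local export of the table above (an assumption, not a published file):
# four tab-separated columns per row: id, sentence1, sentence2, label.
PATH = "contrasting_pairs.tsv"

def load_rows(path):
    """Yield one dict per example with keys: id, sentence1, sentence2, label."""
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.DictReader(
            f, fieldnames=["id", "sentence1", "sentence2", "label"], delimiter="\t"
        )
        for row in reader:
            yield row

rows = list(load_rows(PATH))

# The label column has 4 classes overall; this preview shows only 'contrasting'.
print(Counter(row["label"] for row in rows))

# sentence2 is the clause that originally followed a contrastive connective,
# which is why it starts lowercase throughout the preview.
first = rows[0]
print(first["id"], "|", first["sentence1"], "=>", first["sentence2"])
```

If the data is instead fetched through the Hugging Face `datasets` library, `load_dataset` with the dataset's repository id (not shown on this page) would expose the same four columns.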