id (string, 7–12 chars) | sentence1 (string, 6–1.27k chars) | sentence2 (string, 6–926 chars) | label (string, 4 classes) |
---|---|---|---|
train_11900 | 5 Importantly, in the first case there is no improvement on fan-out or parsing complexity, while in the head-driven case there is a minimal improvement because of a single production with parsing complexity 15 without optimal binarization. | the optimal binarizations might still have a significant effect on the average case complexity, rather than the worst-case complexities. | contrasting |
train_11901 | Splitting discontinuous nodes for the coarse grammar introduces new nodes, so obviously we need to binarize after this transformation. | the coarse-to-fine approach requires a mapping between the grammars, so after reversing the transformation of splitting nodes, the resulting discontinuous trees must be binarized (and optionally Markovized) in the same manner as those on which the fine grammar is based. | contrasting |
train_11902 | We show that standard intrinsic metrics such as F-score alone do not predict the outcomes well. | we can build predictive performance functions that account for up to 50% of the variance in learning gain by combining features based on standard evaluation scores and on the confusion matrix entries. | contrasting |
train_11903 | We then show that standard evaluation metrics do not serve as good predictors of system performance for the system we evaluated. | adding confusion matrix features improves the predictive model (Section 4). | contrasting |
train_11904 | Its overall accuracy is 43%, the same as BEETLE II. | this is obviously not a good choice for a tutoring system, since students who make mistakes will never get tutoring feedback. | contrasting |
train_11905 | Since only a small number of features was included, this limits the applicability of the model we derived from this data set to the systems which make similar types of confusions. | it is still interesting to investigate whether confusion probabilities provide additional information compared to standard evaluation metrics. | contrasting |
train_11906 | As we are using text data, such intonational and prosodic cues are unavailable, as are the other rich sources of emotional cues we obtain from gesture, posture and facial expression in face-toface communication. | the prevalence of online text-based communication has led to the emergence of textual conventions understood by the users to perform some of the same functions as these acoustic and non-verbal cues. | contrasting |
train_11907 | The sad class also seems well distinguished when using hashtags as labels, although less so when using emoticons. | other emotion classes show a surprisingly high cross-class performance in many cases -in other words, they are producing disappointingly similar classifiers. | contrasting |
train_11908 | For the three classes happy, sad and perhaps anger, models trained using emoticon labels do a reasonable job of distinguishing classes in hashtag-labelled data, and vice versa. | for the other classes, discrimination is worse. | contrasting |
train_11909 | Results for sad and anger are reasonable, and provide a baseline for further experiments with more advanced features and classification methods once more manually annotated data is available for these classes. | hashtags give much better performance with these classes than the (perhaps vague or ambiguous) emoticons. | contrasting |
train_11910 | Second, to apply evidence collected from some annotations to a new annotation, the evidence must generalize across annotations. | collected evidence or statistics may vary widely across annotations. | contrasting |
train_11911 | Observing that 90% of all descendant instances of the concept 'Actors' match an annotation acted-in constitutes strong evidence that 'Actors' is a good concept for acted-in. | observing that only 0.09% of all descendant instances of the concept 'Football Teams' match won-super-bowl should not be as strong negative evidence as the percentage suggests. | contrasting |
train_11912 | As explained in Section 4.1, we used offline recognition results in our evaluation. | the results would be identical if we were to use the incremental speech recognition output of In-proTK directly. | contrasting |
train_11913 | This might be due to (i) English having more short nouns or verbs than German that are more likely to be confused with each other, and (ii) the English Wikipedia being known to attract a larger amount of non-native editors which might lead to higher rates of real-word spelling errors. | this issue needs to be further investigated e.g. | contrasting |
train_11914 | Given that many measures of contextual fitness allow at most one edit, many naturally occurring errors will not be detected. | allowing a larger edit distance enormously increases the search space resulting in increased run-time and possibly decreased detection precision due to more false positives. | contrasting |
train_11915 | However, allowing a larger edit distance enormously increases the search space resulting in increased run-time and possibly decreased detection precision due to more false positives. | to the quite challenging process of mining naturally occurring errors, creating artificial errors is relatively straightforward. | contrasting |
train_11916 | Thus, we also tested a trigram model based on Wikipedia. | it is much smaller than the Web model, which leads us to additionally testing smaller Web models. | contrasting |
train_11917 | Max and Wisniewski (2010) used similar techniques to create a dataset of errors from the French Wikipedia. | they target a wider class of errors including non-word spelling errors, and their class of real-word errors conflates malapropisms as well as other types of changes like reformulations. | contrasting |
train_11918 | the Cambridge Learner Corpus (Nicholls, 1999). | annotation of errors is difficult and costly (Rozovskaya and Roth, 2010), only a small fraction of observed errors will be real-word spelling errors, and learners are likely to make dif-ferent mistakes than proficient language users. | contrasting |
train_11919 | (1991) when evaluated on a corpus of artificial errors based on the WSJ corpus. | the results are not directly comparable, as Mays et al. | contrasting |
train_11920 | Translation models trained with weighted counts have been discussed before, and have been shown to outperform uniform ones in some settings. | researchers who demonstrated this fact did so with arbitrary weights (e.g. | contrasting |
train_11921 | As to the domain adaptation experiments, weights optimized through perplexity minimization are significantly better in the majority of cases, and never significantly worse, than uniform weights. | 12 the difference is smaller for the experiments with an adapted language model than for those with an out-of-domain one, which confirms that the benefit of language model adaptation and translation model adaptation are not fully cumulative. | contrasting |
train_11922 | 14 A pessimistic interpretation of the results would point out that performance gains compared to the best baseline system are modest or even inexistent in some settings. | we want to stress two important points. | contrasting |
train_11923 | Monolingual SCF integration based on a common representation format has already been addressed by King and Crouch (2005) and just recently by and . | neither King and Crouch (2005) nor or make use of existing standards in order to create a uniform SCF representation for lexicon merging. | contrasting |
train_11924 | DCs, and these DCs are not linked to any DCR: in the Syntax Extension, the standard only provides 7 class names, see Figure 1), complemented by 17 example attributes given in an informative, non-binding Annex F. These are by far not sufficient to represent the fine-grained SCFs available in such largescale lexicons as VerbNet. | the Syntax part of Subcat-LMF comprises 58 DCs that are properly linked to ISOCat DCs; a number of DCs were missing in ISOCat, so we entered them ourselves. | contrasting |
train_11925 | With respect to text documents, slightly modified passages in these documents can be identified using fingerprints (Potthast and Stein, 2008). | for data fields which contain natural language such as the assignee name field, string similarity metrics (Cohen et al., 2003) as well as spelling correction technology are exploited (Damerau, 1964;Monge and Elkan, 1997). | contrasting |
train_11926 | (2010) use a ratio of positive and negative word counts on Twitter, Kramer (2010) counts lexicon words on Facebook, and Thelwall (2011) uses the publicly available Sen-tiStrength algorithm to make weighted counts of keywords based on predefined polarity strengths. | to lexicons, many approaches instead focus on ways to train supervised classifiers. | contrasting |
train_11927 | This could explain the behavior seen in Figure 3 in which both the positive and negative sentiment scores rise over time. | further experimentation did not rectify this pattern. | contrasting |
train_11928 | The T form (German du, French tu) is employed towards friends or addressees of lower social standing, and implies solidarity or lack of formality. | english used to have a T/V distinction until the 18th century, using you as V pronoun and thou for T. in contemporary english, you has taken over both uses, and the T/V distinction is not marked anymore. | contrasting |
train_11929 | There is a large body of work on the T/V distinction in (socio-)linguistics and translation studies, covering in particular the conditions governing T/V usage in different languages and the difficulties in translation (Ardila, 2003;Künzli, 2010). | many observations from this literature are difficult to operationalize. | contrasting |
train_11930 | We originally intended to transfer T/V labels between German and English word-aligned pronouns. | we pronouns are not necessarily translated into pronouns; additionally, we found word alignment accuracy for pronouns to be far from perfect, due to the variability in function word translation. | contrasting |
train_11931 | To investigate this hypothesis, we trained models with the best parameters as before (8-sentence direct speech context, words as features). | this time we trained novel-specific models, splitting each novel into 50% training data and 50% testing data. | contrasting |
train_11932 | In future work, we will attempt to learn social networks from novels (Elson et al., 2010), which should provide constraints on all instances of communication between a speaker and an addressee. | the big -and unsolved, as far as we know -challenge is to automatically assign turns to interlocutors, given the varied and often inconsistent presentation of direct speech turns in novels. | contrasting |
train_11933 | A few projects attempt to represent story struc-ture in terms of both characters and their emotional states. | they operate at a very detailed level and so can be applied only to short texts. | contrasting |
train_11934 | Our intuition is that the simpler method described in , which merges each mention to the most recent possible coreferent, must be even more so. | due to the expense of annotation, we make no attempt to compare these methods directly. | contrasting |
train_11935 | Dreyer and Eisner (2011) propose an infinite Diriclet mixture model for capturing paradigms. | they do not address learning of hierarchy. | contrasting |
train_11936 | Once the probability distributions G = {G s , G m } are drawn from both Dirichlet processes, words can be generated by drawing a stem from G s and a suffix from G m . | we do not attempt to estimate the probability distributions G; instead, G is integrated out. | contrasting |
train_11937 | housekeep+er and house+keep+er). | if the word is analysed as s 2 m 2 (e.g. | contrasting |
train_11938 | Although the dataset provides word frequencies, we have not used any frequency information. | for training our model, we only chose words with frequency greater than 200. | contrasting |
train_11939 | In our experiments, we used dataset sizes of 10K, 16K, 22K words. | for final evaluation, we trained our models on 22K words. | contrasting |
train_11940 | As the prefix pre-appears at the beginning of words, it is identified as a stem. | identifying pre-as a stem does not yield a change in the morphological analysis of the word. | contrasting |
train_11941 | Clearly, adding tree cutting would improve the accuracy of the segmentation and will help us identify paradigms with higher accuracy. | the segmentation accuracy obtained without using tree cutting provides a very useful indicator to show whether this approach is promising. | contrasting |
train_11942 | At the largest training data sizes, modeling all 4 features together results in the best predictions of inflection. | using 4 separate models is worth this minimal decrease in performance, since it facilitates experimentation with the CRF framework for which the training of a single model is not currently tractable. | contrasting |
train_11943 | The BLEU score of the CRF on test is 14.04, which is low. | the system produces 19 compound types which are in the reference but not in the parallel data, and therefore not accessible to other systems. | contrasting |
train_11944 | miniature camera or miniature cameras does not occur in the training data, and so there is no appropriate phrase pair in any system (baseline, inflection, or inflection&compound-splitting). | our system with compound splitting has learned from split composita that English minia- There has been a large amount of work on translating from a morphologically rich language to English, we omit a literature review here due to space considerations. | contrasting |
train_11945 | (2010), Clifton and Sarkar (2011), and others are primarily concerned with using morpheme segmentation in SMT, which is a useful approach for dealing with issues of word-formation. | this does not deal directly with linguistic features marked by inflection. | contrasting |
train_11946 | The F, Lemma and LMM improve over the baseline in terms of unseen words for both MLE and Yamcha techniques. | for seen words, these systems do worse than or equal to the baseline when the MLE technique is used. | contrasting |
train_11947 | The proportion of ambiguity errors is almost identical for gender, number and rationality. | rationality overall is the biggest cause of error, simply due to its higher degree of ambiguity. | contrasting |
train_11948 | The table shows that each framework assigns a single role, such as Arg0 and Agent, to each syntactic argument. | we can acquire information from this sentence that John is an agent of the throwing event (the "Affection" row), as well as a source of the movement event of the ball (the "Movement" row). | contrasting |
train_11949 | This framework was also used for text generation (Habash et al., 2003). | the problem of multiple-role assignment was not completely solved on the resource. | contrasting |
train_11950 | The funcevents in the semantic structure of a verb. | generally, a verb focuses on one of those events and this makes a semantic variation among verbs such as buy, sell, and pay as well as difference of syntactic behavior of the arguments. | contrasting |
train_11951 | For single-role assignment, Theme, in our sense, in action verbs is always duplicated with Actor/Patient. | lCS strictly divides a function for action and change; therefore the duplicated Theme is correctly annotated. | contrasting |
train_11952 | Unfortunately, we cannot discover what each argument of the semantic predicates exactly means since the definition of each predicate is not tic properties of roles using a predicate decomposition approach, but defines specific roles for each conceptual event/state to represent a specific background of the roles in the event/state. | at the same time, FrameNet defines several types of parent-child relations between most of the frames and between their roles; therefore, we may say FrameNet implicitly describes a sort of decomposed property using roles in highly general or abstract frames and represents the inheritance of these semantic properties. | contrasting |
train_11953 | (2009), the initialization of our method depends on the correlation between DEOs and negative polarity items (NPIs). | our method trusts the initialization more and aggressively separates likely DEOs from spurious distractors and other words, unlike distillation, which we show to be equivalent to one iteration of EM prior re-estimation. | contrasting |
train_11954 | DLD09 are to be commended for having identified a crucial component of inference that nevertheless lends itself to a classification-based ap-proach, as we will show. | as noted by DL10, the performance of the distillation method is mixed across languages and in the semi-supervised bootstrapping setting, and there is no mathematical grounding of the heuristic to explain why it works and whether the approach can be refined or extended. | contrasting |
train_11955 | This method approximately captures Ladusaw's hypothesis by highly ranking words that appear in NPI contexts more often than would be expected by chance. | the problem with this approach is that DEOs are not the only words that co-occur with NPIs. | contrasting |
train_11956 | While originally developed in the bilingual context of Statistical Machine Translation, nothing prevents building such models on monolingual corpora. | in order to build reliable models, it is necessary to use enough training material including minimal redundancy of words. | contrasting |
train_11957 | Figure 2 shows how performance varies on French with number of training examples for various feature configurations. | some paraphrase types will require integration of more complex knowledge, as is the case, for instance, for paraphrase pairs involving some anaphora and its antecedent (e.g. | contrasting |
train_11958 | We apply this method on the opposite translation direction, thus having English as a source language and German as a target language. | we cannot simply invert the reordering rules which are applied on German as a source language in order to reorder the English input. | contrasting |
train_11959 | This ensures the correct placement of German verbs. | this does not ensure that the German verb forms are correct because of highly ambiguous English verbs. | contrasting |
train_11960 | Furthermore, the reordering rules are applied on a clause not allowing for movements across the clause boundaries. | we also showed that in some cases, the main verbs may be moved after the succeeding subclause. | contrasting |
train_11961 | If we take gold-standard edges as positive examples, and non-gold-standard edges as negative examples, the goal of the training problem can be viewed as finding a large separating margin between the scores of positive and negative examples. | it is infeasible to generate the full space of negative examples, which is factorial in the size of input. | contrasting |
train_11962 | From the perspective of correctness, it is unnecessary to find a margin between the sub-edges of e + and those of e − , since both are gold-standard edges. | since the score of an edge not only represents its correctness, but also affects its priority on the agenda, promoting the sub-edge of e + can lead to "easier" edges being constructed before "harder" ones (i.e. | contrasting |
train_11963 | (2003), focus on non-interactive one-shot instruction discourses. | commercially successful car navigation systems continuously monitor whether the driver is following the instructions and provide modified instructions in real time when necessary. | contrasting |
train_11964 | We first assume that episode boundaries occur when the street name changes from one segment to the next. | staying on the road may involve a driving maneuver (and therefore a decision point) as well, e.g. | contrasting |
train_11965 | We found that male users tended to look more at the navigation screen in the VCP condition than in B, although the difference is not statistically significant. | female users looked at the navigation screen significantly fewer times (t(5) = 3.2, p < 0.05, t-test for dependent samples) and for significantly shorter amounts of time (t(5) = 3.2, p < 0.05) in the VCP condition than in B. | contrasting |
train_11966 | The evaluation confirmed the importance of interactive real-time NLG for navigation, and we therefore see this as a key direction of future work. | it would be desirable to generate more complex referring expressions ("the tall church"). | contrasting |
train_11967 | While this type of lexical chain is described as "reiteration without identity of referents" by Morris and Hirst (1991), it would not be captured in Centering since this is not a case of strict coreference. | lexical chains do not capture types of reiterated discourse referents that have distinct morpho-syntactic realisations, e.g. | contrasting |
train_11968 | The sentence-external features lead to an improvement when combined with the language-model based ranking. | this improvement is leveled out in the BaseSyn model. | contrasting |
train_11969 | In this experiment, we find an effect of the sentence-external features over the simple sentence-internal baselines. | in the fully spelled-out, sentence-internal model, the effect is, again, minimal. | contrasting |
train_11970 | The label was supposed to cover any criticism that is not covered by a dedicated label. | the annotators reported that they chose this label when they were unsure whether a particular criticism label would fit a certain turn or not. | contrasting |
train_11971 | has the surface form of a question. | the context of the discussion revealed that the author tried to draw attention to the missing figure in the article and requested it to be filled or removed. | contrasting |
train_11972 | Using this ANOVA model, we find a highly significant main effect of the Re-alVsConstructed factor that demonstrates the general ability of the models to achieve separation between Real Pairs and Constructed Pairs; on average F(1,780) = 18.22, p < .0001. | when we look more closely, we find that although the trend is consistently to find more evidence of speech style accommodation in Real Pairs than in Constructed Pairs, we see differentiation among the models in terms of their ability to achieve this separation. | contrasting |
train_11973 | Such compositions are implemented in the toolkit TIBURON. | there are translation tasks in which the used XTOPs do not fulfill this requirement. | contrasting |
train_11974 | Our approach to composition is the same as in (Engelfriet, 1975;Baker, 1979;Maletti and Vogler, 2010): We simply parse the righthand sides of the XTOP M with the left-hand sides of the XTOP N . | to facilitate this approach we have to adjust the XTOPs M and N in two pre-processing steps. | contrasting |
train_11975 | Roughly speaking, we require that the left-hand sides of N are small enough to completely process righthand sides of M . | a comparison of left-and right-hand sides is complicated by the fact that their shape is different (left-hand sides have a state at the root, whereas right-hand sides have states in front of the variables). | contrasting |
train_11976 | However, in the first pre-processing step we might have introduced some non-linear (copying) rules in N (see rule ( ) in Example 5), and it is known that "nondeterminism [in M ] followed by copying [in N ]" is a feature that prevents composition to work (Engelfriet, 1975;Baker, 1979). | our copying is very local and the copies are only used to project to different subtrees. | contrasting |
train_11977 | Especially after reading the example it might seem useless to create the rule copies in R l [in Example 6 for l = σ(z 2 , z 3 )]. | each such rule has a distinct state at the root of the left-hand side, which can be used to trigger only this rule. | contrasting |
train_11978 | Claims are easiest to translate, yielding the highest overall BLEU score of 0.4879. | to that, all models score considerably lower on titles. | contrasting |
train_11979 | The results given in table 14 show that tuning on a pooled set of 6,000 text sections yields only minimal differences to tuning on 2,000 sentence pairs such that the BLEU scores for the new pooled models are still significantly lower than the best results in table 12 (indicated by "<"). | increasing the tuning set to 16,000 sentence pairs for IPC sections makes the pooled baseline perform as well as the best results in table 13, except for two cases (indicated by "<") (see table 15). | contrasting |
train_11980 | Syncretism is thus considered as the accidental by-product of such forces, and German case syncretism is typically analyzed according to these lines (Barðdal, 2009;Baerman, 2009, p. 229). | these forces are not explanatory: they only describe what has happened, but not why. | contrasting |
train_11981 | Compared to other semantic concordances, the granularity of PDEV is high and thus discouraging in terms of expected IAA. | selecting among patterns does not really mean disambiguating a concordance but rather determining to which pattern it is most similar-a task easier for humans than WSD is. | contrasting |
train_11982 | Traditionally, the amount of self-information contained in a tag (as a probabilistic event) depends only on the probability of that tag, and would be defined as I(t j ) = − log p 1 (t j ). | intuitively one can say that a good measure of usefulness of a particular tag should also take into consideration the expected tagging confusion related to the tag. | contrasting |
train_11983 | Traditional machine translation (MT) systems treat LCS data as noise, or just as regular sentences. | if LCS data is processed intelligently, it can provide a useful signal for training word alignment and MT models. | contrasting |
train_11984 | They use target strings in multiple languages as different views on translation. | in our work, we treat the alignment model and language model as different views of LCS data. | contrasting |
train_11985 | We can see that the word "badminton" is aligned incorrectly with word ">™Ž(Taufik)" . | in the LCS data, we see that " >™ Ž(Taufik)" and "badminton" appear in the same sentence ">™Ž badminton x³ (Taufik plays badminton so well)" and by adding the blocked constraint into the alignment model, it correctly learns that " >™Ž(Taufik)" should be aligned with something else, and it finds "Taufik" at end. | contrasting |
train_11986 | We can see that 3 techniques we proposed for word alignment all improve the machine translation result over the baseline system as well as the IBM 3 model. | although co-training has a bigger improvement on the word alignment compared with PR + , it actually has a lower BLEU score. | contrasting |
train_11987 | The word layer is connected to a convolutional output layer y t by weights summarized in the sparse matrix C. The output layer represents all possible next minimal units, where each MTU entry is only connected to neurons in the word layer representing its source and target words. | the word and MtU layers are then computed as follows: there are a number of computational issues with this model: First, we cannot efficiently factor the word layer w t into classes such as for the atomic MtU RNN model because we require all its activations to compute the MtU output layer y t . | contrasting |
train_11988 | The results ( Table 2 and Table 3) show that RNNLM performs competitively. | our approaches model translation since we use both source and target information as opposed to scoring only the fluency of the target side, such as done by RNNLM. | contrasting |
train_11989 | By using (6) as the objective function, we observed that the resulting segmentations yield promising applications in n-gram topic modeling, named entity recognition and Chinese segmentation. | in the spirit of Ries et al. | contrasting |
train_11990 | The "easiest" instances of quotation attribution problems arise when the speaker and the quote are semantically connected, e.g., through a reported speech verb like said. | in newswire text, the subject of this verb is commonly a pronoun or another uninformative anaphoric mention. | contrasting |
train_11991 | For compatibility with previous assessments, we report this score, which we call Exact Match (EM): this is the percentage of predicted speakers with the same span as the gold one. | for several quotations (about 30% in the PARC corpus) this information is of little value, since the gold mention is a pronoun, which per se does not give any useful information about the actual speaker entity. | contrasting |
train_11992 | The basic QUOTEBEFORECOREF system wrongly clusters together M 3 and M 4 as corefer-ent, and wrongly assigns M 3 as the representative speaker. | the JOINT system correctly clusters M 1 , M 2 and M 4 as coreferent. | contrasting |
train_11993 | After every n th iteration we resample η and ξ for all language models to capture the correlations. | to improve mixing time, we also resample components η k i and η l i when word i has changed event membership from type k to type l. In addition we define classes of closely related words (heuristically based on the covariance matrix) by classifying words as related when their similarity exceeds an empirically determined threshold. | contrasting |
train_11994 | (2012) extended the model of to jointly induce semantic roles and frames using the Chinese Restaurant Process, which is also used in our approach. | they did not aim at building a lexicon of semantic frames, but at distinguishing verbs that have different senses in a relatively small annotated corpus. | contrasting |
train_11995 | To measure the precision of induced semantic frames, we adopt the purity metric, which is usually used to evaluate clustering results. | the problem is that it is impossible to assign gold-standard classes to the huge number of instances. | contrasting |
train_11996 | Unlike the hand-labelled SemCor data, our automated sense labelling method is limited to the information found in the LLR used. | there are also 330 MASC instances covered by the ALC only. | contrasting |
train_11997 | In practise, our experiments turned out to be fairly insensitive to the value of this parameter, on evaluations over rare or unseen verbs. | overall accuracy would drop slightly if this cut-off was increased. | contrasting |
train_11998 | the Bohnet parser (Bohnet, 2010) employs morphological feature value pairs similar to our feature templates and Seeker and Kuhn (2013) introduces an integer linear programming framework including constraints for morphological agreement. | these works focus on dependency parsing and to the best of our knowledge, this is the first study on experimenting with atomic morphological features and their agreement in a constituency parsing. | contrasting |
train_11999 | Prop-Bank (Palmer et al., 2005) is the corpus of reference for verb-argument relations. | relations between a verb and its syntactic arguments are only a fraction of the relations present in texts. | contrasting |
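A minimal sketch of how rows with this schema could be loaded and inspected using the Hugging Face `datasets` library. The Hub path used below is a placeholder assumption, since the actual dataset identifier is not shown on this page; everything else uses only the column names visible above.

```python
# Minimal sketch: load and inspect a dataset with the schema shown above.
# The Hub path is a placeholder (assumption), not the real identifier.
from collections import Counter

from datasets import load_dataset

ds = load_dataset("user/contrastive-sentence-pairs", split="train")  # hypothetical path

# Each row has: id (string), sentence1 (string), sentence2 (string),
# and label (one of 4 classes, e.g. "contrasting").
for row in ds.select(range(3)):
    print(row["id"], "|", row["label"])
    print("  s1:", row["sentence1"][:80], "...")
    print("  s2:", row["sentence2"][:80], "...")

# Label distribution over the split.
print(Counter(ds["label"]))
```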