id: string (7–12 characters)
sentence1: string (6–1.27k characters)
sentence2: string (6–926 characters)
label: string (4 classes)
train_20700
This performance drop is statistically significant out of domain.
the difference between the Eisner and Attardi algorithms is not statistically significant out of domain.
contrasting
train_20701
Recent work has shown that the combination of base parsers at learning time, e.g., through stacking, yields considerable benefits (Nivre and McDonald, 2008; Attardi and Dell'Orletta, 2009).
it is unclear how these approaches compare against the simpler ensemble models, which combine parsers only at runtime.
contrasting
train_20702
SITGs have proven to be a powerful tool in Syntax Machine Translation.
the algorithms that have been proposed do not explore all the possible parse trees.
contrasting
train_20703
Using the reranker features distributed with the Brown reranker (Charniak and Johnson, 2005), which we call the "standard" set below, we obtained no overall improvement in f-score when either reranking the Berkeley parser n-best lists alone, or when the Berkeley parses were combined with the Brown parses.
it is possible that these results reflect the fact that the features used by the reranker were chosen because they improve the Brown parser, i.e., they are the result of feature selection based on reranking the Brown parser's n-best lists.
contrasting
train_20704
It would lead to deemphasizing those unusual types of information that might be being discussed as part of a post.
one might expect that those things that are unusual types of information might actually be more likely to be the in-focus information within an initiation that responses may be likely to refer to.
contrasting
train_20705
Conversational Agents have been shown to be effective tutors in a wide range of educational domains.
these agents are often ignored and abused in collaborative learning scenarios involving multiple students.
contrasting
train_20706
In all three conditions, students go through the same task plan.
the degree of social performance is varied from minimal (Task) to ideal (Human).
contrasting
train_20707
On the right side of Table 2, we notice that the human tutors (H) were rated higher on being part of the team (Integration), being more liked, being friendlier and keeping the group more socially comfortable (T.Releasing).
the social tutors (S) were rated to be friendlier and were only marginally better at being seen as part of the team.
contrasting
train_20708
Language identification has traditionally been approached with character-level language models.
the language model approach crucially depends on the length of the text in question.
contrasting
train_20709
Both approaches benefit from their ability to consider large, flexible spans of source context when making transduction decisions.
they encode this context in different ways, providing their respective models with different information.
contrasting
train_20710
One approach to string transduction is to view it as a tagging problem where the input characters are tagged with the output characters.
since sounds are often represented by multicharacter units, the relationship between the input and output characters is often complex.
contrasting
train_20711
In the first 2 passes, we use a standard bi-gram LM to generate lattices, followed by a standard tri-gram LM rescoring of lattices.
in the third pass, we generate both lattices and N-best lists based on the same bi-gram LM.
contrasting
train_20712
improvement of 0.8% (about 4.8% relative) compared to the standard tri-gram lattice rescoring.
we have 0.6% absolute improvement (about 3.7% relative) compared to the standard 4-gram lattice rescoring.
contrasting
train_20713
(Zitouni et al., 2005) used Arabic morphologically segmented data and claimed to have very competitive results in ACE 2003 and ACE 2004 data.
(Benajiba et al., 2008) report good results for Arabic NER on ACE 2003, 2004 and 2005 segmentation.
contrasting
train_20714
This is because the token itself becomes insignificant information to the classifier.
when only punctuation separation is performed (Word_s), the data is significantly sparse and the obtained results achieve a high F-measure (77.1) only when outputs of other classifiers are used.
contrasting
train_20715
Hence, we keep the edge to indicate that the child can be used as a preferred substitute for the parent.
the edge is removed if the ratio is small (less than a threshold t, see Fig.
contrasting
train_20716
Using aggressive pruning, the list size and number of redundant phrase patterns are greatly reduced.
the classification accuracy does not decrease.
contrasting
train_20717
The detection of pitch accents and phrase boundaries has received significantly more research attention than the classification of accent types and phrase ending behavior.
one technique that has been used in a number of research efforts is to simultaneously detect and classify pitch accent.
contrasting
train_20718
In both systems, the frame-based log posterior vector of P(phone|acoustics) over all phones is decorrelated using the Karhunen-Loeve (KL) transform; unlike MLPs, CRFs take into account the entire label sequence when computing local posteriors.
posterior estimates from the CRF tend to be overconfident compared to MLP posteriors (Morris and Fosler-Lussier, 2009).
contrasting
train_20719
Calculating these is computationally inexpensive for many simple tasks (such as classification and regression).
marginal and MAP inference tends to be expensive for complex structured prediction models (such as the joint information extraction models of ), making semisupervised learning intractable.
contrasting
train_20720
The softmax-margin approach offers (1) a convex objective, (2) the ability to incorporate task-specific cost functions, and (3) a probabilistic interpretation (which supports, e.g., hidden-variable learning and computation of posteriors).
max-margin training and MIRA do not provide (3); risk and JRB do not provide (1); and CLL does not support (2).
contrasting
train_20721
A drawback of this approach is that it is extremely local: while decisions can be based on complex structures on the left, they can look only at a few words to the right.
our algorithm builds a dependency tree by iteratively selecting the best pair of neighbours to connect at each parsing step.
contrasting
train_20722
Each performed action changes the partial structures and with it the extracted features and the computed scores.
these changes are limited to a fixed local context around the attachment point of the action.
contrasting
train_20723
Shorter edges are arguably easier to predict, and our parser builds them early in time.
it is also capable of producing long dependencies at later stages in the parsing process.
contrasting
train_20724
(2009b) study dependency treebanks for nine languages and find that all dependency structures meet the mildly ill-nested condition in the dependency treebanks for some gap degree.
they do not report the maximum gap degree or parsing complexity.
contrasting
train_20725
This is certainly true when the graph is of high quality and all paths trustworthy.
in a graph constructed from web cooccurrence statistics, this is rarely the case.
contrasting
train_20726
Ideally we would see an order on such phrases, e.g., "more brittle" has a larger negative polarity than "brittle", which in turn has a larger negative polarity than "less brittle".
this is rarely the case and usually the adjective has the highest polarity magnitude.
contrasting
train_20727
In the approach, a subjective sentence is represented as a set of words in the sentence, ignoring word order and head-modifier relation between words.
sentiment classification is different from traditional topic-based text classification.
contrasting
train_20728
Recently, several methods have been proposed to cope with the problem (Zaenen, 2004; Ikeda et al., 2008).
these methods are based on flat bag-of-features representation, and do not consider syntactic structures which seem essential to infer the polarity of a whole sentence.
contrasting
train_20729
Titov and McDonald (2008b) underline the need for unsupervised methods for aspect detection.
according to the authors, existing topic models, such as standard Latent Dirichlet Allocation (LDA) (Blei et al., 2003), are not suited to the task of aspect detection in reviews, because they tend to capture global topics in the data, rather than rateable aspects pertinent to the review.
contrasting
train_20730
For example, for many verbs, nsubj tends to start a cause path and dobj a theme path.
for "bind" that signifies a Binding event, both lead to themes, as in "A binds B".
contrasting
train_20731
Several studies have been performed on identifying PICO elements in abstracts (Demner-Fushman and Lin, 2007; Hansen et al., 2008; Chung, 2009).
all of them are reporting coarse-grain (sentence-level) tagging methods that have not yet been shown to be sufficient for the purpose of IR.
contrasting
train_20732
Moreover, there is currently no standard test collection of questions in PICO structure available for evaluation.
the most critical aspect in IR is term weighting.
contrasting
train_20733
These approaches follow the assumption that the user knows where the most relevant information is located.
(Kamps et al., 2005) showed that it is preferable to use structure as a search hint, and not as a strict search requirement. The second approach consists in integrating the document structure at the indexing step by introducing a structure weighting scheme (Wilkinson, 1994).
contrasting
train_20734
There have been several studies that cover the PICO extraction problem.
as far as we know, none of them analyses and uses the positional distribution of these elements within the documents for the purpose of IR.
contrasting
train_20735
It could also be caused by the limited number of queries in our test collection.
we can determine reasonable weights by tuning each part weight separately.
contrasting
train_20736
A straightforward idea is to detect PICO elements in documents and use the elements in the retrieval process.
this approach does not work well because of the difficulty of arriving at a consistent tagging of these elements.
contrasting
train_20737
Given a query, search engines retrieve relevant pictures by analyzing the image caption (if it exists), textual descriptions found adjacent to the image, and other text-related factors such as the file name of the image.
since they do not analyze the actual content of the images, search engines cannot be used to retrieve pictures from unannotated collections.
contrasting
train_20738
The probability of a document d in a corpus is defined as p(d|α, β) = ∫ p(θ|α) (∏_n Σ_{z_n} p(z_n|θ) p(w_n|z_n, β)) dθ. Computing the posterior distribution P(θ, z|d, α, β) of the hidden variables given a document is intractable in general.
a variety of approximate inference algorithms have been proposed in the literature including variational inference which our model adopts .
contrasting
train_20739
All are almost surprisingly intuitive, but this is not terribly surprising since Chinese and English have very similar large-scale structures (both are head initial, both have adjectives and quantifiers that precede nouns).
we see two entries in the list (starred) that correspond to an English word order that is ungrammatical in Chinese: PP modifiers in Chinese typically precede the VPs they modify, and CPs (relative clauses) also typically precede the nouns they modify.
contrasting
train_20740
The VSO basic word order is evident: early in the sentence, there is a strong tendency towards right movement around arguments after covering the verb.
right movement is increasingly penalized at the end of the sentence.
contrasting
train_20741
The binarization in Figure 1 is unfortunate because the obtained production cannot be factorized such that only two nonterminals occur in each rule.
the binarization of Figure 4 allows the factorization into S(U, NP) ↔ S(U, NP) and U: @(NP, V) ↔ @(V, NP), which are fully binarized productions.
contrasting
train_20742
However, the binarization of Figure 4 allows the factorization into S(U, NP) ↔ S(U, NP) and U: @(NP, V) ↔ @(V, NP), which are fully binarized productions.
in general, STSGs (or SCFGs or extended tree transducers) cannot be fully binarized as shown in Aho and Ullman (1972).
contrasting
train_20743
Consequently, we cannot compute forward or backward applications for arbitrary MBOT.
if the MBOT is equivalent to an STSG (for example, because it was constructed by the method presented before Theorem 3), then forward and backward application can be computed essentially as for STSG.
contrasting
train_20744
MBOT can efficiently be used (with computational benefits) as an alternative representation for transformations computed by STSG (or compositions of STSG).
MBOT can also compute transformations whose domain or range cannot be represented by a TSG.
contrasting
train_20745
Both methods were trained and tested on data from the Ziff-Davis corpus (Knight and Marcu, 2002), and they achieved very similar grammaticality and meaning preservation scores, with no statistically significant difference.
their compression rates (counted in words) were very different: 70.37% for the noisy-channel method and 57.19% for the C4.5-based one.
contrasting
train_20746
Our method is similar to Nomoto's, in that it uses two stages, one that chops the source dependency tree generating candidate compressions, and one that ranks the candidates.
we experimented with more elaborate ranking models, and our method does not employ any manually crafted rules.
contrasting
train_20747
language model trained on a large background corpus.
language models tend to assign smaller probabilities to longer sentences; therefore they favor short sentences, but not necessarily the most appropriate compressions.
contrasting
train_20748
Extrinsic evaluations have also shown that, while extractive summaries may be less coherent than human abstracts, users still find them to be valuable tools for browsing documents (He et al., 1999).
these same evaluations also indicate that concise abstracts are generally preferred by users and lead to higher objective task scores.
contrasting
train_20749
The system described thus far may appear extractive in nature, as the transformation step is identifying informative sentences in the conversation.
these selected sentences correspond to <participant, relation, entity> triples in the ontology, for which we can subsequently generate novel text by creating linguistic annotations of the conversation ontology (Galanis and Androutsopolous, 2007).
contrasting
train_20750
This is due to the fact that the leading sentences for these two domains do not indicate any significance, hence the Lead system just behaves like Random.
for the scientific and newswire domains, the leading sentences do have importance so the Lead system consistently outperforms Random.
contrasting
train_20751
f_MMR is not guaranteed to be everywhere monotone.
our theoretical results still hold for f_MMR with high probability in practice.
contrasting
train_20752
We assume that there is not enough high-quality data to build a monolingual selectional preference model for the source language (shown by dotted lines).
we can use a bilingual vector space, that is, a semantic space in which words of both the source and the target language are represented, to translate each source language word s into the target language by identifying its nearest (most similar) target word tr(s). Now we can use a target language selectional preference model to obtain plausibilities for source triples, where the superscript indicates the language.
contrasting
train_20753
Data Our experiments were carried out on the CoNLL 2008 (Surdeanu et al., 2008) training dataset which contains both verbal and nominal predicates.
we focused solely on verbal predicates, following most previous work on semantic role labeling (Màrquez et al., 2008).
contrasting
train_20754
Since different similarity functions can be used within this framework, one may wish to select the one that is the most appropriate or relevant to the task considered.
a crucial requirement for this choice to be realistic is to ensure that for the family of similarity functions considered the expected similarity maximization is efficiently computable.
contrasting
train_20755
As mentioned earlier, we leave this question to future work.
we can offer a brief look at how one could tackle this question.
contrasting
train_20756
The empirical adequacy of 2-SCFG models would presumably be lower with automatically-aligned texts and if the study also included non-European languages.
phrase-based systems can properly handle inside-out alignments when used with a reasonably large distortion limit, and all configurations in Fig.
contrasting
train_20757
Likewise, the procedure could be applied to statistical systems that only generate k-best lists.
we would not expect the same strong performance from model combination in these constrained settings.
contrasting
train_20758
Additionally, the experiments were performed over a large corpus of messages that are not available for use by other researchers.
we use messages from the widely-available Enron email corpus (Klimt and Yang, 2004) for our own experiments.
contrasting
train_20759
This strongly suggests that without zoning, the classifier is not learning features from the training set at a useful level of generality.
once we add the zoning classifier, the top-10 unigrams and bigrams appear to correspond much better with linguistic intuitions about the language of requests.
contrasting
train_20760
There is much disagreement about the units and elementary relations of discourse structure, but they agree that the structures are hierarchical, most commonly trees (Marcu, 2000), while others have argued for directed acyclic graphs (Danlos, 2004), or general graphs (Wolf and Gibson, 2004).
most of the segmentation research to date has focused on linear segmentation, in which segments are non-overlapping and sequential, and it has been argued that this sequence model is sufficient for many purposes (Hearst, 1994).
contrasting
train_20761
The mean scores for the BIN baseline are over 50% on the encyclopedia data.
the mean score for BIN on the Choi standard data (Fig.
contrasting
train_20762
For example, SkillSum (Williams and Reiter, 2008) and ICONOCLAST (Power et al., 2003) are two contemporary generation systems that allow for specifying aspects of style such as choice of discourse marker, clause order, repetition and sentence and paragraph lengths in the form of constraints that can be optimised.
to date, these systems do not consider syntactic reformulations of the type we are interested in.
contrasting
train_20763
This pulls the average for reformulated sentences down.
on average 2 out of 7 reformulations score quite high.
contrasting
train_20764
In general, one would expect that short transformation sequences to provide good evidence of true entailments.
to account for the grandparent-child relationship in the hypothesis, TED would produce a fairly long sequence, relabeling nearby to be near, deleting the two nodes for Rossville Blvd, and then reinserting those nodes under near.
contrasting
train_20765
Settings of 0.1, 0.2, 0.3, and 0.4 led to 10-fold cross-validation accuracy values that were not significantly different from each other. The main difference between our kernel and the CTK is that we sum over all pairs of subtrees (Equation 3).
the CTK considers only one pair of subtrees.
contrasting
train_20766
Accuracy values were not significantly different from each other.
we did observe that increased search failure (§3.4) resulted from settings above 0.5.
contrasting
train_20767
Chawathe and Garcia-Molina (1997) describe a tree edit algorithm for detecting changes in structured documents that incorporates edits for moving subtrees and reordering children.
they make assumptions unsuitable for natural language, such as the absence of recursive syntactic rewrite rules.
contrasting
train_20768
Lexical-syntactic rules can be automatically extracted from plain corpora (e.g., (Lin and Pantel, 2001; Szpektor and Dagan, 2008)) but the quality (also in terms of little noise) and the coverage are low.
rules written at the semantic level are more accurate but their automatic design is difficult and so they are typically hand-coded for the specific phenomena.
contrasting
train_20769
by using FrameNet semantics (e.g., like in (Burchardt et al., 2007)), it is possible to encode a lexical-syntactic rule using the KILLING and the DEATH frames, i.e.
to use this model, specific rules and a semantic role labeler on the specific corpora are needed.
contrasting
train_20770
Again, the syntactic rules (with variables) which this kernel can provide are not general enough for RTE3 (Table 3 reports the coverage of the different resources, BNC/WN/WIKI, for the words of the three datasets: 0.55/0.42/0.83 for RTE2, 0.54/0.41/0.83 for RTE3, and 0.45/0.34/0.82 for RTE5).
maxSSTK+WOK improves WOK on all datasets thanks to its generalization ability.
contrasting
train_20771
This may be due to the fact that, as shown in Table 1, the overall differences between partners in mixed-gender pairs are quite low, and so neither partner may be doing much turn-byturn matching.
as we expected, entrainment is least prevalent among male-male pairs.
contrasting
train_20772
This reflects the intuition that someone overly eager to be liked may be perceived as annoying and socially inept.
similarity-attraction theory states that similarity promotes attraction, and someone might therefore entrain in order to obtain his partner's social approval.
contrasting
train_20773
As we expected, giving encouragement is correlated with entrainment for all three gender groups, and trying to be liked is correlated with entrainment for male-male and female-male groups.
trying to dominate is not correlated with entrainment on any feature, and conversation awkward is actually positively correlated with entrainment on jitter.
contrasting
train_20774
A high number is negative in that it is the sign of an inefficient dialogue, one which takes many turn exchanges to accomplish the objective.
it may also be the sign of easy, flowing dialogue between the partners.
contrasting
train_20775
Besides shell language, there were other annotations relevant to essay scoring.
we ignored them for this study because they are not directly relevant to the task of shell language detection.
contrasting
train_20776
For the annotated essay test set ( §3.2), the percentage of tokens tagged as shell was 14.0% (11.6% were labeled as shell by the first annotator).
the percentage of tokens tagged as shell was 4.2% for Lincoln-Douglas, 5.4% for Kennedy-Nixon, 4.6% for Gore-Bush, and 4.8% for Obama-McCain.
contrasting
train_20777
It is not completely clear whether the smaller percentages tagged as shell are due to a lack of coverage by the shell detector or more substantial differences in the domain.
it seems that these debates genuinely include less shell.
contrasting
train_20778
Also, Bush employs a somewhat atypical sentence structure here: "It's not what I think and its not my intentions and not my plan."
the system also incorrectly tagged sequences as shell, particularly in short sentences (e.g., "Are we as strong as we should be?").
contrasting
train_20779
Discourse markers, however, are typically only single words or short phrases that express a limited number of relationships.
shell can capture longer sequences that express more complex relationships between the components of an argumentative discourse (e.g., "But let's get back to the core issue here" signals that the following point is more important than the previous one).
contrasting
train_20780
In shell detection, we focus on the lexico-syntactic level, aiming to identify the bold words as shell.
work on argumentation schemes focuses at a higher level of abstraction, aiming to classify the sentence as an attempt to persuade by appealing to an external authority.
contrasting
train_20781
There have been many attempts over the last decade to develop model-based approaches to the phrase alignment problem (Marcu and Wong, 2002; Birch et al., 2006; DeNero et al., 2008; Blunsom et al., 2009).
most of these have met with limited success compared to the simpler heuristic method.
contrasting
train_20782
Because there is a sure link at a_48, σ_f8 = [4, 4] does not include the possible link at a_38.
f_7 only has possible links, so σ_f7 = [5, 6] is the span containing those.
contrasting
train_20783
DeNero and Klein (2010) implicitly included these constraints in their representation: instead of sets of variables, they used a structured representation that only encodes triples (a, π, σ) satisfying both the mapping π = π(a) and the structural constraint that a can be generated by a block ITG grammar.
our inference procedure, BP, requires that we represent (a, π, σ) as an assignment of values to a set of variables.
contrasting
train_20784
The BP ITG model performs comparably to the Viterbi ITG model.
because posterior decoding permits explicit tradeoffs between precision and recall, it can do much better in the recall-biased measures, even though the Viterbi ITG model was explicitly trained to maximize F_5 (DeNero and Klein, 2010).
contrasting
train_20785
One practical solution is to restrict the output vocabulary to a short-list composed of the most frequent words (Schwenk, 2007).
the usual size of the short-list is under 20k, which does not seem sufficient to faithfully represent the translation models of section 2.
contrasting
train_20786
One solution would be full-blown transliteration (Knight and Graehl, 1998), followed by application of Jaro-Winkler.
transliteration systems are complex and require significant training resources.
contrasting
train_20787
Because a similar corpus did not exist for development, we split the evaluation corpus into development and test sections.
the usual method of splitting by document would not confine all mentions of each entity to one side of the split.
contrasting
train_20788
First, the model learns that high edit distance is predictive of a mismatch.
singleton strings that do not match often have a lower edit distance than longer strings that do match.
contrasting
train_20789
To our knowledge, Baron and Freedman (2008) reported the only previous results on the ACE2008 data set.
they only gave gold results for English, and clustered the entire evaluation corpus (test+development).
contrasting
train_20790
Within this work, specific affect definitions vary slightly with the intention of being coherent within the application and domain and being relevant to the specific adaptation goal (Martalo et al., 2008).
affective systems researchers generally agree that disengaged users show little involvement in the interaction, and often display facial, gestural and linguistic signals such as gaze avoidance, finger tapping, humming, sarcasm, et cetera.
contrasting
train_20791
All turns are used in the disengagement detection experiments described next.
only the training problem dialogues (360, 5 per student, 6044 student turns) are used for the performance analyses in Sections 6-7, because the final test problem was given after the instruments measuring performance (survey and posttest).
contrasting
train_20792
Compared to the results in Table 2, we find that the VSM and LSA methods are very robust to recognition errors, and we only observe slight correlation decreases on these features.
the decrease for the PMI-based method is quite large.
contrasting
train_20793
Our experimental results showed that all the features obtained good correlations with human proficiency scores if there are no recognition errors in the text transcripts, with the PMI-based method performing the best over three similarity measures.
if we used ASR transcripts, we observed a marked performance drop for the PMI-based method.
contrasting
train_20794
In our experiments, we noticed that shorter conversations suffer from poor classification.
the results from the above section appear to contradict this assertion, as a 30-word window can give very good performance.
contrasting
train_20795
Though the performance gain over the random ranker has shrunk considerably, there is still some utility in using the opening of a conversation to determine its ultimate duration.
it is clear predicting duration via conversation opening is a much more difficult task overall.
contrasting
train_20796
show that on certain synthetic-data problems, this frequentist training regimen significantly reduced test-data loss compared to approximate maximum likelihood estimation (MLE).
this method has not been evaluated on real-world problems until now.
contrasting
train_20797
Finally, we hypothesized that sum-product inference may produce more accurate results in certain cases as it allows more information about different parts of the model to be exchanged.
our results show that for these three problems, sum-product and max-product inference yield statistically indistinguishable results.
contrasting
train_20798
These computations are tractable for HMMs, since the distribution q(t) = p_θ(t | w) that is optimal at the E-step (which makes the inequality tight) can be represented as a lattice (a certain kind of weighted DFA), and this makes the M-step tractable via the forward-backward algorithm.
there are many extensions such as factorial HMMs and Bayesian HMMs in which an expectation under p_θ(t | w) involves an intractable sum.
contrasting
train_20799
The other view is that the corpus log-likelihood is a sum over many terms of the form (2), one for each training sentence w, and we bound each summand individually using a different q_φ.
neither view leads to a practical implementation in our setting.
contrasting