id
stringlengths
7
12
sentence1
stringlengths
6
1.27k
sentence2
stringlengths
6
926
label
stringclasses
4 values
train_12900
It is also worth noting that the number of search errors incurred in the coarse-to-fine approach can be dramatically reduced (at the cost of decoding time) by increasing the pruning thresholds.
the fortuitous nature of coarse-tofine search errors seems to be a substantial and desirable effect.
contrasting
train_12901
(Mao et al, 2007) used a sequential CRFs regression model to measure the polarity of a sentence in order to determine the sentiment flow of the authors in reviews.
this method must manually select a word set for constraints, where each selected word achieved the highest correlation with the sentiment.
contrasting
train_12902
In that work, we found that the projection of annotations across parallel texts can be successfully used to build a corpus annotated for subjectivity in the target language.
parallel texts are not always available for a given language pair.
contrasting
train_12903
As Joachims (2002) points out, the above type of problem is NP-Hard.
an approximate solution to finding g ik can be obtained by solving the following SVM optimization problem: In the second approach to ranking emotions, we use regression to model f i directly.
contrasting
train_12904
If max-product BP converges, we may simply output each variable's favorite value (according to its belief), if unique.
max-product BP tends to be unstable on loopy graphs, and we may not wish to wait for full convergence in any case.
contrasting
train_12905
minimal binarized grammar size.
m1 is usually not preferred in practice (Goodman, 1997).
contrasting
train_12906
In this paper, for simplicity reasons we do not consider the effect of for-statement implementations on the optimal binarization.
it is well known that reducing the number of constituents produced in parsing can greatly improve CKY parsing efficiency.
contrasting
train_12907
For example, in Figure 2, if we use left binarization, then [A B]:[0, 2] can be shared to generate both X :[0, 4] and Y :[0, 3], in which we can save one IC overall.
if right binarization is used, there will be no common ICs to share in the generation steps of X :[0, 4] and Y :[0, 3], and overall there are one more IC generated.
contrasting
train_12908
When combined with exact inference algorithms, like the iterative CKY (Tsuruoka and Tsujii, 2004), the accuracy will be the same.
if combined with other inexact pruning techniques like beam-pruning (Goodman, 1997) or coarse-to-fine parsing (Charniak et al., 2006), binarization may interact with those pruning methods in a complicated way to affect parsing accuracy.
contrasting
train_12909
So while for children or less educated adults these constructions might pose difficulties, they were favored by our assessors.
the average parse tree height negatively correlated with readability as expected, but surprisingly the correlation is very weak (-0.06).
contrasting
train_12910
To alleviate this problem, an obvious idea is to extract rules from k-best parses instead.
a k-best list, with its limited scope, has too few variations and too many redundancies (Huang, 2008).
contrasting
train_12911
The basic idea is to decompose the source (Chinese) parse into a series of tree fragments, each of which will form a rule with its corresponding English translation.
not every fragmentation can be used for rule extraction, since it may or may not respect the alignment and reordering between the two languages.
contrasting
train_12912
In tree-based extraction, for each sentence pair, each rule extracted naturally has a count of one, which will be used in maximum-likelihood estimation of rule probabilities.
a forest is an implicit collection of many more trees, each of which, when enumerated, has its own probability accumulated from of the parse hyperedges involved.
contrasting
train_12913
Their approach was globally optimised and discriminative trained.
a language model, an information source known to be crucial for obtaining good performance in SMT, was notably omitted.
contrasting
train_12914
These n-best lists are produced using algorithms tuned to remove multiple derivations of the same translation (which have previously been seen as undesirable).
it would be simple to extend our sampling based decoding algorithm to calculate the MBR estimate using BLEU, in theory providing a lower variance estimate than attained with n-best lists.
contrasting
train_12915
Since its introduction by Och (2003), minimum error rate training (MERT) has been widely adopted for training statistical machine translation (MT) systems.
mERT is limited in the number of feature weights that it can optimize reliably, with folk estimates of the limit ranging from 15 to 30 features.
contrasting
train_12916
This feature can be viewed as a soft syntactic constraint: it biases the model toward translations that respect syntactic structure, but does not force it to use them.
this more syntactically aware model, when tested in Chinese-English translation, did not improve translation performance.
contrasting
train_12917
On one hand, each conversational system has a domain model, which is the knowledge representation about its domain such as the types of objects and their properties and relations.
there are available resources about domain independent lexical knowledge (e.g., WordNet (Fellbaum, 1998)).
contrasting
train_12918
Now, the quantity AHT (T, r) is logged as part of the email management system and corresponds to the time taken to respond to a customer's email.
we do not have access to AHT (T , r) for any T = T .
contrasting
train_12919
Furthermore document-level entities can include more than one name string.
once a document-level entity has been clustered, it remains linked to entities that were a part of that initial clustering.
contrasting
train_12920
As we observed, the exact match baseline has fairly high accuracy but is obviously also too aggressive of a strategy.
for certain very famous global entities, any reference to the name (especially in corpora made of primarily news text) is likely to be a reference to a single global entity.
contrasting
train_12921
'Lady Thatcher' with 'Margaret Thatcher'); and linking entities despite spelling mistakes in a document (e.g linking 'Avenajado' with 'Robert Aventajado').
as we have already seen, the cross-document co-reference system does make mistakes and these mistakes can propagate to the within-document output.
contrasting
train_12922
We have noticed that the performance of the crossdocument co-reference system on organizations lags behind the performance of the system on people.
for LEDR, the extraction system's performance is quite similar between the two entity classes.
contrasting
train_12923
The ACE guidelines (LDC, 2008a) suggest that this distinction can be difficult to make, and in fact have a lengthy set of rules for classifying such cases.
these rules can seem unintuitive, and may be difficult for machines to learn.
contrasting
train_12924
This is also the only algorithm in the name variation component that scales quadratically with the number of name strings.
each calculation is independent, and could be done simultaneously (with enough machines).
contrasting
train_12925
Since different feature-sets and different ML approaches are used and combined for each experiment, it is not possible to present the number of features used in each experiment in Table 4.
table 5 shows the number of features and the ML approach used for each genre and NE class.
contrasting
train_12926
Moreover the quality of the performance of the different feature extraction tools such as AMIRA (for POS tagging and BPC) and MADA (for the morphological features) are optimized for NW data genres, thereby yielding suboptimal performance on the WL genre, leading to more noise than signal for training.
comparing relative performance on this genre, we see a significant jump from the most frequent baseline FreqBaseline (F β=1 =27.66) to the best baseline MLBaseline CRF (F β=1 =55.32).
contrasting
train_12927
In this paper, we experimented with one empirical and two well-known unsupervised statistical machine learning techniques: k-means and EM and evaluated their performance in generating topicoriented summaries.
the performance of these approaches depends entirely on the feature set used and the weighting of these features.
contrasting
train_12928
Previous phrase alignment work has primarily mitigated this tendency by constraining the inference procedure, for example with word alignments and linguistic features (Birch et al., 2006), or by disallowing large phrase pairs using a noncompositional constraint (Cherry and Lin, 2007;Zhang et al., 2008).
the problem lies with the model, and therefore should be corrected in the model, rather than the inference procedure.
contrasting
train_12929
(2005) has made some preliminary attempt on the idea of hierarchical semantic role labeling.
without considerations on how to utilize the characteristics of linguistically similar semantic roles, the purpose of the hierarchical system is to simplify the classification process to make it less time consuming.
contrasting
train_12930
Previous SRC systems treat all the tags equally, and view the SRC as a multi-category classification task.
we have different opinions of the traditional architecture.
contrasting
train_12931
Previous semantic role classifiers always did the classification problem in one-step.
in this paper, we did SRC in two steps.
contrasting
train_12932
The proposal distri-bution does not affect the underlying probabilistic model -Metropolis-Hastings will converge to the same underlying distribution for any non-degenerate proposal.
a well-chosen proposal distribution can substantially speed convergence.
contrasting
train_12933
In all but the simplest models there is no known closed form for the posterior distribution.
the Bayesian literature describes a number of methods for approximating the posterior P(θ | d).
contrasting
train_12934
In theory, the Gibbs samplers produce streams of samples that eventually converge on the true posterior distribution, while the Variational Bayes (VB) estimator only produces an approximation to the posterior.
as the size of the training data distribution increases the likelihood function and therefore the posterior distribution becomes increasingly peaked, so one would expect this variational approximation to become increasingly accurate.
contrasting
train_12935
This is not surprising, as the α and α specify how likely the samplers are to consider novel tags, and therefore directly influence the sampler's mobility.
in our experiments the best results are obtained in most settings with small values for α and α , usually between 0.1 and 0.0001.
contrasting
train_12936
representations, training a model on it, and using the resulting model as a new objective function.
it turns out that after a single round, improved weights due to additional training do not change the feature representation; the inference process does not yield a different outcome.
contrasting
train_12937
Table 6: Final results on EPPS English-Spanish, constrained triplet models, 10 EM iterations, compared to standard IBM model 1. eration using less than 2 GB of memory.
this shows that triggers outside the immediate context help overall translation quality.
contrasting
train_12938
Research on non-task-oriented dialogue systems like casual conversation dialogue systems ("chatbots") is on the other hand not very common, perhaps due to the many amateurs who try to build naturally talking systems using sometimes very clever, but rather unscientific methods although there are systems with chatting abilities as (Bickmore and Cassell, 2001) but concentrate on applying strategies to casual conversation rather than their automatic generation of those conversations.
we believe that the main reason is that an unrestricted domain is disproportionately difficult compared to the possible use such a system could have.
contrasting
train_12939
The proposition templates are applied in a predetermined order: for example, first a template "(noun) (wa) (adjective)" is used; next a template "(noun) (ga) (adjective)" is used.
since the generated proposition is not always a natural statement, the system uses exact matching searches of the whole phrases in a search engine to check the naturalness of each proposition.
contrasting
train_12940
In most NLP problems, the data is present in structured forms, like strings or trees, and this structural information can be effectively passed to a kernel-based learning algorithm using an appropriate kernel, like a string kernel (Lodhi et al., 2002) or a tree kernel (Collins and Duffy, 2001).
feature-based methods require reducing the data to a pre-defined set of features often leading to some loss of the useful structural information present in the data.
contrasting
train_12941
it is not obvious what the implicit features are and the authors do not describe it either.
our dependency-based word subsequence kernel, which also computes similarity between two dependency trees, is very transparent with the implicit features being simply the dependency paths.
contrasting
train_12942
In the query expansion viewpoint, an attempt to identify and decrease the proportion of unnecessary translations in a translation model may produce an effect of "selective" implicit query expansion and result in improved retrieval.
prior work on translation-based Q&A retrieval does not recognize this issue and uses the translation model as it is; essentially no attention seems to have been paid to improving the performance of the translation-based approach by enhancing the quality of translation models.
contrasting
train_12943
In terms of weighting scheme, the TextRank approach, which is more "strict" than tf-idf in eliminating unimportant words, has led comparatively higher retrieval performances on all levels of removal quantity when the translation model has been trained from the "noisy" (Q Q) corpus.
the "less strict" tf-idf approach has led better performances when the translation model has been trained from the "less noisy" (Q A) corpus.
contrasting
train_12944
Since co-occurrence is a symmetric relation, it suffices to compute half of the matrix.
for conceptual clarity and to generalize to instances where the relation may not be symmetric, the algorithm computes the entire matrix.
contrasting
train_12945
On such tasks, feature selection algorithms based on feature-class correlation have been very successful.
in the current problem, which we call 'semantic classification', there seem to be a fixed number of domain specific operative words such as 'grant', 'deny', 'moot', 'strike', etc., which, almost entirely decide the class of the docket entry, irrespective of the existence of other highly correlated features.
contrasting
train_12946
In this example, although 'deny' is rightly assigned a positive weight and 'moot' is rightly assigned a negative weight, when both features co-occur in a docket entry (as in 'deny as moot'), it makes the label negative.
8 the combined weight of the linear SVM is positive since the absolute value of the weight assigned to 'deny' is higher than that of 'moot', resulting in a net positive score.
contrasting
train_12947
As we have discussed earlier, this docket entry belongs to the OSJ category since it contains at least one OSJ event.
we see that the negative weights assigned by the SVM to the second and third sentences result in an overall negative classification.
contrasting
train_12948
Our intuition was that a decision tree makes a categorical decision at each node in the tree, hence it could capture the binary-switch like behavior of features.
the performance of the decision tree is found to be statistically indistinguishable from the linear SVM as shown in entry 4 of table 5.
contrasting
train_12949
So far, the classifiers we considered received a performance boost by piggybacking on the human selected features.
they did not take into account the polarity of these features.
contrasting
train_12950
The L 1 -regularized logistic regression was comparable to the SVM with linear kernel in this experiment.
the presented model presents the advantage that it can reduce the number of active features (features with non-zero weights assigned); the L 1 regularization can remove 74%, 48%, and 82% of substitution rules in each dataset.
contrasting
train_12951
For example, by discarding LUs occurring less than 200 times in the corpus, we obtain a +0.12 improvement in accuracy, but the coverage decreases to 57%.
uncovered LUs are also the most rare ones and their relevance in an application may be negligible.
contrasting
train_12952
We will be modelling this as another kind of MA, interaction-explicit MA, since the user needs to indicate explicitly that he wants a given suffix to be replaced, in contrast to the non-explicit positioning MA.
if the underlying MT engine providing the suffixes is powerful enough, the user would quickly realise that performing a MA is less costly that introducing a whole new word, and would take advantage of this fact by systematically clicking before introducing any new word.
contrasting
train_12953
This yielded a further average improvement in WSR of about 16% (25% relative improvement) when considering a maximum of 5 explicit MAs.
relative improvement in WSR and uMAR increase drop significantly when increasing the maximum allowed amount of explicit MAs from 1 to 5.
contrasting
train_12954
It is well known that richer context representation gives rise to better parsing performance (Johnson, 1998).
the need for tractability does not allow much internal information to be used to represent a hypothesis.
contrasting
train_12955
One way to estimate the tag pair generative probability Pr( , ′) is to manually align nodes between parallel trees, and use the manually aligned trees as the training data for maximum likelihood estimation.
this is a time-consuming and errorprone procedure.
contrasting
train_12956
For example, the compound noun, ' (corporate buyout)' contains an event noun ' (buyout)' and its accusative, ' (corporate).'
compound nouns provide no information about syntactic dependency or about case markers, so it is difficult to specify the predicate-argument structure.
contrasting
train_12957
This means players make a similar number of guesses even for difficult initial letters.
the distribution of initial letters for Free Association data reflects the relatively frequency of initial letters in English.
contrasting
train_12958
The quality of data for hard letters is considerably worse than that for easy letters.
to Categorilla and even Categodzilla, we found that the Free Association data was quite clean.
contrasting
train_12959
For example, for the word "crime", WordNet has as hyponyms "burglary" and "fraud".
it doesn't have "arson", "homicide", or "murder", which are among the 871 new pairs.
contrasting
train_12960
For example, the category "Things that accumulate in your body" is both easier to think of answers for and probably collects more useful data.
automatically creating categories with the right level of specificity is not a trivial task; our initial experiments suggested that it is easy to generate too much context, creating an uninteresting category.
contrasting
train_12961
Representative of each method, MSTParser and MaltParser gave comparable accuracies in the CoNLL-X shared task (Buchholz and Marsi, 2006).
they make different types of errors, which can be seen as a reflection of their theoretical differences (McDonald and Nivre, 2007).
contrasting
train_12962
In fact, due to parser's performance and word alignment ac-curacies, the statistics we collected from the GALE dataset, containing 10 million sentence-pairs, show that the children in the subtree VP(PP,VP) is translated monotonically 126310 times, while reordered of only 22144 times.
the hand-aligned data support the swap for 1245 times, and monotonically for only 168 times.
contrasting
train_12963
Due to its high coverage, WT assigns labels to a larger number of the instance in WN-gold than any other method.
the average rank of the correct class assignment is lower, resulting is lower MRR scores compared to Adsorption.
contrasting
train_12964
If there are no a priori preferences for cluster exemplars, the preferences are set to the median similarity (which can be thought of as the 'knee' of the objective function graph vs. number of clusters), and exemplars emerge from the message passing procedure.
shorter RP are more likely to contain base forms of relations (because longer phrases likely contain additional words specific to the sentence).
contrasting
train_12965
Results: Discarding clusters below a certain size had no significant effect on precision.
this step is still necessary for bootstrapping RE, since machine learning approaches require a sufficient number of positive examples to train the extractor.
contrasting
train_12966
However, this step is still necessary for bootstrapping RE, since machine learning approaches require a sufficient number of positive examples to train the extractor.
(Yakushiji et al., 2005) 33.70 33.10 33.40 (Mitsumori et al., 2006) 54.20 42.60 47.70 (Erkan et al., 2007 59.59 60.68 59.96 our results confirm the observation that frequently co-occurring pairs of entities are likely to stand in a fixed relation.
contrasting
train_12967
As expected, RD alone does not match combined precision and recall of state-of-the-art supervised systems.
we show better performance than expected.
contrasting
train_12968
We believe that it is important for the research community to continue to invest in building better resources in "source" languages, as it looks the most promising approach.
using a propagation approach can definitely help bootstrap the process.
contrasting
train_12969
The baseline system attempts to translate the first two phonetic characters as "wheat Georgia," whereas the other system simply deletes them.
the second sentence shows how word deletion can sacrifice adequacy for the sake of fluency, and the third sentence shows that sometimes word deletion removes words that could have been translated well (as seen in the baseline translation).
contrasting
train_12970
Statistical language processing systems for speech recognition, machine translation or parsing typically employ the Maximum A Posteriori (MAP) decision rule which optimizes the 0-1 loss function.
these systems are evaluated using metrics based on string-edit distance (Word Error Rate), ngram overlap (BLEU score (Papineni et al., 2001)), or precision/recall relative to human annotations.
contrasting
train_12971
Because a lattice may represent a number of candidates exponential in the size of its state set, it is often impractical to compute the MBR decoder (Equation 1) directly.
if we can express the gain function G as a sum of local gain functions g i , then we now show that Equation 1 can be refactored and the MBR decoder can be computed efficiently.
contrasting
train_12972
Relative to the N -best list, the translation lattice provides a better estimate of the expected BLEU score.
there are few hypotheses outside the 1000-best list which are selected by lattice MBR.
contrasting
train_12973
The results (Table 5) show that on aren, there is no degradation if we limit the maximum order of the n-grams to 3.
on zhen/enzh, there is improvement by BLEU(%) Max n-gram order aren zhen enzh 1 38.7 26.8 40.0 2 44.1 27.4 42.2 3 44.9 28.0 42.4 4 44.9 28.5 42.6 considering 4-grams.
contrasting
train_12974
This is a significant development in that the MBR decoder operates over a very large number of translations.
the current N -best implementation of MBR can be scaled to, at most, a few thousands of hypotheses.
contrasting
train_12975
This way, phrase pair extraction goes handin-hand with estimating the probabilities.
in practice, due to the huge number of possible phrase pairs, this task is rather challenging, both computationally and statistically.
contrasting
train_12976
They formulate a joint phrase-based model in which a source-target sentence pair is generated jointly.
the huge number of possible phrase-alignments prohibits scaling up the estimation by Expectation-Maximization (EM) (Dempster et al., 1977) to large corpora.
contrasting
train_12977
For example, one way to binarize the permutation {2, 1, 3, 4} is to introduce a proper split into {2, 1; 3, 4}, then recursively another proper split of {2, 1} into {2; 1} and {3, 4} into {3; 4}.
the permutation {2, 4, 1, 3} is non-binarizable.
contrasting
train_12978
In this work we also start out from a generative model with latent segmentation variables.
we find out that concentrating the learning effort on smoothing is crucial for good performance.
contrasting
train_12979
Given a small number of coreference-annotated documents and a large number of unlabeled documents, these weakly supervised learners aim to incrementally augment the labeled data by iteratively training a classifier 1 on the labeled data and using it to label mention pairs randomly drawn from the unlabeled documents as COREFERENT or NOT COREFERENT.
classifying mention pairs using such iterative approaches is undesirable for coreference resolution: since the non-coreferent mention pairs significantly outnumber their coreferent counterparts, the resulting classifiers generally have an increasing tendency to (mis)label a pair as non-coreferent as bootstrapping progresses (see Ng and Cardie (2003)).
contrasting
train_12980
A natural way to extend these unsupervised coreference models is to incorporate additional linguistic knowledge sources, such as those employed by our fully supervised resolver.
feature engineering is in general more difficult for generative models than for discriminative models, as the former typically require non-overlapping features.
contrasting
train_12981
Relations that have been exploited in supervised coreference resolution include transitivity (McCallum & Wellner, 2005) and anaphoricity (Denis & Baldridge, 2007).
there is little work to date on joint inference for unsupervised resolution.
contrasting
train_12982
In the model above, only the best annotationẑ produced by upstream stages is used for determining the optimal outputŷ.
,ẑ may be an incorrect annotation, while the correct annotation may be ignored because it was assigned a lower confidence value.
contrasting
train_12983
This relationship is use-ful, since an agent of the predicate thought is likely to be a person entity.
the nodes sailors and thought are adjacent in the dependency tree of the sentence.
contrasting
train_12984
In particular, it tells us nothing about phrases other than NPs.
for NER, we see that both self-training and learning with hints improve over the baseline.
contrasting
train_12985
This is related to joint inference (Daumé III et al., 2006).
we do not require that that a single data set be labeled for multiple tasks.
contrasting
train_12986
A common technique for learning such experts in the Weighted Majority algorithm (Littlestone and Warmuth, 1994), which weighs a mixture of experts (classifiers).
since we require a hard assignment -pick a single shared parameter set s -rather than a mixture, the algorithm reduces to picking the classifier s with the fewest mistakes in predicting domain d. This requires tracking the number of mistakes made by each shared classifier on each domain once a label is revealed.
contrasting
train_12987
These methods largely require batch learning, unlabeled target data, or available source data at adaptation.
our algorithms operate purely online and can be applied when no target data is available.
contrasting
train_12988
Starting with 5,947 different types of relations, transitive rules increase the dataset to approximately 12,000.
this increase wasn't enough to be effective in global reasoning.
contrasting
train_12989
(2006) used these latter relations in their work.
we also add new time-time links that are deduced from the logical time intervals that they describe.
contrasting
train_12990
Both before and after also showed increases in precision and recall in the three-way evaluation.
unknown did not parallel this improvement, nor are the increases as dramatic as in the twoway evaluation.
contrasting
train_12991
Chinese is a language that does not have morphological tense markers that provide explicit grammaticalization of the temporal location of situations (events or states).
in many NLP applications such as Machine Translation, Information Extraction and Question Answering, it is desirable to make the temporal location of the situations explicit.
contrasting
train_12992
Negation signals in the BioScope corpus always have one consecutive block of scope tokens, including the signal token itself.
the scope finding classifier can make predictions that result in nonconsecutive blocks of scope tokens: we observed that 54% of scope blocks predicted by the system given gold standard negation signals are nonconsecutive.
contrasting
train_12993
As a consequence, the optimization procedure may stop in a poor local optimum.
it is difficult to compute a direction that decorrelates two or more correlated feature functions.
contrasting
train_12994
For instance, the syntactic approach of Marcu et al.
(2006) can learn unlexicalized rules that insert function words in isolation, such as: as discussed in (Wang, Knight & Marcu, 2007), joint modeling of structure and lexical choice can exacerbate data sparsity, a problem that they attempt to address by tree binarization.
contrasting
train_12995
Furthermore, the most common deletion is of quotation marks, which is incorrect in most cases, even though such deletion is evidenced in the training corpus 5 .
the next most common deletions "I" and "it" are linguistically well grounded, since Spanish often drops pronouns.
contrasting
train_12996
A particular language pair could have alignments that are very unsuited to the stochastic assumptions of the IBM or HMM alignment models.
manually aligning 110 language pairs is impractical.
contrasting
train_12997
The sentences with large rank scores are chosen into the summary.
the model makes uniform use of the sentences in different documents, i.e.
contrasting
train_12998
Because the work by Murray and Renals used the same dataset, we can compare our scores directly.
rambow carried out summarization work on a different, unavailable email corpus, and so we re-implemented their summarization system for our current email data.
contrasting
train_12999
Therefore model parameters can be directly estimated from the training corpus by counting.
in our task, the correct correspondence between NL words and MR structures is unknown.
contrasting