Column      Type     Values
id          string   lengths 7-12
sentence1   string   lengths 6-1.27k
sentence2   string   lengths 6-926
label       string   4 classes
train_900
This data splitting enables linear scalability of memory sizes.
doing so complicates the update procedure and, in terms of execution speed, may … [Figure 1: Parallelized inner-most routine of EM clustering algorithm.]
contrasting
train_901
They extracted a gazetteer for each NE category and utilized it in an NE tagger.
Kazama and Torisawa (2007) extracted hyponymy relations, which are independent of the NE categories, from Wikipedia and utilized them as a gazetteer.
contrasting
train_902
Wikipedia also produced a large gazetteer of more than 550,000 entries.
comparing these gazetteers and ours precisely is difficult at this point because detailed information such as the precision and recall of these gazetteers was not reported.
contrasting
train_903
(2007), which describes how each node stores only those parameters relevant to the training data on each node.
some parameters need to be duplicated and thus their method is less efficient than ours in terms of memory usage.
contrasting
train_904
Using models such as Semi-Markov CRFs (Sarawagi and Cohen, 2004), which handle the features on overlapping regions, is one possible direction.
even if we utilize the current gazetteers optimally, the coverage is upper bounded at 70%.
contrasting
train_905
Specifically, if a translation entry (signatured by its Chinese and English strings) to be added is not in the baseline phrase table, we simply add the entry into the baseline table.
if the entry is already in the baseline phrase table, then we merge the entries by enforcing the translation probability, since we obtain the same translation entry from two different knowledge sources (one from the parallel corpora and the other from the Chinese monolingual corpora).
contrasting
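The train_905 pair above describes a concrete merging procedure for phrase tables: add an entry if it is new, otherwise reconcile the two probability estimates. A minimal Python sketch of that procedure, assuming tables keyed by (Chinese, English) string pairs and a linear interpolation weight that the excerpt does not specify:

```python
# Hypothetical sketch of the phrase-table merge described in train_905.
# Tables map (chinese, english) string pairs to translation probabilities.

def merge_phrase_tables(baseline, extracted, weight=0.5):
    """Add new entries; interpolate probabilities for duplicated entries.

    `weight` is an assumed interpolation coefficient, not from the excerpt.
    """
    merged = dict(baseline)
    for entry, prob in extracted.items():
        if entry not in merged:
            # Entry only found in the monolingual-corpus-derived table: add it.
            merged[entry] = prob
        else:
            # Same entry from two knowledge sources: enforce a single
            # translation probability by combining the two estimates.
            merged[entry] = weight * merged[entry] + (1 - weight) * prob
    return merged

baseline = {("中国", "China"): 0.9}
extracted = {("中国", "China"): 0.7, ("北京", "Beijing"): 0.8}
print(merge_phrase_tables(baseline, extracted))
```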
train_906
However, they have only considered relations between single-word phrases and single-character abbreviations.
moreover, the HMM model is computationally expensive and unable to exploit the data co-occurrence phenomena that we … [footnote 5: the HMM model aligns the characters in the abbreviation to the words in the full form in an unsupervised way].
contrasting
train_907
(2007) demonstrates that the general feature space they devise achieves a rate of error reduction ranging from 48% to 88% over a chance baseline accuracy, across classification tasks of varying difficulty.
they also show that their general feature space does not generally improve the classification accuracy over subcategorization frames (see …). In this study, we explore a wider range of features for AVC, focusing particularly on various ways to mix syntactic with lexical information.
contrasting
train_908
Although it achieves the highest accuracy on the 2-way classification, its accuracy drops drastically as n gets bigger, indicating that SCF does not scale as well as other feature sets when dealing with a larger number of verb classes.
the co-occurrence feature (CO), which is believed to convey only lexical information, outperforms SCF on every n-way classification when n ≥ 10, suggesting that verbs in the same Levin classes tend to share their neighboring words.
contrasting
train_909
Despite the Qualifications and other measures taken in the collection phase of the corpus, we believe the quality of the data remains open to question.
the Mechanical Turk framework provided additional information for each assignment, for example the time workers spent on the task.
contrasting
train_910
A high value of Time On Task thus does not necessarily mean that the worker actually spent a long time on it.
a low value indicates that he/she only spent a short time on it.
contrasting
train_911
From this we can conclude that in the majority of cases, there was at least one quite similar answer among those for that HIT.
comparing the sentence ids is only an indicative measure, and it does not tell the whole story about agreement.
contrasting
train_912
Table 4: Answer agreement (ROUGE-1, -2, -SU and -L):
                   R-1   R-2   R-SU  R-L
NoA not included   0.56  0.46  0.37  0.52
NoA included       0.42  0.35  0.28  0.39
Random Answers     0.13  0.01  0.02  0.09
The sentence agreement and ROUGE figures do not tell us much by themselves.
they are an example of a procedure that can be used to postprocess the data, both here and in further projects of a similar nature.
contrasting
train_913
As Table 3 shows, this is different for SAT verbal analogy, where verbs are still the most important feature type and the only one whose presence/absence makes a statistically significant difference.
this time coordinating conjunctions (with prepositions) do help a bit (the difference is not statistically significant) since SAT verbal analogy questions ask for a broader range of relations, e.g., antonymy, for which coordinating conjunctions like but are helpful.
contrasting
train_914
(2001) showed that retrieval scores from IR systems could be modeled using a Normal distribution for relevant documents and an exponential distribution for non-relevant documents.
in their study, fusion results using these comparatively complex normalization approaches achieved performance no better than the much simpler CombMNZ.
contrasting
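train_914 contrasts distribution-based score normalization with the much simpler CombMNZ. For reference, a minimal sketch of the conventional CombMNZ rule (sum of normalized scores multiplied by the number of systems that retrieved the document); the min-max normalization step is a common choice assumed here, not taken from the excerpt:

```python
# Minimal CombMNZ sketch: fuse ranked lists from several retrieval systems.

def minmax(scores):
    # Min-max normalize one system's raw scores into [0, 1].
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0
    return {d: (s - lo) / span for d, s in scores.items()}

def comb_mnz(runs):
    """runs: list of {doc_id: raw_score} dicts, one per system."""
    fused = {}
    for run in (minmax(r) for r in runs):
        for doc, s in run.items():
            fused[doc] = fused.get(doc, 0.0) + s
    # Multiply by the number of systems that retrieved each document.
    counts = {d: sum(d in run for run in runs) for d in fused}
    return {d: fused[d] * counts[d] for d in fused}

runs = [{"d1": 12.0, "d2": 7.0}, {"d1": 0.9, "d3": 0.4}]
print(sorted(comb_mnz(runs).items(), key=lambda kv: -kv[1]))
```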
train_915
If a document's score is large in both systems, we expect it to have high probability of relevance.
as a document's score increases linearly in one source, we have no reason to expect its probability of relevance to also increase linearly.
contrasting
train_916
The numbers in parentheses are the size of the clusters described by the path from the root.
we hypothesize that more informative and useful intensional summaries might be constructed from clusters of discovered associations between attributes.
contrasting
train_917
To the best of our knowledge, Schulte im Walde (2006) is the only hard-clustering approach that previously incorporated selectional preferences as verb features.
her model was not soft-clustering, and she only used a simple approach to represent selectional preferences by WordNet's top-level concepts, instead of making use of the whole hierarchy and more sophisticated methods, as in the current paper.
contrasting
train_918
Recent work (Talbot and Osborne, 2007b) has demonstrated that randomized encodings can be used to represent n-gram counts for LMs with significant space savings, circumventing information-theoretic constraints on lossless data structures by allowing errors with some small probability.
the representation scheme used by our model encodes parameters directly.
contrasting
train_919
This paper focuses on machine translation.
many of our findings should transfer to other applications of language modeling.
contrasting
train_920
Our model achieved smaller improvements for the phrasal system (0.43 improvement for n=1 translations and 0.72 for the selected n=100 translations).
this improvement is encouraging given the large size of the training data.
contrasting
train_921
Although convex, this constraint is more expensive to enforce; therefore, we drop it in our experiments below.
(adding the semidefinite connectedness constraint appears to be feasible on a sentence-by-sentence level.)
contrasting
train_922
Problem II: Statistical transliteration always chooses the translations based on probabilities.
in some cases, the correct translation may have lower probability.
contrasting
train_923
We also think that both Arg0 and Arg1 can be detected quite well relying on unlexicalized syntactic features only, that is, not knowing which are the verbal and nominal heads.
distinguishing between Arg2-4 is more dependent on the subcategorization frame of the verb, and thus more sensitive to the lack of verbal information.
contrasting
train_924
use a combination of grouped VerbNet roles (for Arg2) and PropBank roles (for the rest of the arguments).
our study compares both role sets as they stand, without modifications or mixing.
contrasting
train_925
The motivation behind all these works is to exploit linguistic syntactic structure features to model the translation process.
most of them fail to utilize well the non-syntactic phrases that have proven useful in phrase-based methods (Koehn et al., 2003).
contrasting
train_926
Chiang (2005)'s hierarchical phrase-based model achieves significant performance improvement.
no further significant improvement is achieved when the model is made sensitive to syntactic structures by adding a constituent feature (Chiang, 2005).
contrasting
train_927
(2007) integrate supertags (a kind of lexicalized syntactic description) into the target side of the translation model and language model under the phrase-based translation framework, resulting in good performance improvement.
neither source-side syntactic knowledge nor the reordering model is further explored.
contrasting
train_928
This solution requires larger applicability contexts.
phrases are utilized independently in the phrase-based method without depending on any contexts.
contrasting
train_929
3) The number of trees in a rule is not greater than d. In addition, we limit initial rules to have at most seven lexical words as leaf nodes on either side.
in order to extract long-distance reordering rules, we also generate those initial rules with more than seven lexical words, for abstract rule extraction only (not used in decoding).
contrasting
train_930
It thereby suggests that SCFG is less effective in modelling parse tree structure transfer between Chinese and English when using a Penn Treebank-style linguistic grammar and under word-alignment constraints.
formal SCFGs show much better performance in the formally syntax-based translation framework (Chiang, 2005).
contrasting
train_931
It clearly indicates that SRRs are very effective in reordering structures, improving performance by 1.45 BLEU (26.07 vs. 24.62).
DPRs have less impact on performance in our tree sequence-based model.
contrasting
train_932
(2007) show that rule-based methods are relatively ineffective for orthographic syllabification in English.
few data-driven syllabification systems currently exist.
contrasting
train_933
This approach can be considered an SVM because the model parameters are trained discriminatively to separate correct tag sequences from incorrect ones by as large a margin as possible.
to generative HMMs, the learning process requires labeled training data.
contrasting
train_934
Numbered NB tags are more informative than standard NB tags.
neither annotation system can represent the internal structure of the syllable.
contrasting
train_935
He does not report word accuracy for his syllabification model.
his baseline L2P system is not improved by adding a syllabification model.
contrasting
train_936
Dependency trees are simpler in form than CFG trees since there are no constituent labels.
dependency relations directly model semantic structure of a sentence.
contrasting
train_937
Since we only have a single NT X in the formalism described above, we do not need to add the NT label in states.
we need to specify one of the three types of the dependency structure: fixed, floating on the left side, or floating on the right side.
contrasting
train_938
• filtered: a string-to-string MT system as in baseline.
we only keep the transfer rules whose target side can be generated by a well-formed dependency structure.
contrasting
train_939
(2006), who applied a reranked parser to a large unsupervised corpus in order to obtain additional training data for the parser; this self-training approach was shown to be quite effective in practice.
their approach depends on the usage of a high-quality parse reranker, whereas the method described here simply augments the features of an existing parser.
contrasting
train_940
Evaluation of deep parsing results is often reported only in terms of coverage (number of sentences which receive an analysis), because, since the hand-crafted grammars are optimised for precision over coverage, the analyses are assumed to be correct.
in this experiment, we are potentially 'diluting' the precision of the grammar by using external resources to remove parses and so it is important that we have some idea of how the accuracy is affected.
contrasting
train_941
Restricting by lexical types should have the effect of reducing ambiguity further than POS tags can do, since one POS tag could still allow the use of multiple lexical items with compatible lexical types.
it could be considered more difficult to tag accurately, since there are many more lexical types than POS tags (almost 900 in the ERG) and less training data is available.
contrasting
train_942
Previous studies evaluate simulated dialog corpora using evaluation measures which can be automatically extracted from the dialog systems' logs.
the validity of these automatic measures has not been fully proven.
contrasting
train_943
In this study, we also strive to develop a prediction model of the rankings of the simulated users' performance.
our approach uses human judgments as the gold standard.
contrasting
train_944
In human-computer communication, the goal of an error recovery strategy is to maximize the user's satisfaction in using the system by guiding the repair of wrong information through human-computer interaction.
there are different approaches to improve the robustness of dialog management using n-best hypotheses.
contrasting
train_945
This previous EBDM framework chose a dialog example to maximize the utterance similarity measure.
our system generates a set of multiple dialog examples, each with utterance similarity over a threshold, given a specific hypothesis.
contrasting
train_946
Therefore, when c_i is NEXT TASK, the discourse score is computed as …, where P(c_i | c = top(S)) is a transition probability from the top node c on the focus stack S to the candidate node c_i.
there is a problem for cases other than NEXT TASK because the graph has no backward probability.
contrasting
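train_946 scores a candidate dialog node by the transition probability from the top of the focus stack, and notes that the graph has no backward probability. A toy sketch with an invented transition table (node names and probabilities are illustrative only) that shows both points:

```python
# Toy sketch of the NEXT TASK discourse score from train_946.
# transition[c][c_i] plays the role of P(c_i | c = top(S)); all values
# below are invented for illustration.

transition = {
    "greet": {"ask_destination": 0.7, "ask_date": 0.3},
    "ask_destination": {"ask_date": 0.8, "confirm": 0.2},
}

def discourse_score(candidate, focus_stack):
    top = focus_stack[-1]
    # Forward transition probability from the current focus to the candidate.
    return transition.get(top, {}).get(candidate, 0.0)

stack = ["greet", "ask_destination"]
print(discourse_score("ask_date", stack))  # forward case: 0.8
print(discourse_score("greet", stack))     # no backward probability: 0.0
```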
train_947
In most previous work the reward function is manually set, which makes it "the most hand-crafted aspect" of RL (Paek, 2006).
we learn the reward model from data, using a modified version of the PARADISE framework (Walker et al., 2000), following pioneering work by (Walker et al., 1998).
contrasting
train_948
We have also tried some other strategies that extract a larger number of templates from a DT.
the efficacy of the learned rules is quite similar to that of the rules generated by the first method.
contrasting
train_949
SVM results are the state of the art for the text chunking task.
using a committee of ETL classifiers, we produce very competitive results and maintain the advantages of using a rule based system.
contrasting
train_950
Moreover, it is necessary to split the features into several sets, and then train several corresponding discriminative models separately and in advance.
JESS-CM is free from this kind of additional process, and the entire parameter estimation procedure can be performed in a single pass.
contrasting
train_951
We used the value that gave the best performance with the development set.
it may be computationally unrealistic to retrain the entire procedure several times using 1G-words of unlabeled data.
contrasting
train_952
These facts imply that our SSL framework is rather appropriate for handling large scale unlabeled data.
ASO-semi and JESS-CM have an important common feature.
contrasting
train_953
Note that ASO-semi is also an 'indirect approach'.
our approach is a 'direct approach' because the distribution of y obtained from JESS-CM is used as 'seeds' of hidden states during MDF estimation for joint PM parameters (see Section 4.1).
contrasting
train_954
We can calculate the precision (P ) of learned patterns for each relation by annotating the extracted patterns as correct/incorrect.
calculating the recall is a problem for the same reason as above.
contrasting
train_955
While calculating the true recall here is not possible, even calculating the true relative recall of the system against the baseline is not possible as we can annotate only a small sample.
following Pantel et al.
contrasting
train_956
More generally, when used on top of all other components, some of the models slightly degrade performance, as can be seen by those figures in the ablation tests which are higher than the corresponding baseline.
due to their different roles, each of the matching components might capture some unique preferences.
contrasting
train_957
That work, like ours, relies on pattern clusters.
it requires initial word seeds and targets the discovery of relationships specific for some given concept, while we attempt to discover and define generic relationships that exist in the entire domain.
contrasting
train_958
A study can develop its own relationship definitions and dataset, like (Nastase and Szpakowicz, 2003), thus introducing a possible bias; or it can accept the definition and dataset prepared by another work, like (Turney, 2006).
this makes it impossible to work on new relationship types.
contrasting
train_959
Today it is standard for web search engines to show these summaries as one or two lines of text, often with ellipses separating sentence fragments.
there is evidence that the ideal result length is often longer than the standard snippet length, and that furthermore, result length depends on the type of answer being sought.
contrasting
train_960
They used RIPPER as a classifier to detect interrogative questions and their answers and used the resulting question and answer pairs as summaries.
it did not consider the contexts of questions or the dependencies between answer sentences.
contrasting
train_961
(See Section 3.4 for more about CRFs.) The linear CRF model has been successfully applied in NLP and text mining tasks (McCallum and Li, 2003; Sha and Pereira, 2003).
our problem cannot be modeled with Linear CRFs in the same way as other NLP tasks, where one node has a unique label.
contrasting
train_962
None of these situations is ideal: the cost of building the training corpus in the former setup is high; in the latter scenario the data tends to be domain-specific, hence unsuitable for the learning of open-domain models.
recent years have seen an explosion of user-generated content (or social media).
contrasting
train_963
The general intuition is to exploit the pairwise preferences induced from the data by training on pairs of patterns, rather than independently on each pattern.
given a weight vector α, the score for a pattern x (a candidate answer) is simply the inner product between the pattern and the weight vector, score(x) = ⟨α, x⟩; the error function depends on pairwise scores.
contrasting
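train_963 describes scoring candidate answers by an inner product with a weight vector and training on pairwise preferences induced from the data. A minimal sketch under an assumed hinge-style pairwise error; the excerpt does not give the exact error function, so the margin form below is an assumption:

```python
# Pairwise-preference sketch from train_963: score(x) = <alpha, x>,
# error defined over pairs (preferred, dispreferred). The hinge form
# of the pairwise loss is an assumption, not from the text.

def score(alpha, x):
    return sum(a * xi for a, xi in zip(alpha, x))

def pairwise_hinge(alpha, pairs, margin=1.0):
    """pairs: list of (better_pattern, worse_pattern) feature vectors."""
    return sum(
        max(0.0, margin - (score(alpha, xp) - score(alpha, xn)))
        for xp, xn in pairs
    )

alpha = [0.5, -0.2, 1.0]
pairs = [([1, 0, 1], [0, 1, 0]), ([0, 0, 1], [1, 1, 0])]
print(pairwise_hinge(alpha, pairs))  # penalizes pairs ranked too closely
```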
train_964
On the one hand, this tagset is much larger than the largest tagset used in English (from 17 tags in most unsupervised POS tagging experiments, to the 46 tags of the WSJ corpus and the about 150 tags of the LOB corpus).
our tagset is intrinsically factored as a set of dependent sub-features, which we explicitly represent.
contrasting
train_965
For example, in Hebrew, the preposition meaning "in", b-, is always prefixed to its nominal argument.
in Arabic, the most common corresponding particle is fy, which appears as a separate word.
contrasting
train_966
In the next section we describe our model's "generative story" for producing the data we observe.
we formalize our model in the context of two languages E and F. The formulation can be extended to accommodate evidence from multiple languages as well.
contrasting
train_967
At each iteration, the sampler selects a random variable X_i and draws a new value for X_i from the conditional distribution of X_i given the current values of the other variables, X_i ~ P(X_i | X_-i). The stationary distribution of variables derived through this procedure is guaranteed to converge to the true joint distribution of the random variables.
if some variables can be jointly sampled, then it may be beneficial to perform block sampling of these variables to speed convergence.
contrasting
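train_967 walks through the basic Gibbs sampling loop: pick a random variable, redraw it from its conditional given the rest. A self-contained toy sampler for a bivariate Gaussian, whose conditionals are known in closed form; the target distribution is an invented example, not from the excerpt:

```python
# Gibbs sampling sketch for the procedure in train_967.
import math
import random

RHO = 0.8                       # correlation of the toy target
SD = math.sqrt(1 - RHO ** 2)    # conditional standard deviation

def gibbs(num_samples, burn_in=1000):
    x, y = 0.0, 0.0
    samples = []
    for t in range(num_samples + burn_in):
        # Select a random variable and resample it from its conditional:
        # X | Y=y ~ N(rho * y, 1 - rho^2), and symmetrically for Y | X=x.
        if random.random() < 0.5:
            x = random.gauss(RHO * y, SD)
        else:
            y = random.gauss(RHO * x, SD)
        if t >= burn_in:
            samples.append((x, y))
    return samples

samples = gibbs(20000)
emp = sum(x * y for x, y in samples) / len(samples)
print(f"empirical E[XY] ~ {emp:.2f} (target correlation {RHO})")
```

The empirical correlation approaching RHO illustrates the convergence guarantee stated in the excerpt.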
train_968
We notice that in general, adding English, which has comparatively little morphological ambiguity, is about as useful as adding a more closely related Semitic language.
once character-to-character phonetic correspondences are added as an abstract morpheme prior (final two rows), we find the performance of related language pairs outstrips English, reducing relative error over MONOLINGUAL by 10% and 24% for the Hebrew/Arabic pair.
contrasting
train_969
In this work, we use the morphological analyzer of MILA -Knowledge Center for Processing Hebrew (KC analyzer).
to English tagsets, the number of tags for Hebrew, based on all combinations of the morphological attributes, can grow theoretically to about 300,000 tags.
contrasting
train_970
Uniform initialization based on the simple suffix-based ambiguity class guesser yields big improvements over the uniform all-open-class initialization.
our refined initial conditions always improve the results (by as much as 40% error reduction).
contrasting
train_971
While recent work, such as GG, aims to use the Bayesian framework and incorporate "linguistically motivated priors", in practice such priors currently only account for the fact that language-related distributions are sparse, a very general kind of knowledge.
our method allows the incorporation of much more fine-grained intuitions.
contrasting
train_972
Two-sided class-based models received most attention in the literature.
several different types of mixed word and class models have been proposed for the purpose of improving the performance of the model (Goodman, 2000), reducing its size (Goodman and Gao, 2000) as well as lowering the complexity of related clustering algorithms (Whittaker and Woodland, 2001).
contrasting
train_973
The worst case complexity of the exchange algorithm is quadratic in the number of classes.
the average-case complexity can be reduced by updating only the counts which are actually affected by moving a word from one cluster to another.
[Algorithm 1: Exchange Algorithm Outline.
  input: the fixed number of clusters N_c
  compute initial clustering
  while clustering changed in last iteration do:
      forall w ∈ V do:
          forall c ∈ C do:
              move word w tentatively to cluster c
              compute updated optimization criterion
          move word w to cluster maximizing the optimization criterion]
contrasting
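Algorithm 1 above outlines the exchange algorithm. A runnable toy version follows; the real optimization criterion is the class-based LM log-likelihood, for which a simple stand-in (negative within-cluster variance of one-dimensional word scores) is substituted here:

```python
# Toy exchange algorithm following Algorithm 1; the criterion is a stand-in.
import random

def criterion(clusters, scores):
    total = 0.0
    for words in clusters.values():
        if not words:
            continue
        mean = sum(scores[w] for w in words) / len(words)
        total -= sum((scores[w] - mean) ** 2 for w in words)
    return total

def exchange(vocab, scores, num_clusters, max_iters=20):
    clusters = {c: set() for c in range(num_clusters)}
    assign = {w: random.randrange(num_clusters) for w in vocab}
    for w, c in assign.items():
        clusters[c].add(w)
    for _ in range(max_iters):
        changed = False
        for w in vocab:
            best_c, best_val = assign[w], None
            for c in clusters:
                # Tentatively move w to cluster c, evaluate, then revert.
                clusters[assign[w]].discard(w)
                clusters[c].add(w)
                val = criterion(clusters, scores)
                clusters[c].discard(w)
                clusters[assign[w]].add(w)
                if best_val is None or val > best_val:
                    best_c, best_val = c, val
            if best_c != assign[w]:
                # Move w to the cluster maximizing the criterion.
                clusters[assign[w]].discard(w)
                clusters[best_c].add(w)
                assign[w] = best_c
                changed = True
        if not changed:   # clustering unchanged in last iteration: stop
            break
    return assign

scores = {"cat": 1.0, "dog": 1.2, "run": 5.0, "walk": 5.3}
print(exchange(list(scores), scores, num_clusters=2))
```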
train_974
This stems from the fact that the change in log likelihood is calculated by each worker under the assumption that no other changes to the clustering are performed by other workers in this iteration.
if in each iteration only a rather small and randomly chosen subset of all words are considered for exchange, the intuition is that the remaining words still define the parameters of each cluster well enough for the algorithm to converge.
contrasting
train_975
As described in Section 5, we start out the first iteration with a random partition of the vocabulary into subsets each assigned to a specific worker.
instead of keeping this assignment constant throughout all iterations, after each iteration the vocabulary is partitioned anew so that all words from any given cluster are considered by the same worker in the next iteration.
contrasting
train_976
As shown in Table 3, adding the predictive classbased model trained on the ar webnews data set leads to small improvements in dev and nist06 scores but causes the test score to decrease.
adding the class-based model trained on the ar gigaword data set to the other class-based and the word-based model results in further improvement of the dev score, but also in large improvements of the test and nist06 scores.
contrasting
train_977
The paraphrase likelihood can be computed using Equation (1).
we find that using only the MLE-based probabilities can suffer from data sparseness.
contrasting
train_978
As described in Section 3.2, we require that the pattern words of an English pattern e be extracted from a partial subtree.
we do not have such a constraint on the Chinese pivot patterns.
contrasting
train_979
It is advantageous to consider a space of possible narrative events and the ordering within, not a closed list.
it is worthwhile to construct discrete narrative chains, if only to see whether the combination of event learning and ordering produce scriptlike structures.
contrasting
train_980
For our example, 'started' is inflected for masculine Gender, singular Number, third person.
the noun is definite and is assigned genitive Case since it is in a possessive (idafa) construction.
contrasting
train_981
There is ample evidence that any NN followed by a JJ would make a perfectly valid Argument.
an AST structure would mask the fact that the JJ 'the-Chinese' does not modify the NN 'ministers' since they do not agree in Number, and in syntactic Case, where the latter is genitive and the former is nominative.
contrasting
train_982
We hypothesize that most 'biographical' sentences will contain a reference to the target.
some of these sentences may be irrelevant to a biography; therefore, we filter them using a binary classifier that retains only 'biographical' sentences.
contrasting
train_983
One might collect training data by manually annotating a suitable corpus containing biographical and nonbiographical data about a person, as in (Zhou et al., 2004).
such annotation is labor intensive.
contrasting
train_984
These results are quite promising.
we should note that they may not necessarily represent the successful classification of biographical vs. non-biographical sentences but rather the classification of Wikipedia sentences vs. TDT4 sentences.
contrasting
train_985
In fact, when we use only the SVM regression model to rank the hypothesis sentences, without employing any classifier, then remove redundant sentences, rewrite and trim the results, we find that, interestingly, this approach also outperforms top-DUC2004, although the difference is not statistically significant.
we believe that this is an area worth pursuing in future, with more sophisticated features.
contrasting
train_986
3.1 Impact language models
From the retrieval perspective, our collection is the paper to be summarized, and each sentence is a "document" to be retrieved.
unlike in the case of ad hoc retrieval, we do not really have a query describing the impact of the paper; instead, we have a lot of citation contexts that can be used to infer information about the query.
contrasting
train_987
Intuitively, the impact of a paper is mostly reflected in the citation context.
thus the estimation of the impact language model should be primarily based on the citation context C. We would like our impact model to be able to help us select impact-reflecting sentences from d, thus it is important for the impact model to explain well the paper content in general.
contrasting
train_988
A simple way is to pool together all the sentences in C and use the maximum likelihood estimator p(w|C) = Σ_{s∈C} c(w, s) / Σ_{w'} Σ_{s∈C} c(w', s), where c(w, s) is the count of w in s. One deficiency of this simple estimate is that we treat all the (extended) citation sentences equally.
there are at least two reasons why we want to assign unequal weights to different citation sentences: (1) A sentence closer to the citation label should contribute more than one far away.
contrasting
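train_988 motivates weighting citation sentences unequally in the maximum likelihood estimate, with sentences nearer the citation label contributing more. A sketch under an assumed inverse-distance weighting scheme; the excerpt states only the desideratum, so the 1/(1+distance) weight is an illustrative choice:

```python
# Weighted MLE sketch for train_988: p(w|C) is proportional to the weighted
# count of w over citation sentences, with nearer sentences weighted more.
from collections import Counter

def impact_lm(citation_sentences):
    """citation_sentences: list of (tokens, distance_from_citation) pairs."""
    weighted = Counter()
    for tokens, dist in citation_sentences:
        w_s = 1.0 / (1.0 + dist)  # assumed weight, not from the excerpt
        for tok in tokens:
            weighted[tok] += w_s
    total = sum(weighted.values())
    return {tok: c / total for tok, c in weighted.items()}

ctx = [
    ("this model improves translation".split(), 0),
    ("related corpora are described elsewhere".split(), 2),
]
print(impact_lm(ctx)["model"])
```

With dist=0 the weight reduces to 1, recovering the unweighted pooled estimator as a special case.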
train_989
Percentage of signature terms in vocabulary The number of signature terms gives the total count of topic signatures over all the documents in the input.
the number of documents in an input set and the size of the individual documents across different sets are not the same.
contrasting
train_990
(2005) adds word repetition to their feature set.
their approach deals with all word repetitions on an equal basis, and so degrades quickly in the presence of noise words (their term for words which are shared across conversations) to almost complete failure when only 1/2 of the words are shared.
contrasting
train_991
The disentanglement system's test performance decreases proportionally; mean 1-to-1 falls to 36.08, and mean loc_3 to 63.00, essentially baseline performance.
mentions are not sufficient; with only name mention and time gap features, mean 1-to-1 is 38.54 and loc_3 is 67.14.
contrasting
train_992
Improving the current model will definitely require better features for the classifier.
we also left the issue of partitioning nearly completely unexplored.
contrasting
train_993
Inductive Logic Programming (ILP) has been applied to some natural language processing tasks, including parsing (Mooney, 1997), POS disambiguation (Cussens, 1996), lexicon construction (Claveau et al., 2003), WSD (Specia et al., 2007), and so on.
to our knowledge, our work is the first effort to adopt this technique for the coreference resolution task.
contrasting
train_994
Besides, as with our entity-mention predicates described in Section 4.2, we also tried the "All-X" strategy for the entity-level agreement features, that is, whether all mentions in a partial entity agree in number and gender with an active mention.
we found this brings no improvement over the "Any-X" strategy.
contrasting
train_995
Working with automatic hand tracking, Quek (2003) automatically computes perceptually-salient gesture features, such as symmetric motion and oscillatory repetitions.
our feature representation takes the form of a vector of continuous values and is not easily interpretable in terms of how the gesture actually appears.
contrasting
train_996
Thus, the performance gains demonstrated in this paper cannot be explained by such punctuation-like phenomena; we believe that they are due to the consistent gestural themes that characterize coherent topics.
we are interested in pursuing the idea of visual punctuation in the future, so as to compare the power of visual punctuation and gestural cohesion to predict segment boundaries.
contrasting
train_997
AL has been successfully applied already for a wide range of NLP tasks, including POS tagging (Engelson and Dagan, 1996), chunking (Ngai and Yarowsky, 2000), statistical parsing (Hwa, 2004), and named entity recognition (Tomanek et al., 2007).
AL is designed in such a way that it selects examples for manual annotation with respect to a single learning algorithm or classifier.
contrasting
train_998
We refrain from calculating the overall efficiency score (Section 3) here due to the lack of generally accepted weights for the considered annotation tasks.
we require a good selection protocol to exceed the performance of random selection and extrinsic selection.
contrasting
train_999
Those approaches focus on the combination of classifiers in order to improve the classification error rate for one specific classification task.
the focus of multi-task AL is on strategies to select training material for multi-classifier systems where all classifiers cover different classification tasks.
contrasting