Dataset schema:
- id: string (lengths 7 to 12)
- sentence1: string (lengths 6 to 1.27k)
- sentence2: string (lengths 6 to 926)
- label: string (4 classes)
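The flattened header above describes four string fields. A hypothetical row matching that schema can be sketched as follows (the sentence values are abbreviated stand-ins, not full dataset entries):

```python
# A hypothetical row matching the schema above; sentence values are
# abbreviated stand-ins, not full dataset entries.
row = {
    "id": "train_2200",  # string, 7-12 characters
    "sentence1": "We evaluate our new model on WordSim-353.",
    "sentence2": "one limitation of this evaluation is the judgment setup.",
    "label": "contrasting",  # one of 4 label classes
}

# Schema checks one might run over every row:
assert set(row) == {"id", "sentence1", "sentence2", "label"}
assert 7 <= len(row["id"]) <= 12
assert isinstance(row["label"], str)
```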
train_2200
We evaluate our new model on the standard WordSim-353 (Finkelstein et al., 2001) dataset that includes human similarity judgments on pairs of words, showing that combining both local and global context outperforms using only local or global context alone, and is competitive with state-of-the-art methods.
one limitation of this evaluation is that the human judgments are on pairs of words. (Figure 1: An overview of our neural language model.)
contrasting
train_2201
We used 5,000 iterations because this is the software's default setting; evaluating the trace output suggests it only takes several hundred iterations to "burn in".
we ran 8 chains for 25,000 iterations of the colloc model; as expected the results of this run are within two standard deviations of the results reported above.
contrasting
train_2202
Psychological proposals have suggested that children may discover that particular social cues help in establishing reference (Baldwin, 1993; Hollich et al., 2000), but prior modeling work has often assumed that cues, cue weights, or both are prespecified.
the models described here could in principle discover a wide range of different social conventions.
contrasting
train_2203
At the root node trying, CLN(trying) is 6 because there are six crossing links under its sub-span: (e1-j4, e2-j3), (e1-j4, e4-j2), (e1-j4, e5-j1), (e2-j3, e4-j2), (e2-j3, e5-j1) and (e4-j2, e5-j1).
ccLN(trying) is 5 because (e4-j2, e5-j1) falls under its child node play, and thus does not count towards ccLN of trying.
contrasting
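The crossing-link counting in the train_2203 entry can be sketched in code (a hypothetical reconstruction: each link is an (English position, foreign position) pair, and two links cross when their English and foreign orderings disagree):

```python
from itertools import combinations

# Links as (English position, foreign position) pairs from the example:
# e1-j4, e2-j3, e4-j2, e5-j1.
links = [(1, 4), (2, 3), (4, 2), (5, 1)]

def crossing_pairs(links):
    """Return all pairs of links whose English and foreign orders disagree."""
    return [
        (a, b)
        for a, b in combinations(links, 2)
        if (a[0] - b[0]) * (a[1] - b[1]) < 0  # orders disagree -> crossing
    ]

# All C(4, 2) = 6 pairs cross here, matching CLN(trying) = 6 in the text.
assert len(crossing_pairs(links)) == 6
```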
train_2204
our ranking reordering model indeed significantly reduces the crossing-link numbers over the original sentence pairs.
the performance of the ranking reorder model still falls far short of the oracle, which is the lowest crossing-link number of all possible permutations allowed by the parse tree.
contrasting
train_2205
But with phrase matching in Chinese, it must be modeled explicitly.
we cannot simply perform covered ngram matching as a post processing step.
contrasting
train_2206
It has been shown to give better correlations than BLEU for many European languages including English (Callison-Burch et al., 2011).
its use of POS tags and synonym dictionaries prevents its use at the character-level.
contrasting
train_2207
Some sample sentences taken from the IWSLT test set are shown in Table 4 (some are simplified from the original).
the Cilin dictionary correctly identified some pairs as synonyms, but fails to recognize others, such as 一个 = 个 ("a") and 这儿 = 这里 ("here"); partial awards are still given for the matching characters 这 and 个.
contrasting
train_2208
In the current formulation of TESLA-CELAB, two n-grams X and Y are either synonyms which completely match each other, or are completely unrelated.
the linear-programming based TESLA metric allows fractional similarity measures between 0 (completely unrelated) and 1 (exact synonyms).
contrasting
train_2209
We introduce a function m(X) that assigns a weight in [0, 1] for each n-gram X.
accordingly, our objective function is replaced by: where Z is a normalizing constant so that the metric has a range of [0, 1]. Experiments with different weight functions m(·) on the test data set failed to find a better weight function than the currently implied m(·) = 1.
contrasting
train_2210
This is probably due to the linguistic characteristics of Chinese, where human judges apparently give equal importance to function words and content words.
TESLA-M found discounting function words very effective for English and other European languages such as German.
contrasting
train_2211
(2011) claimed that TESLA tuning performed better than BLEU tuning according to human judgment.
in the WMT 2011 "tunable metrics" shared pilot task, this did not hold (Callison-Burch et al., 2011).
contrasting
train_2212
This metric is similar to Spearman's ρ (Spearman, 1904).
we have found that ρ punishes long-distance reorderings too heavily.
contrasting
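The behavior described in train_2212 can be illustrated by computing Spearman's ρ against the identity ordering (a minimal sketch; the permutations are hypothetical, and ρ is computed with the standard formula 1 - 6 Σd²/(n(n²-1))):

```python
def spearman_rho(perm):
    """Spearman's rho between a permutation and the identity ordering."""
    n = len(perm)
    d2 = sum((perm[i] - i) ** 2 for i in range(n))
    return 1 - 6 * d2 / (n * (n * n - 1))

# One long-distance swap (first and last of 10 elements) is punished far
# more heavily than one adjacent swap:
long_swap = [9, 1, 2, 3, 4, 5, 6, 7, 8, 0]
adjacent = [1, 0, 2, 3, 4, 5, 6, 7, 8, 9]
assert spearman_rho(long_swap) < spearman_rho(adjacent)
```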
train_2213
In statistical learning theory, it is assumed that the training and test datasets are drawn from the same distribution, or in other words, they are from the same domain.
bilingual corpora are only available in very limited domains and building bilingual resources in a new domain is usually very expensive.
contrasting
train_2214
So the list of rules coming from each model for a cell in the CKY chart is normalized before getting mixed with other phrase-table rules.
experiments showed that replacing the scores with the normalized scores hurts the BLEU score radically.
contrasting
train_2215
The feature set of these hypotheses are expanded to include one feature set for each table.
for the corresponding feature values of those phrase-tables that did not have a particular phrase-pair, a default log probability value of 0 is assumed (Bertoldi and Federico, 2009), which is counter-intuitive as it boosts the score of hypotheses with phrase-pairs that do not belong to all of the translation tables.
contrasting
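The counter-intuitive default in train_2215 can be seen numerically (a toy sketch with made-up probabilities):

```python
import math

# Toy illustration (made-up numbers): with two phrase-tables, a pair found
# only in table A gets its genuine log-probability from A, while table B's
# missing entry is filled with the default log-probability 0 (= probability 1).
genuine_log_prob = math.log(0.3)  # a real entry: log p < 0
default_for_missing = 0.0         # default for a pair absent from a table

# The default outscores any genuine entry, boosting hypotheses built from
# phrase-pairs that are absent from some of the tables.
assert default_for_missing > genuine_log_prob
```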
train_2216
Besides, this model is formal syntax-based and does not need to specify the syntactic constituents of subphrases, so it can directly learn synchronous context-free grammars (SCFG) from a parallel text without relying on any linguistic annotations or assumptions, which makes it convenient to use and widely applicable.
it is often desirable to consider syntactic constituents of subphrases, e.g.
contrasting
train_2217
Unlike some of the discriminative baselines, which require expensive operations (footnote 9: it is true that in order to train our system, one must parse large amounts of training data, which can be costly, though it only needs to be done once).
even with observed training trees, the discriminative algorithms must still iteratively perform expensive operations (like parsing) for each sentence, and a new model must be trained for new types of negative data.
contrasting
train_2218
We see that our system prefers the reference much more often than the 5-GRAM language model.
we also note that the ease of the task is correlated with the quality of translations (as measured in BLEU score).
contrasting
train_2219
Moreover, the authors only tested for the specific language pair of English embedded in German texts.
our work considers more than 200 languages, and the portions of embedded text are larger: up to the paragraph level to accommodate the reality of multilingual texts.
contrasting
train_2220
The translation quality of the SMT system is highly related to the coverage of translation models.
no matter how much data is used for training, it is still impossible to completely cover the unlimited input sentences.
contrasting
train_2221
Take the sentence pair in Figure 2 as an example: two initial phrase pairs PP1 = "那 只 蓝色 手提包 ||| 那 个 蓝色 手提包" and PP2 = "对 那 只 蓝色 手提包 有 兴趣 ||| 很 感 兴趣 那 个 蓝色 手提包" are identified, and PP1 is contained by PP2, so we can form the rule 对 X1 有 兴趣 → 很 感 兴趣 X1 (literally "to have interest" → "very feel interest"). The extracted paraphrase rules aim to rewrite the input sentences into an MT-favored form which may lead to a better translation.
it is risky to directly replace the input sentence with a paraphrased sentence, since the errors in automatic paraphrase substitution may jeopardize the translation result seriously.
contrasting
train_2222
Both tried to capture the MT-favored structures from bilingual corpus.
a clear difference is that Sun et al.
contrasting
train_2223
EEM requires 10GB of memory and cannot handle words with more than 200,000 MDSs: for UF we left the SAT solver running for a week without ever terminating.
it takes about 4 hours if we limit the set … (Figure 1: Human classification of (in)consistent words.)
contrasting
train_2224
de Oliveira (2011) modeled the similarity between the model and candidate summaries as a maximum bipartite matching problem, where the two summaries are represented as two sets of nodes and precision and recall are calculated from the matched edges.
none of the AESOP metrics currently apply deep linguistic analysis, which includes discourse analysis.
contrasting
train_2225
observed that coherent texts preferentially follow certain relation patterns.
simply using such patterns to measure the coherence of a text can result in feature sparseness.
contrasting
train_2226
Most previous studies on POS tagging have focused on how to extract more linguistic features or how to adopt supervised or unsupervised approaches based on a single evaluation measure, accuracy.
with a different viewpoint for errors on POS tagging, there is still some room to improve the performance of POS tagging for subsequent NLP tasks, even though the overall accuracy can not be much improved.
contrasting
train_2227
The loss function L c (y i , y j ) is designed to reflect the categories in Table 1.
the structure of POS tags can be represented as a more complex structure.
contrasting
train_2228
The feature set of our model is fundamentally a combination of the features used in the state-of-the-art joint segmentation and POS tagging model (Zhang and Clark, 2010) and dependency parser (Huang and Sagae, 2010), both of which are used as baseline models in our experiment.
we must carefully adjust which features are to be activated and when, and how they are combined with which action labels, depending on the type of the features because we intend to perform three tasks in a single incremental framework.
contrasting
train_2229
Irrespective of the existence of the dictionary features, the joint model SegTagDep largely increases the POS tagging and dependency parsing accuracies (by 0.56-0.63% and 2.34-2.44%); the improvements in parsing accuracies are still significant even compared with SegTag+Dep' (the pipeline model with the look-ahead features).
when the external dictionaries are not used ("wo/dict"), no substantial improvements for segmentation accuracies were observed.
contrasting
train_2230
However, when the external dictionaries are not used ("wo/dict"), no substantial improvements for segmentation accuracies were observed.
when the dictionaries are used ("w/dict"), the segmentation accuracies are now improved over the baseline model SegTag consistently (on every trial).
contrasting
train_2231
The partially joint model SegTag+TagDep is shown to perform reasonably well in dependency parsing: with dictionaries, it achieved the 2.02% improvement over SegTag+Dep, which is only 0.32% lower than SegTagDep.
whereas SegTag+TagDep showed no substantial improvement in tagging accuracies over SegTag (when the dictionaries are used), SegTagDep achieved consistent improvements of 0.46% and 0.58% (without/with dictionaries); these differences can be attributed to the combination of the relieved error propagation and the incorporation of the syntactic dependencies.
contrasting
train_2232
Dynamic programming techniques based on Markov assumptions, such as Viterbi decoding, cannot handle those 'non-local' constraints as discussed above.
it is possible to constrain Viterbi decoding by 'local' constraints, e.g.
contrasting
train_2233
Suppose that, following (Chomsky, 1970), we distinguish major lexical categories (Noun, Verb, Adjective and Preposition) by two binary features. A word occurring in between a preceding word "the" and a following word "of" always bears the feature +N.
consider the annotation guideline of English Treebank (Marcus et al., 1993) instead.
contrasting
train_2234
The Viterbi algorithm is widely used for tagging, and runs in O(nT^2) time when searching in an unconstrained space.
consider searching in a constrained space.
contrasting
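The O(nT^2) cost mentioned in train_2234 comes from maximizing over T predecessor states for each of T current states at each of n positions. A minimal sketch (toy two-state HMM; all probabilities are hypothetical):

```python
import math

def viterbi(obs, states, log_init, log_trans, log_emit):
    """Minimal Viterbi decoder; O(n * T^2) for n observations, T states."""
    V = [{s: log_init[s] + log_emit[s][obs[0]] for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            # Inner max over T predecessors for each of T states: T^2 work.
            prev = max(states, key=lambda p: V[t - 1][p] + log_trans[p][s])
            V[t][s] = V[t - 1][prev] + log_trans[prev][s] + log_emit[s][obs[t]]
            back[t][s] = prev
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path))

# Toy two-state HMM (all numbers made up for illustration):
lg = math.log
states = ["N", "V"]
log_init = {"N": lg(0.6), "V": lg(0.4)}
log_trans = {"N": {"N": lg(0.3), "V": lg(0.7)},
             "V": {"N": lg(0.8), "V": lg(0.2)}}
log_emit = {"N": {"dog": lg(0.9), "runs": lg(0.1)},
            "V": {"dog": lg(0.1), "runs": lg(0.9)}}
assert viterbi(["dog", "runs"], states, log_init, log_trans, log_emit) == ["N", "V"]
```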
train_2235
For example, after decoding with BMES, 4 consecutive characters associated with the tag sequence BMME compose a word.
after decoding with IB, characters associated with BIII may compose a word if the following tag is B or only form part of a word if the following tag is I.
contrasting
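The BMES decoding step described in train_2235 can be sketched as follows (a minimal hypothetical helper: under BMES, a word ends exactly at tag E for a multi-character word or S for a single-character word):

```python
def bmes_to_words(chars, tags):
    """Recover a word segmentation from BMES character tags."""
    words, cur = [], ""
    for ch, tag in zip(chars, tags):
        cur += ch
        if tag in ("E", "S"):  # word boundary: E ends a multi-char word, S is a single
            words.append(cur)
            cur = ""
    return words

# Four consecutive characters tagged B M M E compose one word, as in the text:
assert bmes_to_words("ABCD", "BMME") == ["ABCD"]
```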
train_2236
As shown in Table 8, when tagset IB is used for character tagging, high precision predictions can be made by the deterministic constraints that are learned with respect to this tagset.
when tagset BMES is used, the learned constraints don't always make reliable predictions, and the overall precision is not high enough to constrain a probabilistic model.
contrasting
train_2237
As shown in Table 9, when the beam-width is reduced from 5 to 1, the tagger (beam=1) is 3 times faster but tagging accuracy is badly hurt.
when searching in a constrained space rather than the raw space, the constrained tagger (beam=5) is 10 times as fast as the baseline and the tagging accuracy is even moderately improved, increasing to 97.20%.
contrasting
train_2238
Following Huang (2008), this algorithm traverses a parse forest in a bottom-up manner.
it determines and keeps the best derivation for every grammar rule instance instead of for each node.
contrasting
train_2239
This algorithm is more complex than the approximate decoding algorithm of Huang (2008).
its efficiency heavily depends on the size of the parse forest it has to handle.
contrasting
train_2240
Therefore, the ratio of constituents to distituents is not constant across sentence lengths.
by virtue of the log-linear model, LLCCM assigns positive probability to all spans or contexts without explicit smoothing.
contrasting
train_2241
Performance of the monolingual MT-based method in paraphrase generation is limited by the large-scale paraphrase corpus it relies on, as such a corpus is not readily available.
bilingual parallel data is in abundance and has been used in extracting paraphrases (Bannard and Callison-Burch, 2005; Zhao et al., 2008b; Callison-Burch, 2008; Kok and Brockett, 2010; Kuhn et al., 2010; Ganitkevitch et al., 2011).
contrasting
train_2242
The most related work to ours is the boostVSM introduced by He et al. (2007b), which proposes to weight different term dimensions with corresponding bursty scores.
it is still based on term dimensions and fails to deal with terms with multiple bursts.
contrasting
train_2243
Some methods try to design time-decaying functions (Yang et al., 1998), which decay the similarity as the time gap between two documents increases.
it requires effort for function selection and parameter tuning.
contrasting
train_2244
It is possible that a discussant may be replying to another poster but expressing an attitude towards a third entity or discussant.
as a simplifying assumption, similar to the work of Hassan et al. (2010), we adopt the view that replies containing sentences that are determined to be attitudinal and contain second-person pronouns (you, your, yourself) are assumed to be directed towards the recipients of the replies.
contrasting
train_2245
The letter-frequency distribution of running key ciphertexts is notably flatter than the plaintext distribution, unlike substitution ciphers where the frequency profile remains unchanged, modulo letter substitutions.
the ciphertext letter distribution is not uniform; there are peaks corresponding to letters (like I) that are formed by high-frequency plaintext/key pairs (like E and E).
contrasting
train_2246
While (Ng and Jordan, 2002) showed that NB is better than SVM/logistic regression (LR) with few training cases, we show that MNB is also better with short documents.
to their result that an SVM usually beats NB when it has more than 30-50 training cases, we show that MNB is still better on snippets even with relatively large training sets (9k cases).
contrasting
train_2247
This likely reflects that certain topic keywords are indicative alone.
in both tables 2 and 3, adding bigrams always improved the performance, and often gives better results than previously published.
contrasting
train_2248
However, in both tables 2 and 3, adding bigrams always improved the performance, and often gives better results than previously published.
This presumably reflects that in sentiment classification there are … (footnote 8: adding trigrams hurts slightly).
contrasting
train_2249
Another way is to evaluate only on the 51% of sentences for which our conversion from gold CCG derivations is perfect (CLEAN).
even on this set our conversion introduces errors, as the parser output may contain categories that are harder to convert.
contrasting
train_2250
As shown earlier, CLEAN does not completely factor out the errors introduced by our conversion, as the parser output may be more difficult to convert, and the calculation of PROJ only roughly factors out the errors.
the results do suggest that the performance of the CCG parsers is approaching that of the Petrov parser.
contrasting
train_2251
From a practical point of view, we show that an induced TIG provides modeling performance superior to TSG and comparable with TIG 0 .
we show that the grammars we induce are compact yet rich, in that they succinctly represent complex linguistic structures.
contrasting
train_2252
For example, a sentence either comes from newswire, or weblog, but not both.
this poses several problems.
contrasting
train_2253
"NO" entailment in both directions) are not present in the annotation.
this is the only available dataset suitable to gather insights about the viability of our approach to multi-directional CLTE recognition.
contrasting
train_2254
These approaches align at the syntactic level (using CFGs and dependencies respectively).
to the above approaches, we assume the existence of grammars and use a semantic representation as the appropriate level for cross-lingual processing.
contrasting
train_2255
Our system thus depends on a range of existing technologies.
these technologies are available for a range of languages, and we use them for efficient extension of linguistic resources.
contrasting
train_2256
They use heuristics to align words and translations, while we use a learning based approach to find translations.
to previous work described above, we exploit surface patterns differently as a soft constraint, while requiring minimal human intervention to prepare the training data.
contrasting
train_2257
The definitions are extended by definitions of neighbor senses to discover more overlapping words.
exact word matching is lossy.
contrasting
train_2258
Accordingly we are interested in extracting latent semantics from sense definitions to improve elesk.
the challenge lies in that sense definitions are typically too short/sparse for latent variable models to learn accurate semantics, since these models are designed for long documents.
contrasting
train_2259
Grenager and Manning (2006) use an ordering of the linking of semantic roles and syntactic relations.
as the space of possible linkings is large, language-specific knowledge is used to constrain this space.
contrasting
train_2260
The importance of inference rules to semantic applications has long been recognized and extensive work has been carried out to automatically acquire inference-rule resources.
evaluating such resources has turned out to be a non-trivial task, slowing progress in the field.
contrasting
train_2261
A major limitation of the list compiled by Shetty and Adibi (2004) is that it only covers those "core" employees for whom the complete email inboxes are available in the Enron dataset.
it is also interesting to determine whether we can predict the hierarchy of other employees, for whom we only have an incomplete set of emails (those that they sent to or received from the core employees).
contrasting
train_2262
Natural language questions have become popular in web search.
various questions can be formulated to convey the same information need, which poses a great challenge to search systems.
contrasting
train_2263
Indeed, Wong and Dras (2011a) claim that Information Gain is a better criterion.
this metric requires a probabilistic formulation of the grammar, which 2DOP does not supply.
contrasting
train_2264
As the number of learners of English is constantly growing, automatic error correction of ESL learners' writing is an increasingly active area of research.
most research has mainly focused on errors concerning articles and prepositions even though tense/aspect errors are also important.
contrasting
train_2265
Of these our system detected 61 and successfully corrected 52 instances.
of the second most frequent error type (using simple past instead of simple present), with 94 instances in the corpus, our system only detected 9 instances.
contrasting
train_2266
Blogs and forums are widely adopted by online communities to debate about various issues.
a user that wants to cut in on a debate may experience some difficulties in extracting the current accepted positions, and can be discouraged from interacting through these applications.
contrasting
train_2267
In such applications, users are asked to provide their own opinions about selected issues.
it may happen that the debates become rather complicated, with several arguments supporting and contradicting each other.
contrasting
train_2268
Their approach can thus be applied to comparably small data sets.
they are restricted to a specific type of relations, whereas here the entire bandwidth of discourse relations that are explicitly realized in a language is covered.
contrasting
train_2269
Similar observations can be made with respect to Chambers and Jurafsky (2009) and Kasch and Oates (2010), who also study a single discourse relation (narration), and are thus more limited in scope than the approach described here.
as their approach extends beyond pairs of events to complex event chains, it seems that both approaches provide complementary types of information and their results could also be combined in a fruitful way to achieve a more detailed assessment of discourse relations.
contrasting
train_2270
Between these fine and coarse-grained approaches, event identification requires grouping references to the same event.
strict coreference is hampered by the complexity of event semantics: poison, murder and die may indicate the same effective event.
contrasting
train_2271
or "Who are the teammates of Lionel Messi at FC Barcelona?".
factual knowledge is highly ephemeral: Royals get married and divorced, politicians hold positions only for a limited time and soccer players transfer from one club to another.
contrasting
train_2272
The reason is that the types of worksForClub distinguish the patterns well from other relations.
isMarriedTo's patterns interfere with other person-person relations, making constraints a decisive asset.
contrasting
train_2273
That is because the joint model's ILP decides with binary variables on which patterns to accept.
label propagation addresses the inherent uncertainty by providing label assignments with confidence numbers.
contrasting
train_2274
This is the setting in which the usual pattern-based approaches without a cleaning stage operate.
for the standard setting (coinciding with Table 1's left column), stage 3 yields lower precision but higher recall.
contrasting
train_2275
Overall, these tests suggest that in general, the 'use-everything' approach is better for accurate classification of Hittite tablet fragments with larger CTH texts.
in some cases, when the fragments in question have a large number of Sumerograms and Akkadograms, using them exclusively may be the right choice.
contrasting
train_2276
(2003) -yield good AA performance.
LDA does not model authors explicitly, and we are not aware of any previous studies that apply author-aware topic models to traditional AA.
contrasting
train_2277
Existing methods were designed for data from single domain, assuming that either view alone is sufficient to predict the target class accurately.
this view-consistency assumption is largely violated in the setting of domain adaptation where training and test data are drawn from different distributions.
contrasting
train_2278
Topic modeling with a tree-based prior has been used for a variety of applications because it can encode correlations between words that traditional topic modeling cannot.
its expressive power comes at the cost of more complicated inference.
contrasting
train_2279
This is not a problem for SparseLDA because s is shared across all tokens.
we can achieve computational gains with an upper bound on s. A sampling algorithm can take advantage of this by not explicitly calculating s; instead, we use the bound as a proxy, and only compute the exact s if we hit the bucket s (Algorithm 1).
contrasting
train_2280
These represent reasonable disambiguations.
to previous approaches, inference speeds up as topics become more semantically coherent (Boyd-Graber et al., 2007).
contrasting
train_2281
More than 70% of the unique tokens appear less than 5 times in WCorpus.
over half of the tokens appear more than or equal to 5 times in the CCorpus.
contrasting
train_2282
As a character can carry more meanings than a word in Chinese, it seems that a character can be wrongly aligned to many English words by the aligner.
we found this can be avoided to a great extent by the basic features (co-occurrence and distortion) used by many alignment models.
contrasting
train_2283
Statistical machine translation (SMT) systems require parallel corpora of sentences and their translations, called bitexts, which are often not sufficiently large.
for many closely-related languages, SMT can be carried out even with small bitexts by exploring relations below the word level.
contrasting
train_2284
Certainly, translation cannot be adequately modeled as simple transliteration, even for closely-related languages.
the strength of phrase-based SMT (Koehn et al., 2003) is that it can support rather large sequences (phrases) that capture translations of entire chunks.
contrasting
train_2285
In this way, VB controls the overfitting that would otherwise occur with rare words.
higher values of α can be chosen if smoothing is desired, for instance in the case of the alignment probabilities, which state how likely a word in position i of the English sentence is to align to a word in position j of the French sentence.
contrasting
train_2286
The language model was a trigram model with modified Kneser-Ney smoothing (Kneser and Ney, 1995; Chen and Goodman, 1998), trained on the target side. (Table 2: Forest-to-string translation outperforms tree-to-string translation according to Bleu, but decreases parsing accuracy according to labeled-bracket F1.)
when we train to maximize labeled-bracket F1, forest-to-string translation yields better parses than both tree-to-string translation and the original parser.
contrasting
train_2287
The MER trainer requires that each list contain enough unique translations (when maximizing Bleu) or source trees (when maximizing labeled-bracket F1).
because one source tree may lead to many translation derivations, the n-best list may contain only a few unique source trees, or in the extreme case, the derivations may all have the same source tree.
contrasting
train_2288
This might result from the phenomenon alluded to in Section 4, where additional data sometimes degrades performance for unsupervised analyzers.
the Lee segmenter's gain on Levantine (18%) is higher than its gain on Small MSA (13%), even though Levantine has more data (1.5M vs. 1.3M words).
contrasting
train_2289
For example, Dahlmeier and Ng (2011) proposed a method that combines a native corpus and a GE tagged learner corpus and it outperformed models trained with either a native or GE tagged learner corpus alone.
methods which train a GEC model from various GE tagged corpora have received less focus.
contrasting
train_2290
Convolution kernels support the modeling of complex syntactic information in machinelearning tasks.
such models are highly sensitive to the type and size of syntactic structure used.
contrasting
train_2291
Recently, researchers started to exploit content information in data-driven diffusion models Petrovic et al., 2011;Zhu et al., 2011).
most of the data-driven approaches assume that in order to train a model and predict the future diffusion of a topic, it is required to obtain historical records about how this topic has propagated in a social network (Petrovic et al., 2011;Zhu et al., 2011).
contrasting
train_2292
Consequently, they need to rely on a model-driven approach instead of a datadriven approach.
our work focuses on the prediction of explicit diffusion behaviors.
contrasting
train_2293
Labels A0, A1, and A2 are complement case roles and over 85% of them survive with their predicates.
for modifier arguments (AM-X), survival ratios drop below 65%.
contrasting
train_2294
We found that summarization system ranking, based on scores for multiple topics, was surprisingly stable and didn't change significantly when several topics were removed from consideration.
on a summary level, removing topics scored by the most inconsistent assessors helped ROUGE-2 increase its correlation with human metrics.
contrasting
train_2295
The work on morphologically rich languages suggests that using comprehensive morphological dictionaries is necessary for achieving good results (Hajič, 2000;Erjavec and Džeroski, 2004).
such dictionaries are constructed manually and they cannot be expected to be developed quickly for many languages.
contrasting
train_2296
Position of error within a word (Figure 5): In en keystroke, Deletion errors at the word-initial position are the most common, while Insertion and Substitution errors tend to occur both at the beginning and the end of a word.
in en common, all error types are more prone to occur word-medially.
contrasting
train_2297
Our overall results are comparable to what Huang and Zhao (2007) report.
the consistency falls quickly for longer words: on unigrams, f-scores range from 0.81 to 0.90 (the same as the overall results).
contrasting
train_2298
One challenge is that MST parsing itself is not incremental, making it expensive to identify loops during hypothesis expansion.
shift-reduce parsing is naturally incremental and can be seamlessly integrated into left-to-right phrase-based decoding.
contrasting
train_2299
Therefore, the possible improvements resulting from those pipeline approaches are quite limited.
instead of directly merging TM matched phrases into the source sentence, some approaches (Biçici and Dymetman, 2008; Simard and Isabelle, 2009) simply add the longest matched pairs into the SMT phrase table, and then associate them with a fixed large probability value to favor the corresponding TM target phrase at SMT decoding.
contrasting