Columns:
  id          string (length 7-12)
  sentence1   string (length 6-1.27k)
  sentence2   string (length 6-926)
  label       string (4 classes)
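Each record below occupies four consecutive lines in the order id, sentence1, sentence2, label. As an illustration only, here is a minimal sketch of grouping such a flat listing back into records and filtering by label; the file name ("pairs.txt") and the assumption that the listing is stored as plain text with the column header stripped are hypothetical, not given by this preview.

```python
# Minimal sketch (assumptions: the preview is saved as plain text in "pairs.txt",
# with the column header removed and one field per line in the order
# id, sentence1, sentence2, label; the file name and layout are hypothetical).
from dataclasses import dataclass
from typing import Iterator


@dataclass
class Pair:
    id: str         # e.g. "train_14100" (length 7-12)
    sentence1: str  # first sentence of the pair
    sentence2: str  # second sentence of the pair
    label: str      # one of 4 classes, e.g. "contrasting"


def read_pairs(path: str) -> Iterator[Pair]:
    """Group the flat listing into records of four consecutive non-empty lines."""
    with open(path, encoding="utf-8") as f:
        lines = [line.rstrip("\n") for line in f if line.strip()]
    for i in range(0, len(lines) - 3, 4):
        yield Pair(*lines[i:i + 4])


if __name__ == "__main__":
    contrasting = [p for p in read_pairs("pairs.txt") if p.label == "contrasting"]
    print(f"{len(contrasting)} contrasting pairs")
```

Any real loader for the underlying dataset would read the released format directly; this sketch only mirrors the layout of the preview shown here.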
train_14100
(2014), we obtained the syntactic category of the connectives from the list provided in Knott (1996).
different from Lin et al.
contrasting
train_14101
(Rieser and Schlangen, 2011)) means that repair detection should operate without unnecessary processing overhead, and function efficiently within an incremental framework.
such left-to-right operability on its own is not sufficient: in line with the principle of strong incremental interpretation (Milward, 1991), a repair detector should give the best results possible as early as possible.
contrasting
train_14102
This example illustrates that constrained decoding reinforces the errors from the baseline.
the training materials for partial-label learning are purely the encoded knowledge, which is not impacted by the baseline model error.
contrasting
train_14103
In this research we employ a sausage constraint to encode the knowledge for Chinese word segmentation.
a sausage constraint does not reflect the legal label sequence.
contrasting
train_14104
This approach has the advantage of being able to use a partially annotated corpus.
if performance of lexical normalization is crucial, we have to use the standard perceptron algorithm.
contrasting
train_14105
For the first utterance, if punctuation prediction is performed first, it might break the utterance both before and after "uh" so that the second-stage disfluency prediction will treat the whole utterance as three sentences, and thus may not be able to detect any disfluency because each one of the three sentences is legitimate on its own.
for the second utterance, if disfluency prediction is performed first, it might mark "I am sorry" as disfluent in the first place and remove it before passing into the second-stage punctuation prediction.
contrasting
train_14106
(statistical significance at p=0.01).
hard cascade has a higher upper-bound than soft cascade.
contrasting
train_14107
And we also reach the same conclusion, namely that mixed-label LCRF performs better than isolated prediction.
for the comparison between the 2-layer FCRF and the cross-product LCRF, although the 2-layer FCRF performs better than the cross-product LCRF on disfluency prediction, it does worse on punctuation prediction.
contrasting
train_14108
From the above comparisons, we can see that increasing the label granularity can greatly improve the accuracy of a model.
this may also increase the model complexity dramatically, especially when higher clique order is used.
contrasting
train_14109
Moreover, the 2-layer FCRF and the cross-product LCRF perform slightly better than the mix-label LCRF and the soft-cascade approach, suggesting that modelling at a finer label granularity is potentially beneficial.
the soft cascade approach is more efficient than the joint approach when a higher clique order is used.
contrasting
train_14110
In this work we chose one particular set of features U.
given the large body of research into NLP feature engineering (Jurafsky and Martin, 2009), this class is extensible beyond just this set, which makes it suitable for many other NLP applications.
contrasting
train_14111
Conventional word alignment methods allow discontinuous alignment, meaning that a source (or target) word links to several target (or source) words whose positions are discontinuous.
we cannot extract phrase pairs from such alignments, as they break the alignment consistency constraint.
contrasting
train_14112
They need manually aligned bilingual texts to train the model.
the manually annotated data is too expensive to be available for all languages.
contrasting
train_14113
For example, the Chinese word "shi_2" is aligned to the English words "was_4" and "that_10".
these two English words are discontinuous, and we cannot extract the phrase pair "(shi, was)".
contrasting
train_14114
From the initial alignment, we can extract a hierarchical phrase pair "(dang X_1 shi, when X_1)" from the discontinuous alignment of the English word "when".
the hierarchical phrase pair cannot be extracted from our refined alignment, because our method discards the link between the Chinese word "dang" and the English word "when".
contrasting
train_14115
(2014) also enforce agreement during decoding.
these agreement models do not take into account the difference in language pairs, which is crucial for linguistically different language pairs, such as Japanese and English: although content words may be aligned with each other by introducing some agreement constraints, function words are difficult to align.
contrasting
train_14116
Using these limited resources, it has been shown that taking the translation direction into account when training a statistical machine translation system can improve translation quality (Lembersky et al., 2013).
improving statistical machine translation using translation direction information has been limited by several factors.
contrasting
train_14117
Recently, syntactic information has helped significantly to improve statistical machine translation.
the use of syntactic information may have a negative impact on the speed of translation because of the large number of rules, especially when syntax labels are projected from a parser in syntax-augmented machine translation.
contrasting
train_14118
At first gloss, it might seem reasonable to perform significance testing in the following manner when an increase in correlation with human assessment is observed: apply a significance test separately to the correlation of each metric with human judgment, with the hope that the newly proposed metric will achieve a significant correlation where the baseline metric does not.
besides the fact that the correlation between almost any document-level metric and human judgment will generally be significantly greater than zero, the logic here is flawed: the fact that one correlation is significantly higher than zero (r(M_new, H)) and that of another is not, does not necessarily mean that the difference between the two correlations is significant.
contrasting
train_14119
A source word is marked as beginning (ending) boundary if it is the first (last) word of a translation span.
a source span whose first and last words are both boundaries is not always a translation span.
contrasting
train_14120
Vaswani and colleagues (2013) propose a method for reducing the training cost of CSLM and apply it to SMT decoder.
they do not show their improvement for decoding speed, and their method is still slower than the n-gram LM.
contrasting
train_14121
Our method performs best when all 3 linguistic features described above are taken into account by the SMT system.
we also experimented with different combinations of those features in order to get some insight of the way each feature influences the translation quality.
contrasting
train_14122
Ngrams, the core of BLEU, are sparse at the sentence level, and a mismatch for longer ngrams implies that BLEU falls back on shorter ngrams.
METEOR has a trainable model and incorporates a small, yet wider set of features that are less sparse than ngrams.
contrasting
train_14123
Our use of morphological and lexical features overlaps with the AMEANA framework.
we extend our partial matching to a supervised tuning framework for estimating the value of partial credits.
contrasting
train_14124
However, the sentence is the 4th ranked by annotators.
the output of Sys 3 (ranked 1st by annotators) has only one exact match, but several partial matches when morphological and lexical information are taken into consideration.
contrasting
train_14125
Many statistical models for natural language processing exist, including context-based neural networks that (1) model the previously seen context as a latent feature vector, (2) integrate successive words into the context using some learned representation (embedding), and (3) compute output probabilities for incoming words given the context.
brain imaging studies have suggested that during reading, the brain (a) continuously builds a context from the successive words and every time it encounters a word it (b) fetches its properties from memory and (c) integrates it with the previous context with a degree of effort that is inversely proportional to how probable the word is.
contrasting
train_14126
Therefore it seems reasonable that the hidden layer is not only related to the activity when the word is on the screen, but also related to the activity before the word is presented, which is the time when the brain is integrating the previous words to build that context.
as the word i and subsequent words are integrated, the context starts diverging from the context of word i (computed before seeing word i).
contrasting
train_14127
However, to date, these previous approaches to multi-modal concept learning focus on concrete words such as cat or dog, rather than abstract concepts, such as curiosity or loyalty.
differences between abstract and concrete processing and representation (Paivio, 1991;Hill et al., 2013;Kiela et al., 2014) suggest that conclusions about concrete concept learning may not necessarily hold in the general case.
contrasting
train_14128
Our model is also marginally inferior to alternative approaches in learning representations of abstract nouns.
in this case, no method improves on the linguistic-only baseline.
contrasting
train_14129
When considering the node "is" in the word sequence, it is likely to be corrected into "are" because it appears directly after the plural noun "parents".
by the definition above, the subsequence corresponding to the node "damaged" is "car is damaged by ".
contrasting
train_14130
Early grammatical error correction systems use the knowledge engineering approach (Murata and Nagao, 1994;Bond and Ikehara, 1996;Heine, 1998).
manually designed rules usually have exceptions.
contrasting
train_14131
A number of recent works have applied modern machine learning techniques to SCF induction, including point-wise co-occurrence of arguments (Debowski, 2009), a Bayesian network model (Lippincott et al., 2012), multi-way tensor factorization (Van de Cruys et al., 2012) and Determinantal Point Processes (DPPs) -based clustering (Reichart and Korhonen, 2013).
all of these systems induce type-level SCF lexicons and, except from the system of (Lippincott et al., 2012) that is not capable of learning traditional SCFs, they all rely on supervised parsers.
contrasting
train_14132
The prior for all θ, p(θ), is the product of the Dirichlet distributions over all non-terminals A ∈ N. Since the Dirichlet distribution is conjugate to the Multinomial distribution, which we use to model the likelihood of trees, the conditional posterior of θ_A is still a Dirichlet distribution, with updated parameter f_r(t) + α_r for each rule r ∈ R. Gibbs sampler: the parameters of the PCFG model can be learned from an annotated corpus by simply counting rules.
parsing cannot be done directly with standard CKY as with standard PCFGs, so we use the Gibbs sampling algorithm presented in Johnson et al.
contrasting
train_14133
The edges that are connected to the OOV neighbor "w" have smaller edge weights such as 3, 5, and 26.
the edges that are connected to common words have higher weights.
contrasting
train_14134
The external score favors the well known interpretations of common OOV words.
unlike the dictionary based methodologies, our system does not return the corresponding unabbreviated word in the slang dictionary or in the transliteration table directly.
contrasting
train_14135
Using only lexSimScore the system achieved an F-measure of 28.24% on the LexNorm1.1 dataset and 38.70% on the Trigram dataset, which shows that lexical similarity alone is not enough for a good normalization system.
the externalScore, which is the layer that is more aware of the Internet jargon, along with some social-text-specific rule-based transliterations, performs better than expected on both datasets.
contrasting
train_14136
For FH, not surprisingly, we could find matches for 99 of the 100 attributes.
for LT, only 31 of the 100 attributes could be found, even under our permissive setting.
contrasting
train_14137
Han14, Roller12 and WB11 follow this strategy, using KL divergence in preference to Naive Bayes.
we find that Naive Bayes in conjunction with Dirichlet smoothing (Smucker and Allan, 2006) works at least as well when appropriately tuned.
contrasting
train_14138
Currently, an SRL system works as follows: first identify argument candidates and then perform classification for each argument candidate.
this process only focuses on one independent predicate without considering the internal relations of multiple predicates in a sentence.
contrasting
train_14139
A noun phrase may be labeled as A0 for a predicate and at the same time, it can be labeled as A1 for another predicate.
there are few cases that a noun phrase is labeled as A0 for a predicate and as AM-ADV for another predicate at the same time.
contrasting
train_14140
explored joint syntactic and semantic parsing of Chinese to further improve the performance of both syntactic parsing and SRL.
to the best of our knowledge, in the literatures, there is no work related to multipredicate semantic role labeling.
contrasting
train_14141
WordNet frames potentially allow a shallow type pruning based on the semantics provided for the clause constituents.
we could solely distinguish people ("somebody") from things ("something"), which is too crude to obtain substantial pruning effects.
contrasting
train_14142
With enough training data, one could hope to learn the details of the interactions of various resolutions.
the expense of producing or obtaining supervised training data at multiple resolutions is prohibitive.
contrasting
train_14143
We want a tight correspondence because loose, overlapping alignments are not semantically satisfying.
we do not want to under associate: human language makes reference at a variety of levels (the word level, the phrase level, the utterance level, and beyond).
contrasting
train_14144
To optimize Equation 2 it is not practical to search the space of possible S, E combinations (this space is combinatorially large).
we can optimize the factored form using dynamic programming.
contrasting
train_14145
Commentators often describe facts about players or the weather or previous games which have no extension in the current game.
our system cannot distinguish such language from the language referring to this game.
contrasting
train_14146
The Baseline_lemma assigns the domain by taking into account every WN Domain associated to each lemma.
the Baseline_wsd selects only the WN Domain of sense-disambiguated lemmas.
contrasting
train_14147
The higher the threshold, the more high-frequency verbs will prevail in the thesauri, for which the WordNet path similarities are higher.
when adopting a relevance filter of keeping the p most relevant contexts for each verb (Figure 1 right), we obtain similar results, but more stable thesauri.
contrasting
train_14148
The use of both filtering methods results in thesauri in which the neighbors of target verbs are closer in WordNet and get better scores in TOEFL-like tests.
the fact that filtering contexts with frequency under th removes verbs in the final thesaurus is a drawback, as highlighted in the extrinsic evaluation on the WBST task.
contrasting
train_14149
We therefore follow syntax-based SMT custom and use string/string alignment models in aligning our graph/string pairs.
while it is straightforward to convert syntax trees into string data (by taking yields), it is not obvious how to do this for unordered AMR graph elements.
contrasting
train_14150
Thus a common strategy in SRL systems, formulated by Xue and Palmer (2004), is to look for arguments in the ancestors of the predicate and their direct descendants.
in Czech and Japanese data we observe a large portion of paths with two or more descending arcs, which makes it difficult to characterize the syntactic scope in which arguments are found.
contrasting
train_14151
We observe that arc-factored models are in fact more restricted, with a drop in accuracy with respect to unrestricted models.
we also observe that our method largely improves the robustness of the arc-factored method when training with a degree of syntactic variability.
contrasting
train_14152
This method has to be evaluated with the Kullback-Leibler divergence metric for each topic space.
this process would be time consuming for thousands of representations of a dialogue.
contrasting
train_14153
Deceptive reviews detection has attracted significant attention from both business and research communities.
due to the difficulty of human labeling needed for supervised learning, the problem remains highly challenging.
contrasting
train_14154
Shell-nounhood is a well-established concept in linguistics (Vendler, 1968;Ivanic, 1991;Asher, 1993;Francis, 1994;Schmid, 2000, inter alia).
understanding of shell nouns from a computational linguistics perspective is only in the preliminary stage.
contrasting
train_14155
This method applies to a broad range of math problems, including multiplication, division, and simultaneous equations, while ARIS only handles arithmetic problems (addition and subtraction).
our empirical results show that for the problems it handles, ARIS is much more robust to diversity in the problem types between the training and test data.
contrasting
train_14156
For instance, the path from cat to animal traverses six intermediate nodes, naïvely yielding a prohibitive search depth of 6.
many of these transitions have low weight: for instance, the weight f_↑ for cat → feline is only 0.37.
contrasting
train_14157
At this point, we have a confidence that the given path has not violated strict Natural Logic.
to translate this value into a probability we need to incorporate whether the inference path is confidently valid, or confidently invalid.
contrasting
train_14158
Ravi and Knight (2011) apply Bayesian learning to reduce the space complexity.
Bayesian decipherment is still very slow with Gibbs sampling (Geman and Geman, 1987).
contrasting
train_14159
Previous work has tried to make decipherment scalable (Ravi and Knight, 2011;Dou and Knight, 2012;Nuhn et al., 2013;Ravi, 2013).
all of them are designed for decipherment with either Bayesian inference or beam search.
contrasting
train_14160
The first step is straightforward to implement.
it is not trivial to implement the second step.
contrasting
train_14161
It is common to use these domain-focused models as additional features besides the domain-confused features.
here we are more interested in replacing the domain-confused features rather than complementing them.
contrasting
train_14162
(2009) who estimate the former using meta-information over documents as main features.
our work overcomes the mutual dependence of sentence and phrase estimates on one another by training both models in tandem.
contrasting
train_14163
If the model score of each translation is taken to be the sum of rule scores independently given to each rule, the search for the optimal translation is easy with some classic dynamic programming techniques.
if the model score is going to take into account information such as the language model score of each sentence, it cannot be expressed in such a way.
contrasting
train_14164
The alignment (and the linguistic structure of the phrase in the case of Syntax-Based Machine Translation) is then used to produce the target-side rule.
it is often the case that it is difficult to fully specify a rule from an example.
contrasting
train_14165
It is interesting to note that, on a "fair-ground" comparison, that is, if our decoder does not have the benefit of a more compact lattice-rule representation, it actually performs quite a bit worse, as we can see by comparing with the third column of Table 1 (at least in terms of decoding time and memory usage, while it would still have a very slight edge in terms of model score with the selected settings).
the K-decoder is a rather strong baseline, shown to perform several times faster than a previous state-of-the-art implementation in (Heafield et al., 2013).
contrasting
train_14166
The merit of their approach is that they can apply minimization globally, allowing for more possibilities for vertex merging.
for large grammars, the "top-level lattice" will be huge, creating the need to prune vertices during the construction.
contrasting
train_14167
All of the previously cited approaches either use uniform weights for combination, or select weights based on collection-level information.
as stated previously, numerous studies suggest that certain methods work better on certain queries, collections, languages.
contrasting
train_14168
A recent study extends this idea to the cross-lingual case, by learning how to weight each translated word for English-Persian CLIR (Azarbonyad et al., 2013).
we extract translated word weights from diverse and sophisticated translation methods, then learn how to weight each translated structured query. We call this "learning-to-translate" (LTT), which can be formulated as a simpler learning problem.
contrasting
train_14169
Instead of weighting, the translations with highest classifier scores were concatenated, yielding statistically significant improvements over using the single-best translation method.
the translation methods explored in this paper are all based on one-best MT systems, making it difficult to draw strong conclusions.
contrasting
train_14170
As it can be observed, in both cases the runtime is linear in the number of components.
the SVD computation in the BoS setting is one order of magnitude faster than time performance in the BoW setting.
contrasting
train_14171
These KBs, such as FREEBASE (Bollacker et al., 2008) encompass huge ever growing amounts of information and ease open QA by organizing a great variety of answers in a structured format.
the scale and the difficulty for machines to interpret natural language still makes this task a challenging problem.
contrasting
train_14172
Replacing C_2 by C_1 induces a large drop in performance because many questions do not have answers that are directly connected to their included entity (not in C_1).
using all 2-hop connections as a candidate set is also detrimental, because the larger number of candidates confuses (and considerably slows down) our ranking-based inference.
contrasting
train_14173
We chose a supervised machine learning approach in order to achieve maximum precision.
this problem can also be approached in an unsupervised setting, similar to the method Whitelaw et al.
contrasting
train_14174
The model we have just described considers each sentence in a quiz bowl question independently.
previously-heard sentences within the same question contain useful information that we do not want our model to ignore.
contrasting
train_14175
The bag of words model guesses Henry Clay, who was also a Secretary of State in the nineteenth century and helped John Quincy Adams get elected to the presidency in a "corrupt bargain".
the model can reason that while Henry Clay was active at the same time and involved in the same political problems of the era, he did not represent the Amistad slaves, nor did he negotiate the Treaty of Ghent.
contrasting
train_14176
More recent factoid qa systems incorporate the web and social media into their retrieval systems (Bian et al., 2008).
to these approaches, we place the burden of learning answer types and patterns on the model.
contrasting
train_14177
The syntax-based (grammar formalism) approaches such as Combinatory Categorial Grammar (CCG) may experience errors if a question has grammatical errors.
our bag-of-words model-based approach can handle any question as long as the question contains keywords that can help in understanding it.
contrasting
train_14178
The major challenge of this task is to resolve the ambiguity of phrases, and recent work makes use of various kinds of information found in the document to tackle the challenge.
to this body of work, here we focus on the special case of Wikifying Wikipedia articles, instead of general documents.
contrasting
train_14179
Generating gold standard summaries is expensive and time-consuming, a problem that persists with cross-language query biased summarization because those summaries must be query biased as well as in a different language from the source documents.
extrinsic metrics measure the quality of summaries at the system level, by looking at overall system performance on downstream tasks (Jing et al., 1998; Tombros and Sanderson, 1998).
contrasting
train_14180
We have approached cross-language query biased summarization as a stand-alone problem, treating the CLIR system and document retrieval as a black box.
summaries need to preserve query-salience: summaries should not make it more difficult to positively identify relevant documents.
contrasting
train_14181
Our baseline results table (Table 1) shows REAPER outperforming ASRL in a statistically significant manner on all three ROUGE metrics in question.
we can see from the absolute differences in score that very few additional important words were extracted (ROUGE-1); however, REAPER showed a significant improvement in the structuring and ordering of those words (ROUGE-2 and ROUGE-L).
contrasting
train_14182
Our result (R-2) is statistically significantly (p < 0.05) better than the TAC11 Best system, but not statistically significantly (p > 0.05) different from (Li et al., 2013a).
for the grammar and coherence score, our results are statistically significantly (p < 0.05) different from (Li et al., 2013a).
contrasting
train_14183
In our preliminary experiments, we used the default similarity threshold 0.7, which was found empirically by the MEAD developers for English.
it produced poor results on the Turkish data set.
contrasting
train_14184
Our results show that simple fixed-length truncation methods with high limits (such as taking the first 10 letters) improves summarization scores.
to our expectation, using morphological analysis does not enhance Turkish MDS, possibly due to the homogeneousness of the documents in a cluster to be summarized.
contrasting
train_14185
KS14 performs worse with tensor-based methods here than in the other vector spaces.
GS11 and NWE, except copy subject for both of them and Frobenius multiplication for NWE, improved over their verb-only baselines.
contrasting
train_14186
On large-scale tasks, neural vectors are more successful than the co-occurrence based alternatives.
this study does not reveal whether this is because of their neural nature, or just because they are trained on a larger amount of data.
contrasting
train_14187
The choice of compositional operator (tensorbased or a simple point-wise operation) depends strongly on the task and dataset: tensor-based composition performed best with the verb disambiguation task, where the verb senses depend strongly on the arguments of the verb.
it seems to depend less on the nature of the vectors itself: in the disambiguation task, tensor-based composition proved best for both co-occurrencebased and neural vectors; in the sentence similarity task, where point-wise operators proved best, this was again true across vector spaces.
contrasting
train_14188
Experimental results also verify the success of the cube activation function empirically (see more comparisons in Section 4).
the expressive power of this activation function still remains to be investigated theoretically.
contrasting
train_14189
Figure 6 gives the visualization of three sampled features, and it exhibits many interesting phenomena: different features have varied distributions of the weights.
most of the discriminative weights come from W_1^t (the middle zone in Figure 6), and this further justifies the importance of POS tags in dependency parsing.
contrasting
train_14190
Motivated by this observation, we hypothesize that the reasons mentioned in the preceding post could be useful for predicting the reasons in the current post.
none of the models we have presented so far makes use of the reasons predicted for the preceding post.
contrasting
train_14191
Specifically, in 51-54% of the erroneous cases, a reason sentence is misclassified as NONE.
23-30% of the cases are concerned with assigning a reason label to a NONE sentence.
contrasting
train_14192
Yeh and Chen (2007) hand-engineered a set of rules for ZP resolution based on Centering Theory (Grosz et al., 1995).
virtually all recent approaches to this task are based on supervised learning.
contrasting
train_14193
We attribute this to the fact that almost all candidate antecedents are singular.
when we ablate any of the remaining three attributes, performance drops significantly by 2.3−3.0% in overall F-score.
contrasting
train_14194
We will compare against our implementation of F&S, adapted to English.
unlike F&S or other previous approaches to sentence fusion, the sentence enhancement algorithm may also avail itself of the dependency parses of all of the other sentences in the source text, which expands the range of possible sentences that may be produced.
contrasting
train_14195
We find that for a good template like "holiday in [country]", we can often find at least one cluster (one of the country clusters in this example) which has hypernym h and also contains many elements in V .
for invalid templates like "holiday of [book]", every cluster having hypernym h (="book" here) only contains a few elements in V .
contrasting
train_14196
For example, given two triples R(Al-Qaeda, attack, American) and R(Terrorist group, attack, American), a taxonomic relation Terrorist group ≫ Al-Qaeda can be induced.
it is not always guaranteed to induce a taxonomic relation from such a pair of triples, for example from R(animal, eat, meat) and R(animal, eat, grass).
contrasting
train_14197
Referential taxonomy structures such as WordNet or OpenCyc are widely used in semantic analytics applications.
their coverage is limited to common well-known areas, and many specific domains like Terrorism and AI are not well covered in those structures.
contrasting
train_14198
Min and Grishman categorise the slot fills found by human annotators but not found in the aggregated output of all systems.
this approach only allows them to hypothesise the likely source of recall loss.
contrasting
train_14199
Using COREF NNP as the sentence filter loses 2% recall, to an upper bound of 78%, for a 12% reduction in the search space.
using a full coreference system generates many more candidates than using simple NNP coreference.
contrasting