Dataset schema:
id: string (length 7 to 12)
sentence1: string (length 6 to 1.27k)
sentence2: string (length 6 to 926)
label: string (4 classes)
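A minimal sketch of loading and inspecting rows with this schema using the Hugging Face `datasets` library; "DATASET_NAME" is a hypothetical placeholder, not the actual repository id of this dataset.

```python
# Minimal sketch, assuming this dataset is hosted on the Hugging Face Hub.
# "DATASET_NAME" is a hypothetical placeholder for the real repository id.
from collections import Counter

from datasets import load_dataset

dataset = load_dataset("DATASET_NAME", split="train")

# Each row pairs two sentences with a discourse-relation label
# (e.g., "contrasting"), keyed by a string id such as "train_11000".
row = dataset[0]
print(row["id"], row["label"])
print("sentence1:", row["sentence1"])
print("sentence2:", row["sentence2"])

# Label distribution over the split (the schema lists 4 classes).
print(Counter(dataset["label"]))
```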
train_11000
To answer this question, we apply both M1 and M2 to generate additional training instances, using a random selection of same-stance authors in place of M2's k-nearest neighbor method.
neither method yields an improvement in performance over the method on which it is based.
contrasting
train_11001
In Post 1, the author is anti-abortion, whereas in Post 4, the author is pro-abortion.
the first sentence in Post 1 gives a misleading clue about the author's stance, and so do the first two sentences in Post 4.
contrasting
train_11002
In this post, the author supports the legalization of marijuana.
the only useful hints about her stance are "marijuana should at least be decriminalized" and "weed can't kill you".
contrasting
train_11003
In Pn, a pro-Obama author explains why she thinks abortion is not wrong.
without the context from P1 that Obama is pro-abortion, it is not easy for a machine to classify Pn correctly.
contrasting
train_11004
Such databases are accurate, but as they broaden their scope they become increasingly incomplete.
to extending such a database, we present a system to query whether it contains an arbitrary fact.
contrasting
train_11005
For metrics which require a probability distribution, we pass the vectors through a sigmoid to obtain if we know both that (Greeks, are, mortal) and (men, are, mortal).
since the number of similar facts is likely to be small relative to the number of candidate facts considered, this approach has the risk of losing the signal in the noise of uninformative candidates.
contrasting
train_11006
Coreference resolution systems can benefit greatly from inclusion of global context, and a number of recent approaches have demonstrated improvements when precomputing an alignment to external knowledge sources.
since alignment itself is a challenging task and is often noisy, existing systems either align conservatively, resulting in very few links, or combine the attributes of multiple candidates, leading to a conflation of entities.
contrasting
train_11007
The presence of spurious ambiguity causes this search space to be a directed graph rather than a tree, which considerably complicates the search, so spurious ambiguity was avoided whenever possible.
we claim that non-monotonicity and spurious ambiguity are not disadvantages in a modern statistical parsing system such as ours.
contrasting
train_11008
Instead, we want to learn a model that will offer its best prediction of Shift vs. Right-Arc, which we expect to usually be correct.
in those cases where the model does make the wrong decision, it should have the ability to later overturn that decision, by having an unconstrained choice of Reduce vs. Left-Arc.
contrasting
train_11009
The MCMC-based algorithms do not make any assumptions at all, and they can converge to the true posterior, either in joint or collapsed space as shown in Figure 1(b), 1(c).
one needs experience to choose the number of samples to be collected and the burn-in period.
contrasting
train_11010
The part of speech tagger has similar architecture to the one used for training the embeddings.
we have changed some of the network parameters, specifically, we use a hidden layer of size 300 and learning rate of 0.3.
contrasting
train_11011
Also, they have the advantage that they can be adapted to a specific task, as long as a large enough amount of parallel training data is available in order to adequately train the parameters of the Machine Translation system.
obtaining this task-specific training data by translating the original data by hand is very expensive and time-consuming.
contrasting
train_11012
Although these translators can generate many errors, they are an interesting way to obtain several hypotheses for a translation without much effort.
the use of these translators at testing time is not very convenient due to the fact that the system would depend on the Internet connection and the reaction time of the corresponding web pages.
contrasting
train_11013
It should be noted that by means of this process, the learned translator can represent and model the variability generated by the different translators.
due to the difficulty of the problem, this modeling may not be enough.
contrasting
train_11014
We think that separately processing the n-best translated sentences (for each input sentence) generated by the translator is not the best solution.
it would be better to adequately combine segments of different sentences.
contrasting
train_11015
It can be seen in Figures 3 and 4 that, for Configuration 2, when n takes the value 18, both error measures decrease.
after this, the errors resume their ascending tendency.
contrasting
train_11016
The use of pivot languages and word-alignment techniques over bilingual corpora has proved an effective approach for extracting paraphrases of words and short phrases.
inherent ambiguities in the pivot language(s) can lead to inadequate paraphrases.
contrasting
train_11017
Equation 1 allows paraphrases to be extracted by using multiple pivot languages such that these languages help discard inadequate paraphrases resulting from ambiguous pivot phrases.
in this formulation all senses of the input phrase are mixed together in a single distribution.
contrasting
train_11018
a noun) are paraphrased by words with other categories (e.g., a verb).
the approach does not solve the more complex issue of polysemous paraphrases: words with the same category but different meanings, such as the noun bank as financial institution and land alongside a river/lake.
contrasting
train_11019
It mixes valid senses of forma and (correctly) proposes the paraphrases manera and modo for sense (a), and tipo for sense (c).
paraphrases for sense (b) are over-penalised and account for very little of the probability mass of the candidate paraphrases of forma.
contrasting
train_11020
Like in (Bannard and Callison-Burch, 2005), we use the Spanish WordNet 3 to bias our selection of phrases to paraphrase to contain ambiguous cases.
rather than biasing selection towards having more multi-word expressions, we chose to have more polysemous cases.
contrasting
train_11021
Note that CCB relies solely on the LM component to fit the paraphrase candidate to the context.
ccB-wsd and multi both have access to sense annotation, but while multi is able to benefit from multiple pivot languages, ccB-wsd can only pivot through the one English phrase provided as sense annotation.
contrasting
train_11022
On the one hand there is a drop in precision of about 9% for correctness with multi.
there is an improvement in recall: multi improves from 3% (top-1 guess) to 12% (top-3 guesses).
contrasting
train_11023
For example, under the simple model p(w_f|w_e), the English word "free" may be translated into the Japanese word 自由 (as in free speech) or 無料 (as in free beer) with equal 0.5 probability; this low probability may cause both translation pairs to be rejected by the dictionary extraction algorithm.
given p(w_f|w_e, t), where t is "politics" or "shopping", we can allow high probabilities for both words depending on context.
contrasting
train_11024
We have seen how topic-dependent translation models p(w_f|w_e, t_k) are important in achieving good results.
eq. 2 marginalizes over the topics, so we do not know what topic-dependent lexicons are learned.
contrasting
train_11025
We address aspect detection at the mention level and our methods fall into the category of (unsupervised) lexicon-based approaches.
to supervised methods, lexicon-based approaches do not rely on labeled training data and thus scale better across domains.
contrasting
train_11026
Popescu and Etzioni (2005) compare their results to the method by Hu and Liu (2004) and report significantly improved results.
their method relies on the private "Know-it-all" information extraction system and is therefore not suited as a baseline.
contrasting
train_11027
", according to the gold-standard annotations, 'system has' is corrected to 'systems have'.
this module keeps the original word because of the 3rd person singular present verb, 'has'.
contrasting
train_11028
Besides, we found that applying the dependency criteria and moving window method in parallel leads to high recall but low precision.
the moving window method often fails because of insufficient evidence.
contrasting
train_11029
This can be detected and recovered by using a language model.
sVA is more complicated, and it is more effective to determine the mistakes by using linguistic and grammatical rules.
contrasting
train_11030
Note that there is spurious ambiguity in the model at two levels: Firstly, it is possible that there can be different derivations for the same tree-string pair.
during the application of the model this ambiguity occurs infrequently.
contrasting
train_11031
The heuristic pruning may undermine some of the advantages our model might have in taking whole sentence analyses into account to generate error corrections.
we find that despite this, the model is still able to generate hypothesis corrections that take non-local dependencies into consideration.
contrasting
train_11032
2011 for correcting Japanese as a second language.
their training corpus comprised authentic learner sentences together with corrections made by native speakers on a social learning network website.
contrasting
train_11033
The original NUCLE corpus contains corrections for 27 error types.
the version used for the shared task only includes 5 error types and discards all the remaining corrections.
contrasting
train_11034
• Presence/value of the determiner in the noun group.
this is only a secondary cue, since it is not possible to determine if it is the determiner or the noun-number that is incorrect (e.g.
contrasting
train_11035
Absence of a determiner is indicated by a special class label NO DET.
since the number of determiners is large, a single multi-class classifier will result in ambiguity.
contrasting
train_11036
In English the verbs and their subjects have no fixed positions; in indicative sentences the verb most of the time (though not immediately) follows the subject, although not necessarily, e.g.
in sentences with expletives the subject follows the verb: there/EXPL are/VERB still many problems/SUBJ hampering engineering design process for innovations.
contrasting
train_11037
This should not be a big problem as the classifiers are called separately for their particular part-of-speech category (determiner, preposition, verb, or noun).
this puts a lot of weight on the part of speech tagger.
contrasting
train_11038
An example of this is the sentence: Take Singapore for example , these are installed... Take Singapore for example , surveillance is installed...
the replacement of these with surveillance is not in the task, so to get it correct, a system would have to hypothesize: Take Singapore for example , these is installed...
contrasting
train_11039
We achieved precision of 35.65%, recall of 16.56% and F1 of 22.61% in the official score of our submitted result.
it was far from satisfactory, mainly due to poorly set confidence parameters.
contrasting
train_11040
The subject of the verb show can be traced through R-A0 -> A0.
the performance of this part is partly correlated with the noun form, which may have errors in the original text, and with wrong SRL results brought about by wrong sentence grammars.
contrasting
train_11041
Through experiments on a few sample ratios, we notice that feature selection using a genetic algorithm is able to reduce the feature dimensionality to about 170,000, which greatly lowers the downstream computational complexity.
the improvement contributed by GA after confidence tuning is not as obvious as that before confidence tuning.
contrasting
train_11042
As the number of English learners is increasing worldwide, the research topic of automated grammar error correction is actively discussed.
automated grammar error correction is a very difficult field and the results are not yet satisfactory.
contrasting
train_11043
To improve the low recall of Han's method, constructing large training data is the best way.
it is very costly and hard to obtain a well-edited, error-tagged corpus.
contrasting
train_11044
In several subfields of NLP, we have various evaluation metrics.
if a system A is reported to be better than a system B with respect to some metric M1, it need not be better with respect to some other metric M2.
contrasting
train_11045
This is similar to our proposed work in terms of transliteration and language modeling.
darwish (2013) does not target a conventionalized orthography, while our system targets CODA.
contrasting
train_11046
Additionally, there is some work on converting from dialectal Arabic to MSA, which is similar to our work in terms of processing a dialectal input.
our final output is in EGY and not MSA.
contrasting
train_11047
This work is similar to ours in terms of text transliteration.
our work is not restricted to names.
contrasting
train_11048
Similarly, an approach which treats the whole predicate-argument structure as an atomic unit (Regneri et al., 2010) will probably fail as well, as such a sparse model is unlikely to be effectively learnable even from large amounts of unlabeled data.
our embedding method would be expected to capture relevant features of the verb frames, namely, the transitive use for the predicate disembark and the effect of the particle away, and these features will then be used by the ranking component to make the correct prediction.
contrasting
train_11049
First of all, the SENNA embeddings tend to place antonyms / opposites near each other (e.g., come and go, or end and start).
'opposite' predicates appear in very different positions in scripts.
contrasting
train_11050
Surprisingly, CJ08 rules produce results as good as BL, suggesting that maybe our learning set-ups are not that different.
an interesting question is in which situations using a more expressive model, EE, is beneficial.
contrasting
train_11051
This intuition is used by directional measures such as ClarkeDE, WeedsPrec and BalAPInc.
we found that many features of the narrower term are often highly specific to that term and do not generalise even to hypernyms.
contrasting
train_11052
The reported MAP values are very low; this is due to many rare WordNet hyponyms not occurring in the candidate set, for which all systems are automatically penalised.
this allows us to evaluate recall, making the results comparable between different systems and background datasets.
contrasting
train_11053
Lexicons are a simple yet powerful way to provide task-specific supervisory information to the model without the burden of labeling additional data.
while lexicons have proven useful in various NLP tasks, a small amount of noise in a lexicon can severely impair its usefulness as a feature in log-linear models.
contrasting
train_11054
It may also be perceived as capturing an unsupervised knowledge representation schema, complementing supervised knowledge bases such as Freebase (Bollacker et al., 2008), as suggested by Riedel et al. (2013).
language variability obstructs open IE from becoming a viable knowledge representation framework.
contrasting
train_11055
Also related, Riedel et al. (2013) try to generalize over open IE extractions by combining knowledge from Freebase and globally predicting which unobserved propositions are true.
our work identifies inference relations between concrete pairs of observed propositions.
contrasting
train_11056
In a bootstrapped system, where the data is not fully labeled, existing systems score patterns by either ignoring the unlabeled entities or assuming them to be negative.
these scoring schemes cannot differentiate between patterns that extract good versus bad unlabeled entities.
contrasting
train_11057
Current pattern learning systems would score both patterns equally.
features like distributional similarity can predict 'cat' to be closer to {dog} than 'car', and a pattern learning system can use that information to rank 'Pattern 1' higher than 'Pattern 2'.
contrasting
train_11058
As discussed in the TAC 2011 pilot study by , there are situations that cannot be covered by this representation, such as recurring events, for example repeated marriages between two persons.
the most common situations for the relations covered in this task are captured correctly by this 4-tuple representation.
contrasting
train_11059
For example, from the text "they got married on Valentine's Day" a system can extract Valentine's Day as the surface form of the start of the per:spouse relation.
a temporal scoping system needs to normalize the temporal string to the date of February 14 and the year to which the document refers, either explicitly in text or implicitly, such as the year in which the document was published.
contrasting
train_11060
Resolving them to actual dates/times is a non-trivial task.
the heuristic of employing the document's publication date as the reference works very well in practice e.g.
contrasting
train_11061
In addition, training can be performed on gold standard annotation.
model transfer assumes a common feature representation across languages (McDonald et al., 2013), which can be a strong bottleneck.
contrasting
train_11062
The relation to annotation projection is obvious as both involve parallel data with one side being annotated.
the use of direct translation brings two important advantages.
contrasting
train_11063
In the following, we will compare results to our baseline as we have a comparable setup in those experiments.
most improvements shown below also apply in comparison with (McDonald et al., 2013).
contrasting
train_11064
Multi-source approaches are especially appealing using the translation approach.
initial experiments (which we omit in this presentation) revealed that simple concatenation is not sufficient to obtain results that improve upon the single-best translated treebanks.
contrasting
train_11065
Error Analysis Like POS-taggers, the learned supertagger frequently confuses nouns (N) and their modifiers (N/N), but the most frequent error made by the English (6) experiment was (((S\NP)\(S\NP))/N) instead of (NP[nb]/N).
these are both determiner types, indicating an interesting problem for the supertagger: it often predicts an object type-raised determiner instead of the vanilla NP/N, but in many contexts, both categories are equally valid.
contrasting
train_11066
In this work, as in most type-supervised work, the tag dictionary was automatically extracted from an existing tagged corpus.
a tag dictionary could instead be automatically induced via multi-lingual transfer (Das and Petrov, 2011) or generalized from human-provided information .
contrasting
train_11067
In step 1, as the jump factor 1 is dropped, we do not know the orientation between bǎ and tā.
several jump distances are known: from X1 to bǎ the distance is -2, and from tā to kǎolv jìnqù it is 2.
contrasting
train_11068
Accordingly our model supports rules that cannot be represented by a 2-SCFG (e.g., step 3 in Figure 5 requires a 4-SCFG).
the hierarchical phrase-based model allows only 2-SCFG as each production can rewrite as a maximum of two nonterminals.
contrasting
train_11069
It makes use of the decoding mechanism of the phrase-based model which jumps over the source words and hence can hold discontinuous phrases naturally.
their method doesn't touch the correlations between phrases or the probability modeling, which are the key points we focus on.
contrasting
train_11070
The jump factor gives big improvements of about 1% BLEU in both language pairs.
when using parallel backoff, the performance improves greatly for Chinese-English but degrades slightly on Arabic-English.
contrasting
train_11071
Our approach uses minimal phrases as its basic unit of translation, in order to preserve the many-to-many links found from the word alignments.
we now seek to assess the impact of the choice of these basic units, considering instead a simpler word-based setting which retains only 1-to-1 links in a Markov model.
contrasting
train_11072
We could easily use larger phrase pairs as the basic unit, such as the phrases used during decoding.
doing this involves a hard segmentation and would exacerbate issues of data sparsity.
contrasting
train_11073
There are about 83 million unique phrases up to length three in the English Wikipedia.
we ignore target phrases that appear fewer than three times, reducing this set to 10 million English phrases.
contrasting
train_11074
The phrase was no one is composable from había nadie given the seed model.
the phrase polling stations is composable from centros electorales using induced translations.
contrasting
train_11075
The small model fails to align the adjoined clitic los with its translation them.
our loose definition of compositionality allows the English stop word them to appear anywhere in the target translation.
contrasting
train_11076
In the decipherment task, translation models are learned from comparable corpora without any parallel text (Ravi and Knight, 2011; Dou and Knight, 2012; Ravi, 2013).
we begin with a small amount of parallel data and take a very different approach to learning translation models.
contrasting
train_11077
In fact, both representations achieve the same accuracy on the SEMEVAL task.
there is a large performance gap in favor of the neural embedding in the open-vocabulary MSR and GOOGLE tasks.
contrasting
train_11078
Previous approaches use task-specific information, by either relying on a (word-pair, connectives) matrix rather than the standard (word, context) matrix (Turney and Littman, 2005; Turney, 2006), or by treating analogy detection as a supervised learning task (Baroni and Lenci, 2009; Jurgens et al., 2012; Turney, 2013).
the vector arithmetic approach followed here is unsupervised, and works on a generic single-word representation.
contrasting
train_11079
This choice is inspired by recent work on learning syntactic categories (Yatbaz et al., 2012), which successfully utilized such language models to represent word window contexts of target words.
we note that other richer types of language models, such as class-based (Brown et al., 1992) or hybrid (Tan et al., 2012), can be seamlessly integrated into our scheme.
contrasting
train_11080
This would discriminate better between like and surround.
in this case sentences such as "Mary's son likes the school campus" and "John's son loves the school campus" will not provide any evidence for the similarity between like and love, since "Mary's son the school campus" is a different feature than "John's son the school campus".
contrasting
train_11081
The bottom-up hypothesis holds that infants converge onto the linguistic units of their language through a statistical analysis of their input.
the top-down hypothesis emphasizes the role of higher levels of linguistic structure in learning the lower level units.
contrasting
train_11082
A possible reason for this is that some error types are more regular than others, and so in order to boost accuracy, simple rules can be written to make sure that, for example, the number of a subject agrees with the number of a verb.
it is a lot harder to write a rule to consistently correct Wci (wrong collocation/idiom) errors.
contrasting
train_11083
Finally, the inability of the M2 Scorer to combine corrections from different annotators (as opposed to selecting only one annotator's corrections for the whole sentence) can also result in underestimations of performance.
it is clear that exploring these combinations during evaluation is a challenging task itself.
contrasting
train_11084
Tuning three to five times would require 24 to 40 tuning runs in our setup.
we already have eight parameter vectors obtained from distinct tuning sets and decide to average these parameters.
contrasting
train_11085
Our result with additional answers is 38.58%; we remain in third place after CUUI (45.57%) and CAMB (43.55%), which switched places.
we do not consider the evaluation on alternative answers to be meaningful as it is strongly biased.
contrasting
train_11086
Preposition (Prep) 5.43% I do not agree *on/with this argument that surveillance technology should not be used to track people .
word form (wform) 4.87% , the application of surveillance technology serves as a warning to the *murders/murderers and they might not commit more murder .
contrasting
train_11087
This correction improves the local coherence of the sentence.
the resulting construction is not consistent with "to almost luxury", suggesting a more complex correction (changing the word "become" to "are going").
contrasting
train_11088
Detection of errors can be achieved both statistically and with rules, the task of the hybrid approach being to resolve any conflicts that arise between the outputs of the two approaches.
even the most advanced systems are only able to distinguish between a limited number of error types and the task of correcting an error is even more difficult.
contrasting
train_11089
We adopted the former solution: we used the Google 1T n-grams corpus (see section 2.2), from which the selectional restrictions can be learned quite successfully.
dealing with collocations is difficult, as correction does not involve only syntax, but also semantics.
contrasting
train_11090
The article by Adam Pauls and Dan Klein (2011) describes an ingenious way to create a data structure that reduces the amount of RAM needed to load the 1T corpus.
the system they propose is written in Java, a language that is object-oriented, and which, for any object, introduces an additional overhead.
contrasting
train_11091
The result of the system was not as good as we expected, first because our approach is simple and was intended to test the use of a syntactic n-grams language model, and second because of the poor selection of candidates to correct the errors.
this task gave us the opportunity to test the behaviour in different conditions and now we have a reference to improve our system.
contrasting
train_11092
But this method still needs a candidate generation mechanism for each error category.
the SMT based method (Brockett et al., 2006) formulates the grammar correction problem as a problem of translation of incorrect sentences to correct sentences.
contrasting
train_11093
Tuning to BLEU ensures that the parameter weights are set such that the fidelity of translations is high.
ensuring fidelity is not the major challenge in grammar correction since the meaning of most input sentences is clear and most don't have any grammatical errors.
contrasting
train_11094
Comparing the results of systems S1, S3 and S5, it is clear that using the SMT method alone gives the highest F-0.5 score.
the recall is higher for systems which use the custom modules for some error categories.
contrasting
train_11095
Tuning the SMT system to the F-β metric did not improve performance over the BLEU-based tuning.
we plan to further investigate to understand the reasons for this behaviour.
contrasting
train_11096
We use the Enchant Python Library to correct the spelling errors.
using only one best result is not very accurate.
contrasting
train_11097
For example, in , the determiner errors are decomposed into: For a task with a few error types, such as merely determiner and preposition errors in HOO 2012, manual decomposition may be sufficient.
for CoNLL-2014, all 28 error types are required to be corrected, and some of these types, such as Rloc- (Local redundancy) and Um (Unclear meaning), are so complex that manual decomposition is time-consuming and requires lots of grammatical knowledge.
contrasting
train_11098
In addition, dividing the error detection and correction into two steps eases the application of machine learning classifiers.
an approach that considers error types individually may have negative effects: this approach assumes independence between each error type.
contrasting
train_11099
We summarize our conclusions in Section 5.
with phrase-based translation models, factored models make use of additional linguistic clues to guide the system such that it generates translated sentences in which morphological and syntactic constraints are met.
contrasting