Dataset columns (from the dataset viewer):
- id: string, 7 to 12 characters
- sentence1: string, 6 to 1.27k characters
- sentence2: string, 6 to 926 characters
- label: class label, 4 values
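Each record below lists id, sentence1, sentence2, and label on consecutive lines. Assuming the data is published as a Hugging Face dataset (the repository id in the sketch below is a placeholder, not the real name), it could be loaded and inspected roughly like this:

```python
# Minimal sketch: load the dataset with the Hugging Face `datasets` library
# and inspect one record. "org-name/contrast-pairs" is a placeholder id.
from datasets import load_dataset

ds = load_dataset("org-name/contrast-pairs", split="train")

example = ds[0]
print(example["id"])         # e.g. "train_20800"
print(example["sentence1"])  # first sentence of the pair
print(example["sentence2"])  # second sentence of the pair
print(example["label"])      # one of the 4 label values, e.g. "contrasting"
```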
train_20800
This is because this corpus is not much bigger than the 5-gram language model built from it (at our current pruning level), and so the overhead of the more complex n-gram EM method is a net disadvantage.
when moving to larger corpora, the iterations of n-gram EM become as fast as standard EM and then faster.
contrasting
train_20801
This method works significantly better than standard perceptron, and is followed by later incremental parsers, for instance in (Zhang and Clark, 2008;Huang and Sagae, 2010).
two problems remain: first, up till now there has been no theoretical justification for early update; and secondly, it makes learning extremely slow as witnessed by the above-cited papers because it only learns on partial examples and often requires 15-40 iterations to converge while normal perceptron converges in 5-10 iterations (Collins, 2002).
contrasting
train_20802
A triple (x, y, z) is said to be a violation in training scenario S = D, Φ, C with respect to weight vector w if (x, y, z) ∈ C and w • ∆Φ(x, y, z) ≤ 0.
intuitively, this means model w may mislabel example x (though not necessarily as z), since y is not its single highest-scoring label under w; here z ∈ Y(x) and z ≠ y. This lemma basically says exact search guarantees a violation in each update, but as we will see in the convergence proof, violation itself is more fundamental than search exactness.
contrasting
train_20803
And human-assisted metrics (Snover et al., 2006) have enabled and supported large-scale U.S. government sponsored programs, such as DARPA GALE (Olive et al., 2011).
these metrics have started to show signs of wear and tear.
contrasting
train_20804
In CV annotation it is typical to remove infrequent terms from both the keyword vocabulary and the evaluation data because CV algorithms typically need a large number of examples to train on.
using NLP systems and baselines one can correctly annotate using keywords that did not appear in the training set.
contrasting
train_20805
It also fails to capture variation in how humans describe images, since it is limited to one caption per image.
captions are a cheap source of data; BBC has ten times as many images as UNT.
contrasting
train_20806
In the first part of our paper, we present classification experiments with newer MT metrics not available in 2005, a worthwhile exercise in itself.
we go much further in our study: • We apply our approach to two different paraphrase datasets (MSRP and PAN) that were created via different processes.
contrasting
train_20807
Similarly, to indicate locations, a preposition is normally required in English (e.g., 'on the hill').
in Classical Chinese, the preposition is frequently omitted, with the bare locative noun phrase modifying the verb directly.
contrasting
train_20808
Moreover, this approach allows for easy addition of new exercises: as long as an exercise relies on the concepts covered by the domain model, the system can apply standard instructional strategies to each new question automatically.
this approach is significantly limited by the requirement that the domain be small enough to allow comprehensive knowledge engineering, and it is very labor-intensive even for small domains.
contrasting
train_20809
For example, a likely response to "partially correct incomplete" would be to tell the student that what they said so far was correct but it had some gaps, and to encourage them to fill in those gaps.
the response to "contradictory" would emphasize that there is a mistake and the student needs to change their answer rather than just expand it.
contrasting
train_20810
This baseline is based on the lexical overlap baseline used in RTE tasks (Bentivogli et al., 2009).
we measured overlap with the question text in addition to the overlap with the expected answers.
contrasting
train_20811
Future systems developed for this task can benefit from the large amount of existing work on recognizing textual entailment (Giampiccolo et al., 2007;Giampiccolo et al., 2008;Bentivogli et al., 2009) and on detecting contradiction (Ritter et al., 2008;De Marneffe et al., 2008).
there are substantial challenges in applying the RTE tools directly to this data set.
contrasting
train_20812
's algorithm is as a form of structured ramp loss.
another interpretation is given by McAllester et al.
contrasting
train_20813
(depending on whether a cycle after the initial aaa has positive or negative weight), but never the optimal aaaaa.
if G instead encoded 5-grams, this would not be a problem because a path through a 5-gram machine could accept aaaaa without traversing a cycle.
contrasting
train_20814
Setting smaller batch size implies frequent updates to the parameters and a faster convergence.
as briefly mentioned in Haddow et al.
contrasting
train_20815
This starting output tree is typically the best parse of the string that we want to translate.
instead of a single tree, we want to use all parses of this sentence together with their parse scores.
contrasting
train_20816
Applied to a corpus of journalistic articles, CATIT was able to provide headings both informative and catchy.
syntactic patterns used for title building were short (two terms), and experience showed that longer titles were often preferred.
contrasting
train_20817
Finally, the annotation also includes the appropriate usage rule from the set in Table 1.
the NUCLE and HOO data sets do not have this granularity of information (the annotation only indicates whether a comma should be inserted or removed) and are not exhaustively annotated.
contrasting
train_20818
The fact that the Chinese determiner is not necessarily a maximal projection of the noun -in other words, the determiner does not 'close off' a level of NPalso argues against importing the English analysis.
the English CCGbank determiner category NP/N reflects the fact that determiners 'close off' NP -further modification by noun modifiers is blocked after combining with a determiner.
contrasting
train_20819
The well-known noun/verb ambiguity in Chinese (where, e.g., 设计建设 'design-build' is both a verbal compound 'design and build' and a noun compound 'design and construction') greatly affects parsing accuracy (Levy and Manning, 2003).
little work has quantified the impact of noun/verb ambiguity on parsing, and for that matter, the impact of other frequent confusion types.
contrasting
train_20820
This is expected; Chinese CCGbank does not distinguish between noun modifiers (NN) and adjectives (JJ).
the critical noun/verb ambiguity, and the confusion between DEC/DEG (two senses of the particle 的 de) adversely impact F -score.
contrasting
train_20821
The creation of the dependency representation is similar in basic aspects to many other approaches, in that we utilize some basic assumptions about head relations to decompose the full tree into smaller units.
we first decompose the original trees into a Tree Insertion Grammar representation (Chiang, 2003), utilizing tree substitution and sister adjunction.
contrasting
train_20822
In this way, the dependency representation in Figure 3 follows immediately from Figure 2.
in addition, we utilize the TIG derivation tree and the structures of the elementary trees to create a supertag (in the sense discussed in Section 1) for each word.
contrasting
train_20823
100 is the head of the phrase $ 100 *U* in the PTB PS (a), as shown in the dependency structure (b).
because it only has one child in addition to the $, no additional QP node is created in the phrase structure representation in (c).
contrasting
train_20824
Our interpretation of this is that it provides an indication of what the parser is providing on top of the gold dependency structure, which is roughly the same information that we have encoded in our DS to PS code.
because the Wang and Zong (2010) system performs better than our USE-POS version, it is likely learning some of the nonstraightforward cases of how USE-POS tags can bootstrap the syntactic structure that our USE-POS version is missing.
contrasting
train_20825
We can see that using paraphrases improves the results over the unsupervised state of the art, regardless of which source of paraphrasing is used.
it is clear that not all types of paraphrases are equally helpful.
contrasting
train_20826
We can see that the results are much more stable with respect to recall -there is an initial drop in performance when we remove the first 10% of paraphrases, but after that removing more paraphrases does not affect performance very much.
changing the precision has a bigger impact on the results.
contrasting
train_20827
More and more language workers and learners use the MT systems on the Web for information gathering and language learning.
web translation systems typically offer top-1 translations (which are usually far from perfect) and hardly interact with the user.
contrasting
train_20828
As expected, the predicted ASR accuracy increases as EEG classification accuracy increases, for both groups (adults and children) and both levels of difficulty (easy and difficult).
figure 1a and 1b shows that WACC was much lower for children than for adults, especially on difficult utterances, where even 100% simulated EEG classifier accuracy achieves barely 20% WACC.
contrasting
train_20829
The fact that special characters and numbers behave similarly across languages is encouraging as one would expect less crosslinguistic variation for these two classes of words.
"true" words (those exclusively composed of alphabetic characters) show more variation from language to language: 0.03 ≤ ∆ W ≤ 0.12.
contrasting
train_20830
Most of the research on both G2P and MTL assumes the existence of a homogeneous training set of input-output pairs.
following the pivot approaches developed in other areas of NLP (Utiyama and Isahara, 2007;Cohn and Lapata, 2007;Wu and Wang, 2009;Snyder et al., 2009), the idea of taking advantage of other-language data has recently been applied to machine transliteration.
contrasting
train_20831
Such an approach has the advantage of being fast and not dependent on the training of any base system.
it achieves only 64.8% word accuracy, which is lower than any of the results in Table 4.
contrasting
train_20832
Derivational morphemes in particular are hard to learn with purely data-driven methods that have no knowledge of semantics, which can result in undersegmentation.
estonian corpus separates only inflectional morphemes which thus leads to higher recall.
contrasting
train_20833
An optimal application of the one-translation-per-discourse heuristic would thus group the rules based on the presence of one of those words.
in the C 1 variant, each of these rules would be counted separately because of differences that in some cases do not directly affect the choice of content words.
contrasting
train_20834
As a result, applying the one-translation-per-discourse heuristic improved the multi-reference BLEU score.
here is one of the cases where our feature hurt performance.
contrasting
train_20835
However, this requires physical presence of the two conversants in one location.
text chat between users over cell phones has become increasingly popular in the last decade.
contrasting
train_20836
The subjects were instructed to read the lines verbatim.
due to ASR errors, the subjects had to repeat or improvise a few turns (about 10%) to sustain the dialog.
contrasting
train_20837
In some cases (32%, Table 4), coarse and fine-grained foci include the same words (e.g., It doesn't *always* hurt [interpretation: it hurts sometimes]).
finegrained focus usually (68%) comprises fewer words.
contrasting
train_20838
Intuitively, ADVPs are fairly easy (they are short, and coarse-grained and fine-grained foci are often the same).
pp and SBAR are longer, and only 44% and 32% of words belonging to the coarse-grained focus belong to the fine-grained focus, respectively.
contrasting
train_20839
Some of these differences are warranted in that certain target language phenomena are better captured by the native annotation.
differences such as choice of lexical versus functional head are more arbitrary.
contrasting
train_20840
It is their success that motivates building explicitly trained, linear-time pruning models.
while a greedy solution for arc-standard transition-based parsers can be computed in linear-time, Kuhlmann et al.
contrasting
train_20841
For example, a request for action is the first part of an adjacency pair and thus requires a response from the addressee, but declining the request is a valid response.
the utterer may formulate her request for action in a way that attempts to remove the option of declining it ("Come to my office now!").
contrasting
train_20842
Since SVMs optimize on training-set accuracy to learn f, they perform better on balanced training sets.
our dataset is highly imbalanced (∼ 5% positive instances).
contrasting
train_20843
We then did cross validation for the ODP tagger using gold dialog acts for training and automatically tagged dialog acts for testing.
for our best performing feature set so far, this reduced the F score from 65.8 to 52.7.
contrasting
train_20844
Co-reference resolution has received a lot of attention.
as Eisenstein and Davis (2006) noted, most research on co-reference resolution has focused on written text.
contrasting
train_20845
Shallow-n grammars (de Gispert et al., 2010) were introduced to reduce over-generation in the Hiero translation model (Chiang, 2005) resulting in much faster decoding and restricting reordering to a desired level for specific language pairs.
shallow-n grammars require parameters which cannot be directly optimized using minimum error-rate tuning by the decoder.
contrasting
train_20846
The reordering glue rule facilitates reordering at the top-level.
this is still not sufficient to allow long-distance reordering as the shallow-decoding restricts the depth of the derivation.
contrasting
train_20847
In the computational linguistics community, several projects have attempted to determine the grade level of a text (2nd/3rd/4th/etc).
the education community typically makes finer distinctions in reading levels, with each grade being covered by multiple levels.
contrasting
train_20848
Document-only active learning also outperformed standard passive learning, which is consistent with previous work.
for Movie Reviews (bottom), there is little difference among the three settings, and in fact models trained with DUALIST appear to lag behind active learning with documents.
contrasting
train_20849
In NLP, unsupervised learning typically implies optimization of a "bumpy" objective function riddled with local maxima.
one exception is IBM Model 1 (Brown et al., 1993) for word alignment, which is the only model commonly used for unsupervised learning in NLP that has a concave loglikelihood function.
contrasting
train_20850
3 is non-concave due to the presence of a product within a log.
if the tag transition probabilities p(y_j | y_{j−1}) are all constants and also do not depend on the previous tag y_{j−1}, then we can rewrite Eq.
contrasting
train_20851
Applied to (2), these rules would attach the gesture to "books" (a prosodically prominent item), also to "other books", "give you other books", "can give you other books" and even to "I can give you other books" (heads saturated with their arguments).
nothing licenses attachments to "I" or "give".
contrasting
train_20852
While different schemes have been proposed for annotating citations according to their function (Spiegel-Rosing, 1977;Nanba and Okumura, 1999;Garzone and Mercer, 2000), the only recent work on citation sentiment detection using a relatively large corpus is by Athar (2011).
this work does not handle citation context.
contrasting
train_20853
They model each sentence as a node in a graph and experiment with various window boundaries to create edges between neighbouring nodes.
their dataset consists of only 10 papers and their annotation scheme differs from our four-class annotation as they do not deal with any sentiment.
contrasting
train_20854
(Bollacker, 2008) contains an extensive database of names and nicknames, with listings on over 13,000 given names, containing multiple "variations" for each name.
this database makes no attempt to distinguish between common and less common variants and skips some very common nicknames.
contrasting
train_20855
Similarly, WordNet does not indicate "jaguar" could be related to "car" at all.
the "car" sense of "jaguar" dominates the vector created using the search engine.
contrasting
train_20856
Many of these systems used techniques that exploited the specific aspects of the task, e.g., German-specific morphological analysis.
we present a knowledge-impoverished, entirely data-driven approach, by simply looking for more data in large collections.
contrasting
train_20857
Even though these can make HMMs easier to train and scale than more structured models such as BNs, it also puts them in a disadvantage concerning context-awareness and accuracy as shown by our results.
the random variables of BNs allow them to keep a structured model of the space, user, and relevant content selection and utterance planning choices.
contrasting
train_20858
There has been a substantial amount of research effort devoted to user generated content-related search tasks, including blog search, forum search, and community-based question answering.
there has been relatively little research on microblog search.
contrasting
train_20859
Traditional query expansion approaches typically find terms that commonly co-occur with the query terms in documents (or passages).
such approaches are not suitable for expanding queries in the microblog setting since microblog messages are very short, yielding unreliable co-occurrence information.
contrasting
train_20860
Using this definition, the coverage score of a time interval is computed from tf_{w_i,TS}, the term frequency of w_i in timespan TS, and β_w, the expansion weight of term w. Since multiple events may occur at the same time, microblog streams can easily be dominated by the larger of two events.
less popular events may also exhibit burstiness at the same time.
contrasting
train_20861
Public events, such as federal elections involve people across the country.
a car pileup typically only attracts local attention.
contrasting
train_20862
Linear CRF achieved an accuracy of 0.87, which is higher than the baseline of the majority-class predictor (N, 0.80) (t-test, p = 10^−10).
the precision and recall are low, potentially because the tweets are short and noisy.
contrasting
train_20863
It shows that even simple features and an off-the-shelf classifier can detect some signal in the text (Table 4, Confusion Matrix of Teasing Classification; predicted as Tease/Not: Tease 52/47, Not 26/559).
the accuracy is not high.
contrasting
train_20864
Some recovered topics, including the ones shown here, provide valuable insight into bullying traces.
not all topics are interpretable to social scientists.
contrasting
train_20865
There is considerable work on identifying the source of an opinion.
it is much harder to find obvious features that tell us whether "virtualization" is the target of an opinion.
contrasting
train_20866
It is empirically observed that contextualized word types can assume very few (most often, one) POS tags.
along with graph smoothness terms, they apply a penalty that encourages distributions to be close to uniform, the premise being that it would maximize the entropy of the distribution for a vertex that is far away or disconnected from a labeled vertex.
contrasting
train_20867
In particular, the E-step for EM can be written as an optimization over Q, the space of all distributions.
while EM produces a distribution in the E-step, hard EM is thought of as producing a single output h*; one can also think of hard EM as producing a distribution given by q = δ(h = h*).
contrasting
train_20868
Lastly, the range of γ from ∞ to 1 has been used in deterministic annealing for EM (Rose, 1998;Ueda and Nakano, 1998;Hofmann, 2001).
the focus of deterministic annealing is solely to solve the standard EM while avoiding local maxima problems.
contrasting
train_20869
We omit the graph for entity prediction because EM-based approaches do not outperform the supervised baseline there.
notably, for entities, for κ = 10%, UEM outperforms CoDL and PR and for 20%, the supervised baseline outperforms PR statistically significantly.
contrasting
train_20870
This is crucial in structured SVM, because solving the dual problem is cubic in terms of the number of examples and constraints.
our approach selects the slack such that at least one of the constraints is satisfied and adds all the remaining constraints to the active set.
contrasting
train_20871
When they work correctly, these tools allow users to maintain clear communication while potentially increasing the rate at which they input their message, improving efficiency in communication.
when these tools make a mistake, they can cause problematic situations.
contrasting
train_20872
Traditional spell checking systems generally assume that misspellings are unintentional.
much of the spelling variation that appears in text messages may be produced intentionally.
contrasting
train_20873
Note that, in this example, the word Tussaud could be an autocompletion or an autocorrection by the system.
there may be no significant distinction between these two operations from a user's point of view.
contrasting
train_20874
It is difficult to assess the appropriate precision-recall tradeoff without an in-depth study of autocorrection usage by text messagers.
a few observations can be made from the precision-recall curve.
contrasting
train_20875
The end-to-end system can reach a recall level of 0.674, significantly lower than the recall of the ground truth system.
the system still peaks at precision of 1, and was able to produce precision values that were competitive with the ground truth system at lower recall levels, maintaining a precision of above 0.90 until recall reached 0.396.
contrasting
train_20876
This may be true for a well-studied language like English, where we can easily compose a rule that disallows coreference between two mentions if they disagree in number and gender, for instance.
computing these features may not be as simple as we hope for a language like Chinese: the lack of morphology complicates the determination of number information, and the fact that most Chinese first names are used by both genders makes gender determination difficult.
contrasting
train_20877
Following common practice, we stemmed the parallel corpus using the Porter stemmer (Porter, 1980) in order to reduce data sparseness.
even with stemming, we found that many English words were not aligned to any French words by the resulting alignment model.
contrasting
train_20878
Nevertheless, one reason why this method is intuitively better is that it ensures that the training and test documents are drawn from the same domain.
when projecting annotations via a parallel corpus, we may encounter a domain mismatch problem if the parallel corpus and the test documents come from different domains, and the coreference resolver may not work well if it is trained and tested on different domains.
contrasting
train_20879
State features relate the label y (time-bin) of a single vertex (medical concept) to features corresponding to a medical concept x. Transition features consider the mutual dependence of labels y_{i−1} and y_i (dependence between the time-bins of the current and previous medical event in the sequence). Above, s_j is a state feature function and λ_j is its associated weight, and t_k is a transition function and µ_k is its associated weight.
to the state function, the transition function takes as input the current label as well as the previous label, in addition to the data.
contrasting
train_20880
For example, in a relatively structured scenario like compliance training, it may be better to reduce any possibility of confusion by eliminating false positives.
a self-motivated learner attempting to explore a new topic may tolerate a higher false positive rate in exchange for a broader diversity of questions.
contrasting
train_20881
One can imagine automatically mining image/caption data (like that in Figure 1) to train object recognition systems.
in order to do so reliably, one must know whether the "car" actually appears or not.
contrasting
train_20882
Probabilistic word alignment models can induce bilexical distributions over target-language translations of source-language words (Brown et al., 1993).
word-to-word correspondences do not capture the full structure of a bilingual lexicon.
contrasting
train_20883
Uszkoreit and Brants 2008 (Diab and Resnik, 2002;Kaji, 2003;Ng et al., 2003;Tufis et al., 2004;Apidianaki, 2009) relates to our work in that these approaches discover word senses automatically through clustering, even using multilingual parallel corpora.
our task of clustering multiple words produces a different type of output from the standard word sense induction task of clustering in-context uses of a single word.
contrasting
train_20884
In theory, we could enable all 2^C possible component combinations, although we expect to use far fewer.
constraining the SCTM's topics by the components gives less flexible topics as compared to LDA.
contrasting
train_20885
On the other hand, constraining the SCTM's topics by the components gives less flexible topics as compared to LDA.
we find empirically that a large number of topics can be effectively modeled with a smaller number of components.
contrasting
train_20886
Representing bilingual sentences as a sequence of operations enables them to memorize phrases and lexical reordering triggers like PBSMT.
using minimal units during decoding and searching over all possible reorderings means that hypotheses can no longer be arranged in 2 m stacks.
contrasting
train_20887
Phrase-based SMT on the other hand overcomes these drawbacks by using larger translation chunks during search.
the drawback of the phrase-based model is the phrasal independence assumption, spurious ambiguity in segmentation and a weak mechanism to handle non-local reorderings.
contrasting
train_20888
We investigate the addition of MTUs to a phrasal translation system to improve modeling of context and to provide more robust estimation of long phrases.
in a phrase-based system there is no single synchronized traversal order; instead, we may consider the translation units in many possible orders: left-to-right or right-to-left according to either the source or the target are natural choices.
contrasting
train_20889
Unlike the maximum entropy model, we make no attempt to use entire phrases or phrasepairs as features, as they would be far too sparse for our small tuning sets.
due to the sparse features' direct decoder integration, we have access to a fair amount of extra context.
contrasting
train_20890
We can express this with constraints: After adding the constraints, the probability of the sequence is maximized when each word is assigned the tag with highest probability.
some invalid results may still exist.
contrasting
train_20891
Therefore, these hard bilingual constraints guarantee that when two words are aligned, they are tagged with the same named entity tag.
in practice, aligned word pairs do not always have the same tag because of the difference in annotation standards across different languages.
contrasting
train_20892
This condition can be regarded as a kind of hard word alignment.
the following problem exists: the smaller the θ, the noisier the word alignments are; the larger the θ, the more possible word alignments are lost.
contrasting
train_20893
(2012) proposed a method of labeling bilingual corpora with named entity labels automatically based on Wikipedia.
this method is restricted to topics covered by Wikipedia.
contrasting
train_20894
Zhuang and Zong (2010) proposed a joint inference method for bilingual semantic role labeling with ILP.
their approach requires training an alignment model with a manually annotated corpus.
contrasting
train_20895
P&D acquired more content word variation pairs, as the curves labeled cwv indicate.
proposed Score's precision outperformed P&D's by a large margin for the three languages.
contrasting
train_20896
Here one aligns existing database records with the sentences in which these records have been "rendered" (effectively labeling the text), and from this labeling we can train a machine learning system as before (Craven and Kumlien, 1999;Mintz et al., 2009;Bunescu and Mooney, 2007;Riedel et al., 2010).
this method relies on the availability of a large database that has the desired schema.
contrasting
train_20897
This is similar in spirit to work on learning entailment rules (Szpektor et al., 2004;Zanzotto et al., 2006;Szpektor and Dagan, 2008).
for us even entailment rules are just a by-product of our goal to improve prediction, and it is this goal we directly optimize for and evaluate.
contrasting
train_20898
Methods that learn rules between textual patterns in OpenIE aim at a similar goal as our proposed approach (Schoenmackers et al., 2008;Schoenmackers et al., 2010).
their approach is substantially more complex, requires a categorization of entities into fine grained entity types, and needs inference in high tree-width Markov Networks.
contrasting
train_20899
By contrast, for a surface pattern like "X visits Y" X could be a person or organization, and Y could be a location, organization or person.
in terms of MAP score this time there is no obvious winner among the latent models.
contrasting
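Every example excerpted here carries the label "contrasting"; a quick way to check how the 4 label values are distributed over the split (continuing from the hypothetical loading sketch near the top) is:

```python
from collections import Counter

# `ds` is the dataset object from the loading sketch above.
# Count how often each label value occurs in the training split.
label_counts = Counter(ds["label"])
for label, count in label_counts.most_common():
    print(f"{label}\t{count}")
```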