Columns:
id: string (length 7 to 12)
sentence1: string (length 6 to 1.27k)
sentence2: string (length 6 to 926)
label: string (4 classes)
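A minimal sketch (not part of the corpus itself) of how one record could be represented and how a flat dump like the listing below could be grouped back into rows, assuming plain Python with no external libraries; the class and function names are illustrative only:

```python
from dataclasses import dataclass
from typing import Iterable, Iterator, List

@dataclass
class ContrastPair:
    # One row of the corpus: two adjacent sentences plus a discourse label.
    id: str          # e.g. "train_14600" (7-12 characters)
    sentence1: str   # first sentence (6 to ~1.27k characters)
    sentence2: str   # second sentence (6 to 926 characters)
    label: str       # one of 4 classes; every row in this section is "contrasting"

def parse_rows(lines: Iterable[str]) -> Iterator[ContrastPair]:
    # Group a flat id / sentence1 / sentence2 / label dump into records,
    # assuming each record occupies exactly four consecutive non-empty lines.
    buf: List[str] = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        buf.append(line)
        if len(buf) == 4:
            yield ContrastPair(*buf)
            buf = []
```

For instance, feeding the lines of this section (minus the header) to parse_rows would yield the records train_14600 through train_14699 as structured rows.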
train_14600
(Bach et al., 2011)) to update the cost of the search graph hypotheses.
contrarily to Luong et al.
contrasting
train_14601
In the adaptation task, the discriminative training of the CTM gives a large improvement of 0.9 BLEU score over the CTM only trained with NCE and 1.9 over the baseline system.
for the training scenario, these gains are reduced respectively to 0.4 and 1.2 BLEU points.
contrasting
train_14602
Suggestions from a machine translation system can increase the speed and quality of professional human translators (Guerberof, 2009;Plitt and Masselot, 2010;Green et al., 2013a, inter alia).
querying a single fixed model for all different documents fails to incorporate contextual information that can potentially improve suggestion quality.
contrasting
train_14603
Other popular metrics like METEOR (Denkowski and Lavie, 2014) and TERp (Snover et al., 2008) also use external resources like WordNet and paraphrase databases.
system-level correlation with human judgements for these metrics remains below 0.90 Pearson correlation coefficient (as per WMT-14 results, BLEU-0.888, NIST-0.867, METEOR-0.829, TER-0.826, WER-0.821).
contrasting
train_14604
2) of any aligned phrase pairs (s, t,â), assuming it is composed of embeddable words.
we found the supervised word translation scores q to be too sharp, sometimes assigning all probability mass to a single target word.
contrasting
train_14605
As before, we added our supervised lexical weights as new features in the phrase table.
instead of fixing β = 0.95 as above, we searched for β ∈ {0.9, 0.8, 0.7, 0.6} in Eq.
contrasting
train_14606
Note that LSA on just the English data performs on par with all of the other methods presented; we have not found a way to improve performance on this monolingual task from using multilingual data.
it is also important to note that our multilingual methods do not hurt performance on these monolingual tasks, either; we get the benefits described in our other evaluations without losing performance on English-only tasks.
contrasting
train_14607
This technology has advanced to the point where it is becoming capable enough to be useful for many applications.
this approach may be unsuitable for simultaneous interpretation where the machine translation system is required to provide translations within a reasonably short space of time after words have been spoken.
contrasting
train_14608
The default rule scoring procedure for string-to-tree rules implemented in Moses uses the same normalization as we do.
williams and Koehn (2012) propose to normalize string-to-tree rules over the source rhs only.
contrasting
train_14609
Such an approach has already shown to be useful for several NLP tasks (Volkova et al., 2013;Hovy, 2015).
before embarking on this challenging task, we explore if the above concerns are founded by addressing the research question: does MT have an impact on the classification of demographic and personality traits?
contrasting
train_14610
The WIT3 data seems to also include data in the fr-en direction.
in practice, TED hosts only talks in English and all foreign to English corpora were collected from the translated versions of the site.
contrasting
train_14611
It has been shown that standard approaches to gender classification on English texts can be sub-optimal for non-English language data (Ciot et al., 2013).
state-of-the-art classification results are not our focus; rather, our intention is to understand the impact of translation on classification of socio-demographic and personality traits.
contrasting
train_14612
We are interested in understanding the impact which the consideration of author traits might have on automatic translation, in order to preserve projection of those traits in a target language.
it is first necessary to understand the inverse: the effect of current translation approaches on the computational recognition of these traits.
contrasting
train_14613
This idea allows processing corpus aligned at sentence-level rather than word-level.
it does not leverage the abundance of existing monolingual corpora.
contrasting
train_14614
For CONTINGENCY, Condition is not classified as continuous or discontinuous, and it is highly marked, thus driving the overall score high.
if Condition is removed, then CONTINGENCY will be the least marked relation at level 1.
contrasting
train_14615
Results show low to moderate improvement over the baseline for gender classification, with the combination of semantic features and unigrams being the best performing feature set.
our classifiers performed poorly in the age prediction task, with accuracies below the majority class baseline.
contrasting
train_14616
For instance, regardless of their gender, older deceivers use references to anxiety, money, and motion.
younger deceivers' language includes anger, negate, and death words.
contrasting
train_14617
Through several experiments, we showed that this data can be used to build deception classifiers for short open domain text.
the classifiers do not perform very well while trying to predict gender and age.
contrasting
train_14618
Another proposed model of phonotactics is the Minimal Generalization Learner, or MGL (Albright, 2009); Linzen and Gallagher (2014) showed that MGL can simulate relevant human behavioral data in some circumstances.
with PAIM and MaxEnt, which converge to the empirical distribution given sufficient data, MGL reserves a fixed amount of probability mass to unseen events.
contrasting
train_14619
Such a comprehensive type system makes it possible to define various kinds of functions and to perform type-compatibility checking.
most previous semantic languages have at most 100+ types at the grammar level.
contrasting
train_14620
instances with more than one role), to allow generalization of flat structures.
our trees are unlike syntactic constituent trees in that they do not have labeled nonterminal nodes, so we have no natural choice of an intermediate ("bar") label.
contrasting
train_14621
For example, in the sentence "The big fish ate the little fish," initially both English 'fish' are aligned to both AMR fish.
based on the context of 'big' and 'little' the spurious links are removed.
contrasting
train_14622
We would like to tune our feature weights to maximize Smatch directly.
a very convenient alternative is to compare the AMRese yields of candidate AMR parses to those of reference AMRese strings, using a BLEU objective and forest-based MIRA (Chiang et al., 2009).
contrasting
train_14623
(2015) also uses a two-pass approach; dependency parses are modified by a tree-walking algorithm that adds edge labels and restructures to resolve discrepancies between dependency standards and AMR's specification.
to these works, we use a single-pass approach and re-use existing machine translation architecture, adapting to the AMR parsing task by modifying training data and adding lightweight AMRspecific features.
contrasting
train_14624
At a particular training size, a linear regression model has estimated that improving alignment quality by 1 edit distance toward the gold standard alignments leads to a 3.80-4.70% increase in G2P transcription accuracy.
we have also found that the importance of good alignments on G2P accuracy appears to diminish as data set size increases, possibly because the translation modules can accommodate more 'noisy' data in this scenario.
contrasting
train_14625
As in existing statistical IM engines, parameters are estimated from a corpus whose sentences are segmented into words annotated with their pronunciations as follows: where F(·) denotes the frequency of a pair sequence in the corpus.
to IM engines based on a word n-gram model, ours does not require an additional model describing relationships between words and pronunciations, and thus it is much simpler and more practical.
contrasting
train_14626
Example of Log-mconv (3 annotations): as this example shows, Log-mconv contains short entries (fragmentation) like Log-as-is.
we expect that the annotated tweets do not include mistaken boundaries or conversions that were discarded.
contrasting
train_14627
This practice means that these methods can typically yield high performance in certain specialized domains, but they lack generalizability.
with these methods, we propose to leverage bilingual unlabeled data, i.e., a Chinese-English corpus with sentence alignment.
contrasting
train_14628
The results presented in Table 4 indicate that SLBD demonstrates much stronger performance, primarily because these other methods were developed with a focus on SMT, which causes them to preferentially decrease the perplexity of the subsequent SMT steps rather than producing a highly accurate segmentation.
to these methods, the SLBD method exhibits greater generalizability.
contrasting
train_14629
With HPBSMT, a restricted form of an SCFG, i.e., Hiero grammar, is usually used and is especially suited for linguistically divergent language pairs, such as Japanese and English.
a rule table, i.e., a synchronous grammar, may be composed of spuriously many rules with potential errors especially when it was automatically acquired from a parallel corpus.
contrasting
train_14630
Rule arithmetic method (Cmejrek and Zhou, 2010) can generate SCFG rules by combining other rule pairs through an inside-outside algorithm.
those previous attempts were restricted in that the rules and phrases were induced by heuristic combination.
contrasting
train_14631
If r_k already exists in a table, we draw r_k with probability where c_k is the number of customers of r_k, n_r is the number of all customers and φ_{r_k} is the number of r_k's tables.
if r_k is a new rule, we draw r_k with probability where |φ_r| is the number of tables in the model.
contrasting
train_14632
As a result, larger phrase pairs were forced to be constructed from those minimal rules.
our back-off model could directly express phrase pairs of multiple granularities.
contrasting
train_14633
As a result, they have to resort to realigning.
the consistency constraint used in most translation rule extraction algorithms tolerates wrong links within consistent phrase pairs.
contrasting
train_14634
In Figure 1, (f_1^3, e_1^3) is consistent with the alignment because all words in "oumeng he eluosi" are aligned with all words in "EU and Russia".
in Figure 1(b), "huiwu shounao" and "hold summit" are not consistent with the alignment because "hold" is also aligned to a word "juxing" outside.
contrasting
train_14635
In contrast, in Figure 1(b), "huiwu shounao" and "hold summit" are not consistent with the alignment because "hold" is also aligned to a word "juxing" outside.
alignment consistency only defines a loose relationship between alignment and translation.
contrasting
train_14636
A natural way is to include consistency in the optimization objective as a regularization term.
as consistency is only defined at the phrase level (see Definition 4), we need a sentence-level measure to reflect how well an alignment conforms to the consistency constraint.
contrasting
train_14637
Another alternative is reachability (Liang et al., 2006a;Yu et al., 2013) that indicates whether there exists a full derivation to recover the training data.
calculating reachability faces a major problem: a large portion of training data cannot be fully recovered due to noisy alignments and the distortion limit (Yu et al., 2013).
contrasting
train_14638
(2013) indicate that using forced decoding to select reachable sentences with an unlimited distortion limit runs in O(2^n n^3) time.
calculating coverage is much easier and more efficient by ignoring the dependency between phrases but still retains the spirit of measuring recovery.
contrasting
train_14639
Efforts using source-side associations mainly focus on the exploitation of either sentence-level context (Chan et al., 2007;Carpuat and Wu, 2007;Hasan et al., 2008;Mauser et al., 2009;Shen et al., 2009) or the utilization of document-level context (Xiao et al., 2011;Ture et al., 2012;Xiao et al., 2012;Xiong et al., 2013).
target-side dependencies attract little attention, although they have an important impact on the accuracy of lexical selection.
contrasting
train_14640
Here both the baseline and LRSM fail to obtain the right translation for the word "n" because "palestine" has a higher probability than "pakistan" (0.0374 vs 0.0285).
in our model, the long-distance dependencies between ("musharraf", "pakistan") and ("taliban", "pakistan") help the decoder correctly choose the translation "pakistan" for "n".
contrasting
train_14641
real-valued vectors, i.e., word embeddings.
translation units in machine translation have long since shifted from words to phrases (sequence of words), of which syntactic and semantic information cannot be adequately captured and represented by word embeddings.
contrasting
train_14642
as an additional model parameter along with the regular parameters, i.e., weights, look-up vectors.
it has been shown that fixing Z(.)
contrasting
train_14643
The ability to generalize and learn complex semantic relationships (Mikolov et al., 2013b) and its compelling empirical results gives a strong motivation to use the NNJM model for the problem of domain adaptation in machine translation.
the vanilla NNJM described above is limited in its ability to effectively learn from a large and diverse out-domain data in the best favor of an in-domain data.
contrasting
train_14644
The bias created due to the out-domain data caused S_cat to choose the contextually incorrect translation unwanted pregnancy.
the adapted systems S_{v*} were able to translate it (How about fitness?
contrasting
train_14645
We were able to obtain an average gain of +0.3 BLEU points by training an NDAM_{v1} model over the selected data (see S_{v1+mml}).
on English-German, the MML-based selection caused a drop in the performance (see Table 6).
contrasting
train_14646
With the help of U.S. army, these soldiers are searching and suppressing members of Abu Sayyaf.
there is not much achievement this far.
contrasting
train_14647
The need for identifying content-heavy sentences arises in many specialized domains, including dialog systems, machine translation, text simplification and Chinese language processing but it is usually addressed in an implicit or application specific way.
we focus on identifying heavy sentences as a standalone task, providing a unifying view of the seemingly disparate strands of prior work.
contrasting
train_14648
Most recently, text simplification has been addressed as a monolingual machine translation task from complex to simple language (Specia, 2010;Coster and Kauchak, 2011;Wubben et al., 2012).
simplification by repackaging the content into multiple sentences is not naturally compatible with the standard view of statistical MT in which a system is expected to produce a single output sentence for a single input sentence.
contrasting
train_14649
Similar to work in text simplification, the simplification rules are applied to all sentences meeting certain criteria, normally to all sentences longer than a predefined threshold or where certain conjunctions or coordinations are present.
the model we propose here can be used to predict when segmentation is at all necessary.
contrasting
train_14650
Particularly, Liu and Huang (2014) show that by requiring the conventional parameter tuning algorithms to consider the final decoding results (full translations) as well as the intermediate decoding states (partial translations) at the same time, the inexact decoding can be significantly improved so that correct intermediate partial translations are more likely to survive the beam.
the underlying phrase-based decoding model suffers from limited distortion, and thus, may not be flexible enough for translating language pairs that are syntactically different, which require long distance reordering.
contrasting
train_14651
Partial BLEU is quite an intuitive choice for evaluating partial derivations.
as Liu and Huang (2014) discussed, partial BLEU only evaluates the partial derivation itself without considering any of its context information, leading to a performance degradation.
contrasting
train_14652
Compared to the concatenation method, top-down is able to consider the reordering between spans, and thus would be much better.
since it is based on the k-best list in the final beam, it can only handle the spans appearing in the final k-best derivations.
contrasting
train_14653
For the spans for which the top-down or guided backtrace algorithm cannot get outside strings, we use concatenation to maintain a consistent number of tuning instances between different tuning iterations.
since we do not want to spend much effort on them, we only use the one-best partial derivation for each of them.
contrasting
train_14654
In the former, the transferred tags are used to train a partially-observed CRF (PO-CRF) by maximizing the probability of a constrained lattice.
instance-based learning views each word token as an independent classification task, but uses latent distributional information gleaned from surrounding words as features.
contrasting
train_14655
Most of the missing assignments are for foreign proper nouns, which often do not receive case markers.
this is not done consistently in the training data we use.
contrasting
train_14656
The architecture is shown in Figure 1(d).
figure 1(e) directly combines discrete and continuous features, replacing the hard-coded transformation function of Guo et al.
contrasting
train_14657
In this sense, the method of Chen and Manning resembles a traditional supervised sparse linear model, which can be weak on OOV.
the semi-supervised learning methods such as Turian et al.
contrasting
train_14658
A two-stage method (McDonald, 2006) is often used because the complexity of some joint learning models is unacceptably high.
joint learning models can benefit from edge-label information that has proven to be important to provide more accurate tree structures and labels (Nivre and Scholz, 2004).
contrasting
train_14659
Most work on unsupervised domain adaptation in NLP uses batch learning: It assumes that a large corpus of unlabeled data of the target domain is available before testing.
batch learning is not possible in many real-world scenarios where incoming data from a new target domain must be processed immediately.
contrasting
train_14660
With S_RANK, there is very little advantage over looking at just pairs.
with S_EMBED or S_PMI the improvement of using the triplet-based method over using just the head-modifier pairs is clear.
contrasting
train_14661
These features can be integrated naturally as atomic inputs to the embedding layer of the network and the model can learn arbitrary conjunctions with all other features through the hidden layers.
integrating such features into a model with discrete features requires nontrivial manual tweaking.
contrasting
train_14662
Model combination techniques have consistently shown state-of-the-art performance across multiple tasks, including syntactic parsing.
they dramatically increase runtime and can be difficult to employ in practice.
contrasting
train_14663
Given these scores, a natural way to obtain weights is to normalize the probabilities.
parsers do not always provide accurate estimates of parse quality.
contrasting
train_14664
Nicely played!, the prediction of the word played, depends on both the syntactic relation from nicely, which narrows down the list of candidates to verbs, and on the semantic relation from game, which further narrows down the list of candidates to verbs related to games.
the words we and the add very little to this particular prediction.
contrasting
train_14665
On the other hand, the words we and the add very little to this particular prediction.
the word the is important for predicting the word game, since it is generally followed by nouns.
contrasting
train_14666
The continuous bag-of-words model differs from other proposed models in the sense that its complexity does not rise substantially as we increase the window b, since it only requires two extra additions to compute c, which correspond to d_w operations each.
the skip-ngram model requires two extra predictions corresponding to d_w × V operations each, which is an order of magnitude more expensive even when subsampling V.
contrasting
train_14667
On the other hand, the skip-ngram model requires two extra predictions corresponding to d_w × V operations each, which is an order of magnitude more expensive even when subsampling V.
the drawback of the bag-of-words model is that it does not learn embeddings that are prone for learning syntactically oriented tasks, mainly due to the lack of sensitivity to word order, since the context is defined by a sum of surrounding words.
contrasting
train_14668
(2013) and Nivre and Fernandez-Gonzalez (2014), who each present modifications to the arc-eager transition system that introduce some non-monotonic behaviour, resulting in small improvements in accuracy.
these systems only apply non-monotonic transitions to a relatively small number of configurations, so they can only have a small impact on parse accuracy.
contrasting
train_14669
PPDB is a natural resource for paraphrases.
ppDB was not built with the specific application to SMT in mind.
contrasting
train_14670
Note that the JTR source and target sides include jump information, therefore, the RNN model described above explicitly models reordering.
the models proposed in (Sundermeyer et al., 2014) do not include any jumps, and hence do not provide an explicit way of including word reordering.
contrasting
train_14671
Comparing between CNN and AverageSG, we can conclude that deep semantic compositionality is crucial for understanding the semantics and the sentiment of documents.
it is somewhat disappointing that these models do not significantly outperform discrete bag-of-ngrams and bag-of-features.
contrasting
train_14672
Afterwards, relation-specific gated RNN can be developed to explicitly model semantic composition rules for each relation (Socher et al., 2013a).
defining such a relation scheme is linguistic driven and time consuming, which we leave as future work.
contrasting
train_14673
The state-of-the-art model for opinion target extraction is also based on a CRF (Pontiki et al., 2014).
the success of CRFs depends heavily on the use of an appropriate feature set and feature function expansion, which often requires a lot of engineering effort for each task in hand.
contrasting
train_14674
One simple way to overcome this issue is to use a truncated BPTT (Mikolov, 2012) for restricting the backpropagation to only a few steps, like 4 or 5.
this solution limits the RNN to capture long-range dependencies.
contrasting
train_14675
Our window-based approach, by considering the neighboring words, already captures short-term dependencies like this from the future.
it requires tuning to find the right window size, and it disregards long-range dependencies that go beyond the context window, which is typically of size 1 (i.e., no context) to 5 (see Section 5.2).
contrasting
train_14676
This finding contrasts with the finding of Irsoy and Cardie (2014) in the opinion expression detection task, where bi-directional Elman RNNs outperform their uni-directional counterparts.
when we analyzed the data, we found it to be unsurprising because aspect terms are generally shorter than opinion expressions.
contrasting
train_14677
The approach is closely related to the hybrid dependency model (Clark and Curran, 2007).
the CCGbank dependencies used by Clark and Curran's model constrain all lexical and attachment decisions (only allowing 'spurious' derivational ambiguity) whereas our use of semantic dependencies models most of the syntactic parse as latent.
contrasting
train_14678
Unsurprisingly, verb arguments and adjuncts relations show large improvements.
we also see more accurate attachment of relative clauses.
contrasting
train_14679
In the SMT community, researchers have developed standard, proven alignment tools such as GIZA++ (Och and Ney, 2003), which can be used to train IBM Models 1-5.
there is one fundamental problem with the IBM models (Brown et al., 1993): each word on one side can be traced back to exactly one particular word on the other side (or the null token, which indicates the word aligns to no word on the other side).
contrasting
train_14680
Applying such string-to-tree or tree-to-tree translation models (Yamada and Knight, 2001;Shen et al., 2008) to semantic parsing will naturally resolve the inconsistent semantic structure issue, though they require additional information to generate tree labels on the target side.
due to the constraint that each target phrase needs to map to a syntactic constituent, phrase tables in tree-based translation models usually suffer from the low coverage issue, especially if the training data size is small.
contrasting
train_14681
Concretely, if for some literal l_i the diagram score A_diagram(l_i) falls below a threshold, then GEOS disregards the text score of l_i by replacing A_text(l_i) with A_diagram(l_i).
even if the diagram score of a literal is very high, it is still possible that the literal is false, because many diagrams are not drawn to scale.
contrasting
train_14682
The rule-based text parsing baseline achieves a high precision, but at the cost of lower recall.
the baseline GEOS without diagram achieves a high recall, but at the cost of lower precision.
contrasting
train_14683
The efficiency gain comes from the fact that we can just do the multiplication once per path type, instead of once per path type per source node.
to correctly compute the probabilities for a (source, target) pair, we need to exclude from the graph the edge connecting that training instance.
contrasting
train_14684
This captures in just two features all path types of the form -ALIAS-[some textual relation]-ALIAS^-1-, and those two bigram features are almost always the highest weighted features in models where they are used.
the bigram features do not capture those path types exactly.
contrasting
train_14685
Additionally, we evaluate the approach on a dataset that contains rich prior information from the training knowledge base, as well as a wealth of textual information from a large document collection.
to the work reviewed so far, work on sentence-level relation extraction using direct supervision has focused heavily on representing sentence context.
contrasting
train_14686
In the standard latent feature models discussed above, each textual relation is treated as an atomic unit receiving its own set of latent features.
many textual relations differ only slightly in the words or dependency arcs used to express the relation.
contrasting
train_14687
It is commonplace to represent words as vectors.
to naïve models in which all word types in a vocabulary V are equally different from each other, vector space models capture the intuition that words may be different or similar along a variety of dimensions.
contrasting
train_14688
In contrast, we would like to compose representations of characters into representations of words.
the relationship between word forms and their meanings is non-trivial (de Saussure, 1916).
contrasting
train_14689
We can see that even without feature engineering or unsupervised pretraining, our C2W model (row "C2W") is on par with the current state-of-the-art system (row "structReg").
if we add hand-crafted features, we can obtain further improvements on this dataset (row "C2W + features").
contrasting
train_14690
Formally, the cost function is defined as: where s_1, s_2 are the input sentences, f the compositional layer (so f(s_1) and f(s_2) refer to sentence vectors), and y = 1 denotes a paraphrase relationship between the sentences; m stands for the margin, a hyper-parameter chosen in advance.
the cost function based on cosine similarity handles only directional differences, as follows: where is the cosine similarity of the two sentence vectors, w and b are the scaling and shifting parameters to be optimized, σ is the sigmoid function and y is the label.
contrasting
train_14691
The small gains of the prior disambiguation models over the ambiguous models clearly show that deep architectures are quite capable of performing this elementary form of sense selection intrinsically, as part of the learning process itself.
the situation changes when the dynamic disambiguation framework is used, where the gains over the ambiguous version become more significant.
contrasting
train_14692
This approach assumes each post belongs to a different topic in the thread.
to RBT, ART contains too many topic branches, one per post in the thread.
contrasting
train_14693
This yields collection-specific background topics by using a binomial distribution instead of a multinomial.
we prefer the simpler, non-tree version because background topics are unnecessary when using an asymmetric α prior (Wallach et al., 2009a).
contrasting
train_14694
Held-out perplexity on real data provides a quantitative evaluation of our models' performance in a real-world setting.
the goal of our models is to enable a deeper analysis of large, weakly-related corpora, which we next discuss.
contrasting
train_14695
In addition, Kim (2014) combines CNNs of different filter widths and either static or fine-tuned word vectors.
to the traditional CNN models, our method considers non-consecutive n-grams thereby expanding the representation capacity of the model.
contrasting
train_14696
For example, the convolutional model with multi-channel (CNN-MC) runs over 2400 seconds per training epoch.
our full model (with 3 feature layers) runs on average 28 seconds with only root labels and on average 445 seconds with all labels.
contrasting
train_14697
One straightforward way to compare them is to flatten the sentence representations into two vectors, then use standard metrics like cosine similarity.
this may not be optimal because different regions of the flattened sentence representations are from different underlying sources (e.g., groups of different widths, types of pooling, dimensions of word vectors, etc.).
contrasting
train_14698
The posterior distribution of the run length P(r_t | x_{1:t}) can be computed recursively. The joint distribution over run length r_t and data x_{1:t} can be derived by summing P(r_t, r_{t-1}, x_{1:t}) over r_{t-1}. This formulation updates the posterior distribution of the run length given the prior over r_t from r_{t-1} and the predictive distribution of new data.
the existing BO-CPD model (Adams and MacKay, 2007) specifies the conditional prior on the change point P(r_t | r_{t-1}) in advance.
contrasting
train_14699
If the same data is given, BO-CPD gives us the same answer to a question whether an abrupt change at time t is a change point or not.
dBO-CPD uses documents d_{γ,t} for its prediction to incorporate the external information which cannot be inferred only from the data.
contrasting