id         string (lengths 7–12)
sentence1  string (lengths 6–1.27k)
sentence2  string (lengths 6–926)
label      string class (4 values)
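For orientation, here is a minimal sketch of how records with this schema (id, sentence1, sentence2, label) could be loaded and inspected with the Hugging Face datasets library; the identifier "DATASET_NAME_OR_PATH" is a placeholder, not the dataset's actual name.

    # Minimal sketch, assuming the records below are packaged as a Hugging Face
    # dataset with the fields: id, sentence1, sentence2, label.
    # "DATASET_NAME_OR_PATH" is a placeholder and must be replaced.
    from collections import Counter

    from datasets import load_dataset

    dataset = load_dataset("DATASET_NAME_OR_PATH", split="train")

    # Distribution of the label classes (4 classes according to the schema).
    label_counts = Counter(example["label"] for example in dataset)
    print(label_counts)

    # Peek at the first two records.
    for example in dataset.select(range(2)):
        print(example["id"], example["label"])
        print("  sentence1:", example["sentence1"][:80])
        print("  sentence2:", example["sentence2"][:80])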
train_2300
On the other hand, instead of directly merging TM matched phrases into the source sentence, some approaches (Biçici and Dymetman, 2008; Simard and Isabelle, 2009) simply add the longest matched pairs into the SMT phrase table, and then associate them with a fixed large probability value to favor the corresponding TM target phrase at SMT decoding.
since only one aligned target phrase will be added for each matched source phrase, they share most drawbacks with the pipeline approaches mentioned above and merely achieve similar performance.
contrasting
train_2301
This is because the features adopted in their discriminative learning are complicated and difficult to re-implement.
the proposed Model-III even outperforms the upper bound of their methods, which will be discussed later.
contrasting
train_2302
It can be seen that "you do" is redundant for Koehn-10, because they are insertions and thus are kept in the XML input.
the SMT system still inserts another "you", even though "you do" already exists.
contrasting
train_2303
Moreover, following the approaches of Koehn-10 and Ma-11 (to give a fair comparison), training data for SMT and TM are the same in the current experiments.
the TM is expected to play an even more important role when the SMT training-set differs from the TM database, as additional phrase-pairs that are unseen in the SMT phrase table can be extracted from TM (which can then be dynamically added into the SMT phrase table at decoding time).
contrasting
train_2304
The multi-task learning methods performed best when using the annotator identity as the task descriptor, and less well for the MT system and sentence pair, where they only slightly improved over the baseline.
making use of all these layers of metadata together gives substantial further improvements, reaching the best result with Combined A,S,T .
contrasting
train_2305
On the other hand, there has been research on training object and event models from large corpora of complex images and video in the computer-vision community (Kuznetsova et al., 2012;Sadanand and Corso, 2012;Ordonez et al., 2011;Yao et al., 2010).
most such work requires training data that labels individual concepts with individual words (i.e., objects delineated via bounding boxes in images as nouns and events that occur in short video clips as verbs).
contrasting
train_2306
Previous work (e.g., Morante and Blanco, 2012) has investigated automatically detecting the scope and focus of negation.
the scope of negation with respect to quantifiers is a different phenomenon.
contrasting
train_2307
Manshadi and Allen (2011a), hence MA11, go beyond those limitations and scope an arbitrary number of NPs in a sentence with no restriction on the type of quantification.
although their corpus annotates the scope of negations and the implicit universal of plurals, their QSD system does not handle those.
contrasting
train_2308
Many ranking systems create partial orders as output when the confidence level for the relative order of two objects is below some threshold.
the target being a partial order is a fundamentally different problem.
contrasting
train_2309
From these examples, as long as we create two nodes in the DAG corresponding to each plural chunk, and one node corresponding to each negation, there is no need to modify the underlying model (defined in the previous section).
when u (or v) is a negation (N_i) or an implicit universal (id) node, the probabilities p^λ_{u,v} (λ ∈ {+, −, }) may come from a different source, e.g.
contrasting
train_2310
We take MA11's system as the baseline.
in order to have a fair comparison, we have used the output of the Stanford parser to automatically generate the same features that MA11 have hand-annotated.
contrasting
train_2311
Because of the ambiguity, a local classifier may miss it or mislabel it as a trigger of End-Position.
knowing that "tank" is very likely to be an Instrument argument of Attack events, the correct event subtype assignment of "fired" is obviously Attack.
contrasting
train_2312
arbitrary global features over multiple local predictions.
different from easier tasks such as part-of-speech tagging or noun phrase chunking where efficient dynamic programming decoding is feasible, here exact joint inference is intractable.
contrasting
train_2313
Assigning labels based on thread boundaries allows for context to be meaningfully taken into account, without crossing topic boundaries.
this granularity comes with a price: the distribution of class values in these instances is highly skewed.
contrasting
train_2314
Exact inference is intractable due to the E factors that couple all of the a i by way of the p i nodes.
we can compute approximate marginals for the a i , p i , and r i using belief propagation.
contrasting
train_2315
For example, Figure 1(b) shows the structure of the word "建筑业 (construction and building industry)", where the characters "建 (construction)" and "筑 (building)" form a coordination, and modify the character "业 (industry)".
computational processing of Chinese is typically based on words.
contrasting
train_2316
With these methods, transition-based parsers have reached state-of-the-art accuracy for a number of languages (Zhang and Nivre, 2011;Bohnet and Nivre, 2012).
the drawback with this approach is that parsing speed is proportional to the size of the beam, which means that the most accurate transition-based parsers are not nearly as fast as the original greedy transition-based parsers.
contrasting
train_2317
This is bad, because the information about the correct attachment could come from the lexical content of node P. The arc-eager model performs slightly better, since it can delay the decision up to the point in which α1 has been constructed and P is read from the buffer.
at this point it must make a commitment and either construct α3 or pop N1 from the stack (implicitly committing to α2) before N2 is read from the buffer.
contrasting
train_2318
However, at this point it must make a commitment and either construct α3 or pop N1 from the stack (implicitly committing to α2) before N2 is read from the buffer.
with this scenario, in the next sections we implement a dynamic parsing strategy that allows a transition system to decide between the attachments α2 and α3 after it has seen all of the four nodes V, N1, P and N2.
contrasting
train_2319
More specifically, (i) generalizes the la/ra transitions to the la_k/ra_k transitions, introducing a top-down component into the purely bottom-up arc-standard.
(ii) drops the limitation of canonical computations for the arc-standard, and leverages on the spurious ambiguity of the parser to enlarge the search space.
contrasting
train_2320
In (d) and (e) in Figure 2, the word (kare) at the CP and the word order between katta and karita are the same.
the word at the NP for (d) and the word at the NP for (e) are different.
contrasting
train_2321
In the pair model, the position pair of (i, j) is used to derive features.
to discriminate label sequences in the sequence model, the position pairs of (i, k), k ∈ {k | i < k ≤ j ∨ j ≤ k < i}, and (k, j), k ∈ {k | i ≤ k < j ∨ j < k ≤ i}, are used to derive features.
contrasting
train_2322
Figure 5 shows that when a distance class feature used in the model was the same (e.g., distortions from 5 to 20 were the same distance class feature), PAIR produced average distortion probabilities that were almost the same.
the average distortion probabilities for SEQUENCE decreased when the lengths of the distortions increased, even if the distance class feature was the same, and this behavior was the same as that of CORPUS.
contrasting
train_2323
From CORPUS, the average probabilities in the training data for each distortion in [4, 6] were higher than those for each distortion in [7, 20].
the converse was true for the comparison between the two average probabilities for the outbound model.
contrasting
train_2324
As a matter of fact, even without any contexts, the lexical translation table in HMM already contains O(|V_e| × |V_f|) parameters, where |V_e| and |V_f| denote the source and target vocabulary sizes.
our model does not maintain separate translation score parameters for every source-target word pair, but computes t_lex through a multi-layer network, which naturally handles contexts on both sides without explosive growth in the number of parameters.
contrasting
train_2325
The drawback is that there is a margin of error in the parallel segment identification and alignment.
our system can be tuned for precision or for recall.
contrasting
train_2326
This is to be expected, since our dataset was extracted from an extremely different domain.
by combining the Weibo parallel data with this standard data, improvements in BLEU are obtained.
contrasting
train_2327
The fact that gLDA+SVM performs better than the standard gSLDA is due to the same reason, since the SVM part of gLDA+SVM can well capture the supervision information to learn a classifier for good prediction, while standard sLDA can't well balance the influence of supervision.
the well-balanced gSLDA+ model successfully outperforms the two-stage approach, gLDA+SVM, by performing topic discovery and prediction jointly.
contrasting
train_2328
In our initial modified version, we replaced the gold-standard reference parse with the pseudo-gold reference, which has the highest execution rate amongst all candidate parses.
this ignores all other candidate parses during perceptron training.
contrasting
train_2329
However, this ignores all other candidate parses during perceptron training.
it is not ideal to regard other candidate parses as "useless."
contrasting
train_2330
The results show that our response-based approach (Single) has better execution accuracy than both the baseline and the standard approach using gold-standard parses (Gold).
Gold does perform best on parse accuracy since it explicitly focuses on maximizing the accuracy of the resulting MR.
contrasting
train_2331
However, Gold does perform best on parse accuracy since it explicitly focuses on maximizing the accuracy of the resulting MR.
by focusing discriminative training on optimizing performance of the ultimate end task, our response-based approach actually outperforms the traditional approach on the final task.
contrasting
train_2332
For our task evaluation, ideally, we would like the system to be able to identify the news article specifically referred to by the url within each tweet in the gold standard.
this is very difficult given the large number of potential candidates, especially those with slight variations.
contrasting
train_2333
Our paper resembles it in searching for a related news article.
we target recommending a news article based only on a tweet, which is a much smaller context than the set of favorite documents chosen by a user.
contrasting
train_2334
Although we did not impose a discrete categorization of politeness, we acknowledge an implicit binary perception of the phenomenon: whenever an annotator moved a slider in one direction or the other, she made a binary politeness judgment.
the boundary between somewhat polite and somewhat impolite requests can be blurry.
contrasting
train_2335
The previous section does not test this hypothesis, since all editors compared in Table 5 had the same (non-admin) status when writing the requests.
our data does provide three ways of testing this hypothesis.
contrasting
train_2336
Politeness marking is one aspect of the broader issue of how language relates to power and status, which has been studied in the context of workplace discourse (Bramsen et al.; Diehl et al., 2007; Peterson et al., 2011; Prabhakaran et al., 2012; Gilbert, 2012; McCallum et al., 2007) and social networking (Scholand et al., 2010).
this research focuses on domain-specific textual cues, whereas the present work seeks to leverage domain-independent politeness cues, building on the literature on how politeness affects workplace social dynamics and power structures (Gyasi Obeng, 1997; Chilton, 1990; Andersson and Pearson, 1999; Rogers and Lee-Wong, 2003; Holmes and Stubbe, 2005).
contrasting
train_2337
Let tp_i be the number of test essays correctly labeled as positive by error e_i's binary classifier b_i; p_i be the total number of test essays labeled as positive by b_i; and g_i be the total number of test essays that belong to e_i according to the gold standard.
then, the precision (P_i), recall (R_i), and F-score (F_i) for b_i, and the macro F-score (F) of the combined system for one test fold, are calculated from these counts in the standard way; the macro F-score calculation can be seen as giving too much weight to the less frequent errors.
contrasting
train_2338
Similarly, the word "discussions", which is the last word in this sequence, cannot have any right children, and we can estimate that its right P_stop probability is high.
non-reducible words, such as the verb "asked" in our example, can have children, and therefore their P_stop can be estimated as low for both directions.
contrasting
train_2339
If we use a corpus containing about 10,000 sentences, it is possible that we found no reducible sequences at all.
we managed to find a sufficient amount of reducible sequences in corpora containing millions of sentences, see Section 6.1 and Table 1.
contrasting
train_2340
The quality of such tagging is not very high since we do not use any lexicons or pretrained models.
it is sufficient for obtaining usable stop probability estimates.
contrasting
train_2341
All of these formalisms share a similar basic syntactic structure with Penn Treebank CFG.
the target formalisms also encode additional constraints and semantic features.
contrasting
train_2342
As such, probabilistic modeling for TAG in its original form is uncommon.
a large effort in non-probabilistic grammar induction has been performed through manual annotation with the XTAG project (Doran et al., 1994).
contrasting
train_2343
These rules highlight the linguistic intuitions that back TAG; if their adjunction were undone, the remaining derivation would be a valid sentence that simply lacks the modifying structure of the auxiliary tree.
the MPD parses reveal that not all useful adjunctions conform to this paradigm, and that left-auxiliary trees that are not used for sister adjunction are susceptible to this behavior by adjoining the shared unbracketed syntax onto the NP dominating the bracketed text.
contrasting
train_2344
Adaptation of discriminative learning methods for these types of features to statistical machine translation (MT) systems, which have historically used idiosyncratic learning techniques for a few dense features, has been an active research area for the past half-decade.
despite some research successes, feature-rich models are rarely used in annual MT evaluations.
contrasting
train_2345
Crucially, it does not learn the basic rule → program.
the bitext5k model contains basic rules such → programme, → this programme, and → that programme.
contrasting
train_2346
In this way, syntax information can be incorporated into phrase-based SMT systems.
one disadvantage is that the reliability of the rules is often language pair dependent.
contrasting
train_2347
For the first method, we adopt the linear-chain CRFs.
even for the simple linear-chain CRFs, the complexity of learning and inference grows quadratically with respect to the number of output labels and the number of structural features defined over adjacent pairs of labels.
contrasting
train_2348
Most modern machine translation systems use phrase pairs as translation units, allowing for accurate modelling of phraseinternal translation and reordering.
phrase-based approaches are much less able to model sentence level effects between different phrase-pairs.
contrasting
train_2349
On one hand, the use of phrases can memorize local context and hence helps to generate better translation compared to word-based models (Brown et al., 1993;Och and Ney, 2003).
this mechanism requires each phrase to be matched strictly and to be used as a whole, which precludes the use of discontinuous phrases and leads to poor generalisation to unseen data (where large phrases tend not to match).
contrasting
train_2350
Our work also uses bilingual information, using the source words as part of the conditioning context.
to these approaches which primarily address the decoding problem, we focus on the learning problem of inferring alignments from parallel sentences.
contrasting
train_2351
<烧烤类型的, grill-type>, for "家" and "点的" are wrongly aligned to "grill-type").
our model better aligns the function words, such that many more useful phrase pairs can be extracted, i.e., <在, 'm>, <找, looking for>, <烧烤类型, grill-type> and their combinations with neighbouring phrase pairs.
contrasting
train_2352
MNB uses add-1 smoothing to estimate the conditional probability of the word "resources" in each class as θ⁺_w = (1+1)/(216+33504) = 5.93e-5, implying that "resources" is a negative indicator of the Earnings class.
this estimate is inaccurate.
contrasting
train_2353
SFE has the same scalability advantages as MNB-FM.
unlike our approach, SFE does not compute maximumlikelihood estimates using the marginal statistics as a constraint.
contrasting
train_2354
But obtaining parallel data is an expensive process and not available for all language pairs or domains.
monolingual data (in written form) exists and is easier to obtain for many languages.
contrasting
train_2355
The MT literature does cover some prior work on extracting or augmenting partial lexicons using non-parallel corpora (Rapp, 1995;Fung and McKeown, 1997;Koehn and Knight, 2000;Haghighi et al., 2008).
none of these methods attempt to train end-to-end MT models; instead, they focus on mining bilingual lexicons from monolingual corpora, and often require parallel seed lexicons as a starting point.
contrasting
train_2356
(2010) presented algorithms and data structures that allow number-range queries for searching documents.
these studies do not interpret the quantity (e.g., 3,000,000,000) of a numerical expression (e.g., 3b people), but rather treat numerical expressions as strings.
contrasting
train_2357
(2007) and Davidov and Rappoport (2010) rely on hand-crafted patterns (e.g., "Object is * [unit] tall"), focusing on a specific set of numerical attributes (e.g., height, weight, size).
this study can handle any kind of target and situation that is quantified by numbers, e.g., number of people facing a water shortage.
contrasting
train_2358
(2011) designed hand-crafted rules for matching intervals expressed by temporal expressions.
these studies do not necessarily focus on semantic processing of numerical expressions; thus, these studies do not normalize units of numerical expressions nor make inferences with numerical common sense.
contrasting
train_2359
The underlying assumption of this approach is that the real distribution of a query (e.g., money given to a friend) can be approximated by the distribution of numbers co-occurring with the context (e.g., give and friend) on the Web.
the context space generated in Section 4.2 may be too sparse to find numbers in the database, especially when a query context is fine-grained.
contrasting
train_2360
Generative probabilistic models have been used for content modelling and template induction, and are typically trained on small corpora in the target domain.
vector space models of distributional semantics are trained on large corpora, but are typically applied to domaingeneral lexical disambiguation tasks.
contrasting
train_2361
Thus, the only way these terms could be recognized as positive is if they are found in the GIZA++ dictionaries.
due to data sparsity in these dictionaries this did not happen in these cases.
contrasting
train_2362
A plausible reason for such a performance improvement is the reduction in data sparsity.
such a reduction could be achieved with a lesser effort through the means of syntagma based word clustering.
contrasting
train_2363
WordNets are primarily used to address the problem of word sense disambiguation.
at present there are many NLP applications which use WordNet.
contrasting
train_2364
This might have had a degrading effect on the SA accuracy.
it was seen that classifiers developed on cluster features based on syntagmatic analysis do not suffer from this.
contrasting
train_2365
The approach presented here for CLSA will still require a parallel corpus.
the size of the parallel corpus required for CLSA can be considerably smaller than that required to train an MT system.
contrasting
train_2366
Unlike a classifier, MATCHER does not output any single matching M .
downstream applications can easily convert MATCHER's output into a matching M by, for instance, selecting the top K candidate r T values for each r D , or by selecting all (r T , r D ) pairs with a score over a chosen threshold.
contrasting
train_2367
Hence, the CVG builds on top of a standard PCFG parser.
many parsing decisions show fine-grained semantic factors at work.
contrasting
train_2368
Note that any PCFG, including latent annotation PCFGs (Matsuzaki et al., 2005) could be used.
since the vectors will capture lexical and semantic information, even simple base PCFGs can be substantially improved.
contrasting
train_2369
For dialog state tracking, most commercial systems use hand-crafted heuristics, selecting the SLU result with the highest confidence score, and discarding alternatives.
statistical approaches compute a posterior distribution over many hypotheses for the dialog state.
contrasting
train_2370
The model is based on M + K feature functions.
unlike in traditional maximum entropy models such as the fixed-position model above, these feature functions are dynamically defined when presented with each turn.
contrasting
train_2371
Recently developed statistical approaches are promising as they fully utilize the dialog history, and can incorporate priors from past usage data.
existing methodologies are either limited in their accuracy or their coverage, both of which hamper performance.
contrasting
train_2372
Furthermore, (Blair-Goldensohn, 2007) improved previous work with the use of parameter optimization, topic segmentation and syntactic parsing.
(Sporleder and Lascarides, 2008) showed that the training model built on a synthetic data set, like the work of (Marcu and Echihabi, 2002), may not be a good strategy since the linguistic dissimilarity between explicit and implicit data may hurt the performance of a model on natural data when being trained on synthetic data.
contrasting
train_2373
Very recently, (Hernault et al., 2011) introduced a semi-supervised work using structure learning method for discourse relation classification, which is quite relevant to our work.
they performed discourse relation classification on both explicit and implicit data.
contrasting
train_2374
On one hand, since building a hand-annotated implicit discourse relation corpus is costly and time consuming, most previous work attempted to use synthetic implicit discourse examples as training data.
(Sporleder and Lascarides, 2008) found that the model trained on synthetic implicit data has not performed as well as expected in natural implicit data.
contrasting
train_2375
This indicates that directly using synthetic implicit data as training data may not be helpful.
as shown in Section 1, we observe that in some cases explicit discourse relation and implicit discourse relation can express the same meaning with or without a discourse connective.
contrasting
train_2376
we could use the forwards-backwards algorithm for exact inference in this model (Sutton and McCallum, 2012).
forwards-backwards on a sequence containing T units costs O(T · M²), where M is the number of relations in our relation set.
contrasting
train_2377
Each node is seen two times, so the time complexity is linear in the number of nodes, which is at least O(2^n).
only nodes that have encountered at least one training instance are useful, and there are O(n × k) such nodes (where k is the size of the training set).
contrasting
train_2378
Dependency-based techniques can also be highly effective for ad hoc information retrieval (IR) (Park et al., 2011).
few path-based methods have been explored for ad hoc IR, largely because parsing large document collections is computationally prohibitive.
contrasting
train_2379
Syntactic language models for IR are a significant departure from this trend (Gao et al., 2004; Lee et al., 2006; Cai et al., 2007; Maisonnasse et al., 2007), but they limit queries and documents to parent-child relations.
(Park et al., 2011) present a quasi-synchronous translation model for IR that does not limit paths.
contrasting
train_2380
It also extends work for determining the variability of governor-dependent pairs (Song et al., 2008).
to this work, we apply linguistic features that are specific to catenae and dependency paths, and select among units containing more than two content-bearing words.
contrasting
train_2381
A dependency path is ordered and includes both word tokens and the relations between them.
a catena is a set of word types that may be ordered or partially ordered.
contrasting
train_2382
This enables us to explore semantic classification features and is highly accurate.
any dependency parser may be applied instead.
contrasting
train_2383
For example, if a noun is modified by two coordinated adjectives, there is a (symmetric) coordination relation between the two conjuncts and two (asymmetric) dependency relations between the conjuncts and the noun.
as there is no obvious linguistic intuition telling us which tree-shaped CS encoding is better and since the degree of freedom has several dimensions, one can find a number of distinct conventions introduced in particular dependency treebanks.
contrasting
train_2384
Most state-of-the-art dependency parsers can produce labeled edges.
the parsers produce only one label per edge.
contrasting
train_2385
To address the low-recall issue, recurring cue terms occurring within dictionary and encyclopedic resources can be automatically extracted and incorporated into lexical patterns (Saggion, 2004).
this approach is term-specific and does not scale to arbitrary terminologies and domains.
contrasting
train_2386
The results show high precision in both cases.
these approaches to glossary learning extract unrestricted textual definitions from open text.
contrasting
train_2387
Similarly to our approach, they drop the requirement of a domain corpus and start from a small number of (term, hypernym) seeds.
while Doubly-Anchored Patterns have proven useful in the induction of domain taxonomies (Kozareva and Hovy, 2010a), they cannot be applied to the glossary learning task, because the extracted sentences are not formal definitions.
contrasting
train_2388
Plain annotation is the most common form of annotation and it is the one we shall focus on in this paper.
other, more complex, forms of annotation are also possible and of interest.
contrasting
train_2389
In other words, we face the following dilemma: • On the one hand, we should choose a small set ANN (i.e., select few annotators to base our collective annotation on), as that will allow us to increase the (average) reliability of the annotators taken into account.
• we should choose a large set ANN (i.e., select many annotators to base our collective annotation on), as that will increase the amount of information exploited.
contrasting
train_2390
All these efforts face the problem of how to aggregate the information provided by a group of volunteers into a collective annotation.
by and large, the emphasis so far has been on issues such as experiment design, data quality, and costs, with little attention being paid to the aggregation methods used, which are typically limited to some form of majority vote (or taking averages if the categories are numeric).
contrasting
train_2391
The English not in (6) functions as an adverbial adjunct that modifies the main verb (see top part of Figure 6) and information would be lost if this were not represented at f-structure.
the same cannot be said of the negative affix in Turkish -the morphological affix is not an adverbial adjunct.
contrasting
train_2392
The copula takes a non-finite complement whose subject is raised to the matrix clause as a non-thematic subject of the copula.
in Urdu (Figure 8), the copula is a two-place predicate, assigning SUBJ and PREDLINK functions.
contrasting
train_2393
Distributional thesauri are now widely used in a large number of Natural Language Processing tasks.
they are far from containing only interesting semantic relations.
contrasting
train_2394
The only exceptions are the P@1 values for M and WM as reference.
it should be noted that values for both MAP and R-precision, which are more reliable measures than P@1, are identical for the two thesauri and the same references.
contrasting
train_2395
(2008): they use a manually-constructed lexicon for Hebrew in order to learn an HMM tagger.
this lexicon was constructed by trained lexicographers over a long period of time and achieves very high coverage of the language with very good quality, much better than could be achieved by our non-expert linguistics graduate student annotators in just a few hours.
contrasting
train_2396
Comparing the tag dictionary entries versus the test data, precision starts in the high 80%s and falls to the mid-70%s in all cases.
the differences in recall, shown in Table 2, are more interesting.
contrasting
train_2397
In their experiments, they, like us, found that type information is more valuable than token information.
they were able to see gains through the complementary effects of mixing type and token annotations.
contrasting
train_2398
Finally, additional raw text does improve performance.
using substantial amounts of raw text is unlikely to produce gains larger than only a few hours spent annotating types.
contrasting
train_2399
(3) In all four examples, the verb and the participating noun phrases Mitarbeiter (employee), Kollege (colleague) and Bericht (report) are identical, and the noun phrases are assigned the same case.
given that the stemmed output of the translation does not tell us anything about case features, in order to predict the appropriate cases of the three noun phrases, we either rely on ordering heuristics (such that the nominative NP is more likely to be in the beginning of the sentence (the German Vorfeld) than the accusative or dative NP, even though all three of these would be grammatical), or we need fine-grained subcategorization information beyond pure syntax.
contrasting