id: string (length 7–12)
sentence1: string (length 6–1.27k)
sentence2: string (length 6–926)
label: string (4 classes)
train_1400
(2009) presented methods to produce massive multilingual translation dictionaries from Web resources such as online lexicons and Wiktionaries.
while providing lexical resources on a very large scale for hundreds of thousands of language pairs, these do not encode semantic relations between concepts denoted by their lexical entries.
contrasting
train_1401
In (3), [in the park] places a well-defined situation (Yuri playing football) in a certain location.
in "The troops are based [in the park]", the same argument is obligatory, since being based requires a place to be based in.
contrasting
train_1402
Distinguishing between the two argument types has been discussed extensively in various formulations in the NLP literature, notably in PP attachment, semantic role labeling (SRL) and subcategorization acquisition.
no work has tackled it yet in a fully unsupervised scenario.
contrasting
train_1403
6 defines role preferences local to individual arguments (r_i, h_i).
an argument frame is a joint structure, with strong dependencies between arguments.
contrasting
train_1404
On the one hand, the semantic similarity between two nodes can be measured with any commonly adopted metric, such as cosine similarity and Jaccard coefficient (Baeza-Yates and Ribeiro-Neto, 1999).
the structural relation between a pair of nodes takes two forms as we have discussed earlier.
contrasting
train_1405
Nearly all semantic class taggers are trained using supervised learning with manually annotated data.
annotated data is rarely available for specialized domains, and it is expensive to obtain because domain experts must do the annotation work.
contrasting
train_1406
Supervised learning exploits manually annotated data, but must make do with a relatively small amount of training text because manual annotations are expensive.
seed-based bootstrapping exploits a small number of human-provided seeds, but needs a larger set of (unannotated) texts for training because the seeds produce relatively sparse annotations of the texts.
contrasting
train_1407
Many researchers are trying to use information extraction (IE) to create large-scale knowledge bases from natural language text on the Web.
the primary approach (supervised learning of relation-specific extractors) requires manually-labeled training data for each relation and doesn't scale to the thousands of relations encoded in Web text.
contrasting
train_1408
For each training sentence LUCHS first identifies subsequences of labeled words, and for each such labeled subsequence, LUCHS creates one or more seed phrases p. Typically, a set of seeds consists precisely of the labeled subsequences.
if the labeled subsequences are long and have substructure, e.g., 'San Remo, Italy', our system splits at the separator token, and creates additional seed sets from prefixes and postfixes.
contrasting
train_1409
Not knowing which lexicon will be most useful to the extractors, LUCHS generates several and lets the extractors learn appropriate weights.
since list similarities vary depending on the seeds, fixed thresholds are not an option.
contrasting
train_1410
The past methods either only learn parameters with one or two levels (e.g., in hierarchical Bayes), or require a significant amount of computation (e.g., in EM and in L1-regularized maxent), while also typically assuming a given hierarchy.
OntoUSP has to both induce the hierarchy and populate it, with potentially many levels in the induced hierarchy, starting from raw text with little user supervision.
contrasting
train_1411
On one hand, it should encourage creating abstract clusters to summarize intrinsic commonalities among the children.
this needs to be heavily regularized to avoid mistaking noise for the signal.
contrasting
train_1412
However, their agents tend to differ since in one case they are inducers, and in the other they are inhibitors.
ACTIVATE and INDUCE share similar agents since they both signify positive regulation.
contrasting
train_1413
On the other hand, ACTIVATE and INDUCE share similar agents since they both signify positive regulation.
"activate" tends to be used more often when the patient argument is a concrete entity (e.g., cells, genes, proteins), whereas "induce" and others are also used with processes and events (e.g., expressions, inhibition, pathways).
contrasting
train_1414
Syntax-based Statistical Machine Translation (SMT) systems allow the translation process to be performed in a more grammatically informed way, which provides decent reordering capability.
most of the syntax based systems construct the syntactic translation rules based on word alignment, which not only suffers from the pipeline errors, but also fails to effectively utilize the syntactic structural features.
contrasting
train_1415
Monolingual tree kernels achieve decent performance using the SSTs due to the rich exploration of syntactic information.
the sub-tree alignment task requires a strong capability of discriminating the sub-trees with their roots across adjacent generations, because those candidates share many identical SSTs.
contrasting
train_1416
Other than the factor of the amount of training data, this is also because the plain features in Table 3 are not as effective as those in Table 4, since they are trained on the FBIS corpus, which facilitates Table 4 more with respect to the domains.
the grammatical tags and syntactic tree structures are more accurate in the HIT corpus, which facilitates the performance of BTKs in Table 3.
contrasting
train_1417
aware and syntactically meaningful.
utilizing syntactic translational equivalences alone for machine translation loses the capability of modeling non-syntactic phrases (Koehn et al., 2003).
contrasting
train_1418
On the one hand, a good phrase pair often fails to be extracted due to a link inconsistent with the pair.
ITG pruning can be considered as phrase pair selection, and good ITG pruning like DPDi guides the subsequent ITG alignment process so that fewer links inconsistent with good phrase pairs are produced.
contrasting
train_1419
The major reason is that we did not perform any reordering or distortion during decoding with PTT.
in both t2s and s2t systems, the BLEU-4 score benefits of PRS were covered by the composed rules: both PTT+CS3 and PTT+C3 performed significantly better (p < 0.01) than PTT+PRS, and there are no significant differences when appending PRS to PTT+C3.
contrasting
train_1420
The definition of combinatory categorial grammar (CCG) in the literature varies quite a bit from author to author.
the differences between the definitions are important in terms of the language classes of each CCG.
contrasting
train_1421
On the practical side, we have corpora with CCG derivations for each sentence (Hockenmaier and Steedman, 2007), a wide-coverage parser trained on that corpus (Clark and Curran, 2007) and a system for converting CCG derivations into semantic representations (Bos et al., 2004).
despite being treated as a single unified grammar formalism, each of these authors uses a variation of CCG, and these variations differ primarily in which combinators are included in the grammar and the restrictions that are put on them.
contrasting
train_1422
NN, VBZ and DT) and supertags are CCG categories.
because the Petrov parser trained on CCGbank has no notion of Penn treebank POS tags, we can only evaluate the accuracy of the supertags.
contrasting
train_1423
If we obtained perfect accuracy on our new task then we would be removing all of the categories not chosen by the parser.
parsing accuracy will not decrease since the parser will still receive the categories it would have used, and will therefore be able to form the same highest-scoring derivation (and hence will choose it).
contrasting
train_1424
As Table 6 shows, this change translates into an improvement of up to 0.75% in F-score on Section [...]. The results in Tables 2, 4 and 6 are similar for all of the training algorithms.
the training times differ considerably.
contrasting
train_1425
Note that for some of the results presented here it may appear that the C&C parser does not lose speed when out of domain, since the Wikipedia and biomedical corpora contain shorter sentences on average than the news corpus.
by testing on balanced sets it is clear that speed does decrease, particularly for longer sentences, as shown in Table 9.
contrasting
train_1426
In short, we have shown that by combining PROMODES and PROMODES-H and finding the optimal threshold, the ensemble PROMODES-E gives better results than the individual models themselves and therefore manages to leverage the individual strengths of both to a certain extent.
can we pinpoint the exact contribution of each individual algorithm to the improved result?
contrasting
train_1427
(2009) achieve state-of-the-art accuracy.
these approaches dictate a particular choice of model and training regime.
contrasting
train_1428
(Alternately, they could keep the soft clustering, with the representation for a particular word token being the posterior probability distribution over the states.)
the CRF chunker in Huang and Yates (2009), which uses their HMM word clusters as extra features, achieves F1 lower than a baseline CRF chunker (Sha & Pereira, 2003).
contrasting
train_1429
This technique for turning a supervised approach into a semi-supervised one is general and task-agnostic.
we wish to find out if certain word representations are preferable for certain tasks.
contrasting
train_1430
We also looked at the classification accuracy for different parts of speech in Figure 5. We notice that, in the case of 10-fold cross validation, the performance is consistent across parts of speech.
when we only use 14 seeds all of which are adjectives, similar to (Turney and Littman, 2003), we notice that the performance on adjectives is much better than other parts of speech.
contrasting
train_1431
All these research works concentrated on attribute-based sentiment analysis.
the main difference with our work is that they did not sufficiently utilize the hierarchical relationships among a product's attributes.
contrasting
train_1432
The algorithm H-RLS studied in (Cesa-Bianchi et al., 2006) solved a hierarchical classification problem similar to the one we formulated above.
the H-RLS algorithm was designed as an online-learning algorithm, which is not suitable to be applied directly in our problem setting.
contrasting
train_1433
If d is set too small, important useful terms will be missed, which will limit the performance of the approach.
if d is set too large, the computing efficiency will be decreased.
contrasting
train_1434
Since it is a discriminative approach it is amenable to feature engineering, but needs to be retrained and tuned for each task.
generative models produce complete probability distributions of the data, and hence can be integrated with other systems and tasks in a more principled manner (see Sections 4.2.2 and 4.3.1).
contrasting
train_1435
Subjects are not distinguished from objects and nouns may not be actual arguments of the verb.
it is a simple baseline to implement with these freely available counts.
contrasting
train_1436
Ideally, we would produce all possible segmentations and alignments during training.
this has been shown to be infeasible for real-world data .
contrasting
train_1437
Previous attempts have dealt with the overfitting problem by limiting the maximum phrase length (DeNero et al., 2006;Marcu and Wong, 2002) and by smoothing the phrase probabilities by lexical models on the phrase level (Ferrer and Juan, 2009).
(DeNero et al., 2006) experienced similar over-fitting with short phrases due to the fact that the same word sequence can be segmented in different ways, leading to specific segmentations being learned for specific training sentence pairs.
contrasting
train_1438
As discussed earlier, the run-time requirements for computing all possible alignments is prohibitive for large data tasks.
we can approximate the space of all possible hypotheses by the search space that was used for the alignment.
contrasting
train_1439
We will refer to this alignment as the full alignment.
to the method described in Section 4.1, phrases are weighted by their posterior probability in the word graph.
contrasting
train_1440
For example, as suggested in (Liang et al., 2008), adjacent labels do not provide strong information in POS tagging.
the applicability of this idea to other NLP tasks is still unclear.
contrasting
train_1441
In general, approximate algorithms have the advantage of speed over exact algorithms.
both types of algorithms are still widely adopted by practitioners, since exact algorithms have merits other than speed.
contrasting
train_1442
In HMMs, the score function can be written as [...]; in perceptrons, on the other hand, it is given as [...], where we explicitly distinguish the unigram feature function φ^1_k and the bigram feature function φ^2_k.
comparing the form of the two functions, we can see that our discussion on HMMs can be extended to perceptrons by substituting [...]. Implementing the perceptron algorithm is not straightforward.
contrasting
train_1443
We consider that this is because the transition information is crucial for the two tasks, and the assumption behind CARPEDIEM is violated.
the proposed algorithms performed reasonably well for all three tasks, demonstrating the wide applicability of our algorithm.
contrasting
train_1444
Our objective is to find the smallest supertag grammar (of tag bigram types) that explains the entire text while obeying the lexicon's constraints.
the original IP method of Ravi and Knight (2009) is intractable for supertagging, so we propose a new two-stage method that scales to the larger tagsets and data involved.
contrasting
train_1445
For the CCGbank test data, MIN1 yields 2530 tag bigrams.
a second stage is needed since there is no guarantee that G_min1 can explain the test data: it contains tags for all word bigram types, but it cannot necessarily tag the full word sequence.
contrasting
train_1446
IP-minimization identifies a smaller set of tags that better matches the gold tags; this emerges because other determiners and prepositions evoke similar, but not identical, supertags, and the grammar minimization pushes (but does not force) them to rely on the same supertags wherever possible.
the proportions are incorrect; for example, the tag assigned most frequently to in is ((S\NP)\(S\NP))/NP though (NP\NP)/NP is more frequent in the test set.
contrasting
train_1447
An important property of CRFs is their ability to handle large and redundant feature sets and to integrate structural dependency between output labels.
even for simple linear chain CRFs, the complexity of learning and inference [...].
contrasting
train_1448
Using only unigram features {f_{y,x}}, with (y,x) ∈ Y × X, results in a model equivalent to a simple bag-of-tokens position-by-position logistic regression model.
bigram features {f_{y',y,x}}, with indices ranging over Y² × X, are helpful in modelling dependencies between successive labels.
contrasting
train_1449
Similar dominance mechanisms have been employed in various tree description formalisms (Rambow et al., 1995;Rambow et al., 2001;Candito and Kahane, 1998;Kallmeyer, 2001; Guillaume and Perrier, 2010) and TAG extensions (Becker et al., 1991;Rambow, 1994a).
the prime motivation for this survey is another grammatical formalism defined in the same article: multiset-valued linear indexed grammars (Rambow, 1994b, MLIGs), which can be seen as a low-level variant of UVG-dls that uses multisets to emulate unfulfilled dominance links in partial derivations.
contrasting
train_1450
Production factorization is very similar to the reduction of a context-free grammar production into Chomsky normal form.
in the LCFRS case some productions might not be reducible to r = 2, and the process stops at some larger value for r, which in the worst case might as well be the rank of the source production (Rambow and Satta, 1999).
contrasting
train_1451
This is usually done by exploiting standard dynamic programming techniques; see for instance (Seki et al., 1991).
the polynomial degree in the running time is a monotonically strictly increasing function that depends on both the rank and the fan-out of the productions in the grammar.
contrasting
train_1452
Apart from the core rules given in Figure 1, some versions of CCG also use rules derived from the S and T combinators of combinatory logic, called substitution and type-raising, the latter restricted to the lexicon.
since our main point of reference in this paper, the CCG formalism defined by Vijay-Shanker and Weir (1994), does not use such rules, we will not consider them here, either.
contrasting
train_1453
We have followed Hockenmaier and Young (2008) in classifying instances of generalized forward composition as harmonic if the innermost slash of the secondary argument is forward and crossed if it is backward.
generalized forward composition is sometimes only accepted as harmonic if all slashes of the secondary argument are forward (see e.g.
contrasting
train_1454
Lapata and Barzilay (2005) and Barzilay and Lapata (2008) both show the effectiveness of entity-based coherence in evaluating summaries.
fewer than five automatic summarizers were used in these studies.
contrasting
train_1455
Since abstractive summaries would have markedly different properties from extracts, it would be interesting to know how well these sets of features would work for predicting the quality of machineproduced abstracts.
since current systems are extractive, such a data set is not available.
contrasting
train_1456
In other words, if a node is not part of the context itself, we assume it has no effect on its neighbors' classes.
if i is in class 1 its belief about its neighbor j is determined by their mutual lexical similarity.
contrasting
train_1457
If this similarity is close to 1 it indicates a stronger tie between i, j.
if i, j are not similar, i's probability of being in class 1 should not affect that of j.
contrasting
train_1458
They propose a scheme which first identifies and assigns categories to the opinion segments as reporting, judgment, advice, or sentiment; and then links the opinion segments with each other via rhetorical relations including contrast, correction, support, result, or continuation.
in contrast to our scheme and other schemes, instead of marking expression boundaries without any restriction they annotate an opinion segment only if it contains an opinion word from their lexicon, or if it has a rhetorical relation to another opinion segment.
contrasting
train_1459
Later on, it may also include its own set of resources specifically engineered for the target language as a performance improvement.
keeping the systems up-to-date would require as much effort as the number of languages.
contrasting
train_1460
Otherwise, it is not actionable.
quality prediction or confidence estimation at the sentence or word level best fits a scenario in which automated translation is only a part of a larger pipeline.
contrasting
train_1461
In the experiments presented in this paper, we use BLEU scores (Papineni et al., 2002) as training labels.
they can be substituted with any of the proposed MT metrics that use human-produced references to automatically assess translation quality (Doddington, 2002; Lavie and Agarwal, 2007).
contrasting
train_1462
(Specia et al., 2009a) and (Specia et al., 2009b).
to date most of the research has focused on better confidence measures for MT, e.g.
contrasting
train_1463
Our research is more similar in spirit to the third strand.
we use outputs and features from the TM explicitly; therefore instead of having to solve a regression problem, we only have to solve a much easier binary prediction problem which can be integrated into TMs in a straightforward manner.
contrasting
train_1464
With this boost, the precision of the baseline system can reach 0.85, demonstrating that a proper thresholding of fuzzy match scores can be used effectively to discriminate the recommendation of the TM hit from the recommendation of the SMT output.
using the TM information only does not always find the easiest-to-edit translation.
contrasting
train_1465
fuzzy match score is 0.7 or more).
a misleading SMT output should not be recommended if there exists a poor but useful TM match (e.g.
contrasting
train_1466
For example, Chinese NE boundaries are especially difficult to identify because Chinese is not a tokenized language.
English NE boundaries are easier to identify due to capitalization clues.
contrasting
train_1467
Lastly, the bilingual type re-assignment factor proposed in Eq (2) is derived as follows: [...]. As Eq (4) shows, both the Chinese initial NE type and the English initial NE type are adopted to jointly identify their shared NE type RType.
the monolingual candidate certainty factors in Eq (2) indicate the likelihood that a re-generated NE candidate is the true NE given its originally detected NE.
contrasting
train_1468
This is due to the fact that maximizing likelihood does not imply minimizing the error rate.
with additional mapping constraints from the aligned sentence of another language, the alignment module could guide the searching process to converge to a more desirable point in the parameter space; and these additional constraints become more effective as the seed-corpus gets smaller.
contrasting
train_1469
Comparing one thing with another is a typical part of human decision making process.
it is not always easy to know what to compare and what are the alternatives.
contrasting
train_1470
The same techniques can be applied to comparative question identification and comparator mining from questions.
their methods typically can achieve high precision but suffer from low recall (Jindal and Liu, 2006b) (J&L).
contrasting
train_1471
Of the taxonomies presented by purely linguistic studies, our categories are most similar to those proposed by Warren (1978), whose categories (e.g., MATERIAL+ARTEFACT, OBJ+PART) are generally less ambiguous than Levi's.
to studies that claim the existence of a relatively small number of semantic relations, Downing (1977) presents a strong case for the existence of an unbounded number of relations.
contrasting
train_1472
We tested the relation set with an initial inter-annotator agreement study (our latest interannotator agreement study results are presented in Section 6).
the mediocre results indicated that the categories and/or their definitions needed refinement.
contrasting
train_1473
This idea is a modification of the selectional preference view of Wilks.
by using bigram counts over verb-noun pairs, Krishnakumaran and Zhu (2007) lose a great deal of information compared to a system extracting verb-object relations from parsed text.
contrasting
train_1474
The latest developments in the lexical acquisition technology will in the near future enable fully automated corpusbased processing of metaphor.
there is still a clear need for a unified metaphor annotation procedure and for the creation of a large publicly available metaphor corpus.
contrasting
train_1475
Parsing is a sentence-level process.
in many cases two discourse arguments do not occur in the same sentence.
contrasting
train_1476
Simple-Expansion Min-Expansion could, to some degree, describe the syntactic relationships between the connective and arguments.
the syntactic properties of the argument pair might not be captured, because the tree structure surrounding the argument is not taken into consideration.
contrasting
train_1477
Because a named entity should correspond to a node in the parse tree, strong evidence about either aspect of the model should positively impact the other aspect.
designing joint models which actually improve performance has proven challenging.
contrasting
train_1478
Furthermore, parsing accuracy degrades unless sufficient amounts of labeled training data from the same domain are available (e.g., Gildea, 2001;Sekine, 1997), and thus we need larger and more varied annotated treebanks, covering a wide range of domains.
there is a bottleneck in obtaining annotation, due to the need for manual intervention in annotating a treebank.
contrasting
train_1479
Dependency elements with frequency below the lowest threshold have lower attachment scores (66.6% vs. 90.1% LAS), showing that simply using a complete rule helps sort dependencies.
frequency thresholds have fairly low precision, i.e., 33.4% at their best.
contrasting
train_1480
To obtain diversified translation outputs, most of the current system combination methods require multiple translation engines based on different models.
this requirement cannot be met in many cases, since we do not always have the access to multiple SMT engines due to the high cost of developing and tuning SMT systems.
contrasting
train_1481
Following Macherey and Och (2007)'s work, proposed a feature subspace method to build a group of translation systems from various different sub-models of an existing SMT system.
's method relied on the heuristics used in feature sub-space selection.
contrasting
train_1482
Our formulation can be taken as a special instance of the structural learning framework in (Tsochantaridis et al., 2005).
they concentrate on more complicated label structures as for sequence alignment or parsing.
contrasting
train_1483
As shown earlier, for those two setups the structural SVMs perform better than the flat approach.
for the tree hierarchies of Brown that we deformed or flattened, and also BNC and Syracuse, either or both of the two balance scores tend to be lower, and no improvement has been obtained over the flat approach.
contrasting
train_1484
This may indicate that a further exploration of the relation between tree balance and the performance of structural SVMs is warranted.
high visual balance and distribution scores do not necessarily imply high performance of structural SVMs, as very flat trees are also visually very balanced.
contrasting
train_1485
It keeps an alignment score that is over 0.99 (the maximum is 1.00) toward the plen matrix, and still has visible block patterns.
the IC-jng-word-bnc significantly adjusts the distance entries, has a much lower alignment score with the plen matrix, and doesn't reveal apparent blocks.
contrasting
train_1486
Together, the SMS corpus and its transcription constitute parallel corpora aligned at the message-level.
in order to learn pieces of knowledge from these corpora, we needed a string alignment at the character-level.
contrasting
train_1487
This is the case, for instance, for the SMS form kozer ([koze]) and its standard transcription causé ("talked"), as illustrated by Figure 2.
from a linguistic standpoint, alignment (1) is preferable, because corresponding graphemes are aligned on their first character.
contrasting
train_1488
Evaluated by ten-fold cross-validation, the system seems efficient, and the performance in terms of BLEU score and WER is quite encouraging.
the SER remains too high, which emphasizes the fact that the system needs several improvements.
contrasting
train_1489
The discriminative approach (Table 3) is flexible enough to utilize all kinds of alignments.
the M-M models perform clearly better than 1-1 models.
contrasting
train_1490
Most current event extraction systems rely on local information at the phrase or sentence level.
this local context may be insufficient to resolve ambiguities in identifying particular types of events; information from a wider scope can serve to resolve some of these ambiguities.
contrasting
train_1491
However, it does not recognize the trigger "stepped" (trigger of End-Position) because in the training corpus "stepped" does not always appear as an End-Position event, and local context does not provide enough information for the MaxEnt model to tag it as a trigger.
in the document that contains related events like Start-Position, "stepped" is more likely to be tagged as an End-Position event.
contrasting
train_1492
[Table: events co-occurring with die events with conditional probability > 10%] As there are 33 subtypes, there are potentially 33·32/2 = 528 event pairs.
only a few of these appear with substantial frequency.
contrasting
train_1493
We will not use information in the conflict table to infer the event type or argument/roles for other event mentions, because we cannot confidently resolve the conflict.
the event type and argument/role assignments in the conflict table will be included in the final output because the local confidence for the individual assignments is high.
contrasting
train_1494
The dialogue should be easy and intuitive, otherwise the user will not find it worth the effort and instead prefer to use manual controls or to speak to a human.
when designing an in-vehicle dialogue system there is one more thing that needs to be taken into consideration, namely the fact that the user is performing an additional, safety critical, task -driving.
contrasting
train_1495
Instead, the dialogue system should be able to either pause until the workload is low or change topic and/or domain, and then resume where the interruption took place.
resumption of an interrupted topic needs to be done in a way that minimizes the risk that the cognitive workload increases again.
contrasting
train_1496
Traditional accounts of learning typically rely on linguistic annotation (Zettlemoyer and Collins, 2009) or word distributions (Curran, 2003).
we present an apprenticeship learning system which learns to imitate human instruction following, without linguistic annotation.
contrasting
train_1497
The vectors in different languages are first mapped to a common space using an initial bilingual dictionary, and then compared.
there is no previous work that uses the VSM to compute sense similarity for terms from parallel corpora.
contrasting
train_1498
There has been a lot of work (more details in Section 7) on applying word sense disambiguation (WSD) techniques in SMT for translation selection.
WSD techniques for SMT do so indirectly, using source-side context to help select a particular translation for a source rule.
contrasting
train_1499
It could be that the gains we obtained come simply from biasing the system against such rules.
the results in Table 6 show that this is unlikely to be the case: features that just count context words help very little.
contrasting