id         stringlengths   7-12
sentence1  stringlengths   6-1.27k
sentence2  stringlengths   6-926
label      stringclasses   4 values
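The rows below each follow this four-field schema. As a minimal sketch of how such records could be read and checked against the schema (assuming the split is stored as JSON Lines with exactly these field names; the file name train.jsonl and the reader function are illustrative assumptions, not part of the source listing):

    import json
    from collections import Counter

    def read_rows(path="train.jsonl"):  # hypothetical file name, assumed JSON Lines layout
        """Yield one record per line with the four fields described above."""
        with open(path, encoding="utf-8") as f:
            for line in f:
                row = json.loads(line)
                # id, sentence1, sentence2, label are the columns from the schema header
                assert {"id", "sentence1", "sentence2", "label"} <= set(row)
                yield row

    if __name__ == "__main__":
        # The schema states that label is a string class with 4 values; only
        # "contrasting" is attested in the rows shown here, so no label set is assumed.
        print(Counter(r["label"] for r in read_rows()))

Running this over the split would, for instance, count how often each of the 4 label classes occurs.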
train_8400
The classification accuracy at 10K is 75.07%.
after that, the pace of improvement is very gradual.
contrasting
train_8401
For supervised methods, summarization is often regarded as a classification task or a sequence labeling task at sentence level, and many supervised learning algorithms have been investigated including Hidden Markov Models (Conroy and O'leary, 2001), Support Vector Regression (You et al., 2011), Factor Graph Model (Yang et al., 2011), etc.
such a supervised learning paradigm often requires a large amount of labeled data, which are not available in most cases.
contrasting
train_8402
(Wang et al., 2011, 2012) proposed a Bayesian interpretation to assess tweet credibility.
it remains as a preliminary approach due to the linear assumption made in the iterative algorithm of the basic fact-finding scheme.
contrasting
train_8403
Combinatory Categorial Grammar (CCG) is an expressive grammar formalism which is able to capture long-range dependencies.
building large and wide-coverage treebanks for CCG is expensive and time-consuming.
contrasting
train_8404
the CCGbank (Hockenmaier and Steedman, 2007).
building large and wide-coverage treebanks for CCG is expensive and time-consuming.
contrasting
train_8405
The generative process generates grammar rules independently given their parents, without regard to the lexical information.
the constituents and contexts have been proven useful for grammar induction (Klein and Manning, 2002;Headden III et al., 2009).
contrasting
train_8406
The lexicon for atomic categories remains fixed after the initial lexicon (0) has been created.
the POS tags may acquire more syntactic categories in the lexicon generation stage.
contrasting
train_8407
One additional induction step they used is the "derived" induction step, in which adjacent constituents that can be derived from the existing lexicon are combined.
their experiments do not show significant improvement of this lexicon generation method, so we omit this step in our experiments.
contrasting
train_8408
For instance, for span 〈2, 4〉 of tree in Figure 1, we model P(VBD_RB|S) rather than P(VBD_RB|B).
this category-dependent boundary model performs poorly in experiments (not reported in this paper).
contrasting
train_8409
This practice may reduce the data sparsity problem caused by directly modelling words.
this may also lose useful lexical information.
contrasting
train_8410
The graph sequence d is represented by its ordered graphs g^(j); this order represents the order of the graphs in an observed sequence.
the order of graphs g^(j,k) is the order of graphs in the artificial graph sequences, and there can be various artificial graph sequences between graphs g^(j) and g^(j+1).
contrasting
train_8411
Regarding inference, our method may seem related to the coreference resolution research based on Markov Logic Networks (MLNs) (Poon and Domingos, 2008;Song et al., 2012).
previous MLN-based work on coreference resolution does not incorporate inference rules based on world knowledge.
contrasting
train_8412
Traditional general readability methods (Dubay, 2004) have been applied to several problem tasks such as matching books with grade levels (Collins-Thompson and Callan, 2005;Fry, 1969).
the problem of readability has not been well explored in Information Retrieval (IR) (Kim et al., 2012).
contrasting
train_8413
The Flesch reading ease score for this sentence is 62.11, which according to the score is not a difficult sentence.
the sentence carries a deep technical meaning which requires domain-specific knowledge for proper comprehension.
contrasting
train_8414
In (Nakatani et al., 2010) the authors used Wikipedia to build a list of some technical terms.
our proposed framework in this paper does not require an ontology or seed concepts, which can be regarded as a major innovation.
contrasting
train_8415
Language modeling approaches cannot capture domain-specific concepts in a domain (Zhao and Kan, 2010).
our method does not need any annotated data.
contrasting
train_8416
Hence a document comprising many domain-specific terms will be difficult to read, and if the terms are not related to each other (low cohesion) in the same document then the reader will face difficulties in relating different concepts of a domain (Yan et al., 2006).
the computation of document scope and cohesion in (Yan et al., 2006) is accomplished using an ontology tree which requires an ontology for every domain.
contrasting
train_8417
In (Park et al., 2002), they named a domain-specific term extraction scheme as Degree of Domain-specificity.
their method deals with a completely different problem task.
contrasting
train_8418
• Wordnet-head-hypernyms-match: Some closely related words (like epistaxis and hemorrhage) do not share any common synset.
if we consider the parents (or hypernyms) of the synsets of such words in the Wordnet hierarchy, we can see that the two words are similar.
contrasting
train_8419
Thus, there are clear potential benefits to fine-grained citation analysis; and a number of case studies have been published that demonstrate this potential (Nanba et al., 2004;Teufel et al., 2006b) [CEPF] .
fine-grained citation analysis is currently not widely used in applications that access and analyze the scientific literature.
contrasting
train_8420
This means that the development cycle for a citation classifier must be started from scratch for each new application.
to this prior work, we base our work on a standard classification scheme for citations from information science, the classification scheme of Moravcsik and Murugesan (1975) [CERF] (henceforth MM).
contrasting
train_8421
The NER features we extract are related only to the NLP domain.
this approach for acquiring named entities is not domain dependent and can be used to develop a reasonably efficient NER system using lists of tools or resources from any domain.
contrasting
train_8422
Such syntax-based SMT systems can automatically extract larger rules, and learn syntactic reorderings for translation (Yamada and Knight, 2001;Venugopal and Zollmann, 2006;Galley et al., 2004;Chiang, 2007;Zollmann et al., 2008;DeNero et al., 2009;Shen et al., 2010;Genzel, 2010).
many problems remain unsolved.
contrasting
train_8423
Our syntax-driven approach to rule extraction is inspired by (Chiang, 2007, 2010), while the canonical grammar approach is based on (Galley et al., 2004, 2006).
we induce synchronous graph grammars between surface form and meaning representation, instead of transfer rules between source and target form.
contrasting
train_8424
As with other translation work using synchronous tree grammars, such as synchronous TSG (Chiang, 2010) and synchronous TAG (DeNeefe and , our SHRGs can also be applied in both directions.
none of these SMT approaches use an intermediate semantic representation.
contrasting
train_8425
This generates a correct statement, knowledge of which is the point of the question.
if (7b) is used as an inverted question, "Spain" is returned by DeepQA as the first answer.
contrasting
train_8426
This function enables visually impaired people to use e-mail, read news, view Web pages, and operate other complex applications.
the existing screen readers use some distinctive explanations that do not make a target kanji easily identifiable, such as "aya for 1 aya-ori ( , twill)" for " (aya)."
contrasting
train_8427
Since the remaining two identified the correct kanji by using distinctive explanations generated by the second step, we confirmed the effectiveness of the proposed two-step method.
only six subjects identified kanji from distinctive explanations of the screen reader.
contrasting
train_8428
For " (kei)," our system used ambiguous words, and most subjects failed to identify the kanji 7 .
the screen reader achieved a high identification rate for this kanji, by using the distinctive explanation "Write on top of (tsu-chi, soil)."
contrasting
train_8429
The rule-based system described in Section 2.3 achieves good results.
it, like other rule-based methods, has some shortcomings.
contrasting
train_8430
In some cases (Lemma, Phenotype candidates, NP start and NP end, and Physiology) ignoring the feature causes small improvement in precision or recall.
the F-score is always less than the F-score of final results.
contrasting
train_8431
NP boundaries and semantic types are features used by both methodologies, so the errors made by MetaMap have effects on the performance of each system.
the rule-based system is more dependent on MetaMap output, and errors in MetaMap output change the results completely.
contrasting
train_8432
The word "associations" is the head of this phrase, so the semantic type [Mental Process] is assigned to it and it is not tagged as a phenotype name by the rule-based system.
there are some cases in which MetaMap assigns the correct semantic type to a phrase and finds a good boundary for a phenotype name, but the machine learning method does not mark it as a phenotype.
contrasting
train_8433
One possible reason may be due to the generation of the Fanse scheme over the NP-enriched Penn Treebank.
the C&C parser was trained over a non-enriched version of CCGbank where all the noun phrases are right-branching.
contrasting
train_8434
We induced representations of d = 40 dimensions for input vocabularies of |V_in^en| = 43,614 and |V_in^de| = 50,110 words (filtering out words which occur fewer than five times in our dataset).
to speed up training, we learn on a subset of training sequences, choosing the 3,000 most frequent words in en and de for their output vocabularies V_out^en and V_out^de, respectively.
contrasting
train_8435
As a representative to linguistic studies on event anaphora, Asher (1993) proposed a discourse representation theory to resolve the references to events.
no computational system was proposed in his work.
contrasting
train_8436
On one hand, since the antecedent candidate is an event trigger and the anaphor is a pronoun, both carry little obvious information about their own.
the event anaphor and candidate pair in event pronoun resolution consists of a predicate and a pronoun.
contrasting
train_8437
For example, one day cricket match is <<one-day>-<cricket-match >> and South Indian Association is <<South-Indian>-Association>.
since the surface forms for adjectival usage and compounding forms in English are the same, one may have ambiguous expressions such as South sea route.
contrasting
train_8438
a 'tree' and sākhā 'branch' is part-of (avayava-avayavi).
in all the three cases instead of specifying these deeper relations, relation between the components is expressed through the genitive case suffix in the paraphrase of these compounds as rājñah .
contrasting
train_8439
In case of English-Hindi language pairs, it was observed that in 59% of cases an English Noun compound can be translated into genitive construction in Hindi (Paul et al., 2010).
for other NLP tasks such as information extraction, question answering etc., genitive relation will not be sufficient and one needs to look for deeper semantic relation.
contrasting
train_8440
In recent years, the research field of sentiment analysis has focused on analyzing this form of textual-information, particularly opinions or sentiments expressed by internet users.
given the international nature of the web and online shopping, opinions in a user's mother language may not be available.
contrasting
train_8441
One way to solve the WTD problem is to calculate the sum of association scores of pairs among translation of the target word and all its surrounding words' translations, and then select the one with the highest score.
since the different surrounding words have different amounts of influence on the target word, it is necessary to add some weighting factors (e.g., word distance).
contrasting
train_8442
These approaches rely on large parallel corpora to train a WSD classifier.
for some language pairs (e.g., Japanese-Chinese), such corpora are not available.
contrasting
train_8443
That is why DeSCoG+ does not perform significantly better than DeSCoG.
the fact that the recall of DeSCoG is nearly unchanged when the training data size is over 5000 suggests that there are some structures that the alignment phase missed even when more training data were used.
contrasting
train_8444
Secondly, incorrect syntactic parses could lead to incorrect (or incomplete) MRs. For example, with the incorrect syntactic parse shown in Figure 8, it is very difficult to build the correct MR answer(A,(state(B),next_to(A,B),const(B,stateid(arizona)))) because one of the two words state and Arizona has to receive the meaning next_to(A,B), which is very unlikely.
parsers which learn parsing syntax and semantics simultaneously could easily overcome this problem when they recognize that the word border should be a head of two dependents.
contrasting
train_8445
Looking at the sizes, the optimal seed word list length seems to be 5,000, because that is the one that obtains the largest corpus.
the type of documents from which the corpus has been built is something to take into account, which is shown in fig.
contrasting
train_8446
The optimal word-combination length to send to the APIs seems to be 2, because it obtains the largest and most varied corpus with the least number of PDFs.
if more than 100-150 million words are needed, crawling is the way to go: we have collected a corpus of a size and website variety comparable with those obtained via search engines, with much fewer PDFs and the potential to get much bigger.
contrasting
train_8447
EBMT systems can usually only handle a moderate size example base in the matching stage.
using a large example base is important to ensure high quality MT output.
contrasting
train_8448
One thus has to strive for designing a similarity function which produces rankings as close as possible to the ranking as computed by the LD computation.
this proves difficult because standard IR similarity scores work on the principle of material similarity, i.e.
contrasting
train_8449
Simple retrieval methods such as raw term frequency (tf ) or tf-idf do not perform well for ASM.
to standard IR, stopword removal and stemming decrease performance.
contrasting
train_8450
This method works well for non-standard tokens that are generated by a few simple operations such as substitution, insertion, and deletion of letters of the standard words.
it cannot handle words such as 'ate' (meaning 'eight'), where non-standard tokens are created based on the phoneme similarity with the standard words, or 'bday' (meaning 'birthday'), which involves too many operations.
contrasting
train_8451
Character-block level sequence labeling can successfully normalize non-standard tokens such as 'gf ', 'prof', and 'pro' (to 'girlfriend', 'professor', and 'problem' or 'probably').
these cases are very hard for the MT systems to tackle.
contrasting
train_8452
One important reason is the training data size -ours is much smaller.
our system uses fewer candidates and is more efficient than (Liu et al., 2012), which considers all the English words with the same initial character.
contrasting
train_8453
We can see that the performance of the character-block level MT improves steadily as the training data grows.
the training size effect on the character-level two-step MT is rather small.
contrasting
train_8454
Under the framework of extractive summarization, it is important to acquire the relationship between sentences and aspects for sentence selection.
in most existing HDP models, the sentence level is disregarded and we cannot directly get the aspect distribution of sentences.
contrasting
train_8455
If the pseudo trigger mention is similar to one STM, their similarity will be high.
instead of calculating the average similarity, we calculate the maximum similarity to identify whether the pseudo trigger mention should be filtered out, where n is the number of STMs for the pseudo trigger mention p. The global discrimination comes from the probability of a pseudo trigger mention belonging to the set of true trigger mentions in the training set.
contrasting
train_8456
Since domain dictionaries are frequently available for NLP related projects (e.g., technical manual translation), they can thus provide big help in real applications.
the above dictionary related feature utilized in discriminative approaches (Low et al., 2005) cannot be directly adopted by a generative model.
contrasting
train_8457
Therefore, this candidate-feature is associated with each candidate of the position-tag.
if no dictionary word covers this character, then TM will be set to "Inapplicable" regardless of which tag is assigned to "学" (i.e., we do not want to disturb the original model in this case).
contrasting
train_8458
Equation (3) weighs the tag matching factor and the character-tag trigram factor equally.
it is reasonable to expect that they should be weighted differently according to their contribution.
contrasting
train_8459
Afterwards, they further proposed an integrated model to integrate generative and discriminative approaches, as these two approaches complement each other.
dictionary information has been utilized in the discriminative approach in the previous works of (Low et al., 2005).
contrasting
train_8460
On the other hand, dictionary information has been utilized in the discriminative approach in the previous works of (Low et al., 2005).
they focus on improving the in-domain word segmentation accuracy, while we investigate how the domain invariant feature (based on dictionary information) helps for cross-domain tasks.
contrasting
train_8461
First, we do not simply add the dictionary matching information as an additional feature under the Maximum Entropy framework.
we derive a new generative model with dictionary information starting from the problem formulation, and solve the problem in a principled way.
contrasting
train_8462
Recent study shows that parsing accuracy can be largely improved by the joint optimization of part-of-speech (POS) tagging and dependency parsing.
the POS tagging task does not benefit much from the joint framework.
contrasting
train_8463
(Hatori et al., 2011) propose the first transition-based joint model for Chinese POS tagging and unlabeled dependency parsing and gain large improvement in the parsing accuracy.
their joint models only slightly improve the tagging accuracy over a sequential tagging model.
contrasting
train_8464
The representative methods include TagLDA (Krestel et al., 2009;Si and Sun, 2009) and Content Relevance Model (CRM) (Iwata et al., 2009).
these methods usually suffer from the over-generalization problem.
contrasting
train_8465
WAM-based methods, formalizing trigger probabilities at word level, consider each single word in document and project from document content to keyphrases.
the coverage of document themes should be appreciated at topic level, which is beyond the power of WAM-based methods.
contrasting
train_8466
On the one hand, parsing barely based on bi-lexical features without POS tags hurts performance dramatically.
if we tag the whole sentence first and then do dependency parsing, then syntactic features which are demonstrated to be effective for tagging disambiguation cannot be utilized.
contrasting
train_8467
These results demonstrate that our training method can bias the joint model towards the desired task.
as we try different losses, tagging accuracy rarely changes.
contrasting
train_8468
This may be due to the fact that our joint decoder is deterministic and thus suffers more from error propagation compared with beam search based or dynamic programming based decoders.
our joint method can also be enhanced with beam search and we leave it to future work.
contrasting
train_8469
Reading is a common activity that is part of the process of information transfer between humans.
despite the important role it has played in recent history and its current wide use, this process of information transfer is not well understood.
contrasting
train_8470
For example, the factors to measure success when reading a document with the objective of writing a review or preparing a presentation are clearly different.
for the sake of simplifying and unifying our method to measuring task performance, we will resort to measuring the level of understanding of subjects after reading a text.
contrasting
train_8471
Although this also involves a disambiguation of translations, their work is not directly comparable to ours, since they do not strictly use the word senses encoded in Wiktionary but define them based on the translations shared across multiple languages.
to that, we aim at exploiting a wide range of lexical-semantic knowledge and therefore need to rely on the word senses actually encoded in Wiktionary.
contrasting
train_8472
The disambiguation of translations hence seems to be more difficult for our raters.
the κ scores are well above .67 and therefore allow us to draw tentative conclusions (Artstein and Poesio, 2008).
contrasting
train_8473
Our DT-based expansion technique has no notion of dimensions since it works on the word level, and thus does not suffer from this kind of sampling error that is inevitable when representing a large vocabulary with a small fixed number of dimensions or topics.
while vector-space models do a good job at ranking candidates according to their similarity, 2 they fail to efficiently generate a top-ranked list of possible expansions: due to its size, it is infeasible to rank the full vocabulary every time.
contrasting
train_8474
The lexical expansions shown in Figure 1 were generated by the same DT used in our experiments.
for the general case, we make no assumptions about the method that generates the lexical expansions, which could just as easily come from, say, translations via bridge languages, paraphrasing systems, or lexical substitution systems.
contrasting
train_8475
Nov for November) or use of punctuation.
to their approach, where they discarded these cases, we used minimum edit distance (MED) to determine the re-generated candidate closest to the original.
contrasting
train_8476
The highest-ranked ones were believable as reliable instances of dependencies: infinitival to in VB TO seems likely to be often correct, as does existential there in the first three cases.
it is quite possible that many of the reliable ones such as PRP VBZ are actually poor at distinguishing between positive and negative examples by virtue of their frequency -they may occur equally often with both, in the same way that words like the are useless in general text classification.
contrasting
train_8477
Choosing features by a reliability-based measure did not prove useful.
this may be related to systematic choices made by the unsupervised parser that were different from the gold standard's choices, rather than bad parsing; an option for aligning systematic choices is the HamleDT approach and software of Zeman et al.
contrasting
train_8478
We employ two generative models (Sections 2.1, 2.2) to accomplish the first task of debate post classification and then generate the data for linguistic style experiments in Section 2.3.
before proceeding, we briefly review related work on debates.
contrasting
train_8479
In (Somasundaran and Wiebe, 2009), opinions/polarities which were correlated with a debate-side were used to classify a post as for or against.
this thread of research does not model agreements and disagreements in debates.
contrasting
train_8480
For most style dimensions, mutual accommodation across agreeing pairs is more than that for disagreeing pairs.
for 4 style dimensions, negation, exclusive, discrepancy, and 2nd person pronoun, we find the trend reversed (values marked in bold).
contrasting
train_8481
Theoretically, considering the probabilistic expression in (10): when the expression is greater than 0, we have the following two exclusive cases, Case 1 and Case 2.
both cannot happen simultaneously.
contrasting
train_8482
For span iSk, only n_k can interact with the outside of the span: n_k can be modified by n_h (h < i), and n_k can modify n_l (k < l).
n_j (i ≤ j < k) cannot be modified by n_h or modify n_l.
contrasting
train_8483
They have proven effective as components of a wide range of NLP applications, and in the modelling of cognitive operations such as judgements of word similarity (Sahlgren, 2006;Turney and Pantel, 2010; Baroni and Lenci, 2010), and the brain activity elicited by particular concepts (Mitchell et al., 2008).
with few exceptions (e.g.
contrasting
train_8484
For example, it will not recognize the following coordination: eksportowane do Niemiec_gen i na Litwę_acc 'exported to Germany and to Lithuania'.
it excludes the majority of cases where two terms preceded by prepositions are separated by a comma and they belong to two different parts of a sentence, e.g.: W <Polsce>_loc, mimo <wpisania pojęcia konsumenta>_gen do konstytucji ... 'In [Poland], despite [entering the notion of a consumer] into the constitution ...'.
contrasting
train_8485
Less than 10% alignment error rate (AER) for French-English has been achieved by the conventional word alignment tool GIZA++, an implementation of the alignment models called the IBM models (Brown et al., 1993), with some heuristic symmetrization rules.
for distant language pairs such as English-Japanese, the conventional alignment method is quite inadequate (achieving an AER of about 20%).
contrasting
train_8486
They are often aligned to some words incorrectly.
with the conventional model, our model derives such function words from content words in their own language.
contrasting
train_8487
One solution, adopted by almost all the existing alignment models, is to align unique function words to NULL.
it is difficult to judge whether a unique function word has to be NULL-aligned or not, and often causes alignment errors as shown in Figure 1.
contrasting
train_8488
Finally, θ_T can be further decomposed. The earlier study (Nakazawa and Kurohashi, 2011) only considered treelets as alignment units.
this is inadequate for semantic-head dependency trees, since a set of sibling function words is often considered as an alignment unit.
contrasting
train_8489
NER spans are therefore a subset of syntactic constituent spans, and the two problems can be represented with a single tree-structured derivation.
a joint model of prosody and parsing would be difficult to capture with this approach, as boundaries of prosodic and syntactic spans are not required to overlap (Selkirk, 1984;Jackendoff, 2002, Ch.
contrasting
train_8490
The bracket model cannot offer competitive accuracy when compared to the other models and has limited application given its output is only a set of projective spans.
coupled with an appropriate task it does offer exceptional parsing speed, decoding length 40 sentences at more than 10 per second on our test system, and may be useful as a component in select joint modeling tasks.
contrasting
train_8491
This is somewhat surprising given the sequential constraints of our NER model, but likely also due in part to the less difficult 4-label NER task reported in previous results.
we see stronger than average gains by coupling the models and using joint inference.
contrasting
train_8492
In recent years, error mining approaches have been proposed to identify the most likely sources of errors in symbolic parsers and generators.
the techniques used generate a flat list of suspicious forms ranked by decreasing order of suspicion.
contrasting
train_8493
Like in (Koller and Striegnitz, 2002), the initial FB-LTAG is converted to a grammar of its derivation trees.
in this case, the grammar conversion and the resulting feature-based RTGs accurately translate the full range of unification mechanisms employed in the initial FB-LTAG.
contrasting
train_8494
Supertagging was shown to improve the performance of symbolic parsers and generators significantly.
it requires the existence of a treebank in a format appropriate to generate the supertagging model.
contrasting
train_8495
In sum, various symbolic and statistical techniques have been developed to improve the efficiency of grammar-based surface realisation.
statistical systems using supertagging require the existence of a treebank in an appropriate format while the purely symbolic systems described in (Carroll and Oepen, 2005;Gardent and Kow, 2005;Koller and Striegnitz, 2002;Gardent and Perez-Beltrachini, 2010) have not been evaluated on large corpora of arbitrarily long sentences such as provided by the surface realisation (SR) task (Belz et al., 2011).
contrasting
train_8496
In the latter case an unambiguous placement pattern is typical, e.g., a main verb being placed immediately behind its auxiliary.
the placement of a main verb to resemble German depends heavily on the parser accuracy (clause boundaries), and it can sometimes be ambiguous (e.g., verbs may or may not move past embedded clauses).
contrasting
train_8497
As we shall see, these structures will use hierarchies which look rather like syntactic categories, but are quite different from traditional parts of speech or other treatments.
we note that there can be many grammars that explain a given set of sentences, and the grammar that describes this particularly input may differ substantially from human-crafted grammars.
contrasting
train_8498
Note that we make no assumptions on the category of words or distinguish them as nouns/verbs.
different machine learning algorithms are required for learning perceptual objects, events, and spatial relations; thus these are distinguished in the semantic space.
contrasting
train_8499
So far, we definitively demonstrated the utility of CSI features in guided summarization.
the previous experiments made use of gold-standard, human-assigned categories for each topic, provided manually by the TAC organizers.
contrasting