id: string (7 to 12 characters)
sentence1: string (6 to 1.27k characters)
sentence2: string (6 to 926 characters)
label: string (4 classes)
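For reference, a minimal sketch of how rows with this schema might be read and filtered. It assumes the examples are exported as JSON Lines with the four fields above; the path "train.jsonl" is a placeholder, not a file shipped with the data.

```python
import json

# Assumed export format: one JSON object per line with the fields
# id, sentence1, sentence2, label ("train.jsonl" is a placeholder path).
with open("train.jsonl", encoding="utf-8") as f:
    examples = [json.loads(line) for line in f]

# The label column has 4 classes; every row shown below carries "contrasting".
contrasting = [ex for ex in examples if ex["label"] == "contrasting"]

for ex in contrasting[:3]:
    print(ex["id"], "|", ex["sentence1"][:60], "->", ex["sentence2"][:60])
```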
train_13900
Performance on named entities: The coreference annotation in Ontonotes 5.0 includes various types of mentions.
not all mention types are equally interesting.
contrasting
train_13901
This is a practical difficulty of spectral methods, for example to apply evaluation metrics like perplexity which are only defined for distributions.
in the previous section we have exposed that if we have access to estimates of a Hankel matrix of a WCFG G, we can recover G. The statistics in the Hankel matrix require access to strings that have information about context-free cuts.
contrasting
train_13902
One can prove that there is theoretical identifiability of the rank and the parameters of an FST distribution, using a rank minimization formulation.
this problem is NP-hard, and it remains open whether there exists a polynomial method with identifiability results.
contrasting
train_13903
Previous studies show that joint solutions usually lead to the improvement in accuracy over pipelined systems by exploiting POS information to help word segmentation and avoiding error propagation.
traditional joint approaches usually involve a great number of features, which raises four limitations.
contrasting
train_13904
(Gao et al., 2004) described a transformationbased converter to transfer a certain annotationstyle word segmentation result to another style.
this converter needs human-designed transformation templates, and is hard to generalize to POS tagging.
contrasting
train_13905
Traditionally, a popular lexical database of English is Wordnet (Miller, 1995;Miller and Fellbaum, 1998), which organizes the semantic network in terms of graph theory.
to manual approaches, the automatic analysis of semantically interesting graph structures of language has received increasing attention.
contrasting
train_13906
Our system clusters relations with similar named-entity arguments, but this means it does not cluster relations whose arguments are rarely named entities.
using crosslingual clusters of common nouns, such as those from Täckström et al.
contrasting
train_13907
Table 3: Five sarcastic tweets found by the Contrast method but not the SVM. These tweets are good examples of a positive sentiment (love, enjoy, awesome, can't wait) contrasting with a negative situation.
the negative situation phrases are not always as specific as they should be.
contrasting
train_13908
Some false hits come from situations that are frequently negative but not always negative (e.g., some people genuinely like waking up early).
most false hits were due to overly general negative situation phrases (e.g., "I love working there" was labeled as sarcastic).
contrasting
train_13909
Among these fields, the Overview is required and the others are optional, such as Project, Course and Interest groups.
compared with Overview, Summary, Experience, Education fields, they seem to be less important for summarization of personal profiles.
contrasting
train_13910
Table 5 (ROUGE-2 F-Measure score of the contribution of social edges): MaxEnt 0.0349, CoFG 0.0383, CoFG-edu 0.0382, CoFG-exp 0.0381. From Table 5, we can see that all of our proposed approaches, i.e., CoFG-edu, CoFG-exp, and CoFG, outperform the baseline approach, i.e., MaxEnt.
the performance of CoFG-edu, CoFG-exp and CoFG are similar.
contrasting
train_13911
In most of the existing summarization systems, people need to first define a constant length to restrict all the output summaries.
in many cases it is improper to require all summaries are of the same length.
contrasting
train_13912
The Rouge evaluation requires golden standard summaries as the base.
in many cases we cannot get the reference summaries.
contrasting
train_13913
This is not an issue of concern to (Teufel and Moens, 2002), but relates to the notion of NUCLEUS and SATELLITE clauses, which form the foundation of Rhetorical Structure Theory (Mann and Thompson, 1998), and guides the summarisation paradigm of (Marcu, 1998a;Marcu, 1998b).
the difference here is that we define a-priori certain categories to be independent (have the property of playing the role of nucleus in the discourse) and specify their relation with particular types of dependent categories.
contrasting
train_13914
The experimental setup follows the paradigm of (Teufel, 2001).
while (Teufel, 2001) developed a Q-A task to evaluate summaries showing the contribution of a scientific article in relation to previous work, the purpose of the Q-A task at hand is to show the usefulness of the extracted summaries in answering questions on the paper, and how they compare to a discourse-agnostic baseline.
contrasting
train_13915
Because it needs to explore an exponentially large space in the worst case, a bounded priority queue becomes necessary to ensure limited parsing time.
huang and Sagae (2010) explore the idea of dynamic programming, which originated in bottom-up constituent parsing algorithms like Earley (1970), but in a beam-based non best-first parser.
contrasting
train_13916
This requirement can be easily satisfied if we use a generative scoring model like PCFG.
in practice we use a MaxEnt model.
contrasting
train_13917
The vanilla best-first parsing algorithm inherits the optimality directly from Dijkstra's algorithm.
it explores exponentially many derivations to reach the goal configuration in the worst case.
contrasting
train_13918
We reach 92.39% accuracy with structured perceptron.
in experiments we still use MaxEnt to make the comparison fair.
contrasting
train_13919
DP best-first parser is as fast as non-DP for short sentences, but the time grows significantly slower.
it explores ∼17 times more states than DP, with an unbearable average time.
contrasting
train_13920
Figure 6 (a) shows that beam parser fails to reach the optimality, while exploring significantly more states.
beam parser also fails to reach an accuracy as high as best-first parsers.
contrasting
train_13921
The problem of learning language models from large text corpora has been widely studied within the computational linguistic community.
little is known about the performance of these language models when applied to the computer vision domain.
contrasting
train_13922
Indeed, the error sparsity makes it very challenging to identify mistakes accurately, and no system in the shared task achieves a precision over 50%.
once the precision drops below 50%, the system introduces more mistakes than it identifies.
contrasting
train_13923
To compensate this deficiency, we tried combining the three lexical resources in various ways (taking the union or combining them in a pipeline using the first resource that would yield a synonym).
the results did not improve and even in some cases worsened due probably to the insufficient lexical disambiguation.
contrasting
train_13924
One could argue that a joint model is more attractive as potential antecedents such as building "trigger" subsequent bridging cases such as stairs (Example 1).
bridging can be indicated by referential patterns without world knowledge about the anaphor/antecedent NPs, as the nonsense example 2 shows: the wug is clearly a bridging anaphor although we do not know the antecedent.
contrasting
train_13925
Bridging anaphora can have almost limitless variation.
we observe that bridging anaphors are often licensed because of discourse structure Markert et al.
contrasting
train_13926
This feature set (Table 1, f 1-f 13) works well to identify old, new and several mediated categories.
it fails to recognize most bridging anaphora which we try to remedy in this work by including more diverse features.
contrasting
train_13927
They apply an expectation-maximization approach to learn how words align to elements of the target grammar, and achieve performance close to that of the rule-based systems.
their grammar does not allow for non-binary or partially lexicalized rules (e.g.
contrasting
train_13928
We can therefore apply standard parsing algorithms to this task.
we have some additional grammar requirements.
contrasting
train_13929
With a rulebased system, such a requirement translates to removing a few rules.
a ML-based approach requires a complete retrain.
contrasting
train_13930
In Figure 1, the KB graph (only solid edges) is disconnected, thereby making it impossible for PRA to discover any relationship between Alex Rodriguez and World Series.
addition of the two edges with SVO-based lexicalized syntactic edges (e.g., (Alex Rodriguez, plays for, NY Yankees)) restores this inference possibility.
contrasting
train_13931
A method of latent embedding of relation instances for sentence-level relation extraction was shown in (Wang et al., 2011).
none of this prior work makes explicit use of the background KBs as we explore in this paper.
contrasting
train_13932
OpenIE systems such as Reverb (Etzioni et al., 2011) also extract verb-anchored dependency triples from large text corpus.
to such approaches, we focus on how latent embedding of verbs in such triples can be combined with explicit background knowledge to improve coverage of existing KBs.
contrasting
train_13933
In the case study of Section 3, we use POS-based rules as hidden states.
it should be noted that the hidden structures surely do not have to be POS tags.
contrasting
train_13934
These methods were applied to not-so-large scale experiments (55 million (M) words for training their BNLMs) (Arsoy et al., 2013).
our method is applied to SMT and can be used to improve a BNLM created from 746 M words by using a CSLM trained from 42 M words.
contrasting
train_13935
Actually, a CSLM trained from a smaller corpus can improve the BLEU scores of SMT if it is used in the n-best reranking (Schwenk, 2010;Huang et al., 2013).
we will demonstrate that a BNLM simulating a CSLM can improve the BLEU scores of SMT in the first pass decoding.
contrasting
train_13936
The epoch size for MIRA 1 and MIRA 2 is 40, while the one for c-MIRA is 400. c-MIRA runs more epochs, because we update the parameters far fewer times.
we can implement Line 3∼8 in Algorithm 1 in multi-thread (we use eight threads in the following experiments), which makes our algorithm much faster.
contrasting
train_13937
More specifically, CRFs improve the performance on monolingual posts, especially when a single word is tagged in the wrong language.
when the influence of the context is too high, CRFs reduce the performance in bilingual posts.
contrasting
train_13938
Entity linking in long text has been well studied in previous works.
little work has focused on short text such as microblog posts.
contrasting
train_13939
The content of post (2) is highly related to post (1).
to the confusing post (1), the text in post (2) explicitly indicates that the Abbott here refers to the Australian political leader.
contrasting
train_13940
Furthermore, we propose a Graph-based Microblog Entity Linking (GMEL) method.
to CEMEL, the extra posts in GMEL are not directly added into the context.
contrasting
train_13941
An ideal solution is to expand the context with the posts which contain the same entity.
automatically judging whether a name mention in two documents refers to the same entity, namely cross document coreference, is not trivial.
contrasting
train_13942
This has the effect of increasing the weights of features whose likelihood of appearing in a pair of sentences is strongly influenced by the paraphrase relationship between the two sentences.
if p_k = q_k, then the KL-divergence will be zero, and the feature will be ignored in the matrix factorization.
contrasting
train_13943
Taking the unigram feature "not" as an example, we have p_k = [0.66, 0.34] and q_k = [0.31, 0.69], for a KL-divergence of 0.25: the likelihood of this word being shared between two sentences is strongly dependent on whether the sentences are paraphrases.
the feature then has p_k = [0.33, 0.67] and q_k = [0.32, 0.68], for a KL-divergence of 3.9 × 10^-4.
contrasting
train_13944
The PMI based method has achieved promising results.
according to Kanayama's investigation, only 60% co-occurrences in the same window in Web pages reflect the same sentiment orientation (Kanayama and Nasukawa, 2006).
contrasting
train_13945
Turney (2002) chose excellent and poor as seed words.
using isolated seed words may cause the bias problem.
contrasting
train_13946
For sequential search problems, like left-to-right tagging and parsing, beam search has been successfully combined with perceptron variants that accommodate search errors (Collins and Roark, 2004; Huang et al., 2012).
perceptron training with inexact search is less studied for bottom-up parsing and, more generally, inference over hypergraphs.
contrasting
train_13947
For sequential search problems, such as tagging and incremental parsing, beam search coupled with perceptron algorithms that account for potential search errors have been shown to be a powerful combination (Collins and Roark, 2004;Daumé and Marcu, 2005;Zhang and Clark, 2008;Huang et al., 2012).
sequential search algorithms, and in particular left-to-right beam search (Collins and Roark, 2004;Zhang and Clark, 2008), squeeze inference into a very narrow space.
contrasting
train_13948
This characterization in turn suggests that predicting whether a connective should be included might be a difficult problem for an NLP system to address, since current-day systems lack the requisite world knowledge and capacity for inference that would be necessary to evaluate the ease with which coherence relations can be established on arbitrary examples.
it is also possible that the decision to include a connective depends in part on stylistic and other types of factors as well, such that there might be predictive information in the kinds of shallow linguistic and textual features that systems do have access to.
contrasting
train_13949
Relatedly, Asr and Demberg (2012b) discuss which connectives are the strongest predictors of which relation types.
there is no work of which we are aware that specifically predicts whether connectives should be used or omitted.
contrasting
train_13950
In Japanese, zero references often occur and many of them are categorized into zero exophora, in which a referent is not mentioned in the document.
previous studies have focused on only zero endophora, in which a referent explicitly appears.
contrasting
train_13951
The author and reader (A/R) of a document have not been used for contextual clues because the A/R rarely appear in the discourse in corpora based on newspaper articles, which are main targets of the previous studies.
in other domain documents such as blog articles and shopping sites, the A/R often appear in the discourse.
contrasting
train_13952
They deal with zero exophora by judging that a zero pronoun does not have anaphoricity.
the information of zero pronoun existences is given and thus they did not address zero pronoun detection.
contrasting
train_13953
Likewise, the reader is sometimes mentioned as " " (customer) and others.
since such expressions often refer to someone other than the A/R, whether an expression indicates the A/R of a document depends on the context of the document.
contrasting
train_13954
From these results, the A/R mentions including "none" can be predicted to accuracies of approximately 80%.
the recalls are not particularly high: the recall of author is 140/258 and the recall of reader is 56/105.
contrasting
train_13955
This task is much simpler than modeling a complete dialogue session (e.g., as proposed in Turing test), and probably not enough for real conversation scenario which requires often several rounds of interactions (e.g., automatic question answering system as in (Litman et al., 2000)).
it can shed important light on understanding the complicated mechanism of the interaction between an utterance and its response.
contrasting
train_13956
A drawback of this voting procedure is that the final result may not be independent of the voting order, in some cases.
it is assured that the result is consistent, i.e.
contrasting
train_13957
is not catastrophic since most of the missed reactions are tagged as continuation, which is still true (only 10% of the reaction relations are mistagged as sameevent).
there is big room for improvement on this point.
contrasting
train_13958
The existing machine learning based approaches substantially improve the robustness of scope detection, and have nearly 80% accuracy.
the approaches ignore the availability of the structured syntactic parse information.
contrasting
train_13959
Sánchez et al. (2010) employed a tree kernel based classifier with CCG structures to identify speculative sentences on the Wikipedia dataset.
in Sánchez's approach, not all sentences are covered by the classifier.
contrasting
train_13960
Temporal variations of text are usually ignored in NLP applications.
text use changes with time, which can affect many applications.
contrasting
train_13961
We see that both the SE kernel and a periodic kernel (PS, see below) give good results.
for extrapolation, the choice of the kernel is paramount.
contrasting
train_13962
a uni-modal burst with a steady decrease.
its uncertainty grows for predictions well into the future.
contrasting
train_13963
In this section we demonstrate the usefulness of our method of modelling in an NLP task: predicting the hashtag of a tweet based on its text.
to this classification approach for suggesting a tweet's hashtag, information retrieval methods based on computing similarities between tweets are very hard to scale to large data (Zangerle et al., 2011).
contrasting
train_13964
Direct quotations are used for opinion mining and information extraction as they have an easy to extract span and they can be attributed to a speaker with high accuracy.
simply focusing on direct quotations ignores around half of all reported speech, which is in the form of indirect or mixed speech.
contrasting
train_13965
The features they used are: As we explained earlier the word and character edit features capture similarity between many pairs of queries.
they also tend to mis-classify many other pairs especially when the two queries share many words yet have different intents.
contrasting
train_13966
In our current implementation we combine these scores by linear combination: Other more sophisticated ways to combine text and behavior evidence are possible, such as jointly learning over both text and behavior features.
we chose to follow the simpler linear approach for interpretability of the results (e.g., by varying the λ parameter).
contrasting
train_13967
The maximum size of the tagset equals the total generative capacity, or 3844 tags.
depending on the exact numbers.
contrasting
train_13968
Instead, we exploit lattices, which offer a much richer representation of the decoder output, since they compactly encode an exponential number of translation hypotheses in polynomial space.
n-best lists are typically very redundant, representing only a few combinations of top scoring arcs in the lattice.
contrasting
train_13969
Figure 2: Push-forward rescoring with a recurrent neural network language model, given a beam-width for language model split-states k, decoder states V, edges E, a start state s, and final states T.
a recurrent neural network language model makes much weaker independence assumptions.
contrasting
train_13970
Domain adaptation for SMT usually adapts models to an individual specific domain.
it often lacks some correlation among different domains where common knowledge could be shared to improve the overall translation quality.
contrasting
train_13971
After each iteration, feature weights from all decoders are collected (line 16-19).
to the original algorithm (Simianer et al., 2012), we only average the general-domain feature weights w_1^G, …
contrasting
train_13972
We do not have theoretically grounded guarantee.
we observed that the BLEU score of our method on DEV data was slightly lower than that in the baseline system, which indicates the out-of-domain features are less over-fitting on the domain-specific DEV data.
contrasting
train_13973
(1992) and Federico (1999) explore models for combining foreground and background distributions for the purpose of language modeling, and their approaches are somewhat similar to ours.
our focus is on translation.
contrasting
train_13974
The above formulation applies whenever we have access to comparable corpora.
often we have access to comparable documents, such as those given by Wikipedia inter-language links.
contrasting
train_13975
Our science and EMEA corpora are certainly different in domain from the OLD-domain parliamentary proceedings, and our success in boosting MT performance with our methods indicates that the Wikipedia comparable corpora that we mined match those domains well.
the subtitles data differs from the OLD-domain parliamentary proceedings in both domain and register.
contrasting
train_13976
We experiment with French-English because tuning and test sets are available in several domains for that language pair.
our techniques are directly applicable to other language pairs, including those that are less related.
contrasting
train_13977
The size of a Hiero SCFG grammar is typically larger than phrase-based models extracted from the same data creating challenges in rule extraction and decoding time especially for larger datasets .
the LR-decoding algorithm could avoid these shortcomings such as faster time complexity, reduction in the grammar size and the simplified left-to-right language model scoring.
contrasting
train_13978
It might appear that the restriction that target-side rules be GNF is a severe restriction on the coverage of possible hypotheses compared to the full set of rules permitted by the Hiero extraction heuristic.
there is some evidence in the literature that discontinuous spans on the source side in translation rules is a lot more useful than discontinuous spans in the target side (which is disallowed in the GNF).
contrasting
train_13979
The Algorithm 1 presented earlier does an exhaustive search as it generates all possible partial translations for a given stack that are reachable from the hypotheses in previous stacks.
only a few of these hypotheses are retained, while majority of them are pruned away.
contrasting
train_13980
In such a way, we significantly increase the feature coverage on unseen data.
if we allow arbitrary combinations, we can extract a hexalexical feature (4 Chinese + 2 English words) for a local window in Figure 5, which is unlikely to be seen at test time.
contrasting
train_13981
Due to the use of query documents, the LSS formulation has some resemblance to document ranking based on learning to rank (Li, 2011;Liu, 2011).
lSS is very different because we turn the problem into a supervised classification problem.
contrasting
train_13982
Note that the authors here are in fact userids.
since they are randomly selected from a large number of userids, the probability that two sampled userids belong to the same person is very small.
contrasting
train_13983
In the fake review detection research, researchers have manually labeled fake reviews and reviewers (Yoo and Gretzel 2009;Lim et al., 2010;Wang et al., 2011).
based on the actual fake reviews written using Amazon Mechanical Turk, Ott et al.
contrasting
train_13984
The first is that in (Chen et al., 2004).
as we discussed in related work, their approach is not applicable to reviews.
contrasting
train_13985
(Tumasjan et al., 2010;Giglietto, 2012;Kim and Park, 2012).
few of these studies have considered measures beyond simple hashtag frequencies, relative mention counts among politicians, and retweet counts.
contrasting
train_13986
de 'of', un 'a/one'); a few others are basic grammatical words (ne '[part of] not', et 'and'), or pronouns or verb forms referring to a single person or object (he/she/it), as well as one noun (France).
many female words (11/25) are pronouns or basic verb forms referring to the speaker or a single addressee (je 'I', mon/mes/ma 'my', tu 'you', j'ai 'I have').
contrasting
train_13987
Many other k-top words are familiar terms of address for men (lan, abi, karde sim, adam, kanka) or a greeting used mainly between men (eyvallah), suggesting that male users are addressing or discussing men more often than female users are.
9/25 of the k-top female words are pronouns referring to the speaker, a familiar addressee, or a third party (he/she/it), while none of the k-top male words are, suggesting female users are more often talking directly about themselves or to others.
contrasting
train_13988
These multimodal LDA models (hereafter, mLDA) have been shown to be qualitatively sensible and highly predictive of several psycholinguistic tasks (Andrews et al., 2009).
prior work using mLDA is limited to two modalities at a time.
contrasting
train_13989
Andrew (2006) also shows that semi-Markov CRF makes strictly weaker independence assumptions than linear CRF and so a word-based segmenter using an order-K semi-Markov model is more expressive than a characterbased model using an order-K CRF.
chinese words have internal structures.
contrasting
train_13990
In the PKU corpus, we do not see a gain using either the data combination approach or the relabelling approach compared to the performance of the Char-Segmenter after cotraining, probably because the Sun-Segmenter just modestly improves over the Char-Segmenter under the CEILING condition.
in the CU corpus, where under the CEILING condition the Sun-Segmenter has a much bigger gain over the Char-Segmenter, there is 0.7% improvement by using the data combination approach and 0.8% by using the relabelling approach, and the improvement is statistically significant.
contrasting
train_13991
A precise syntacto-semantic analysis of English requires a large detailed lexicon with the possibility of treating multiple tokens as a single meaning-bearing unit, a word-with-spaces.
parsing with such a lexicon, as included in the English Resource Grammar, can be very slow.
contrasting
train_13992
Passive-to-active voice transformation in English can be performed systematically, which does not depend on lexical information in most cases.
in Japanese, the method of transformation depends on lexical information.
contrasting
train_13993
Ideally, one case frame is constructed for each meaning and voice of the target predicate.
since Kawahara and Kurohashi's method is unsupervised, several case frames are actually constructed; examples are the case frames for "(be pushed down)" and "(push down)".
contrasting
train_13994
Since this case was aligned to no-case of the case frame " -2 (hit-2)," the input ga-case was alternated with no-case.
the cases of the other arguments " (bat-de)" and " (head-wo)" were output as they were in the passive sentence.
contrasting
train_13995
Other work tries to extract hypernym relations from large-scale encyclopedias like Wikipedia and achieves high precision (Suchanek et al., 2008;Hoffart et al., 2012).
the coverage is limited since there exist many infrequent and new entities that are missing in encyclopedias (Lin et al., 2012).
contrasting
train_13996
It is easy to get the hypernyms.
the entities in industry domain are more uncommon.
contrasting
train_13997
If perfect semantic relations were always available, SemRels would be the preferred mode.
this is often not the case and combining the three logic forms yields better performance (Section 4).
contrasting
train_13998
The only noticeable exception is LFT-score with sentences from SMTeuroparl.
best results for SMTeuroparl are obtained dropping unbound predicates and using All features.
contrasting
train_13999
Costello (2002) investigated the cognitive process that guides people's choice of words when making up a novel nounnoun compound.
we present a datadriven investigation to quantifying creativity in lexical composition.
contrasting