Column      Type     Values
id          string   length 7-12
sentence1   string   length 6-1,270
sentence2   string   length 6-926
label       string   4 classes
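For reference, a minimal sketch of reading a dataset with this schema through the Hugging Face datasets library; the dataset identifier below is a hypothetical placeholder, not the real path:

from datasets import load_dataset

# Hypothetical identifier; substitute the dataset's actual path.
ds = load_dataset("user/contrasting-sentence-pairs", split="train")

# Each row carries: id, sentence1, sentence2, and one of 4 labels.
row = ds[0]
print(row["id"], row["label"])
print(row["sentence1"])
print(row["sentence2"])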
train_2100
Despite the fairly large size of the overall training sets (9,000 documents), the amount of data for each target relation is apparently still not sufficient to learn particularly accurate weights for both BLPs and MLNs.
for BLPs, learned weights do show a substantial improvement initially (i.e.
contrasting
train_2101
Between BLPs and MLNs, BLPs perform substantially better than MLNs at most points on the curve.
MLN-Manual-Weights improves marginally over BLP-Learned-Weights at later points (top 600 and above) on the curve, where the precision is generally very low.
contrasting
train_2102
For BLPs, as n increases towards including all of the logically sanctioned inferences, as expected, the precision converges to the results for logical deduction.
as n decreases, both adjusted and unadjusted precision increase fairly steadily.
contrasting
train_2103
In BLPs, only propositions that can be logically deduced from the extracted evidence are included in the ground network.
MLNs include all possible type-consistent groundings of all rules in the network, introducing many ground literals which cannot be logically deduced from the evidence.
contrasting
train_2104
Lack of sufficient training data is one reason the MLN weight learner learns less accurate weights.
a more important issue is the use of the closed-world assumption during learning, which we believe adversely impacts the learned weights.
contrasting
train_2105
32.8% HMM), if images are not presented.
the difference is less pronounced when images are shown.
contrasting
train_2106
For example, the Alignment feature in Figure 2(a) is local, and thus can be computed a priori, but the Word Trigrams feature is not; in Figure 2(b) the words in parentheses are the subgenerations created so far at each word node; their combination gives rise to the trigrams serving as input to the feature.
this combination may not take place at their immediate ancestors, since these may not be adjacent nodes in the hypergraph.
contrasting
train_2107
In this section, we describe a model that captures arbitrarily deep hierarchies over such layers of coreference decisions, enabling efficient inference and rich entity representations.
to the pairwise model, where each entity is a flat cluster of mentions, our proposed model structures each entity recursively as a tree.
contrasting
train_2108
Subentities provide a tighter granularity of coreference and can be used to perform larger block moves during MCMC.
the hierarchy is fixed and shallow.
contrasting
train_2109
(2010) uses streaming clustering for large-scale coreference.
the greedy nature of the approach does not allow errors to be revisited.
contrasting
train_2110
There is also work on end-to-end coreference resolution that uses large noun-similarity lists (Daumé III and Marcu, 2005) or structured knowledge bases such as Wikipedia (Yang and Su, 2007;Haghighi and Klein, 2009;Kobdani et al., 2011) and YAGO (Rahman and Ng, 2011).
such structured knowledge bases are of limited scope, and, while Haghighi and Klein (2010) self-acquires knowledge about coreference, it does so only via reference constructions and on a limited scale.
contrasting
train_2111
Many advantages have been claimed for decision tree classifiers, including interpretability and robustness.
we suspect that the aspect most relevant to our case is that decision trees can capture non-linear interactions between features.
contrasting
train_2112
The Entropy of a cluster reflects how the members of the k distinct subgroups are distributed within each resulting cluster; the global quality measure is computed by averaging the entropy of all clusters: $E = -\sum_j \frac{n_j}{n} \sum_i P(i,j) \log P(i,j)$ (2), where P(i, j) is the probability of finding an element from the category i in the cluster j, n_j is the number of items in cluster j, and n the total number of items in the distribution.
to purity, the entropy decreases as the quality of clustering improves.
contrasting
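Under the definitions above (P(i, j), n_j, n), the measure can be computed directly; a minimal illustrative sketch in Python, not the cited authors' implementation:

import math
from collections import Counter

def clustering_entropy(clusters):
    # clusters: one list of category labels per resulting cluster
    n = sum(len(c) for c in clusters)
    total = 0.0
    for cluster in clusters:
        n_j = len(cluster)
        # P(i, j): probability of finding category i in cluster j
        e_j = -sum((c / n_j) * math.log(c / n_j)
                   for c in Counter(cluster).values())
        # weight each cluster's entropy by its relative size n_j / n
        total += (n_j / n) * e_j
    return total

# Lower is better: a perfectly pure clustering has entropy 0.
print(clustering_entropy([["a", "a", "b"], ["b", "b"]]))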
train_2113
Previous work has shown that supervised learning methods are superior for this task.
the performance of supervised methods relies heavily on manually labeled training data.
contrasting
train_2114
There are also many studies of cross-domain sentiment analysis (Blitzer et al., 2007; Tan et al., 2007; Li et al., 2009; Bollegala et al., 2011; He et al., 2011; Glorot et al., 2011).
most of them focused on coarse-grained document-level sentiment classification, which is different from our fine-grained word-level extraction.
contrasting
train_2115
We present a novel approach for building verb subcategorization lexicons using a simple graphical model.
to previous methods, we show how the model can be trained without parsed input or a predefined subcategorization frame inventory.
contrasting
train_2116
Large, manually-constructed SCF lexicons mostly target general language (Boguraev and Briscoe, 1987;Grishman et al., 1994).
in many domains verbs exhibit different syntactic behavior (Roland and Jurafsky, 1998;Lippincott et al., 2010).
contrasting
train_2117
Traditional learning approaches have relied on access to parallel corpora of natural language sentences paired with their meanings (Mooney, 2007;Zettlemoyer and Collins, 2007;Lu et al., 2008;Kwiatkowski et al., 2010).
constructing such semantic annotations can be difficult and time-consuming.
contrasting
train_2118
In this paper we presented a novel online algorithm for building a lexicon from ambiguously supervised relational data.
to the previous approach that computed common subgraphs between different contexts in which an n-gram appeared, we instead focus on small, connected subgraphs and introduce an algorithm, SGOLL, that is an order of magnitude faster.
contrasting
train_2119
Furthermore, when a sentence is unparsable with large tree fragments, the PTSG parser usually uses naive CFG rules derived from its backoff model, which diminishes the benefits obtained from large tree fragments.
current state-of-the-art parsers use symbol refinement techniques (Johnson, 1998;Collins, 2003;Matsuzaki et al., 2005).
contrasting
train_2120
An adaptor grammar (Johnson et al., 2007a) is a sort of nonparametric Bayesian TSG model with symbol refinement, and is thus closely related to our SR-TSG model.
an adaptor grammar differs from ours in that all its rules are complete: all leaf nodes must be terminal symbols, while our model permits nonterminal symbols as leaf nodes.
contrasting
train_2121
In recent years, statistical machine translation (SMT) has been developing rapidly, with more and more novel translation models being proposed and put into practice (Koehn et al., 2003; Och and Ney, 2004; Galley et al., 2006; Liu et al., 2006; Chiang, 2007; Chiang, 2010).
similar to other natural language processing (NLP) tasks, SMT systems often suffer from the domain adaptation problem in practical applications.
contrasting
train_2122
Using these models, the words can be clustered into the derived topics with a probability distribution, and the correlation between words can be automatically captured via topics.
the "bag-of-words" assumption is an unrealistic oversimplification because it ignores the order of words.
contrasting
train_2123
Here, $t_f^{out}$ is the topic clustered from the corpus $C_f^{out}$, and $t_f^{in}$ represents the topic derived from the corpus $C_f^{in}$.
the two above-mentioned probabilities cannot be directly multiplied in formula (4) because they are related to different topic spaces from different corpora.
contrasting
train_2124
A simple way to compute the phrase-topic distribution is to take the fractional counts from $C_f^{in}$ and then adopt MLE to obtain relative probability.
it is infeasible in our model because some phrases occur in $C_f^{out}$ while being absent in $C_f^{in}$.
contrasting
train_2125
For this experimental result, we speculate that as the in-domain monolingual data grows, the corresponding topic models provide more accurate topic information to improve the translation system.
this effect weakens when the monolingual corpora continue to increase.
contrasting
train_2126
Our transliteration mining model can mine transliterations without using any labelled data.
if there is some labelled data available, our system is able to use it effectively.
contrasting
train_2127
Our unsupervised transliteration mining system can be applied to language pairs for which no labelled data is available.
the unsupervised system is focused on high recall and also mines close transliterations (see Section 5 for details).
contrasting
train_2128
Thanks to sophisticated reordering models, state-of-the-art PSMT systems are generally good at handling local reordering phenomena that are not captured by phrase-internal reordering.
they typically fail to predict long reorderings.
contrasting
train_2129
The lexicalization is effective and increases the maximal rank (number of arguments) of the nonterminals by at most 1.
to a transformation into Greibach normal form, our lexicalization does not radically change the structure of the derivations.
contrasting
train_2130
The nonterminals of regular tree grammars only occur at the leaves and are replaced using first-order substitution.
the nonterminals of a CFTG are ranked symbols, can occur anywhere in a tree, and are replaced using second-order substitution.
contrasting
train_2131
The co-ranking framework has been initially developed for measuring scientific impact and modeling the relationship between authors and their publications (Zhou et al., 2007).
the adaptation of this framework to the tweet recommendation task is novel to our knowledge.
contrasting
train_2132
, $r_w$ is then $\prod_{i=1}^{w} r_i(t)$ under a given t. The value of t is chosen where the probability is maximized: $t^* = \arg\max_t \prod_{i=1}^{w} r_i(t)$. In a simple random walk, it is assumed that all nodes in the matrix M are equi-probable before the walk.
we use the topic preference vector as a prior on M. Let Diag(r) denote a diagonal matrix whose eigenvalues are the entries of vector r. Then M becomes $\mathrm{Diag}(r)\,M$. Diversity: we would also like our output to be diverse without redundant information.
contrasting
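A minimal sketch of the preference-biased walk implied by the Diag(r) construction above; the power-iteration form, damping factor, and renormalization are assumptions for illustration, not the authors' exact formulation:

import numpy as np

def biased_walk(M, r, alpha=0.85, iters=100):
    # M: column-stochastic transition matrix; r: topic preference vector
    r = r / r.sum()
    M_biased = np.diag(r) @ M           # reweight transitions by preference
    M_biased /= M_biased.sum(axis=0)    # renormalize each column
    p = np.full(len(r), 1.0 / len(r))   # uniform start distribution
    for _ in range(iters):
        p = alpha * (M_biased @ p) + (1 - alpha) * r
    return p

M = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])
print(biased_walk(M, np.array([0.6, 0.3, 0.1])))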
train_2133
In theory, we could also apply DivRank on the author graph.
as the authors are unique, we assume that they are sufficiently distinct and there is no need to promote diversity.
contrasting
train_2134
(2011) combine a classifier based on the k-nearest neighbors algorithm with a CRF-based model to leverage cross-tweet information, and adopt semi-supervised learning to leverage unlabeled tweets.
named entity normalization (NEN) for tweets, which transforms named entities mentioned in tweets to their unambiguous canonical forms, has not been well studied.
contrasting
train_2135
We introduce the task of NEN for tweets, a new genre of texts with rich entity variations.
to existing NEN systems, which take the output of NER systems as their input, our method conducts NER and NEN at the same time, allowing them to reinforce each other, as demonstrated by the experimental results.
contrasting
train_2136
As an illustrative example, consider the following three tweets: " Gaga" are all labeled as PERSON, and can be restored as "Lady Gaga".
to existing work, our method jointly conducts NER and NEN for multiple tweets.
contrasting
train_2137
Note that not every topic has a bursty interval.
a topic may have multiple bursty intervals and hence leads to multiple bursty topics.
contrasting
train_2138
The linear-chain CRFs can represent the dependency between adjacent target sentences quite well.
they cannot model the dependency between adjacent source sentences, because labeling is done for each source sentence individually.
contrasting
train_2139
(2010) mined query logs to find attributes of entity instances.
these projects did not learn relative probabilities of different senses.
contrasting
train_2140
Because they only had four types, they were able to hand label their training data.
our system self-labels training examples by searching query logs for high-likelihood entities, and must handle any errors introduced by this process.
contrasting
train_2141
We compute this quantity exactly by evaluating the joint for each combination of t and i, and the observed values of q and c. It is important to note that at runtime when a new query is issued, we have to resolve the entity in the absence of any observed click.
we do have access to historical click probabilities, P(c | q).
contrasting
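A toy sketch of the exact computation described above, enumerating the joint over t and i for observed q and c; the joint table and variable domains are hypothetical:

# Hypothetical joint table P(t, i, q, c); in practice this would be built
# from model components, including historical click statistics P(c | q).
joint = {
    ("e1", 0, "q", True): 0.20, ("e1", 1, "q", True): 0.10,
    ("e2", 0, "q", True): 0.05, ("e2", 1, "q", True): 0.15,
}

def resolve_entity(q, c, entities=("e1", "e2"), interps=(0, 1)):
    # Evaluate the joint for each combination of t and i, then
    # normalize over t to obtain the posterior over entities.
    scores = {t: sum(joint.get((t, i, q, c), 0.0) for i in interps)
              for t in entities}
    z = sum(scores.values())
    return {t: s / z for t, s in scores.items()}

print(resolve_entity("q", True))  # {'e1': 0.6, 'e2': 0.4}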
train_2142
For English sentiment classification, there are several labeled corpora available (Hu and Liu, 2004;Pang et al., 2002;Wiebe et al., 2005).
labeled resources in other languages are often insufficient or even unavailable.
contrasting
train_2143
Para-Cotrain: The training process is the same as MT-Cotrain.
we use a different set of English unlabeled sentences.
contrasting
train_2144
When 3500 labeled sentences are used, SVM achieves 80.58%, a relatively high accuracy for sentiment classification.
CLMM and the other two models can still gain improvements.
contrasting
train_2145
The asker wishes to know why his teeth bleed and how to prevent it.
the best answer only gives information on the reason for the teeth bleeding.
contrasting
train_2146
These error mining techniques have been applied with good results on parsing output and shown to help improve the large scale symbolic grammars and lexicons used by the parser.
the techniques they use (e.g., suffix arrays) to enumerate and count n-grams build on the sequential nature of a text corpus and cannot easily extend to structured data.
contrasting
train_2147
In this task, the challenge is to find a replacement for a word or phrase removed from a sentence.
to our SAT-inspired task, the original answer is indicated.
contrasting
train_2148
A larger m indicates a more accurate heuristic, which results in a more efficient A* search (fewer nodes being processed).
this efficiency comes with the price that such an accurate heuristic requires more computation time in the Viterbi backward pass.
contrasting
train_2149
We refer to this setting as bootstrapping.
typical semi-supervised learning deals with a large number of labelled points, and a domain adaptation task with unlabelled points from the new domain.
contrasting
train_2150
Being largely languageuniversal, the selection component is learned in a supervised fashion from all the training languages.
the ordering decisions are only influenced by languages with similar properties.
contrasting
train_2151
In this paper, we present a new multilingual algorithm for dependency parsing.
to previous approaches, this algorithm can learn dependency structures using annotations from a diverse set of source languages, even if this set is not related to the target language.
contrasting
train_2152
Overall, our method performs similarly to this oracle variant.
the gain for non-Indo-European languages is 1.9% vs. -1.3% for Indo-European languages.
contrasting
train_2153
Overall, the performance gap between the selective sharing model and its monolingual supervised counterpart is 7.3%.
the unsupervised monolingual variant of our model achieves a meager 26%.
contrasting
train_2154
Metalanguage is an essential linguistic mechanism which allows us to communicate explicit information about language itself.
it has been underexamined in research in language technologies, to the detriment of the performance of systems that could exploit it.
contrasting
train_2155
The adoption of this definition was motivated by a desire to study mentioned language with precise, repeatable results.
it was too abstract to consistently apply to large quantities of candidate phrases in sentences, a necessity for corpus creation.
contrasting
train_2156
A human reader with some knowledge of the usemention distinction can often intuit the presence of mentioned language in a sentence.
to operationalize the concept and move toward corpus construction, it was necessary to create a rubric for labeling it.
contrasting
train_2157
These results were taken as moderate indication of the reliability of "simple" use-mention labeling.
the per-category results showed reduced levels of agreement.
contrasting
train_2158
They use Kabuki precisely because they and everyone else have only a hazy idea of the word's true meaning, and they can use it purely on the level of insinuation.
the connection between mentioned language and stylistic cues is only valuable when stylistic cues are available.
contrasting
train_2159
Anderson et al. (2004), who created a corpus of metalanguage from a subset of the British National Corpus, finding that approximately 11% of spoken utterances contained some form (whether explicit or implicit) of metalanguage.
limitations in the Anderson corpus' structure (particularly lack of word-or phrase-level annotations) and content (the authors admit it is noisy) served as compelling reasons to start afresh and create a richer resource.
contrasting
train_2160
Most of the current statistical approaches to SRL are supervised, requiring large quantities of human annotated data to estimate model parameters.
such resources are expensive to create and only available for a small number of languages and domains.
contrasting
train_2161
Inducing them solely based on monolingual data, though possible, may be tricky as selectional preferences of the roles are not particularly restrictive; similar restrictions for patient and agent roles may further complicate the process.
both sentences (a) and (b) are likely to be translated in German as '[A0 Peter] beschuldigte [A1 Mary] [A2 einen Diebstahl zu planen]'.
contrasting
train_2162
Transition-based parsing algorithms, such as shift-reduce algorithms (Nivre, 2004; Zhang and Clark, 2008), are widely used for dependency analysis because of their efficiency and comparatively good performance.
these parsers have one major problem: they can handle only local information.
contrasting
train_2163
Finally, the combined treebank is used to train a better parser.
the inconsistencies among different treebanks are normally nontrivial, which makes rule-based conversion infeasible.
contrasting
train_2164
Their framework is similar to ours.
handling syntactic annotation inconsistencies is significantly more challenging in our case of parsing.
contrasting
train_2165
Their experiments show that the combined treebank can significantly improve the performance of constituency parsers.
their method requires several sophisticated strategies, such as corpus weighting and score interpolation, to reduce the influence of conversion errors.
contrasting
train_2166
Our approach is also intuitively related to stacked learning (SL), a machine learning framework that has recently been applied to dependency parsing to integrate two main-stream parsing models, i.e., graph-based and transition-based models (Nivre and McDonald, 2008;Martins et al., 2008).
the SL framework trains two parsers on the same treebank and therefore does not need to consider the problem of annotation inconsistencies.
contrasting
train_2167
For the sports dataset, since mentions contain person and organization named entity types, our score for clustering uses the Jaccard distance between context words of the mentions.
such clusterings do not produce columns.
contrasting
train_2168
All of these models jointly label aligned source and target sentences.
our model is not concerned with tagging English sentences but only tags foreign sentences in the context of English sentences.
contrasting
train_2169
fine-grained entity types of two arguments, to handle polysemy.
such fine grained entity types come at a high cost.
contrasting
train_2170
To differentiate between these senses we need types such as "Politician" or "Athlete".
for "A, the parent of B" we only need to distinguish between persons and organizations (for the case of the sub-organization relation).
contrasting
train_2171
They cast the relaxed assumption as multi-instance learning.
even the relaxed assumption can fail.
contrasting
train_2172
In these expressions, all information needed for normalization is contained in the linguistic expression.
absolute dates are relatively infrequent in our corpus (7%), so in order to broaden the coverage for the detection of salient dates, we decided to consider relative dates, which are far more frequent.
contrasting
train_2173
In the case of TEs with a granularity coarser than the day or the month, the DD and MM fields remain unspecified accordingly.
these underspecified dates are not used in our experiments.
contrasting
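A minimal sketch of the underspecified date representation described above, where the DD and MM fields can remain unset for coarser granularities; the class and field names are illustrative assumptions:

from dataclasses import dataclass
from typing import Optional

@dataclass
class PartialDate:
    year: int                    # known for absolute and resolved relative dates
    month: Optional[int] = None  # None when granularity is coarser than month
    day: Optional[int] = None    # None when granularity is coarser than day

    def is_underspecified(self) -> bool:
        return self.month is None or self.day is None

# "March 2012" resolves to a month-level date; the DD field stays unspecified.
d = PartialDate(year=2012, month=3)
print(d.is_underspecified())  # True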
train_2174
Many scientific subjects, such as psychology, learning sciences, and biology, have adopted computational approaches to discover latent patterns in large scale datasets (Chen and Lombardi, 2010;Baker and Yacef, 2009).
the primary methods for historical research still rely on individual judgement and reading primary and secondary sources, which are time-consuming and expensive.
contrasting
train_2175
Such models first estimate word translation probabilities conditioned on topics, and then adapt lexical weights of phrases by these probabilities.
the state-of-the-art SMT systems translate sentences by using sequences of synchronous rules or phrases, instead of translating word by word.
contrasting
train_2176
We call such a rule a topicinsensitive rule.
the distributions of the remaining rules peak on a few topics.
contrasting
train_2177
As described in the previous section, we also estimate the target-side rule-topic distribution.
only source document-topic distributions are available during decoding.
contrasting
train_2178
In addition to the traditional hiero system, we also compare with the topic-specific lexicon translation method in Zhao and Xing (2007).
the lexicon translation probability is adapted by $p(e|f, D) = \sum_k p(e|f, z = k)\, p(z = k|D)$; we simplify the estimation of p(e|f, z = k) by directly using the word alignment corpus.
contrasting
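A toy sketch of the topic-adapted lexical probability summarized above, in the spirit of Zhao and Xing (2007): mixing p(e | f, z = k) with a document-topic distribution. The probability tables are made-up values:

def adapt_lexicon_prob(e, f, p_e_f_z, p_z_d):
    # p(e | f, D) = sum_k p(e | f, z = k) * p(z = k | D)
    return sum(p_e_f_z[(e, f, k)] * p_z_d[k] for k in range(len(p_z_d)))

# Toy example with K = 2 topics.
p_e_f_z = {("bank", "banque", 0): 0.9, ("bank", "banque", 1): 0.2}
p_z_d = [0.7, 0.3]  # document-topic distribution of the source document
print(adapt_lexicon_prob("bank", "banque", p_e_f_z, p_z_d))  # 0.69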
train_2179
The computational complexity of PTK is $O(p\rho^2\,|N_{T_1}|\,|N_{T_2}|)$ (Moschitti, 2006a), where p is the largest subsequence of children that we want to consider and ρ is the maximal outdegree observed in the two trees.
the average running time again tends to be linear for natural language syntactic trees (Moschitti, 2006a).
contrasting
train_2180
BOL cannot capture the same dependencies as the structural kernels.
when we remove the dependencies generated by shared documents between a node and its descendants (child-free setting) BOL improves on BL.
contrasting
train_2181
Prepositions and conjunctions are often assumed to depend on lexical dependencies for correct resolution (Jurafsky and Martin, 2008).
lexical statistics based on the training set only are typically sparse and have only a small effect on overall parsing performance (Gildea, 2001).
contrasting
train_2182
Unlabeled data has been shown to improve the accuracy of conjunctions within complex noun phrases (Bergsma et al., 2011).
it has so far been less effective within full parsing -while first-order web-scale counts noticeably improved overall parsing in Bansal and Klein (2011), the accuracy on conjunctions actually decreased when the web-scale features were added (Table 4 in that paper).
contrasting
train_2183
Lexical affinity helps in such cases.
some attachments, like the configurations ADJ and NdeN, for which the parser showed very good accuracy (96.6 and 92.2), show very poor performance.
contrasting
train_2184
Developing a supervised or semi-supervised model of discourse segmentation would require ground truth annotated based on a well-established representation scheme, but as of right now no such annotation exists for Chinese to the best of our knowledge.
syntactically annotated treebanks often contain important clues that can be used to infer discourse-level information.
contrasting
train_2185
Both Nissim (2006) and Rahman and Ng (2011) classify each mention individually in a standard supervised ML setting, not considering potential dependencies between the IS categories of different mentions.
collective or joint classification has made substantial impact in other NLP tasks, such as opinion mining (Pang and Lee, 2004;Somasundaran et al., 2009), text categorization (Yang et al., 2002;Taskar et al., 2002) and the related task of coreference resolution (Denis and Baldridge, 2007).
contrasting
train_2186
At first, this seems to contradict studies such as Cahill and Riester (2009) that find a variety of precedences according to information status.
many of the clearest precedences they find are more specific variants of the old $>_p$ mediated or old $>_p$ new precedence, or they are preferences at an even finer level than the one we annotate, including for example the identification of generics.
contrasting
train_2187
Rule-based systems (Castano and de Antonellis, 1999;Milo and Zohar, 1998;L. Palopol and Ursino, 1998) often utilize only the schema information (e.g., elements, domain types of schema elements, and schema structure) to define a similarity metric for performing matching among the schema elements in a hard coded fashion.
learning based approaches learn a similarity metric based on both the schema information and the data.
contrasting
train_2188
High-quality sense-annotated data, however, are hard to obtain in streaming environments, since the training corpus would have to be constantly updated in order to accommodate the fresh data coming on the stream.
few positive examples plus large amounts of unlabeled data may be easily acquired.
contrasting
train_2189
For example, on a TAC corpus with 1.8M documents, we found that increasing the corpus size ten-fold consistently results in statistically significant improvement in F1 on two standardized relation extraction metrics (t-test with p=0.05).
increasing human feedback amount ten-fold results in statistically significant improvement on F1 only when the corpus contains at least 1M documents; and the magnitude of such improvement was only one fifth compared to the impact of corpus-size increment.
contrasting
train_2190
The locations of +'s suggest that the influence of human feedback becomes notable only when the corpus is very large (say with 10 6 docs).
comparing the slopes of the curves in Figure 3 shows that human feedback does not notably improve precision on either the full corpus or on a small 1K-doc corpus.
contrasting
train_2191
Mentions such as those in Figure 1 can be obtained from various sources such as dictionaries, gazetteers, rule-based systems (Strötgen and Gertz, 2010), statistically trained classifiers (Ratinov and Roth, 2009), or some web resources such as Wikipedia (Ratinov et al., 2011).
in practice, outputs from existing mention identification and typing systems can be far from ideal.
contrasting
train_2192
One important clue is that March appears after the word in and is located nearer to other mentions that can be potentially useful arguments.
encoding such information as a general constraint can be inappropriate, as potentially better structures can be found if one considers other alternatives.
contrasting
train_2193
Out of those 6 patterns, 2 are more general patterns shared across different events, and 4 are event-specific.
for example, for the "Die" event, the supervised approach requires a human to select from 174 candidate mentions and annotate 89 of them.
contrasting
train_2194
This indicates that our model is able to learn to generalize with features through the guidance of our informative preferences.
we also note that the performance of preference modeling depends on the actual quality and amount of preferences used for learning.
contrasting
train_2195
The model has been shown to work in unsupervised tasks such as POS induction (Smith and Eisner, 2005a), grammar induction (Smith and Eisner, 2005b), and morphological segmentation (Poon et al., 2009), where good neighborhoods can be identified.
it is less intuitive what constitutes a good neighborhood in this task.
contrasting
train_2196
Therefore, PLSA finds a topic distribution for each concept definition that maximizes the log likelihood of the corpus X (LDA has a similar form): $\mathcal{L} = \sum_i \sum_j X_{ij} \log \sum_k P(w_j \mid z_k)\, P(z_k \mid d_i)$ (1). In this formulation, missing words do not contribute to the estimation of sentence semantics, i.e., excluding missing words ($X_{ij} = 0$) in equation (1) does not make a difference.
empirical results show that given a small number of observed words, usually topic models can only find one topic (the most evident topic) for a sentence, e.g., the concept definitions of bank#n#1 and stock#n#1 are assigned the financial topic only, without any further discernibility.
contrasting
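A small sketch of the reconstructed log likelihood above, with X as a document-word count matrix; cells with X_ij = 0 contribute nothing, which is the property the passage highlights. The matrix names are illustrative:

import numpy as np

def plsa_log_likelihood(X, P_w_z, P_z_d):
    # X: (docs x words) counts; P_w_z: (topics x words); P_z_d: (docs x topics)
    # L = sum_ij X_ij * log sum_k P(w_j | z_k) * P(z_k | d_i)
    P_w_d = P_z_d @ P_w_z  # (docs x words) mixture probabilities
    mask = X > 0           # missing words (X_ij = 0) contribute nothing
    return float(np.sum(X[mask] * np.log(P_w_d[mask])))

X = np.array([[2, 0, 1], [0, 3, 0]])
P_w_z = np.array([[0.7, 0.2, 0.1], [0.1, 0.6, 0.3]])
P_z_d = np.array([[0.9, 0.1], [0.2, 0.8]])
print(plsa_log_likelihood(X, P_w_z, P_z_d))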
train_2197
While this is the ideal data set for SS, the small size makes it impossible for tuning SS algorithms or deriving significant performance conclusions.
the MSR04 data set comprises a much larger set of sentence pairs: 4,076 training and 1,725 test pairs.
contrasting
train_2198
Generally, results are reported based on the last iteration.
we observe that for model 6 in Table 2, the best performance occurs in the first few iterations.
contrasting
train_2199
Reisinger and Mooney (2010b) introduced a multi-prototype VSM where word sense discrimination is first applied by clustering contexts, and then prototypes are built using the contexts of the sense-labeled words.
in order to cluster accurately, it is important to capture both the syntax and semantics of words.
contrasting