Columns:
  id         string (length 7-12)
  sentence1  string (length 6-1.27k)
  sentence2  string (length 6-926)
  label      string (4 classes)
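The rows below follow this schema: each example pairs a sentence drawn from a paper (sentence1) with a follow-up sentence (sentence2) under one of four label classes (here, "contrasting"). As a minimal sketch, assuming the rows are stored one JSON object per line in a file named train.jsonl (a hypothetical path and format, not stated in this card), they could be loaded and inspected like this:

```python
# Minimal sketch: load rows with the schema (id, sentence1, sentence2, label)
# from a JSON-lines file. The path "train.jsonl" is an assumption for
# illustration only; only the standard library is used.
import json
from collections import Counter

def load_rows(path="train.jsonl"):
    """Yield one dict per example with keys id, sentence1, sentence2, label."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

if __name__ == "__main__":
    rows = list(load_rows())
    # Distribution over the four label classes (e.g. "contrasting").
    print(Counter(row["label"] for row in rows))
    # Peek at the first sentence pair.
    first = rows[0]
    print(first["id"], "|", first["sentence1"][:80], "->", first["sentence2"][:80])
```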
train_2800
To capture more complex linguistic phenomena, leading approaches (Nakagawa et al., 2010;Jo and Oh, 2011;Kim et al., 2013) apply more advanced models but assume one document or sentence holds one sentiment.
this is often not the case.
contrasting
train_2801
Also, existing approaches infer sentiments without considering the changes of sentiments within or between clauses.
these changes can be successfully exploited for inferring fine-grained sentiments.
contrasting
train_2802
Although the word "ho-hum" indicates a negative polarity, it is not a frequent word.
the conjunction "but" clearly signals a contrast.
contrasting
train_2803
's approach works on the sub-sentential level.
it differs from Re-New in three aspects.
contrasting
train_2804
Most of the ORE systems utilize weak supervision knowledge to guide the extracting process, such as: Databases (Craven and Kumlien, 1999), Wikipedia (Wu and Weld, 2007;Hoffmann et al., 2010), Regular expression (Brin, 1999;Agichtein and Gravano, 2000), Ontology (Carlson et al., 2010;Mohamed et al., 2011) or Knowledge Base extracted automatically from Internet (Mintz et al., 2009;Takamatsu et al., 2012).
when iteratively coping with large heterogeneous data, the ORE systems suffer from the "semantic drift" problem, caused by error accumulation (Curran et al., 2007).
contrasting
train_2805
The traditional segmentation method may generate four lexical features {'台北', '大安', '森林', '公园'}, which is a partition of the relation mention.
the Omni-word feature denoting all the possible words in the relation mention may generate features as: Most of these features are nested or overlapped mutually.
contrasting
train_2806
The first impression is that more lexicon entries result in more power.
more lexicon entries also increase the computational complexity and bring in noise.
contrasting
train_2807
(2010) propose a crosslingual annotation projection approach which uses parallel corpora to acquire a relation detector on the target language.
the mapping of two entities involved in a relation instance may lead to errors.
contrasting
train_2808
Both corpora have the same entity/relation hierarchies, which define 7 entity types, 6 major relation types.
the Chinese corpus contains 633 documents and 9,147 positive relation instances while the English corpus only contains 498 files and 6,253 positive instances.
contrasting
train_2809
These models support efficient dynamic-programming inference, but only model local dependencies in the output label sequence.
citations have strong global regularities not captured by these models.
contrasting
train_2810
Note that some pairs of these constraints are redundant or logically incompatible.
we are using them as soft constraints, so these constraints will not necessarily be satisfied by the output of the model, which eliminates concern over enforcing logically impossible outputs.
contrasting
train_2811
Soft constraints can be implemented inefficiently using hard constraints and dual decomposition-by introducing copies of output variables and an auxiliary graphical model, as in .
at every iteration of dual decomposition, MAP must be run in this auxiliary model.
contrasting
train_2812
Furthermore the copying of variables doubles the number of iterations needed for information to flow between output variables, and thus slows convergence.
our approach to soft constraints has identical per-iteration complexity as for hard constraints, and is a very easy modification to existing hard constraint code.
contrasting
train_2813
Various term weighting strategies have been proposed and studied for the term-based representation (Amati and Van Rijsbergen, 2002;Singhal et al., 1996;Robertson et al., 1996).
existing studies on concept-based representation still used weighting strategies developed for term-based representation such as vector space models (Qi and Laquerre, 2012) and divergence from randomness (DFR) (Limsopatham et al., 2013a) and did not take the inaccurate concept mapping results into consideration.
contrasting
train_2814
Under such a situation, traditional retrieval functions would likely work well and generate satisfying retrieval performance since the relations among concepts are independent, which is consistent with the assumptions made in traditional IR (Manning et al., 2008).
the mapping results generated by MetaMap are not perfect.
contrasting
train_2815
Distributional representations of words have been successfully used in many language processing tasks such as entity set expansion (Pantel et al., 2009), part-of-speech (POS) tagging and chunking (Huang and Yates, 2009), ontology learning (Curran, 2005), computing semantic textual similarity (Besançon et al., 1999), and lexical inference (Kotlerman et al., 2012).
the distribution of a word often varies from one domain to another.
contrasting
train_2816
The unsupervised DA setting that we consider does not assume the availability of labeled data for the target domain.
if a small amount of labeled data is available for the target domain, it can be used to further improve the performance of DA tasks (Xiao et al., 2013;Daumé III, 2007).
contrasting
train_2817
For representation, we considered distributional features u(i) in descending order of their scores given by Equation 4, and then took the inverse-rank as the values for the distributional features (Bollegala et al., 2011).
none of these alternatives resulted in performance gains.
contrasting
train_2818
The selection of pivots is vital to the performance of SFA.
unlike SFA, which requires us to carefully select a small subset of pivots (ca.
contrasting
train_2819
For example, given the vectors representing red and car, composition derives a vector that approximates the meaning of red car.
the link between language and meaning is, obviously, bidirectional: As message recipients we are exposed to a linguistic expression and we must compute its meaning (the synthesis problem).
contrasting
train_2820
Andreas and Ghahramani (2013) discuss the issue of generating language from vectors and present a probabilistic generative model for distributional vectors.
their emphasis is on reversing the generative story in order to derive composed meaning representations from word sequences.
contrasting
train_2821
They introduce a bidirectional languageto-meaning model for compositional distributional semantics that is similar in spirit to ours.
we present a clearer decoupling of synthesis and generation and we use different (and simpler) training methods and objective functions.
contrasting
train_2822
(2012) reconstruct phrase tables based on phrase similarity scores in semantic space.
they resort to scoring phrase pairs extracted from an aligned parallel corpus, as they do not have a method to freely generate these.
contrasting
train_2823
We observe that this model has a very high precision (since many token sequences marked as motifs would recur in similar contexts, and would thus have the same motif boundaries).
the rule-based method has a very low recall due to lack of generalization capabilities.
contrasting
train_2824
Ideally we hope to find a permutation of the surrogate weights to map to a tensor in such a way that the tensor has a rank as low as possible.
matrix rank minimization is in general a hard problem (Fazel, 2002).
contrasting
train_2825
In fact, we utilize this algorithm in our propagation step ( §2.4).
the former work operates only at the level of sentences, and while the latter does extend the framework to sub-spans of sentences, they do not discover new translation pairs or phrasal probabilities for new pairs at all, but instead re-estimate phrasal probabilities using the graph structure and add this score as an additional feature during decoding.
contrasting
train_2826
Some recent work has looked at anaphora resolution (Hardmeier and Federico, 2010) and discourse connectives (Cartoni et al., 2011;Meyer, 2011), to mention two examples.
so far the attempts to incorporate discourse-related knowledge in MT have been only moderately successful, at best.
contrasting
train_2827
As shown in Figure 2a, DR does not include any lexical item, and therefore measures the similarity between two translations in terms of their discourse structures only.
DR-LEX includes the lexical items to account for lexical matching; moreover, it separates the structure (the skeleton) of the tree from its labels, i.e.
contrasting
train_2828
Adding DR and DR-LEX to the combinations manages to improve over five and four of the six tuned ASIYA metrics, respectively.
some of the differences are very small.
contrasting
train_2829
Like the traditional phrase translation model, the translation score of each bilingual phrase pair is modeled explicitly in our model.
instead of estimating the phrase translation score on aligned parallel data, our model intends to capture the grammatical and semantic similarity between a source phrase and its paired target phrase by projecting them into a common, continuous space that is language independent.
contrasting
train_2830
by adding an ℓ2-norm term) to deal with overfitting.
we did not find clear empirical advantage over the simpler early stop approach in a pilot study, which is adopted in the experiments in this paper.
contrasting
train_2831
A common trait of all current approaches, in fact, is the reliance on batch learning techniques, which assume a "static" nature of the world where new unseen instances that will be encountered will be similar to the training data.
similarly to translation memories that incrementally store translated segments and evolve over time incorporating users' style and terminology, all components of a CAT tool (the MT engine and the mechanisms to assign quality scores to the suggested translations) should take advantage of translators' feedback.
contrasting
train_2832
If its value is larger than the tolerance parameter ( ), the weights of the model are updated as much as the aggressiveness parameter C allows.
with OSVR, which keeps track of the most important points seen in the past (support vectors), the update of the weights is done without considering the previously processed i-1 instances.
contrasting
train_2833
Recent studies (Zhu et al., 2013) show, however, that this approach can also achieve the state-of-the-art performance with improved training procedures and the use of additional sources of information as features.
there is still room for improvement for these state-of-the-art transition-based constituent parsers.
contrasting
train_2834
For example, in Figure 1, for the input sentence w_0 w_1 w_2 and its POS tags abc, our parser can construct two parse trees using action sequences given below these trees.
parse trees in Treebanks often contain an arbitrary number of branches.
contrasting
train_2835
Assuming an input sentence contains n words, in order to reach a terminal state, the initial state requires n sh-x actions to consume all words in β, and n − 1 rl/rr-x actions to construct a complete parse tree by consuming all the subtrees in σ.
ru-x is a very special action.
contrasting
train_2836
One advantage of transition-based constituent parsing is that it is capable of incorporating arbitrarily complex structural features from the already constructed subtrees in σ and unprocessed words in β.
all the feature templates given in Table 1 are just some simple structural features.
contrasting
train_2837
Certain verbs sometimes have more than one obligatory preposition as in range from A to B.
the large majority of verbs satisfy rule (b).
contrasting
train_2838
In the E step of EM, we compute a probability distribution (according to the current model) over all possible completions of the observed data, and the expected counts of all types, which may be fractional.
note that in each completion of the data, the counts are integral.
contrasting
train_2839
(Also note that (6) becomes an exact solution to the marginal constraint.)
theoretically, this requires us to derive a new estimate for D. As this is not trivial, nearly all implementations simply use the original estimate (4).
contrasting
train_2840
Then, in the M step, we find the parameter values that maximize their likelihood.
MLE is prone to overfitting, one symptom of which is the "garbage collection" phenomenon where a rare English word is wrongly aligned to many French words.
contrasting
train_2841
Our method assembles observed names into an evolutionary tree.
the true tree must include many names that fall outside our small observed corpora, so our model would be a more appropriate fit for a far larger corpus.
contrasting
train_2842
This difference might have been caused by the above-mentioned errors.
at least, we can ascertain the important fact that the results for the corpora reduced by 1/100 are not so different from those of the original corpora from the perspective of their perplexity measures.
contrasting
train_2843
Kernel-based supervised methods such as dependency tree kernels (Culotta and Sorensen, 2004), subsequence kernels (Bunescu and Mooney, 2006) and convolution tree kernels (Qian et al., 2008) have been rather successful in learning this task.
purely supervised relation extraction methods assume the availability of sufficient labeled data, which may be costly to obtain for new domains.
contrasting
train_2844
This is because we have used a lot less labeled instances in the target domains: only 10% are used.
the gaps reduce when the number of source domains increases.
contrasting
train_2845
As we observed in a pilot experiment that there is a good chance that the predictions ranked in the second or third may still be correct, we select top three predictions as the candidate relations for each mention in order to introduce more potentially correct output.
we should discard the predictions whose confidences are too low to be true, where we set up a threshold of 0.1.
contrasting
train_2846
For example, in Figure 1, given USA as the subject of the relation Capital, we can only accept one possible object, because there is a great chance that a country only has one capital.
given Washington D.C. as the object of the relation Capital, we can only accept one subject, since usually a city can only be the capital of one country or state.
contrasting
train_2847
All these results show that embedding the relation background information into RE can help eliminate the wrong predictions and improve the results.
in the Riedel's dataset, Mintz++, the MaxEnt relation extractor, does not perform well, and our framework cannot improve its performance.
contrasting
train_2848
For example, the fourth relation mention in Figure 1 should have been labeled by the relation Senate-of.
the incomplete knowledge base does not contain the corresponding relation instance (Senate-of(Barack Obama, U.S.)).
contrasting
train_2849
Figure 5 demonstrates that the ranks of data matrices are approximately 2,000 for the initial optimization of DRMC-b and DRMC-1.
those high ranks result in poor performance.
contrasting
train_2850
Transitional expressions provide glue that holds ideas together in a text and enhance the logical organization, which together help improve readability of a text.
in most current statistical machine translation (SMT) systems, the outputs of compound-complex sentences still lack proper transitional expressions.
contrasting
train_2851
Thus, generating transitional expressions is necessary for achieving grammatical cohesion.
it is not easy to produce such transitional expressions in SMT.
contrasting
train_2852
Second, the MT quality estimation might be inconsistent across different document-specific MT models, thus the confidence score is unreliable and not very helpful to users.
to traditional static MT quality estimation methods, our approach not only trains the MT quality estimator dynamically for each document-specific MT model to obtain higher prediction accuracy, but also achieves consistency over different document-specific MT models.
contrasting
train_2853
Conclusions with regard to context width may have to be tempered somewhat, as the performance of the l1r1 configuration was found to not be significantly better than that of the l2r2 configuration.
l1r1 performs significantly better than l3r3 at p < 0.01, and l2r2 performs significantly better than l3r3 at p < 0.01.
contrasting
train_2854
Another discrepancy is found in the BLEU scores of the English→Chinese experiments, where we measure an unexpected drop in BLEU score under baseline.
all other scores do show the expected improvement.
contrasting
train_2855
The task is related to exploratory search (Marchionini, 2006).
to classical information seeking, in exploratory search, the user is uncertain about the information available, and aims at learning and understanding a new topic (White and Roth, 2009).
contrasting
train_2856
We define Query-Chain Focused Summarization as follows: for each query in an exploratory search session, we aim to extract a summary that answers the information need of the user, in a manner similar to Query-Focused Summarization, while not repeating information already provided in previous steps, in a manner similar to Update Summarization.
to query-focused summarization, the context of a summary is not a single query, but the set of queries that led to the current step, their result sets and the corresponding summaries.
contrasting
train_2857
A summary about the hurricane need not contain all of these sentences as they are all describing the same thing.
it is not trivial for the lexically-motivated MMR algorithm to detect that events like "passed", "uprooted" or "damaged" are in fact repetitive.
contrasting
train_2858
At these higher thresholds, temporal information is still able to help get an improvement in R-2.
as this affects only very few out of the 44 document sets, statistical variances mean that these R-2 scores are no longer significant.
contrasting
train_2859
It is a truth universally acknowledged that an annotation task in good standing be in possession of a measure of inter-annotator agreement (IAA).
no such measure is in widespread use for the task of syntactic annotation.
contrasting
train_2860
For example the only difference between the two leftmost trees in Figure 2 is a modifier, but δ_plain gives them distance 4 and δ_diff 0.
δ_diff might underestimate some distances as well; for example the leftmost and rightmost trees also have distance zero using δ_diff, despite our syntactic intuition that the difference between a transitive and an intransitive should be taken account of.
contrasting
train_2861
The noun involved in the copula relation is actress and thus it is taken as the page's hypernym lemma.
the extracted hypernym is sometimes overgeneral (one, kind, type, etc.).
contrasting
train_2862
For example, consider the category FRENCH TELEVI-SION PEOPLE; since this category has no associated pages, in phase 2 no hypernym could be found.
by applying the sub-categories heuristic, we discover that TELEVISION PEOPLE BY COUNTRY is the hypernym most voted by our target category's descendants, such as FRENCH TELEVISION ACTORS and FRENCH TELEVISION DIRECTORS.
contrasting
train_2863
A second project, MENTA (de Melo and Weikum, 2010), creates one of the largest multilingual lexical knowledge bases by interconnecting more than 13M articles in 271 languages.
to our work, hypernym extraction is supervised in that decisions are made on the basis of labelled training examples and requires a reconciliation step owing to the heterogeneous nature of the hypernyms, something that we only do for categories, due to their noisy network.
contrasting
train_2864
We decided to include the latter for comparison purposes, as it uses knowledge from 271 Wikipedias to build the final taxonomy.
we recognize its performance might be relatively higher on a 2012 dump.
contrasting
train_2865
(2013) actually reported accuracy on this dataset.
since their system predicted answers for almost every question (p.c.
contrasting
train_2866
The body of work on factoid QA is too broad to be discussed here (see, e.g., the TREC workshops for an overview).
in the context of LS, Yih et al.
contrasting
train_2867
Model features associated with Elaboration relations are ranked highly by the learned model.
the answer preferred by the baseline contains mostly Joint relations, which "represent the lack of a rhetorical relation between the two nuclei" (Mann and Thompson, 1988) and have very small weights in the model.
contrasting
train_2868
The open-domain YA model learns to place more weight on LS features, which are unable to provide the same utility in the biology domain.
so far, we have treated LS and discourse as distinct features in the reranking model; given that LS features greatly improve the CR baseline, we hypothesize that a natural extension (footnote: the interpolation parameter was tuned on the YA development corpus).
contrasting
train_2869
(2011) generated semantic knowledge like causality that is written in no sentence.
their method cannot combine more than two pieces of knowledge unlike ours, and their target knowledge consists of nouns, but ours consists of verb phrases, which are more informative.
contrasting
train_2870
(2012) extracted 500,000 event causalities with about 70% precision.
as described in Section 1, our event causality criteria are different; since they regarded phrase pairs that were not self-contained as event causality (their annotators checked the original sentences of phrase pairs to see if they were event causality), their judgments tended to be more lenient than ours, which explains the performance difference.
contrasting
train_2871
Alternatively, a ranking ap-proach, similar to the one used to generate intranarrative temporal ordering, can also be extended to the cross-narrative case.
the features related to narrative structure and relative and implicit temporal expressions used for temporal ordering within a clinical narrative may not be applicable across narratives.
contrasting
train_2872
Sequence alignment algorithms have been developed and popularly used in bioinformatics.
multiple sequence alignment (MSA) has been shown to be NP complete (Wang and Jiang, 1994) and various heuristic algorithms have been proposed to solve this problem (Notredame, 2002).
contrasting
train_2873
These projects have produced knowledge bases containing many millions of relational facts between entities.
despite these impressive advances, there are still major limitations regarding precision.
contrasting
train_2874
Additionally, in order to ensure that fact candidates mentioned in similar sources have similar believability scores, our believability computation model incorporates influence of comentions.
we must avoid falsely boosting co-mentioned fact candidates.
contrasting
train_2875
For example, the triples: Obama born in Kenya and Obama graduated from Harvard are valid fact candidates.
the triple: Obama deserves Nobel Peace Prize is not.
contrasting
train_2876
This is because the cardinality of person died in location is one (1).
the cardinality of "INVERSE-OF(died in)" is many(n).
contrasting
train_2877
S is subjective, expressing the opinion of the author.
O is objective, stating only what has been alleged.
contrasting
train_2878
A summary of the outcome of the study is shown in Figure 1; 74% of the untrustworthy articles were independently labeled as subjective.
64% of trustworthy articles were independently labeled as objective.
contrasting
train_2879
In the first version of our model, we could simply set the score of each edge to be w·f(x_i, x_j), and the MST recovered in this way would indeed be the highest scoring tree: arg max_y P(y|x).
this straightforward approach doesn't apply to the full model which also uses sibling features.
contrasting
train_2880
Within a specific domain, terms typically just have a single sense.
our algorithms could certainly be adapted to the case of multiple term senses (by treating the different senses as unique nodes in the tree) in future work.
contrasting
train_2881
Direct indicators of hypernymy, such as Hearst-style context patterns, are the core feature for the model and are discovered automatically via discriminative training.
other indicators, such as coordination cues, can indicate that two words might be siblings, independently of what their shared parent might be.
contrasting
train_2882
The determiner (w_2) and the direct object (w_3) are correlated in that the choice of determiner depends on the plurality of w_3.
the choice of verb (w_1) is mostly independent of the determiner.
contrasting
train_2883
As we discussed in §3.1 all elements of the distance matrix are functions of observable quantities if the underlying tree u is known.
only the word-word sub-block D_WW can be directly estimated from the data without knowledge of the tree structure.
contrasting
train_2884
NP-hard (Desper and Gascuel, 2005) if u is allowed to be an arbitrary undirected tree.
if we restrict u to be in U, as we do in the above, then maximizing ĉ(u) over U can be solved using the bilexical parsing algorithm from Eisner and Satta (1999).
contrasting
train_2885
We assume that child learners are able to infer a representation of the situational context from their non-linguistic environment.
in our simulations we approximate the environmental information by running a topic model (Blei et al., 2003) over a corpus of childdirected speech to infer a topic distribution for each situation.
contrasting
train_2886
Infants attend to distributional characteristics of their input (Maye et al., 2002, 2008), leading to the hypothesis that phonetic categories could be acquired on the basis of bottom-up distributional learning alone (de Boer and Kuhl, 2003; Vallabha et al., 2007; McMurray et al., 2009).
this would require sound categories to be well separated, which often is not the case; for example, see Figure 1, which shows the English vowel space that is the focus of this paper.
contrasting
train_2887
The third factor, the likelihood of the vowel formants w_hi in the categories given by the lexeme v_l, is of the same form as the likelihood of vowel categories when resampling lexeme vowel assignments.
here it is calculated over the set of vowels in the token assigned to each vowel category (i.e., the vowels at indices where v_t• = c).
contrasting
train_2888
Non-adjacent constraints are difficult for string-based approaches because of the exponential number of possible relationships across non-adjacent segments.
the Wolof results show that by learning violations directly, IBPOT does not encounter problems with non-adjacent constraints.
contrasting
train_2889
In recent years, there have been an increasing number of studies (Su et al., 2007;Kittur et al., 2008;Sheng et al., 2008;Snow et al., 2008;Callison-Burch, 2009) using crowdsourcing for data annotation.
because annotators that are recruited this way may lack expertise and motivation, the annotations tend to be more noisy and unreliable, which significantly reduces the performance of the classification model.
contrasting
train_2890
(2012) propose an algorithm which first trains individual SVM classifiers on several small, class-balanced, random subsets of the dataset, and then reclassifies each training instance using a majority vote of these individual classifiers.
the automatic correction may introduce new noise to the dataset by mistakenly changing a correct label to a wrong one.
contrasting
train_2891
Existing approaches toward bias detection have not gone far beyond "bag of words" classifiers, thus ignoring richer linguistic context of this kind and often operating at the level of whole documents.
recent work in sentiment analysis has used deep learning to discover compositional effects (Socher et al., 2011b;Socher et al., 2013b).
contrasting
train_2892
The strong correlation between US political parties and political ideologies (Democrats with liberal, Republicans with conservative) lends confidence that this dataset contains a rich mix of ideological statements.
the raw Convote dataset contains a low percentage of sentences with explicit ideological bias.
contrasting
train_2893
In Figure 5D, "be used as an instrument to achieve charitable or social ends" reflects a liberal ideology, which the model predicts correctly.
our model is unable to detect the polarity switch when this phrase is negated with "should not".
contrasting
train_2894
Experiments on Chinese-English translation show that the reordering approach can significantly improve a state-of-the-art hierarchical phrase-based translation system.
the gain achieved by the semantic reordering model is limited in the presence of the syntactic reordering model, and we therefore provide a detailed analysis of the behavior differences between the two.
contrasting
train_2895
The popular distortion or lexicalized reordering models in phrase-based SMT make good local predictions by focusing on reordering on word level, while the synchronous context free grammars in hierarchical phrase-based (HPB) translation models are capable of handling non-local reordering on the translation phrase level.
reordering, especially without any help of external knowledge, remains a great challenge because an accurate reordering is usually beyond these word level or translation phrase level reordering models' ability.
contrasting
train_2896
Their linear classifier achieved a reported score of 39.06 when combining information from both translators and editors.
our proposed graph-based ranking framework achieves a score of 41.43 when using the same information.
contrasting
train_2897
The motivation for using lower order models is that shorter contexts may be observed more often and, thus, suffer less from data sparsity.
a single rare word towards the end of the local context will always cause the context to be observed rarely in the training data and hence will lead to an unreliable estimation.
contrasting
train_2898
It also confirms that our motivation to produce lower order n-grams by omitting not only the first word of the local context but systematically all words has been fruitful.
we also see that for the observed sequences the GLM performs slightly worse than MKN.
contrasting
train_2899
The idea behind copula theory is that the cumulative distribution function (CDF) of a random vector can be represented in the form of uniform marginal cumulative distribution functions, and a copula that connects these marginal CDFs, which describes the correlations among the input random variables.
in order to have a valid multivariate distribution function regardless of n-dimensional covariates, not every function can be used as a copula function.
contrasting