id: string (length 7 to 12)
sentence1: string (length 6 to 1.27k)
sentence2: string (length 6 to 926)
label: string (4 classes)
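The four fields above are the dataset-viewer's schema summary (column name, type, and value-length range). As a minimal, hypothetical sketch of how such records could be read from a JSON-lines export (the file name train.jsonl and the one-object-per-line layout are assumptions, not part of the original dump):

    import json
    from dataclasses import dataclass

    @dataclass
    class Example:
        id: str         # e.g. "train_8500" (7 to 12 characters per the header)
        sentence1: str  # first sentence of the pair (6 to 1.27k characters)
        sentence2: str  # second sentence of the pair (6 to 926 characters)
        label: str      # one of 4 classes, e.g. "contrasting"

    def load_examples(path="train.jsonl"):  # hypothetical export file name
        # One JSON object per line, mirroring the id/sentence1/sentence2/label layout.
        with open(path, encoding="utf-8") as f:
            for line in f:
                yield Example(**json.loads(line))

    # For instance, keep only the pairs labelled "contrasting":
    contrasting = [ex for ex in load_examples() if ex.label == "contrasting"]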
train_8500
Looking at "left" and "Sunday", it is quite straightforward to determine that they take place within the same time span.
it gets significantly more difficult when we look at "said" and "Sunday".
contrasting
train_8501
Methods proposed in this paper need further validation on a large-scale corpus.
we will conduct further research on the Tibetan BaseNP templates using grammatical rules to produce high-quality results.
contrasting
train_8502
Therefore, the usage of the capitalization feature is not an option in Arabic NER.
the English translation of Arabic words may be exploited in this respect (Farber, Freitag, Habash and Rambow, 2008).
contrasting
train_8503
Indeed, adding the property trapped for the class Animals to Wikipedia, and filling in its corresponding values for all instances (kinds) of Animals, would likely have little value.
conjectural attributes like aggressive?
contrasting
train_8504
This is, of course, a positive behaviour when the system translates sentences from the same domain.
if this is not the case and the input sentences contain no or very few longer phrases from the translation tables, the system is not able to construct good translations from shorter phrases.
contrasting
train_8505
Our baseline systems, trained and tuned on general domain with maximum phrase length set to seven, translate general-domain test sets with an average phrase length of 3.49 (see Table 4).
for the systems tuned and tested on in-domain data, this score is as low as 1.80.
contrasting
train_8506
Specific to ranking problems, ensemble ranking or ranking aggregation methods have been widely studied in many different tasks especially in information retrieval.
there have been only limited attempts to apply ensemble methods to summarization.
contrasting
train_8507
In order to identify the importance of sentences from multiple aspects, aggregation methods can be used in summarization to combine results from different summarizers.
aggregation methods for summarization have seldom been discussed in previous work.
contrasting
train_8508
For example, in our example thread, the message from Bill (first message) could be interpreted in isolation as a request from a peer or even a subordinate.
if you take into consideration that Barry delegated the task to Stephanie upon receiving the message from Bill, the first message could be considered as Bill assigning a task to Barry.
contrasting
train_8509
We will investigate using automatic taggers (such as a dialog act tagger and link predictor (Hu et al., 2009), an ODP tagger (Prabhakaran et al., 2012b)) to extract these features to predict power.
one main contribution of this paper is to show the interaction between these dimensions of the dialog (like dialog structure and ODP) and situational power, which is an important first step towards solving the problem.
contrasting
train_8510
In this sense, our approach is similar to theirs.
while their distance measure of dependency heterogeneity is limited to three common, easily mapped relationships between two languages, namely SUB, OBJ and NMOD, we further generalize to automatic mappings of any bilingual dependency relationships.
contrasting
train_8511
This trend suggests that there do exist some noisy mappings, more or less, in both A_Ψ and M_Ψ.
removing them can only lead to limited performance improvement while too few mappings severely harm the performance.
contrasting
train_8512
Table 2), both sets of mappings are diverse, without a single dominant mapping.
for adjectives, the dependency mapping amod-amod accounts for nearly 70% of total mappings.
contrasting
train_8513
Naively, one might think that simply replacing the simple clustering algorithm used by RR08 with a more elaborate approach would result in improved final categories.
as we show in Section 3, a variety of clustering approaches did not lead to a noticeable improvement.
contrasting
train_8514
(Goldwater et al., 2006; Johnson et al., 2007; Haghighi and Klein, 2007; Johnson and Goldwater, 2009)).
we are not aware of works that explored DP as a model for creating a diverse ensemble of experts for clustering tasks.
contrasting
train_8515
Exceptions include the hierarchical phrase-based SMT method of learning ITGs (Chiang, 2005), which does not rely on external resources.
unlike Chiang (2005) where huge numbers of (linguistically questionable) phrase translations are essentially memorized, our present work aims at inducing syntactic categories at an early stage in the learning, as occurs in child language acquisition.
contrasting
train_8516
The same modest improvement was observed when splitting the grammar with segment length 4, where we moved from 45.0 cross-entropy to 44.5 (model fstg_c_c to fstg_c_c_s2).
splitting again gave 42.9 (moving to fstg_c_c_s2_s2), and a third time gave 39.5 (moving to fstg_c_c_s2_s2_s2).
contrasting
train_8517
From the second column of the table we can observe that the correct translation is learned to be the best translation after bootstrapping with LTGs alone.
singleton translation and other synonyms of 里面 do not appear in the top five translations.
contrasting
train_8518
Our focus in this paper is hence restricted to distinguishing between codes TR2 and TR3.
our approach is extensible to the detection of TR1 if appropriate training data were available.
contrasting
train_8519
A2: Pronoun Count - Pronouns are typically discarded in most text classification applications in the pre-processing stage, under the assumption that they occur too frequently to bear any information.
in (Campbell and Pennebaker, 2003) it was shown that changes in the way people use pronouns when writing about traumatic experiences are a powerful predictor of changes in physician visits, or an indicator of their general health.
contrasting
train_8520
In all of these labels, there is extensive description of the distress condition in the messages.
there are many labels that are implied in the text, and are inconsistently inferred even amongst human annotators.
contrasting
train_8521
The longer the vector, the longer the memory of an ant's passage is kept.
the proportion of odour components deposited has the inverse effect.
contrasting
train_8522
We reimplemented them with the E x t Lesk measure and used the optimal parameters provided.
the similarity values are higher than with the standard Lesk algorithm and we had to adapt the parameters to reflect that difference.
contrasting
train_8523
When a voting strategy is used, ACA is ahead with 1.75% compared to Degree.
it is important to note that when looking at the scores per part of speech, Degree exhibits notably higher results for nouns (85% versus 76.35%), while ACA performs much better for adverbs, adjectives, and verbs (83.98%, 82.44%, 74.16%).
contrasting
train_8524
The vote strategy allowed ACA to reach the level of the first sense baseline, to beat the state-of-the-art unsupervised systems and the lowest performing supervised systems.
some open-ended questions remain.
contrasting
train_8525
The estimation of the parameters, whether manual or through an automated learning algorithm, prevents these algorithms from being entirely unsupervised.
the degree of supervision remains far below supervised approaches that use training corpora approximately 1000 times larger.
contrasting
train_8526
The two annotations can be deterministically derived from one another and express a similar syntactic relation, namely in both cases the PP is the complement of the adjective "sure".
selecting one of the alternatives (the preposition) and not the other (the NP) results in a more learnable scheme.
contrasting
train_8527
Predictability yields similar results to learnability in the infinitive verb structure as well, both showing no strong bias.
in the two other structures results diverge.
contrasting
train_8528
The fact that it correlates with learnability can provide a partial explanation to the learnability results.
it has several disadvantages compared to learnability.
contrasting
train_8529
These approaches rely on both a knowledge source such as WordNet (Miller et al., 1993) and a semantic distance metric.
in the current approach we do not need such a knowledge source or similarity judgments, and since our approach is data-driven, selectors function as an abstraction of word instance context rather than as a list of semantically similar words.
contrasting
train_8530
In sentence 2 the immediate contexts before and after the target word seem contradictory: "... action have occurred..." implies occur-v.1 (to "happen or take place"), while "... occurred to him ..." implies occur-v.2 (to "come to mind").
when considering the whole context, occur-v.2 fits best, and in fact, the selectors for this instance match with many of the most frequent selectors for occur-v.2 such as 'belong', 'lead', 'listen', and 'try'.
contrasting
train_8531
More importantly, the new morphology brought by these phenomena complicates any processing based on regular unknown-word identification through suffix analysis.
our general annotation strategy consists in staying as close as possible to the French Treebank guidelines (Abeillé et al., 2003) in order to have a data set that is as compatible as possible with existing resources.
contrasting
train_8532
Regarding the Google Web Treebank, the way annotation guidelines had to be extended to deal with user-generated content is largely consistent between both treebanks.
our treebank differs from the Google Web Treebank in several aspects.
contrasting
train_8533
The majority of the research has focused on machine learning (ML) approaches (Bikel et al., 1999; Borthwick, 1999; Sekine, 1998; Lafferty et al., 2001a; Yamada et al., 2001) because these are easily trainable, adaptable to different domains and languages, and their maintenance is also less expensive.
rule-based approaches lack the ability to deal with the problems of robustness and portability.
contrasting
train_8534
Some works on NER for Indian languages can be found in (Ekbal and Saha, 2011; Bandyopadhyay, 2009b, 2007; Li and McCallum, 2004; Patel et al., 2009; Srikanth and Murthy, 2008; Shishtla et al., 2008; Vijayakrishna and Sobha, 2008).
the work related to NER in Indian languages is still at a nascent stage due to factors such as (Ekbal and Saha, 2011): • Unlike English and most of the European languages, Indian languages lack capitalization information, which plays a very important role in NE identification.
contrasting
train_8535
For example, in 34, the suffix -t- is rejected as being a conditional mood marker as it belongs to the category of must-end markers and cannot be followed by any other verb morpheme (except for the gender-number marker).
the habitual -t- may be followed by a tense auxiliary.
contrasting
train_8536
Such a leader is a skilled task leader, which corresponds to the social science theory put forth in Beebe and Masterson (2006).
a thought leader in the group is someone who has credibility in the group and introduces ideas or thoughts that others pick up on or support.
contrasting
train_8537
1998; Stolcke et al., 2000; Ji & Bilmes, 2006, inter alia) or to map them onto subsequences or "dialogue games" (Carlson 1983; Levin et al., 1998), from which participants' functional roles in conversation (though not social roles) may be extrapolated (e.g., Linell, 1990; Poesio and Mikheev, 1998; Field et al., 2008).
the effects of speech acts on social behaviors and roles of conversation participants have not been systematically studied.
contrasting
train_8538
We could choose another baseline, such as selecting the participant with the greatest number of turns as the Leader or Influencer.
we see similar performance for such baselines as for the random one.
contrasting
train_8539
This research field is becoming increasingly important.
most previous work depends on the availability of external knowledge sources, or assumes a static context around terms and expects the names to be the only changing factor.
contrasting
train_8540
The results presented in this paper are "anecdotal" (to use the words of the authors) and thus cannot be used for direct comparison.
because of the promising results we use the same method for defining a context.
contrasting
train_8541
The latter is an easier case of evolution because of the overlapping first name and can be targeted using entity consolidation or linking techniques (Shen et al., 2012;Ioannou et al., 2010).
most existing techniques do not take historic changes into account and only focus on merging concurrent representations of the same entity.
contrasting
train_8542
By comparing the boxplots of the two PRESEMT versions for BLEU, it can be seen that the boxplots for PRESEMT-1 occupy similar portions of the score range to those of PRESEMT-2.
the range for PRESEMT-1 is displaced towards lower BLEU values in comparison to PRESEMT-2, while a larger number of outliers also exist for PRESEMT-1.
contrasting
train_8543
Thus, most median values of PRESEMT-1 for different sentence sizes are placed at lower BLEU levels, below the 0.15 mark, with only a few outliers exceeding the limited range of the boxplots.
when turning to PRESEMT-2, the median values are higher, exceeding 0.200 in most cases and even reaching 0.400 in some of the cases.
contrasting
train_8544
Conventional solutions for this problem rely on the use of tf-idf and similar measures, which indicate whether a keyword is specifically important within a selected document set or whether it is also frequent within the corpus of all documents.
when looking at geolocated documents the situation is quite different.
contrasting
train_8545
Furthermore, in the case of Twitter messages, which are limited to 140 characters, there will rarely be more than one occurrence of a specific term, and it is thus not meaningful to calculate the term frequency for a single message.
if a set of messages M is examined - e.g.
contrasting
train_8546
In this mode, at least some terms related to the event (panel, international, comic) achieve a high ranking and can be seen in the visualization.
the top ranking terms are still dominated by terms of regional prominence (san, diego, center).
contrasting
train_8547
A harder problem emerging with the availability of data in many languages is the problem of discriminating between closely related languages.
only a few researchers dealt with that problem in the past.
contrasting
train_8548
An additional remark should be made that this system focuses on domain robustness and not the problem of discriminating similar languages.
textCat only uses the most frequent N-grams per language and Lingua::Identify uses prefixes, suffixes and frequent words and, apparently, none of these features can discriminate well between similar languages such as the ones we deal with.
contrasting
train_8549
Blacklists could be built manually using linguistic intuitions by native speakers.
it is also possible to derive such data sets from corpora simply by comparing word frequencies.
contrasting
train_8550
We could even introduce yet another threshold to define a margin that describes the grey area of uncertain decisions.
in that case, we would end up with a classifier that uses the following decision rule: we do not apply this method in the present paper as this introduces yet another free parameter that needs to be adjusted.
contrasting
train_8551
In that case, they would easily end up in blacklists without being appropriate for language discrimination.
certain punctuation differences may also work quite well for distinguishing between languages.
contrasting
train_8552
The obtained p-values are presented in Table 5. The difference between the Naive Bayes and Blacklist classifiers has been shown not to be statistically significant on this size of evaluation set (p = 0.188), but the difference is an interesting fact that should be looked into in future work.
the difference between the Naive Bayes and Blacklist classifiers and the other classifiers is highly statistically significant (p < 0.001), while the difference between our baseline Markov chain and the langid.py classifier (0.027) is marginal (p = 0.094).
contrasting
train_8553
As expected, we can see a significant decline of the accuracy with very short documents.
already at about 70 words we have an overall performance of over 90%.
contrasting
train_8554
The unsupervised setting is important in itself, and the development of these methods arguably provides interesting insights into modeling implicit supervision signals present in unlabeled data.
given that small amounts of labeled data are often easy to obtain, it is surprising that no previous work that we are aware of looked into integration of labeled data into unsupervised SRL systems.
contrasting
train_8555
In the labeling stage, semantic roles are represented by clusters of arguments, and labeling a particular argument corresponds to deciding on its role cluster.
instead of dealing with argument occurrences directly, in BayesSRL they are represented as predicate-specific syntactic signatures, called argument keys.
contrasting
train_8556
We assume that there exists a fixed latent mapping g from argument keys to semantic roles and any such mapping is a-priori equiprobable, P(g) = const.
when generating a label g(k) for a key k, we assume that it can be replaced by any of the remaining R − 1 roles with small probability γ.
contrasting
train_8557
Similarly to SRL, semisupervised approaches in this area are also typically based on bootstrapping techniques (e.g., (Agichtein and Gravano, 2000;Rosenfeld and Feldman, 2007)) and often achieve impressive results.
their set-up is arguably different from ours as relation extractors are generally more precision-oriented, focus primarily on binary relations and can partially sidestep the complexity of language.
contrasting
train_8558
Continuous discourse relations are claimed to be easier to process and more expected than other types.
relations that are discontinuous (for example adversatives) would be less expected and more difficult to process.
contrasting
train_8559
It would also suggest that the annotators of the corpus tended to mark that relation even in the absence of direct textual signals.
a small value of implicitness means that the discourse relation is expressed with an explicit discourse cue more often than average, and we would interpret that as the relation being not easily predictable or difficult to process, such that an explicit marker is needed to avoid a peak in information density.
contrasting
train_8560
In the third and fourth rules, we see the structure of the English phrase and how it is translated into French.
compared to the baseline model, this approach can handle both token and phrase translations.
contrasting
train_8561
Researchers have proposed several ways to cope with this situation, and we plan to integrate some of these in our future work.
an alternative approach is to exploit grammar rules directly: this allows us to increase variety without introducing noisy translations, and we discuss this approach next.
contrasting
train_8562
Once these are fixed, we can use a subset of the topics to appropriately tune parameters for the rest.
better tuning methods need to be devised for a truly robust approach to combining these CLIR models.
contrasting
train_8563
These 'inductive' approaches have achieved respectable accuracy (60-70 F-measure against a dictionary) and are more portable than earlier methods.
their ability to improve in accuracy is limited by their inability to incorporate information beyond the GR co-occurrences and heuristics that identify candidate SCFs on a per-sentence basis.
contrasting
train_8564
sentence), not every GR type will be instantiated.
to model the multi-way co-occurrences in a tensor framework, each instance must have a feature for every mode to be incorporated into the tensor.
contrasting
train_8565
In summary, the proposed criteria for extraction of translation candidates are not biased towards high-frequency or low-frequency words, as they treat all words the same, trying to find potential candidates according to the defined set of features.
in practice, the majority of the matched candidates will be low-frequency words.
contrasting
train_8566
For instance, we could opt for another strategy when deciding how to change the size of sub-corpora, skip already processed sub-corpora, remodel the criteria for extraction from Section 2.2, change stopping criteria, or employ a procedure for the sub-corpora sampling different from the one presented in Subsection 2.3.1.
our main goal is to propose a general framework for lexicon extraction when the data sampling approach is employed, where other researchers could design their own algorithms built upon the same idea.
contrasting
train_8567
If that is not true, we could use SampLEX only to extract source words for which a translation might be found, but the particular translation for each extracted word could then be obtained by some other method.
it is not the case, as the results in Tables 5 and 6 reveal.
contrasting
train_8568
Online discussion forums are a valuable means for users to resolve specific information needs, both interactively for the participants and statically for users who search/browse over historical thread data.
the complex structure of forum threads can make it difficult for users to extract relevant information.
contrasting
train_8569
From Table 4 we can see a similar trend to that in Section 5.1, with our method improving over both baselines when we use either gold-standard or automatically-predicted features.
there are some notable differences.
contrasting
train_8570
We are also able to attain improvements in Solvedness classification accuracy using automatically-predicted thread discourse structure, although not at a level of statistical significance.
simulations suggest that as we improve the F-score of thread discourse structure parsing, the Solvedness classification accuracy will increase disproportionately.
contrasting
train_8571
A previous study indicates that the presence of discourse markers can greatly help relation recognition, and the most general senses (i.e., comparison, contingency, temporal and expansion) can be disambiguated with 93% accuracy based solely on the discourse connectives (Pitler et al., 2008).
the absence of explicit textual cues makes it very difficult to identify the implicit discourse relations.
contrasting
train_8572
The previous research on Chinese opinion analysis focuses on subjective expressions (opinionated sentences) (Liu, 2010), as in the Multilingual Opinion Analysis Task (MOAT) of NTCIR (Seki et al., 2010).
some objective expressions that describe positive or negative facts are also informative in that they express some kinds of evaluations.
contrasting
train_8573
how many arguments are shared by the two predicates (NUM_SHARED_ARGS)?
ψ_S pays more attention to inferring the lexical semantic relations of edits.
contrasting
train_8574
The verbose, non-colloquial and monologue nature of much of Web text is not a good match for the characteristics of human-human dialogue.
there are some other sources on the Web that have more potential for this purpose, such as human-written comments on news websites (e.g., NYTimes.com (Marge et al., 2010)) and online forums (e.g., RottenTomatoes.com (Huang et al., 2007), 2ch.net (Inoue et al., 2011)).
contrasting
train_8575
Adding new patterns to support more types of questions or using punctuation to detect questions may seem to be straightforward solutions.
our intention to potentially incorporate speech input will render the latter option useless, while the former solution will increase false positives during question detection.
contrasting
train_8576
The POS feature stream contributes the most to the factored RNNLM.
RNNLM (Mikolov et al., 2011b) does not use it.
contrasting
train_8577
In both of the two examples (En 3 and En 4), supporting carries subjective senses.
the corresponding translations of supporting in Romanian for En 3 and En 4 are different: sprijinirea (in Ro 3) and sustinerea (in Ro 4).
contrasting
train_8578
It would be difficult to tell two domains apart based on the HMM labels since the same HMM states may generate many similar words from a variety of domains.
these unsupervised representations are not specifically discriminative for any NLP tasks.
contrasting
train_8579
An unsupervised discriminative model can directly learn synchronous grammars in a theoretically justified manner, just like a generative model.
the advantage over a generative model is that it can easily incorporate word alignment information, which has proved useful in the two-step pipeline.
contrasting
train_8580
Z(s) is the partition function. Such a discriminative latent variable model is not new to SMT (Blunsom et al., 2008; Kääriäinen, 2009; Xiao et al., 2011).
our work is distinguished from previous work by applying this model to synchronous grammar induction.
contrasting
train_8581
The purpose of the latent variable model in such previous work is to do max-translation decoding and training (Blunsom et al., 2008), or to eliminate the gap between heuristic extraction and decoding (Kääriäinen, 2009), instead of grammar induction as synchronous rules are still extracted by the heuristic two-step pipeline.
our interest lies in using a latent variable model to learn synchronous grammars directly from sentence pairs.
contrasting
train_8582
In a synchronous hypergraph, a node is denoted by a nonterminal with a bispan.
a node in the source hypergraph is a nonterminal that spans a continuous sequence of words of the source sentence.
contrasting
train_8583
Here, G denotes all potential synchronous grammars.
the size of the set of potential SCFGs G is extremely large given a vocabulary Ω, resulting in a large number of hyperedges in the source hypergraph.
contrasting
train_8584
The approximation makes the training algorithm tractable.
there is still one problem: how to efficiently construct the synchronous hypergraph?
contrasting
train_8585
The number of these rules is comparable with that of the grammar extracted by the traditional pipeline, which has 13.2 million rules.
the two grammars are quite different as shown in Table 1.
contrasting
train_8586
The improvement of UDSGI over Baseline is statistically significant (Koehn, 2004). We cannot directly evaluate the quality of the grammar, since there is no gold-standard grammar.
as a grammar is used to generate target translations, it is reasonable to judge the quality of a grammar by the best translations it can produce.
contrasting
train_8587
The baseline still extracts rules from the 200K data, while our approach also learns grammar on 200K data.
we run GIZA++ on the entire LDC data.
contrasting
train_8588
In this way, they prefer word alignments that are consistent with syntactic structure alignments.
labeled word alignment data are required in order to learn the discriminative model.
contrasting
train_8589
There is a significant literature in sentence compression aimed at modeling the first of these, length: producing meaning-preserving alternations that reduce the length of the input string (Chandrasekar et al., 1996; Vanderwende et al., 2007; Clarke and Lapata, 2008; Cohn and Lapata, 2009; Yatskar et al., 2010).
we know of no previous work aimed at modeling meaning-preserving transformations that systematically transform the register or style of an input string.
contrasting
train_8590
Presumably this is due to the conflation of stylistic differences and semantic adequacy discussed in §3.
it also appears that the correlation between BLEU and human style judgments is too low to be of practical use for evaluating style.
contrasting
train_8591
Under an intension-based perspective, a learner's primary goal is to understand word meanings, and it is the similarities between words' intensions that cause word choice confusions.
this is not to say that learners ignore word usages.
contrasting
train_8592
Using RCA, the system would consider all three to be mutually confusable because they appear almost equally frequently in the same context.
while the preposition selector considers p_b and p_c to be confusable with p_a, it does not conclude that p_b and p_c are also mutually confusable under context C. Thus, if most usage contexts contain only one or two preposition types, the preposition selector and RCA may produce similar confusion sets; but if the data also include usage contexts that contain three or more preposition types, RCA may offer confusion sets based on a more globally optimized similarity metric.
contrasting
train_8593
The scores of baseline method 2 were zero for three programs, whose summaries did not share words with that of the target program.
proposed method 2 properly scored these programs.
contrasting
train_8594
The method presented in (Li et al., 2010) is somewhat similar to our work, as it considered novelty, coverage, and balance as a whole.
sentence features in the existing literature were usually acquired asynchronously.
contrasting
train_8595
We used the JUMAN/KNP 1 analyzer to parse sentences and obtain the structures automatically.
not every P-A pair is meaningful in information navigation; actually, only a fraction of the patterns are useful.
contrasting
train_8596
Even though word ordering errors in Chinese sentences may result in incorrect segmentations, the experiments show that the word-based approach (i.e., with segmentation) is better than the character-based approach (i.e., without segmentation) irrespective of the use of different segmentation systems and reference corpora.
although the Chinese Web POS 5-gram corpus is smaller than the Google Chinese Web 5-gram corpus, the experiments show that they have similar performances on the WOE detection.
contrasting
train_8597
Machine translation aims to generate a target sentence that is semantically equivalent to the source sentence.
most current statistical machine translation models do not model the semantics of sentences.
contrasting
train_8598
In practice, in order to project the translation candidates of source elements to the target-side-like PAS, we require that a source argument aligns only to a target argument.
the result of bilingual SRL usually does not satisfy this requirement.
contrasting
train_8599
Intuitively, the combination can be operated directly by cube pruning (Chiang, 2007).
since the source elements are translated independently and many source elements' spans are very short, numerous phrase translation rules are ignored during translation.
contrasting