Dataset columns:
  id          string, length 7-12 characters
  sentence1   string, length 6-1.27k characters
  sentence2   string, length 6-926 characters
  label       string, 4 classes
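The rows below follow this schema. As a convenience, here is a minimal sketch of loading and inspecting them with the Hugging Face `datasets` library; the repository path "username/contrasting-pairs" is a placeholder rather than the actual dataset identifier, and the snippet assumes the data is published in a format `load_dataset` can read.

```python
# Minimal sketch (assumptions noted above): load the train split and print the
# four fields described in the schema. "username/contrasting-pairs" is a
# hypothetical dataset path -- substitute the real repository id.
from datasets import load_dataset

ds = load_dataset("username/contrasting-pairs", split="train")

for example in ds.select(range(3)):  # look at the first three rows
    print(example["id"])             # e.g. "train_8300"
    print(example["sentence1"])      # first sentence of the pair
    print(example["sentence2"])      # second sentence of the pair
    print(example["label"])          # one of 4 classes, e.g. "contrasting"
```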
train_8300
Theories of the coherence of discourse and discourse relations (Barzilay and Lapata, 2005; Byron and Stent, 1998; Hobbs, 1979, 1985; Mann and Thompson, 1988; Marcu and Echihabi, 2002) have proved useful for the semantic interpretation of discourse.
in the world of Twitter, Facebook, and other social media where people voluntarily join in the conversation, dialogue is often focused on the social engagements between participants.
contrasting
train_8301
the focus of the dialogue and its participants.
our early experimentation revealed that this straightforward approach of using Grosz and Sidner's framework with prevailing dialogue processing techniques fails to capture the complexities of human social interactions and is incapable of reliably inferring the social implicatures of dialogue.
contrasting
train_8302
Frameworks like DIT++ have extended the typical coverage of dialogue acts to encompass a broader set of acts, such as social obligations.
when dialogue act schemes incorporate socially motivated acts, they often do not fully take into account the multitude of purposes, social intentions, and ultimately the social implicatures of these acts.
contrasting
train_8303
The unigram and bigram (1+2-grams) based method performed the worst, mostly due to the size of data.
the gappy pattern approach was able to learn a mix of patterns of varying lengths and gaps (up to 2) that were able to separate the social acts.
contrasting
train_8304
Wong and Dras (2009) investigated particular types of syntactic error: subject-verb disagreement, noun-number disagreement, and determiner problems, relating the appearance of these errors to the features of relevant L1s.
they reported that these features do not help with classification, and they also note that character n-grams, though effective on their own, are not particularly useful in combination with other features.
contrasting
train_8305
In pursuing this experiment, we were indeed able to adduce new information about combinatory possibilities in CPs (section 5).
our experiment also provides a cautionary tale with respect to diving into a corpus "blindly", i.e., assuming that mere statistical analysis will provide good enough results and any noise due to language-particular considerations will simply wash out if the corpus is large enough.
contrasting
train_8306
This turns out to be due to data sparsity, even with a 7.9 million token corpus.
we were able to identify previously unreported information about highly productive combinations and gain further structural insights into the language, particularly due to our use of novel methods coming from the field of visual analytics.
contrasting
train_8307
Since Urdu allows quite a bit of scrambling and also allows the nouns to be scrambled away from the light verbs, it was clear from the outset that we would not necessarily net all of the instances of N+V CPs that occur in the corpus.
we were not prepared for the amount of false hits we did get.
contrasting
train_8308
We found that a number of five clusters minimized the average distance between the nouns and the cluster centers.
both the table of numbers, as illustrated in Table 2, and the clusters were difficult to evaluate in this form.
contrasting
train_8309
One preprocessing step that we could have done is to run a normalization module across the corpus.
this also requires specialized knowledge about the language/orthography and this source of errors was not large enough for us to take this step.
contrasting
train_8310
One is a normal space, the other is a zero-width non-joiner (HTML entity: &zwnj;).
authors are not always consistent in their use, thus giving rise to errors in the corpus, which we again cannot deal with without adding time-consuming manual inspection coupled with deep language-particular knowledge to the process.
contrasting
train_8311
On the one hand, they employed lexical features, such as function words, frequently used character-based uni-, bi-, tri-grams as well as rare and most frequently used POS bi- and tri-grams.
they used three syntactic error types as features: misuse of determiners as well as subject-verb and noun-number disagreement.
contrasting
train_8312
Based on these results, Brooke and Hirst (2011) argue that a strong content bias is present in ICLE, allowing an easy classification by topic instead of by native language.
it remains unclear whether the poor Lang-8 results are not due to the properties of this specific corpus, which seems to be highly heterogeneous and incoherent, and whether the poor cross-corpus evaluation results are of general importance or due to the nature of the Lang-8 corpus.
contrasting
train_8313
At the end of Section 6, we hypothesized that the more abstract OCPOS-based n-grams may perform better than the surface-near word-based ones in cross-corpus evaluation.
the accuracies obtained using word-based n-grams are on average as good or better than the ones obtained using OCPOS-based n-grams (see Figure 6 and Table 4).
contrasting
train_8314
For this purpose we conducted a second set of experiments comparing single-corpus and cross-corpus results.
to their cross-corpus findings using the Lang-8 corpus, our results show that the patterns learned on ICLE do generalize well to other learner corpora.
contrasting
train_8315
91.53%, 87.5%, and 92.98% was observed for Person, Geo-political, and Organization type entities in the TAC2010 data (Ji et al., 2010) -in spite of a sense repository which is a priori quite vast.
the task of linking whichever concept mentions appear important in a corpus of very small documents should prove difficult, as it is more demanding in spite of a dearth of contextual evidence.
contrasting
train_8316
one that could, in principle, appear in Wikipedia).
the mention may refer to a valid concept, but there is not yet a corresponding Wikipedia page (see (Lin et al., 2012) for further discussion).
contrasting
train_8317
A number of well-known probabilistic topic modeling approaches such as Probabilistic Latent Semantic Analysis (PLSA) (Hofmann, 1999) and Latent Dirichlet Allocation (LDA) (Blei et al., 2003), have been explored to discover topics from a set of documents.
due to the shortness and lack of context, these topic modeling approaches may not work well with tweets.
contrasting
train_8318
Previous research has shown that sentence compression can be used effectively in automatic summarization systems to produce more informative summaries by reducing the redundancy in the summary sentences (Jing, 2000; Knight and Marcu, 2002; Lin, 2003; Daumé III and Marcu, 2005; Zajic et al., 2007; Madnani et al., 2007; Martins and Smith, 2009; Berg-Kirkpatrick et al., 2011).
most of these studies either focused on the task of single-document summarization and generic summarization or did not consider global properties of the sentence compression problem (Clarke and Lapata, 2008).
contrasting
train_8319
Here, all possible senses of a word are used as the alternatives.
in our ESSK formulation, we consider each word in a sentence as an "alphabet", and the alternative as its disambiguated sense found through a dictionary based disambiguation approach.
contrasting
train_8320
They propose a user model for jointly generating keywords and questions.
their approach is based on generating question templates from existing questions which requires a large set of English questions as training data.
contrasting
train_8321
The generated permutations provide alternatives to the original and thus the cover sentence can play the role of an information carrier.
not all the permutations are grammatical and semantically meaningful.
contrasting
train_8322
One difference between the proposed embedding method and the translation-based embedding methods is that the former restricts the length of a permutation to be the same as the cover sentence so that the receiver is able to recover the list of permutations; while the latter allows a permutation to only include a subset of the input words and therefore provides more choices for a given cover sentence.
dropping words introduces the risk of deleting information in the cover sentence and may lead to incoherence in the resulting text.
contrasting
train_8323
All the features we have described thus far have focused on sentence-level extraction.
document-level information also plays an important role in event extraction task.
contrasting
train_8324
Generally phrase-based SMT models outperform word-based ones (Koehn et al., 2003).
an SMT system fails to capture long-range contextual knowledge due to the limited horizon and the sparseness nature of lexical n-grams.
contrasting
train_8325
In this example, it correctly translates the diagnosis term "crystal induced arthritis".
it mistranslates its nearby phrasal verb "suffered from" by translating these two words separately.
contrasting
train_8326
Building an SMT system from large scale bilingual data for a specific application has become a practical option today.
an SMT model heavily relies on the statistical evidence in the training corpus.
contrasting
train_8327
In our experimental domain (i.e., English medical summaries), bilingual medical dictionaries are available from plenty of resources and thus they are sufficient for collecting bilingual terminological units.
bilingual syntactic units are relatively hard to obtain.
contrasting
train_8328
Microblogging services have brought users to a new era of knowledge dissemination and information seeking.
the large volume and multi-aspect of messages hinder the ability of users to conveniently locate the specific messages that they are interested in.
contrasting
train_8329
Over time, a tremendous number of messages have accumulated in their repositories, which greatly facilitates general users in seeking information by querying topics of interest with the corresponding hashtag.
users often have to browse through a large number of results in order to find the information they are interested in.
contrasting
train_8330
This observation verifies that our method is more stable with less training data.
our method fails for certain categories, such as the Business and Education categories in the Twitter dataset.
contrasting
train_8331
The approach we took for the generalized 2-gram models, and the hitting 2-gram model is the same.
the derivation for the value in our decision rule does not work out exactly, and only gives us a rough approximation of the probabilities.
contrasting
train_8332
This is why the classification accuracy for Dutch did not suffer as much as it did for Portuguese.
while the average testing document length for Chinese and Japanese is very short, we trained the algorithms with far more documents for these languages, and so the classification accuracies did not suffer.
contrasting
train_8333
How we count the number, type, and scope of errors in the learner's sentence depends on the relation between what was written and the annotator's correction.
on the task of error detection, how we score the performance of an NLP system depends on the relation between the system's output and both the learner's sentence and the annotator's correction.
contrasting
train_8334
3 EDMs can only work from the smallest unit they define, and in general the units seem to be variable.
finally, one benefit of EDMs is that they easily fit into calculations of P and R. by defining matches in terms of string distance, there is no way to talk about true negatives.
contrasting
train_8335
Among these, linguistically annotated corpora such as GENIA (Tateisi et al., 2000; Kim et al., 2003) have proven to be central to the NER solution.
due to the size of the vocabularies involved, annotated corpora by themselves do not provide a complete solution.
contrasting
train_8336
In summary it is thought that with partial matching, for the entity types examined so far, the core part of the entity was in most cases correctly found.
strict matching places too much faith in arbitrary choices in annotation guidelines.
contrasting
train_8337
They performed experiments which show that AZ can be used to identify and summarize novel contributions as well as background information in a scientific article.
they did not investigate integrating AZ in an automatic summarization system.
contrasting
train_8338
The summary lengths were controlled by using the distribution of themes in the abstract to select a proportionate number of sentences from each theme.
scientific articles differ from law judgments because the documents are highly structured and contain "sections" which are defined by the authors.
contrasting
train_8339
For example, the contraction I'm was parsed as two lexical items by MICA: I and 'm.
'm was not recognized as a form of the word am-itself a form of the verb to be-but rather was analyzed as a distinct verb.
contrasting
train_8340
In fact, even on moderately complex inputs, the generator quickly runs into a combinatorial explosion, having so far prevented the grammar from being usable for any serious real-time NLG tasks.
upon closer inspection of the source of the inefficiency, it became quickly apparent that the observed performance problems are the result of a conspiracy of several factors, most of which can be subsumed under the notion of non-configurationality: • Relatively free constituent order: in contrast to English, constituent order in German clauses is relatively free, permitting permutation of complements, including the subject, as well as interspersal of modifiers in pretty much any position.
contrasting
train_8341
Rather, all three are general purpose, phenomenon-oriented regression test suites.
there are some differences in the design of the individual test suites that we expect to affect the impact of our performance improvements: while the MRS and TSNLP test suites consist of rather short utterances (MRS: 4.44 words/item, TSNLP: 4.76 words/item), Babel is slightly more complex (6.76 words/item).
contrasting
train_8342
We follow Faigley and Witte (1981) and define the top level layers Surface and Text-Base, which differentiate between meaning-preserving and meaningchanging edits.
contrary to Faigley and Witte (1981), we do consider all deletions and insertions of text as Text-Base changes.
contrasting
train_8343
Although we can assume that the FAs in our corpus have high quality, the NFAs show a broad quality spectrum according to the ratings by the WikiProjects' quality assessment teams, ranging from Start- to Good-class articles.
none of the NFAs have been rated with the highest quality scores, namely featured or A-class.
contrasting
train_8344
12), which does not come close to capturing the diversity in human query constructions or web scale.
we provide empirical results on 5000 queries from three query datasets with different noise levels.
contrasting
train_8345
Thus, W is constrained using the probability distribution R as: The singular variable W takes values from the universe of discourse U, such that values of W are singletons in U.
the semantic form of W is a variable whose values depend on the granular collections in U.
contrasting
train_8346
Similar to the approach presented here, Claveau & Kijak (2011) use translation equivalences between morphemes to generate translations and can handle fertility.
it is not suited for comparable corpora since it requires domain-specific parallel data (in their case, a multilingual terminology) to learn alignment probabilities.
contrasting
train_8347
Here, tweets are to a certain extent analogous to sentences in traditional extractive document summarization, which has been extensively studied in past decades (Ani Nenkova, 2011).
we argue that summarizing tweets is substantially different from summarizing news documents owing to the following reasons.
contrasting
train_8348
LexRank and TextRank make use of pairwise similarity between sentences, hypothesizing that the sentences similar to most of the other sentences in a cluster are more salient.
to the single level PageRank in LexRank and TextRank, MRC considers both internal and external constraints on three different levels, document, sentence, and term and achieves promising improvement.
contrasting
train_8349
A straightforward method would be to select the top-ranked tweets of the sub-topics.
this method would generate a redundant summary both at the tweet and sub-topic levels.
contrasting
train_8350
Low quality tweets account for 87%, which makes it difficult to achieve a very good prediction.
our concern is the performance on tweets of high quality because we want to select good tweets for summary generation.
contrasting
train_8351
Studies regarding vocabulary knowledge of second language learners have been mainly focusing on two major tasks: devising methods for measuring the size of the second language vocabulary of learners for testing purposes (Schmitt et al., 2001; Laufer and Nation, 1999; Nation, 1990) and determining the words that the learners should learn (Nation, 2006).
there have been few studies on what kind of words learners actually know.
contrasting
train_8352
Therefore, a number of machine learning methods, such as a support vector machine (SVM) for the binary classification task, can be used as predictors.
to answer our research question, what kind of words learners actually know, we want predictors to be able to do more than just predict.
contrasting
train_8353
This can be seen as filling in the blanks of a learner-word matrix.
the out-of-sample setting supports new words, i.e., some or all words in the test data are missing in the training data.
contrasting
train_8354
Some interpretable weight vectors can determine word difficulty.
the perceived difficulty of a word differs from learner to learner.
contrasting
train_8355
Joint disambiguation and clustering enables us to exploit such connections: knowledge about which mentions refer to the same concept can support disambiguation decisions.
disambiguation influences clustering decisions.
contrasting
train_8356
On the other hand, disambiguation influences clustering decisions.
local approaches which disambiguate mentions independently of each other (Milne and Witten, 2008; Csomai and Mihalcea, 2008) cannot take advantage of such relations.
contrasting
train_8357
The more distant a preferred name is from a mention, the less likely it is that the mention refers to this concept.
to local features, global features involve more than one mention.
contrasting
train_8358
Given our approach we disambiguate the whole document and not just the query terms.
to ACE 2005 the NILs are not just annotated as NILs but also clustered, which allows us to evaluate the entity clustering performance in a direct way and not just its influence on the disambiguation performance as on the ACE data.
contrasting
train_8359
This further supports our earlier discussion that our features as a representation type of coordination is more useful than the "conjunct is the head" representation used in the Penn2Malt conversion.
in the case of compound sentences (i.e.
contrasting
train_8360
The results show that the manual taxonomies have high quality well defined relations.
the novel automatic method is found to generate very high cohesion.
contrasting
train_8361
User studies are certainly important.
these studies often don't answer certain questions about the taxonomy.
contrasting
train_8362
To turn the flat LDA topic model into a navigable hierarchy, Griffiths and Tenenbaum (2004) describe a hierarchical LDA approach.
this was found to be prohibitively time consuming given our large data-set.
contrasting
train_8363
A problem with some of the manual taxonomies is the very high number of top level nodes, which makes it difficult for users to browse.
there is no obvious way to select suitable top level nodes in these taxonomies.
contrasting
train_8364
The results (Table 2) show that most of the taxonomies achieved roughly the same level of cohesion for the clusters, roughly between 50 and 63%.
the WikiFreq taxonomy performed far better, with only one unit of the 30 judged as not coherent.
contrasting
train_8365
This might be explained by considering that items grouped together under the same node will share a number of keywords which link to the same Wikipedia articles which would ensure that the items are very similar.
the Wikipedia taxonomy and DBpedia ontology use categories rather than articles in Wikipedia as the concept nodes.
contrasting
train_8366
The LCSH taxonomy has been manually created for the purpose of organising library collections and so might be the obvious choice to organise CH data online.
the results show that the relations within LCSH are defined less clearly than that of the Wikipedia derived taxonomies.
contrasting
train_8367
The tagset being composed of two tags, this dimension is also at 0 and the expressiveness of the annotation language is 0.25 (type language).
the ambiguity degree is high (1) as all occurrences are ambiguous.
contrasting
train_8368
Delimitation is 0, as gene names, in our example, are simple tokens.
the characterizing factors are low: the tagset is boolean (Dimension=0), a type language is used (Expressiveness=0.25) and ambiguity is very low, as only a few gene names are also common names (theoretical ambiguity can be approximated at 0.01 and residual ambiguity is on average 0.04 for two annotators).
contrasting
train_8369
A limitation of CLRLM is that it depends either on a parallel corpus or on a bilingual dictionary.
for low-resourced languages, parallel corpora seldom exist and dictionaries have poor vocabulary coverage.
contrasting
train_8370
we refer to as pseudo-relevant document sets from now on, can thus potentially replace the parallel corpus requirement of the CLRLM.
it is impractical to assume a one to one document level alignment between the pseudo-relevant documents in the two languages.
contrasting
train_8371
Thus, a natural question which arises is whether we need to introduce a new linear combination parameter to choose the two event paths with relative weights similar to (Chinnakotla et al., 2010).
a closer look at Equation 3 reveals that the contribution from each path is inherently controlled by the two coefficients P(w_T | z_T) and P(w_T | w_S), thus eliminating the need for an extra parameter.
contrasting
train_8372
In this work we present a novel approach to bootstrap domain specific terminology, namely Structured Term Recognition, and we apply it to the medical domain.
to previous approaches, based on observing distributional properties of terminology with respect to their contexts, our method analyzes the "internal structure" of multi-word terms by learning patterns of word clusters.
contrasting
train_8373
Combinations of STR and techniques based on contextual information are obviously possible.
in this paper we decided to focus on the contribution of the STR technique in isolation because we want to show that in the medical domain this information is a strong indicator of the entity type if well represented and handled.
contrasting
train_8374
boom /bu:m/, boomed /bu:md/ /ɪd/ e.g.
loot /lu:t/, looted /lu:tɪd/ in Sanskrit, where oral tradition dominated the sphere of learning and an advanced discipline of phonetics explicitly described prosodic changes, these prosodic changes, well known by the term sandhi, are represented in writing.
contrasting
train_8375
Once an SH word was matched to all the possible MW headwords, the matching values were sorted and the headword with the highest match was marked as the suitable match.
if the top two headwords have the same matching score (this also includes the case, where all the headwords obtained a score of 0.0), the SH headword along with all the MW headwords and their meanings were dumped in a text file interface, where the exact match was decided manually.
contrasting
train_8376
Our analysis shows that the performance for in-domain data is largely dependent on the characteristics of the translation model.
performance in out-of-domain tasks relies on characteristics such as reordering and alignment distortion.
contrasting
train_8377
Regularization helps in part to alleviate over-fitting.
we performed several tests to ensure that over-fitting was not a problem.
contrasting
train_8378
More data needs to be analyzed in order to make more reliable estimations.
we should note that there is a consistent agreement between features that are important for BLEU, Meteor and TER.
contrasting
train_8379
Some of the conclusions from these models are straightforward and match the empirically developed intuition.
the insight gained from this type of analysis can be valuable for designing new systems for new translation tasks.
contrasting
train_8380
The classifiers trained on language model features (LM) and syntactic features (SYN) proved to be less effective predictors when taken on their own.
they proved to be valuable when combined with other feature groups.
contrasting
train_8381
Recent methods (Mitchell and Lapata, 2008; Erk and Padó, 2008; Thater et al., 2010) exploit direct syntactic/semantic relations with target words, but these methods would still fail in the above examples, because the direct structural neighbors of banks are the same in the two contexts.
syntactic/semantic structural co-occurrences of multiple words, obtained from the dependency relations, seem effective for this example.
contrasting
train_8382
We can regard a local context vector as representing a particular meaning the target word conveys in the given context.
these vectors may be too sparse to provide enough information, especially when the available context is short.
contrasting
train_8383
Naively enumerating all walk pairs (z, z′) in Eq.
3 is intractable when L is large, because the number of possible walks grows exponentially with length bound L. for the specific weight function p(z | w) given by Eq.
contrasting
train_8384
Desrosiers and Karypis use a random walk model that does not cast a bound on the length of walks.
our model poses a strict upper bound L on the walk length; this model was chosen because co-occurrences comprising hundreds of words are unlikely to be effective for contextual word discrimination.
contrasting
train_8385
Tree kernels are hence not suitable for measuring word similarity.
the bag-of-walks kernels distinguish target words from the other words in the sentence, as they only count walks starting from the target words.
contrasting
train_8386
Like bag-of-walks kernels, their kernel takes multi-word co-occurrences with a target word into account, but these co-occurrences are taken in terms of (gap weighted) n-gram collocations; i.e., their kernel counts the overlap of n-gram sequences surrounding the two target words.
bag-of-walks kernels compute multi-word co-occurrence in the syntactic/semantic structure present in parse graphs.
contrasting
train_8387
Like other languages, Japanese uses sentences composed of words.
we can also say that a Japanese sentence is composed of bunsetsu.
contrasting
train_8388
Suppose that we want to drop the word "kokusai" (international) from the noun and to shorten the bunsetsu to "kyoudou kenkyuu guru-pu ga".
bunsetsu-based methods cannot perform such flexible word selection because they are limited to unit constraints.
contrasting
train_8389
When dependency constraints were not considered, PROP performed better than BNST.
the performance differences between them were small and only the difference in ROUGE 1 was statistically significant (Wilcoxon signed-rank test, p < 0.05).
contrasting
train_8390
However, the performance differences between them were small and only the difference in ROUGE 1 was statistically significant (Wilcoxon signed-rank test, p < 0.05).
when dependency constraints were considered, PROP significantly outperformed BNST.
contrasting
train_8391
In this way, PROP w/ DPND selected more bunsetsu that contained important words and achieved higher performance than BNST w/ DPND.
when dependency constraints were not considered, the number of (shortened) bunsetsu that PROP selected was not so different from the number that BNST selected (the difference was 2.4%).
contrasting
train_8392
BNST w/ DPND could not perform such an operation because the method has to treat each bunsetsu as it is.
PROP w/ DPND could shorten the bunsetsu by dropping the word " ", which had less importance, from the bunsetsu (Score(" ") was 0.119).
contrasting
train_8393
With a best value of 38.02 % v-measure it seems that we simply cannot recreate the classes that our theory predicts.
it is worth taking a closer look at the results for the individual classes, since it turns out that the clustering quality varies greatly with the class that we are trying to produce.
contrasting
train_8394
We can conclude that argument structure, among others, is a fairly good indicator for automatically identifying the combined class {MAN,CRE}.
for the individual classes MAN and CRE, as well as for the directional class DIR, this is not the case.
contrasting
train_8395
The methods in this category consider both the query and the document.
an implicit assumption in this method is that the document to be summarized is relevant to the given query.
contrasting
train_8396
The ultimate measure is to put snippets directly in a search task and evaluate how well they can help accomplish the task (Tombros and Sanderson, 1998; White et al., 2003).
this method is very expensive and not reusable, and the utility measure is influenced by both the retrieval performance and the quality of snippets.
contrasting
train_8397
This means that these methods have a higher impact on top retrieval results.
we did not observe a significant difference when the snippets of the search engine are used.
contrasting
train_8398
This categorized lexicon is quite similar to word cluster features in (Ratinov and Roth, 2009), but is more precise.
to the best of our knowledge, this resource remains unexplored in previous Chinese NER tasks.
contrasting
train_8399
After adding this, the F-score on the test set goes up from 94.1% to 94.4% for SGD and MADF, and to 95.5% for ADF, comparable to CRF++.
on a different dataset, CWS PKU, adding this "F" feature decreases the F-score by 0.1%, which is still higher than CRF++ by 0.3%.
contrasting