train_1800
After filtering for duplicates and removing empty or otherwise unusable emails, the total number of emails is 245K, containing roughly 90 million words.
this total includes emails to non-Enron employees, such as family members and employees of other corporations, emails to multiple people, and emails received from Enron employees without a known corporate role.
contrasting
train_1801
This annotation method does not take into account promotions over time, secretaries speaking on behalf of their supervisors, or other causes of relationship irregularities.
this misinformation would, if anything, generally hurt our classifiers.
contrasting
train_1802
Our feature selection method picks up on indicators suggested by sociolinguistics, and it also allows for the identification of features that are not obviously characteristic of UpSpeak or Down-Speak.
some easily recognizable features are intuitive, while others are less so. UpSpeak: I'll, we'll ("I'll let you know the final results soon"; "Everyone is very excited […] and we're confident we'll be successful"). Downspeak: that is, this is ("Neither does any other group but that is not my problem"; "I think this is an excellent letter"). We hope to improve our methods for selecting and binning features with information-theoretic selection metrics and clustering algorithms.
contrasting
train_1803
Preliminary experiments incorporating PeerSpeak n-grams yield slightly better numbers.
early results also suggest that the three-way classification problem is made more tractable with cascaded two-way classifiers; feature selection was more manageable with binary problems.
contrasting
train_1804
As we will show below, even the simple filters (i)-(v) are sufficient to learn high-quality association scores; this means that we do not need the complex features of "deterministic" systems.
if such complex features are available, then we can use them to improve performance in our self-trained setting.
contrasting
train_1805
As we will see below, using N scores acquired from an unlabeled corpus as the only source of information for CoRe performs surprisingly well.
the weaknesses of this approach are (i) the failure to cover pairs that do not occur in the unlabeled corpus (negatively affecting recall) and (ii) the generation of pairs that are not plausible candidates for coreference (negatively affecting precision).
contrasting
train_1806
While one can certainly make use of a more sophisticated feature set, we leave this for future work as our focus is to scale up inference.
it should be noted that this approach is agnostic to the particular set of features used.
contrasting
train_1807
(2003) showed that the relative comparison of two candidate antecedents leads to better accuracy than the pairwise model.
these approaches do not output absolute probabilities, but relative significance between two candidates, and therefore cannot be directly integrated with the ILP-framework.
contrasting
train_1808
To our knowledge, FrameNet has not been exploited for coreference resolution.
the use of related verbs is similar in spirit to Bean and Riloff's (2004) use of patterns for inducing contextual role knowledge, and the use of semantic roles is also discussed in Ponzetto and Strube (2006).
contrasting
train_1809
If exactly one of NP j and NP k is tagged as a NE by the Stanford NE recognizer, we create a semi-lexical feature that is identical to the lexical feature described above, except that the NE is replaced with its NE label.
if both NPs are NEs, we check whether they are the same string.
contrasting
train_1810
To this end, we introduce a rule extraction and weight training method for LMBOT that is based on the corresponding procedures for STSG and STSSG.
general LMBOT can be too expressive in the sense that they allow translations that do not preserve regularity.
contrasting
train_1811
STSG are always contiguous in both the left-and right-hand side, which means that they (completely) cover a single span of input or output words.
sTSSG rules can be noncontiguous on both sides, but the extraction procedure of Sun et al.
contrasting
train_1812
Verbs are key for reordering, especially for Arabic-English, where VSO is translated into SVO.
if the verb and its relevant arguments for reordering are at different levels in the tree, the reordering is difficult to model as more interior nodes combinations will distract the distributions and make the model less focused.
contrasting
train_1813
(2010) combined the one class per word type constraint (Brown et al., 1992) in an HMM with a Dirichlet prior to achieve both forms of sparsity.
this work approximated the derivation of the Gibbs sampler (omitting the interdependence between events when sampling from a collapsed model), resulting in a model which underperformed Brown et al.
contrasting
train_1814
Sets of dots and optional diacritic markers are used to create character distinctions in Arabic.
trace amounts of dust or dirt on the original document scan can be easily mistaken for these markers (Darwish and Oard, 2002).
contrasting
train_1815
As mentioned in Section 4.1, the trivial baseline of the test set is comparable to the dev set.
the test set is harder to tag than the dev set; this can be seen in the overall lower F-scores.
contrasting
train_1816
Unlike Modern Hebrew, Latin does not require extensive morpheme segmentation.
it does have a relatively free word order, and is also highly inflected, with each word having up to nine morphological attributes, listed in Table 2.
contrasting
train_1817
The parser has no local factors, but has the same variables as the joint model and the same features from all three families of link factors ( §3).
since it takes as input the morphological attributes predicted by the tagger, the TAG variables are now observed.
contrasting
train_1818
We find the distributions over the frequencies of particular errors follow a Zipfian skew across both S&B datasets, with the Arabic being more pronounced (the most frequent error being made 27 times, with 627 errors being made just once) in comparison with the Hebrew (with the most frequent error being made 19 times, and with 856 isolated errors).
in both the Arabic and Hebrew S&B tasks we find that a tendency to over-segment certain characters off of their correct morphemes and on to other frequently occurring, yet incorrect, particles is actually the cause of many of these isolated errors.
contrasting
train_1819
A source of statistics widely used in prior work is the query log (Cucerzan and Brill, 2004;Ahmad and Kondrak, 2005;Li et al., 2006a;Chen et al., 2007;Sun et al., 2010).
while query logs are abundant in the context of Web search, in many other search applications (e.g.
contrasting
train_1820
A language model was found to outperform a maximum entropy classifier (Gamon, 2010).
the language model was trained on the Gigaword corpus of 17 × 10⁹ words (Linguistic Data Consortium, 2003), a corpus several orders of magnitude larger than the corpus used to train the classifier.
contrasting
train_1821
Adapting a model so that it takes into consideration the specific error patterns of the non-native writers was shown to be extremely helpful in the context of discriminative classifiers (Rozovskaya and Roth, 2010c;Rozovskaya and Roth, 2010b).
this method requires generating new training data and training a separate classifier for each source language.
contrasting
train_1822
To determine typical mistakes, error statistics are collected on a small set of annotated ESL sentences.
for the model to use these language-specific error statistics, a separate classifier for each source language needs to be trained.
contrasting
train_1823
Finally, an important performance distinction between the two adapted models is the loss in recall exhibited by AP-adapted: its curve is shorter because AP-adapted is very conservative and does not propose many corrections.
NB-adapted succeeds in improving its precision over NB with almost no recall loss.
contrasting
train_1824
Second, many of the systems start with the assumption that there is only one type of error.
ESL students often make several combined mistakes in one sentence.
contrasting
train_1825
The latter type of article is actually quite challenging to geolocate based on the text content: though the ship is moored in Boston, most of the page discusses its role in various battles along the eastern seaboard of the USA.
such articles make up only a small fraction of the geotagged articles.
contrasting
train_1826
When URL, LEX and BOW are removed from the set, performance does not decrease, or only slightly (lines i4, i5, i6), indicating that these three feature groups are least important.
there is significant evidence for the importance of BASE, GAZ, and MISC: removing them decreases performance by at least 1% (lines i2, i3, i7).
contrasting
train_1827
This paper is about cross-domain generalization.
the general idea of using search to provide rich context information to NLP systems is applicable to a broad array of tasks.
contrasting
train_1828
The human target, owners, is missed because intimidate was not learned.
if owner is in the selectional preferences of the learned 'human target' role, step (2) correctly extracts it into that role.
contrasting
train_1829
Our analysis revealed a serious shortcoming: as the discourse relation transitions in short texts are few in number, we have very little data to base the coherence judgment on.
when faced with even short text excerpts, humans can distinguish coherent texts from incoherent ones, as exemplified in our example texts.
contrasting
train_1830
In Text (1), a Comparison (Comp) relation would be recorded between the two sentences, regardless of whether S1 or S2 comes first.
it is clear that the ordering (S1 ≺ S2) is more coherent.
contrasting
train_1831
For instance, the implicit agent in a passive need not be "trivial" but can correspond to an actual discourse referent.
we consider these heuristics as a first step towards capturing an important discourse function of the passive alternation, namely the deletion of the agent role.
contrasting
train_1832
Table 3 shows that the proportion of active realisations for the SEM n * input is very high, and the model does not outperform the majority baseline (which always selects active).
the SEM h model clearly outperforms the majority baseline.
contrasting
train_1833
According to the global sentence overlap measures, their quality is not seriously impaired.
the design of the representations has a substantial effect on the prediction of the alternations.
contrasting
train_1834
word order and voice, by looking at different types of linguistic features and exploring different ways of labelling the training data.
our SVM-based learning framework is not well-suited to directly assess the correlation between a certain feature (or feature combination) and the occurrence of an alternation.
contrasting
train_1835
A speaker might already know the answer to a question they asked -for instance, when a teacher is verifying a student's knowledge.
in most cases asking a question represents a lack of authority, treating the other speakers as a source for that knowledge.
contrasting
train_1836
We know that there are interesting correlations between these acts and other factors, such as learning gains (Litman and Forbes-Riley, 2006) and the relevance of a contribution for summarization (Wrede and Shriberg, 2003).
adapting dialogue act tags to the question of how speakers position themselves is not straightforward.
contrasting
train_1837
Each of these fields of prior work is highly valuable.
none were designed to specifically describe how people present themselves as a source or recipient of knowledge in a discourse.
contrasting
train_1838
The contextual model described in section 4.2 performs better than our baseline constrained model.
the gains found in the contextual model are somewhat orthogonal to the gains from using ILP constraints, as applying those constraints to the contextual model results in further performance gains (and a high r² coefficient of 0.947).
contrasting
train_1839
A translation can potentially have many valid word orderings.
we can be reasonably certain that the ordering of the reference sentence must be acceptable.
contrasting
train_1840
In general, the collocations can be automatically identified based on syntactic information such as dependency trees (Lin, 1998).
these methods may suffer from parsing errors.
contrasting
train_1841
Phrase-based SMT provides a powerful translation mechanism which learns local reorderings, translation of short idioms, and the insertion and deletion of words sensitive to local context.
PBSMT also has some drawbacks.
contrasting
train_1842
Phrase-based SMT models dependencies between words and their translations inside of a phrase well.
dependencies across phrase boundaries are largely ignored due to the strong phrasal independence assumption. [Table 1: Sample Phrase Table]
contrasting
train_1843
We observe a large amount of reordering in the automatically word aligned training text.
given only the source sentence (and little world knowledge), it is not realistic to try to model the reasons for all of this reordering.
contrasting
train_1844
Similar to N-gram based MT, it addresses three drawbacks of traditional phrasal MT by better handling dependencies across phrase boundaries, using source-side gaps, and solving the phrasal segmentation problem.
to N-gram based MT, our model has a generative story which tightly couples translation and reordering.
contrasting
train_1845
These grammars may use approaches that somewhat reduce the problem of argument-composition, leading to less significant differences between the auxiliary+verb and argument-composition analyses.
planned extensions that cover modification and subordinate clauses will increase local ambiguities.
contrasting
train_1846
So, the emission and transition emanating from y n would be characterized as a PCFG rule y n → x n y n+1 .
HMMs factor rule probabilities into emission and transition probabilities. Without making this independence assumption, we can model right-linear rules directly: when we condition emission probabilities on both the current state y n and the next state y n+1 , we have an exact model.
contrasting
train_1847
On the one hand, CCM is evaluated using gold standard POS sequences as input, so it receives a major source of supervision not available to the other models.
the other models use punctuation as an indicator of constituent boundaries, but all punctuation is dropped from the input to CCM.
contrasting
train_1848
4), the models have fewer natural cues to identify constituents.
within the degrees of freedom allowed by punctuation constraints as described, the chunking models continue to find relatively good constituents.
contrasting
train_1849
Fujii and Ishikawa (2002) developed an unsupervised method to find definition sentences from the Web using 18 sentential templates and a language model constructed from an encyclopedia.
we developed a supervised method to achieve a higher precision.
contrasting
train_1850
As the examples indicate, many of the extracted paraphrases are not specific to definition sentences and seem very reusable.
there are few paraphrases involving metaphors or idioms in the outputs due to the nature of definition sentences.
contrasting
train_1851
In prior work on evaluating independent contributions in content generation, Voorhees (Voorhees, 1998) studied IR systems and showed that relevance judgments differ significantly between humans but relative rankings show high degrees of stability across annotators.
perhaps the closest work to this paper is (van Halteren and Teufel, 2004) in which 40 Dutch students and 10 NLP researchers were asked to summarize a BBC news report, resulting in 50 different summaries.
contrasting
train_1852
Finding agreement between annotated welldefined nuggets is straightforward and can be calculated in terms of Kappa.
when nuggets themselves are to be extracted by annotators, the task becomes less obvious.
contrasting
train_1853
In previous sections we gave evidence for the diversity seen in human summaries.
a more important question to answer is whether these summaries all cover important aspects of the story.
contrasting
train_1854
Using > to denote precedence of semantic groups, some commonly proposed orderings are: quality > size > shape > color > provenance (Sproat and Shih, 1991), age > color > participle > provenance > noun > denominal (Quirk et al., 1974), and value > dimension > physical property > speed > human propensity > age > color (Dixon, 1977).
correctly classifying modifiers into these groups can be difficult and may be domain dependent or constrained by the context in which the modifier is being used.
contrasting
train_1855
For example, two clusters with drastically different parts of speech are unlikely to represent the same role.
the converse is not necessarily true as part of speech similarity does not imply role-semantic similarity.
contrasting
train_1856
This means we would assign labels to 74% of instances in the dataset (excluding those discarded during argument identification) and attain a role classification with 79.4% precision (purity).
instead of labeling all 165,662 instances contained in these clusters individually, we would only have to assign labels to 2,869 clusters.
contrasting
train_1857
As the above example might suggest, the availability of transductive inference for event extraction relies heavily on known evidence of an event occurrence under specific conditions.
the evidence supporting the inference is normally unclear or absent.
contrasting
train_1858
The solution is to mine credible evidence of event occurrences from global information and treat it as prior knowledge to predict unknown event attributes, as in cross-document and cross-event inference methods.
by analyzing the sentence-level baseline event extraction, we found that the entities within a sentence, as the most important local information, actually contain sufficient clues for event detection.
contrasting
train_1859
Most event extraction systems scan a text and search small context windows using patterns or a classifier.
recent work has begun to ex- plore more global approaches.
contrasting
train_1860
The fourth sentence describes incidental damage to civilian homes following clashes between government forces and guerrillas.
there is substantial room for improvement in each of TIER's subcomponents, and many role fillers are still overlooked.
contrasting
train_1861
Since the web information is used as a black box (including query expansion and query log analysis) which changes over time, it's more difficult to duplicate research results.
gazetteers with entities ranked by salience or major entities marked are worth encoding as additional features.
contrasting
train_1862
These could be used for various forms of indirect or distant learning, where instances in a large corpus of such pairs are taken as (positive) training instances.
such instances are noisy -if a pair of entities participates in more than one relation, the found instance may not be an example of the intended relation -and so some filtering of the instances or resulting patterns may be needed.
contrasting
train_1863
The core idea is to keep a term-label pair (T, L) only if the number of terms having the label L in the term T's cluster is above a threshold and if L is not the label of too many clusters (otherwise the pair will be discarded).
we are able to add new (high-quality) labels for a term with our evidence propagation method.
contrasting
train_1864
Early work on automatic dialogue act classification modeled discourse structure with hidden Markov models, experimenting with lexical and prosodic features, and applying the dialogue act model as a constraint to aid in automatic speech recognition (Stolcke et al., 2000).
to this sequential modeling approach, which is best suited to offline processing, recent work has explored how lexical, syntactic, and prosodic features perform for online dialogue act tagging (when only partial dialogue sequences are available) within a maximum entropy framework (Sridhar, Bangalore, & Narayanan, 2009).
contrasting
train_1865
(2010) describe a similar classlabel-based approach for query interpretation, explicitly modeling the importance of each label for a given entity.
details of their implementation were not publicly available, as of publication of this paper.
contrasting
train_1866
Increasing the number of label clusters too high, however, significantly reduces precision: CLC-HDP-LG 1000C-200L obtains only ∼51% accuracy.
comparing to CLC-DPMM 1C-40L and CLC-BASE demonstrates that the addition of label clusters and query clusters both lead to gains in label precision.
contrasting
train_1867
This result is interesting as the relative prevalence of natural language queries increases with query length, potentially degrading performance.
we did find a strong positive correlation between precision and the number of labels productions applicable to a query, i.e., production rule fertility is a potential indicator of semantic quality.
contrasting
train_1868
The label clusters are important because they capture intra-group correlations between class labels, while the query clusters are important for capturing inter-group correlations.
the algorithm is sensitive to the relative number of clusters in each case: too many labels/label clusters relative to the number of query clusters make it difficult to learn correlations (O(n²) query clusters are required to capture pairwise interactions).
contrasting
train_1869
As in the case of other corpora, it is preferable that the size of a learner corpus be as large as possible where the size can be measured in several ways including the total number of texts, words, sentences, writers, topics, and texts per writer.
it is much more difficult to create a large learner corpus than to create a large native-speaker corpus.
contrasting
train_1870
Our research follows the direction of the second strand given that consistency can no longer be guaranteed by constructing another phrase table.
to categorically reuse the translations of matched chunks without any differentiation could generate inferior translations given the fact that the context of these matched chunks in the input sentence could be completely different from the source side of the fuzzy match.
contrasting
train_1871
This is reflected by an absolute 1.4 point drop in BLEU score and a 1.8 point increase in TER.
both the oracle BLEU and TER scores represent as much as a 2.5 point improvement over the baseline.
contrasting
train_1872
We also conjecture that syntactic variations were not captured by the n-gram like string-based features in Section 5.2, therefore resulting in BLEU loss, which will be investigated in future work.
CN has more potential for generating better translations, with the exception of the German-to-English direction, with scores that are usually 10 points better than simple sentence-wise reranking.
contrasting
train_1873
(2009a) is different in that both partial and full hypotheses are re-ranked during the decoding phase directly using consensus between translations from different SMT systems.
their method does not change component systems' search spaces.
contrasting
train_1874
Both of these two methods share a common limitation: they only re-rank the combined search space, without the capability to generate new translations.
by reusing hypotheses generated by all component systems in HM decoding, translations beyond any existing search space can be generated.
contrasting
train_1875
(2008) propose linear BLEU, an approximation to the BLEU score to efficiently perform MBR decoding when the search space is represented with lattices.
our hypothesis space is the full set of finite-length strings in the target vocabulary and cannot be represented in a lattice.
contrasting
train_1876
The difference in the forward language model score is only 1.58, which may be offset by differences in other features in the log-linear translation model.
the difference in the backward language model score is 3.52.
contrasting
train_1877
Note that these tools' objective is to return a single lemma/stem, e.g., they would return adik for adik-beradiknya, and ajar for berpelajaran.
it was straightforward to modify them to also output the above intermediary wordforms, which the tools were generating internally anyway when looking for the final lemma/stem.
contrasting
train_1878
(2006), who use cross-lingual pivoting to generate phrase-level paraphrases with corresponding probabilities.
our paraphrases are derived through morphological analysis; thus, we do not need corpora in additional languages.
contrasting
train_1879
An early approach by Deng and Byrne (2005) changed the parameterization of the traditional word-based HMM model, modeling subsequent words from the same state using a bigram model.
this model changes only the parameterization and not the set of possible alignments.
contrasting
train_1880
As shown in Figure 2, in order to retain the phrasal link f 1 ∼ e 1 , e 2 after agreement, we need the reverse phrasal link e 1 , e 2 ∼ f 1 in the backward direction.
this is not possible in a word-based HMM where each observation must be generated by a single state.
contrasting
train_1881
The second conclusion is that the co-occurrence model outperforms the context-vector similarity.
both these approaches still perform poorly.
contrasting
train_1882
In some other cases, we obtained high precision but poor recall with one feature only, which is not a useful result either, since most of the correct translations are still labeled as "Non-Translation".
when using both features, the precision is strongly improved, up to 98% (English-Spanish or French-Spanish), with a high recall of about 90% for class T. We also achieved about 86%/75% precision/recall for Chinese-English, even though they are very distant languages.
contrasting
train_1883
This situation makes the availability of multilingual lexical knowledge a necessary condition to bridge the language gap.
with the only exceptions represented by WordNet and Wikipedia, most of the aforementioned resources are available only for English.
contrasting
train_1884
As regards Wikipedia, the crosslingual links between pages in different languages offer a possibility to extract lexical knowledge useful for CLTE.
due to their relatively small number (especially for some languages), bilingual lexicons extracted from Wikipedia are still inadequate to provide acceptable coverage.
contrasting
train_1885
(2009) for a complete description of the algorithm.
we note two aspects of our implementation which are important for natural language processing applications.
contrasting
train_1886
There, disambiguation is performed using an SVM kernel that compares the lexical context around the ambiguous named entity to the content of the candidate disambiguation's Wikipedia page.
since each ambiguous mention required a separate SVM model, the experiment was on a very limited scale.
contrasting
train_1887
Wikipedia's hyperlinks offer a wealth of disambiguated mentions that can be leveraged to train a D2W system.
when compared with mentions from general text, Wikipedia mentions are disproportionately likely to have corresponding Wikipedia pages.
contrasting
train_1888
For most Germanic languages like Danish, German, or Swedish, the list of possible linking morphemes is rather small and can be provided manually.
in general, these lists can become very large, and language experts who could provide such lists might not be at our disposal.
contrasting
train_1889
While the compounding process for Germanic languages is rather simple and requires only a few linking morphemes, compounds used in Uralic languages have a richer morphology.
to the Germanic and Uralic languages, we did not observe improvements for Greek.
contrasting
train_1890
Research in Chinese word segmentation has progressed tremendously in recent years, with state of the art performing at around 97% in precision and recall (Xue, 2003;Gao et al., 2005;Zhang and Clark, 2007;Li and Sun, 2009).
virtually all these systems focus exclusively on recognizing the word boundaries, giving no consideration to the internal structures of many words.
contrasting
train_1891
Generation of flat words seen in training is trivial and deterministic since every phrase and word structure rules are lexicalized.
the generation of unknown flat words is a different story.
contrasting
train_1892
This is particularly true in the evaluation of Information Retrieval systems where, in fact, the absence of results is sometimes the worst output.
there are scenarios where we should consider the possibility of not responding, because this behavior has more value than responding incorrectly.
contrasting
train_1893
According to Column (i), a higher absolute difference is required for concluding that a system is better than another using UF.
the relative difference is similar to the one required by c@1.
contrasting
train_1894
Note that the model identifies an incorrect frame REASON for the target discrepancy.N, in turn identifying the wrong semantic role Action for the underlined argument.
the FullGraph model exactly identifies the right semantic frame, SIMILARITY, as well as the correct role, Entities.
contrasting
train_1895
Hierarchical Pitman-Yor processes (or their special case, hierarchical Dirichlet processes) have previously been used in NLP, for example, in the context of syntactic parsing (Liang et al., 2007;Johnson et al., 2007).
in all these cases the effective size of the state space (i.e., the number of sub-symbols in the infinite PCFG (Liang et al., 2007), or the number of adapted productions in the adaptor grammar (Johnson et al., 2007)) was not very large.
contrasting
train_1896
This requires a measure of generality.
while a proposition such as "PERSON does THING", has excellent generality, it possesses no discriminating power.
contrasting
train_1897
A straightforward solution for important aspect identification is to select the aspects that are frequently commented in consumer reviews as the important ones.
consumers' opinions on the frequent aspects may not influence their overall opinions on the product, and thus not influence consumers' purchase decisions.
contrasting
train_1898
Inspection of the corpus shows that approximately 80% of citations indicate agreement, meaning that for the present task the impact of discarding this information may not be large.
the primary utility in collective approaches lies in their ability to fill in gaps in information not picked up by content-only classification.
contrasting
train_1899
What we have just described is already partially addressed by the KN model -γ(v) will be relatively large for a productive history like v = in the.
it looks like the KN discounts are not large enough for productive histories, at least not in a combined history-length/class model.
contrasting