Dataset columns:
  id          string, length 7 to 12 characters
  sentence1   string, length 6 to 1.27k characters
  sentence2   string, length 6 to 926 characters
  label       string, 4 classes
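As a minimal sketch of how records with this schema could be loaded and inspected, the following Python snippet assumes the data is published as a Hugging Face dataset with a "train" split; the repository path "example/contrasting-pairs" is a placeholder, not a confirmed dataset name.

from datasets import load_dataset

# Hypothetical repository path; replace with the actual dataset identifier.
pairs = load_dataset("example/contrasting-pairs", split="train")

# Each record carries an id, two sentence excerpts, and one of four labels.
for record in pairs.select(range(3)):
    print(record["id"], record["label"])
    print("  sentence1:", record["sentence1"])
    print("  sentence2:", record["sentence2"])

Each row listed below follows the same pattern: an id such as train_8700, the two sentence fields, and the label (here, "contrasting").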
train_8700
The most widely used similarity measure in the field of natural language processing may be cosine similarity.
in the context of Twitter, the large scale of massive tweet data inevitably makes it expensive to perform cosine similarity computations among tremendous data samples.
contrasting
train_8701
For instance, 美 is likely to be an NE of Location when it refers to America.
when it expresses the sense of beautiful, it should not be an NE.
contrasting
train_8702
The basic claim leading most authors to neglect this kind of context is that, due to its assumed undifferentiated distribution, this information presents a challenge for classifiers to accurately use it in class membership decisions, which is bound to negatively affect results (see Cooke and Gillam (2008), Turney and Pantel (2010), Bullinaria and Levy (2012), among many others).
with this mainstream position, Rumshisky et al.
contrasting
train_8703
Our main claim is that devising a strategy to informatively include this type of distributional information in classification tasks can allow us to take advantage of a bigger portion of the data available in corpora and improve the accuracy of classifiers in this way.
with mainstream approaches to cue-based lexical-semantic classification, we argue for the inclusion of a type of distributional information typically not considered to be indicatory of class membership, and thus not informative to automatic classification systems.
contrasting
train_8704
According to the results, unmarked contexts allow us to gain an average of 10.2 points in recall for class members, demonstrating that they provide useful information to classifiers, which allows them to cover cases which they were not able to before, most likely due to phenomena such as data sparsity.
the impact on precision varies between classes, as the inclusion of very frequent information in the vectors representing target nouns may provide additional noise to the classifier (see Section 6.1).
contrasting
train_8705
The impact of this misleading information is made apparent by the amount of FP observed in classification results.
silence has to do with the well known problem of data sparsity, which can be caused by the particular distribution of lexical, and thus strict, though informative, contexts used in cue-based classification tasks, which are often rare in any corpus of any size due to their specificity.
contrasting
train_8706
An effect of a class collectively having a more heterogeneous linguistic behavior is that the evidence regarding each of its marks will typically be more disperse and, as a result, often not strong enough to be considered by classifiers, which explains the improvement introduced by unmarked contexts.
classes like HUM are composed of nouns that generally occur in a common set of prototypical contexts of that class.
contrasting
train_8707
Texts of Summary and Experience are unstructured information, while texts of Skills & Expertise are structured information.
some skills in the Skill & Expertise fields may not be mentioned in the Summary and Experience fields.
contrasting
train_8708
We show a concrete example for the two heterogeneous dependency trees in Figure 1, where six of the twelve dependencies are consistent in the two dependency trees (shown by the solid arcs).
differences between heterogeneous dependencies can possibly boost the evidences of the consistent dependencies.
contrasting
train_8709
Using a beam aims to reduce complexity to O(K|A|(3n − 1)) ≈ O(n) and makes inference computationally tractable in practice.
it makes inference incomplete (the parser may fail to find a solution even if it exists) and does not guarantee the solution to be optimal.
contrasting
train_8710
We believe the LR framework can also shed light on theoretical, practical and experimental issues related to phrase structure parsing by comparison with dependency parsing.
using the LR automaton to constrain the parsing model for multi-word expressions turns out to be disappointing since it forces the parser to take less local decisions for which the beam approximation is not well suited.
contrasting
train_8711
Following the release of the PDTB, smaller corpora annotated with discourse relations have been developed for Hindi (Oza et al., 2009), Turkish (Zeyrek and Webber, 2008), Arabic (Al-Saif and Markert, 2010), and the effort is on-going with Chinese (Zhou and Xue, 2012).
for the vast majority of languages, such a well-annotated resource for discourse relations is not available.
contrasting
train_8712
If an English connective is aligned to one of the Chinese connectives, we can transfer its label from English to the Chinese connective.
it is highly likely that a Chinese connective appears in the source sentence but the reference translation used an alternative expression or paraphrase rather than the 100 identified connectives in the PDTB.
contrasting
train_8713
Moreover, many studies addressed the topic diversification approach for re-ranking the retrieved results of a single query.
these approaches are not directly applicable to multiple queries.
contrasting
train_8714
The second one is the merging technique used by the above-mentioned Watson system (Budzik and Hammond, 2000), which uses Round robin merging, hence it is noted Round-robin.
our proposed method, DivM, is a diverse merging technique which we now proceed to define formally.
contrasting
train_8715
When comparing Round-robin versus SimM, the scores show the superiority of the former method when the number of conveyed topics in fragments is higher than the number of recommended documents, because it provides a diverse list of documents in which documents relevant to less important topics are not displayed.
when the number of topics is smaller than the number of recommendations, SimM provides better results.
contrasting
train_8716
Implicit relations between two text spans are inferred by the reader even if they are not explicitly connected through lexical cues.
explicit relations are explicitly identified with syntactically well-defined terms, so called discourse markers or discourse connectives (DCs).
contrasting
train_8717
Although Versley (2010) used a list of DCs in generating the dataset, he also tried to automatically induce the DCs from his corpus.
versley (2010) did not explicitly evaluate his list of DCs, but rather focused on his parser.
contrasting
train_8718
In other words, by choosing a threshold for LLR, we can label each potential DC candidate as "DC" if its LLR is above the threshold or "non-DC" otherwise.
choosing the LLR threshold depends on the application and there is no principled way to determine an ideal value for the threshold.
contrasting
train_8719
For example, when evaluating the candidate "à ce point", we have to label it as a wrong DC because it is not repertoried in LEXCONN.
it is a segment of the French DC "à ce point que" and only one word is missing in the expression.
contrasting
train_8720
(2010) suggested to consider a partial collocation as a true positive, since it signals the presence of the longer collocation.
this "was not a decision that human evaluators were comfortable with" (Kilgarriff et al., 2010).
contrasting
train_8721
Syntax, for example, is strongly constrained by meter.
additional features like meter and rhyme properties might be useful.
contrasting
train_8722
Theoretically, it would be possible to assign all songs a rating by transferring an album rating to all songs in the album.
in practice this is difficult to do robustly because album ratings and lyrics come from different sites and are not trivial to align.
contrasting
train_8723
The n-gram model hones in on the topic of a text, while the extended model captures more abstract structural and stylistic properties.
both perform similarly on individual genres, i.e., they both in themselves capture important aspects of 'genre'.
contrasting
train_8724
By inspecting the confusion tables, we discovered that old and new are separated well from each other, with new being classified as old in only 23% of the cases and the opposite happening in 17% of the cases.
mid-age shows the lowest F-Score and is misclassified in an almost symmetrical way (new: 27%, mid-age: 41%, old: 32%).
contrasting
train_8725
The above discourse relation labeling tasks are done on the datasets of different size for different languages at the intra-/inter-sentential levels, thus the results cannot be compared directly.
these works show a tendency: discourse connectives are useful clues for explicit discourse relation recognition, and the uses of Chinese connectives in discourse relation labeling are more challenging than those of English connectives.
contrasting
train_8726
The formulation of the log-likelihood ratio in Dunning (1993) is a two-tailed statistical test that if p1 and p2 significantly diverge from each other, the −2 log λ would get a high value.
as mentioned above, we are just interested in the cases that p1 is much higher than p2, because, otherwise, coreference links among the mentions which have the relation r in common are less frequent than expected.
contrasting
train_8727
We also try to incorporate the question text into the above estimation formula.
it does not result in any improvement.
contrasting
train_8728
Another research perspective on question answering service is quality prediction including answer quality prediction (Harper et al., 2008;Shah and Pomerantz, 2010;Severyn and Moschitti, 2012;Severyn et al., 2013) and question quality prediction (Anderson et al., 2012).
since the methods mentioned above are based on the history data, the system will experience the cold start problem.
contrasting
train_8729
One simple way to include phrases in topic modelling is to treat each phrase as a single term.
this method is not ideal because the meaning of a phrase is often related to its composite words.
contrasting
train_8730
It is because replacing many words with phrases decreases the number of co-occurrences in the corpus.
lDA(w_p) is slightly better than the other two baselines on most domains because some frequent phrases add more reliable co-occurrences in the corpus.
contrasting
train_8731
In contrast, LDA(w_p) is slightly better than the other two baselines on most domains because some frequent phrases add more reliable co-occurrences in the corpus.
as we point out in the introduction, some problems still exist.
contrasting
train_8732
Statistical tests show that our proposed method, LDA(p_GPU), outperforms all other three methods significantly (p < 0.05) using both top 15 and top 30 terms.
there's no significant improvement between any pair of the three baselines.
contrasting
train_8733
Then one infers that screen is an opinion target according to Assumption 1 (whether screen is correct is not checked).
in Example 2(a), we can see that good is an opinion word and it modifies thing, but thing is not related to
contrasting
train_8734
opinion words or targets) for supervision, which are regarded as positive labeled examples for classification.
negative labeled examples (i.e.
contrasting
train_8735
At the same time, happy is a correct opinion word, so the whole expression happy day also has a small reconstruction score and then be misclassified.
the reconstruction score of happy day from OCDNN is quite large so the phrase is dropped.
contrasting
train_8736
Obviously, the training data constructed based on this method is not perfect.
since this method can effectively generate a great quantity of data, we think that general characteristics can be modeled with the generated training data.
contrasting
train_8737
The size of the training data of different targets varies greatly in the dataset.
compared with other methods, the proposed method is the most stable one.
contrasting
train_8738
(2012) proposed to use adaptive method for this task.
most of these methods focused on the text with predefined surface words.
contrasting
train_8739
This is consistent with the conclusions of the work cited: if the reciprocity condition is applied too strictly, it does not improve the nearest neighbor lists over all the words.
s-norms seem better able to take advantage of the ranking.
contrasting
train_8740
On the one hand, mention-pair systems classify two mentions in a text as coreferent or not, by using a feature vector obtained from this pair of mentions.
entity-centric approaches determine if a mention (or a partial entity) belongs to another partial entity, using features from other mentions of the same (partial) entities.
contrasting
train_8741
On the one hand, the entity-centric approach allows the system to use all the features of an entity when a mention is evaluated.
the multi-pass model dynamically enriches an entity (with new features) in every iteration.
contrasting
train_8742
Note that if our probabilities are calculated from the full set of n-gram counts for the corpus being segmented and the set of possible segmentations S is not constrained, a segmentation with a smaller number of breaks will generally be preferred over one with more breaks.
in practice we will be greatly constraining S and also using probabilities based on only a subset of all the information in the corpus.
contrasting
train_8743
Unsurprisingly, the DOCREP binary representation does not compress as well as textual serialisation formats with lots of repetition, such as XML or the original stand-off annotation files.
under all of these reported situations, apart from the UIMA compressed binary format, our DOCREP representation is two to five times smaller than its UIMA counterpart, and 15 times smaller than the representation in MySQL.
contrasting
train_8744
We recorded both initialisation and rule application time for the two programs, via instrumentation in case of fomacg-proc and by running the grammar first on an empty file and then on the test corpus in case of cg-proc.
as initialisation is a one-time cost, in the following we are mainly concerned with the time required for applying rules.
contrasting
train_8745
(Clearly, for grammars with several sections, instead of a single tree that contains all rules, one tree must be built for each section to preserve rule priorities.
this does not affect the reasoning above).
contrasting
train_8746
It can be seen that hierarchical rule testing indeed improves performance: even a single level of merging results in 30-42% speedup.
it is also immediately evident that aside from special cases, the disadvantages outweigh the benefits: memory usage and binary size grow exponentially, affecting compilation and grammar loading time as well, and very soon we run into the limits of physical memory.
contrasting
train_8747
The paper reports a binary size similar to the original grammar size.
as the framework breaks away from the practice of direct rule application followed in this paper and in related literature (Hulden, 2011;Peltonen, 2011), closer inspection remains as future work.
contrasting
train_8748
However, due to the limitation of scale and genre coverage of labeled data, it is very difficult to further improve the performance of supervised parsers.
it is very time-consuming and labor-intensive to manually construct treebanks.
contrasting
train_8749
Similar phenomena exist in Japanese dialogues.
most pronouns are omitted (called zero-pronouns), and zero-anaphora resolution is necessary for Japanese PASA.
contrasting
train_8750
Namely, anaphora resolution across multiple sentences is important to dialogue analysis.
most arguments and the predicate appear in the same sentence in the accusative/dative cases of newspapers.
contrasting
train_8751
These methods have been shown effective on words.
the number of features is much larger than the vocabulary size, which makes it infeasible to apply them on features.
contrasting
train_8752
These works used manual techniques for identifying the type of support in user messages and hence, are limited to a small number of messages as compared to the real world data.
the current work builds machine learning classifiers that can automatically predict the type of support in messages.
contrasting
train_8753
Users with momentous activities will attract many other users to be connected with.
nobody will be interested in users with trivial or insignificant behaviors.
contrasting
train_8754
Yu and Lam (2008) proposed an integrated probabilistic and logic approach based on Markov Logic Networks (MLNs) (Richardson and Domingos, 2006) to encyclopedia relation extraction.
this modeling only captures single relation extraction task.
contrasting
train_8755
(2013) presented a semi-supervised graph-based approach to joint Chinese word segmentation and POS tagging.
none of these models has been investigated or applied to social media and social network analysis.
contrasting
train_8756
In other words, 75% of users form high-density relationship ties and the average clustering coefficient (ACC) is high (0.61).
the tie density of the remaining 25% of users is much lower, since the ACC of these users is only 0.18.
contrasting
train_8757
The aspects "iFixit" and "repairability" refer to the unveiling of the iFixit repairability report for the Surface in February.
we also see more traditional product feature or attributes that are not correlated to external events but are key discussion points across multiple months, such as "keyboard" or "touch cover".
contrasting
train_8758
For example, in several papers, frequently occurring noun phrases is used as the building block for detecting aspects (Hu and Liu, 2004a;Hu and Liu, 2004b;Ku et al., 2006).
for microblogs, frequency of a noun phrase alone is an insufficient indicator of an aspect, due to the inherent noise (unlike reviews, microblog posts are short and often not as focused) and redundancy (e.g., due to retweeting in the context of Twitter).
contrasting
train_8759
The aspect 'lomborg', referring to Bjorn Lomborg was the subject of much discussion in March; With his article in WSJ on heavy carbon-di-oxide emissions from electric cars charging, Lomborg created a stir among environmentalists.
the top ranking aspects for Hyundai on Twitter corresponded mostly to chatter about various car models.
contrasting
train_8760
If a news sentence is referred or paraphrased by many tweets, it is assumed to be indicated as more important.
a tweet, besides its local importance indicator, may be more important if it is similar to the theme of the news content.
contrasting
train_8761
LexRank (news) performs the second best in task 1.
the performance of LexRank (tweets) is the worst in task 2.
contrasting
train_8762
Intuitively, the latter is helpful if an author has a concise way of expressing herself so that the concatenated document allows to extract a statistic that is sufficient for capturing her style.
instance-based approaches are better suited for expressive authors and have advantages in sparse data scenarios.
contrasting
train_8763
Thus, MKL and SSAD with function words and BoW kernel confirm today's assumption that all 12 papers have been written by Madison.
choosing SSAD as the base classifier in the absence of prior knowledge leaves much room for interpretations and the user in the need of deciding between three solutions, depending on which kernel she prefers.
contrasting
train_8764
Our proposed method of text aesthetics prediction is similarly based on extracting characteristic features from the text passages.
in the case of literature, it is worth mentioning that in contrast to image aesthetics it is more difficult to describe the subtle attributes which differentiate an aesthetically pleasing text from its counterpart.
contrasting
train_8765
In general, the automatic extraction system has better coverage but less accuracy compared to the YAGO-based system.
automatic extraction of background knowledge may help in real applications by improving coverage greatly.
contrasting
train_8766
In early stage, most researches rely on the similarity between the context of the mention and the definition of candidate entities by proposing different measuring criteria such as dot product, cosine similarity, KL divergence, Jaccard distance and more complicated ones (Bunescu and Pasca, 2006;Cucerzan, 2007;Zheng et al., 2010;Hoffart et al., 2011;Zhang et al., 2011).
these methods mainly rely on text similarity but neglect the internal structure between mentions.
contrasting
train_8767
The merit of such retrieval-based approaches is that, owing to the diversity of the web, systems can retrieve at least some responses for user input, which solves the coverage problem.
this comes at the cost of utterance quality.
contrasting
train_8768
Since the initial publication of this model, a number of extensions have been proposed, the majority of which are focused on enriching the original feature set.
these enriched feature sets are usually application-specific, i.e., it requires a certain expertise and intuition to conceive good features.
contrasting
train_8769
Moreover, for sentence ordering, combining our model with entity-based transition features achieves the best performance.
for essay scoring, the combination is detrimental.
contrasting
train_8770
In our re-implementation of B&L, we use the same parameter settings as B&L's original model, i.e., the optimal transition length k = 3 and the salience threshold l = 2.
when extracting entities in each sentence, e.g., dollar, yesterday, etc., we do not perform coreference resolution; rather, for better coverage, we follow the suggestion of Elsner and Charniak (2011) and extract all nouns (including non-head nouns) as entities.
contrasting
train_8771
We encode the RST-style discourse relations in a similar fashion to PDTB-style encoding.
since the definition of discourse roles depends on the particular discourse framework, here, we adapt Lin et al.
contrasting
train_8772
Because the RST-style discourse parser we use is trained on a fraction of the WSJ corpus, we remove the training texts from our dataset, to guarantee that the discourse parser will not perform exceptionally well on some particular texts.
since the PDTB-style discourse parser we use is trained on almost the entire WSJ corpus, we cannot do the same for the PDTB-style parser.
contrasting
train_8773
Still, on a per-question basis, for the medium and difficult questions where the pupils used the search engine, they slightly improved their performance.
the sample and the effect size are too small to draw any reliable conclusions.
contrasting
train_8774
Again, the result supports our prediction that usage search engines can help resolve writing uncertainties.
a deeper analysis reveals that the large effect is due to the synonym operator.
contrasting
train_8775
All these approaches rely on supervised training data to train the normalization model.
we use an unsupervised approach to learn the normalization lexicon of word forms in SMS to standard text.
contrasting
train_8776
The unsupervised normalization lexicon learning using deep learning performs a good job of learning SMS shorthands.
the induced lexicon contains only one-to-one word mappings.
contrasting
train_8777
In the absence of segmentation, the cached table was used for 12.8% and 14.4% of the total phrases for English-Spanish and Spanish-English, respectively.
with phrase segmentation the cached table was used for 29.2% and 39.2% of total phrases.
contrasting
train_8778
Both corpora were used in the Automatic Content Extraction (ACE) technology evaluation, at the coarse-grained level only.
these corpora are governed by a costly annual license, which prevents the researcher from accessing and utilising them.
contrasting
train_8779
When it comes to the newswire corpus, this is due to differences in the way the NE phrases are written in a newswire domain.
the boundaries of multi-word NE phrases are difficult to detect, in Arabic, due to the fact that the language has a complex morphology.
contrasting
train_8780
Thus far, this study has examined the window-based and dependency-based representation of evidence, in order to increase the performance of the classification process.
there is still room for improvement.
contrasting
train_8781
The goal of this experiment was to evaluate the usefulness of injecting the clustering information from Brown algorithm into the supervised model.
the actual size of the corpora mentioned in section 2.3 is too small to apply the Brown algorithm.
contrasting
train_8782
The state of the art performance, using distant supervision (Li et al., 2012), achieves an end-to-end Avg-Q score of 0.678 (on training data), where we achieve 0.679 (on evaluation data).
our scores are not directly comparable since we reduce the number of classes (and the amount of evidence) in our evaluation.
contrasting
train_8783
It seems that the features are naïve.
these three kinds of features are the most important components of Kazakh language, and they reflect the characteristic of Kazakh language.
contrasting
train_8784
This work put maximum entropy model into recognition of basic Kazakh phrase.
there is still space for improvement in scale and accuracy rate compared to English and Chinese.
contrasting
train_8785
For instance, if cross-validation is done over all pairs in BLESS in the U+W2 300 space, Concat achieves .98 accuracy, while Diff obtains .90.
in this setting the same words appear in the training and test sets (albeit in different pairs).
contrasting
train_8786
For example, and will occur beside several other words like school, elephant and pipe with more or less equally distributed co-occurrence counts with all of these words.
the co-occurrence distribution of school will be skewed, with more bias towards to, high and bus than over, through and coast, with the list of words occurring beside school also being much smaller than that for and.
contrasting
train_8787
Data sparsity is expected to lead to low performance.
the correct analysis of compound nouns is important for a number of NLP tasks, for example in machine translation (Bouillon et al., 1992;Rackow et al., 1992;Johnston and Busa, 1999;Navigli et al., 2003).
contrasting
train_8788
For example, Roberts (2003) proposes that the combination of uniqueness and a presupposition of familiarity underlie all definite descriptions.
possessive definite descriptions (John's daughter) and the weak definites (the son of Queen Juliana of the Netherlands) are neither unique nor necessarily familiar to the listener before they are spoken.
contrasting
train_8789
However, possessive definite descriptions (John's daughter) and the weak definites (the son of Queen Juliana of the Netherlands) are neither unique nor necessarily familiar to the listener before they are spoken.
to the reductionist approaches are approaches to grammaticalization (Hopper and Traugott, 2003) in which grammar develops over time in such a way that each grammatical construction has some prototypical communicative functions, but may also have many non-prototypical communicative functions.
contrasting
train_8790
'He asked that, looking into her eyes' Russian verbal adverbs are similar to participial constructions in adverbial usage (or gerunds) existing in a variety of languages.
they also exhibit significant differences.
contrasting
train_8791
He could have asked his son/daughter to invite Mary for dinner.
the phrase I called Mary at the request of my father is still appropriate provided the act of inviting Mary contains calling her as its essential part.
contrasting
train_8792
Like any verb, buy and sell are supplied with subcategorization frames (aka government patterns in the Meaning -Text approach) that list all their arguments and their means of expression.
their being conversives implies that their lexical functional description should indicate the correlation between their argument structures.
contrasting
train_8793
When we pass from the keyword to such a derivative, we may find that an actant either stays in its initial position (teach mathematics - teacher of mathematics), or changes its number (the verb dominates the preposition - the preposition depends on the verb), or gets blocked altogether (drive home - *driver home).
the syntactic position of the actant can only change in a very limited way.
contrasting
train_8794
Here we will only be interested in one of these arguments -that of the whole, expressed prototypically by preposition iz 'of' as represented in phrases the majority of cases, most of the students.
in sentences with AdvD v bol'šinstve 'mostly, for the most part' this valence slot is filled, as a rule, by the subject of the dominating verb: (22) Oni byli arestovany i podverglis' v bol'šinstve svoem ssylke v Gvianu i na Sejšel'skie ostrova 'they were arrested and mostly exiled to Guiana or Seychelles' [= 'most of them were exiled…'] this is not the only possible syntactic role for this actant.
contrasting
train_8795
Another similarity metric that can be used is LM Perplexity.
in the current scenario we do not have resources (training data) to build a source side LM for computing the perplexity.
contrasting
train_8796
The main reason for this is because the development set used to train the LM is relatively small, at only 1166 sentences.
as our goal in this paper is to perform error analysis on set of data which we already have parallel references (in this case, the development set), the generalization ability of the model is not necessarily fundamental to our task at hand.
contrasting
train_8797
This is a natural result, as the BLEU metric puts a heavier weight on the brevity penalty assigned to shorter translations.
the RIBES-optimized LM detects more reordering errors than the BLEU-optimized LM.
contrasting
train_8798
Furthermore, their results are from a much bigger beam (10 times larger than their baseline), so it is not clear which factor contributes more, the larger beam size or the Slm.
our approach gains significant improvements over a state-of-the-art tree-to-string baseline at a reasonable speed, about 6 times slower.
contrasting
train_8799
(Li and Sun, 2009;Jiang et al., 2013) utilized the massive manual natural annotations or punctuation information on the Internet to improve the performance of CWS.
these natural annotations are just partial annotations and their roles depend on the qualities of the selected resource, such as Wikipedia.
contrasting