id: string (length 7–12)
sentence1: string (length 6–1.27k)
sentence2: string (length 6–926)
label: string (4 classes)
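The records below follow a fixed four-line layout (id, sentence1, sentence2, label) matching the schema above. A minimal sketch of how one might group such lines into records, assuming that layout holds throughout; the helper `parse_records` and the truncated sample strings are illustrative, not part of the dataset:

```python
def parse_records(lines):
    """Group a flat list of lines into records of four fields each.

    Assumes the input is a sequence of complete 4-line records in the
    order: id, sentence1, sentence2, label. Trailing partial records
    (fewer than 4 lines) are ignored.
    """
    records = []
    for i in range(0, len(lines) - 3, 4):
        records.append({
            "id": lines[i],
            "sentence1": lines[i + 1],
            "sentence2": lines[i + 2],
            "label": lines[i + 3],
        })
    return records

# Truncated sample record (illustrative only):
sample = [
    "train_1700",
    "However, not all words can be cleanly classified ...",
    "our method lets a particular lexical entry be listed ...",
    "contrasting",
]
print(parse_records(sample)[0]["label"])  # → contrasting
```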
train_1700
However, not all words can be cleanly classified as domain-specific or domain-independent, and this process is conducted prior to training a classifier.
our method lets a particular lexical entry be listed as a neighbour for multiple base entries.
contrasting
train_1701
The semantic component of our model learns word vectors via an unsupervised probabilistic model of documents.
in keeping with linguistic and cognitive research arguing that expressive content and descriptive semantic content are distinct (Kaplan, 1999; Jay, 2000; Potts, 2007), we find that this basic model misses crucial sentiment information.
contrasting
train_1702
For instance, for the following tweet, which contains only three words, it is difficult for any existing approaches to classify its sentiment correctly.
relations between individual tweets are more common than those in other sentiment data.
contrasting
train_1703
Finally, a review is classified based on the average semantic orientation of the sentimental phrases in the review.
(Pang et al., 2002) treat the sentiment classification of movie reviews simply as a special case of a topic-based text categorization problem and investigate three classification algorithms: Naive Bayes, Maximum Entropy, and Support Vector Machines.
contrasting
train_1704
Many attempts have been made to extract these expressions from corpora, mainly using automated methods that exploit statistical means.
to our knowledge, no reliable, extensive solution has yet been made available, presumably because of the difficulty of extracting correctly without any human insight.
contrasting
train_1705
Without a rule system of semantic composition, it is difficult to evaluate the validity of the JDMWE concerning idiomaticity.
we can confirm that 3,600 Japanese standard idioms that Sato (2007) listed from five Japanese idiom dictionaries published for human readers are included in the JDMWE as a proper subset.
contrasting
train_1706
Although the CLC contains information about the linguistic errors committed (see Section 2), we try to extract an error-rate in a way that doesn't require manually tagged data.
we also use an error-rate calculated from the CLC error tags to obtain an upper bound for the performance of an automated error estimator (true CLC error-rate).
contrasting
train_1707
It is likely that a larger training set and/or more consistent grading of the existing training data would help to close this gap.
our system is not measuring some properties of the scripts, such as discourse cohesion or relevance to the prompt eliciting the text, that examiners will take into account.
contrasting
train_1708
In addition, the metric was shown to correlate well with human judgments.
a significant drawback of this approach is that PEM requires substantial in-domain bilingual data to train the semantic adequacy evaluator, as well as sample human judgments to train the overall metric.
contrasting
train_1709
For example, a paraphrase suggestion tool for a word processing software might be more concerned with semantic adequacy, since presenting a paraphrase that does not preserve the meaning would likely result in a negative user experience.
a query expansion algorithm might be less concerned with preserving the precise meaning so long as additional relevant terms are added to improve search recall.
contrasting
train_1710
We have so far used BLEU to measure semantic adequacy since it is the most common MT metric.
other more advanced MT metrics that have shown higher correlation with human judgments could also be used.
contrasting
train_1711
broadcasting news, force-protection, travel) is acceptable to users.
a remaining open question is how to predict confidence scores for machine translated words and sentences.
contrasting
train_1712
Experiments reported in Section 4 indicate that the proposed confidence measure has a high correlation with HTER.
it is not very clear if the core MT system can benefit from confidence measure by providing better translations.
contrasting
train_1713
Exponential models and other classifiers have been used in several recent studies to condition phrase model probabilities on source-side context (Chan et al., 2007; Carpuat and Wu, 2007a; Carpuat and Wu, 2007b).
this has been generally accomplished by training independent classifiers associated with different source phrases.
contrasting
train_1714
One can see that for both rule sets the estimated probabilities for rules observed a single time is only slightly higher than probabilities for generated unobserved rules.
rules with relatively high counts in the second set receive proportionally higher estimates, while the difference between the singleton rule and the most frequent rule in the second set, which was observed 3 times, is smoothed away to an even greater extent.
contrasting
train_1715
With others, like ("even, number") and ("book, list"), the users seem to be encouraging collocations to be in the same topic.
the collocations may not be in any document in this corpus.
contrasting
train_1716
As presented here, the technique for incorporating constraints is closely tied to inference with Gibbs sampling.
most inference techniques are essentially optimization problems.
contrasting
train_1717
In our implementations, we found that the speed improvement from switching to a first-rest encoding and implementing more efficient queries was modest.
as we will see in Section 4.2, the last-rest encoding allows us to exploit the scrolling nature of queries issued by decoders, which results in speedups that far outweigh those achieved by reversing the trie.
contrasting
train_1718
In the previous section, we provided compact and efficient implementations of associative arrays that allow us to query a value for an arbitrary n-gram.
decoders do not issue language model requests at random.
contrasting
train_1719
In many games, it is sufficient to maintain a distinct action-value for each unique state and action in a large search tree.
when the branching factor is large it is usually beneficial to approximate the action-value function, so that the value of many related states and actions can be learned from a reasonably small number of simulations (Silver, 2009).
contrasting
train_1720
Given sentence y_i and its dependency parse q_i, we model the distribution over predicate labels e_i as: Here e_j is the predicate label of the j-th word being labeled, and e_{1:j−1} is the partial predicate labeling constructed so far for sentence y_i.
in the second layer of the neural network, the units z represent a predicate labeling e_i of every sentence y_i ∈ d. Our intention is to incorporate, into action-value function Q, information from only the most relevant sentence.
contrasting
train_1721
IR views language as an open-ended set of mostly stable signs with which texts can be indexed and retrieved, focusing more on a text's potential relevance than its potential meaning.
fLP views language as a system of unstable signs that can be used to talk about the world in creative new ways.
contrasting
train_1722
The system can be accessed at: www.educatedinsolence.com/aristotle. Our retrieval goals in IR are often affective in nature: we want to find a way of speaking about a topic that expresses a particular sentiment and carries a certain tone.
affective categories are amongst the most cross-cutting structures in language.
contrasting
train_1723
Fishlov would dub these non-poetic similes (NPS).
the query "?P @P" will retrieve corpus-attested elaborations of stereotypes in @P to suggest similes of the form "X is as P as P_1 S", where P_1 ∈ ?P.
contrasting
train_1724
Using the same clustering algorithm over this feature set, @X achieves a clustering accuracy (as measured via cluster purity) of 70.2%, compared to 71.96% for Almuhareb and Poesio.
when @X is used to harvest a further set of attribute nouns for X, via web queries of the form "the P* of X" (where P ∈ @X), then @X augmented with this additional set of attributes (like hands for surgeon) produces a larger space of 7,183 features.
contrasting
train_1725
The AA problem can be naturally posed as one of single-label multiclass classification, with as many classes as candidate authors.
unlike usual text categorization tasks, where the core problem is modeling the thematic content of documents (Sebastiani, 2002), the goal in AA is modeling authors' writing style (Stamatatos, 2009b).
contrasting
train_1726
Acceptable performance in AA has been reported with character n-gram representations.
as with word-based features, character n-grams are unable to incorporate sequential information from documents in their original form (in terms of the positions in which the terms appear across a document).
contrasting
train_1727
LOWBOW representations have also been applied to discourse segmentation (AMIDA, 2007) and have been suggested for text summarization (Das and Martins, 2007).
to the best of our knowledge the use of the LOWBOW framework for AA has not been studied elsewhere.
contrasting
train_1728
As expected, different kernels yield different results.
the diffusion kernel outperformed most of the results obtained with other kernels, confirming the results obtained by other researchers (Lafferty and Lebanon, 2005). Table 3: Authorship attribution accuracy when using bags of local histograms and different kernels for word-based and character-based representations.
contrasting
train_1729
For these authors sequential information seems to be particularly helpful.
low recognition performance was obtained for authors BL (B. K. Lim) and JM (J. MacArtney).
contrasting
train_1730
Indeed, in the absence of gold-standard data, related studies (see Section 2) have been forced to utilize ad hoc procedures for evaluation.
one contribution of the work presented here is the creation of the first large-scale, publicly available dataset for deceptive opinion spam research, containing 400 truthful and 400 gold-standard deceptive reviews.
contrasting
train_1731
We observe that automated classifiers outperform human judges for every metric, except truthful recall where JUDGE 2 performs best.
this is expected given that untrained humans often focus on unreliable cues to deception (Vrij, 2008).
contrasting
train_1732
This paper tackles the task of bilingual sentiment analysis.
to previous work, we (1) assume that some amount of sentimentlabeled data is available for the language pair under study, and (2) investigate methods to simultaneously improve sentiment classification for both languages.
contrasting
train_1733
We see that the performance of the proposed approach improves steadily by adding more and more unlabeled data.
even with only 2,000 unlabeled sentence pairs, the proposed approach still produces large performance gains.
contrasting
train_1734
For example, (Hu and Liu, 2004; Nishikawa et al., 2010) have explored opinion summarization in the review domain, and (Paul et al., 2010) summarizes contrastive viewpoints in opinionated text.
opinion summarization in spontaneous conversation is seldom studied.
contrasting
train_1735
This debate frame is often not appropriate for contentious issues for similar reasons as the positive/negative frame.
our method does not assume a fixed debate frame, and rather develops one based on the opponents of the contention at hand.
contrasting
train_1736
Several works have used the relation between speakers or authors for classifying their debate stance (Thomas et al., 2006; Agrawal et al., 2003).
these works also assume the same debate frame and use the debate corpus, e.g., floor debates in the House of Representatives, online debate forums.
contrasting
train_1737
Thus, building a domain specific NER for tweets is necessary, which requires a lot of annotated tweets or rules.
manually creating them is tedious and prohibitively unaffordable.
contrasting
train_1738
In our work, gazetteers prove to be substantially useful, which is consistent with the observation of Ratinov and Roth (2009).
the gazetteers used in our work contain noise, which hurts the performance.
contrasting
train_1739
Similarly, to study the effectiveness of the CRF model, it is replaced by its alternations, such as the HMM labeler and a beam search plus a maximum entropy based classifier.
to what is reported by Ratinov and Roth (2009), it turns out that the CRF model gives remarkably better results than its competitors.
contrasting
train_1740
In this way, we generate dependency-based features for context words such as seee "see" in (seee, flm, +2) (based on the target word flm in the context of flm to seee).
expanded dependency features may introduce noise, and we therefore introduce expanded dependency weights w d ∈ {0.0, 0.5, 1.0} to ameliorate the effects of noise: a weight of w d = 0.0 means no expansion, while 1.0 means expanded dependencies are indistinguishable from non-expanded (strict match) dependencies.
contrasting
train_1741
Twitter has been shown to be useful in a number of applications, including tweets as social sensors of realtime events (Sakaki et al., 2010), the sentiment prediction power of Twitter (Tumasjan et al., 2010), etc.
current explorations are still in an early stage and our understanding of Twitter content still remains limited.
contrasting
train_1742
Then she chooses a bag of words one by one based on the chosen topic.
not all words in a tweet are closely related to the topic of that tweet; some are background words commonly used in tweets on different topics.
contrasting
train_1743
The topic-specific preference value P_t(w) for each word w is its random jumping probability, with the constraint that ∑_{w∈V} P_t(w) = 1 given topic t. A large R_t(·) indicates a word is a good candidate keyword in topic t. We denote this original version of the Topical PageRank as TPR.
the original TPR ignores the topic context when setting the edge weights; the edge weight is set by counting the number of co-occurrences of the two words within a certain window size.
contrasting
train_1744
We found λ = 0.1 was the optimal parameter for cTPR and TPR.
tPR is more sensitive to λ.
contrasting
train_1745
This idea of consensus-based extraction is also central to our method.
we incorporate this idea into our model by simultaneously clustering output and labeling documents rather than performing the two tasks in serial fashion.
contrasting
train_1746
They find significant improvement using this method on data sets consisting of Dutch first names, family names, and geographical names.
it is unclear whether such an approach would be able to improve the performance of the current state-of-the-art G2P systems.
contrasting
train_1747
In the training data, many of the sentences are questions directed to a second person, you.
chinese questions do not invert and the subject remains in the canonical first position, thus the transition from the start of sentence to you is highly weighted.
contrasting
train_1748
We propose a language-independent method for the automatic extraction of transliteration pairs from parallel corpora.
to previous work, our method uses no form of supervision, and does not require linguistically informed preprocessing.
contrasting
train_1749
These systems used a manually labelled set of data for initial supervised training, which means that they are semi-supervised systems.
our system is fully unsupervised.
contrasting
train_1750
If we leave open a cell which contains no edges in the maximum likelihood (ML) parse, we incur the cost of additional processing, but are still able to recover the ML tree.
if we close a chart cell which contains an ML edge, search errors occur.
contrasting
train_1751
Finally, the size and structure of the grammar is the single largest contributor to parse efficiency.
to the current paradigm, we plan to investigate new algorithms that jointly optimize accuracy and efficiency during grammar induction, leading to more efficient decoding.
contrasting
train_1752
Query optimization for logic programming is NP-complete since query optimization for even simple conjunctive database queries is NP-complete (Chandra and Merlin, 1977).
the fact that variables in queries arising from LCFRS rules correspond to the endpoints of spans in the string to be parsed means that these queries have certain structural properties (Gildea, 2011).
contrasting
train_1753
Since |V_2| ≥ 4r^2, this implies that at least r columns are fully contained in V_2.
at least 4r + 7 − (3r − 1) = r + 8 columns are fully contained in V_1.
contrasting
train_1754
In the inference rule in figure 1 there are 2(r + 1) variables that can be bound to positions in w_1, and as many that can be bound to positions in w_2.
the side conditions imply m′_{ij} = m_{ij} + |u_{ij}|, for i ∈ {1, 2} and 0 ≤ j ≤ r, and therefore the number of free variables is only r + 1 for each component.
contrasting
train_1755
In the case of context-free grammars, this is usually circumvented by casting the rules in binary form prior to epsilon rule elimination.
this is not possible in our case, since SCFGs do not allow normal forms with a constant bound on the length of the right-hand side of each component.
contrasting
train_1756
The same holds for the computation of p^prefix. Secondly, the computation of p^prefix_G([v_1, v_2 a]) for all possible a ∈ Σ may be impractical.
one may also compute the probability that the next part-of-speech in the target translation is A.
contrasting
train_1757
In our case both the parser and the supertagger are feature-based models, so from the perspective of a single parse tree, the change is simple: the tree is simply scored by the weights corresponding to all of its active features.
since the features of the supertagger are all Markov features on adjacent supertags, the change has serious implications for search.
contrasting
train_1758
This occurs because DD continues to find higher scoring parses at each iteration, and hence the results change.
for BP, even if the marginals have not converged, the minimum risk solution turns out to be fairly stable across successive iterations.
contrasting
train_1759
Online learning algorithms like perceptron or the margin-infused relaxed algorithm (MIRA) (Crammer and Singer, 2003) are frequently used for structured problems where Viterbi inference is available.
we find that such methods are unstable on our problem.
contrasting
train_1760
", "starbucks" and "coffee" are more general words given the document clusters compared to "joint" and "ventures" (see Fig.2), because they appear more frequently in document clusters.
ttM has no way of knowing that "starbucks" and "coffee" are common terms given the context.
contrasting
train_1761
Because HybHSum uses the human generated summaries as supervision during model development and our systems do not, our performance is quite promising considering the generation is completely unsupervised without seeing any human generated summaries during training.
the R-2 evaluation (as well as R-4) w/ stop-words does not outperform other models.
contrasting
train_1762
n}, where i is an index corresponding to a particular entity pair (e j , e k ), x i contains all of the sentences with mentions of this pair, and y i = relVector(e j , e k ).
given this form of supervision, we would like to find the setting for θ with the highest likelihood: this objective would be difficult to optimize exactly, and algorithms for doing so would be unlikely to scale to data sets of the size we consider.
contrasting
train_1763
Although they do sampling for inference, the global aggregation variables require reasoning about an exponentially large (in the number of sentences) sample space.
our approach required approximately one minute to train and less than one second to test, on the same data.
contrasting
train_1764
Conceptually, this is a rather simple approach as all spans of texts are treated uniformly and are being mapped to one of several relation types of interest.
these approaches to RE require a large amount of manually annotated training data to achieve good performance, making it difficult to expand the set of target relations.
contrasting
train_1765
Whether this is correct or not is debatable.
to avoid being penalized when our RE system actually correctly predicts the label of an implicit relation, we take the following approach.
contrasting
train_1766
Some of the reasons are due to possibly multiple prepositions in between a mention pair, preposition sense ambiguity, pp-attachment ambiguity, etc.
in general, we observe that inferring the structures allows us to discard a large portion of the mention pairs which have no valid relation between them.
contrasting
train_1767
Recent work on bilingual Word Sense Disambiguation (WSD) has shown that a resource deprived language (L1) can benefit from the annotation work done in a resource rich language (L2) via parameter projection.
this method assumes the presence of sufficient annotated data in one resource rich language which may not always be possible.
contrasting
train_1768
Ideally, this threshold (0.6) should have been selected using a development set.
since our work focuses on resource scarce languages we did not want to incur the additional cost of using a development set.
contrasting
train_1769
The benefit of bilingual bootstrapping is clearly felt for small seed sizes.
as the seed size increases the performance of the 3 algorithms, viz., MonoBoot, BiBoot and OnlySeed is more or less the same.
contrasting
train_1770
assume the existence of an oracle which provides the senses of each word.
conceptResolver automatically determines the number of senses for each word.
contrasting
train_1771
In logic, two negatives always cancel each other out.
in language this is only theoretically the case: she is not unhappy does not mean that she is happy; it means that she is not fully unhappy, but she is not happy either.
contrasting
train_1772
The proposed representation encodes P&G has sold coffee, but not to airlines.
it is not said that the buyers are likely to have been other kinds of companies.
contrasting
train_1773
CCG is one instantiation (Steedman, 2000), which is used by many semantic parsers, e.g., Zettlemoyer and Collins (2005).
the logical forms there can become quite complex, and in the context of program induction, this would lead to an unwieldy search space.
contrasting
train_1774
Our model is arc-factored, so we can sum over all DCS trees in Z_L(x) using dynamic programming.
in order to learn, we need to sum over {z ∈ Z_L(x) : z_w = y}, and unfortunately, the additional constraint z_w = y does not factorize.
contrasting
train_1775
Supervised part-of-speech (POS) taggers, for example, approach the level of inter-annotator agreement (Shen et al., 2007, 97.3% accuracy for English).
supervised methods rely on labeled training data, which is time-consuming and expensive to generate.
contrasting
train_1776
The simplest way to expand this formulation for single-type graphs is to duplicate each predicate node, with one node for each order of the types, and then the ILP is unchanged.
this is inefficient as it results in an ILP with 2n(2n − 1) variables and 2n(2n−1)(2n−2) transitivity constraints.
contrasting
train_1777
This phrase table is traditionally generated by going through a pipeline of two steps, first generating word (or minimal phrase) alignments, then extracting a phrase table that is consistent with these alignments.
as DeNero and Klein (2010) note, this two step approach results in word alignments that are not optimal for the final task of generating phrase tables that are used in translation.
contrasting
train_1778
Estimating such grammars under a Maximum Likelihood criterion is known to be plagued by strong overfitting leading to degenerate estimates (DeNero et al., 2006).
our learning objective not only avoids overfitting the training data but, most importantly, learns joint stochastic synchronous grammars which directly aim at generalisation towards yet unseen instances.
contrasting
train_1779
In the RFE estimate, every rule NT P → α / β receives a probability in proportion with the times that α / β was covered by the NT label.
estimating the parameters under Maximum-Likelihood Estimation (MLE) for the latent translation structure model p(σ) is bound to overfit towards memorising whole sentence-pairs as discussed in (Mylonakis and Sima'an, 2010), with the resulting grammar estimate not being able to generalise past the training data.
contrasting
train_1780
On the other hand, estimating the parameters under Maximum-Likelihood Estimation (MLE) for the latent translation structure model p(σ) is bound to overfit towards memorising whole sentence-pairs as discussed in (Mylonakis and Sima'an, 2010), with the resulting grammar estimate not being able to generalise past the training data.
apart from overfitting towards long phrase-pairs, a grammar with millions of structural rules is also liable to overfit towards degenerate latent structures which, while fitting the training data well, have limited applicability to unseen sentences.
contrasting
train_1781
Earlier approaches for linguistic syntax-based translation such as (Yamada and Knight, 2001; Galley et al., 2006; Huang et al., 2006; Liu et al., 2006) focus on memorising and reusing parts of the structure of the source and/or target parse trees and constraining decoding by the input parse tree.
to this approach, we choose to employ linguistic annotations in the form of unambiguous synchronous span labels, while discovering ambiguous translation structure taking advantage of them.
contrasting
train_1782
Parser quality is usually evaluated by comparing its output to a gold standard whose annotations are linguistically motivated.
there are cases in which there is no linguistic consensus as to what the correct annotation is (Kübler et al., 2009).
contrasting
train_1783
Therefore, there is no single parameter which completely controls the edge direction, and hence there is no direct way to perform an edge-flip by parameter modification.
setting extreme values for the parameters controlling the direction of a certain edge type creates a strong preference towards one of the directions, and effectively determines the edge direction.
contrasting
train_1784
Its induced parent (w_3) is its gold child, and thus undirected evaluation does not consider it an error.
w_3 is assigned w_2's gold parent, w_1.
contrasting
train_1785
Much like undirected evaluation, NED will mark the attachment of w_2 as correct, since its induced parent is its gold child.
unlike undirected evaluation, w_3's induced attachment will also be marked as correct, as its induced parent is its gold grandparent.
contrasting
train_1786
Thus, NED does not accept all possible annotations of structures of size 3.
using a method which accepts all possible annotations of structures of size 3 seems too permissive.
contrasting
train_1787
The binary branching nature of CCG means that it is naturally compatible with bottom-up parsing algorithms such as shift-reduce and CKY (Ades and Steedman, 1982; Steedman, 2000).
the parsing work by Clark and Curran (2007), and also Hockenmaier (2003) and Fowler and Penn (2010), has only considered chart-parsing.
contrasting
train_1788
A likely reason is that the recovery of longer-range dependencies requires more processing steps, increasing the chance of the correct structure being thrown off the beam.
the precision did not drop more quickly than C&C, and in fact is consistently higher than C&C across all dependency lengths, which reflects the fact that the long range dependencies our parser managed to recover are comparatively reliable.
contrasting
train_1789
The shift-reduce parser is linear-time (in both sentence length and beam size), and can analyse over 10 sentences per second on a 2GHz CPU, with a beam of 16, which compares very well with other constituency parsers.
this is no faster than the chart-based C&C parser, although speed comparisons are difficult because of implementation differences (C&C uses heavily engineered C++ with a focus on efficiency).
contrasting
train_1790
Here, the Berkeley parser (solid blue edges) incorrectly attaches from debt to the noun phrase $ 30 billion whereas the correct attachment (dashed gold edges) is to the verb raising.
there are a range of error types, as shown in Figure 2: (a) is an attachment ambiguity where by yesterday afternoon should attach to had already, (b) is an NP-internal ambiguity where half a should attach to dozen and not to newspapers, and (c) is an adverb attachment ambiguity, where just should modify fine and not the verb 's.
contrasting
train_1791
We also find their surface marker / punctuation cues of : and , preceding the noun.
we additionally find other cues, most notably that if the N IN sequence occurs following a capitalized determiner, it tends to indicate a nominal attachment (in the n-gram, the preposition cannot attach leftward to anything else because of the beginning of the sentence).
contrasting
train_1792
We would like to use additional data to model the fluent part of spoken language.
the Switchboard corpus is one of the largest widelyavailable disfluency-annotated speech corpora.
contrasting
train_1793
These units typically represent the most frequent phoneme sequences in English words.
it isn't clear why these units would produce the best hybrid output.
contrasting
train_1794
This weight is the negative log probability of the resulting model after adding the corresponding features and priors.
the lexicon prior poses a problem for this construction since the penalty incurred by a new unit in the segmentation depends on whether that unit is present elsewhere in that segmentation.
contrasting
train_1795
Humans listening to speech with natural prosody are able to understand the content with low cognitive load and high accuracy.
most modern ASR systems only use an acoustic model and a language model.
contrasting
train_1796
There are also many studies using prosodic information for various spoken language understanding tasks.
research using prosodic knowledge for speech recognition is still quite limited.
contrasting
train_1797
Considering that traditional statistical strategies would lead to recall dropping uniformly to 0, this is a vast improvement.
the symbolic baseline recalls 80% of occurrences in aggregate, indicating that we are not yet making optimal use of the dictionary for cases when a code is missing from the training data.
contrasting
train_1798
In a typical classroom assessment (e.g., an exam, assignment or quiz), an instructor or a grader provides students with feedback on their answers to questions related to the subject matter.
in certain scenarios, such as a number of sites worldwide with limited teacher availability, online learning environments, and individual or group study sessions done outside of class, an instructor may not be readily available.
contrasting
train_1799
When comparing scores using RMSE, the difference is less consistent, yielding an average improvement of 0.002.
for one measure (tf*idf), the improvement is 0.063, which brings its RMSE score close to the lowest of all BOW metrics.
contrasting