id: string (lengths 7–12)
sentence1: string (lengths 6–1.27k)
sentence2: string (lengths 6–926)
label: string (4 classes)
train_15100
Table 1 shows a number of features extracted from the aligned dependency trees in Figure 1 and highlights that adjectives and nouns do not share many features if only first order dependencies are considered.
through the inclusion of inverse and higher order dependency paths we can observe that the second order features of the adjective align with the first order features of the noun.
contrasting
train_15101
Due to offsetting one of the constituents, the composition operation is not commutative and hence avoids identical representations for house boat and boat house.
the typed nature of our vector space results in extreme sparsity; for example, while the untyped VSM has 130k dimensions, our APT model can have more than 3m dimensions.
contrasting
train_15102
The TC-LSTMs model the interactions at position (i, j) by merging the internal memories c_{i−1,j}, c_{i,j−1} and hidden states h_{i−1,j}, h_{i,j−1} along the row and column dimensions.
with TC-LSTMs, LC-LSTMs first use two standard LSTMs in parallel, producing hidden states h^1_{i,j} and h^2_{i,j} along the row and column dimensions respectively, which are then merged together before flowing to the next step.
contrasting
train_15103
The intuition is that, when the ensemble is not certain of the correct answer, it should not assign a large cost to implausible attachments.
hamming cost would assign a cost of 1 (column labeled "hamming") in all incorrect cases.
contrasting
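A minimal sketch of the Hamming cost mentioned in train_15103, which charges 1 for every incorrect head attachment regardless of how implausible the wrong head is (the variable names are illustrative, not from the paper):

    def hamming_cost(gold_heads, pred_heads):
        # each incorrectly attached token contributes a cost of 1;
        # plausible and implausible wrong attachments are penalized equally
        return sum(1 for g, p in zip(gold_heads, pred_heads) if g != p)

    # example: one of three tokens gets the wrong head -> cost 1
    print(hamming_cost([0, 1, 1], [0, 2, 1]))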
train_15104
layer is added on top of a BLSTM, which allows embeddings of previously predicted tags to propagate through and influence the pending tagging decision.
the language model layer is only effective when both scheduled sampling for training and beam search for inference are used.
contrasting
train_15105
It follows from the above assumption that the dependency between any two tokens in q is likely to be the same as the dependency between their corresponding tokens in s. This allows us to create a treebank if we can project the dependency from sentences to queries.
since x and y may not be directly connected by a dependency edge in s, we need a method to derive the dependency between x, y ∈ q from the (indirect) dependency between x, y ∈ s. We propose such a method in Section 3.
contrasting
train_15106
First, we argue that if the dependency tree of s has a subtree that contains each token in q once and only once, then it is very likely that the subtree expresses the same semantics as the query.
if we cannot find such a subtree, it is an indication that we cannot derive reasonable dependency information from the sentence.
contrasting
train_15107
The head-modifier embeddings have about a 3% advantage in UAS over randomized embeddings.
using pretrained word2vec embeddings, we also achieve 3% advantage.
contrasting
train_15108
Some recent work (Ganchev et al., 2012;Barr et al., 2008) investigated the problem of syntactic analysis for web queries.
current study is mostly at the POS tag rather than dependency tree level.
contrasting
train_15109
Recaps not only help the audience absorb the essence of previous episodes, but also grab people's attention with upcoming plots.
creating those recaps for every newly aired episode is labor-intensive and time-consuming.
contrasting
train_15110
Our work is complementary to video description: the large number of unlabeled videos can be utilized to train an end-to-end recap extraction system when video description models can properly output textual descriptions.
with prior work, the main contributions of this paper are: (1) We propose a novel problem, text recap extraction for TV series.
contrasting
train_15111
Their tool AbstFinder is based on the idea that the significance of terms and concepts is related to the number of mentions in the text.
in some cases, there is only a weak correlation between the term frequencies and their relevance in documents.
contrasting
train_15112
Thus, we use the IOB encoding during the annotation step.
we want to discuss a drawback of this notation: when applying text classification approaches to information extraction tasks with IOB encoding, the number of classes roughly doubles, and this reduces the amount of training data per class.
contrasting
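A minimal sketch of the class-set growth under IOB encoding described in train_15112; the entity types here are invented for illustration:

    # with plain labels there is one class per entity type (plus O);
    # IOB splits every type into B- and I- variants, so the number of
    # classes roughly doubles and the training data per class shrinks
    entity_types = ["Actor", "Component"]
    plain_labels = entity_types + ["O"]
    iob_labels = [prefix + t for t in entity_types for prefix in ("B-", "I-")] + ["O"]
    print(plain_labels)  # ['Actor', 'Component', 'O']
    print(iob_labels)    # ['B-Actor', 'I-Actor', 'B-Component', 'I-Component', 'O']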
train_15113
Positive unlabeled learning (PU) (De Comite et al., 1999;Liu et al., 2002) may be then used for deceptive review spam detection (Hernandez et al., 2013).
this clearly contrasts with our problem, where our training data contains a complete labeled set of positive (spam) and negative (nonspam) reviews besides the unlabeled set of review data.
contrasting
train_15114
Second, sentences consist of heavily overlapping groups that include words reappearing in one or more sentences.
as it appears in Table 4, the number of clusters in the clustering based regularizers is significantly smaller than that of the sentences - and definitely controlled by the designer - thus resulting in much faster computation.
contrasting
train_15115
For the DRRN, the state and comments are sent through two separate deep neural networks.
in our baselines, we do not explicitly model how values associated with each comment are combined to form the action value.
contrasting
train_15116
To this end, we need to improve the quantitative empirical understanding of such reuse accompanied by qualitative empirical studies.
only few such works exist.
contrasting
train_15117
On Semeval, which features larger pieces of text, our CRP technique improves on TRP, the best performing baseline, by more than 1%, although the difference is not statistically significant.
CRP is head and shoulders above main, with an absolute gain of 6%.
contrasting
train_15118
The performance of such systems is tightly bound to the quality of the underlying features.
it is laborious to manually design the most informative features for such a system.
contrasting
train_15119
Instead of taking the mean of the intermediate recurrent layer states h_t, we could use the last state vector h_M to compute the score and remove the mean-over-time layer.
as we will show in Section 5.2, it is much more effective to use the mean-over-time layer and take all recurrent states into account.
contrasting
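A minimal sketch of the two pooling options contrasted in train_15119, mean-over-time versus the last recurrent state; the states below are random stand-ins rather than real LSTM outputs:

    import numpy as np

    H = np.random.randn(12, 32)      # M = 12 intermediate recurrent states h_1..h_M, hidden size 32
    mean_over_time = H.mean(axis=0)  # average every state (mean-over-time layer)
    last_state = H[-1]               # keep only the final state h_M
    print(mean_over_time.shape, last_state.shape)  # both (32,)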
train_15120
We normalize all gold-standard scores to [0, 1] and use them to train the network.
during testing, we rescale the output of the network to the original score range and use the rescaled scores to evaluate the system.
contrasting
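A minimal sketch of the normalize-then-rescale procedure in train_15120, assuming simple min-max scaling over a known original score range:

    def normalize(score, lo, hi):
        # map a gold score from its original range [lo, hi] to [0, 1] for training
        return (score - lo) / (hi - lo)

    def rescale(output, lo, hi):
        # map a network output from [0, 1] back to the original range for evaluation
        return output * (hi - lo) + lo

    print(normalize(8.0, 2.0, 12.0))  # 0.6
    print(rescale(0.6, 2.0, 12.0))    # 8.0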
train_15121
We obtain 0.4% and 1.0% improvement from CNN and LSTM ensembles, respectively.
our best model (row 7 in Table 2) is the ensemble of 10 instances of CNN models and 10 instances of LSTM models and outperforms the baseline BLRR system by 5.6%.
contrasting
train_15122
Interpreting neural network models and the interactions between nodes is not an easy task.
it is possible to gain an insight of a network by analyzing the behavior of particular nodes.
contrasting
train_15123
Such sentences are intended to have the same meaning or usage within a similar context but use different words or writing style.
even though non-uniform sentences tend to have similar wording, similar sentence pairs do not necessarily indicate a non-uniform language instance.
contrasting
train_15124
It is worth mentioning that NLD is similar to Plagiarism Detection and Paraphrase Detection (PD), as all these tasks aim to capture similar sentences with the same meaning (Das and Smith, 2009).
the goal of authors in plagiarism and paraphrasing is to change as many words as possible to increase the differences between texts, whereas in technical writing, the authors try to avoid such differences, but they do not always succeed and thus NLD solutions are needed.
contrasting
train_15125
(3) Their method requires retraining of the model on complete training data in order to adapt to each domain.
our method can adapt a single general model to different domains using small in-domain data, leading to a considerable reduction in training time.
contrasting
train_15126
We see that having no regularization (λ = 0) fails on all three L1s, due to overfitting on the smaller in-domain data.
the effect of varying regularization is more significant on L1 Russian and Spanish, as the general NNJM has seen much smaller in-domain data compared to L1 Chinese.
contrasting
train_15127
The high correlation between the character-level system and lexical similarity explains why character-level translation performs nearly as well as other methods for language pairs which have high lexical similarity, but performs badly otherwise.
the OS-level representation has a lower correlation with lexical similarity and sits somewhere between character-level and word/morpheme-level systems.
contrasting
train_15128
Their model used a regular expression-specific semantic unification technique to disambiguate the meaning of the natural language descriptions.
our method is similar in that we require only description and regex pairs to learn. However, we treat the problem as a direct translation task without applying any domain-specific knowledge.
contrasting
train_15129
DocId: 17036, DCT: 2010-07-17, Sentence: 6,15: Rather than recall the devices or offer a hardware fix, Jobs [Steve Jobs] said yesterday [2010-07-16] that Apple will offer a free case to anyone who has purchased an iPhone 4 [iPhone 4].
[...], Jobs [Steve Jobs] admitted that the percentage of calls dropped on the iPhone 4 [iPhone 4] was slightly greater than the percentage of calls dropped on the 3GS [iPhone 3GS].
contrasting
train_15130
The authors use the inverse ranking of the average precision previously achieved by individual systems as the weights in their algorithm.
since we use this approach for systems with no historical performance data, we use uniform weights across all unsupervised systems for both the tasks.
contrasting
train_15131
(2015) for incorporating language models trained on monolingual corpora for machine translation.
our approach differs in two ways. (Footnote: The LM was trained to achieve a perplexity of 120. Figure 2: Illustration of our late and deep fusion approaches to integrate an independently trained LM to aid video captioning.)
contrasting
train_15132
During training, the model learns to embed "one-hot" words into a lower 500d space by applying a linear transformation.
the embedding is learned only from the limited and possibly noisy text in the caption data.
contrasting
train_15133
We utilize similar approaches (late fusion, deep fusion) to train an LSTM for translating video to text that exploits large monolingual-English corpora (Wikipedia, BNC, UkWac) to improve RNN based video description networks.
unlike Gulcehre et al.
contrasting
train_15134
The key advantage of the CRF framework is the flexibility to consider arbitrary features of the input, as well as enough features over the output structure to encourage it to be well-formed and consistent.
inference in CRFs is fast only if the features over the output structure are limited.
contrasting
train_15135
Conventional training seeks to minimise the difference between y true and y pred .
in order to make our model robust against noise, we also want to minimize the variation of the output when noise is applied to the input.
contrasting
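A minimal sketch of a combined objective in the spirit of train_15135: a fit term between y_true and y_pred plus a stability term that penalizes output variation under input noise. The weighting lam, the noise scale, and the toy model are assumptions:

    import numpy as np

    def robust_loss(model, x, y_true, lam=0.1, noise_scale=0.01):
        y_pred = model(x)
        fit = np.mean((y_true - y_pred) ** 2)            # match the gold output
        y_noisy = model(x + noise_scale * np.random.randn(*x.shape))
        stability = np.mean((y_pred - y_noisy) ** 2)     # stay stable when the input is perturbed
        return fit + lam * stability

    model = lambda x: x @ np.array([0.5, -0.2])          # toy linear "model"
    x = np.random.randn(4, 2)
    print(robust_loss(model, x, model(x)))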
train_15136
Overall, the gating values are small.
this does not mean the model does not utilize the character-level inputs.
contrasting
train_15137
In particular, it has been proven that Inversion Transduction Grammar (ITG) (Wu, 1997), which captures structural coherence between parallel sentences, helps in word alignment (Zhang and Gildea, 2004;Zhang and Gildea, 2005).
ITG has not been introduced into an agreement constraint so far.
contrasting
train_15138
In KFTT and BTEC Corpus, itg achieved significant improvement against sym and none on IBM Model 4 (p ≤ 0.05).
in the Hansard Corpus, itg shows no improvement against sym.
contrasting
train_15139
In KFTT, itg is comparable to sym on IBM Model 4 in machine translation; however, itg achieved significant improvement in terms of word alignment, which follows the previous reports that better word alignment does not always result in better translation (Yang et al., 2013).
in BTEC, itg outperforms sym both on word alignment and machine translation.
contrasting
train_15140
Neural network based approaches usually take Chinese SRL as a sequence labeling task and use bidirectional recurrent neural networks (RNN) with long short-term memory (LSTM) to solve the problem (Wang et al., 2015).
both of the above two kinds of approaches identify each candidate argument separately without considering the relationship between arguments.
contrasting
train_15141
Recent work has also applied neural network based models to sentiment analysis tasks (Chen et al., 2011;Socher et al., 2013;Moraes et al., 2013;Dong et al., 2014;dos Santos and Gatti, 2014;Kalchbrenner et al., 2014).
none of the above methods focused on visualizing and understanding the inner workings of these successful neural networks.
contrasting
train_15142
For example, while males are less likely to use first-person singular pronouns overall, they are much more likely to use them as the subject of aggressive physical contact verbs like "kick", "shoot", "slap", and "smash", suggesting men are more likely to express themselves as agents of aggressive contact.
women use first-person singulars in the social sphere, particularly in an affiliative context.
contrasting
train_15143
Ideal point models are often created using Bayesian techniques over large amounts of roll-call data (Clinton et al., 2004;Jackman, 2001).
these models are not used to make predictions.
contrasting
train_15144
Natural language understanding is the core of the human computer interactions.
building new domains and tasks that need a separate set of models is a bottleneck for scaling to a large number of domains and experiences.
contrasting
train_15145
There has been work dealing with text-to-AMR parsing (Flanigan et al., 2014;Wang et al., 2015;Peng et al., 2015;Vanderwende et al., 2015;Pust et al., 2015;Artzi et al., 2015).
relatively little work has been done on AMR-to-text generation.
contrasting
train_15146
Given a non-directed graph G_N with n cities, supposing that there is a traveling cost between each pair of cities, TSP tries to find a tour of the minimal total cost visiting each city exactly once.
the asymmetric traveling salesman problem (ATSP) tries to find a tour of the minimal total cost on a directed graph, where the traveling costs between two nodes are different in each direction.
contrasting
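A minimal sketch of evaluating tour costs for the TSP/ATSP setting of train_15146; in the asymmetric case cost[i][j] need not equal cost[j][i], so the direction of the tour matters:

    def tour_cost(cost, tour):
        # total cost of visiting each city in `tour` once and returning to the start
        n = len(tour)
        return sum(cost[tour[i]][tour[(i + 1) % n]] for i in range(n))

    cost = [[0, 2, 9],
            [1, 0, 6],
            [7, 3, 0]]                 # asymmetric: cost[0][1] = 2 but cost[1][0] = 1
    print(tour_cost(cost, [0, 1, 2]))  # 2 + 6 + 7 = 15
    print(tour_cost(cost, [0, 2, 1]))  # 9 + 3 + 1 = 13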
train_15147
(2009) introduces a method for incorporating a trigram language model.
as a result, the number of nodes in the AGTSP graph grows exponentially.
contrasting
train_15148
Thus, to obtain better prediction accuracies for these variables, information from other sources is necessary, which is beyond the scope of this paper.
the present FOMC meeting might give an indication to future FOMC plans and thus, to the treasury rates.
contrasting
train_15149
Our RNN model, once trained, will be able to map any new ingredient and product category (their text descriptions) into vectors, and make a binary prediction of whether the two go together.
training the model takes considerable time and cannot be easily adjusted in the face of new evidence, e.g., a few positive and negative categories for a previously unseen ingredient.
contrasting
train_15150
The ER metric, which was derived from counts of errors detected using e-rater®, is nearly as good as the state-of-the-art RBM (GLEU), and the interpolation of these metrics has the strongest reported correlation with the human ranking (ρ = 0.885, r = 0.867).
since e-rater® is not widely available to researchers, we also tested a metric using Language Tool, which does nearly as well when interpolated with GLEU (ρ = 0.857, r = 0.867).
contrasting
train_15151
This avoids the need to collect supervised labels on a large scale, which can be prohibitively expensive.
automatically evaluating the quality of these models remains an open question.
contrasting
train_15152
These metrics have recently been adopted by dialogue researchers (Ritter et al., 2011;Li et al., 2015;Galley et al., 2015b;Wen et al., 2015;Li et al., 2016).
these metrics assume that valid responses have significant word overlap with the ground truth responses.
contrasting
train_15153
Such systems can be evaluated using recall or precision metrics.
when deployed in a real setting these models will not have access to the correct response given an unseen conversation.
contrasting
train_15154
In single-round conversation, a system is expected to encode only one utterance as a context (Ritter et al., 2011;Wang et al., 2013).
in multi-turn conversation, a system is expected to encode multiple utterances (Shang et al., 2015;Lowe et al., 2015).
contrasting
train_15155
For the categorical slot filling approach, various algorithms that directly model the distribution of slot values have been proposed, including generative models (Williams, 2010), maximum entropy linear classifiers (Metallinou et al., 2013), and neural networks (Ren et al., 2014).
none of these models are applicable for predicting a variable that ranges over an infinite set, and it is not straightforward to extend them suitably.
contrasting
train_15156
Many works (Lowe et al., 2015;Serban et al., 2015a;Dodge et al., 2016) have investigated conversation datasets for developing chat bot or QA-like general purpose conversation agents.
collecting data to develop goal oriented dialogue systems that can help users to complete a task in a specific domain remains difficult.
contrasting
train_15157
At each generation step, the target is labelled with 0 if that delexicalised token has been generated; otherwise it is set to 1.
for the models without attention, the targets per turn are set to the same value because the condition vector will not be able to learn the dynamically changing behaviour without attention.
contrasting
train_15158
Lastly, the results suggested that making a complex system more interpretable at different levels not only helps our understanding but also leads to the highest success rates.
there is still much work left to do.
contrasting
train_15159
Taken together, it is remarkable that the model identified these patterns using only distributional vectors and only the positive/negative example pairs.
the reader should note these are not true Hearst patterns: Hearst patterns explicitly relate a hypernym and hyponym using an exact pattern match of a single co-occurrence.
contrasting
train_15160
Out of these, DEFIE is the cleanest resource.
the equivalence classes tend to be small, prioritizing precision over recall.
contrasting
train_15161
However, the equivalence classes tend to be small, prioritizing precision over recall.
PPDB (Ganitkevitch et al., 2013) offers the largest repository of paraphrases.
contrasting
train_15162
The other systems did not report that measure.
they performed the evaluation of subsumption, entailment or hypernymy relationships which are related to synonymy.
contrasting
train_15163
The importance of this intermediate representation led to the creation of the interpretable semantic textual similarity task (Agirre et al., 2015a) that focuses on predicting chunk-level alignments and similarities.
while extensive resources exist for sentence-level relationships, human annotated chunk-aligned data is comparatively smaller.
contrasting
train_15164
From the row D_A + D_S in Table 2(a), we observe that the typed score F1 is slightly behind that of the rank-1 system while all other three metrics are significantly better, indicating that we need to improve our modeling of the intersection of the three aspects.
this does not apply to images dataset where the improvement on the typed score F1 comes from the typed alignment.
contrasting
train_15165
For predicting sentence similarities, in both variants of the task, word or chunk alignments have extensively been used (Sultan et al., 2015;Sultan et al., 2014a;Sultan et al., 2014b;Hänig et al., 2015;Karumuri et al., 2015;Agirre et al., 2015b;Banjade et al., 2015, and others).
to these systems, we proposed a model that is trained jointly to predict alignments, chunk similarities and sentence similarities.
contrasting
train_15166
Intuitively, we can use two separate uni-directional RNNs where each one constructs its respective attended encoder context vectors for computing RNN hidden states.
the drawback of this approach is that the decoder would often produce different alignments resulting in discrepancies for the forward and backward directions.
contrasting
train_15167
On the other hand, by recursively feeding both the query vector and the soft headword embedding into the RNN, the model implicitly captures high-order parsing history information, which can potentially improve the parsing accuracy (Yamada and Matsumoto, 2003;McDonald and Pereira, 2006).
for a graph-based dependency parser, utilizing parsing history features is computationally expensive.
contrasting
train_15168
The model is extended to integrate parsing and tagging in .
develop the stack LSTM architecture, which uses three LSTMs to respectively model the sequences of buffer states, stack states, and actions.
contrasting
train_15169
The way we query the memory component and obtain the soft headword embeddings is essentially the attention mechanism.
different from the above studies where the alignment information is latent, in dependency parsing, the arc between the modifier and headword is known during training.
contrasting
train_15170
We observe that cases in which the parsers disagree are of substantially worse quality compared to humanbased annotations.
in cases of agreement between the parsers, the resulting gold standards do not exhibit a clear disadvantage relative to the Human Gold annotations.
contrasting
train_15171
Agreement estimates in NLP are often obtained in annotation setups where both annotators edit the same automatically generated input.
in such experimental conditions, anchoring can introduce cases of spurious disagreement as well as spurious agreement between annotators due to alignment of one or both participants towards the given input.
contrasting
train_15172
Several estimates of expert inter-annotator agreement for English parsing were previously reported.
most such evaluations were conducted using annotation setups that can be affected by an anchoring bias (Carroll et al., 1999; Rambow et al., 2002).
contrasting
train_15173
Second, machine comprehension systems cannot exploit the large level of redundancy present on the web to find statements that provide a strong syntactic match to the question (Yang et al., 2015).
a machine comprehension system must use the single phrasing in the given passage, which may be a poor syntactic match to the question.
contrasting
train_15174
The neural counterpart to alignment, attention (Bahdanau et al., 2015), which is a key part of our approach, was originally proposed and has been predominantly used in conjunction with LSTMs (Rocktäschel et al., 2016;Wang and Jiang, 2016) and to a lesser extent with CNNs (Yin et al., 2016).
our use of attention is purely based on word embeddings and our method essentially consists of feed-forward networks that operate largely independently of word order.
contrasting
train_15175
Coreference resolution systems typically operate by making sequences of local decisions (e.g., adding a coreference link between two mentions).
most measures of coreference resolution performance do not decompose over local decisions, which means the utility of a particular decision is not known until all other decisions have been made.
contrasting
train_15176
To the best of our knowledge reinforcement learning has not been applied to coreference resolution before.
imitation learning algorithms such as SEARN (Daumé III et al., 2009) have been used to train coreference resolvers (Daumé III, 2006;Ma et al., 2014;Clark and Manning, 2015).
contrasting
train_15177
In this case, the source word " " and target word "park" work as the non-terminals.
it is problematic when we need to adjoin a subtree which is not presented in training sentences, which we call floating subtree in this paper.
contrasting
train_15178
Flexible non-terminals are powerful for handling floating subtrees and achieve better translation quality.
the computational cost of decoding becomes high even though they are compactly represented in the lattice form (Cromieres and Kurohashi, 2014).
contrasting
train_15179
Flexible nonterminals increase the number of translation rules because the insertion positions are selected during the decoding.
we think it is possible to restrict possible insertion positions or even select only one insertion position by looking at the tree structures on both sides.
contrasting
train_15180
So the entire network records length very accurately.
unlike in the toy problem, no single unit tracks length perfectly.
contrasting
train_15181
The attention model plays a crucial role in NMT, as it shows which source word(s) the model should focus on in order to predict the next target word.
the attention or alignment quality of NMT is still very low (Mi et al., 2016a;Tu et al., 2016).
contrasting
train_15182
The attentions, α_{t,1}, ..., α_{t,l}, in each step t play an important role in NMT.
the accuracy is still far behind the traditional MaxEnt alignment model in terms of alignment F1 score (Mi et al., 2016b;Tu et al., 2016).
contrasting
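A minimal sketch of how the per-step attention weights α_{t,1}...α_{t,l} in train_15182 are commonly obtained, as a softmax over scores between the decoder state and the l encoder states; the dot-product scoring and the shapes are assumptions:

    import numpy as np

    def attention_weights(decoder_state, encoder_states):
        scores = encoder_states @ decoder_state           # one score per source position, shape (l,)
        scores -= scores.max()                            # numerical stability
        weights = np.exp(scores) / np.exp(scores).sum()   # softmax: weights sum to 1
        return weights

    enc = np.random.randn(5, 8)    # l = 5 source positions, hidden size 8
    dec = np.random.randn(8)
    alpha = attention_weights(dec, enc)
    print(alpha, alpha.sum())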
train_15183
Callison-Burch (2009) explored the use of crowdsourcing platforms for evaluating MT quality, with good results.
we are not aware of any research that investigates the effect of improved MT on human behavior.
contrasting
train_15184
These multilingual interactions facilitated by MT, such as reading nonnative listing descriptions or conversing with a foreign seller, are integral to the user experience.
due to the unique nature of the products available in the marketplace, a generic third-party MT system often falls short when translating user-generated content.
contrasting
train_15185
Neither System 1 nor System 2 showed a significant difference between selection of translations provided by the trained or untrained system: chi-squared tests did not detect a significant difference between number of responses favoring the trained system and number of responses favoring the generic system (p = 0.1948 and p = 0.26, respectively, for the two systems).
a chi-squared test indicated a significant preference for System 3, which was chosen 35% more often than the generic system (p = 0.0048).
contrasting
train_15186
Arguably, (a) is the only correct parse tree for untestably; the reading associated with (b) is hard to get.
unlockable is truly ambiguous between "able to be unlocked" (c) and "unable to be locked" (d).
contrasting
train_15187
Manual or automatic transcription at the word level is typically not possible because of the absence of an orthography or prior lexicon, and though manual phonemic transcription is possible, it is prohibitively slow.
translations of the minority language into a major language are more easily acquired.
contrasting
train_15188
We build a strong logistic regression model, which achieves an F1 score of 51.0%, a significant improvement over a simple baseline (20%).
human performance (86.8%) is much higher, indicating that the dataset presents a good challenge problem for future research.
contrasting
train_15189
SQuAD contains 107,785 question-answer pairs on 536 articles, and is almost two orders of magnitude larger than previous manually labeled RC datasets such as MCTest (Richardson et al., 2013).
to prior datasets, SQuAD does not provide a list of answer choices for each question.
contrasting
train_15190
The model is challenged more on other named entities (i.e., location, person and other entities) because there are many more plausible candidates.
named entities are still relatively easy to identify by their POS tag features.
contrasting
train_15191
Due to difficulties caused by the non-homographic nature of phrase correspondences, the units of correspondence in previous studies are defined as sequences of words like in (Yao et al., 2013) and not syntactic phrases.
syntactic structures are important in modeling sentences, e.g., their sentiments and semantic similarities (Socher et al., 2013;Tai et al., 2015).
contrasting
train_15192
Similarly, Choe and McClosky (2015) used dependency forests of paraphrasal sentence pairs and allowed disagreements to some extent.
alignment quality was beyond their scope.
contrasting
train_15193
The same aligned pair can have more than one support of descendant alignments because there are numerous descendant node combinations.
the Monotonous and the Maximum Set conditions allow ∆_L to be further restricted so that each of the aligned pairs in H_L has only one support.
contrasting
train_15194
(2008, 2011) formally prove that sequences of steps in the edge-factored GBDP (Eisner, 1996) can be used to emulate any individual step in the arc-hybrid system (Yamada and Matsumoto, 2003) and the Eisner and Satta (1999, Figure 1d) version.
they did not draw an explicitly direct connection between Eisner and Satta (1999) and TBDPs.
contrasting
train_15195
On the one hand, it is highly expressive to cover a majority of semantic analysis.
the corresponding Maximum Subgraph problem with an arc-factored disambiguation model can be solved in low-degree polynomial time.
contrasting
train_15196
(2014b) showed that even a small amount of supervised data can boost the end-to-end F1 score by 3.9% on the TAC KBP tasks.
existing relation extraction datasets such as the SemEval-2010 Task 8 dataset (Hendrickx et al., 2009) and the Automatic Content Extraction (ACE) (Strassel et al., 2008) dataset are less useful for this purpose.
contrasting
train_15197
To alleviate such a drawback, attempts have been made to build relation extractors with a small set of seed instances or human-crafted patterns (Nakashole et al., 2011; Carlson et al., 2010), based on which more patterns and instances will be iteratively generated by bootstrap learning.
these methods often suffer from semantic drift (Mintz et al., 2009).
contrasting
train_15198
represent relation mentions is to concatenate or average its text feature embeddings.
text feature embeddings may be in a different semantic space from relation types.
contrasting
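A minimal sketch of the two options named in train_15198 for building a relation-mention vector from its text feature embeddings, concatenation versus averaging; the number and size of the feature embeddings are invented:

    import numpy as np

    feature_embeddings = [np.random.randn(50) for _ in range(4)]  # embeddings of the mention's text features

    averaged = np.mean(feature_embeddings, axis=0)       # fixed 50-d vector, independent of feature count
    concatenated = np.concatenate(feature_embeddings)    # 200-d vector, grows with the number of features
    print(averaged.shape, concatenated.shape)            # (50,) (200,)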
train_15199
From the comparison, it can be seen that the NL strategy yields better performance than the TD strategy, since the true labels inferred by Investment are actually wrong for many instances.
as discussed in Sec.
contrasting