id: string (lengths 7 to 12)
sentence1: string (lengths 6 to 1.27k)
sentence2: string (lengths 6 to 926)
label: string (4 classes)
train_5800
(2018) created graded metaphoricity layers for VUAMC using crowdsourcing, with the former approach labelling grammatical constructions and the latter labelling tokens.
manually labelling larger amounts of data is costly, even with crowdsourcing.
contrasting
train_5801
The differences caused by tied BWS scores do not indicate errors but show a small difference due to the nature of BWS and GPPL scores.
for metaphor novelty, the difference when tied scores are removed is negligible.
contrasting
train_5802
English is undoubtedly the most prominent language with multiple resources and tools.
english language processing is only a part of NLP in general.
contrasting
train_5803
The authors propose several probing tasks and divide them into those pertaining to surface, syntactic and semantic phenomena.
we decide to discard the 'syntactic versus semantic' distinction and consider all tasks either surface (see Section 3.1) or compositional (see Section 3.2).
contrasting
train_5804
Ideally, we would like to have linearly separable spaces with respect to S-classes; presumably, embeddings from which information can be effectively extracted by such a simple mechanism are better.
this might not be the case considering the complexity of the space: non-linear models may detect S-classes more accurately.
contrasting
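A minimal sketch of the probing contrast described in this pair, assuming scikit-learn and random stand-ins for the embeddings and S-class labels (both hypothetical): a linear probe is compared against a non-linear one, and a gap in favour of the latter would indicate information that is present in the space but not linearly separable.

```python
# Hypothetical probing sketch: linear vs. non-linear classifiers on
# embeddings. The data is a random stand-in, not a real embedding space.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 768))    # stand-in for sentence embeddings
y = rng.integers(0, 5, size=1000)   # stand-in for S-class labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

linear_probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
nonlinear_probe = MLPClassifier(hidden_layer_sizes=(256,), max_iter=300).fit(X_tr, y_tr)

# If the non-linear probe is clearly more accurate, the S-classes are
# encoded in the space but not linearly separable.
print("linear:    ", linear_probe.score(X_te, y_te))
print("non-linear:", nonlinear_probe.score(X_te, y_te))
```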
train_5805
This can be as straightforward as simply having a feature engineered baseline, as with our named entity recognition example (Section 4.2).
it can also be as nuanced as having access to an analysis of the types of knowledge required to accurately predict certain instances in the data, as in our recognizing textual entailment example, where we use an alternate validation set for the MultiNLI corpus in which subsets have been earmarked as of interest for specific kinds of task and linguistic knowledge (Section 4.1).
contrasting
train_5806
If the knowledge brought to the process is only partial, then only partial understanding of network function will be possible.
as one iterates through the interpretation process, the potential relevance of additional knowledge may emerge, and the process can be repeated with the expanded set.
contrasting
train_5807
The pathways were created by analyzing which neurons behave cohesively, indicating a subprocess within the network.
these subprocesses do not correspond strongly to any of the tested features.
contrasting
train_5808
For instance, in English or Norwegian we take [a] nap, whereas in Spanish we throw it, and in French, Catalan, German and Italian we make it.
they are compositionally less rigid than some other types of multiword expressions such as, e.g., idioms (as, e.g., [to] kick the bucket) or multiword lexical units (as, e.g., President of the United States or chief inspector).
contrasting
train_5809
We found that, despite this operation being the go-to representation for lexical relation modeling, concatenation works as well or better, and clear improvements can be obtained by incorporating explicitly learned relation vectors.
even with these improvements, categorizing LFs proves to be a difficult task.
contrasting
train_5810
The reason is that the quality of $\{\hat{S}^m_{H \to L}, T_{HE}\}$ is naturally restricted by the $\{\hat{S}^w_{H \to L}, T_{HE}\}$, whose quality is in turn restricted by the induced dictionary.
by combining the augmented datasets from these two methods, we consistently improve the translation performance over using only word substitution augmentation (compare Table 2 rows 7 and 9).
contrasting
train_5811
Of course, we do not have enough s-t parallel data to obtain a good estimate of the true backtranslation distribution P_s(X|y) (otherwise, we can simply use that data to learn θ).
we posit that even a small amount of data is sufficient to construct an adequate data selection policy Q(X|y) to sample the sentences x from multilingual data for training.
contrasting
train_5812
Automatic de-identification classifiers can significantly speed up the sanitization process.
obtaining a large and diverse dataset to train such a classifier that works well across many types of medical text poses a challenge as privacy laws prohibit the sharing of raw medical records.
contrasting
train_5813
In addition to structured medical data, EHRs contain free-text patient notes that are a rich source of information (Jensen et al., 2012).
due to privacy and data protection laws, medical records can only be shared and used for research if they are sanitized to not include information potentially identifying patients.
contrasting
train_5814
For values of N up to 1 000, our FastText and GloVe models are able to learn representations that allow training de-identification models that reach or exceed the target F1 score of 95%.
training becomes unstable for N > 500: at this point, the adversary is able to break the representation privacy when trained for an additional 50 epochs (Fig.
contrasting
train_5815
FastText's approach to embedding unknown words (word embeddings are the sum of their subword embeddings) should intuitively prove useful on datasets with misspellings and ungrammatical text.
when using the additional casing feature, FastText beats GloVe only by 0.05 percentage points on the i2b2 test set.
contrasting
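A rough illustration of the FastText mechanism referred to above, where a word's vector is the sum of its character n-gram vectors; `subword_vectors` is a hypothetical lookup table, and the 3-to-6 n-gram range mirrors FastText's defaults.

```python
# Sketch of FastText-style subword embedding for unknown/misspelled words.
# `subword_vectors` is an assumed {ngram: vector} table, not a real model.
import numpy as np

def char_ngrams(word, n_min=3, n_max=6):
    padded = f"<{word}>"   # FastText pads words with boundary markers
    return [padded[i:i + n]
            for n in range(n_min, n_max + 1)
            for i in range(len(padded) - n + 1)]

def embed(word, subword_vectors, dim=100):
    grams = [g for g in char_ngrams(word) if g in subword_vectors]
    if not grams:
        return np.zeros(dim)
    return np.sum([subword_vectors[g] for g in grams], axis=0)

# A misspelling such as "goverment" shares most n-grams with "government",
# so their summed vectors stay close, which is why the approach should
# intuitively help on noisy text.
```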
train_5816
Using the adversarially learned representation, de-identification models reach an F1 score of 97.4%, which is close to the non-private system (97.67%).
the automatic pseudonymization approach only reaches an F1 score of 95.0% at N = 500.
contrasting
train_5817
Named entity recognition (NER) is one of the best studied tasks in natural language processing.
most approaches are not capable of handling nested structures which are common in many applications.
contrasting
train_5818
If this is the case, then for the token "United", R and D will hold the embedding of/directions to the tokens "The" and "Kingdom" in their disaggregated (unmerged) form.
for the token "government", R and D will hold embeddings of/ directions to the combined entity "the United Kingdom" in each of the three slots for "The", "United" and "Kingdom".
contrasting
train_5819
The DL model improves dramatically as the data size increases and achieves the best performance among the 7 models when 7000 training examples are available.
the other models suffer much less from data scarcity, with the exception of Random Forest.
contrasting
train_5820
Deep active learning (DAL) also outperforms SVM and yields comparable performance to DTAL.
the standard deviations in performance of DAL are substantially higher than those of DTAL (e.g.
contrasting
train_5821
F1 score, the harmonic mean of precision and recall, is often used to select/evaluate the best models.
when precision needs to be prioritized over recall, a state-of-the-art model might not be the best choice.
contrasting
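A worked illustration of the trade-off this pair raises: F1 weighs precision and recall equally, while the more general F-beta score with beta < 1 prioritizes precision (the numbers below are made up for illustration).

```python
# F-beta: the weighted harmonic mean of precision and recall.
# beta = 1 recovers F1; beta < 1 favours precision over recall.
def f_beta(precision, recall, beta=1.0):
    if precision == 0 and recall == 0:
        return 0.0
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

print(f_beta(0.95, 0.40))            # F1   ~ 0.563
print(f_beta(0.95, 0.40, beta=0.5))  # F0.5 ~ 0.745, rewards the high precision
```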
train_5822
Trade-offs between precision and recall have been well researched for classification (Joachims, 2005;Jansche, 2005;Cortes and Mohri, 2004).
barring studies on inference-time heuristics, there is limited work on training precision-oriented sequence tagging models.
contrasting
train_5823
(Pentina et al., 2015;Oh et al., 2015;Gong et al., 2016;Zhang et al., 2017)).
while in CL the prediction task is fixed but the trained algorithm is exposed to increasingly more complex training examples in subsequent stages, in TRL the algorithm is trained to solve increasingly more complex tasks in subsequent stages, but the training data is kept fixed across the stages.
contrasting
train_5824
Moreover, even the min values of RF2 consistently outperform the NoDA model (where a classifier is trained on the source domain and applied to the target domain without domain adaptation; bottom line of Table 1) and the min values of BasicTRL outperform NoDA in 5 of 6 setups (average difference of 3.9% for RF2 and 3.5% for BasicTRL).
the min value of NoTRL is outperformed by NoDA in 5 of 6 cases (with an averaged gap of 2.8%).
contrasting
train_5825
To align the loss function with a perplexity measure we propose a simple refinement step, where we use the expected counts computed by the learned PNFA as features of a log-linear model, and learn interpolation weights.
to Equation 2, which uses the longest context x of length n to compute the conditional probability, the interpolated model leverages the ability of the PNFA to model substring expectations of all lengths up to n. This is similar to classic interpolation of language models (Rosenfeld, 1994;Chen, 2009).
contrasting
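A schematic of the classic language-model interpolation this pair refers to: conditional probabilities from models of order 0 up to n are mixed with weights that sum to one. The `prob` lookup and the weights are assumptions for illustration, not the authors' PNFA refinement.

```python
# Hedged sketch of interpolated language modelling. `prob(word, ctx)` is a
# hypothetical order-k model; weights[k] is the learned weight for the
# model that conditions on the last k words of context.
def interpolated_prob(word, context, prob, weights):
    return sum(w * prob(word, context[-k:] if k > 0 else ())
               for k, w in enumerate(weights))
```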
train_5826
Building a single model that jointly learns to perform many tasks effectively has been a longstanding challenge in Natural Language Processing (NLP).
multi-task NLP remains difficult for many applications, with multi-task models often performing worse than their single-task counterparts (Plank and Alonso, 2017;Bingel and Søgaard, 2017;McCann et al., 2018).
contrasting
train_5827
(2019) distill single-language-pair machine translation systems into a many-language system.
they focus on multilingual rather than multi-task learning, use a more complex training procedure, and only experiment with Single→Multi distillation.
contrasting
train_5828
or formality (Fan et al., 2017;Hu et al., 2017;Ficler and Goldberg, 2017;Shen et al., 2017;Herzig et al., 2017;Fu et al., 2018;Rao and Tetreault, 2018).
human language actually involves a constellation of interacting aspects of style, and NNLG models should be able to jointly control these multiple interacting aspects.
contrasting
train_5829
Temp thinks white is going to win because white has the advantage of one more pawn.
temp cannot predict that white will lose the advantage in the next move.
contrasting
train_5830
Modeling human language requires the ability to not only generate fluent text but also encode factual knowledge.
traditional language models are only capable of remembering facts seen at training time, and often have difficulty recalling them.
contrasting
train_5831
We also employ the same regularization strategy (DropConnect (Wan et al., 2013) + Dropout (Srivastava et al., 2014)) and weight tying approach.
we perform optimization using Adam (Kingma and Ba, 2015) with learning rate 1e-3 instead of NT-ASGD, having found that it is more stable.
contrasting
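An illustrative PyTorch configuration matching the description in this pair, with Adam at learning rate 1e-3 in place of NT-ASGD; the model shape and dropout value are placeholders, not the authors' exact setup.

```python
# Minimal sketch: LSTM language model trained with Adam instead of NT-ASGD.
import torch

model = torch.nn.LSTM(input_size=400, hidden_size=1150, num_layers=3, dropout=0.3)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # lr 1e-3 as described
```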
train_5832
Although learning a latent representation makes the model more interpretable and easy to manipulate, a model that assumes a fixed-size latent representation cannot utilize information from the source sentence anymore.
there are also some approaches proposed recently that do not manipulate the latent representation.
contrasting
train_5833
They employ Denoising Auto-encoders (Vincent et al., 2008) and back-translation (Sennrich et al., 2016) to build a translation model between different styles.
both lines of the previous models make few attempts to utilize the attention mechanism to refer to the long-term history or the source sentence, except Lample et al.
contrasting
train_5834
(6)), our model failed to learn a meaningful output and only learned to generate a single word for any combination of input sentence and style.
when we don't use cycle reconstruction loss (Eq.
contrasting
train_5835
However, if we only use the real sentence, the model loses a significant part of its ability to control the style of the generated sentence and thus yields poor performance in style accuracy.
the model can still perform style control far better than the input copy model discussed in the previous part.
contrasting
train_5836
(2018b) is the only work on controlling the sentiment for story ending generation.
their work needs to manually label the story dataset with sentiment labels (happy, sad, unknown), which is time-consuming and labor-intensive.
contrasting
train_5837
Recently, many approaches have been proposed to generate a better story in terms of coherence (Jain et al., 2017), rationality, and topic-consistency (Yao et al., 2018a).
most story generation methods lack the ability to receive guidance from users to achieve a specific goal.
contrasting
train_5838
The model described above does not explicitly cater to the structure of the narration of recipes in the generation process.
we know that procedural text has a high level structure that carries a skeleton of the narrative.
contrasting
train_5839
Thresholds that are too low cause a large number of constraints, which adversely affect paraphrase quality.
with a high threshold, the proposed method can achieve high performance stably.
contrasting
train_5840
Sequence-to-sequence (seq2seq) models have achieved tremendous success in text generation tasks.
there is no guarantee that they can always generate sentences without grammatical errors.
contrasting
train_5841
We also conduct experiments on the WMT17 Automatic Post-Editing (APE) task.
we observe a large number of grammatical errors in the references which make the automatic evaluation less reliable.
contrasting
train_5842
For most sentences edited by GEC, their grammaticality is improved, while bad cases account for only a small proportion (≤10%) in all six tasks.
the task-oriented improvements vary across the tasks.
contrasting
train_5843
The two questions in a question pair in the Quora dataset are typically very similar in meaning.
the WikiAnswers paraphrase corpus tends to be noisier but one source question is paired with multiple target questions.
contrasting
train_5844
However, both stages are largely isolated in the status quo and, hence, information from the two phases is never properly fused.
this work proposes RankQA: RankQA extends the conventional two-stage process in neural QA with a third stage that performs an additional answer reranking.
contrasting
train_5845
Adaptive retrieval, like many other recent advancements (e.g., Lee et al., 2018;Lin et al., 2018), limits the amount of information that flows between the information retrieval and machine comprehension modules in order to select better answers.
rankQA does not limit the information, but directly re-ranks the answers to remove noisy candidates.
contrasting
train_5846
The CuratedTREC or WebQuestions datasets, for instance, are highly sensitive to some information retrieval features.
in all cases, the fused combination of features from both information retrieval and machine comprehension is crucial for obtaining a strong performance.
contrasting
train_5847
Earlier systems of this type comprise various modules such as, for example, query reformulation (e.g., Brill et al., 2002), question classification (Li and Roth, 2006), passage retrieval (e.g., Harabagiu et al., 2000), or answer extraction (Shen and Klakow, 2006).
the aforementioned modules have been reduced to two consecutive steps with the advent of neural QA.
contrasting
train_5848
(2018) re-rank the paragraphs before forwarding them to machine comprehension.
none of the listed works introduce answer reranking to neural QA.
contrasting
train_5849
Inference & Learning Challenges The model described above is conceptually simple.
inference and learning are challenging since (1) the open evidence corpus presents an enormous search space (over 13 million evidence blocks), and (2) how to navigate this space is entirely latent, so standard teacher-forcing approaches do not apply.
contrasting
train_5850
We find that ORQA is more robust at separating semantically distinct text with high lexical overlap, as shown in the first three examples.
it is expected that there are limits to how much information can be compressed into 128-dimensional vectors.
contrasting
train_5851
In multi-hop reading comprehension, a system answers a question over a collection of paragraphs by combining evidence from multiple paragraphs.
to single-hop reading comprehension, in which a system can obtain good performance using a single sentence (Min et al., 2018), multi-hop reading comprehension typically requires more complex reasoning over how two pieces of evidence relate to each other.
contrasting
train_5852
and "Which animal races annually for a national title as part of ANS?".
in the given set of paragraphs, there are multiple games that can be the answer to the first sub-question.
contrasting
train_5853
In the SimpleQuestion dataset, 99% of the relations in the test set also exist in the training data, which means their embeddings could be learned well during training.
for those relations which are never seen in the training data (called unseen relations), their embeddings have never been trained since initialization.
contrasting
train_5854
Each sample in SQ includes a human annotated question and the corresponding knowledge triple.
the distribution of the relations in the test set is unbalanced.
contrasting
train_5855
In this paper, we use knowledge graph embedding as a bridge between seen and unseen relations, which shares the same spirit with previous work.
less work has been done on relation detection.
contrasting
train_5856
To address this, recent studies propose to build entity graphs from input paragraphs and apply graph neural networks (GNNs) to aggregate the information through entity graphs (Dhingra et al., 2018;De Cao et al., 2018;Song et al., 2018a).
all of the existing work applies GNNs based on a static global entity graph of each QA pair, which can be considered as performing implicit reasoning.
contrasting
train_5857
WikiHop) usually aggregates document information to an entity graph, and answers are then directly selected on entities of the entity graph.
in a more realistic setting, the answers may even not reside in entities of the extracted entity graph.
contrasting
train_5858
Many IR techniques can be applied to answer single-hop questions (Rajpurkar et al., 2016).
these IR techniques are rarely adopted in multi-hop QA, since a single fact can only partially match a question.
contrasting
train_5859
Using Tok2Ent and dynamic graph attention, we realize a reasoning step at the entity level.
the unrestricted answer still cannot be backtraced.
contrasting
train_5860
Rule-based models are attractive for various tasks because they inherently lead to interpretable and explainable decisions and can easily incorporate prior knowledge.
such systems are difficult to apply to problems involving natural language, due to its linguistic variability.
contrasting
train_5861
Moreover, such models tend to require large amounts of training data to generalise correctly, and incorporating background knowledge is still an open problem (Rocktäschel et al., 2015;Weissenborn et al., 2017a;Rocktäschel and Riedel, 2017;Evans and Grefenstette, 2017).
rule-based models are easily interpretable, naturally produce explanations for their decisions, and can generalise from smaller quantities of data.
contrasting
train_5862
In contrast, rule-based models are easily interpretable, naturally produce explanations for their decisions, and can generalise from smaller quantities of data.
these methods are not robust to noise and can hardly be applied to domains where data is ambiguous, such as vision and language (Moldovan et al., 2003;Rocktäschel and Riedel, 2017;Evans and Grefenstette, 2017).
contrasting
train_5863
Datalog (Gallaire and Minker, 1978) programs where all function symbols have arity zero and are called entities and, similarly to related work (Sessa, 2002;Julián-Iranzo et al., 2009), we disregard negation and disjunction.
in principle NLPROLOG also supports functions with higher arity.
contrasting
train_5864
The worst-case complexity of vanilla logic programming is exponential in the depth of the proof (Russell and Norvig, 2010a).
in our case, this is a particular problem because weak unification requires the prover to attempt unification between all entity and predicate symbols.
contrasting
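A minimal sketch of weak unification as described in this pair: two symbols unify if their embeddings are similar enough, rather than only if they are identical. The embedding table and threshold are assumptions; the point is that every entity/predicate pair becomes a unification candidate, which drives the blow-up noted above.

```python
# Hedged sketch of weak unification via embedding similarity.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def weak_unify(sym_a, sym_b, emb, threshold=0.5):
    if sym_a == sym_b:
        return 1.0                       # exact symbols unify with score 1
    score = cosine(emb[sym_a], emb[sym_b])
    return score if score >= threshold else 0.0

# Because any pair of symbols may weakly unify, the prover must consider far
# more branches than in standard (exact-match) logic programming.
```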
train_5865
Compared to statistical machine learning-based methods (Kushman et al., 2014;Mitra and Baral, 2016;Roy and Roth, 2018;Zhou et al., 2015;Huang et al., 2016) and semantic parsing-based methods (Shi et al., 2015;Koncel-Kedziorski et al., 2015;Roy and Roth, 2015;Huang et al., 2017), these methods do not need hand-crafted features and achieve high performance on large datasets.
they fall short in capturing specific MWP features, which are evidently a vital component in solving MWPs.
contrasting
train_5866
The results in Table 2 are quite positive: workers almost always provide the correct answer a to our implication question q when given only the original (q, a) pair and no additional context, which indicates the implication is valid.
workers under-predict and are inaccurate for the control questions, which is expected since there is no necessary logical connection between (q, a) and the control question.
contrasting
train_5867
Such attention weights are not readily available for all NMT models.
our method can be used to fine-tune arbitrary NMT models to reduce word omission errors in only hundreds of steps.
contrasting
train_5868
Results A logistic regression model is fit using all the features, and the resulting coefficients, shown in Figure 1, provide support for all five hypotheses.
the effect sizes of each hypothesis's variables differed substantially, pointing to the complexity of code-switching behavior.
contrasting
train_5869
Notably, we find that topical modulation has the largest effect on switching to Naijá, with use of emotion surpassing the effect for a few topics.
as no one factor was sufficient for predicting code switching, our results point to the need for holistically modeling the social context when examining factors that influence code-switching behavior.
contrasting
train_5870
Compared to degree centrality, PageRank can in theory be better since the global graph structure is considered.
we only observed marginal differences in our experiments (see Sections 4 and 5 for details).
contrasting
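A quick comparison of the two centrality measures in this pair, using networkx on a toy graph (the graph choice is illustrative, not the authors' data).

```python
import networkx as nx

G = nx.karate_club_graph()
degree = nx.degree_centrality(G)   # local: normalized neighbour count
pagerank = nx.pagerank(G)          # global: random-walk stationary mass

print(sorted(degree, key=degree.get, reverse=True)[:5])
print(sorted(pagerank, key=pagerank.get, reverse=True)[:5])
# the two top-5 rankings usually differ only marginally, echoing the
# observation above
```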
train_5871
We adopted the training, validation and test splits (589,284/32,736/32,739) widely used for evaluating abstractive summarization systems.
as noted in Durrett et al.
contrasting
train_5872
Training takes 1 minute for a model with cross-entropy loss and 30 minutes for a model with joint loss on an NVidia V100 GPU.
reducing the step size in IG for calculating Riemann approximation of the integral to 10 steps reduces the training time to 6 minutes.
contrasting
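A sketch of the Riemann approximation of Integrated Gradients mentioned here, where reducing the number of steps m directly reduces the number of gradient evaluations; `grad_fn` and `baseline` are hypothetical stand-ins for the model gradient and reference input.

```python
import numpy as np

def integrated_gradients(x, baseline, grad_fn, m=10):
    # Midpoint Riemann sum over the straight path baseline -> x:
    # average the gradients at m points, then scale by the input delta.
    alphas = (np.arange(m) + 0.5) / m
    grads = [grad_fn(baseline + a * (x - baseline)) for a in alphas]
    return (x - baseline) * np.mean(grads, axis=0)
```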
train_5873
Hence, interpretability methods tailored for text are quite sparse (Mudrakarta et al., 2018;Jia and Liang, 2017;Murdoch et al., 2018).
there are many papers criticizing the aforementioned methods by questioning their faithfulness, correctness (Adebayo et al., 2018;Kindermans et al., 2017) and usefulness.
contrasting
train_5874
From a fairness angle, our technique shares similarities with adversarial training (Zhang et al., 2018a;Madras et al., 2018) in asking the model to optimize for an additional objective that transitively unbiases the classifier.
those approaches work to remove protected attributes from the representation layer, which is unstable.
contrasting
train_5875
Similarly, comparing our method XIV with methods VII-IX, they all use the same term-based similarities.
our method achieves significantly better performance by using graphical decomposition.
contrasting
train_5876
This is reasonable, as using each keyword directly as a concept vertex provides more anchor points for article comparison.
community detection can group highly coherent keywords together and reduces the average size of CIGs from 30 to 13 vertices.
contrasting
train_5877
Traditional methods represent a text document as vectors of bag of words (BOW), term frequency inverse document frequency (TF-IDF), LDA (Blei et al., 2003) and so forth, and calculate the distance between vectors.
they cannot capture the semantic distance and usually cannot achieve good performance.
contrasting
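A small sketch of the traditional pipeline this pair criticizes (TF-IDF vectors plus a vector similarity), assuming scikit-learn; two near-paraphrases with little content-word overlap show why such representations miss semantic closeness.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["the movie was fantastic", "the film was great"]
tfidf = TfidfVectorizer().fit_transform(docs)

# The score is driven by shared function words ("the", "was"), not by the
# near-synonymy of movie/film and fantastic/great.
print(cosine_similarity(tfidf[0], tfidf[1]))
```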
train_5878
Finally, pre-training models such as BERT (Devlin et al., 2018) can also be utilized for text matching.
the model is of high complexity and can hardly satisfy the speed requirements of real-world applications.
contrasting
train_5879
By extending this model with a few additional features, such as the Kullback-Leibler divergence and related measures, we were able to reach the performance of the best verifiers submitted so far.
reality caught up with us when we applied our verifier to other authorship verification problems with little success.
contrasting
train_5880
Compared with the distribution of market comments, few article titles use decimals.
writers of articles use more 4th-magnitude numerals than those in market comments.
contrasting
train_5881
Evaluation measures: Common LMTC evaluation measures are precision (P@K) and recall (R@K) at the top K predicted labels, averaged over test documents, micro-averaged F1 over all labels, and nDCG@K (Manning et al., 2009).
p@K and R@K unfairly penalize methods when the gold labels of a document are fewer or more than K, respectively.
contrasting
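A worked example of the P@K/R@K penalty stated above: with K = 5 and only two gold labels, even a perfect ranking cannot exceed P@5 = 0.4 (the metric definitions follow the usual ones; the labels are made up).

```python
def precision_at_k(ranked, gold, k):
    return sum(1 for l in ranked[:k] if l in gold) / k

def recall_at_k(ranked, gold, k):
    return sum(1 for l in ranked[:k] if l in gold) / len(gold)

ranked = ["a", "b", "c", "d", "e"]      # predicted labels, best first
gold = {"a", "b"}                       # document has only two gold labels
print(precision_at_k(ranked, gold, 5))  # 0.4 despite a perfect ranking
print(recall_at_k(ranked, gold, 5))     # 1.0
```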
train_5882
HAN also performs better than CNN-LWAN for all and frequent labels.
replacing the CNN encoder of CNN-LWAN with a BIGRU (BIGRU-LWAN) leads to the best results, indicating that the main weakness of CNN-LWAN is its vanilla CNN encoder.
contrasting
train_5883
Systems also vary in how they incorporate feedback, such as "must-link" and "cannot-link" constraints (Andrzejewski et al., 2009;Hu et al., 2014), informed priors (Smith et al., 2018), or document labels.
evaluations of these systems are either not comparative (Choo et al., 2013;Lee et al., 2017) or compare against noninteractive models (Hoque and Carenini, 2015;Hu et al., 2014) or for only a limited set of refinements (Xie et al., 2015).
contrasting
train_5884
For remove word and merge topics, all methods provide good control (scores close to 1.0).
the informed prior methods, info-vb and info-gibbs, provide more control, for both the random (C Rand ) and good (C Good ) users, compared to const-gibbs.
contrasting
train_5885
such as add word and create topic.
const-gibbs supports defining token and document-level constraints, which ensure almost perfect control for refinements that require restricting certain words or documents, such as remove word and remove document.
contrasting
train_5886
SST in (Buch et al., 2017) effectively generates proposals in a single pass.
previous methods do not consider context information to produce nonoverlapped procedures.
contrasting
train_5887
On one hand, only the full model can generate eggs in the procedure (1.1) and (1.2), which is also an important ingredient entity in the ground-truth captions.
the ingredient bacon in groundtruth caption (c) is ignored by all models.
contrasting
train_5888
Similar to our work, some of these previous datasets have considered everyday routine actions (Fabian Caba Heilbron and Niebles, 2015;Real et al., 2017;Kay et al., 2017).
because these datasets rely on videos uploaded on YouTube, it has been observed they can be potentially biased towards unusual situations (Kay et al., 2017).
contrasting
train_5889
This type of linguistic reasoning is critical for interactions grounded in visually complex environments, such as in robotic applications.
commonly used resources for language and vision (e.g., Antol et al., 2015;Chen et al., 2016) focus mostly on identification of object properties and few spatial relations (Section 4; Ferraro et al., 2015;Alikhani and Stone, 2019).
contrasting
train_5890
In a final experiment, we learn to model language that describes images, and we find that conditioning on visual context improves segmentation performance in our model (compared to the performance when the model does not have access to the image).
in a baseline model that predicts boundaries based on entropy spikes in a character-LSTM, making the image available to the model has no impact on the quality of the induced segments, demonstrating again the value of explicitly including a word lexicon in the language model.
contrasting
train_5891
The choice makes sense, as reinforcement learning is well suited for tasks that require a set of actions to reach a goal.
the performance of these models has been sub-optimal when compared to the average human performance on the same task.
contrasting
train_5892
Comparing row 1 versus rows 2 and 3 we see that adding the SPEECH/TEXT task leads to a substantial improvement.
comparing rows 2 and 3 versus rows 6 and 11, the addition of the TEXT/IMAGE task does not seem to have a major impact on performance, at least to the extent that can be gleaned from the experiments we carried out.
contrasting
train_5893
Recent advances in sequence modeling have highlighted the strengths of the transformer architecture, especially in achieving state-of-theart machine translation results.
depending on the upstream systems, e.g., speech recognition or word segmentation, the input to the translation system can vary greatly.
contrasting
train_5894
Although in principle it is not necessary to mask self-attention in the encoder, in practical implementations it is required to mask the padding positions.
self-attention in the decoder only allows positions up to the current one to be attended to, preventing information flow from the left and preserving the auto-regressive property.
contrasting
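An illustrative construction of the two mask types this pair contrasts, using PyTorch: a padding mask for encoder self-attention and a causal mask that blocks future positions in the decoder (the padding id and sizes are assumptions).

```python
import torch

tokens = torch.tensor([[5, 8, 2, 0, 0]])   # 0 assumed to be the padding id
padding_mask = tokens.eq(0)                # True at padded positions

seq_len = tokens.size(1)
causal_mask = torch.triu(
    torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1
)
# causal_mask[i, j] is True for j > i: position i may not attend to the
# future, preserving the auto-regressive property described above.
```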
train_5895
4): Finally, as in the vanilla Transformer decoder, the third sub-layer is a feed-forward network that processes the representation for the next layer n+1. The three sources of information (image, object labels, and web entity labels) are treated symmetrically in the above equations.
the only "true" source of information in this case is the image, whereas the other labels are automaticallyproduced annotations that can vary both in quality and other properties (e.g., redundancy).
contrasting
train_5896
Due to the limited number of clip arts, the vocabulary is smaller than it would be for real images.
humans still use compositional language to describe clip art configurations and attributes, and make references to previous discourse elements in their messages.
contrasting
train_5897
After each nextutterance token, our neural Drawer model is used to take an action in the scene and the resulting change in scene similarity metric is used as a reward.
this reward scheme alone has an issue: once all objects in the scene are described, any further messages will not result in a change in the scene and have a reward of zero.
contrasting
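A sketch of the reward scheme this pair describes, where each utterance is rewarded by the change in scene similarity after the Drawer acts; `scene_sim` is a hypothetical similarity metric against the target scene.

```python
def utterance_reward(scene_before, scene_after, target, scene_sim):
    # reward = improvement in similarity to the target scene
    return scene_sim(scene_after, target) - scene_sim(scene_before, target)

# Once the scene is fully described, further messages leave it unchanged and
# the reward is zero, which is exactly the issue raised above.
```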
train_5898
Image information is more helpful for PERS: we observe an increase of 10% in resolved blanks for del+obj as compared to del.
for PERS the textual context is still enough in the majority of the cases: models tend to associate men with sports or women with cooking and are usually right (see Figure 4 example (c)).
contrasting
train_5899
(2017) optimise image description generation to capture both semantic faithfulness and syntactic fluency by combining metrics with complementary properties.
the combination weights need to be engineered towards the task.
contrasting