Columns: id (string, 7–12 chars); sentence1 (string, 6–1.27k chars); sentence2 (string, 6–926 chars); label (string, 4 classes)
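The schema above can be sketched as a plain record. This is a minimal illustration only: the field names and the "4 classes" constraint come from the header, while the example sentence values are hypothetical stand-ins, not rows from the data.

```python
# Illustrative sketch of one row of this dataset, following the schema header.
# Only the field names and length bounds are taken from the header; the
# sentence values below are hypothetical placeholders.
row = {
    "id": "train_5700",  # string, 7-12 chars
    "sentence1": "First sentence of the contrastive pair.",
    "sentence2": "second sentence of the pair.",
    "label": "contrasting",  # one of 4 label classes
}

# Basic sanity checks on the record shape.
assert set(row) == {"id", "sentence1", "sentence2", "label"}
assert 7 <= len(row["id"]) <= 12
```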
train_5700
This provides further evidence that M-BERT uses a representation that is able to incorporate information from multiple languages.
m-BERT is not able to effectively transfer to a transliterated target, suggesting that it is the language model pre-training on a particular language that allows transfer to that language.
contrasting
train_5701
Understanding the expression of complaints in natural language and automatically identifying them is of utmost importance for: (a) linguists to obtain a better understanding of the context, intent and types of complaints on a large scale; (b) psychologists to identify human traits underpinning complaint behavior and expression; (c) organizations and advisers to improve the customer service by identifying and addressing client concerns and issues effectively in real time, especially on social media; (d) developing downstream natural language processing (NLP) applications, such as dialogue systems that aim to automatically identify complaints.
complaining has yet to be studied using computational approaches.
contrasting
train_5702
To the best of our knowledge, the only previous work that tackles a concept defined as a complaint with computational methods is by Zhou and Ganesan (2016) which studies Yelp reviews.
they define a complaint as a 'sentence with negative connotation with supplemental information'.
contrasting
train_5703
• Downgraders and Politeness Markers.
to intensifiers, downgrading modifiers are used to reduce the face-threat involved when voicing a complaint, usually as part of a strategy to obtain a reparation for the breach of expectation (Meinl, 2013).
contrasting
train_5704
Several unigrams (error, issue, working, fix) and a cluster (Issues) contain words referring to issues or errors.
words regularly describing negative sentiment or emotions are not one of the most distinctive features for complaints.
contrasting
train_5705
The topical features such as the LIWC dictionaries (which combine syntactic and semantic information) and Word2Vec topics perform in the same range as the part of speech tags.
best predictive performance is obtained using bag-of-word features, reaching an F1 of up to 77.5 and AUC of 0.866.
contrasting
train_5706
The former one aims to generate single-token answers from automatically constructed pseudo-questions while the latter requires choosing from multiple answer candidates.
such unnatural settings make them fail to serve as the standard QA benchmarks.
contrasting
train_5707
Since the tweet contexts are short, there are only a small number of named entities to choose from, which could make the answer pattern easy to learn.
the neural models fail to perform well on the "Why" questions, and the results of neural baselines are even worse than that of the matching baseline.
contrasting
train_5708
Some studies (Heilman and Smith, 2010; Figueroa and Neumann, 2013) directly trained question ranking (QR) models via supervised learning, and used them to perform evaluation.
these models are always domain-specific and not interpretable, since we cannot tell what makes a question get a high (low) score.
contrasting
train_5709
By using the "expected value of perfect information", they proposed a useful evaluation model.
our task significantly differs from it in two aspects: first, there is no correct answer for open-answered questions thus it is hard to tell which answer is "useful".
contrasting
train_5710
As mentioned above, if we control ∆t to make D rather small, the effect of time will be greatly reduced.
we may filter out too many data if making ∆t too close to 0.
contrasting
train_5711
As nouns are often topic words (occurring with adjectives and prepositions), it is better to cover fewer topics and ask one thing at a time.
it is better to use more verbs and adverbs to make the question vivid.
contrasting
train_5712
When features in Section 3.2 are not used, the CNN model gets the best performance, which is not surprising.
adding these features greatly improves the performance of all statistical models, making SVM and RF significantly surpass CNN.
contrasting
train_5713
Sequential classifiers like LSTMs are good at learning temporal structure and are biased to use prior inputs to predict outputs (Eck and Schmidhuber, 2002).
when it comes to comparison tasks like stance classification in threaded discussions, each reply is made against a post or a response to a source post (see Fig.
contrasting
train_5714
Using an operation that reduces the children hidden layer matrix h to fixed dimension vector like in equation 8 or in equation 10 attempts to solve the problem.
these reduction operators have limitations e.g.
contrasting
train_5715
Unfortunately, the few existing human judgment datasets have been created as by-products of the manual evaluations performed during the DUC/TAC shared tasks.
modern systems are typically better than the best systems submitted at the time of these shared tasks.
contrasting
train_5716
It is worth remembering that JS-2 and R-2 both operate on bigrams, which also explains their stronger connection.
in the high-scoring range (T), correlations are low and often negative.
contrasting
train_5717
The specificities of each metric are averaged-out putting the focus on the general trend.
when the metrics do not correlate, improvements according to one metric are rarely improvements for the other ones.
contrasting
train_5718
Intuitively, smaller populations and narrow scoring ranges can also lead to lower correlations.
(T) displays low correlations with 102 summaries per topic whereas (A) has strong correlations with 50 summaries per topic.
contrasting
train_5719
Our analysis is performed on TAC-2008 and TAC-2009 because they are benchmark datasets typically used for comparing evaluation metrics.
our approach can be applied to any dataset.
contrasting
train_5720
Across domains content overlap (R1) is ~50 points.
r2 is much lower indicating that there is abstraction, paraphrasing, and content selection in the summaries with respect to the input.
contrasting
train_5721
More irregular forms will tend to have a lower wug-test probability P(w | , σ, L − ) than most regular forms.
the absolute value of such a probability is not directly interpretable.
contrasting
train_5722
Many inflectional systems display syncretism, the morphological phenomenon whereby two slots with distinct morpho-syntactic tags may have an identical surface form.
to many models of inflectional morphology, we collapse syncretic forms of a word into a single paradigm slot, thereby assuming that every surface form w in a paradigm is distinct.
contrasting
train_5723
As discussed above, we hold out whole lexemes, including all of their inflected forms during training.
derivational morphology presents a potential challenge for this approach.
contrasting
train_5724
Although this sample represents a reasonable degree of typological diversity, the Indo-European family is overrepresented in the UniMorph dataset, as is the case for most current multilingual corpora.
within the Indo-European family, we consider a diverse set of subfamilies: Albanian, Armenian, Slavic, Germanic, Romance, Indo-Aryan, Baltic, and Celtic.
contrasting
train_5725
The rate gate r controls how slow and fast-moving memory states are mixed inside the model.
to the model originally trained in Ororbia II et al.
contrasting
train_5726
"A dog" vs. "The dog" For the PassAct2 dataset, we observe that determiner information is retained well by most layers.
the shallow layers retain information better than the deeper layers.
contrasting
train_5727
"The happy child" vs. "The child" For the Act3 dataset, we observe that middle layers of most models (BERT, Multitask) retain the adjective information well.
surprisingly, the simple multitask model (LSTM 1-forward-layer accuracy = 0.89) retains adjective information better than the BERT model (layer 7 accuracy = 0.84).
contrasting
train_5728
In logical negation, the negation operator has been succinctly described as a truth-functional operation, reversing the truth value of a sentence.
from a pragmatic point of view, the primary function of negation is to direct attention to an alternative meaning and can thus be, more generally, compared to our ability for counterfactual thinking (Hasson and Glucksberg, 2006).
contrasting
train_5729
For example, negation of action-related sentences or imperatives involves decreased activity in motor systems of the brain implicated in action semantics when compared to the affirmative context (Tettamanti et al., 2008; Tomasino et al., 2010).
overall reduced activation does not necessarily equate to a lack of information across patterns of activated or deactivated voxels in a brain region (Kriegeskorte et al., 2008).
contrasting
train_5730
Similarly, the Aff Verbs neural estimates showed significant correlations with the VERB model in the LPG (r = 0.04, p < 0.01), LIFG (r = 0.05, p < 0.01) and not the LMTP.
we did not find that Neg Verbs triggered any significant correlations with the VERB model in the ROIs tested.
contrasting
train_5731
This may provide a 'default' negation meaning (Papeo et al., 2016), as well as allow competing or cooperating semantic alternatives to emerge.
it is also possible that the results reflect a more 'categorical' representation of negation and that the current semantic models are merely not a suitable representation for the negated meaning.
contrasting
train_5732
The noniconic language without markers is the most difficult to learn, as expected.
in listening mode we encounter again a preference for backward iconicity.
contrasting
train_5733
In listening mode, this agent shows the expected preference for markers in the free-order case (as the free-order language without markers is massively ambiguous, with most utterances mapping to multiple trajectories).
among the fixed-order languages, both backward and noniconic prefer redundant coding.
contrasting
train_5734
For fixed-order languages, we do not observe any change in accuracy or behavior in the listener direction (the last-generation child is perfectly parsing the initial language).
we observe in speaker mode a (relatively small) decrease in accuracy across generations, which, importantly, affects the most natural language (forward iconic without markers) the least, and the most difficult language (non-iconic without markers) the most (results are in Supplementary).
contrasting
train_5735
It is used for many downstream language processing tasks, e.g., coreference resolution, question answering, summarization, entity linking, relation extraction and knowledge base population.
most NER tools are designed to capture flat mention structure over coarse entity type schemas, reflecting the available annotated datasets.
contrasting
train_5736
Besides, Muis and Lu (2017) developed a gap-based tagging schema to capture nested structures.
these schemas should be designed very carefully to prevent spurious structures and structural ambiguity.
contrasting
train_5737
Methods from the first category focus on encoding high-order network interactions in a scalable fashion, such as LINE (Tang et al., 2015), DeepWalk (Perozzi et al., 2014).
models based on topological embeddings alone often ignore rich heterogeneous information associated with the vertices.
contrasting
train_5738
Usually, the model would risk over-fitting if it is only optimized upon the target attributes, due to unavoidable noise in the dataset.
multi-task learning implicitly increases the training data of other relevant tasks having different noise patterns, and can average these noise patterns to obtain a more general representation and thus improve the generalization of the model.
contrasting
train_5739
The supervised approaches treat keyphrase extraction as a binary classification task, in which a learning model is trained on the features of labeled keyphrases to determine whether a candidate phrase is a keyphrase (Witten et al., 1999;Medelyan et al., 2009;.
the unsupervised approaches directly treat keyphrase extraction as a ranking problem, scoring each candidate using different kinds of techniques such as clustering (Liu et al., 2009), or graph-based ranking (Mihalcea and Tarau, 2004;Wan and Xiao, 2008).
contrasting
train_5740
(2019) proposed a title-guided Seq2Seq network to use title of source text to improve performance.
these methods did not consider the linguistic constraints of keyphrases.
contrasting
train_5741
Early detection and monitoring of adverse drug reactions (ADRs) can minimize the deleterious impact on patients and health-care systems (Hakkarainen et al., 2012; Sultana et al., 2013). For prevention, the drug safety organizations known as pharmacovigilance agencies conduct post-market surveillance to identify a drug's side effects post-release.
the majority of the existing ADE surveillance systems utilize passive spontaneous reporting system databases, such as the Federal Drug Administration's Adverse Event Reporting System (FAERS) (Li et al., 2014).
contrasting
train_5742
This may correspond to mapping previously undiscovered adverse effect with a given drug, or discovering an unforeseen impact to a change in the manufacturing process.
extracting this information from the unstructured text poses several challenges as follows: • Multiple Context: Context carries an essential role in determining the semantic labels of the medical concepts.
contrasting
train_5743
Further works (Gurulingappa et al., 2012b;Benton et al., 2011;Harpaz et al., 2012a) utilized the lexicon-based approach to extract the ADRs.
these approaches are only restricted to a number of target ADRs.
contrasting
train_5744
For example, in sentence 2 of Table 4, 'could not sleep' and 'move it behind my back' are also ADRs, in addition to 'pain in upper right arm'.
the first two ADRs are not labeled in the dataset.
contrasting
train_5745
According to empirical results illustrated in Table 2, our approach is able to give more accurate answers to multi-step problems, while the accuracy of single-step problems is lower.
three models have similar patterns in terms of performance for three types of signs.
contrasting
train_5746
This conflict may damage performance.
as shown in Figure 1, as more labels are removed, the advantage of Seq2Set over Seq2Seq continues to grow.
contrasting
train_5747
An updated, larger b kt will lead to a larger agreement value c kt between the t-th word and the k-th slot in the next iteration.
it assigns low c kt when there is inconsistency between p k|t and v k .
contrasting
train_5748
(2018) utilize a slot-gated mechanism as a special gate function in Long Short-term Memory Network (LSTM) to improve slot filling by the learned intent context vector.
as the sequence becomes longer, it is risky to simply rely on the gate function to sequentially summarize and compress all slots and context information in a single vector (Cheng et al., 2016).
contrasting
train_5749
An advantage of these approaches is that the aspect or opinion terms whose usage in a sentence follows some certain patterns can always be extracted.
it is labor-intensive to design rules manually.
contrasting
train_5750
Topic modeling approaches (Lin and He, 2009;Brody and Elhadad, 2010;Mukherjee and Liu, 2012) are able to get coarse-grained aspects such as food, ambiance, service for restaurants, and provide related words.
they cannot extract the exact aspect terms from review sentences.
contrasting
train_5751
Dataset  #ATER  #OTER  #EAT   #EOT
SE14-R   431    618    1,453  1,205
SE14-L   157    264    670    665
SE15-R   133    193    818    578
Compared with the other approaches, RI-NANTE only fails to deliver the best performance on the aspect term extraction part of SE14-L and SE15-R. On SE14-L, DE-CNN performs better.
our approach extracts both aspect terms and opinion terms, while DE-CNN and HAST only focus on aspect terms.
contrasting
train_5752
On the other hand, rewriting the adjunct tokens can smooth the generated data and expand their diversity.
since there is no explicit guide, this step can also introduce unpredictable noise, making the generation not fluent as expected.
contrasting
train_5753
Data-driven end-to-end models trained on that dataset could implicitly bias towards predicting PERSON for most occurrences of Clinton, even under some contexts when it refers to a location.
for frequently studied languages such as English, people have already collected dictionaries or lexicons consisting of long lists of entity names, known as gazetteers.
contrasting
train_5754
In the online setting, at each iteration k, only the k + 1th column of table T needs to be evaluated.
to Lemma 1 in the offline setting, where only the elements in the k + 1-th column below index k need to be computed, all elements of the k + 1-th column need to be evaluated in the online setting.
contrasting
train_5755
The term α^{k−1}_{k,k+1} is whatever character is on the transition from state k to k+1.
α^{k−1}_{k+1,k} is the set of paths that take state k+1 to state k without passing through states higher than k. Lemma 5.
contrasting
train_5756
The previous methods that introduce additional operations cannot adopt such parser directly.
although post-processing approach can use any parser in pre-processing, our approach outperforms the post-processing approach, even if the pre-processing parser is assumed to always generate gold PTB trees.
contrasting
train_5757
(2013) first compute the n best constituency trees using a probabilistic context-free grammar, convert those into dependency trees using a dependency model, compute a probability score for each of them, and finally rerank the most plausible trees based on both scores.
these methods are complex and intended for statistical parsers.
contrasting
train_5758
The attention mechanism was first applied in Computer Vision (CV), where pixels are the basic units.
in NLP, the minimum unit is not the word but the sense.
contrasting
train_5759
Normally, various aspects of semantics are entangled in word embeddings (Bengio et al., 2003;Mikolov et al., 2013).
only some of the aspects are needed in specific tasks and other redundant aspects can be regarded as noise.
contrasting
train_5760
(2016) proposed a novel type of hard attention and applied it to improve the interpretability of models.
the accuracy is not improved.
contrasting
train_5761
Note that encoder architectures similar to the ones used by USE (https://www.tensorflow.org/hub) can also be used in the Reddit pretraining phase in lieu of the architecture shown in Figure 2.
the main goal is to establish the importance of target response selection fine-tuning by comparing it to direct application of state-of-the-art pretrained encoders, used to encode both input and responses in the target domain.
contrasting
train_5762
The comparison of two fine-tuning strategies suggests that the simpler FT-DIRECT fine-tuning has an edge over FT-MIXED, and it seems that the gap between FT-DIRECT and FT-MIXED is larger on bigger datasets.
as expected, FT-DIRECT adapts to the target task more aggressively: this leads to its degraded performance on the general-domain Reddit response selection task, see the scores in parentheses in Table 3.
contrasting
train_5763
Humans provided instructions only about half of the time, and devoted more energy to providing higher-level descriptions of the target, responding to the Builder's actions and queries, and rectifying mistakes.
even the improved model failed to capture this, mainly generating instructions even if it was inappropriate or unhelpful to do so.
contrasting
train_5764
With such a memory mechanism, SLU model can retrieve context knowledge to reduce the ambiguity of the current utterance, contributing to a stronger SLU model.
the memory consolidation, a well-recognized operation for maintaining and updating memory in cognitive psychology (Sternberg and Sternberg, 2016), is underestimated in previous models.
contrasting
train_5765
This dataset was collected with the Wizard-of-Oz scheme (Wen et al., 2017) and consists of 3,031 multi-turn dialogues in three distinct domains, and each domain has only one intent, including calendar scheduling, weather information retrieval, and point-of-interest navigation.
all dialogue sessions of KVRET are single domain.
contrasting
train_5766
We design the turn-level attention to score the dialogue turns explicitly, so the more salient turns will obtain higher scores, which is similar to (Hsu et al., 2018).
instead of calculating the sentence-level attention using a separate recurrent component, we directly obtain the turn representations H_turn by collecting hidden states from H with the turn-level segment position indices, where m is the turn number of the dialogue content.
contrasting
train_5767
Turn Segmentation: In a smooth conversation, one turn is an adjacency pair of two utterances from two speakers (Sacks et al., 1974).
in real scenarios, the conversation flow is often disrupted by verbal distractions such as interlocutor interruption, back-channeling, self-pause and repetition (Schlangen, 2006).
contrasting
train_5768
Natural language understanding is to extract the core semantic meaning from the given utterances, while natural language generation is opposite, of which the goal is to construct corresponding sentences based on the given semantics.
such dual relationship has not been investigated in literature.
contrasting
train_5769
Natural language has the intrinsic sequential structure and temporal dependency, so modeling the joint distribution of words in a sequence by such autoregressive property is logically reasonable.
slot-value pairs in semantic frames do not have a single directional relationship between them; rather, they describe the same sentence in parallel, so treating a semantic frame as a sequence of slot-value pairs is not suitable.
contrasting
train_5770
With the rise of deep learning approaches, several neural belief trackers (NBT) have been proposed and improved the performance by learning semantic neural representations of words Mrkšić and Vulić, 2018).
the scalability still remains as a challenge; the previously proposed methods either individually model each domain and/or slot (Zhong et al., 2018;Ren et al., 2018;Goel et al., 2018) or have difficulty in adding new slot-values that are not defined in the ontology Nouri and Hosseini-Asl, 2018).
contrasting
train_5771
The generative method tries to generate positive and negative examples from known classes by using adversarial learning to augment training data.
the method does not work well in the discrete data space like text, and a recent study (Nalisnick et al., 2019) suggests that this approach may not work well on real-world data.
contrasting
train_5772
In practice, a conversation takes place under a background; meanwhile, the query and response are usually the most related, consistent in topic but different in content.
little work focuses on such hierarchical relationship among utterances.
contrasting
train_5773
their values are known during training.
we assume that the probability of the output y is conditioned on a latent tree T ∈ T (x), a variable that is not observed during training: it must be inferred from the data.
contrasting
train_5774
Our latent dependency model slightly improves (+0.8) over the CoreNLP baseline.
we observe that while our baseline is better than the one of Niculae et al.
contrasting
train_5775
(2019) proposed an aggressive inference network training.
their training algorithms are computationally expensive since they require backpropagating through the decoder or the encoder multiple times.
contrasting
train_5776
A successful partial-input baseline indicates that a dataset contains artifacts which make it easier than expected.
examples where this baseline fails are "hard" (Gururangan et al., 2018), and the failure of partial-input baselines is considered a verdict of a dataset's difficulty (Zellers et al., 2018;Kaushik and Lipton, 2018).
contrasting
train_5777
Each combination uniquely identifies a label, e.g., A in the premise and B in the hypothesis equals Entailment.
a single code word cannot identify the label.
contrasting
train_5778
Figure 2 shows the BLEU scores on the IWSLT De→En dataset for each method, from which we can see that our method observes a consistent BLEU improvement within a large probability range and obtains the strongest performance when γ = 0.15.
other methods easily lead to a performance drop below the baseline if γ > 0.15, and the improvement is also limited for other settings of γ.
contrasting
train_5779
It turns out that in all but one case, the more words, the better the performance.
we did not get statistically significant results with class descriptors with dimensions higher than 100.
contrasting
train_5780
As shown, the gaps of success rates between the models are not very large, because all models can give pretty high success rate.
as expected, our proposed MHA provides lower perplexity (PPL) 1 , which means the examples generated by MHA are more likely to appear in the corpus of the evaluation language model.
contrasting
train_5781
Existing approaches for explainable machine learning systems tend to focus on interpreting the outputs or the connections between inputs and outputs.
the fine-grained information (e.g.
contrasting
train_5782
If our model categorizes A as "GOOD" and it tells that the quality of A is "HIGH", the practicality is "HIGH" and the price is "LOW", we can regard these values of attributes as good explanations that illustrate why the model judges A to be "GOOD".
if our model produces the same values for the attributes, but it tells that A is a "BAD" product, we then think the model gives bad explanations.
contrasting
train_5783
For example, given a review sentence, "The product is good to use", we may not be sure if the product should be rated as 5 stars or 4 stars.
if we see that the attributes of the given product are all rated as 5 stars, we may be more convinced that the overall rating for the product should be 5 stars.
contrasting
train_5784
For all the interest in adversarial computer vision, these attacks are rarely encountered outside of academic research.
adversarial misspellings constitute a longstanding real-world problem. (All code for our defenses, attacks, and baselines is available at https://github.com/danishpruthi/Adversarial-Misspellings.) [Table: swap ("beuatiful") and drop ("dwnbeat") misspellings of "A triumph, relentless and beautiful in its downbeat darkness", with the defense restoring the original in both cases.]
contrasting
train_5785
Intuitively, one might suppose that word-piece and character-level models would be more robust to such attacks given they can make use of the remaining context.
we find that they are more susceptible.
contrasting
train_5786
With evaluations on 14 treebanks, we empirically show that global output-structured models can generally obtain better performance, especially on the metric of sentence-level Complete Match.
probably because neural models already learn good global views of the inputs, the improvement brought by structured output modeling is modest.
contrasting
train_5787
Overall, the global models 4 perform better consistently, especially on the metrics of Complete Match, showing the effectiveness of being aware of global structures.
the performance gaps between global models and local models are small.
contrasting
train_5788
Several meta-analyses and empirical studies have shown the high efficacy and success of MI in psychotherapy (Burke et al., 2004;Martins and McNeil, 2009;Lundahl et al., 2010).
MI skills take practice to master and require ongoing coaching and feedback to sustain (Schwalbe et al., 2014).
contrasting
train_5789
The biggest improvements are for categorizing therapist codes (Table 9), especially for the RES and REC.
increasing the window size beyond 8 does not help to categorize client codes (Table 8) or forecasting (in appendix).
contrasting
train_5790
However, the knowledge is grounded in images.
we focus on knowledge grounded in videos, which is more complex, considering the large feature space spanning across multiple video frames and modalities that need to be understood.
contrasting
train_5791
Given the keyword Basketball of the current turn and its closeness score (0.47) to the target Dance, the only valid candidate keywords for the next turn are those with higher target closeness, such as Party with a closeness score of 0.62.
transitioning from Basketball to Sport is not allowed in the context as it does not move towards the target.
contrasting
train_5792
Traditional persuasive dialogue systems have been applied in different fields, such as law (Gordon, 1993), car sales (André et al., 2000), intelligent tutoring (Yuan et al., 2008).
most of them overlooked the power of personalized design and didn't leverage deep learning techniques.
contrasting
train_5793
Distinct-1 and distinct-2 are widely used in the literature (Li et al., 2016a;Shen et al., 2018a;Xu et al., 2018b), measuring the ratio of unique unigrams/bigrams to the total number of unigrams/bigrams in a set of responses.
they are very sensitive to the test data size, since increasing the number of examples in itself lowers their value.
contrasting
train_5794
Most metrics show a similar trend of increasing until 100k steps, and then stagnating (see Appendix A.3 for more figures).
validation loss for the same training reaches its minimum after about 10-20k steps ( Figure 5).
contrasting
train_5795
entropy, divergence, distinct), randomly selected responses achieve similar values as the ground truth, which is expected.
on embedding metrics, coherence, and BLEU, random responses are significantly worse than those of any model evaluated.
contrasting
train_5796
These word embeddings were learned from unsupervised Neural Language Modelling (NLM) trained on fixed-length contexts.
by recasting the same word types across different sense-inducing contexts, these representations became insensitive to the different senses of polysemous words.
contrasting
train_5797
As we explained in the introduction, word embeddings are limited by meaning conflation around word types, and reduce NLM to fixed representations that are insensitive to contexts.
with fastText (Bojanowski et al., 2017) we're not restricted to a finite set of representations and can compositionally derive representations for word types unseen during training.
contrasting
train_5798
In a different line of work, Vilnis and McCallum (2015) proposed representing words as Gaussian distributions to embed uncertainty in dimensions of the embedding to better capture concepts like entailment.
athiwaratkun and Wilson (2017) argued that such a single prototype model can't capture multiple distinct meanings and proposed Word2GM to learn multiple Gaussian embeddings per word.
contrasting
train_5799
The SCAS model simply uses the sum of all the sememes' embeddings of a constituent as the external information.
a constituent's meaning may vary with the other constituent, and accordingly, the sememes of a constituent should have different weights when the constituent is combined with different constituents (we show an example in later case study).
contrasting