id          stringlengths   (7 to 12)
sentence1   stringlengths   (6 to 1.27k)
sentence2   stringlengths   (6 to 926)
label       stringclasses   (4 values)
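The schema above describes four string fields per record (an id, a sentence pair, and one of four label classes). As a quick illustration of how such records could be loaded and inspected, here is a minimal sketch assuming the split has been exported to a JSONL file and that the Hugging Face `datasets` library is available; the file name "train.jsonl" and the loading path are assumptions for illustration, not the dataset's documented distribution format.

```python
# Minimal sketch: load a JSONL export of this split and inspect its fields.
# Assumes the `datasets` library is installed and that "train.jsonl" is a
# placeholder path for a local export of the records shown below.
from collections import Counter

from datasets import load_dataset

ds = load_dataset("json", data_files={"train": "train.jsonl"})["train"]

# Each record carries the four fields listed in the schema.
example = ds[0]
print(example["id"])         # e.g. "train_93800"
print(example["sentence1"])  # premise-like sentence
print(example["sentence2"])  # hypothesis-like sentence
print(example["label"])      # one of 4 classes, e.g. "neutral"

# Distribution over the 4 label classes declared by the schema.
print(Counter(ds["label"]).most_common())
```

All records shown in this excerpt carry the "neutral" label; the class distribution check above is a simple way to confirm how the remaining classes are represented in the full split.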
train_93800
We formulate label assignment problem as a resource allocation problem, where we maximize total value of assigned labels.
exploiting this directional and conditional dependency between the labels should allow us to predict a more coherent set of labels.
neutral
train_93801
Therefore MLL algorithm should take advantage of this correlation embedded in the training data to always infer the label tree when mountain is one of the label.
• Yahoo News MLL Dataset (English) 1 : This is one of the few publicly available large scale datasets for MLL.
neutral
train_93802
Our techniques involve solving simple constraint optimization problems over the outputs produced by any MLL approach and the result is a refined version of the input prediction by removing spurious labels and reordering the labels by utilizing the additional label-dependency side information.
we are left with coherent labels.
neutral
train_93803
We plan to investigate these matters in more detail in the future.
5 The first one 4 Experiments were done in a machine with an Intel Xeon E5-2687W 3.10GHz as CPU and a GTX TITAN X as GPU.
neutral
train_93804
For instance, a bag-of-words (BOW) model uses one-hot encoding as vectors and it is still a strong baseline for many tasks.
averaging vectors discards any word order information from the original text, which can be fundamental for more involved NLP problems.
neutral
train_93805
Unlike standard evaluation for IR, all the metrics are positively correlated in general.
the false negatives (FN) generated by the segmenter can be obtained by subtracting tP from PP, total number of words return by the segmenter.
neutral
train_93806
Most stateof-the-art systems require tens of thousands of annotated sentences in order to obtain high performance.
we experiment on 15 languages from the crosslingual named entity dataset described in Pan et al.
neutral
train_93807
Another large scale dataset, containing user generated image descriptions for 1M images, created by quering Flickr.
we follow Black's (1979) observation that a metaphor is essentially an interaction between two terms, creating an implication-complex to resolve two incompatible meanings.
neutral
train_93808
AAS and MAS deal with many-to-many and oneto-many word alignments, respectively.
given H identified using the Hungarian algorithm (Kuhn, 1955), HAS is computed by averaging the similarities between embeddings of the aligned pairs of words.
neutral
train_93809
As expected, we get much higher gains for Urdu compared to Telugu.
to demonstrate this, we obtain translations for embeddings of 3 languages 2 and show the results in table 1.
neutral
train_93810
We do 10 iterations of retrofitting process for all our experiments because 10 iterations are enough for convergence (Faruqui et al., 2015) and also using the same value for all experiments avoids overfitting.
w i being the weights, α, β are set as : Further reduction in weights of noisy words can be done by taking powers of cosine similarity.
neutral
train_93811
Roth and Lapata (2016) reported that F 1 decreased by 10 points or more when path embedding was excluded.
these values are the means of five different models trained with the same training data and hyperparameters.
neutral
train_93812
Annotations with more than 1 token were split into a sequence of tokens (e.g., BAB/BUP to BAB, BUP).
the audio was manually transcribed and aligned with videos; the gestures were manually annotated and segmented according to video and audio recordings.
neutral
train_93813
As with the results on gesture semantics, this suggests that multimodal meaning and meaning of iconic gesture relies heavily on speech, in accor- Figure 5: Featuring ranking according to coefficient values (weights assigned to the features, see (Lücking et al., 2010) for the details of the annotation scheme).
it remains unclear how to computationally derive the semantics of iconic gestures and build corresponding multimodal semantics together with the accompanying verbal content.
neutral
train_93814
a simple way to have a flexible non-linear model over the data.
we follow closely the definition of GPs in Rasmussen and williams (2006).
neutral
train_93815
Many efforts have been made to construct multi-dimensional affective lexicons, such as ANEW for English (Bradley and Lang, 1999;Warriner et al., 2013), CVAW for Chinese (Yu et al., 2016), and other languages (Montefinese et al., 2014;Imbir, 2015).
another method is to use sentiment related features, such as total count of sentiment tokens, total sentiment score, maximal sentiment score, etc.
neutral
train_93816
The result is shown in Table 2.
between the VAD representation and word embedding, it is not clear which one is more effective for sentiment analysis.
neutral
train_93817
Recognizing the commonalities between ACC and ATE task can boost the performance of both of them.
the system with window approach cannot be jointly trained with that using sentence window approach.
neutral
train_93818
Although the majority approach does not model inter-relation and ambiguity of the signals, we assume that more signals, and thus longer prefixes, give better or the same prediction 6 .
this work investigates whether DRs can be identified incrementally based on human performance.
neutral
train_93819
All recorded dialogues with the total length of 21 hours have been manually transcribed and annotated with speech acts and semantic labels at each turn level.
semantic Label Given a sequence of annotated intent tags and associated attributes for each history utterance, we employ a BLsTM to model the explicit semantics: where intent t is the vector after one-hot encoding for representing the annotated intent and the attribute features.
neutral
train_93820
Neural conversation systems, typically using sequence-to-sequence (seq2seq) models, are showing promising progress recently.
λ in MMR scores was empirically set to 0.5; the beam size was 30.
neutral
train_93821
These methods add a diversity punishment term to the scoring function in beam search; it is hard to balance this term with other components in the function.
the fine-grained selection improves the diversity and retains the quality of subsequences at each time of decoding, so the generated replies are of good quality and diversity.
neutral
train_93822
It runs many services simultaneously and the dynamic combination of the 2 GHz Intel Core i7 processor and 16 GB RAM enables the acute ability to focus on concurrent tasks with minimal performance degradation.
we compare these methods in experiments: Basic, +Templated, +Structured and Full are ranked based on basic features, basic+templated features, basic+structured features and ba-sic+templated+structured features respectively; wordCount and AttriCount are rankers which sort candidates in the descending order of word count and attribute count respectively; OracleBLEU is an oracle ranker which always chooses the top candidate in term of BLEU as the answer (can be seen as the upper bound of ranking).
neutral
train_93823
The proposed structured information and templated information are helpful for deciding what to say and how to say for description generation.
we extract attributes which mentioned in a reference description, and compare them with those in its corresponding generated descriptions.
neutral
train_93824
Another solution is to use optimization techniques such as integer linear programming (ILP) to infer the scores of sentences with consideration of the quality of a whole summary (McDonald, 2007).
second, we propose a novel deep learning network for estimating Q-values used in RL.
neutral
train_93825
This research was supported by Basic Science Research Program through the National Research Foundation of Korea(NRF) funded by the Ministry of Science, ICT & Future Planning(NRF-2015R1C1A2A01051685)
the differences of ROUGE scores between the models are relatively small.
neutral
train_93826
They have also been developed for the update summarization task, with work such as (Li et al., 2015) about the weighting of concepts.
we adopted this method to represent sentences and used only the embeddings of unigrams since bigrams and phrases are generally not well covered by the existing pre-trained embeddings 1 .
neutral
train_93827
We only considered sets A and B for all the datasets.
in practice, most proposed similarity measures for AS are subject to a time efficiency problem which tends to increase with the quality of the similarity measure.
neutral
train_93828
In ML estimation,α m estimated from document D is determined asα Thus, ML estimated generative probability of w in the hyperspherical QLMs is formulated as where v is a normalized word embedding of v. In MAP estimation,α m estimated from document D is determined aŝ where C is document collection in which all retrieved documents are included.
wCwV used cosine distance between two document vectors composed by adding word embeddings with IDF weights (Brokos et al., 2016).
neutral
train_93829
These setups are equivalent to the evaluation in Salakhutdinov and Hinton (2009); Larochelle and Lauly (2012).
the word similarity is calculated as where w is the word embedding normalized to a unit length for w. This paper proposes a novel probabilistic IR method based on hyperspherical QLM that leverages pre-trained word embeddings as random variables for directional probabilistic models.
neutral
train_93830
Candidate Pruning: Calculation of score for all triplets from the dataset is an memory and time wise prohibitive operation.
question Encoding question is represented as sequence of words (x 1 , x 1 , ..., x |q| ).
neutral
train_93831
We use SimpleQuestions 3 dataset introduced by (Bordes et al., 2015) for training and testing purposes.
the real challenge for leveraging such knowledge in practical applications consists of mapping natural language questions to their corresponding entries in thus enhanced KBs.
neutral
train_93832
Visualization reveals that it is much easier for neural net models to identify and concentrate on the relevant regions of contexts using the dependency chain representation, which further verifies that additional event words in a sentential context are crucial to predict target event temporal status.
all the models were trained using RMSProp optimizer with the initial learning rate 0.001 and the same random seed.
neutral
train_93833
Our proposed model uses LSTM model (Gers, 2001) as the basic classifier.
lSTM has no obvious model advantage in this set of training data.
neutral
train_93834
Other text normalization task In this study, we evaluated our methods with Japanese dialect data.
when we sample dialect pattern m s from MatchedList, we use two types of p(m s |m t ).
neutral
train_93835
Then, the (normalization) probability of t given dialect sentence s can be written as where θ represents a set of all model parameters in the encoder-decoder model, which are determined by the parameter-estimation process of a standard softmax cross-entropy loss minimization using training data.
we set p(m s |m t ) = F ms,mt /F mt , where F ms,mt is the joint frequency of (m s , m t ) and F mt is the frequency of m t in the extracted morphological conversion patterns of training data.
neutral
train_93836
General purpose language identification tools such as TextCat (Cavnar and Trenkle, 1994) and langid.py (Lui and Baldwin, 2012) can identify 50-100 languages with accuracy of 86-99%.
by introducing additional texts of linked users, the accuracy of the proposed model improved by 1.67-5.56, except for Portuguese.
neutral
train_93837
As expected, the performances were fundamentally inferior to the shared layers architecture.
for language variety identification, sparse traditional models have shown stronger performance than deep neural models (Medvedeva et al., 2017).
neutral
train_93838
An explanation for such behaviour may be found on the fact that long sentences are also difficult translations.
with the attention mechanism (Bahdanau et al., 2014), a decoder is used to decide which part of the source sentence to pay attention to and predict corresponding word representation at time t together with history predictions before time t. Then, a softmax is used to restore the word representation to natural target words.
neutral
train_93839
That is, all the instances are regarded as equal and used to train the NMT model equally.
note that the procedure restarts using the entire training set every 3 epochs.
neutral
train_93840
We test several normalisation strategies: • by batch size (number of training instances) • by (target) sentence length • without normalisation (longer sentences were always assigned a higher perplexity) Experimental results show no significant performance differences for any of the strategies.
multiple GPUs or T-PUs 1 are needed which requires additional replication and combination costs.
neutral
train_93841
Training efficiency has been a main concern by many researchers in the field of NMT.
we emulate a human spending additional energy on learning complex concepts.
neutral
train_93842
SrcVoice represents the voice of the source side.
to be Active the morphology and the crystallinity of the dots depended on the temperature .
neutral
train_93843
In bidirectional attention models, interface vectors are transformed vectors of h t with alignment model and target word, but we can still use h f t h b t .
if the optimal distance is sufficiently small, then this method will guide the training in early updates and preserve the true optimal with respect to cross entropy with restriction of generating negative sentences.
neutral
train_93844
We rely on the article's MeSH terms only to select RCTs.
this abstract was taken from (Krogh et al., 2016) and several sentences have been removed for the sake of conciseness.
neutral
train_93845
This also applies for CVs that have separate section headers for identification, such as Name, Email etc.
multiple industrial solutions, such as Text Kernel 1 , Burning Glass 2 and Sovren 3 , have attempted to solve the problem at hand, and are offered as commercial products.
neutral
train_93846
For individuals, it is possible to add value by designing CV improvement and organization tools, enabling them to create more effective CVs specific to their career objectives as well as maintain the CVs easily over time.
for sections that are intuitively more complex, show some (meaningful) confusions across classes.
neutral
train_93847
We see these results as supporting the assumption that natural user interfaces should respond to multimodal input, where possible, rather than just language alone.
there are 5371 sketch/photograph ensembles in our image retrieval evaluations.
neutral
train_93848
The decoder predicts a word conditioned on the correct word sequence (y t−1 1 ) during training, whereas it does with the predicted word sequence (ŷ t−1 1 ) at test time.
the key difference from MLE is that the reward is not restricted to tokenlevel accuracy.
neutral
train_93849
We propose a novel method to analyze Japanese review documents with exploiting the visual information of ideograms and logograms.
we chose 2,709 characters from 3,630 characters.
neutral
train_93850
Despite this, we demonstrate improvements on NER F-scores with our multi-task model.
we are interested in leveraging recent advances in learning BWE from comparable corpora (Hermann and Figure 1: Our multi-task framework, which trains bilingual word embeddings from comparable corpora while optimizing an NER objective on the high-resource language.
neutral
train_93851
To resemble the existing CW resources (Shardlow, 2013;Paetzold and Specia, 2016a;Kauchak, 2013), we collected 500 sentences from Wikipedia.
the drop in the F-scores of the NEWS and WIKIPEDIA systems when moving from native to non-native datasets, could probably be attributed to a slightly lower inter-annotator agreement among nonnative than native annotators.
neutral
train_93852
Most LS systems focus on simplifying news articles (Aluísio et al., 2008;Carroll et al., 1999;Saggion et al., 2015;Glavaš andŠtajner, 2015).
cross-group-genre Results (Table 6): Similar to the the cross-group experiments, the best results are achieved when tested on the datasets annotated by native speakers, indicating once again that the F-score is highly influenced by the inter-annotator agreement on the test set.
neutral
train_93853
Since a wide variety of users, ranging from young to old, and from female to male, participates in an SNS, the learned responses are not guaranteed to be stylistically consistent (e.g.
this is the first work to focus on creating a stylistically consistent end-to-end DRG system and evaluating stylistic consistency in neural dialog response generation studies.
neutral
train_93854
automatically collecting polite utterances from a large Twitter corpus with a filter, or generating stylistically consistent responses with a smaller or even no specific style corpus.
this may not be a good strategy, because the top N d most frequent words in a dialog corpus do not necessarily include words that are potentially useful for generating stylistically consistent responses.
neutral
train_93855
Our model can also accurately explain many known acronyms.
instance: "that sir right there, is being quite adoucheous" Target: adoucheous Reference: a person acting in a conformative manner that causes social upset and violence.
neutral
train_93856
The importance property of the summary considers how much relevant information present in a summary.
algorithm 1 a Greedy algorithm for maximizing the objective function Require: a minimization LP in standard form.
neutral
train_93857
Recently, summarization has also been considered as a submodular function maximization (Lin andBilmes, 2010, 2011;Dasgupta et al., 2013) where greedy algorithms were adopted to achieve near optimal summaries.
to model this property, we introduce a new monotone nondecreasing submodular function based on the atomic concept.
neutral
train_93858
Since the scoring function f (s) of our proposed summarizer is non-decreasing monotone submodular, we thus use the following greedy algorithm to obtain the near optimal solution.
abstractive summarization is a way of natural language generation and using this approach, it is possible to produce human-like summaries (Rush et al., 2015;Chopra et al., 2016;Wang and Ling, 2016).
neutral
train_93859
We emphasize that question swapping is done only for the groups of questions used for training; the development and test triples are kept the same.
following the notation of Bahdanau et al.
neutral
train_93860
The SemEval dataset (Nakov et al., 2016) was created from questions posted on the Qatar Living forum and has a different distribution and structure.
the results now show table 5: Results w/ and w/o training on weighted external triples using LM-based selection.
neutral
train_93861
Further heat maps demonstrating this can be found in the appendix.
we hypothesize that leftto-right works consistently well due to the left-toright nature of the English language.
neutral
train_93862
This effect is greatly exacerbated when performed independently on each instance.
attention mechanisms do provide a look into the inner workings of a model, as they produce an easily-understandable weighting of hidden states.
neutral
train_93863
In the NER task, agreement is measured across the entire sequence.
we learn all word and character level features from scratch, initializing with random embeddings.
neutral
train_93864
sampled data to evaluate the comparative effectiveness of different AL strategies.
we simulate pool-based AL using labeled benchmark datasets by withholding document labels from the models.
neutral
train_93865
In our evaluation, we compare the relative performance (accuracy or F1, as appropriate for the task) of the successor model trained with corpus D A to the scores achieved by training on comparable amounts of native and i.i.d.
for each test instance we calculate its cosine similarity to all other test instances, inducing a ranking.
neutral
train_93866
Future work will involve incorporating a diverse set of domain specific KBs for specialized NLP applications.
large pretrained models such as ElMo (Peters et al., 2018), GPT (Radford et al., 2018), and BERT (Devlin et al., 2019) have significantly improved the state of the art for a wide range of NlP tasks.
neutral
train_93867
We experiment with integrating both WordNet (Miller, 1995) and Wikipedia, thus explicitly adding word sense knowledge and facts about named entities (including those unseen at training time).
this results in general purpose knowledge enhanced representations that can be applied to a wide range of downstream tasks.
neutral
train_93868
The best-performing PC static embeddings belong to the first layer of BERT, although those from the other layers of BERT and ELMo also outperform GloVe and FastText on most benchmarks.
in all layers of all three models, the contextualized word representations of all words are not isotropic: they are not uniformly distributed with respect to direction.
neutral
train_93869
Replacing static word embeddings with contextualized word representations has yielded significant improvements on many NLP tasks.
for all three contextualizing models, PC static embeddings created from lower layers are more effective those created from upper layers.
neutral
train_93870
ELMo creates contextualized representations of each token by concatenating the internal states of a 2-layer biLSTM trained on a bidirectional language modelling task (Peters et al., 2018).
recall from Definition 3 that the maximum explainable variance (MEV) of a word, for a given layer of a given model, is the proportion of variance in its contextualized representations that can be explained by their first principal component.
neutral
train_93871
More recent work, namely deep neural language models such as ELMo (Peters et al., 2018) and BERT (Devlin et al., 2018), * Work partly done at the University of Toronto.
static representations are much easier to deploy.
neutral
train_93872
Part of the appeal of cosine similarity perhaps lies in the simple geometric interpretation behind it.
while the abundance of various pooling operations may be intimidating, the resulting vectors are always subject to the many tools of univariate statistics.
neutral
train_93873
Certainly, these approaches are empirically attractive: not only are they very simple computationally (e.g.
each row w (i) of W is a D-dimensional word vector.
neutral
train_93874
By the linearity of expectation, we have that , and so the mean across w mean will also be close to zero at least for small k. In practice, this seems to hold even for moderate k in naturally occurring sentences, as seen in Figure 1.
as embeddings are ultimately just arrays of numbers, we are free to take alternative viewpoints other than the geometric ones, if they lead to illuminating insights or strong-performing methods.
neutral
train_93875
Our model fares particularly well on the disambiguation of nouns and verbs.
(2018) proposed fastSense, a model inspired by fastText which -rather than predicting context words -predicts word senses.
neutral
train_93876
A typical reward function on policy learning consists of a small negative penalty at each turn to encourage a shorter session, and a large positive reward when the session ends successfully if the agent completes the user goal.
u: From Bishops Stortford to Cambridge.
neutral
train_93877
Ubuntu Corpus consists of English multi-turn conversations about technical support collected from chat logs of the Ubuntu forum.
we use the final state of GRU output h L as features and apply a single-layer perceptron to obtain score: where W and b are learnt parameters, σ(•) is sigmoid activation function.
neutral
train_93878
In multi-turn dialogues, ensuring that the model is able to distinguish among turns is essential, especially when multiple emotion are present in different turns.
in order to understand whether or how MoEL can effectively improve other baselines, learn each emotion, and properly react to them, we conduct three different analyses: model response comparison, listener analysis, and visualization of the emotion distribution p. Model response comparison The top part of Table 4 compares the generated responses from MoEL and the two baselines on two different speaker emotional states.
neutral
train_93879
2) The second assumes that the emotion to condition the generation on is given as input, but we often do not know which emotion is appropriate in order to generate an empathetic response.
i hate when that happens Figure 4: The visualization of attention on the listeners: The left side is the context followed by the responses generated by MoEL.
neutral
train_93880
This is due to the first part of the context which conveys a sad emotion.
as shown in Figure 2, our context embedding E C is the positional sum of the word embedding E W , the positional embedding E P and the dialogue state embedding E D .
neutral
train_93881
In the first step, inspired by the observation that a response can usually be supported by a single KB row, we introduce the KB retriever to return the most relevant KB row, which is used to filter the irrelevant KB entities and encourage consistent generation.
the major difficulty of such a training scheme is that the discrete retrieval result is not differentiable and the training signal from the generation model cannot be passed to the parameters of the retriever.
neutral
train_93882
the neighbourhood speaker information for each utterance in the graph.
a number of models has also been proposed for emotion recognition in multimodal data i.e.
neutral
train_93883
The overall architecture of our KET model is illustrated in Figure 2.
the concept representation c(t) ∈ R d for token t is computed as where c k ∈ R d denotes the concept embedding of c k and α k denotes its attention weight.
neutral
train_93884
Multiple emotion detection from texts has been previously addressed in (Zhou et al., 2016) which predicted multiple emotions with different intensities based on emotion distribution learning.
to further validate the effectiveness of eventdriven attention components, we compare IRER-EA with two sub-networks based on our architectures.
neutral
train_93885
Ren-CECps corpus (Blogs) (Quan and Ren, 2010) is a Chinese data set containing 1,487 blogs annotated with eight basic emotions from writer's perspective, including Anger, Anxiety, Expect, Hate, Joy, Love, Sorrow and Surprise.
existing studies often ignore the latent event information.
neutral
train_93886
To generate convincing and diverse justifications, we developed two models: (1) Ref2Seq which leverages historical justifications as references during generation; and (2) ACMLM, which is an aspect conditional model built on a pre-trained masked language model.
if we encounter a fine-grained aspect, we will replace it with a [MASK] token 30% of the time; while for other words, we will replace them with a [MASK] token 15% of the time.
neutral
train_93887
In Table 5, we compare the sentiment prediction results of MILNET, CAMIL r , CAMIL s and CAMIL f ull .
in order to predict service satisfaction and identify sentiments of all customer utterances with available satisfaction labels, we propose a CAMiL model based on multiple instance learning approach.
neutral
train_93888
In addition, we propose a context clue matching mechanism (CCMM) to match any customer utterance with the most related server utterances.
for each customer utterance C i , we give the attention weights βs t on all the server utterances (see formula 10).
neutral
train_93889
Figure 1 illustrates that satisfaction polarity ("unsatisfied") is mostly embedded in the last few customer utterances (i.e., u 7 , u 9 and u 10 ) 3 .
we use back propagation to calculate the gradients of all the model parameters, and update them with Momentum optimizer (Qian, 1999).
neutral
train_93890
Table 3 shows the comparison with previous work on the PGR testset, where our models are significantly better than the existing models.
using the 1-best dependency tree can result in low recall given an imperfect parser.
neutral
train_93891
After running, Louvain might produce a number of singleton clusters with few instances.
the batch size is 100 selected from {25, 50, 100}.
neutral
train_93892
Here, we adopt the complete-linkage criterion, which is more robust to extreme instances.
it is worth noting that these features are only used by baseline models.
neutral
train_93893
Through multiple rounds of interactions between the primal and dual graphs, RDGCN can effectively incorporate more complex relation information into entity representations and achieve promising results for entity alignment.
following the previous works Wang et al., 2018b;Sun et al., 2018), we use 30% of the pre-aligned entity pairs as training data and 70% for testing.
neutral
train_93894
Recently, embedding-based entity alignment methods were proposed to reduce human involvement.
we first pre-train the entity alignment model (Section 4.2) until its entity alignment performance has converged to be stable.
neutral
train_93895
For WDtext, Con-Mask is a strong baseline and has a better result on Hits@10 and Hits@5, but it performs worse on Hits@1 and MRR compared to our framework.
recently, temporal convolution and attention have been considered to learn a common representation and pinpoint common experiences (Mishra et al., 2018).
neutral
train_93896
To be more precise, a large proportion of relations have only a few facts in KGs.
to be more precise, a large proportion of relations have only a few facts in KGs.
neutral
train_93897
Besides, the results of the two ablation baselines are significantly worse than our overall framework, and thus we can see both components play an important role in our framework.
existing works using textual descriptions have not tackled this issue effectively.
neutral
train_93898
NN-PCRFs model Shang et al., 2018).
maximizing the potential of massive noisy data as well as highquality part, yet being efficient, is challenging.
neutral
train_93899
We propose three features for non-entity samples: nearby entities (f 1 ), ever within entities (f 2 ) and term/document frequency (f 3 ).
they are usually limited by the extra knowledge resources that are effective only in specific languages or domains.
neutral