id: string, lengths 7 to 12
sentence1: string, lengths 6 to 1.27k
sentence2: string, lengths 6 to 926
label: string, 4 classes
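A minimal sketch of how records with this schema might be loaded and inspected, assuming the dump comes from a Hugging Face-style dataset; the identifier "path/to/this-dataset" and the use of the datasets library are assumptions for illustration, not part of the original listing.

from collections import Counter
from datasets import load_dataset

# "path/to/this-dataset" is a placeholder identifier, not the real dataset name.
ds = load_dataset("path/to/this-dataset", split="train")

# Peek at a few records: each carries id, sentence1, sentence2, and label.
for record in ds.select(range(3)):
    print(record["id"], record["label"])
    print("  sentence1:", record["sentence1"][:80])
    print("  sentence2:", record["sentence2"][:80])

# The schema lists 4 label classes; count how the examples spread over them.
print(Counter(ds["label"]))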
train_97900
Next, the relevant parts of the input are identified in F, and afterwards they are used by G to update the memory.
memory networks are exactly designed to remember previous information.
neutral
train_97901
After a set of preliminary experiments, we select the beam size from {3, 4, 5, 6}.
less expressive models may be preferred as they may learn more abstract representations that will generalize better to out-of-domain data.
neutral
train_97902
We replace the appearance of a word with its refined word vector.
we will base our semantic distance δ on a slight variation of the DESM similarity metric.
neutral
train_97903
The results of the LogMap system show a quite similar behavior with the experiments conducted in the conference dataset.
the basic idea is that some alignments that were omitted by the Stable Marriage solution were very close to the optimal alignment and they should also be included in the final alignment set.
neutral
train_97904
In any case, however, we retain a small precision which indicates a semantic similarity and conceptual association coalescence.
despite the use of refined word vectors, we cannot completely avoid the problems that come from the semantic similarity and conceptual association coalescence.
neutral
train_97905
In this section, we introduce our proposed architecture for RNNs.
we report BLEU score results together with number of parameters of recurrent layers.
neutral
train_97906
The model is almost the same as the one used in De-En machine translation, which is a two-layer RNNSearch model, except that the embedding size is 512, and the LSTM hidden size in both encoder and decoder is 512.
it fails to capture dependencies across different groups.
neutral
train_97907
The group recurrent layer in Equations 4 and 5 can be re-formulated; from the reformulation, we can see that the group recurrent layer is equivalent to a standard recurrent layer with a block-diagonal sparse weight matrix.
training such big models usually takes days or even weeks of time even if using tens of GPU cards.
neutral
train_97908
mutations beyond adding "not" to the original).
of these, 28.51% (n = 274) had the correct predicted label but did not satisfy the requirements for evidence.
neutral
train_97909
Vlachos and Riedel (2014) constructed a dataset for claim verification consisting of 106 claims, selecting data from fact-checking websites such as PolitiFact, taking advantage of the labelled claims available there.
for instance, it would be interesting to test how approaches similar to natural logic inference (Angeli and Manning, 2014) can be applied, where a knowledge base/graph is constructed by reading the textual sources and then a reasoning process over the claim is applied, possibly using recent advances in neural theorem proving (Rocktäschel and Riedel, 2017).
neutral
train_97910
We partitioned the annotated claims into training, development and test sets.
despite the rising interest in verification and fact checking among researchers, the datasets currently used for this task are limited to a few hundred claims.
neutral
train_97911
By the end of this section a bipartite relation graph like Figure 2 will be constructed, with one node set being textual relations T , and the other being KB relations R. The edges are weighted by the normalized co-occurrence statistics of relations.
for the embedding model, the mini-batch size is set to 128, and the state size of the GRU cells is 300.
neutral
train_97912
Similar to previous work (Riedel et al., 2010; Zeng et al., 2015), we use two settings for evaluation: (1) Held-out evaluation, where a subset of relational facts in KB is held out from training (Table 1), and is later used to compare against newly discovered relational facts.
the results in Figure 6 show that pairwise ensemble of existing relation extraction models does not yield much improvement, and GloRE brings much larger improvement than the other models.
neutral
train_97913
We want the coherence score between e c and e p to be close to 1, while the score for e c and e n should be close to 0.
this task is closely related to our argument cloze task.
neutral
train_97914
For example, given three nodes, e1, e2, and e3, a local method can possibly classify (e1,e2)=before, (e2,e3)=before, and (e1,e3)=after, which is obviously wrong since before is a transitive relation and (e1,e2)=before and (e2,e3)=before dictate that (e1,e3)=before.
time is an important dimension of knowledge representation.
neutral
train_97915
In practice, e.g., in the TBDense dataset , roughly 30%-40% of the P-Before pairs are T-After.
the TempRels between events can be represented by an edge-labeled graph, where the nodes are events, and the edges are labeled with TempRels (Chambers and Jurafsky, 2008; Do et al., 2012; Ning et al., 2017).
neutral
train_97916
In Example 2, where we show the complete sentences, the task has become much easier for humans due to our prior knowledge, namely, that explosion usually leads to casualties and that people usually ask before they get help.
very limited attention has been paid to generating such a resource and to make use of it; to our knowledge, the TEMPROB proposed in this work is completely new.
neutral
train_97917
enough information to disambiguate it.
"Marshmellooooo" with trailing 'o's) and amplifies character-level features intsead (e.g.
neutral
train_97918
At each token t and for each gold label at the previous step g k t−1 , our network takes the hidden representation from the previous layer z from the previous time step, and computes: Unlike the encoder LSTM, this decoder LSTM is single-directional and bifurcates when multiple gold labels are present.
gold-standard annotations are shown in red.
neutral
train_97919
Each of these baselines uses mention-specific features encoding relative position of each token to the two target entities being classified, whereas our model aggregates over all mention pairs in each sentence.
this could be ameliorated by integrating our model into open relation extraction architectures such as Universal Schema (Riedel et al., 2013;Verga et al., 2016b).
neutral
train_97920
These baselines allow us to compare the representation power independent of the classifier.
we use our prepositional representations as simple features to a standard classifier on this task.
neutral
train_97921
It is a variant of the popular particle filtering method that tracks the state of a physical system in discrete time (Ristic et al., 2004).
this suggests that an accurate HMM language model p(x) would require very large k, as would a generative OOHMM model p(x, y) of annotated language.
neutral
train_97922
We investigate the feasibility of neural syntactic generative models with structured latent variables in which exact inference is tractable.
the sentence is encoded left-to-right by an LSTM RNN taking the word embedding of the last predicted word as input at each time step, independent of t. The RNN hidden states h_{0:n} represent each sentence position in its linear context.
neutral
train_97923
In cases of spurious ambiguity arcs are added as soon as possible.
unsupervised training based on this model cannot learn to predict arc directions.
neutral
train_97924
We observe the same pattern for the embedding space mapping approach for noise reduction against the narrow window embeddings.
the pipeline so far results in a more consistent version of the text, which we use to learn the final embeddings upon.
neutral
train_97925
Depending on whether this is an example of a zero copula construction, or a clausemodified noun, either annotation is plausible.
authors of this paper manually corrected the parsed data and finally achieved 3,550 labeled tweets.
neutral
train_97926
We witnessed a small number of tweets that contain multi-token words (e.g., Y O, and R E T W E E T) but didn't combine them for simplicity.
when it comes to informal, unedited, user-generated text, the guidelines may leave many annotation decisions unspecified.
neutral
train_97927
So far, not much has been shown about how neural networks are able to compensate for the removal of the structures used in past models.
shuffling does cause performance to degrade relative to the base parser even when the unshuffled window is moderately large, indicating that the LSTM is propagating information that depends on the order of words in far-away positions.
neutral
train_97928
As COMBO results show, the representations induced from different corpora are somewhat complementary.
unlike hierarchical softmax, CSS only affects training, that is, at test time we simply use the entire support instead of the approximation.
neutral
train_97929
The unobserved term can be either a real zero (the two words should not co-occur even when we use a very large corpus) or just missing in the small corpus.
in the experiments, we evaluate all models with u avg .
neutral
train_97930
Our expanded lexicon, which roughly preserves that ratio, includes about 800 adjectives in total.
there have been investigations examining features on various datasets (Nobata et al., 2016;Samghabadi et al., 2017), however, these studies always trained and tested on the same domain.
neutral
train_97931
Due to space limitations, we cannot list the other classifiers.
many of our features should also be applicable to other languages.
neutral
train_97932
The results from our statistical analysis validate our original hypothesis that power relations do correlate with the level of commitment people express in their messages.
since verbosity of a participant can be highly correlated with each of these feature values (we found it to be highly correlated with subordinates (Prabhakaran and Rambow, 2014)), we added token count as a control variable to the linear regression.
neutral
train_97933
These approaches are motivated from an information extraction perspective, for instance in aiding tasks such as knowledge base population.
for example, let us consider two sentences: I need the report by tomorrow vs.
neutral
train_97934
Figure 1 pictorially demonstrates these results by plotting the difference between the mean values of each commitment feature (here normalized by token count) of superiors vs. subordinates, as a percentage of mean feature value of the corresponding commitment feature for superiors.
THR: thread structure (e.g., reply rate); DIA: dialog act tagging (e.g., request count); ODP: overt displays of power; LEX: lexical ngrams (lemma, POS, mixed ngrams). None of the features used in POWERPREDICTOR use information from the parse trees of sentences in the text; in order to accurately obtain the belief labels, deep dependency parse based features are critical (Prabhakaran et al., 2010).
neutral
train_97935
This approach supports explanations based on interpretable features (e.g., words) even when the underlying representation may be less interpretable.
we measure local fidelity by deleting words in the order of their estimated importance for the prediction.
neutral
train_97936
For example, compare the random approach with the omission approach on true positives with ten word explanations.
it is unclear to what extent they correspond with human-based evaluations.
neutral
train_97937
We compute cosine similarity of the key-terms defined for each selected topic and topics discovered by the topic models over the years.
the lower bound on the log likelihood of the data takes the form given below; following (Hinton et al., 2006), where adding an extra layer improves the lower bound on the log probability of data, we introduce the extra layer via RSM biases that propagate the prior via RNN connections. Papers per year, 1996 to 2014: ACL 58, 73, 250, 83, 79, 70, 177, 112, 134, 134, 307, 204, 214, 243, 270, 349, 227, 398, 331 (total 3,713); EMNLP 15, 24, 15, 36, 29, 21, 42, 29, 58, 28, 75, 132, 115, 164, 125, 149, 140, 206, 228 (total 1,756); ACL+EMNLP 73, 97, 265, 119, 108, 91, 219, 141, 192, 162, 382, 336, 329, 407, 395, 498, 367, 604, 559 (total 5,469).
neutral
train_97938
Following monolingual coherence evaluations (Lau et al., 2014), we present topic pairs to bilingual CrowdFlower users.
as expected, CNPMI outperforms INPMI regardless of reference corpus overall, because INPMI only considers monolingual coherence.
neutral
train_97939
After training an estimator (Section 4.2), we calculate Pearson's correlation between Wikipedia's CNPMI and the estimated topic coherence score (Table 3).
topic models are often used as a feature extraction technique for downstream machine learning applications, and topic model evaluations should reflect whether these features are useful (Ramage et al., 2009).
neutral
train_97940
Crosslingual Gap (GAP) A low CNPMI score could indicate a topic pair where each language has a monolingually coherent topic but that are not about the same theme (Topic 6 in Figure 1).
Hao has been supported under subcontract to Raytheon BBN Technologies, by DARPA award HR0011-15-C-0113.
neutral
train_97941
Prior work has also emphasized explainability.
we remove those note-label pairs for which no n-gram has a score greater than 0, which gives an "unfair" advantage to this baseline.
neutral
train_97942
This fact limits the usefulness of these tools for problems involving styles of language not represented in large annotated training sets.
existing NLI datasets like SNLI have facilitated substantial advances in modeling, but have limited headroom and coverage of the full diversity of meanings expressed in English.
neutral
train_97943
All three models achieve accuracy above 80% on the SNLI test set when trained only on SNLI.
in the mixed setting, we use the full MultiNLI training set and randomly select 15% of the SNLI training set at each epoch, ensuring that each available genre is seen during training with roughly equal frequency.
neutral
train_97944
We do not report all these results for brevity and clarity of presentation.
each model uses a different strategy to exploit the hierarchical relationships encoded in these constraints (their approaches are discussed in Section 5).
neutral
train_97945
Specifically, we use BiLSTMs layers with 512 units each.
we also demonstrate that the proposed parser achieves state-of-the-art performance in the downstream tasks of Parsing Evaluation using Textual Entailments (PETE) and Unbounded Dependency Recovery.
neutral
train_97946
In this case, UDR would pick the relation (what, hope, pobj).
for this reason, when a lemma of a verb is a non-be copula, we add arcs involving the word to the copula adjoining into the copula.
neutral
train_97947
Copulas A copula is usually treated as a dependent to the predicate both in our TAG grammar (adjunction) and UDR.
an example is the UDR relation (those, stayed, nsubj) "in the other hemisphere it is growing colder and nymphs, those who stayed alive through the summer, are being brought into nests for quickening and more growing" where our parser yields (those, alive, 0).
neutral
train_97948
Ideally, the transfer performance could be estimated by training a MNet on task i and directly evaluating it on task j.
adaptive ROBUSTTC-FSL: although ROBUSTTC-FSL improves over baselines on intent classification, the margin is smaller compared to that on sentiment classification, because the intent classification tasks are more diverse in nature.
neutral
train_97949
Additionally, since the observed entries of Y are generated based on high and low enough performance, it is safe to assume that most observed entries are correct and only a few may be incorrect.
the k-th metric of the cluster is thus defined accordingly. To build a predictor M with access to only a limited number of training samples, we form the prediction probability by linearly combining predictions from learned cluster-encoders, where f_k is the learned (and frozen) encoder of the k-th cluster and {α_k}_{k=1}^{K} are adaptable parameters trained with the few-shot training examples.
neutral
train_97950
The analysis complements the evidence: lower resolutions have higher IG at early chunks, whereas higher resolutions have higher IG at later chunks. The only exception to this is R1, which has the lowest overall performance.
late stages require exploiting as much evidence as possible to make accurate predictions.
neutral
train_97951
The remaining three are used as source domains.
for all three of our experiments, we use = 0.05 and k = 5 (See Algorithm 1).
neutral
train_97952
To make fair comparisons, the previous experiments follow the standard settings in the literature, where the widely adopted Amazon review dataset is used.
each domain has a development set of 200 samples, and a test set of 400 samples.
neutral
train_97953
PBLM learns the connection between easy -an adjective that is often used to describe kitchen appliances, but not books -and great.
these alternatives gave worse results.
neutral
train_97954
There also exists an unlabeled set containing large amounts of collected samples without annotation.
the selection of samples in existing co-training methods is based on a predetermined policy, which ignores the sampling bias between the unlabeled and the labeled subsets, and fails to explore the data space.
neutral
train_97955
Performance improvements in these more recent models are mainly due to using better image features such as those obtained by Region-based Convolutional Neural Networks (R-CNN), or using reinforcement learning (RL) to directly optimize metrics such as CIDEr, or using more complex attention mechanisms (Gan et al., 2017) to provide a better context vector for caption generation, or using an ensemble of multiple LSTMs, among others.
1, the state updating equations for p_t are as given; the initial state p_0 is the zero vector.
neutral
train_97956
∀l ∈ [1..L]: e_l = PositionEncoder(y_l^c) (9). We then put a distribution over the candidate responses conditioned on the summarized dialog history h_his (Equation 10).
in Figure 5, the user already mentioned that he/she wants to find a "cheap" restaurant, but the GMN and QRN seem to "forget" this information.
neutral
train_97957
Our RNN units use a different gate arrangement than that used by RAN.
(2017), we construct a templatized set of responses.
neutral
train_97958
Our experiments show that: (1) the encoder with character attention achieves significant improvements over the standard word-based attention-based NMT system and a strong character-based NMT system; (2) incorporating source character information into the decoder by our multi-scale attention mechanism yields a further improvement, and (3) our modifications also improve a subword-based NMT model.
many recent studies have focused on using character-level information in neural machine translation systems.
neutral
train_97959
The standard encoder operates purely on (sub)words or characters.
table 5(a) shows the translation of an OOV word 通 信 业 (tong-xin-ye, telecommunication industry).
neutral
train_97960
(c) Prior works show a trend of designing more expressive attention mechanisms (as discussed in Section 2).
the decoder layers {z_l} follow a similar structure, while getting extra representations from the encoder side.
neutral
train_97961
The rest was used as the training set.
when a retrieved source sentence is not very similar with the input sentence (e.g.
neutral
train_97962
With this result in hand, we propose a method for more directly capturing contextual information that may help disambiguate difficult-to-translate homographs.
lastly, we show sample translations of the baseline system and our proposed model.
neutral
train_97963
In this work, we take the policy gradient training strategies following .
to their approach, we apply the CNN-based discriminator for the machine translation task.
neutral
train_97964
The basic idea is to augment the MTL training objective with additional terms, so that the identity of a task cannot be predicted from its data items by the representations resulted from the shared encoder/decoder RNN layers.
NMT is notorious for its need for large amounts of bilingual data (Koehn and Knowles, 2017) to achieve reasonable translation quality.
neutral
train_97965
Implementation: We compare the proposed models with two strong baselines from SMT and NMT: • Moses (Koehn et al., 2007): an open-source phrase-based translation system with default configuration.
for the MAP feeding style, we optimize u i according to the loss function in Eq.
neutral
train_97966
(2017) enhance attention through a planning mechanism.
there is a significant difference between our approach and their approaches.
neutral
train_97967
In some languages (e.g., Hungarian, in the top left) Lematus performs very well on unseen words even with little training data, while in others (e.g., Arabic, along the bottom) it performs poorly despite relatively large training data.
swim) given one of its inflected variants (e.g.
neutral
train_97968
(2017) used their model for machine translation, while we work on lemmatization.
when making predictions we use beam-search decoding with a beam of size 12.
neutral
train_97969
For example, one might expect that for languages with more training data, the system would learn better generalizations and lemmatization accuracy on unseen words would be higher.
evaluation: To evaluate models, we use test and development set lemmatization exact-match accuracy.
neutral
train_97970
Lemming (which proves to be the strongest baseline) also consists of two log-linear components (a classifier for lemmatization and a sequence model for tagging), which are combined either using a pipeline (first tag, then lemmatize) or through joint inference.
we reduced the number of hidden units to 100 and the encoder and decoder embedding size to 300.
neutral
train_97971
Some years later, many researchers incorporated machine learning algorithms to their systems, but there was still a strong dependency on external resources and domain-specific features and rules (Tjong Kim Sang and De Meulder, 2003).
the most important differences between our approach and previous works are i) the use of phonetics and phonology (articulatory) features at the character level to model SM noise, ii) consistent BLSTMs for character and word levels, iii) the segmentation and categorization tasks, iv) a multitask neural network that transfers the learning without using lexicons or gazetteers, and v) weighted classes to handle the inherent skewness of the datasets.
neutral
train_97972
The surface form F1 metric supports that intuition as well.
we emphasize that both datasets present significantly different challenges and, thus, some relevant aspects in CoNLL 2003 may not be that relevant in the WNUT 2017 dataset.
neutral
train_97973
Furthermore, observe that even our vanilla model (LSTM) significantly outperforms the equivalent "FORWARD" models by (Gangal et al., 2017) that use greedy and beam-search decoding (1.90 and 2.37 vs 1.75).
such word formation patterns are also evident in other languages (Štekauer et al., 2012) and it is an open question as to whether similar models generalize to other languages as well.
neutral
train_97974
Since multiple sequences of abstract morphemes may in general give rise to a single output form, we marginalize these, i.e., where GEN(a) gives the surface word form produced from the morpheme sequence a.
to these previous works, we demonstrate the utility of incorporating morphological information in these open-vocabulary models.
neutral
train_97975
One reason is that our model The sparse instances on the high nested levels could be another reason that resulted in the zero performances on the last flat NER layer.
CRFs are used to globally predict label sequences for any given sequences.
neutral
train_97976
Additionally, 17% of the errors belong to type error.
Sophia Ananiadou acknowledges BBSRC BB/P025684/1 Japan Partnering Award and BB/M006891/1 Empathy.
neutral
train_97977
In deep learning models, such purpose is often achieved with a soft attention mechanism.
it is noteworthy that DR-BiLSTM (Single) performs better than ESIM in more frequent categories.
neutral
train_97978
However, the model would suffer from the absence of similarity and closeness measures.
u i ) and its context depending on the other sentence (e.g.
neutral
train_97979
State-of-the-art FrameId systems rely on pretrained word embeddings as input (Hermann et al., 2014).
we assign sentences to 100 buckets based on their IDs and create a 70/15/15 split for training, development, and test sets based on the bucket order.
neutral
train_97980
Regarding FrameId, to the best of our knowledge, multimodal approaches have not yet been investigated.
the initial situation for the German case is more difficult.
neutral
train_97981
They use a linear program to jointly predict frames and arguments at test time.
in semantic dependencies, the head of an arc is analogous to the target in frame semantics, the destination corresponds to the argument, and the label corresponds to the role.
neutral
train_97982
The cross-task score is given by s_c, the computation of which is described in §4.4.
this leads to potentially very slow inference.
neutral
train_97983
Gold supervision for frame-semantic parses comes from the FrameNet lexicon and corpus (Baker et al., 1998).
as new annotation efforts cannot be expected to use the same original texts as earlier efforts, the utility of this approach is limited.
neutral
train_97984
They filter out noise and select relevant in-domain examples jointly, using similarities between sentence embeddings obtained from the encoder of a bidirectional neural MT system trained on clean indomain data.
we generate negative examples automatically as described in Section 3, and sample a subset to maintain a 1:5 ratio of positive to negative examples.
neutral
train_97985
We detect semantic divergence by computing the cosine similarity between sentence embeddings in a bilingual space.
candidate negative examples are generated starting from the positive examples {(e_i, f_i) ∀i} and taking the Cartesian product of the two sides of the positive examples {(e_i, f_j) ∀i, j s.t.
neutral
train_97986
We optimized content alignment on the development set against manual alignments.
content selection and generation models (base encoder-decoder and MTL) were trained for 20 epochs with the ADAM optimiser (Kingma and Ba, 2014) using a learning rate of 0.001.
neutral
train_97987
Block sizes were set to 40 (base), 60 (MTL) and 50 (RL).
in our scenario, the input data varies from one entity (e.g., athlete) to another (e.g., scientist) and properties might be present or not due to data incompleteness.
neutral
train_97988
Instantiations of this framework include the widely-used attention-based sequenceto-sequence model (Bahdanau et al., 2015), in which f enc and f dec are implemented by an RNN architecture using LSTM (Hochreiter and Schmidhuber, 1997) or GRU (Chung et al., 2014) units.
the prevalent approach to training such a model is to update all the model parameters using all the examples in the training data (over multiple epochs).
neutral
train_97989
other sequence of vectors used to produce output symbols step by step.
in many applications and for many natural language datasets, there exist multiple underlying distributions, characterizing a variety of language styles.
neutral
train_97990
First, the amount of training data in these two languages is smaller than that in English.
although in this work we focus only on generating descriptions in one language, we hope that this dataset will also be useful for developing models which jointly learn to generate descriptions from structured data in multiple languages.
neutral
train_97991
Intuitively, when a human writes a description from a table she keeps track of information at two levels.
this standard model is too generic and does not exploit the specific characteristics of this task.
neutral
train_97992
This operation is also called the pointer sum attention ).
the final score m for a metric v is obtained by averaging over the test set (Equation 4). Since there are multiple correct answers A, we take the highest-scoring answer â at each instance, as done in Rajpurkar et al.
neutral
train_97993
The Ubuntu dataset, however, has diverse contents related to issues in using the Ubuntu system.
the LTC module provided additional performance improvements when combined with both RDE and HRDE models, as it added latent topic cluster information according to dataset properties.
neutral
train_97994
They also tried using long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997), bidirectional LSTM and ensemble method with all of those neural network architectures and achieved the best results on the Ubuntu Dialogues Corpus dataset.
mobile, office, photo, tv/video, accessories, and home appliance as top-level categories, and specific categories like galaxy s7, tablet, led tv, and others are used.
neutral
train_97995
We also consider unsupervised transfer learning where the correct answer to each question in the target dataset is not available.
although transfer learning has been successfully applied to various applications, its applicability to QA has yet to be well-studied.
neutral
train_97996
For +BOW style models, we take the matrix that compresses the text's word frequency vector, then score each word by computing the l_1 norm of the column that multiplies it, with the intuition that important words are dotted with big vectors in order to be a large component of e. Motivation.
both elicit lexicons by considering learned weights or attentional scores.
neutral
train_97997
Most of this prior work uses lexical, syntactic, discourse, and dialog interactive features (Stab and Gurevych, 2014;Habernal and Gurevych, 2016;Wei et al., 2016), power dynamics (Rosenthal and Mckeown, 2017;Moore, 2012), or diction (Wei et al., 2016) to study discourse persuasion as manifested in argument.
and propose an algorithm for counterfactual inference which bear similarities to our Adversarial Selector (Section 3.2), Imai et al.
neutral
train_97998
The tweet's tokens can draw from 1 of 3 topics, which is good but a bit constraining.
similar to the symptom stage, this stage contributes to false positives because it also occurs with normal routine network problems, not just malicious acts.
neutral
train_97999
Services can have trouble for a variety of reasons, not necessarily DDoS attacks.
a detection system may desire to optimize recall at the expense of precision, thus choosing a lower λ and forcing the system to predict attacks more often.
neutral