id: string (lengths 7–12)
sentence1: string (lengths 6–1.27k)
sentence2: string (lengths 6–926)
label: string (4 classes)
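A minimal sketch of how rows with this schema (id, sentence1, sentence2, label) could be loaded and sanity-checked with the Hugging Face `datasets` library; the dataset identifier is a placeholder, not the actual repository name:

```python
# Minimal sketch: load and inspect sentence-pair rows with the schema above.
# "user/sentence-pair-nli" is a hypothetical identifier -- substitute the real
# repository name or a local path.
from collections import Counter
from datasets import load_dataset

ds = load_dataset("user/sentence-pair-nli", split="train")

# Sanity-check against the column summary (string lengths, 4 label classes).
label_counts = Counter(ex["label"] for ex in ds)
s1_lengths = [len(ex["sentence1"]) for ex in ds]
print(label_counts)
print(min(s1_lengths), max(s1_lengths))

# Inspect a single row, mirroring the train_98100 example below.
row = ds[0]
print(row["id"], row["label"])
print(row["sentence1"])
print(row["sentence2"])
```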
train_98100
We set ς to 22 on Newsela, 33 on WikiSmall, and 77 on WikiLarge.
we randomly selected 40 sentences from each test set, and included human reference simplifications and corresponding simplifications from the systems above.
neutral
train_98101
In the Story Cloze Test, a system is presented with a 4-sentence prompt to a story, and must determine which one of two potential endings is the 'right' ending to the story.
in the absence of further context, a default prior is assumed (as implicitly encoded in skip-thought vectors trained on BookCorpus) that is often correct.
neutral
train_98102
This only achieves a test-set accuracy of 55.2%.
no Context (nC) This model attempts to identify the 'right' ending of a story by ignoring the story context and looking only at examples of right and wrong endings.
neutral
train_98103
The task is accompanied by the Rochester story (ROCstory) corpus.
we train a feed-forward network using skip-thought embeddings.
neutral
train_98104
We use the standard precision, recall and F1 metrics to evaluate and compare the performance.
we present the first steps towards a pragmatic approach based on linguistic annotation (Figure 3).
neutral
train_98105
The input of most NLP tasks, such as a sentence or a document, could be represented as a 2D structure with word embeddings (Mikolov et al., 2013).
it shows that the deep learning model can, to some extent, learn the humorous meaning and structure embedded in the text automatically without human selection of features.
neutral
train_98106
For the negative samples, we choose WMT16 2 English news crawl as our non-humorous data resource.
their algorithm was based on the extraction of structural patterns and the peculiar structure of jokes (Taylor and Mazlack, 2004).
neutral
train_98107
The latter models the user with manually selected features.
for example, the Tweet "I'm not sexist but I can not stand women commentators" is actually an instance of hate speech, even though the first half is misleading.
neutral
train_98108
In our approach, we collect the user's historical posts through the Twitter API.
the agent can interactively make selections and update the inter-user representations step by step.
neutral
train_98109
This has also been the tradition in works on parser robustness (Bigert et al., 2005;Foster, 2004).
to address this we propose a semantic RLM, USIM, that operates by measuring the graph distance between the semantic representations of the source and the output.
neutral
train_98110
This domain knowledge is a scalar function that is represented in the form of a certain set of rules, easily provided by humans.
in our approach the score function could be any complex non-differentiable function.
neutral
train_98111
In this paper, additional lookup tables and transitions are defined to create concepts when needed, following the current trend (Damonte et al., 2017; Ballesteros and Al-Onaizan, 2017).
if we predict a multi-sentence graph, but there is no punctuation symbol that splits sentences, we post-process the graph and transform the root node into an AND node.
neutral
train_98112
LEFT and RIGHT-ARC handle cycles and reentrancy with the exception of cycles of length 2 (which only involve i and b).
in contrast to related work that relies on ad-hoc procedures, the proposed algorithm handles cycles and reentrant edges natively.
neutral
train_98113
In total, we obtained nine affective values for 2.2 million words.
(1) We used the 15 verb classes from GermaNet (Hamp and Feldweg, 1997; Kunze, 2000).
neutral
train_98114
for neural networks).
for example, somebody can participate in two events taking place at different locations only if they do not overlap temporally.
neutral
train_98115
Parameters The intuition behind this model is that word sequence is an important factor affecting the generation of our languages; a word should be biased in how it associates with other words on its left or right side.
to test the effectiveness of the embeddings produced by DSG, we conduct experiments on semantic (word similarity evaluation) and syntactic (part-of-speech tagging) tasks.
neutral
train_98116
Example 11 shows the advantage of combining the vector representation with orthographic distance, i.e., our model could find translations of sleddogs that have similar meaning, while in examples 12 and 13 orthographic distance helped to pick the correct translation which is the closest in terms of edit distance.
throughout the paper, we work with two lexicons.
neutral
train_98117
Accuracies are comparable to previous work (which was on different language pairs).
we design a new task to evaluate bilingual word embeddings on rare words in different domains.
neutral
train_98118
Using the Google dataset GL (Mikolov et al., 2013b) the CosAdd method (Mikolov et al., 2013c) shown in (5).
we make a point of comparison to concatenation since it is the most comparable in terms of simplicity, whilst also providing a good baseline of performance on evaluative tasks.
neutral
train_98119
While antonymy represents words which are strongly associated but highly dissimilar to each other, synonymy refers to words that are highly similar in meaning.
this IAA measure has been a common choice in previous data collections in distributional semantics (Padó et al., 2007;Reisinger and Mooney, 2010;Hill et al., 2015).
neutral
train_98120
Again, the models show a similar behaviour in comparison to SimLex-999, where also the dLCE model outperforms the two other models, and the SGNS model is by far the worst.
antonyms and synonyms often occur in similar context, as they are interchangeable in their substitution.
neutral
train_98121
Dataset Splits Dima (2016) showed that a classifier based only on v w 1 and v w 2 performs on par with compound representations, and that the success comes from lexical memorization (Levy et al., 2015): memorizing the majority label of single words in particular slots of the compound (e.g.
the model was only tested on a small dataset and performed similarly to previous methods.
neutral
train_98122
• Sentence: It is clear that I will never have another prime first experience like the one I had at Chompies.
we experimented with both a Maximum Entropy and a Binomial Naive Bayes binary classifier.
neutral
train_98123
In this work we translated the SVF responses from Brazilian Portuguese to English.
evaluation is carried out at two levels of granularity: a rough-grained classification for the detection of a clinical condition in general (control vs. CI group), and a fine-grained classification for one of the three conditions (aMCD, mMCD and AD groups).
neutral
train_98124
One is a control group with normal cognitive performance, and three are groups with clinical conditions according to assessment guidelines (de Paula et al., 2013;McKhann et al., 1984;Winblad et al., 2004): Amnestic Mild Cognitive Deficit (aMCD), Multi-domain Mild Cognitive Deficit (mMCD) and Alzheimer's Disease (AD).
given a sequence of semantically related words, a large number of switches from one semantic class to another has been linked to clinical conditions.
neutral
train_98125
Our model not only significantly outperforms the only previous work on sluice resolution on available newswire corpora, but also has a number of advantages over this work.
we have presented a neural architecture for English sluice resolution and shown that it outperforms previous work on sluice resolution.
neutral
train_98126
There are some important problems in the evaluation of word embeddings using standard word analogy tests.
once this assumption is built into systems, we still should put into question various details of the tests.
neutral
train_98127
We find that MERGE is mostly responsible for combining spans of words to form a named entity in English parsing.
evaluation on negation detection.
neutral
train_98128
To this end, we design a neural network architecture, capable of tracking and updating the states of entities at the right time with external memory, making it a natural fit for the task.
when n = 2 (no unconstrained chains), model performance drops substantially to an F1 of 76.6 ± 0.4.
neutral
train_98129
Results A single NMT model achieves lower performance than the SMT baselines (Table 3).
using consistent training data and preprocessing (§2), we first create strong SMT (§3) and NMT (§4) baseline systems.
neutral
train_98130
We consider various measures of both acoustic-prosodic and lexical entrainment, measuring the latter with a novel application of two previously introduced methods in addition to a standard high-frequency word measure.
it seems unlikely that alternative measures would yield fundamentally different outcomes, such as strong correlations across features.
neutral
train_98131
In this paper, we present ConvKB, an embedding model which proposes a novel use of CNN for the KB completion task.
in these embedding models, valid triples obtain lower implausibility scores than invalid triples.
neutral
train_98132
Online encyclopedias now make vast amounts of information and human knowledge available to internet users around the world in various languages.
then, we calculate the average distance for SCS, US, and MCS.
neutral
train_98133
Entity recognition is a widely benchmarked task in natural language processing due to its massive applications.
there is no gain at all in NAM detection for either language.
neutral
train_98134
To the best of our knowledge, our work is one of the first applications of LN to any NLP task.
to simulate learning from large texts, we tuned hyperparameters on development, but ran the actual experiments on the train partitions.
neutral
train_98135
Because the gold data does not contain examples involving sports, the baseline system mistakenly identifies a paraphrase of the above sentence as an attack event, and our system is not able to fix that mistake.
this process was repeated on a daily basis between January 2013 and February 2015, resulting in approximately 70 million sentences from 8 million articles.
neutral
train_98136
For example, our system identifies the token shot in Bubba Watson shot a 67 on Friday as an attack event trigger.
we first group together paraphrases of event mentions.
neutral
train_98137
(2017a) and Sun et al.
• We develop the tree-based structure regularization methods and make progress on the task of relation classification.
neutral
train_98138
We attribute the improvements to the simplified structures generated by structure regularization.
understanding Chinese literature text is of great importance to Chinese literature research.
neutral
train_98139
We established that syntactic patterns can markedly improve the extraction of patient, intervention and outcome descriptions in medical abstracts.
the performance gains could also be due to additional contextual information that bigrams and larger n-grams provide over unigrams alone, rather than their syntactic properties.
neutral
train_98140
As an example of the importance of syntax in encyclopedic definitions, among the definitions contained in the WCL definition corpus (see Section 3.1), 71% of them include the lexico-syntactic pattern. To explore the potential of syntactic information, we represent dependency-based phrases by embedding them in the same vector space as the pretrained word embeddings introduced above.
in the early days of DE, rulebased approaches leveraged linguistic cues observed in definitional data (Rebeyrolle and Tanguy, 2000;Klavans and Muresan, 2001;Malaisé et al., 2004;Saggion and Gaizauskas, 2004;Storrer and Wellinghoff, 2006).
neutral
train_98141
This is the idea of our dynamic oracle, which therefore is a correct dynamic oracle only with respect to a preset criterion for plane assignment, and not for all the possible plane assignments that would produce the gold dependency structure.
we present an efficient dynamic oracle to train the 2-Planar transition-based parser, which is correct with respect to a given plane assignment, and results in notable gains in accuracy.
neutral
train_98142
2 Training transition-based parsers In transition-based parsing (Nivre, 2003), dependency trees are built incrementally, in a shiftreduce manner: to parse a sentence, a sequence of transitions (the derivation) is applied on the internal state of the parser (the configuration), consisting typically in a stack and a buffer of unprocessed words.
formally, if each configuration is associated with a UAS max, the maximum UAS value that can be achieved by any of its successor derivations, then the action cost is defined as the difference between the current UAS max and the future UAS max (once the corresponding action has been applied).
neutral
train_98143
1 Faster exact inference algorithms have been defined for some sets of mildly non-projective trees (e.g.
we present three novel variants of the degree-2 Attardi parser, summarized in Fig.
neutral
train_98144
We selected those VMWEs whose frequency in TrC was higher than 9, i.e.
6) and variant identification (Sec.
neutral
train_98145
While their corpus frequency is relatively high at levels 0, 1 and 5, it is low at levels 2, 3 and 4.
2 shows the distribution of the resulting set S of 18 VMWEs into Tutin's levels.
neutral
train_98146
The pragmatic listener L0 is then defined in terms of this literal agent and a prior P(w) over possible images: The pragmatic speaker S1 is then defined in terms of this pragmatic listener, with the addition of a rationality parameter α > 0 governing how much it takes into account the L0 distribution when choosing utterances.
in our incremental RSA, speaker models take both a target image and a partial caption pc.
neutral
train_98147
Furthermore, to prevent word replacement that is incompatible with the annotated labels of the original sentences, we retrofit the LM with a label-conditional architecture.
", which is annotated with a positive label.
neutral
train_98148
Contextual augmentation with a bidirectional RNN language model, when a sentence "the actors are fantastic" is augmented by replacing only actors with words predicted based on the context.
a word in the translated sentence is also replaced using a word alignment method and a rightward LM.
neutral
train_98149
We also subsample German and French data to be equivalent to the size of Swahili, in order to compare training size effects.
the U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.
neutral
train_98150
For the in-language case, we observe the cosine model outperforms the deep model.
compared to the modular approach, direct modeling is advantageous in that it focuses on learning translations that are beneficial for retrieval, rather than translations that preserve sentence meaning/structure in bitext.
neutral
train_98151
We consider PropBank-style (Palmer et al., 2005) semantic role structures, or more specifi- cally their dependency versions (Surdeanu et al., 2008).
there are three possible directions (dir(u, v) ∈ {in, out, loop}): self-loop edges were added in order to ensure that the initial representation of node h v directly affects its new representation h v .
neutral
train_98152
The forward RNN(x 1:t ) represents the left context of word x t , whereas the backward RNN(x n:t ) computes a representation of the right context.
in this work, we aim to fill this gap by showing how information about predicate-argument structure of source sentences can be integrated into standard attentionbased NMT models (Bahdanau et al., 2015).
neutral
train_98153
Note that, when a translation is generated in the READ operation, the already committed target words remain unchanged, i.e.
generating translations for such incomplete sequences presents a considerable challenge for machine translation, more so in the case of syntactically divergent language pairs (such as German-English), where the context required to correctly translate a sentence appears much later in the sequence, and prematurely committing to a translation leads to significant loss in quality.
neutral
train_98154
Problem: In a stream decoding scenario, the entire source sequence is not readily available.
we were not able to improve upon our incremental decoder, with the results deteriorating notably.
neutral
train_98155
It is also interesting to note that the performance of the baseline systems on out-of-domain data in both data conditions is the same, indicating that the in-domain data does not really help for the out-of-domain set.
a general domain system has been trained and tuned to offer general domain translation, but it needs to be adapted to specific domains in order to provide better quality.
neutral
train_98156
We do not have access to aligned source words for gold constraints.
since there is no correspondence between constraints and the source words they cover, correct constraint placement is not guaranteed and the corresponding source words may be translated more than once.
neutral
train_98157
Here we compare the results of InferSent (Conneau et al., 2017), a more involved representation that was found to provide a good sentence representation based on NLI data.
lorem ipsum dolor sit amet, consectetuer adipiscing elit.
neutral
train_98158
We evaluate the model's performance using BLEU metric (Papineni et al., 2002).
the languages in each pair are similar in vocabulary, grammar and sentence structure (Matthews, 1997), which controls for language characteristics and also improves the possibility of transfer learning in multi-lingual models (in §7).
neutral
train_98159
The numbers in the figure, separated by a slash, indicate how many times each n-gram is generated by each of the two systems.
interestingly, BE → EN does not seem to benefit from pre-training in the multilingual scenario, which we hypothesize is due to the fact that: 1) Belarusian and Russian are only partially mutually intelligible (Corbett and Comrie, 2003), i.e., they are not as similar; 2) the Slavic languages have comparatively rich morphology, making sparsity in the trained embeddings a larger problem.
neutral
train_98160
Finally, it is of interest to consider pre-training in multilingual translation systems that share an encoder or decoder between multiple languages (Johnson et al., 2016;Firat et al., 2016), which is another promising way to use additional data (this time from another language) as a way to improve NMT.
( §6) Q5 Do pre-trained embeddings help more in multilingual systems as compared to bilingual systems?
neutral
train_98161
We additionally performed pairwise comparisons between the top 10 n-grams that each system (selected from the task GL → EN) is better at generating, to further understand what kind of words pre-training is particularly helpful for.
we can postulate that having consistent embedding spaces across the two languages may be beneficial, as it would allow the NMT system to more easily learn correspondences between the source and target.
neutral
train_98162
Training the parameters of this latent-variable model will recover the posterior distribution over analyses of a form, p θ (t, , s | f ), which allows us to disambiguate counts at the type level.
the syncretic forms are bolded and colored by ambiguity class.
neutral
train_98163
We used perplexity on development data to jointly choose 2 Our vocabulary and parameter set are determined from the lexicon.
(2) has 1 hidden layer, but we actually generalized this to consider networks with k ∈ {1, 2, 3, 4} hidden layers of d = 100 units each.
neutral
train_98164
Together, these results show that there is significant potential for follow up work on developing innovative uses of QAMR and modeling their relatively comprehensive and complex predicate-argument relationships.
finally, we report simple neural baselines for QAMR question generation and answering.
neutral
train_98165
We also add negative samples where the output is a special token and the input has w q , w a that never appear together.
we differ from QA-SRL in focusing on all words in the sentence rather than just verbs, and allowing free form questions instead of using templates.
neutral
train_98166
To define a "community," volunteers manually selected a set of users that fit with a theme that they had familiarity with.
using agglomerative clustering, we found groups of words that centered around frequent words used in particular regions (foreign words, dialects) or cultures (sociolects), associated with hobbies or interests (specific sports, music genres, gaming), or polarizing topics (political parties, controversial issues).
neutral
train_98167
Another problem that could be formulated is one in which the output summary is generated from multiple tables as proposed in a recent challenge (Wiseman et al., 2017) (this setting is out of the scope of this paper).
in addition to the standard BLEU (sBleu) (Papineni et al., 2002), a customized BLEU (cBleu) (Mei et al., 2016) has also been reported.
neutral
train_98168
As future work, we propose to tackle general tabular summarization where the schema can vary across tables in the whole dataset.
mixed hierarchical attention model is faster than fully dynamic hierarchical attention.
neutral
train_98169
Another alternative is update summarization, in which the summary should cover content in one set of documents, but not in another set that the user has already read (Dang and Owczarzak, 2008).
we call these facts targeted information units (TIU), because they are the pieces of information that must be part of the summary for the given information type.
neutral
train_98170
One possibility is that simply encouraging workers to spend more time writing their summaries improved performance, but fitting a linear model we find the correlation coefficient between time and F 1 is −0.06, indicating no linear correlation between time and accuracy across conditions.
table 1: Performance for a range of metrics (defined in § 4.4) as the number of targeted information units and the condition vary (b: Baseline, h: Highlight, p: Pin-Refine).
neutral
train_98171
We propose "targeted summarization" as an umbrella category for summarization tasks that intentionally consider only parts of the input data.
summaries for query-based summarization only cover parts of the text that are about the topic specified by a query (Rahman and Borah, 2015).
neutral
train_98172
"aktorino", "aktoro").
in this work, we investigate the generation of open domain Wikipedia summaries in underserved languages using structured data from Wikidata.
neutral
train_98173
This research is partially supported by the Answering Questions using Web Data (WDAqua) project, a Marie Skłodowska-Curie Innovative Training Network under grant agreement No 642795, part of the Horizon 2020 programme.
with these works, we explore the generation of sentences in an open-domain, multilingual context.
neutral
train_98174
4), and the scores are 27.95%, 28.00%, 28.80%, and 30.86%.
every summary has salient tokens which should be rewarded with more weight.
neutral
train_98175
For simplicity, we report the average over the 3 datasets.
the number of data points is relatively low and the learned θ might not be well-behaved (high θ scores for bad summaries) pushing the optimizer to explore regions of the feature space unseen during training where θ wrongly assumes high scores.
neutral
train_98176
Since UD focuses on the relations between content words, UD triples are able to represent key components of sentences more directly.
for example, numbers tend to be put in the same clusters since our word embeddings place them close to each other.
neutral
train_98177
To do this, we use a particular graph structure, called multipartite graph, to represent documents as tightly connected sets of topic related candidates (see Figure 1b).
unfortunately, these models suffer from several limitations: they aggregate multiple topic-biased rankings which makes their time complexity prohibitive for long documents, they require a large dataset to estimate word-topic distributions that is not always available or easy to obtain, and they assume that topics are independent of one another, making it hard to ensure topic diversity.
neutral
train_98178
TopicRank obtains the highest precision among the baselines, suggesting that its one-keyphrase-per-topic policy succeeds in filtering out topic-redundant candidates.
among the features proposed to address this problem in the literature, the position of the candidate within the document is most reliable.
neutral
train_98179
(2014) who attempt to understand the various dimensions that experts and non-experts consider while judging narrative similarity.
since some movies have multiple remakes, we obtain clusters of movie plots, each of which share the same narrative theme.
neutral
train_98180
Emojis can encode different meanings, and they can be interpreted differently.
fastText represents a valid approach when dealing with social media content classification, where huge amounts of data needs to be processed and new and relevant information is continuously generated.
neutral
train_98181
The network we used is composed of 101 layers (ResNet-101), initialized with pretrained parameters learned on ImageNet (Deng et al., 2009).
by modeling the semantics of emojis, we can improve highly-subjective tasks like sentiment analysis, emotion recognition and irony detection (Felbo et al., 2017).
neutral
train_98182
Previous research such as (Fernández-González and Gómez-Rodríguez, 2012) and (Qi and Manning, 2017) proves that the widely-used projective arc-eager transition-based parser of Nivre (2003) benefits from shortening the length of transition sequences by creating non-local attachments.
it consists in a modification of the non-projective Covington algorithm where: (1) the Left-Arc and Right-Arc transitions are parameterized with k, allowing the immediate creation of any attachment between j and the kth leftmost word in λ 1 and moving k words to λ 2 at once, and (2) the No-Arc transition is removed since it is no longer necessary.
neutral
train_98183
The resulting parser outperforms the original version and achieves the best accuracy on the Stanford Dependencies conversion of the Penn Treebank among greedy transition-based parsers.
there are still situations where sequences of No-Arc transitions are needed.
neutral
train_98184
Language changes serve as a sign that a patient's cognitive functions have been impacted, potentially leading to early diagnosis.
each of these transcripts were then broken down by sentences and interruptions by the interviewer.
neutral
train_98185
(2016) adopted a deep neural network language model.
instead, words that give structure to a sentence have the biggest impact on classification, such as definite articles and determiners (e.g., "the" and "that").
neutral
train_98186
We focus on two neural encoder-decoder models for spelling normalization, comparing them against the memorization baseline and to previous results from Pettersson et al.
we conclude that future work should include more rigorous evaluation, including both intrinsic and extrinsic measures where possible.
neutral
train_98187
More models adopt this idea to enhance the performance (Liu et al., 2015, 2016).
gated mechanism in existing studies not only shows its convenience for training (Srivastava et al., 2015), but also behaves as a tool to route the information (He et al., 2016).
neutral
train_98188
For all the experiments, we employ Word2Vec (Mikolov et al., 2013) to initialize the word vectors, which is trained on Google News with 100 billion words.
under the scheme of proportional addition (Ruder et al., 2017;Misra et al., 2016), all the features are shared with the same weight between every pair of tasks.
neutral
train_98189
The specific semantic parsing problem we study in this work is to map a natural language question to a SQL query, which can be executed against a given table to find the answer to the original question.
we evaluate our model on the WikiSQL dataset (Zhong et al., 2017).
neutral
train_98190
So, in the one-dimensional case, the transformations that a word vector can do are restricted to translation.
the best variation learns unique embeddings for only the most frequent words and uses hard clustering for the rest.
neutral
train_98191
This is likely due to the fact that SM was trained on the SALICON dataset; a higher correlation can probably be achieved by fine-tuning the salience model on the PASCAL Actions Fixation data.
(2016) report for their question answering task, indicating that verb classification is a task that can be performed more reliably than Das et al.
neutral
train_98192
If our models cannot localize and color the appropriate object, workers will be unable to select an appropriate image.
Boyd-Graber is supported by NSF Grant IIS-1652666.
neutral
train_98193
Inspired by a cognitive account of humor appreciation, we employ linguistic wordplay, specifically puns, in image descriptions.
some descriptions ( Fig.
neutral
train_98194
Surprisingly, we find that models are robust to shuffling the word order and limiting the word categories to nouns and adjectives.
there has been increasing interest in modeling natural language in the context of a visual grounding.
neutral
train_98195
GT+P results show that inter-modality constraints help in improving the results (about 2% F1) which indicates some of the visual relations successfully confirmed and boosted their corresponding relations in the text modality.
in the following we describe the information that we use from each modality (i.e.
neutral
train_98196
In order to validate the superiority of the deep audio features in video captioning, we illustrate the performance of different audio features applied in the CM-ATT model in Table 2.
more specifically, we utilize the ResNet model for image classification (He et al., 2016) and the VGGish model for audio classification .
neutral
train_98197
As shown in Figure 2, different from a stacked two-layer LSTM, the high-level LSTM here operates at a lower temporal resolution and runs one step every s time steps.
in formula, $o^{e_H}_j, h^{e_H}_j = e_H(f^{e_H}_j, h^{e_H}_{j-1})$, where $e_H$ denotes the high-level LSTM whose output and hidden state at step $j$ are $o^{e_H}_j$ and $h^{e_H}_j$.
neutral
train_98198
The same applies to named entity recognition: State-of-the-art systems are combinations of neural networks such as LSTMs or CNNs and conditional random fields (CRFs) (Strauss et al., 2016).
furthermore, Table 4 also presents the results of the NER models making use of the type-aggregated features instead of token-level gaze features.
neutral
train_98199
On one hand, the quality improvement attributed to eye movement signals on lower-level tasks implies that such signals do contain linguistic information.
the best improvements on F1-score over the baseline models are significant under one-sided t-tests (p < 0.05).
neutral