Dataset columns (viewer statistics):
id: string, length 7 to 12
sentence1: string, length 6 to 1.27k
sentence2: string, length 6 to 926
label: string class, 4 values
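The schema above can be sketched as a small parsing routine. The field names (id, sentence1, sentence2, label) come from the column header; the on-disk format (JSON Lines) and the helper names are assumptions for illustration, not part of the dataset's documentation:

```python
import json
from dataclasses import dataclass

@dataclass
class ExamplePair:
    """One row of the NLI-style dataset: a sentence pair with a label."""
    id: str
    sentence1: str
    sentence2: str
    label: str

def parse_rows(lines):
    """Parse JSON Lines records into ExamplePair objects (format assumed)."""
    return [ExamplePair(**json.loads(line)) for line in lines]

# Hypothetical row mirroring the first example below (text truncated).
rows = parse_rows([
    '{"id": "train_98600", "sentence1": "Moreover, the statistics ...", '
    '"sentence2": "in the table, the numbers ...", "label": "neutral"}'
])
print(rows[0].id, rows[0].label)  # train_98600 neutral
```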
train_98600
Moreover, the statistics only need to be computed once and can be re-used for training many different smaller-corpus RNNLMs.
in the table, the numbers in the column headings indicate the token count of the sequential corpus used to train the regularized methods.
neutral
train_98601
In this case, we train different KB embeddings for different percentages of overlapped relations, and then apply the embeddings into the constraints.
we then discard named entities in text corpus if they are not shown in KB, so that we can directly test the influence of our KB constraint model.
neutral
train_98602
The data is built by aligning Wikidata (Vrandečić, 2012) relations with NYT corpus, as a result of 99 possible relations.
this may be because the way that we put constraints only over sampled sentences in a batch may hurt the regularization of decoder, since sampled unique pairs may be less than sample sentences.
neutral
train_98603
Based on the decoder, we make predictions for each unique entity pair.
we also thank the anonymous reviewers for their valuable comments and suggestions that help improve the quality of this manuscript.
neutral
train_98604
As shown in Figure 1, in the KB, we have entities Pink Floyd, Animals, etc., with some existing relations notable work and has member in the KB.
we use this dataset to compare with previous work directly (Marcheggiani and Titov, 2016).
neutral
train_98605
Also, the errors introduced by round-trip translation are relatively clean, but they represent only a subset of the domain of real-world errors.
we show that models trained exclusively on minimally filtered English wikipedia revisions can already be valuable for the GEC task.
neutral
train_98606
In Table 4, we show an example of iterative decoding in action.
each system may have advantages on specific domains.
neutral
train_98607
Hence, we report results using only this scheme.
the model trained on all data performs best on the JFLEG set, which has a different distribution of errors relative to CoNLL-2014 (Napoles et al., 2017).
neutral
train_98608
found that large-data LSTM behavior reflected this island constraint, with attenuated wh-licensing interactions for complex NPs like (11-b)-(11-c) and for analogous complex NPs involving subject extractions.
the results of this experiment can be seen in Figure 6.
neutral
train_98609
These materials paint a slightly more optimistic picture than the results of Section 4.4 for the RNNG's ability to propagate a gap expectation from a filler down one level of clausal embedding.
we applied the same method, adapting Wilcox et al.
neutral
train_98610
A word that achieves minimal loss after swapping with P AD is then selected as the word to be replaced.
furthermore, by iterative adversarial training using our black-box RL agent, we can significantly improve the robustness of the dialogue system.
neutral
train_98611
This leads to several difficulties including 1) How to lead the target agent to a bad state and 2) how to force the target agent to make a wrong decision.
in a competitive negotiation dialogue setting, two agents are negotiating with each other over a set of items.
neutral
train_98612
Also, our agent achieves a relatively high positive advantage rate at 84.45% and 69.35% respectively.
an adversarial sentence or example is not enough to conduct an attack in dialogue systems.
neutral
train_98613
First, evaluating the effect of changing the KG on the score of the target fact (ψ(s, r, o)) is expensive since we need to update the embeddings by retraining the model on the new graph; a very time-consuming process that is at least linear in the size of G. Second, since there are many candidate facts that can be added to the knowledge graph, identifying the most promising adversary through search-based methods is also expensive.
although CRIaGE is primarily applicable to multiplicative scoring functions [Nickel et al., 2011, Socher et al., 2013, Trouillon et al., 2016], these ideas apply to additive scoring functions [Bordes et al., 2013a, Wang et al., 2014, Lin et al., 2015, Nguyen et al., 2016] as well, as we show in Appendix A.3.
neutral
train_98614
Recently, great attention has been paid to neural networks.
the comparisons show that the proposed model has a greater generalization ability.
neutral
train_98615
Semantic role labeling (SRL) is a task to recognize all the predicate-argument pairs of a given sentence and its predicates.
we leverage a stacked BiLSTM network LST M e to be our encoder.
neutral
train_98616
We measure the speed in terms of sentences per second.
in the case of BiLSTM-based sequence tagging parsers, for a given word w_t, the output label as encoded by Gómez-Rodríguez and Vilares (2018) only reflects a relation between w_t and w_{t+1}.
neutral
train_98617
Distances can be used for sequence tagging, providing additional information to our base encoding (Gómez-Rodríguez and Vilares, 2018). The proposed auxiliary tasks provide different types of contextual information.
we propose to learn this decomposed label space through a multitask learning setup, where each of the subspaces is considered a different task, namely task N , task C and task U .
neutral
train_98618
Tables 3, 4 and 5 compare our parsers against the state of the art on the PTB, CTB and SPMRL test sets.
we will evaluate the impact of the different methods intended to perform structured inference ( §3.4).
neutral
train_98619
Named entity recognition (NER) is a common task in Natural Language Processing (NLP), but it remains more challenging in Chinese because of its lack of natural delimiters.
(2018c) leverage character-level BiL-STM to extract higher-level features from crowdannotations.
neutral
train_98620
Both on the development and test sets, our system records significant gain, in comparison to Teran-ishi+17:+ext, on Simple coordination sentences.
this means that the model produces a score based on the end of the left conjunct and the beginning of the right conjunct.
neutral
train_98621
Therefore, it can mistakenly segment coordinations when false sub-coordinators appear in a sentence.
we categorize sentences into the following four groups 5 .
neutral
train_98622
3 Japanese PASA and ENASA Japanese predicate (event-noun) argument structure analysis is a task to extract arguments for certain predicates (event-nouns) and assign three case labels, NOM, ACC and DAT .
we call these nouns that refer to events event-nouns, for example, a verbal noun (sahen nouns) such as houkoku "report" or a deverbal noun (nominalized forms of verbs) such as sukui "rescue."
neutral
train_98623
No dependency relations between the predicate (event-noun) and argument candidates.
in NTC 1.5, if there is a predicate phrase, such as "verbal noun + suru," suru is annotated as a predicate word.
neutral
train_98624
However, Multi-ALL+DEP compared unfavorably with Multi-ALL even though it was the best PASA architecture.
our single model is based on an end-to-end approach (Zhou and Xu, 2015; Ouchi et al., 2017; Matsubayashi and Inui, 2018).
neutral
train_98625
(2017) proposed an end-to-end model based on the model using eight-layer bi-directional long short-term memory (LSTM) proposed by Zhou and Xu (2015) and considered the interaction of multiple predicates simultaneously using a Grid RNN.
the model improved the performance of arguments that have no syntactic dependency with predicates and achieved a state-of-the-art result on Japanese PASA.
neutral
train_98626
According to the description of the dependency tree in Section 3.1, leaf nodes are isolated discourse units and no other discourse units depend on them.
we utilize five available fake news datasets in this study.
neutral
train_98627
In total, we have 3360 fake and 3360 real documents.
the lower the value of this property, the more coherent a document is likely to be.
neutral
train_98628
The position of s j in the original sequential order is simply j 2 .
intuitively, the lower the value of Property 2 for a document, the more coherent that document should be.
neutral
train_98629
We report model performance on the test-set using automatic metrics as well as human evaluation.
the detection conditions of the discourse structure, and the sequence of operations for generating a new sentence pair from it.
neutral
train_98630
Sentence fusion is challenging because it requires understanding the discourse semantics between the input sentences.
* Work done during internship at Google AI.
neutral
train_98631
Anaphora and verb phrase coordination are more challenging, but still require matching of the same noun (the named entity or the subject).
thadani and McKeown (2013) constructed 1,858 examples from summarization tasks.
neutral
train_98632
If none of the specificity metrics are included, topic relevance scores improve.
the decoder becomes Zhang et al.
neutral
train_98633
A similar trend can also be observed in Table 8, where LOG-CaD could generate the locational expressions such as "philippines" and "british" given the different contexts.
then, we analyze the impact of global context under the situation where local context is unreliable.
neutral
train_98634
While their method use local context for disambiguating the meanings that are mixed up in word embeddings, the information from local contexts cannot be utilized if the pre-trained embeddings are unavailable or unreliable.
each entry in the dataset consists of (1) a phrase, (2) its description, and (3) context (a sentence).
neutral
train_98635
As future work, we plan to modify our model to use multiple contexts in text to improve the quality of descriptions, considering the "one sense per discourse" hypothesis (Gale et al., 1992).
compared to the existing methods for non-standard English explanation (Ni and Wang, 2017) and definition generation (Noraset et al., 2017; Gadetsky et al., 2018), our model appropriately takes important clues from both local and global contexts.
neutral
train_98636
In what follows, we explain existing tasks that are related to our work.
the targets of paraphrase acquisition are words/phrases with no specified context.
neutral
train_98637
Very little work adequately exploits unannotated data-such as discourse markers between sentences-mainly because of data sparseness and ineffective extraction methods.
one of the most popular frameworks aims to induce sentence embeddings as an intermediate representation for predicting relations between sentence pairs.
neutral
train_98638
For instance, similarity judgements (paraphrases) or inference relations have been used as prediction tasks, and the resulting embeddings perform well in practice, even when the representations are transfered to other semantic tasks (Conneau et al., 2017).
out of the 42 single word PDTB markers that precede a comma, 31 were found by our rule.
neutral
train_98639
For the local evaluation metric, we mainly consider Vocab@-3% and Vocab@-5%.
nLI datasets involve more complicated reasoning and interaction, which requires a thousand-level vocabulary.
neutral
train_98640
In this paper, we provide a more sophisticated variational vocabulary dropout (VVD) based on variational dropout to perform vocabulary selection, which can intelligently select the subset of the vocabulary to achieve the required performance.
we are interested in understanding how the end-task classification accuracy is related to the vocabulary size and what is the minimum required vocabulary size to achieve a specific performance.
neutral
train_98641
Loss function Ψ(·): First, to improve readability, we introduce v̄_w as a short notation of τ(V, w), namely v̄_w = τ(V, w).
we adopted Adam (Kingma and Ba, 2014) as our optimization algorithm to minimize Eq.
neutral
train_98642
One might be surprised by the improved results of KVQ-FH, since the model sizes of KVQ-FH were relatively small compared with the original embeddings.
here, we consider utilizing a squared loss function Ψ_lsq, which can be written as the summation of the squared losses over an individual embedding vector e_w: where C_w is a weight factor for each word.
neutral
train_98643
We would also like to explicitly quantify the uncertainty captured in our framework under different sampling strategies or MCMC-SG methods (e.g., similar to Mc-Clure and Kriegeskorte (2016); Teye et al.
our BNNP consistently outperforms the single task BiLSTM baseline (Kiperwasser and Goldberg, 2016), while outperforming the BiAFFINE parser (Dozat et al., 2017) by up to 3% on Vietnamese and Irish.
neutral
train_98644
Multi-task Learning (MTL) (Caruana, 1997) is an inductive transfer mechanism which leverages information from related tasks to improve the primary model's generalization performance.
for stage-1, the bandit controller iteratively selects batches of data from different tasks during training to learn the approximate importance of each auxiliary task (Graves et al., 2017).
neutral
train_98645
Once this is specified, we aim to find k adversarial distributions {α^(1), ..., α^(k)}, such that each α^(i) maximizes the distance from the original α but does not change the output by more than ε.
this is the exception rather than the rule.
neutral
train_98646
The reward function provided by TextWorld is as follows: +1 for each action taken that moves the agent closer to finishing the quest; -1 for each action taken that extends the minimum number of steps needed to finish the quest from the current stage; 0 for all other situations.
we hypothesize that it will cut down on the amount of exploration needed during testing time, theoretically allowing it to complete quests faster; one of the challenges of text adventure games is that the quests are puzzles and even after training, execution of the policy requires a significant amount of exploration.
neutral
train_98647
When we further reduce the applied layers to the low-level two (Row 5), the above phenomenon still holds.
future work includes combining our information aggregation techniques together with other advanced information extraction models for multihead attention .
neutral
train_98648
To address these problems and to unify previous efforts, in a recent work, Jurgens et al.
(2018) the task of citation intent classification.
neutral
train_98649
For contextual representations, we use ELMo vectors released by Peters et al.
overall, the size of the data for scaffold tasks on the ACL-ARC dataset is about 47K (section title scaffold) and 50K (citation worthiness) while on SciCite is about 91K and 73K for section title and citation worthiness scaffolds, respectively.
neutral
train_98650
We thank Kyle Lo, Dan Weld, and Iz Beltagy for helpful discussions, Oren Etzioni for feedback on the paper, David Jurgens for helping us with their ACL-ARC dataset and reproducing their results, and the three anonymous reviewers for their comments and suggestions.
suggesting the effectiveness of the scaffolds in informing the main task of relevant signals for citation intent classification.
neutral
train_98651
This suggests that while the bulk of the signal is mined from the pair-context interactions, there is also valuable information in other interactions as well.
we must determine the frequency with which each triplet appears in each role.
neutral
train_98652
Data Augmentation results for intent and question classification are shown in Table 5.
we perform experiments with multiple augmentation settings for the following classifiers: 1.
neutral
train_98653
Interestingly, incorporating contextual descriptors did not help the prediction of persuasion strategies.
simi- lar principles might also occur for social identity and scarcity where the use of words such as "we", "our" and "expire", "left" can reveal a lot about the persuasion strategies.
neutral
train_98654
• The principle of Emotion says that making messages full of emotional valence and arousal affect (e.g., describing a miserable situation or a happy moment) can make people care and act, e.g., "The picture of widow Bunisia holding one of her children in front of her meager home brings tears to my eyes..", similar to Sentiment and Politeness used by Althoff et al., (2014) and Tan et al., (2016), and Pathos used by Hidey et al., (2017).
in this context, we did not observe enough instances of them.
neutral
train_98655
We first apply the structural encoder to the input graphs.
the latter recently inspired a graphto-sequence architecture for AMR-to-text generation (Beck et al., 2018).
neutral
train_98656
The trained language models are then applied to the held-out participant's sequence of information units and various language model (LM) features are extracted (Table 2c).
in this task, participants are asked to describe the content of a line drawing of a kitchen scene, where a boy can be seen standing on a stool, trying to reach a cookie jar, while a woman is preoccupied washing dishes.
neutral
train_98657
As shown in Table 5, also in this setting the inclusion of knowledge improves the performance.
when using ELMo with BM we see an improvement in recall.
neutral
train_98658
For example, the token "time" was used on average 0.32 times per document in the Newswire corpus and just over 4 times per document in the THYME corpus (Table 2).
† https://github.com/bethard/anaforatools This section first discusses Chrono's "out-of-thebox" performance on the THYME Evaluation Corpus prior to any code changes.
neutral
train_98659
This phrase is first parsed by the formatted date/time module to identify the HourOfDay "3" and the Minute-OfHour "05" entity.
different domains are expected to differ in their lexicon.
neutral
train_98660
The following sub-sections discuss issues associated with variation in formatted dates, times, and long temporal phrases.
improvements made to Chrono using the THYME Training Corpus lead to a 0.27 and 0.24 increase in precision and recall, respectively, with a 0.26 increase in F1 measure for the Evaluation Corpus (Table 3).
neutral
train_98661
Since Chrono did not consider sentence boundaries, this line break was removed in the preprocessing phase and the "2" that numbers the list item was parsed as a DayOfMonth associated with "December".
chrono specifically includes the period in the span only if there is a period after each letter in strings (e.g.
neutral
train_98662
To train these, we generalize token-wise annotations to sentences such that a sentence is labeled 1 if it contains any evidence tokens, and 0 otherwise.
given the critical role published reports of trials play in informing evidence-based care, organizations such as the Cochrane collaboration and groups at evidence-based practice centers (EPCs) are dedicated to manually synthesizing findings, but struggle to keep up with the literature (Tsafnat et al., 2013).
neutral
train_98663
Concretely, where w_α ∈ R^{1×d} and H_a ∈ R^{d×|a|}, denoting hidden size by d and article length by |a|.
the best performing model (and hence current leader) is the variant that uses pretrained, conditional attention, which aligns with the average results in table 2.
neutral
train_98664
To summarize this history, content-aware models (Chen et al., 2016a;Kim et al., 2017) similar to attention models in machine translation (Bahdanau et al., 2014) have been proposed.
the proposed models do not need any manual time-decay function, but learn a timedecay tendency directly by introducing a trainable distance vector, and therefore have good SLU accuracy.
neutral
train_98665
(2018), and Tran et al.
examples of DAs can be found in Table 1.
neutral
train_98666
(Ravi and Kozareva, 2018) bypasses the need for complex networks with huge parameters but its overall accuracy is 4.2% behind our system, despite being 0.2% higher on SwDA.
learning long-range dependencies is a challenge because of noisier and longer path lengths in the network.
neutral
train_98667
This requires the memory reader to learn how to infer relationships across otherwise connected attributes.
the encoder takes a sequence of utterances as input.
neutral
train_98668
The product of first level attention α i , second level attention β ij and third level attention γ ijl gives the final attention score of the value v r ij l in the KB memory.
to store multiple queries, we require 3 levels in our multi-level memory as compared to 2 levels in the other datasets, since they don't have more than one query.
neutral
train_98669
Researchers in this community have attempted to predict the topic directly from the audio signals using phoneme based features.
we evaluate our model on Switchboard (SwBD) corpus (Godfrey et al., 1992) and show that our model supersedes previously applied techniques for topic spotting.
neutral
train_98670
This is another form of doubly-recurrent neural networks.
using a doubly-recurrent architecture, specifically the surface decoder and the ancestor decoder, can improve perplexity scores for top-down word generation over the left-to-right decoder.
neutral
train_98671
It has 2.06M utterances, and we split into training, development and test sets with ratio of 8:1:1.
when we add a new domain embedding s k+1 to this personalization module, the model tends to learn to move this vector to a different part of the vector space such that its easier to distinguish the new domain from all other domains.
neutral
train_98672
In the domain of dialog managers, Mrkšić et al.
for example, for a request such as Set an alarm for tomorrow at 7am, a first step in fulfilling such a request is to identify that the user's intent is to set an alarm and that the required time argument of the request is expressed by the phrase tomorrow at 7am.
neutral
train_98673
We expect the trained model to be such that where SD is the source document, and S i are the the utterances in the conversation.
we have not reported scores for certain multiplicative variants because their performance is significantly worse.
neutral
train_98674
Query-Sensitive History Summariser: The histories of the prober and responder are passed through a BiGRU to get context-sensitive vectors h^P_j ∈ R^{2d} and h^R_k ∈ R^{2d} for j ∈ [J] and k ∈ [K].
for our simple attention model, the average conicity of the vectors h R and h P , when computed in a similar fashion as mentioned above were generally high (about 0.8) (see Table 6).
neutral
train_98675
Since the script and sequence structure of English is very different from these low-resource languages, the addition of English to the limited target language training data yields a considerably noisy corpus.
we design a new neural architecture which integrates multi-level adversarial transfer into a Bi-LSTM-CRF to improve low-resource name tagging.
neutral
train_98676
The second is based on multitask learning via a weight sharing encoder (Yang et al., 2016; Lin et al., 2018).
to explore the impact of the size of target language annotations, we use 0, 10%, 50%, or 100% annotated training data from Amharic.
neutral
train_98677
Since the extracted sentence pairs are only partial translations, incorporating them as they are into the training data for NMT may mislead the training of the model due to their noisiness.
all our NMT systems, including baselines, were the Transformer model (Vaswani et al., 2017) trained with Marian (Junczys-Dowmunt et al., 2018).
neutral
train_98678
Even without an accurate translation model, we still have the possibility of extracting sentence pairs from unrelated source and target monolingual data.
with the harmonic mean F t between c s and c t , the algorithm searches for target sentences containing as many words translating source tokens as possible while penalizing the retrieval of very long target sentences that may also contain many tokens having no counterparts in the source sentence.
neutral
train_98679
Furthermore, we also observed the complementarity of backtr and copy (backtr+copy), with 4.2 and 2.3 BLEU points of improvements for en→de and en→tr, re-spectively, over the baseline system.
marie and Fujita (2018) presented a method for inducing phrase tables from monolingual data using a weaklysupervised framework.
neutral
train_98680
We speculate that the standard phrase table trained only on 18k sentence pairs is not strong enough to bias the extraction of partial translations.
most of them rely on the availability of document-level information, in comparable corpora for instance, and usually for one specific domain, to efficiently extract accurate sentence pairs (Abdul Rauf and Schwenk, 2011).
neutral
train_98681
Our model also outperforms the supervised models in Ukrainian and Latvian.
our experiments on 68 treebanks (38 languages) in the Universal Dependencies corpus achieve a high accuracy for all languages.
neutral
train_98682
3 To show the generality of different methods, we consider European, non-European and low-resource languages.
finally, we summarize our findings with possible future directions in Section 6.
neutral
train_98683
Consider the task of translating for an extremely low-resource language pair.
the adversarial training tries to force the sentence representation generated by the encoder of similar sentences from different input languages to have similar representations.
neutral
train_98684
We choose four language pairs for this purpose: Arabic↔French which are distant languages, and Ukrainian↔Russian which are similar.
one recent exception is Neubig and Hu (2018) who trained many-to-one models from 58 languages into English.
neutral
train_98685
The resulting cross-lingual embeddings can be used to share supervision for lexical classification tasks across languages, when annotated data is not available in one language.
even for our development language German, results suggest that there is room for improvement.
neutral
train_98686
We trained monolingual ELMo and FastText with default parameters.
for monolingual training, we used the 1 Billion Word benchmark (Chelba et al., 2014) for English, and equivalent subsets of ∼400 million tokens from WMT'13 (Bojar et al., 2013) news crawl data.
neutral
train_98687
Intuitively, a sentence is less ambiguous than stand-alone words since the words are interpreted within a specific context, so a mapping learned at the sentence-level is likely to be less sensitive to individual word inconsistencies.
this is an interesting observation and may indicate that contextualized dictionaries result in a more balanced mapping, while context-independent embeddings overfit the mapping to the specific direction used for alignment.
neutral
train_98688
Intuitively, a sentence is less ambiguous than stand-alone words since the words are interpreted within a specific context, so a mapping learned at the sentence-level is likely to be less sensitive to individual word inconsistencies.
using such words in the alignment dictionary may result in suboptimal overall mapping.
neutral
train_98689
These mixed results suggest that while crosslingual transfer in neural network models is a promising direction, the best blend of polyglot and language-specific elements may depend on the task and architecture.
we also draw a sharp distinction between multilingual and polyglot models.
neutral
train_98690
When training a multilingual parser, it could be interesting to explicitly represent these parameters, and to integrate them into the parsing model.
we would like to use multilingual word embeddings to make lexical information accessible to the parser, making it more realistic.
neutral
train_98691
of the positively labeled sentences.
our model extracts sentences from a given document and further compresses these sentences by deleting words.
neutral
train_98692
Our model selects a set of sentences from the input document, and compresses them by removing unnecessary words, while keeping the summaries informative, concise and grammatical.
abstractive systems require natural language generation and semantic representation, problems that are inherently harder to solve than just extracting sentences from the original document.
neutral
train_98693
An increase in the proportion of uncommon words makes the models also generate uncommon words, which are not likely to match the ground-truth, thereby reducing the recall.
evaluation Metrics: Studies on textsummarization evaluate their system using Rouge; therefore, we report Rouge-1 (unigram), Rouge-2 (bigram), and Rouge-L (longest-common substring) as the quantitative evaluation of models on our corpus.
neutral
train_98694
We then sample 255k instances that have all associated short texts in them.
click-bait, a ShortcutText, contains mostly elements that create anticipation, thereby making a reader click on the link; however, the reader comes to regret their decision when the story does not match the clickbait's impression (Blom and Hansen, 2015).
neutral
train_98695
For example, as shown in Figure 1 and assuming that stemming is performed, the verse Isaiah 25:8 would be properly linked to 1 Corinthians 15:54 due to the two verses sharing the terms 'death' and 'swallow'.
but with such a father and mother, and such low connections, I am afraid there is no chance of it."
neutral
train_98696
We also examine what our model got wrong in proposing Bible cross-references.
the Gram-Schmidt based anchors produce better true positive rates for very low false positive rates.
neutral
train_98697
In general, while both summarization and chat generation tasks often use automatic evaluation metrics to evaluate generated sentences, the scores tend to be much lower in the chat generation task.
i heard that major beer companies are planning to increase beer production by about 10% this summer compared to last summer.
neutral
train_98698
Such situations are growing more popular in recent years with the rise of voice-controlled conversation systems such as intelligent assistants (e.g., Siri, Alexa, and Cortana) (Jiang et al., 2015;Sano et al., 2016;Akasaki and Kaji, 2017) and smart speakers (e.g., Amazon Echo and Google Home).
on a suicide bombing that happened at a concert in England, the homeless action around Separate (Gen+MMI): I heard that the homeless action around the scene of a suicide bombing that happened in England was praised.
neutral
train_98699
(2016) proposed two learning based methods for an LSTM encoder-decoder: LenEmb and LenInit.
for English, the proposed method also generated headlines with the desired length precisely and achieved the top ROUGE scores on the DUC-2004 test set.
neutral