Column summary:
  id         string, 7–12 characters
  sentence1  string, 6–1.27k characters
  sentence2  string, 6–926 characters
  label      string, 4 classes
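The column summary above describes a four-field record schema. Below is a minimal sketch of how one such row might be bundled and sanity-checked in Python. The helper name `make_record` and the length checks are assumptions taken from the column statistics; the full four-class label set is not visible in this excerpt, where every row is labeled "neutral", so the label is not constrained here.

```python
def make_record(record_id, sentence1, sentence2, label):
    """Bundle one id / sentence1 / sentence2 / label row into a dict,
    checking it against the column statistics in the header above."""
    if not (7 <= len(record_id) <= 12):
        raise ValueError(f"id length out of range: {record_id!r}")
    if not (6 <= len(sentence1) <= 1270):
        raise ValueError("sentence1 length out of range")
    if not (6 <= len(sentence2) <= 926):
        raise ValueError("sentence2 length out of range")
    return {
        "id": record_id,
        "sentence1": sentence1,
        "sentence2": sentence2,
        "label": label,
    }

# First row of the excerpt, with both sentences shortened for readability.
row = make_record(
    "train_94600",
    "With the help of the proposed pruning method, our model converges faster.",
    "Roth and Lapata (2016) introduced syntactic paths for dependency SRL.",
    "neutral",
)
print(row["id"], row["label"])
```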
train_94600
With the help of the proposed pruning method, our model can effectively alleviate the imbalanced distribution of arguments and non-arguments, achieving faster convergence during training.
Roth and Lapata (2016) introduced syntactic paths to guide neural architectures for dependency SRL.
neutral
train_94601
During training and inference, the syntactic rule takes effect by excluding all candidate arguments whose predicate-argument relative position in parse tree is not in the list of top-k frequent tuples.
to investigate the maximum contribution of syntax to multilingual SRL, we perform experiments using the gold syntactic parses officially provided by the CoNLL-2009 shared task instead of the predicted ones.
neutral
train_94602
Wikipedia examples are longer paragraphs of 66 words on average.
we had initial success sharing the parameters of the two towers which allows training much deeper models without increasing the parameter count.
neutral
train_94603
(2) Are signals from relevance and semantic matching complementary?
additionally, the use of CNN layers allows us to explicitly control the window size for phrase modeling, which has been shown to be critical for relevance matching (Dai et al., 2018; Rao et al., 2019).
neutral
train_94604
Second, we employ the normalized weights to sum the extracted representations as the final syntactic representation for word w_i, denoted as rep_i^syn.
this kind of sharing strategy somewhat weakens the representation framework, which maintains distinct model parameters for each task, due to the neutralization of knowledge introduced by the auxiliary task.
neutral
train_94605
Note that these sentential and phrasal paraphrases are obtained by automatic methods.
they aim to estimate features in a single sentence, which has little interaction with semantic equivalence assessment in a sentence pair.
neutral
train_94606
To further improve time efficiency, we optimize objectives of the variational E-step and M-step simultaneously instead of alternatively.
, K} is the vector for the encoding of the k-th column name in the table.
neutral
train_94607
A key part of the agent is a world model: it takes a percept (either an initial question or subsequent feedback from the user) and transitions to a new state.
under the MISP framework, we design an interactive semantic parsing system ( Figure 2), named MISP-SQL, for the task of text-to-SQL translation.
neutral
train_94608
The number of heads is set to 8.
we think that the low performance on big AMR graphs is mainly attributable to two reasons: (1) big AMR graphs are usually mapped to long sentences, while the seq2seq model tends to stop early for long inputs.
neutral
train_94609
We employ a multi-layer graph attention network to propagate sentiment features from important syntax neighbourhood words to the aspect target.
the basic idea is that at layer 0 the hidden state for an aspect target node h_t^0 depends only on the target's local features, and at each layer l, information related to the target from the l-hop neighbourhood is added into the hidden state by the LSTM unit.
neutral
train_94610
BERT-AVG uses the average of the sentence representations to train a linear classifier.
sentiment features can be propagated recursively from the leaf nodes to the root node.
neutral
train_94611
The comparison here suggests that LSTM and self-attention neural networks are able to capture better implicit structures than handcrafted features.
one idea to resolve this issue is to design an alternative mechanism to capture such useful structural information that resides in the input space.
neutral
train_94612
First of all, we compare our model EI with the work proposed by Zhang et al.
in general, our model EI outperforms all the baselines.
neutral
train_94613
In this section, we investigate the effects of different routing iteration numbers.
for aspect term food, the sentiment polarity is positive, but for aspect term price the polarity is negative, while for aspect term drinks the polarity is neutral.
neutral
train_94614
The main challenge in aspect-level sentiment classification is that one sentence expresses multiple sentiment polarities, resulting in overlapped feature representation.
we propose a novel capsule network and iterative EM routing method with interactive attention (IACapsNet) to solve this problem.
neutral
train_94615
For instance, all baselines miss proper noun "wendys" in the first example.
the other type of error is the output that contains the original sentiment, which can be attributed to the failure of the separation of content and sentiment.
neutral
train_94616
Trait goes beyond these models by incorporating sentiments and attributes in a flexible way, which eliminates the model's dependency on specific attribute types.
for each user, we use 80% of reviews for training and 20% for testing.
neutral
train_94617
Throughout, * , †, and ‡ indicate significance at 0.05, 0.01, and 0.001, respectively.
it is not recognized by existing approaches.
neutral
train_94618
We construct 4 strategies relying on smooth transitions from a low state λ of each task weight varying with the number of epochs.
since the presence of a polarity implies the presence of at least one entity, we expect that a joint prediction will perform better than an entity-based predictor only.
neutral
train_94619
The entire network can be trained end-to-end to learn parameters α and β in addition to the sentence embedding and classifier, or the network can be optimized to learn only the weights α and β and weights of the classifier.
works such as ELMo (Peters et al., 2018) and BERT (Devlin et al., 2018) propose deeply connected layers to learn sentence embeddings by exploiting bi-directional contexts.
neutral
train_94620
all the aspect terms of an input sentence in one forward pass.
comparing with DIFD-AT+MMD and DIFD-AT+CORAL, DIFD is more robust, considering that it outperforms the two methods in most experimental settings.
neutral
train_94621
All models are optimized using Adam optimizer (Kingma and Ba, 2014) with gradient clipping equals to 5 (Pascanu et al., 2012).
numerous existing models (Tang et al., 2016b; Tay et al., 2017; Fan et al., 2018; Xue and Li, 2018) typically utilize an aspect-independent encoder to generate the sentence representation, and then apply the attention mechanism (Luong et al., 2015) or a gating mechanism to conduct feature selection and extraction, although feature selection and extraction may then be based on noisy representations.
neutral
train_94622
Based on the deep transition Gated Recurrent Unit (GRU) (Pascanu et al., 2014; Miceli Barone et al., 2017; Meng and Zhang, 2019), an aspect-guided GRU encoder is thus proposed, which utilizes the given aspect to guide the sentence encoding procedure at the very beginning stage.
the results in 6 show that all modules have an overall positive impact on the sentiment classification.
neutral
train_94623
In this solution, two major challenges exist which are illustrated as follows.
during clause and word selection, a sentiment rating predictor is employed to provide reward signals to guide the above clause and word selection.
neutral
train_94624
However, this soft-attention mechanism has the limitation that the softmax function always assigns small but non-zero probabilities to noisy clauses, which will weaken the attention given to the few truly significant clauses for a particular aspect.
besides, λ_1 and λ_2 are weight parameters.
neutral
train_94625
On the other hand, when model training is finished, i.e., both high-level and low-level policy finish all their selections, the goal of sentiment rating predictor is to perform DASC.
during clause and word selection, a sentiment rating predictor is employed to provide reward signals to guide the above clause and word selection.
neutral
train_94626
This amounted to 103 rebuttal-speech pairs, since not all 55 GP-claims were mentioned in two speeches.
we thank the anonymous reviewers for their valuable comments, and Hayah Eichler for creating the initial GPR-KB.
neutral
train_94627
Next, we establish baseline results for determining whether a GP-claim is mentioned in a speech, and compare them to results obtained for iDebate claims.
these works present a neural-based generative approach, and experiment with user-written posts.
neutral
train_94628
Here bert is trained only on explicitly-mentioned claims, with respect to (ostensibly) semantically similar sentences.
to compare the bert baseline to others, the precision-recall curves for both prior and w2v were computed over speeches from bert-test.
neutral
train_94629
A related task, that of generating a response which need not be a rebuttal or even argumentative, has been the subject of much research, especially in the context of dialog systems, chat bots, and question answering.
given a training set, the a-priori probability that a GP-claim will be mentioned in a speech can be computed.
neutral
train_94630
(2016), and low-rank adaptation methods (Jaech and Ostendorf, 2018;Kim et al., 2019), but these did not improve the model performance.
directly incorporating attributes into the weight matrix may harm the performance of the model.
neutral
train_94631
Ex.1 shows an anecdotal example illustrating this behavior that the emotion cause clause c −1 adjoins the emotion word happiness.
the regularization term is formally expressed as follows, where m is a hyper-parameter for the margin.
neutral
train_94632
Words or phrases that discourage critical thought and meaningful discussion about a given topic.
even taking into account that γ is a pessimistic measure (Mathet et al., 2015), these values are low.
neutral
train_94633
Labeling the object of the propaganda campaign as either something the target audience fears, hates, finds undesirable or otherwise loves or praises (Miller, 1939).
in our precision and recall versions, we give partial credit to imperfect matches at the character level, as in PD.
neutral
train_94634
Popular CNN architectures use convolution with a fixed window size over the words in a sentence.
for example, in the sentence "it is also stupider", we see "is" having nearly the majority share, even though it tells nothing about the sentiment of the sentence.
neutral
train_94635
Very deep CNN architectures were proposed based on character level features (Zhang et al., 2015) and word level features (Conneau et al., 2016) which significantly improved the performance in text classification.
we find max pooling to be very arbitrary in its selection of crucial features, and hence it contributes minimally to the overall task.
neutral
train_94636
The outputs from the first subnetwork and second subnetwork are concatenated and connected to the output layer through a dense connection.
this does not translate to the same way of selecting the most relevant features from convolved features in texts.
neutral
train_94637
Some recent studies in NLP have investigated the effect of interactions on the overall persuasive power of posts in social media (Tan et al., 2016;Hidey and McKeown, 2018).
recent studies in computational argumentation have mainly focused on the tasks of identifying the structure of the arguments such as argument structure parsing (Peldszus and Stede, 2015;Park and Cardie, 2014), and argument component classification (Habernal and Gurevych, 2017;Mochales and Moens, 2011).
neutral
train_94638
We then follow the same procedure above, for fine-tuning.
incorporating the flat representation of the larger context along with the claim representation consistently achieves significantly better (p < 0.001) performance than the claim representation alone.
neutral
train_94639
In this section, we conduct experiments to validate our model which we denote as CDT on benchmark datasets.
to verify this assumption, we trace from the input embeddings to the final embedding.
neutral
train_94640
The state-of-the-art methods for representation learning have integrated dependency trees with neural networks.
the BiLSTM and the GCN can be interpreted as message passing networks.
neutral
train_94641
These observations motivate us to develop a neural model which can operate on the dependency tree of a sentence, with the aim to make accurate sentiment predictions with respect to specific aspects.
Mou et al. (2015) exploit the short paths of dependency trees to learn representations of sentences using convolutional neural networks, while preserving dependency information.
neutral
train_94642
• Target-agree (TA): similar to ST, but uses a forward NMT model with right-to-left decoder .
since the decision boundary does not have analytical form due to nonlinearity, computing the geometric distance is intractable.
neutral
train_94643
In real experiments, losses are averaged over all training data.
we design a more implicit loss to help the student refrain from incoherent translation results by acting towards the teacher at the hidden-state level, where φ is a penalty function.
neutral
train_94644
We record the number of ROOT rule probabilities instead of predicting them since there are only a small number of such rules.
in this work, we propose a novel universal grammar induction approach that represents language identities with continuous vectors and employs a neural network to predict grammar parameters based on the representation.
neutral
train_94645
These assignments are non-arbitrary; indeed, Corbett (1991, Ch.
although this humorous take on German grammatical gender is clearly a caricature, the quote highlights the fact that the relationship between the grammatical gender of nouns and their lexical semantics is often quite opaque.
neutral
train_94646
Although there are many theories about the assignment of inanimate nouns to grammatical genders, to the best of our knowledge, the linguistics literature lacks any large-scale, quantitative investigation of arbitrariness of noun-gender assignments.
the portion of the lexicon where this relationship is clear usually consists of animate nouns; nouns referring to people morphologically reflect the sociocultural notion of "natural genders."
neutral
train_94647
Our approach begins by first retrieving the set of sentences X such that each sentence contains at least one referring expression that refers to an entity of the type we are doing perturbation on (person, in our case).
whether the PSA assumption holds in individual sentences will depend on the sentential context; however, the corpus-level trends that we measure in the model scores/labels are still indicative of systemic biases in the model.
neutral
train_94648
It is well-established that gender bias exists in language -for example, we see evidence of this given the prevalence of sexism in abusive language datasets (Waseem and Hovy, 2016;Jha and Mamidi, 2017).
the clusters we mentioned so far all lean heavily toward one gender association or the other, but some clusters are interesting precisely because they do not lean heavily; this allows us to see where semantic groupings do not align exactly with gender association.
neutral
train_94649
The label for a bag of sentences is calculated by averaging the k highest probabilistic scores.
first, we compare the positive hate crime labels predicted for Patch with the FBI's city-level hate crime reports.
neutral
train_94650
The optimizer was Adam (Kingma and Ba, 2014).
we are more interested in recognizing the sentiment polarity of an event that pertains to commonsense knowledge (e.g., getting money and catching cold).
neutral
train_94651
We can see that MOGANED achieves 1.6% and 1.7% improvement on precision and F 1 -measure, respectively, compared with the best baselines.
(2016)) only use the given sentences, which suffer from the low efficiency problem in capturing long-range dependency; dependency tree based methods utilize the syntactic relations (i.e., arcs) in the dependency tree of a given sentence to more effectively capture the interrelation between each candidate trigger word and its related entities or other triggers.
neutral
train_94652
As Figure 1 shows, we divide the concepts into two types: the superordinate concepts representing more abstractive concepts, and the finegrained argument roles.
ACE 2005 (LDC2006T06) is the most widely-used dataset in event extraction.
neutral
train_94653
Unlike most work on event extraction, we consider the realistic setting where gold entity labels are not available.
each token d_i is predicted as an event trigger by assigning it a label t_i.
neutral
train_94654
This shows that representing the likely upcoming sentence helps the model form discourse expectations, which the classifier can then use to predict the coherence relation between the actually observed arguments.
(2018)) or a causal graph for biomedical concepts and events.
neutral
train_94655
We can obtain the information of sentence boundaries in most cases; however, sometimes we cannot obtain the information of paragraph boundaries.
by replacing the similarity score sim(·, ·) with the dissimilarity score 1 − sim(·, ·), we can obtain the optimal tree in terms of the split score.
neutral
train_94656
The evaluation results on RST-DT and PCC showed the effectiveness of our proposal; the dynamic programming-based approach with the span merging score, exploiting three granularity levels in a document, achieved .811 and .784 span F 1 scores on RST-DT and PCC, respectively.
we cannot compare their score with ours because their test set differs from ours.
neutral
train_94657
Also concurrent, SpanBERT (Joshi et al., 2019), another self-supervised method, pretrains span representations achieving state of the art results (Avg.
yet, both variants perform much worse with longer context windows (512 tokens) in spite of being trained on 512-size contexts.
neutral
train_94658
For each label, we further rank all neurons based on their sensitivity, and obtain an importance ranking for the label.
although it remains unclear what information the neurons exactly encode, we speculate, based on the observed patterns, that there are at least two kinds of information, the first being coarse-grained types of the current word.
neutral
train_94659
In this work, we use the PLSR approach as a baseline for our model.
gathering and collating human-elicited property knowledge for concepts is very labour intensive, limiting both the number of words for which a rich feature set can be gathered, as well as the completeness of the feature listings for each word.
neutral
train_94660
The dataset is available online 1 .
it consists of 1,981 scenarios and 4,110 multiple-choice questions in the geography domain at high school level, where diagrams (e.g., maps, charts) have been manually annotated with natural language descriptions to benefit NLP research.
neutral
train_94661
In this paper, we introduce GeoSQA-an SQA dataset in the geography domain consisting of 1,981 scenarios and 4,110 multiple-choice questions at high school level.
we test the effectiveness of a variety of methods for question answering, textual entailment, and reading comprehension on GeoSQA.
neutral
train_94662
Moreover, we found that the drop in accuracy in ToM is mostly caused by memory questions.
while methods such as Recurrent Entity Networks have shown promise for keeping track of the state of the world in our experiments, this is still in scenarios where the complexity of the natural language is relatively simple.
neutral
train_94663
Almost all previous state-of-the-art QA and RC models find answers by matching passages with questions, aka inter-sentence matching (Wang and Jiang, 2017; Wang et al., 2016; Seo et al., 2017; Song et al., 2017). [Interleaved table residue: EM/F1 scores for DrQA (Chen et al., 2017), R3 (Wang et al., 2018a), OpenQA (Lin et al., 2018), TraCRNet (Dehghani et al., 2019), HAS-QA (Pang et al., 2019), and BERT (Large) (Nogueira et al.).]
we can see that this method brings us 4.7% EM and 4.1% F1 improvements.
neutral
train_94664
Does explicit inter-sentence matching matter?
during training, passages corresponding to the same question are taken as independent training instances.
neutral
train_94665
Hereafter, we use sliding window method.
we do not know whether it is still required for BERT.
neutral
train_94666
• We propose a Chinese span-extraction reading comprehension dataset which contains near 20,000 human-annotated questions, to add linguistic diversity in reading comprehension field.
use paraphrase or syntax transformation to add difficulties for answering.
neutral
train_94667
In Figure 2a, multigranular interaction (2:1) between the bi-gram "United States" and the uni-gram "USA" allows the matching.
in InsuranceQA, the variance of word signals is low.
neutral
train_94668
Then, we retrieve the neighbor triple of them, and reserve ones that contain lemmas of any token of the question.
the test set is not public; one needs to submit the model to the organization to get the results.
neutral
train_94669
We align question tokens with column names and cell text using the Levenshtein edit distance between n-grams in the question and the table text, similar to previous work (Shaw et al., 2019).
the model is doing the right thing but missing one of the values.
neutral
train_94670
For nodes with multiple features, such as column and cell nodes, we reduce the set of feature embeddings to a single vector using the mean.
creating labeled data for this task can be expensive and time-consuming.
neutral
train_94671
On InsuranceQA, this strategy alone improves over previously reported results by a minimum of 1.6 points in P@1.
we used Max-Pooling and Max-Min-Pooling, the latter being obtained by concatenating the outputs of Max and Min Pooling.
neutral
train_94672
Obtaining such questions is hard for two reasons: (1) teaching crowdworkers about coreference is challenging, with even experts disagreeing on its nuances (Pradhan et al., 2007; Versley, 2008; Re-…). [Interleaved example passage: Byzantines were avid players of tavli (Byzantine Greek: τάβλη), a game known in English as backgammon, which is still popular in former Byzantine realms, and still known by the name tavli in Greece.]
a: Barack Obama; Obama; Senator Obama. Beyond training workers with the detailed instructions shown above, we ensured that the questions are of high quality by selecting a good pool of 21 workers using a two-stage selection process, allowing only those workers who clearly understood the requirements of the task to produce the final set of questions.
neutral
train_94673
The scores ρ are used for computing the probability P (d | P, Q) as well as for pruning.
the Discrete Reasoning Over Passages (DROP) dataset (Dua et al., 2019) demonstrates that, as long as quantitative reasoning is involved, there are plenty of relatively straightforward questions that current extractive QA systems find difficult to answer.
neutral
train_94674
We could also have crawled data from websites such as Yahoo Answers instead.
we have experimented with adding more Transformer layers on top of BERT but the performance did not improve.
neutral
train_94675
For example, "start" and "started" have LCS "start".
the trained seq2seq model is prone to generating these "safe" questions, similar to the undiversified response generation in seq2seq-based dialogue models.
neutral
train_94676
We present example story completions in Table 2 and full sampled stories in our appendix and Table 4.
we introduce a two-stage training pipeline ( Figure 1) to improve model performance both in terms of perplexity and CSR on story generation.
neutral
train_94677
DUC-05 contains 1,600 summaries (50 questions × 32 systems); in DUC-06, 1,750 summaries are included (50 questions × 35 systems). [Interleaved caption, Figure 2: Illustration of different flavors of the investigated neural QE methods.]
the three fine-tuned BERT versions clearly outperform all other methods.
neutral
train_94678
The BERT multi-task versions perform better with highly correlated qualities like Q4 and Q5 (as illustrated in Figures 2 to 4 in the supplementary material).
the latter seems to be aligned with the definitions of Q3 (Referential Clarity), Q4 (Focus) and Q5 (Structure & Coherence).
neutral
train_94679
That implies taking an action based on the current observation, where the action is picking a word ỹ_t from the vocabulary V. r(ỹ, y) is a reward function with r(ỹ, y) = 1 if ỹ = y.
experiments on the benchmark datasets show that (1) imitation learning is consistently better than reinforcement learning; and (2) the pointer-generator models with imitation learning outperform the state-of-the-art methods by a large margin.
neutral
train_94680
To rectify this, prior work has proposed losses which encourage overall coherency or other desired behavior (Li et al., 2016; Zhang and Lapata, 2017; …).
similarly, for LIMERICK, we consider only those samples to be acceptable which have line endings of the rhyming form AABBA.
neutral
train_94681
In the first example, the baseline fails to recognize Len-shaped as an adjective, while the unifiedmodel succeeds by utilizing lexical features which are the input of question type prediction layer.
our work solves this problem as Section 3.3 shows.
neutral
train_94682
(2018) incorporated a question word generation mode to generate question word at each decoding step, which utilized the answer information by employing the encoder hidden states at the answer start position.
baseline: How many provisions made provisions for concentrations?
neutral
train_94683
However, RL models have poor sample efficiency and lead to very slow convergence rate.
an interesting and efficient way to express the relation between China and Germany.
neutral
train_94684
In this paper, we introduce it as a DSR for deep RL.
RL methods usually start from a pretrained policy, which is established by optimizing XENT at each word generation step.
neutral
train_94685
• We also introduce a self-attention based database schema encoder that enables our model to generalize to unseen databases.
for the classification, we applied an attention-based bi-directional LSTM following Zhou et al.
neutral
train_94686
The LM updated with textbook data (BERT+Textbook), improves performance on the domains included in additional pre-training (Phy and Gov).
RQ3 can be answered as follows: the updated BERT LM does not appear to generalize well on unseen domains, as the evidence suggests that the LM becomes more domain-specific.
neutral
train_94687
Automatic grading is the task of evaluating the correctness of a student answer for a specific question by comparing it to a reference answer.
we collect textbooks corresponding to the domains and chunk them into paragraphs and feed each paragraph as a document for pretraining.
neutral
train_94688
These all illustrate the diversity of linguistic and semantic challenges in WIQA.
each edge is labeled with a polarity, + or −, indicating whether the influence is positive (causes/increases) or negative (prevents/reduces).
neutral
train_94689
For ESIM, we take the dimension of hidden states of BiLSTMs to be 500.
the relations in our knowledge graph come from two sources: the Metathesaurus and the Semantic Network of UMLS.
neutral
train_94690
With the emergence of knowledge graphs in different domains, the proposed approach can be tried out in other domains as well for future exploration.
we experiment with fusing embeddings obtained from the knowledge graph with the state-of-the-art approaches for the NLI task, which mainly rely on contextual word embeddings.
neutral
train_94691
Caching reduces the baseline's decoding speed from 210 seconds to 128.5; CMLMs do not use cached decoding.
in terms of speed, each mask-predict iteration is virtually equivalent to a refinement iteration.
neutral
train_94692
Various non-autoregressive translation models, including our own CMLM, make the strong assumption that the individual token predictions are conditionally independent of each other.
our decoder is bi-directional, in the sense that it can use both left and right contexts to predict each token.
neutral
train_94693
In this work, we propose a new method for modeling the copying mechanism for APE.
it shows the heatmap of the Enc-Dec-Attention, which averages over 8 different heads.
neutral
train_94694
Census stands for the census data dataset and human stands for the human judgments dataset.
experiments suggest that our approach is effective in detecting human stereotypes, and is tied robustly to graph structure.
neutral
train_94695
What's more, our results indicate that gender bias learned from large-scale texts is different from that within our minds, introducing a new perspective for lexical-level stereotype-related research.
in this method, we consider only local consistency (without the second term of Equation 1).
neutral
train_94696
For each sentence in the sarcasm corpus S , a candidate negative situation phrase is extracted.
the notion behind this metric is that sarcasm typically requires more context than its literal version, requiring more words to be present on the target side.
neutral
train_94697
In typical RL settings, the learner is initialized to a random policy distribution.
SG RL: same as SG NORMAL, but also applies reinforcement learning.
neutral
train_94698
After the candidates for a positive phrase are obtained, their Part of Speech tags are extracted with the help of a POS tagger.
such operations would require additional resources such as a sentiment dictionary and sense disambiguation tools, whereas the neural classification-based filtering can work with only binary sentiment-labeled data.
neutral
train_94699
As shown in examples in Figure 3, we compare our phrase segmentationbased padding (2-2-3 schema) to two less common schemas (i.e.
during training, the padded lines are used instead of the original poem lines.
neutral