id
stringlengths
7
12
sentence1
stringlengths
6
1.27k
sentence2
stringlengths
6
926
label
stringclasses
4 values
train_5300
Use of the second text discourages this so that both the encoder and decoder are trained on text from the target domain (enabling use of an expanded, joint vocabulary trained on both source and target) to learn its style and vocabulary.
the artificial titles will generally be different from the real titles, which may lead to lower summarization performance.
contrasting
train_5301
the same time, the labeled data domain is switched to the source domain, so that both the embedding and decoder domains are abruptly changed.
in ASADA the embedding is gradually adapted from the target domain to jointly embed the source and target (F1).
contrasting
train_5302
beam hypotheses generated by FAS.
6 to the validation sample, the fraction of incorrect summaries increases from 26% to 29% (Table 2), demonstrating that the slight improvement on the validation data does not transfer to the test set.
contrasting
train_5303
current NLI models are not yet robust enough for our downstream task.
the stateof-the-art performance on common NLI datasets is already very close to human performance (Nikita and Bowman, 2019), suggesting that new datasets, such as the one presented here, are necessary to expose the models' remaining limitations.
contrasting
train_5304
the passage-question pairs to address) without training examples.
jia and Liang (2017) revealed that intentionally injected noise (e.g.
contrasting
train_5305
The existing way to solve the second problem is to encode general knowledge in vector space so that the encoding results can be used to enhance the lexical or contextual representations of words (Weissenborn et al., 2017;Mihaylov and Frank, 2018).
this is an implicit way to utilize general knowledge, since in this way we can neither understand nor control the functioning of general knowledge.
contrasting
train_5306
Prior work has shown that a pipeline of retriever, reader, and reranker can improve the overall performance.
the pipeline system is inefficient since the input is re-encoded within each module, and is unable to leverage upstream components to help downstream training.
contrasting
train_5307
Confirming our intuition, Table 2 shows us that QuAC has the highest percentage of GEN-ERAL questions.
coQA and SQuAD, which allowed the question-asker to look at the passage, are dominated by SPEcIFIc questions.
contrasting
train_5308
In the previous work, the sub tasks are performed using pipelined models (Nie et al., 2019;Yoneda et al., 2018).
our approach performs evidence extraction and answer prediction simultaneously by regarding FEVER as an explainable multi-hop QA task.
contrasting
train_5309
Owing to the large amounts of unlabeled data and the sufficiently deep architectures used during pre-training, advanced LMs such as BERT are able to capture complex linguistic phenomena, understanding language better than previously appreciated (Peters et al., 2018b;Goldberg, 2019).
as widely recognized, genuine reading comprehension requires not only language understanding, but also knowledge that supports sophisticated reasoning (Chen et al., 2016;Mihaylov and Frank, 2018;Bauer et al., 2018;Zhong et al., 2018).
contrasting
train_5310
Recently, lots of neural network-based models have been proposed and achieved promising results in OpenQA.
the success of these models relies on a massive volume of training data (usually in English), which is not available in many other languages, especially for those low-resource languages.
contrasting
train_5311
Both translate-train and translate-test methods rely heavily on the quality of the machine translation system.
the quality of the machine translation system varies in different language pairs, depending on the size of parallel data and the similarity of the language pair.
contrasting
train_5312
Other approximations are possible: for example we could use q φ (z | x) as an importance sampling distribution to estimate the first integral.
we found the above approximation to be efficient and effective in practice.
contrasting
train_5313
Furthermore, contextualized word representations learned from large-scale unlabeled texts under language model training loss Radford et al., 2018;Devlin et al., 2018), are proven to be extensively helpful for many NLP tasks including dependency parsing (Che et al., 2018;Kitaev and Klein, 2018).
parsing performance drops dramatically when processing texts that are different from the training data, known as the domain adaptation problem.
contrasting
train_5314
As our reviewers point out, the semi-supervised domain-adaptation scenario, tackled in this work, is less realistic than the unsupervised counterpart, due to need of labeled target-domain training data, which is usually extremely expensive.
we believe that this work can be equally valuable and useful when there exist only dozens or hundreds of labeled targetdomain training sentences, which may be a feasible compromise for realistic applications of parsing techniques, considering that, as discussed above, purely unsupervised domain adaptation makes very limited progress.
contrasting
train_5315
We can see that although PB-train is much smaller than BC-train, the PB-trained parser outperforms the BC-trained parser by about 8% on PB-dev, indicating the usefulness and importance of target-domain labeled data especially when two domains are very dissimilar.
the gap between the ZX-trained parser and the BC-trained is only about 2% in LAS, which we believe has a two-fold reason.
contrasting
train_5316
Following most of the recent work, we apply the PTB-SD representation converted by version 3.3.0 of the Stanford parser.
this dependency representation results in around 1% of phrases containing two or three head words.
contrasting
train_5317
A natural practice to perform the task is to scan through the query text using the dictionary and treat terms matched with a list of entries of the dictionary as the entities (Nadeau et al., 2006;Gerner et al., 2010;Liu et al., 2015;.
this practice requires very high quality named entity dictionaries that cover most of entities, otherwise it will fail with poor performance.
contrasting
train_5318
This yields a directed acyclic graph that expresses various relations among words.
the fact that SDP structures are directed acyclic graphs means that we cannot apply standard dependency parsing algorithms to SDP.
contrasting
train_5319
In the left path, the model resolves adjacent arcs first.
in the right path, distant arcs that rely on the global structure are resolved first.
contrasting
train_5320
Therefore, we obtain: The model then creates semantic dependency arcs by iterating over the sentence as follows: 1 1 This algorithm can introduce circles.
circles 1 For each word w i , select a head arc from T τ i .
contrasting
train_5321
Among various neural networks for sequence labeling, bi-directional RNNs (BiRNNs), especially BiLSTMs (Hochreiter and Schmidhuber, 1997) have become a dominant method on multiple benchmark datasets Chiu and Nichols, 2016;Lample et al., 2016;Peters et al., 2017).
there are several natural limitations of the BiLSTMs architecture.
contrasting
train_5322
Results: Left-and right-branching baselines are constructed by assigning 21 random labels 7 to constituents in purely left-and right-branching trees.
both branching baselines perform poorly in this evaluation, due to the fact that there is no straightforward way to assign labels to constituent spans that may correspond to how gold labels are organized.
contrasting
train_5323
Previous work (Seginer, 2007;Ponvert et al., 2011;Shain et al., 2016;Jin et al., 2018b) show that using data likelihood as both the objective for optimization and the criterion for model selection, either implicitly (in the case of Bayesian models) or explicitly (in the case of EM), gives good results on grammar induction.
it is also known that data likelihood is only weakly correlated with parsing accuracy, especially at convergence (Smith, 2006;Johnson et al., 2007;Jin et al., 2018a).
contrasting
train_5324
Induction models of constituency grammars or trees usually use data likelihood as both the objective and the model selection criterion (Seginer, 2007;Johnson et al., 2007;Ponvert et al., 2011;Shen et al., 2018), but the weak correlation between data likelihood and parsing accuracy hints at the non-optimality of this practice (Smith, 2006;Headden et al., 2009;Jin et al., 2018a).
many linguistic and psycholinguistic theories propose constraints either as properties of natural language grammar or as constraints on human processing and acquisition.
contrasting
train_5325
2018, all the above methods use crossdomain NER data only.
we leverage both NER data and raw data for both domains.
contrasting
train_5326
As noted in Section 1, the recovery of unbounded dependencies, including wh-object questions, is a 10 Fowlie and Koller (2017) previously demonstrated that MGs without head movement could be parsed in O(n 2k+3 ) worst case time, which was already a dramatic improvement over Harkema's original result.
stanojević (2019) shows that adding head movement to Fowlie and Koller's system increases complexity to O(n 2k+9 ).
contrasting
train_5327
Previous works on Twitter sarcasm detection focus on the text modality and propose many supervised approaches, including conventional machine learning methods with lexical features (Bouazizi and Ohtsuki, 2015;Ptáček et al., 2014), and deep learning methods (Wu et al., 2018;Baziotis et al., 2018).
detecting sarcasm with only text modality can never be certain of the true intention of the simple tweet "What a wonderful weather!"
contrasting
train_5328
The model with only text modality fails to detect sarcasm as for example (c), though the word so is repeated several times in example (c).
with image and attribute modalities, our proposed model correctly detects sarcasm in these tweets.
contrasting
train_5329
#planetfitness #hiddenfee #mrmet Attributes: ball holding shoes little white In the example, the insulting gesture in the picture is contrast to the phrase 'thanks for'.
the model is unable to obtain the common sense that this gesture is insulting.
contrasting
train_5330
Our work is built on the success of deep keyphrase generation models based on neural sequence-to-sequence (seq2seq) framework (Meng et al., 2017).
existing models, though effective on well-edited documents (e.g., scientific articles), will inevitably encounter the data sparsity issue when adapted to social media.
contrasting
train_5331
• T2: Here, someone is referring to another person's recollection.
this text contains all the linguistic markers associated with assault disclosure.
contrasting
train_5332
Hashtags are often employed on social media and beyond to add metadata to a textual utterance with the goal of increasing discoverability, aiding search, or providing additional semantics.
the semantic content of hashtags is not straightforward to infer as these represent ad-hoc conventions which frequently include multiple words joined together and can include abbreviations and unorthodox spellings.
contrasting
train_5333
Hashtags often carry very important information, such as emotion 1 Our toolkit along with the code and data are publicly available at https://github.com/mounicam/ hashtag_master 2017), sentiment (Mohammad et al., 2013), sarcasm (Bamman and Smith, 2015), and named entities (Finin et al., 2010;Ritter et al., 2011).
inferring the semantics of hashtags is nontrivial since many hashtags contain multiple tokens joined together, which frequently leads to multiple potential interpretations (e.g., lion head vs. lionhead).
contrasting
train_5334
STAN small is the most commonly used dataset in previous work.
after reexamination, we found annotation errors in 6.8% 5 of the hashtags in this dataset, which is significant given that the error rate of the state-of-theart models is only around 10%.
contrasting
train_5335
Pre-trained contextualized word embeddings (Peters et al., 2018;Devlin et al., 2019;Radford et al., 2018) have become increasingly common in natural language processing (NLP), improving stateof-the-art results in many standard NLP tasks.
beyond standard tasks, NLP tools are also vital to more open-ended exploratory tasks, particularly in social science.
contrasting
train_5336
10 We can see a clear separation between the female love interests and the male protagonists and antagonists, thus identifying similar roles in the same way as a persona model.
whereas the output of a persona model is distributions over personas and vocabulary, our system outputs scores along known dimensions of power, agency, and sentiment, which are easy to interpret and visualize.
contrasting
train_5337
In the raw scores, stand-out powerful people include businessman Warren Buffet and Pope Francis.
the only 3 women, Theresa May, Janet Yellen, and Angela Merkel, are underscored as compared to similarly ranked men.
contrasting
train_5338
As expected, popular players Serena Williams and Andy Murray have the highest sentiment scores and very high power scores.
novak Djokovic, who has notoriously been less popular than his peers, has the lowest sentiment score, but the second highest power score (after Williams).
contrasting
train_5339
Additionally, female players are typically portrayed with more positive sentiment (female average score = 0.58; male average score = 0.54), whereas male players are portrayed with higher power (female average score = 0.52; male average score = 0.57).
the difference in power disappears when we remove frequency from the metric and use only the regression scores, suggesting that the difference occurs because male players are mentioned more frequently.
contrasting
train_5340
Typically, given the claims, models are learned from auxiliary relevant sources such as news articles or social media responses for capturing words and linguistic units that might indicate viewpoint or language style towards the claim Rashkin et al., 2017;Popat et al., 2017;Dungs et al., 2018).
the factuality of a claim is independent of people's belief and subjective language use, and human perception is unconsciously prone to misinformation due to the common cognitive biases such as naive realism (Reed et al., 2013) and confirmation bias (Nickerson, 1998).
contrasting
train_5341
This may be suitable for news data as the salient news sentences often straightforwardly comment on the claim's veracity.
some claim verification tasks such as FEVER (Thorne et al., 2018a) are particularly defined to classify if the factual evidence from the source like Wikipedia, which rarely remark on the veracity of the mutated claim, can infer the claim as being supported, refuted or NEI.
contrasting
train_5342
Instead of ranking sentences into the top-k positions, we pay more attention on claim verification accuracy by embedding and aggregating the useful sentences as evidence like we have explained above.
such discrepancy inspires us to investigate in the future an end-to-end approach to jointly model evidence retrieval and claim verification in a unified framework based on our sentence-level attention mechanism.
contrasting
train_5343
$$) as a label, since 2 is the class of the restaurant that both authors visited most.
as can be seen from the column labels, the first reviewer visited restaurants belonging to all four classes, while the second one only visited restaurants of class 2: the second reviewer is clearly a less noisy data point.
contrasting
train_5344
Due to the nature of the labels (ranging across four classes related to increasing price), this could be seen as an ordinal regression problem.
following standard practice within the author profiling literature (Rangel Pardo et al., 2015;Rangel et al., 2016), especially regarding modelling age (where real values are binned into discrete classes), we treat this as a classification task.
contrasting
train_5345
Similar to the text classification results, performance on random and event splits are comparable.
there is a sharp drop in performance for time split.
contrasting
train_5346
Clearly the performance improves as more social links are available.
even with little social links provided in the latter case, our joint model propagates information effectively and results in an increase in performance compared to text classification.
contrasting
train_5347
Regarding the representativeness of our sample from the population of celebrities, we may cautiously claim to have obtained a wide cross-section of people of elevated status.
celebrities are excluded who do not use Twitter, whose account is not verified (which is exceedingly unlikely, the more famous they are), or who have no Wikipedia article about themselves.
contrasting
train_5348
In recent centuries, industrialization has considerably changed human society by providing a stimulus to economic growth and improved life quality.
the advancement is accompanied by the increase in air pollutant emissions and risks to public health.
contrasting
train_5349
We achieve higher accuracy (55.3%) as compared to tweets BOW (54.6%).
the model is less effective than using follow Bio BOW.
contrasting
train_5350
An end-to-end trainable dialog system requires thousands of dialogs for training.
the availability of the training data is usually limited as real users have to be involved to obtain the training dialogs.
contrasting
train_5351
Some domain adaptation work has been done on dialog states tracking (Mrkšić et al., 2015) and dialog policy learning (Vlasov et al., 2018) as well.
there is no recent work about domain adaptation for a seq2seq dialog system, except ZSDG Zhao and Eskénazi (2018).
contrasting
train_5352
Our full model produces more non-singleton coreference chains, suggesting greater coherence, and also gives different mentions of the same entity more diverse names.
both numbers are still lower than for human generated stories, indicating potential for future work.
contrasting
train_5353
The fusion model does not perform any implicit coreference to associate the allergy with his dog.
coreference entity fill produces a high quality completion.
contrasting
train_5354
Our method can complement and correct original MR with additional slot values described in the paired texts to effectively alleviate generating contradicted facts.
due to the imperfection of NLU model, our method may ignore part of slot values realized in utterances and produce some additional errors.
contrasting
train_5355
Previous work focuses on automatic commenting based solely on textual content.
in real-scenarios, online articles usually contain multiple modal contents.
contrasting
train_5356
(2015)) and copy mechanism (Gu et al., 2016) into a sequential generative architecture.
most models tended to confound the dialog history with KB tuples and simply stored them into one memory.
contrasting
train_5357
Thus the most straightforward method is to use T 0 as sem [x, Q, clues].
the last few layers in BERT are mainly in charge of transforming hidden representations for span predictions.
contrasting
train_5358
There is at most one answer span (y, start, end) in every paragraph, thus gt start ans is an one-hot vector where gt start ans [start] = 1.
multiple different next-hop spans might appear in one paragraph, so that gt start hop [start i ] = 1/k where k is the number of next-hop spans.
contrasting
train_5359
Once combined with information retrieval, our model finally gets the answer "Marijus Adomaitis" while the annotated ground truth is "Ten Walls".
when backtracking the reasoning process in cognitive graph, we find that the model has already reached "Ten Walls" and answers with his real name, which is acceptable and even more accurate.
contrasting
train_5360
Many neural models have been proposed to tackle the machine RC/QA problem (Seo et al., 2016;Xiong et al., 2016;Tay et al., 2018), and great success has been achieved, especially after the release of the BERT (Devlin et al., 2018).
current research mainly focuses on machine RC/QA on a single document or paragraph, and still lacks the ability to do reasoning across multiple documents when a single document is not enough to find the correct answer.
contrasting
train_5361
A lot of systems have been proposed to solve the multi-hop RC problem with these data sets (Sun et al., 2018;Wu et al., 2019).
these data sets require multi-hop reasoning over multiple sentences or multiple common knowledge while the problem we want to solve in this paper requires collecting evidences across multiple documents.
contrasting
train_5362
Such datasets emphasize the role of locating, matching, and aligning information between the question and the context.
some recent multi-document, multi-hop reading comprehension datasets, such as WikiHop and MedHop (Welbl et al., 2017), have been proposed to further assess MRC systems' ability to perform multi-hop reasoning, where the required evidence is scattered in a set of supporting documents.
contrasting
train_5363
Specifically, it encodes the leaf document of the reasoning chain while attending to its ancestral documents, and outputs ancestor-aware word representations for this leaf document, which are compared to the query to propose a candidate answer.
these two components above cannot handle questions that allow multiple possible reasoning chains that lead to different answers, as shown in Fig.
contrasting
train_5364
Our best system achieves 60.3 on the hidden test set 11 , outperforming all current models on the leaderboard.
as reported by Welbl et al.
contrasting
train_5365
The last few years have witnessed significant progress on text-based machine reading comprehension and question answering (MRC-QA) including cloze-style blank-filling tasks (Hermann et al., 2015), open-domain QA (Yang et al., 2015), answer span prediction (Rajpurkar et al., 2016(Rajpurkar et al., , 2018, and generative QA (Nguyen et al., 2016).
all of the above datasets are confined to a single-document context per question setup.
contrasting
train_5366
However, this 1-hop, similarity-based selection process would fail on multi-hop readingcomprehension datasets like WikiHop because the query subject and the answer could appear in different documents.
our Document Explorer can discover the document with the answer "Loon op Zand" (in Fig.
contrasting
train_5367
(2018) selected relevant sentences from long documents in a singledocument setup and achieved faster speed and robustness against adversarial corruption.
none of these models are built for multi-hop MRC where our EPAr system shows great effectiveness.
contrasting
train_5368
3) combines the control unit from the stateof-the-art multi-hop VQA model and the widelyadopted bi-attention mechanism from text-based QA to perform composite reasoning on the context and question.
bridge Entity Supervision even with the multi-hop architecture to capture a hopspecific distribution over the question, there is no supervision on the control unit's output distribution cv about which part of the question is important to the current reasoning step, thus preventing the control unit from learning the composite reasoning skill.
contrasting
train_5369
Intuitively, in context-based path encodings, limited and more fine-grained context is considered due to the use of specific entity locations.
the passage-based path encoder computes the path representations considering the entire passage representations (both passages which contain the head entity and tail entity respectively).
contrasting
train_5370
Empirically, WMD has improved the performance of NLP tasks (see §6), specifically sentence-level tasks, such as image caption generation (Kilickaya et al., 2017) and natural language inference (Sulea, 2017).
its cost grows prohibitively as the length of the documents increases, and the BOW approach can be problematic when documents become large as the relation between sentences is lost.
contrasting
train_5371
(2014) use automatic suggestions for de-identification of medical texts and find no change in inter-annotator agreement or annotation time.
to these works, we use a control group of two annotators, who never receive suggestions, and compare the performance of all annotators to previous annotations they performed without annotation suggestions.
contrasting
train_5372
Thus, there is little difference to reference phase S1, where SUGGESTION already yielded a 0.02 higher IAA.
below we discuss the helpfulness and time saving of suggestions even in MeD.
contrasting
train_5373
The previous section established that annotation suggestions have positive effects on annotating epistemic activities.
these suggestions were only possible since Schulz et al.
contrasting
train_5374
In fact, in the CUM online learning setup (bundle size 1), the model adjustment time is similar to, and after 100 documents higher than the time needed for annotation (1-2 minutes per text as shown in Table 5).
the adjustment times of CUM with a bundle size of 10 or higher and of RETRAIN, are lower than the time needed for annotating the respective bundle of texts.
contrasting
train_5375
Comparing between Deep Neural Network (DNN) models based on their performance on unseen data is crucial for the progress of the NLP field.
these models have a large number of hyper-parameters and, being non-convex, their convergence point depends on the random values chosen at initialization and during training.
contrasting
train_5376
the learning rate and the optimization algorithm) the range of feasible values reflects the intuitions of the model author, and the tuned value provides some insight about the model and the data.
for many other hyper-parameters (e.g.
contrasting
train_5377
In such cases we would not want to determine that X ∼ F is stochastically dominant over Y ∼ G because its CDF is strictly above the CDF of Y , and hence Y is stochastically larger than X.
according to this relaxation, X ∼ F is indeed stochastically larger than Y ∼ G. Almost Stochastic Dominance To overcome the limitations of the above straight forward approach, and define a relaxation of stochastic order, we turn to a definition that is based on the proportion of points in the domain of the participating distributions for which SO holds.
contrasting
train_5378
When min (F n , G m , α) = 0, algorithm A is stochastically dominant over B.
if min (F n , G m , α) ≥ 0.5, then F is not almost stochastically larger than G (with confidence level 1 − α) and hence we should accept the null hypothesis that algorithm A is not superior to algorithm B. del Barrio et al.
contrasting
train_5379
Note that the opposite direction does not always hold, i.e., it is easy to come up with an example where P (X ≥ Y ) > 0.5 but there is no stochastic order between the two random variables.
the opposite direction is true with an additional assumption that the CDFs do not cross one another (which we do not prove here).
contrasting
train_5380
Shallow surfacelevel metrics, such as BLEU and TER (Papineni et al., 2002;Snover et al., 2006), still predominate in practice, due in part to their reasonable correlation to human judgements, and their being parameter free, making them easily portable to new languages.
trained metrics (Song and Cohn, 2011;Stanojevic and Sima'an, 2014;Ma et al., 2017;Shimanaka et al., 2018), which are learned to match human evaluation data, have been shown to result in a large boost in performance.
contrasting
train_5381
We find that on the same number of training instances (3360), the model performs better on cleaner data compared to singly-annotated data (r = 0.57 vs 0.64).
when we have a choice between collecting multiple annotations for the same instances vs collecting annotations for additional instances, the second strategy leads to more gains.
contrasting
train_5382
Among neural models, our BILSTM+MEM and BILSTM+BIA models outperform other comparisons by successfully modeling users' previous messages and their alignment with the topics of ongoing conversations.
the opposite observation is drawn for BILSTM+CON and BILSTM+ATT.
contrasting
train_5383
The veil is contrary to secularism.
secularism allows every citizen to freely profess his faith.
contrasting
train_5384
Several keywords from text (cat, game, winter) show types of content which are present in both image and text, but the image is merely an illustrating these concepts without adding additional information (Figure 5a).
the text is not represented in the image when it contains words specific of comments (when, lmao), questions (do, was), references (this) or ellipsis ('...'), all often referencing the content of the image as identified through data inspection.
contrasting
train_5385
Platforms such as Twitter have started to play a key role in elections (Politico, 2017) and have become widely used by public figures to disseminate their activities and opinions.
posts are rarely authored by the public figure who owns the account; rather, they are posted by staff who update followers on the thoughts, stances and activities of the owner.
contrasting
train_5386
This line of work exploits textual redundancy in massive streams predicting whether or not a document contains a new event as a clas-sification task.
we study the event schemas and extract detailed events.
contrasting
train_5387
To address this issue, the distant supervision method (Mintz et al., 2009) was proposed which annotated training data by heuristically aligning knowledge bases (KBs) and texts.
the long-tail problem in KBs (Xiong et al., 2018; 1: A data example of 5-way-5-shot relation classification in FewRel development set.
contrasting
train_5388
Similar to conventional prototypical networks (Snell et al., 2017), our proposed method calculates class prototype s via the representations of all support instances in this class, i.e., { s k } K k=1 .
instead of using a naive mean operation, we aggregate instance-level representations via attention over { s k } K k=1 , where each weight is derived from the instance matching score between s k and q.
contrasting
train_5389
Specifically, we adopt Kullback-Leibler (KL) divergence of both directions as the metric.
computing exact KL requires iterating over the whole entity pair space E × E, which is quite intractable.
contrasting
train_5390
As shown in Figure 2, the three baseline models could achieve moderate (0.1-0.5) positive correlation.
our model shows a stronger correlation (0.63) with human judgment, indicating that considering the probability over whole entity pair space helps to gain a similarity closer to human judgments.
contrasting
train_5391
Whenas, these Open RE methods adopt distantly supervised labels as golden relation types, suffering from both false positive and false negative problems on the one hand.
these methods still rely on the conventional similarity metrics mentioned above.
contrasting
train_5392
A promising method is to use rejection sampling by uniform sampling from the whole space, and only keep the synonymous ones judged by crowdworkers.
this is not practical either, as the synonymous pairs are sparse in the whole space, resulting in low efficiency.
contrasting
train_5393
2017, who use bandits to minimise the number of data queries required to calculate the F-scores of models.
this work does not consider the stochasticity of the resulting estimates or easily extend to other evaluation metrics.
contrasting
train_5394
This is encouraging, indicating that in these cases, i * and r are nearly "tied" in attention.
the picture of attention's interpretability grows somewhat more murky when we begin to consider the magnitudes of positive ∆JS values in Figure 3.
contrasting
train_5395
This is likely explained in part by the signal pertinent to the classification being distributed across a document (e.g., a "Sports" question in the Yahoo Answers dataset could signal "sports" in a few sentences, any one of which suffices to correctly categorize it).
given that these results are for the HAN models, which typically compute attention over ten or fewer sentences, this is surprising.
contrasting
train_5396
Ideally, we would then enumerate all possible subsets of that instance's components, observe whether the model's decision changed in response to removing each subset, and then report whether the size of the minimal decision-flipping subset was equal to the number of items that had needed to be removed to achieve a decision flip by following the ranking.
the exponential number of subsets for any given instance's sequence of components (word or sentence representations, in our case) makes such a strategy computationally prohibitive, and so we adopt a different approach.
contrasting
train_5397
how close they are in their totality.
diagnostic models answer a more specific question: to what extent a particular type of information can be extracted from a given representation.
contrasting
train_5398
One frequently resorts to largely qualitative evaluation: checking whether the conclusions reached via a particular approach have face validity and match pre-existing intuitions.
pre-existing intuitions are often not reliable when it comes to complex neural models applied to also very complex natural language data.
contrasting
train_5399
Parameter estimation for this model can be done by maximizing a lower bound E(φ, θ) on the loglikelihood of the data derived by application of Jensen's inequality: 2 (3) These latent rationales approach the first objective, namely, uncovering which parts of the input text contribute towards a decision.
note that an NN controls the Bernoulli parameters, thus nothing prevents this NN from selecting the whole of the input, thus defaulting to a standard text classifier.
contrasting