id: stringlengths, 7–12
sentence1: stringlengths, 6–1.27k
sentence2: stringlengths, 6–926
label: stringclasses, 4 values
train_94400
We demonstrate how to optimize the area under the receiver operating characteristic curve (AUC) effectively and also discuss how to adjust it to optimize other well-known evaluation metrics such as the accuracy and F1 measure.
one can learn a bipartite ranking function effectively by AUC optimization in our setting, which suggests the idea of learning a reliable ranking function first and then adjusting the threshold.
neutral
train_94401
multi-view spectral clustering (MVSC) (Kanaan-Izquierdo et al., 2018) is a competitive standard multi-view clustering approach.
here, the target utterances are sampled randomly from the corpus, and the context utterances are sampled from within each pair of adjacent utterances.
neutral
train_94402
To this end, we adopt prototypical networks (Snell et al., 2017), a metric learning approach, to solely rely on the encoders to form the classifiers instead of introducing additional classification layers.
multi-view spectral clustering can work with representations that are separately learned for the individual views and the multi-view information is aggregated using the common eigenvectors of the data similarity Laplacian matrices.
neutral
train_94403
ENTY"What was the first satellite to go into space?"
0.97 "What is the name of David Letterman's dog?"
neutral
train_94404
Hence, a mapping from D to an l-dimensional embedding, with l ≪ n, is naturally provided by the projection hidden function x̃ = c U S^{-1/2}.
moreover, the investigated approach is computationally affordable, as it roughly corresponds to a forward pass across the network.
neutral
train_94405
"What was the last year that the Chicago 0.73 "The film Jaws was made in what year?"
lRP allows one to identify fragments of the input that play key roles in the decision, by propagating relevance backwards.
neutral
train_94406
For VAE-0.5 (Bowman et al., 2016), we implement KL annealing by increasing the KL weight linearly from 0.1 to 1.0 in the first 10 epochs and adopt word dropout rate of 0.5 to alleviate the posterior collapse.
the estimation and maximization of the MI in the high-dimensional space are difficult.
neutral
train_94407
We now expand on the brief literature review in Section 1 to better contextualize our work.
we also outperform K-center greedy Coreset at all sampling percentages without utilizing additional diversity-based augmentation.
neutral
train_94408
Thus, it seems to have an inductive bias for class boundaries, similar to the above works.
ensembles versus Single Models: A similar experiment was conducted to investigate the overlap between a single FTZ model and a probabilistic committee of models (5-model ensemble with FTZ (Lakshminarayanan et al., 2017)) to identify comparative advantages of using ensemble methods.
neutral
train_94409
At each iteration, we train the model on the current train set and use a model-dependent query strategy to acquire new samples from the pool, get them labeled by an oracle and add them to the train set.
our findings are as follows: (i) We find that utilizing the uncertainty query strategy using a deep model like FastText.zip (FTZ) to actively construct a representative sample provides query and train sets with remarkably good sampling properties.
neutral
train_94410
(ii) We find that a single deep model (FTZ) used for querying provides a sample set similar to more expensive approaches using an ensemble of models.
we establish a new stateof-the-art baseline for further research in deep active text classification.
neutral
train_94411
In this paper, we show that if we have prior knowledge of such biases, we can train a model to be more robust to domain shift.
(2018) and Grand and Belinkov (2019).
neutral
train_94412
This reduces to bias product when g(x i ) = 1.
it is the features we will use in our bias-only model.
neutral
train_94413
Transforming it into a three-player game helps to improve the performance on the evaluation set while lowering the accuracy of the complement predictor.
(4)-(6) is the global optimizer of Eq.
neutral
train_94414
Performance The clues in text classification tasks are typically short phrases (Zaidan et al., 2007).
we still spot quite a few cases.
neutral
train_94415
Hence, a possible third direction is to instead approximate the objectives to the un-normalized model distributions.
in the following, we will derive an objective function from a divergence D with the data distribution as the first argument and the distribution of the parametric model as the second. A large number of divergence measures have been introduced for a variety of applications (Basseville, 2013).
neutral
train_94416
They show gains of up to 1 point in perplexity, and a slight improvement over the MLE baseline for WT2, confirming the results obtained on a single model.
it is interesting to note that the power transformations will here be applied on the posterior classification probabilities p C θ instead of categorical probabilities p θ .
neutral
train_94417
In this work, we explore the possibility of affecting how words are learned depending on their frequency by using alternative loss functions.
their training cost grows linearly with the number of words in the vocabulary, often making it prohibitively slow.
neutral
train_94418
We display these values for our models trained with exact objectives in Table 4.
the three divergences presented in Section 3 are defined on positive measures: in theory, we can simply use the exp function on the scores s_θ and do not need to normalize them; note that neither the α nor the β divergences are scale invariant (see the right column of Table 1 and Cichocki and Amari, 2010).
neutral
train_94419
For an input x, we consider perturbationsx, in which every word x i can be replaced with any similar word from the set S(x, i), without changing the original sentiment.
data augmentation is unable to cover the exponentially large space of perturbations that involve many words, so it does not prevent errors caused by changing many words.
neutral
train_94420
In addition, we would like to apply the proposed methods for other pretrained models.
the training loss of fine-tuning BERT tends to monotonously decrease along the optimization direction, which eases optimization and accelerates training convergence.
neutral
train_94421
Our contributions are: • A reinforcement learning formulation to guide question-asking strategies for learning from language.
while setting up the optimization problem, the value of µ can be adapted to reflect this intuition.
neutral
train_94422
In the E-step of the Posterior Regularization training (Ganchev et al., 2010), the computation of the posterior regularizer remains unchanged.
this follows a Wittgensteinian view of language as a cooperative game (Wittgenstein, 1953) between agents (here, the teacher and a learner) with a shared goal (here, building an effective classifier).
neutral
train_94423
At each step t, the learner's action a t consists of choosing a question to ask the teacher.
in scenarios where there is a lot of labeled data available enabling robust inductive inference, we would like to primarily rely on it rather than explanations.
neutral
train_94424
However, by compressing the web search results into a knowledge graph, we significantly reduce the number of tokens by an order of magnitude and make it possible for a model to access the entirety of the search information.
current approaches extractively select portions of web text as input to Sequence-to-Sequence models using methods such as TF-IDF ranking.
neutral
train_94425
To facilitate question answering, the dataset provides the top 100 web search hits from querying the question, which results in 200K words on average.
these approaches have been applied to extractive question answering tasks that require span identification, rather than abstractive text generation in an information synthesis setting.
neutral
train_94426
Compared with Edunov et al.
we also list the top-2 WMT18 systems for De→En translation in Table 4: the RWTH (Graça et al., 2018) and UCAM (Stahlberg et al., 2018) systems, which are both ensemble models.
neutral
train_94427
Neural machine translation (briefly, NMT) (Bahdanau et al., 2014;Gehring et al., 2017;Vaswani et al., 2017) is well-known for its outstanding performance (Hassan et al., 2018), which usually relies on large-scale bitext for training.
when turning to clean tuning, we can obtain another 1.2 and 2.3 points improvement.
neutral
train_94428
We want to express gratitude to the anonymous reviewers for their hard work and kind comments, which will further improve our work in the future.
knowledge graph embedding models map relations and entities into continuous vector space.
neutral
train_94429
They use a score function to measure the truth value of each triple (h, r, t).
is the score for the negative sample (h_i, t'_i) corresponding to the current positive entity pair; it should be small for task T_r, which indicates that the model can properly encode the truth values of triples.
neutral
train_94430
Thus gradients of parameters indicate how the parameters should be updated.
furthermore, using the loss gradient as one kind of meta information is inspired by MetaNet (Munkhdalai and Yu, 2017) and MAML (Finn et al., 2017), which explore methods for few-shot learning by meta-learning.
neutral
train_94431
Our MetaR is tested on different settings of datasets introduced in Table 3.
gradients of parameters indicate how the parameters should be updated.
neutral
train_94432
These errors on in-vocabulary terms can be explained by inconsistencies in annotation across the two domains: • In the PPCEME, to may be tagged as either infinitival (to, e.g., I am going to study) or as a preposition (p, e.g., I am going to Italy).
as a secondary evaluation, we measure performance on the full PTB tagset in Table 3, thereby enabling direct comparison with prior work (Yang and Eisenstein, 2016).
neutral
train_94433
These pronouns are significant sources of errors for baseline models: for example, a BERT-based tagger makes 216 errors on 272 occurrences of the pronoun thee.
we apply a simple two-step approach: 1.
neutral
train_94434
On the other hand, the last example is very difficult for humans (row 4), possibly due to the relatively neutral text.
the number of sides of a shape.
neutral
train_94435
The goal here is not to build an ensemble of DNNs to surpass current classification state of the art results, but instead to test our hypothesis to determine if machine RPs can fit IRT models that can benefit NLP tasks.
as this posterior is usually intractable, VI approximates it by the variational distribution, where π_{θ_j}(·) and π_{b_i}(·) denote different Gaussian densities for different parameters whose means and variances are determined by minimizing the KL-divergence between q(θ, b) and π(θ, b|Y).
neutral
train_94436
generate target sequences iteratively but require the target sequence length to be predicted at start.
we propose a method to exploit token embeddings of the pre-trained language model to warm-start the training of edits like appends and replaces which are associated with a token argument.
neutral
train_94437
For each pair of responses (one from ARAML and the other from a baseline, given the same input post), five annotators were hired to label which response is better (i.e.
aRaML: a man is wearing a hat and holding a toothbrush as he stands on the grass of a field.
neutral
train_94438
We evaluate this model on three neural machine translation (NMT) benchmark datasets, achieving comparable performance with state-of-the-art nonautoregressive NMT models and almost constant decoding time w.r.t the sequence length.
for FlowSeq to predict the entire sequence in parallel, it needs to know its length in advance to generate the latent sequence z.
neutral
train_94439
To better understand why the approach achieves these gains, we designed the experiments on SCAN domain to address the following questions: (1) Does the proposed model work in the expected way that humans do (i.e., visualization)?
we thank Kenneth Church, Mohamed Elhoseiny, Ka Yee Lun and others for helpful suggestions.
neutral
train_94440
In training, we expect this is possible when we have prior knowledge that Y_1 depends only on X_1 and Y_2 depends only on X_2.
both input words and Algorithm 1 Proposed approach.
neutral
train_94441
For TurnLeft task, the accuracy drops only in D, but not in B or C, indicating at least one of entropy regularization is necessary.
it appears with other combinations of words at test time.
neutral
train_94442
10, 000 examples are held out to serve as the validation set.
if we replace the [MASK] token with a pronoun instead of the correct candidate, the resulting sentence sometimes sounds unnatural and would not occur in a human-written text.
neutral
train_94443
While DBMs are able to correctly identify a small set of triples with relative features such as (skyscraper, apartment, tall) as discriminative, interpretation of relative relations requires two types of features which are not targeted by the models, namely: (i) the extraction of precise numerical reference points at scale (dealing with variations of dimensional units) and (ii) the ability to extrapolate the relations for unobserved lexemes by an explicit mechanism of comparative/transitive reasoning.
while incidental attributes can be captured as a second-order distributional phenomenon, these corpora do not reflect explicit commonsense knowledge, in particular with regard to extra-linguistic phenomena.
neutral
train_94444
Definition Based Models: False negatives comprise the majority (83%) of all model errors.
discriminative attributes can also occur as incidental, sensorial or relative instances.
neutral
train_94445
The higher compression ratio renders it more challenging for the student model to absorb important weights.
with the number of epochs increasing, the student model learned with this vanilla KD framework quickly reaches saturation on the test set (see Figure 2 in Section 4).
neutral
train_94446
Different from previous knowledge distillation methods (Hinton et al., 2015;Sau and Balasubramanian, 2016;Lu et al., 2017), we adopt a patient learning mechanism: instead of learning parameters from only the last layer of the teacher, we encourage the student model to extract knowledge also from previous layers of the teacher network.
once the network has been trained, there will be a high degree of parameter redundancy.
neutral
train_94447
This further confirms what we showed in Figure 2.
this phenomenon is called Posterior Collapse, where the Kullback-Leibler (KL) divergence between the posterior and the prior (often assumed to be a standard Gaussian) vanishes.
neutral
train_94448
This is equivalent to maximizing the following evidence lower bound ELBO, In this case, Mean-field (Kingma and Welling, 2014) assumption is often used for simplicity.
uNK said the company 's uNK group is considering a uNK standstill agreement with the company traders said that the stock market plunge is a uNK of the market 's rebound in the dow jones industrial average one trader of uNK said the market is skeptical that the market is n't uNK by the end of the session the company said it expects to be fully operational by the company 's latest recapitalization i was excited to try this place out for the first time and i was disappointed .
neutral
train_94449
ization (Chen et al., 2008) to get the Cholesky factor L. The covariance matrix Σ = w·I + aa^T formed in this way is guaranteed to be positive definite.
we assume that the members of the variational family Q are dimension-wise independent, meaning that the posterior q can be written in a factorized form. The simplicity of this form makes the estimation of the ELBO very easy.
neutral
train_94450
We hope our empirical study may potentially allow others to design better attention mechanisms given their particular applications.
for SP, removing PE from our proposed attention variant dramatically degrades the performance (from 24.28 to 30.92).
neutral
train_94451
The standard cross-entropy loss function is implemented, and the thresholds of all the classes are set as 0.5.
(4) Compared with the OntoNotes and BBN datasets, FIGER shows a relatively smaller improvement when applying these policies.
neutral
train_94452
We found no evidence that attention patterns that are mappable onto core frame-semantic relations actually improve BERT's performance.
we pose the following research questions: 1.
neutral
train_94453
while a large number of such relations could be investigated, we chose to examine semantic role relations defined in frame semantics, since they can be viewed as being at the intersection of syntax and semantics.
two heads (middle) demonstrate their ability to capture semantic relations.
neutral
train_94454
In this experiment, our goal was to understand whether different syntactic and semantic relations are captured by self-attention patterns.
effects of this operation vary across tasks, and for QNLI and MNLI, it produces a performance drop of up to -0.2%.
neutral
train_94455
During training, it takes one sentence as well as the cross-sentence (document-level) information as input and predicts the ground-truth translation sentence in other languages.
our approach contains two key components: mining implicitly aligned sentence pairs and aligning topic distributions.
neutral
train_94456
While the "no supervision at all" premise behind fully unsupervised CLWE methods is indeed seductive, our study strongly suggests that future research efforts should revisit the main motivation behind these methods and focus on designing even more robust solutions, given their current inability to support a wide spectrum of language pairs.
7 Experiments with other monolingual vectors such as the vocabularies to the 200K most frequent words.
neutral
train_94457
The work of Goran Glavaš is supported by the Baden-Württemberg Stiftung (AGREE grant of the Eliteprogramm).
the first attempts at fully unsupervised CLWE induction failed exactly for these use cases, as shown by .
neutral
train_94458
In addition, we investigate two different ways to capture relation and attribute features.
we construct count-based N-hot vectors X_r and X_a for these two aspects of features, respectively, where the (i, j) entry is the count of the j-th relation (attribute) for the corresponding entity e_i.
neutral
train_94459
Feeding the next target token assumes that we know it in advance and thus calls for separate translation and alignment models.
we train the layer average baseline with a batch size of 7168 tokens on 64 Volta GPUs for 30k updates and apply a learning rate of 1e-3, β 1 = 0.9, β 2 = 0.98.
neutral
train_94460
Crowdsourced using Concept-Net (Speer and Havasi, 2012), these questions mostly probe knowledge related to factual and physical commonsense (e.g., "Where would I not want a fox?").
in order to ensure even higher quality, we validate the dev and test data a second time with five workers.
neutral
train_94461
The second column of Table 4 shows that our NMN outperforms the baseline significantly (+10 points in EM score) on the adversarial evaluation, suggesting that our NMN is indeed learning stronger compositional reasoning skills compared to the BiDAF baseline.
these results demonstrate the contribution of each module toward achieving a self-assembling modular network with the strong overall performance.
neutral
train_94462
We further train both models on the adversarial training set, and the results are shown in the last two columns of Table 4.
for multi-hop QA, directly matching the semantics of the question and context leads to the entity that bridges the two supporting facts (e.g., "Shirley Temple"), or the entities that need to be compared against each other (e.g., nationalities of Scott and Ed).
neutral
train_94463
CompTreeNN Premise and hypothesis are processed as a single aligned tree, following the structure of the composition tree in Figure 3.
these methods can provide powerful insights, but the issue of fairness looms large.
neutral
train_94464
In computer vision, it is common to adversarially train on artificially noisy examples to create a more robust model (Goodfellow et al., 2015;Szegedy et al., 2014).
let Dom be a map on N_T that assigns a set to each node, called the domain of the node.
neutral
train_94465
These methods have the advantage that the complexity of individual examples can be precisely characterized without reference to the models being evaluated.
the essence of natural logic reasoning is recursive composition up a tree structure where the premise and hypothesis are composed jointly, so this bottleneck proves extremely problematic.
neutral
train_94466
This information (especially π_ik) is not present in any existing KB.
this information (especially π_ik) is not present in any existing KB.
neutral
train_94467
Both of XPAD's new biases encourage XPAD to make state-change predictions that result in more enables edges, either by counting them (g edge ) or summing their likelihoods (g kb ), resulting in some overlap in their overall effect (Table 6.2).
the g edge and g kb scores together help XPAD discover meaningful action dependency links.
neutral
train_94468
Our work is also related to the domain of question answering and reasoning in knowledge graphs (Das et al., 2018;Xiong et al., 2018;Hamilton et al., 2018;Xiong et al., 2017;Welbl et al., 2018;Kartsaklis et al., 2018), where either the model is provided with a knowledge graph to perform inference over or where the model must infer a knowledge graph from the text itself.
our benchmark suite-termed CLUTRR (Compositional Language Understanding and Text-based Relational Reasoning)-contains a large set of semi-synthetic stories involving hypothetical families.
neutral
train_94469
In particular, we collected paraphrases for stories containing k = 1, 2, 3 supporting facts and then replaced the entities from these collected stories with placeholders in order to re-use them to generate longer semi-synthetic stories.
we highlight some key diagnostic capabilities available via different variations of CLUTRR below.
neutral
train_94470
To get a sense of the data quality and difficulty involved in CLUTRR, we asked human annotators to solve the task for random examples of length k = 2, 3, ..., 6.
r contains the logical rules that govern all the generated stories in CLUTRR, while G contains the grounded facts that underlie a specific story.
neutral
train_94471
The accuracy of machine learned speech recognizers (Hinton et al., 2012) and speech synthesizers (van den Oord et al., 2016) are good enough to be deployed in real-world products and this progress has been driven by publicly available labeled datasets.
in this case, users are asked to pretend they have a personal assistant who can help them take care of various tasks in real time.
neutral
train_94472
ASSISTANT: I'm sorry, but it looks like the 4:10 and the 6:10 pm showings are sold out.
it is doubtful that new data-driven systems built with this type of corpus would show much improvement since they would be biased by the existing system and likely mimic its limitations (Williams and Young, 2007).
neutral
train_94473
ASSISTANT: What time is best for you?
human conversation has a different distribution of understanding errors and exhibits turn-taking idiosyncrasies which may not be well suited for interaction with a dialog system (Williams and Young, 2007;Serban et al., 2017).
neutral
train_94474
One set of instructions is for the "agents" and the other is for the "customers".
for example, "Please do this" cannot be interpreted without a broader context.
neutral
train_94475
On the other hand, Chit-chat style dialogues without goals have been popular since ELIZA and have been investigated with neural techniques (Weizenbaum, 1966;Li et al., 2016Li et al., , 2017.
for transferring money, you will also need: last 4 digits of account to move from, last 4 digits of account to move to, and the sum of money to be transferred.
neutral
train_94476
To our knowledge this is a novel collection strategy as we explicitly guide/prod the participants in a dialogue to engage in conversations with specific biases such as intent change, slot change, multi-intent, multiple slot values, slot overfilling and slot deletion.
chit-chat style dialogues without goals have been popular since ELIZA and have been investigated with neural techniques (Weizenbaum, 1966;Li et al., 2016Li et al., , 2017.
neutral
train_94477
We refer to the new utterance as the complete version of the original utterance.
or sequence-based metrics (i.e., BLEU and EM).
neutral
train_94478
DSTC 2 is a research challenge focused on improving the state-of-the-art in tracking the state of spoken dialogue systems.
each word can be transformed into a vector by embedding lookup, and we sum up the vectors in each tuple to form the input sequence of the context-aware memory.
neutral
train_94479
• LSTM (Tang et al., 2016a) uses the last hidden state vector of LSTM to predict sentiment polarity.
for convenience, we denote the output of the l-th layer for node i as h_i^l, where h_i^L is the final state of node i.
neutral
train_94480
Trees Aiming to address the limitations of existing approaches (as discussed in previous sections), we leverage a graph convolutional network over dependency trees of sentences.
• Extensive experiment results verify the importance of leveraging syntactical information and long-range word dependencies, and demonstrate the effectiveness of our model in capturing and exploiting them in aspectbased sentiment classification.
neutral
train_94481
In this evaluation, we report precision, recall, and F1 scores.
here f*(·) is a ReLU-activated neural perceptron described above.
neutral
train_94482
from the Laptop domain.
unfortunately, these methods highly rely on prior knowledge (e.g., manually-designed rules) or external linguistic resources (e.g., dependency parsers), which are inflexible and prone to bringing in knowledge errors.
neutral
train_94483
• AD-SAL: it advances the AD-AL by conducting selective adversarial learning.
our model can still outperform them by a large margin, which demonstrates the effectiveness of the proposed methods.
neutral
train_94484
Our model can automatically model complicated relations among aspect and opinion words via the DMI as transferable knowledge.
we propose an empirically alternating strategy to train the L M +ρL O and L D iteratively, which separates the whole word representation learning into a discriminative stage and a domain-invariant stage.
neutral
train_94485
Aspect level sentiment classification is a finegrained sentiment analysis task.
• We extend CAN to multi-task settings by introducing ACD as an auxiliary task, and applying CAN on both ALSC and ACD tasks.
neutral
train_94486
The details will be introduced in Section 3.
aspect level sentiment classification (aLSC) is a fine-grained sentiment analysis task, which aims at detecting the sentiment towards a particular aspect in a sentence.
neutral
train_94487
We apply a dropout of p = 0.7 after the embedding and LSTM layers.
85.23% and 83.73% of the multiaspect sentences are non-overlapping in Rest14 and Rest15, respectively.
neutral
train_94488
In this paper, we introduce orthogonal regularization to constrain the attention weights of multiple non-overlapping aspects, as well as sparse regularization on each single aspect.
given the sentence S with K aspects A^s = {A^s_1, ..., A^s_K}, the attention weights for each aspect A^s_k are calculated as follows, where u^s_k is the embedding of the aspect A^s_k, e_L ∈ R^L is a vector of 1s, and u^s_k ⊗ e_L is the operation of repeatedly concatenating u^s_k L times.
neutral
train_94489
As expected, our weakly supervised approach does not outperform the fully supervised (*-Gold) models.
our student model is an embedding-based neural network: a segment is first embedded (h_i = EMB(s_i) ∈ R^d) and then classified into the K aspects (p_i = CLF(h_i)) (see Section 2.1).
neutral
train_94490
User-generated reviews can be decomposed into fine-grained segments (e.g., sentences, clauses), each evaluating a different aspect of the principal entity (e.g., price, quality, appearance).
the student is able to generalize better than the teacher and predict aspects even in segments that do not contain any seed words.
neutral
train_94491
Our goal is to predict ŷ_{u,c} ∈ [0, 1], which measures how likely user u is to engage in conversation c. Here, to estimate ŷ_{u,c}, two types of information are encoded: the replying history of users and the interaction structure of conversations.
we also observe that both baseline models work poorly.
neutral
train_94492
Other examples, an analysis of the US Presidential Election in 2016 (Allcott and Gentzkow, 2017) revealed that fake news was widely shared during the three months prior to the election with 30 million total Facebook shares of 115 known pro-Trump fake stories and 7.6 million of 41 known pro-Clinton fake stories.
one is that the number of training examples in RumourEval (including 5,568 tweets) is relatively limited as compared with PHEME (including 105,354 tweets), which is not enough to train deep neural networks.
neutral
train_94493
A manual examination of examples in the mixed categories found two main types of cases.
such manual approaches often require an exhaustive list of linguistic cues, which costs a significant amount of human effort and its generalizability may be limited by the small sample size.
neutral
train_94494
Table 2 shows there are fewer tweets targeting disability in Arabic compared to English and French and no tweets insulting people based on their sexual orientation which may be due to the fact that the labels of gender, gender identity, and sexual orientation use almost the same wording.
for deep learning based models, we run bidirectional LSTM (biLSTM) models with one hidden layer on each of the classification tasks.
neutral
train_94495
One option would be to manually map those labels onto one another.
category III is for predicting veracity; they encourage retrieving evidence documents as part of their task description, but do not distribute them.
neutral
train_94496
In this paper, we advance this line of research.
for the Stance dataset, N2V user representations are more informative.
neutral
train_94497
Press media, one form of mass media, manifests itself in large, diachronic collections of newspaper articles; such corpora provide a promising avenue for studying public opinion and testing theories, provided scholars can be confident that the measures they obtain over time are substantively invariant (Davidov et al., 2014).
the core intuition behind consistency regularization is that ensembled predictions are more likely to be correct than single predictions (Laine and Aila, 2017; Tarvainen and Valpola, 2017).
neutral
train_94498
We show representations for both source domain samples (green) and target domain samples (blue).
the student network is updated via backpropagation, then the teacher network is updated with an exponential average of the student network's parameters (Tarvainen and Valpola, 2017).
neutral
train_94499
We further use several layers of transformer encoders (Vaswani et al., 2017) to learn the correlation between different feature fields.
the matrix product between the query qW^Q_i and the key HW^K_i, after softmax normalization, is an attention weight that indicates important words among the projected value vectors HW^V_i.
neutral