id (stringlengths 7-12) | sentence1 (stringlengths 6-1.27k) | sentence2 (stringlengths 6-926) | label (stringclasses 4 values) |
---|---|---|---|
train_91900 | Consequently, latent features are encouraged to cooperate and behave diversely to capture meaningful information for each sentence. | we consider three public datasets, the Penn Treebank (PTB) (Marcus et al., 1993;Bowman et al., 2015), Yahoo, and Yelp corpora He et al., 2019). | neutral |
train_91901 | In addition, we apply three SOTA document vectorization methods -doc2vec (Le and Mikolov, 2014), InferSent (Conneau et al., 2017), and BERT (Devlin et al., 2019) -and use the extracted vectors to train a linear regressor to predict emotion distributions. | considerable research efforts on small sample learning are emerging recently (Lake et al., 2015;Shu et al., 2018). | neutral |
train_91902 | Recently, a decent number of deep learning based models have been proposed for text generation. | in tasks that are set up to judge individual pieces of generated text (e.g., reviews, translations, summaries, captions, fake news) where there exists human-written ground-truth, it is better to use word-overlap metrics instead of adversarial evaluators. | neutral |
train_91903 | Human annotation is able to assess the quality of text more directly than task based evaluation. | notably, maximizing the adversarial error is consistent to the objective of the generator in generative adversarial networks. | neutral |
train_91904 | Training SBERT in the 10-fold cross-validation setup gives a performance that is nearly on-par with BERT. | smart batching achieves a speed-up of 89% on CPU and 48% on GPU. | neutral |
train_91905 | These tasks include large-scale seman-tic similarity comparison, clustering, and information retrieval via semantic search. | it requires that both sentences are fed into the network, which causes a massive computational overhead: Finding the most similar pair in a collection of 10,000 sentences requires about 50 million inference computations (~65 hours) with BERT. | neutral |
train_91906 | To explain the ERM framework for bipartite ranking, in this section, let us assume that we are given positive data drawn from the class-conditional probability densities p(x|y = 1) and p(x|y = −1), respectively. | they can also pick an optimization method that is suitable for their model such as Adam (Kingma and Ba, 2014) or AMS-Grad (Reddi et al., 2018). | neutral |
train_91907 | If the class priors are not identical, we suggest not to use these metrics since it can be misleading because a good F 1 -measure in training data cannot guarantee a good F 1 -measure in the test data. | a good pseudo-labeling is an algorithm that can divide the data with a large gap (i.e., θ − θ is large). | neutral |
train_91908 | Greek is becoming the best university in the world. | (Erhan et al., 2010;Dahl et al., 2012) showed the effectiveness of pre-training for tasks such as speech recognition. | neutral |
train_91909 | The state-of-the-art performances have been significantly advanced for classification and sequence labeling tasks, such as natural language inference (Bowman et al., 2015), named-entity recognition, SQuAD question answering (Rajpurkar et al., 2016) etc. | results on four datasets show that PoDA can improve model performance over strong baselines without using any task-specific techniques and significantly speed up convergence. | neutral |
train_91910 | However, most of the clusters are very small: 91.7% of the clusters contain only 2-5 questions. | because of this, most of the works focus on few simple dialog intents and fail to explore the realistic complexity of user intent space (Williams et al., 2013;budzianowski et al., 2018). | neutral |
train_91911 | We use the neural encoder trained with the autoencoding objective to initialize the two utterance encoders in AV-KMEANS. | most of the clusters are very small: 91.7% of the clusters contain only 2-5 questions. | neutral |
train_91912 | 0.31 Cubs won the World Series?" | we generated Nyström representation of the Compositionally Smoothed Partial Tree Kernel function (Annesi et al., 2014) consistently with (Croce et al., 2017). | neutral |
train_91913 | The Layer-wise Relevance Propagation assigns to each dimension, or feature, x d a relevance score R < 0 correspond to evidence in favor or against, respectively, the output classification. | more formally, let f : R d → R + be a function that quantifies, for example, the probability of x ∈ R d being in a certain class. | neutral |
train_91914 | The consequence is that the variational posteriors would be more diverse in the latent space characterizing different input sequences, while the KL regularizer restricts the posteriors to match the Gaussian prior, with less "holes" in between where the decoder cannot be trained. | the estimation and maximization of the MI in the high-dimensional space are difficult. | neutral |
train_91915 | Acknowledgements: We thank Prof. Vineeth Balasubramium, IIT Hyderabad, India for the many helpful suggestions and discussions. | diversity-based query strategies (Sener and Savarese, 2018) are used to address this issue, by selecting a representative subset of the data. | neutral |
train_91916 | We analyze three algorithmic factors of relevance to sampling bias: (a) Initial set selection (b) Query size, and, (c) Query strategy. | it seems to have a inductive bias for class boundaries, similar to the above works. | neutral |
train_91917 | 2 For example, our VQA-CP ) bias-only model (see Section 5.2) uses the question type as input, because the correlations between question types and answers is very different in the train set than the test set (e.g., 2 is a common answer to "How many..." questions on the train set, but is rare for such questions on the test set). | for TriviaQA we use a 256 dimensional fully connected layer and 128 dimensional LSTMs, with highway connections between each BiL-STM (Srivastava et al., 2015) and a recurrent dropout rate of 0.2. | neutral |
train_91918 | Results: Table 4 shows the results. | we consider biases beyond partial-input cases (Feng et al., 2019), and show our method is superior on VQA-CP. | neutral |
train_91919 | Then build an augmented vector for each premise word by concatenating the word's embedding, the context vector, and the elementwise product of the two. | it is the features we will use in our bias-only model. | neutral |
train_91920 | a series of differentiable transformations h k : where z k is the vector of activations in the k-th layer and the final output z K consists of the logits for each class. | the specification against which we verify is that a text classification model should preserve its prediction under character (or synonym) substitutions in a character (or word) based model. | neutral |
train_91921 | If we only allow a character to be replaced by another character nearby on the keyboard, already for this short sentence we need to exhaustively search over 2,951 possible perturbations. | the true class logit should be greater or equal than those for all other classes y, which means the prediction remains constant. | neutral |
train_91922 | Transforming it into a three-player game helps to improve the performance of the evaluation set while lower the accuracy of the complement predictor. | table 5 shows the performance of subjective evaluations. | neutral |
train_91923 | Generator: The generator extracts R and R c by generating the rationale mask, z(•), as shown in Eqs. | 3 This problem happens because they have no control of the words unselected by R. Intuitively, in the presence of degeneration, some key predictors in X will be left unselected by R. by looking at the predictive power of R c , we can determine if degeneration occurs. | neutral |
train_91924 | For our sampling-based objectives, we use k = 1024 samples, and the unigram distribution as p n . | 2 may be simplified as following: We can further accelerate learning by using importance sampling on the second term, and because of the logarithm applied to the sum, we obtain an objective that is equivalent to composing the approximated softmax with γ-divergence, as can be seen in Appendix A.4. | neutral |
train_91925 | The large vocabulary sizes encountered in training corpora arguably stem from the fact that the frequency distribution of words in a corpus of natural language follows Zipf's law (Powers, 1998). | tracking these values during training 13 shows that they all behave very similarly to perplexity. | neutral |
train_91926 | There has been methods that aim to speed up neural CRF (Tu and Gimpel, 2018), and to solve the Markov constraint of neural CRF. | our model outperforms all the baselines on all the languages. | neutral |
train_91927 | We do not use adversarial training (Goodfellow et al., 2015) because it would require running an adversarial search procedure at each training step, which would be prohibitively slow. | their models were still be fooled by running a more expensive search procedure at test time. | neutral |
train_91928 | This is an unrealistic setting that limits the applicability of such models in real world scenarios. | the performance of the simple baselines on this dataset is shown in table 4. | neutral |
train_91929 | • Domain of learning task: Indicates the domain of the current classification task. | we discretize this ratio into three values, corresponding to the upper (HIGH), lower (LOw) and middle two interquartile ranges (MEDIUM) for the value of the ratio over all explanations in our data. | neutral |
train_91930 | A notable issue in learning from explanations, which we do not model here is that a teacher's multiple explanations of a concept can have a large variance in their utility to a learner. | automated learners should be capable of learning from a blend of observations, explanations and clarification. | neutral |
train_91931 | (2018), who train loglinear classifiers (with parameters θ) using natural language explanations of the individual classes and unlabeled data. | from the learner's perspective, over time the size of the unlabeled data decreases, and the labeled examples increase (see Equation 2). | neutral |
train_91932 | To alleviate the concerns to some extent, we consider the case in which the leakage is in the induced negative words distribution. | this setup lends itself to a discriminative training approach, which we demonstrate to work better than generative language modeling. | neutral |
train_91933 | Language models are traditionally evaluated with perplexity. | we tackle these three issues: we propose an ASRmotivated evaluation setup which is decoupled from an ASR system and the choice of vocabulary, and provide an evaluation dataset for English-Spanish code-switching. | neutral |
train_91934 | (2018b) construct graphs from procedural text to track entity position to answer when and if entities are created, destroyed, or moved. | the node weight and edge weight are equivalent to the number of merge operations + 1. | neutral |
train_91935 | We describe how symbolic graph representations of knowledge can be constructed from text. | we build graphs from substantially longer multi-document input and use them for multi-sentence text generation. | neutral |
train_91936 | (2017) leverage the source-side monolingual data to train the NMT system by learning reward func-4200 tion in a reinforcement learning framework. | in this work, we study how to use both the source-side and targetside monolingual data for NMT, and propose an effective strategy leveraging both of them. | neutral |
train_91937 | One of knowledge graph completion tasks is link prediction, predicting new triples based on existing ones. | entities only have one triple during training will make MetaR unable to learn good representations for them, because entity embeddings heavily rely on triples related to them in MetaR. | neutral |
train_91938 | modaira, 2000;Quiñonero-Candela et al., 2009;Daume III, 2007;Ben-David et al., 2010;Blitzer et al., 2011;Pryzant et al., 2017) or domaininvariant features (Ganin and Lempitsky, 2015;Tzeng et al., 2014). | many DRO problems admit efficient batch optimization procedures based on Lagrangian duality . | neutral |
train_91939 | Topic CVaR robustness beyond subpopulation shift. | unlike our work, these approaches use topics at test time by inferring the domain from the input variable x. | neutral |
train_91940 | Finally, language modeling objectives have previously been used for domain adaptation of text classifiers (Ziser and Reichart, 2018), but this prior work has focused on representation learning from scratch, rather than adaptation of a pretrained contextualized embedding model. | we are also interested to more thoroughly explore how to combine domain-adaptive and task-specific fine-tuning within the framework of continual learning (Yogatama et al., 2019), with the goal of balancing between these apparently conflicting objectives. | neutral |
train_91941 | Another source of out-of-vocabulary words is the addition of a silent e to the end of many words. | interestingly, domain-adaptive fine-tuning has no impact on the performance on the original tagging task. | neutral |
train_91942 | On the other hand, the last example is very difficult for humans (row 4), possibly due to the relatively neutral text. | with the methods proposed here those outputs can be used to learn the latent parameters of the data to focus in on what exactly is working well and what isn't with respect to the models being tested and the data used to train them. | neutral |
train_91943 | PIE2 However , there are always two sides to the stories . | the alternative of predicting insert independently at each gap with a null token added to Σ a performs 2.7 F 0.5 points poorly (table 6 row 4 vs row 2). | neutral |
train_91944 | And a higher BLEU score implies a better translation quality. | flowSeq has much larger gains in the decoding speed w.r.t. | neutral |
train_91945 | However, they are all relatively small, often created only as a test set. | the Winograd Schema is given as the "premise". | neutral |
train_91946 | Example: (cheek, brow, red) is sensory and (grapes, wine, fruit) is logical. | the current model is able to identify "skyscraper" as being taller than an "apartment", but fails to identify neither as taller than a "giraffe". | neutral |
train_91947 | Even fine-tuning the pre-trained model with taskspecific dataset may take several hours to finish one epoch. | ideally, we should pre-train BERT 6 [Large] and BERT 6 [Base] from scratch, and use the weights learned from the pretraining step for weight initialization in KD training. | neutral |
train_91948 | We also propose two different strategies for the distillation process: (i) PKD-Last: the student learns from the last k layers of the teacher, under the assumption that the top layers of the original network contain the most informative knowledge to teach the student; and (ii) PKD-Skip: the student learns from every k layers of the teacher, suggesting that the lower layers of the teacher network also contain important information and should be passed along for incremental distillation. | the distance between the teacher's prediction and the student's prediction can be defined as: where c is a class label and C denotes the set of class labels. | neutral |
train_91949 | the food is good , but the food is good this place is a great place to go for lunch . | the optimal variational posterior q * ∈ Q is then the one that minimizes the KL divergence Based on this, variational autoencoder (VAE) (Kingma and Welling, 2014) is proposed as a latent generative model that seeks to learn a posterior of the latent codes by minimizing the KL divergence between the true joint density p θ (x, z) the variational joint density q φ (z, x). | neutral |
train_91950 | VAE is able to learn a continuous space of latent random variables which are useful for a lot of classification and generation tasks. | we assume that the members of variational family Q are dimensional-wise independent, meaning that the posterior q can be writ- The simplicity of this form makes the estimation of ELBO very easy. | neutral |
train_91951 | The results are presented in Table 4. | by changing the kernel construction in Section 2.2.2, we can define a larger space for composing attention. | neutral |
train_91952 | Formally, a sequence with f i ∈ F being the nontemporal feature at time i and t i ∈ T as an temporal feature (or we called it positional embedding). | incorporating positional embedding into the attention mechanism may still improve performance. | neutral |
train_91953 | the ones that could be informative) varies from 32% (MRPC) to 61% (QQP) depending on the task. | manual inspection of self-attention maps for both basic pre-trained and fine-tuned BERT models suggested that there is a limited set of selfattention map types that are repeatedly encoded across different heads. | neutral |
train_91954 | (2019) extended this work to using multiple layers and tasks, supporting the claim that BERT's intermediate layers capture rich linguistic information. | the gain from disabling a single head is different for different tasks, ranging from the minimum absolute gain of 0.1% for StS-B, to the maximum of 1.2% for MRPC (see Figure 8). | neutral |
train_91955 | Frankle and Carbin (2018) showed that widely used complex architectures suffer from overparameterization, and can be significantly reduced in size without a loss in performance. | it is important to note that across all tasks and datasets, disabling some heads leads to an increase in performance. | neutral |
train_91956 | The second baseline, referred as "NMT + Document Translation", is to treat the weakly aligned doc- 10.9 14.6 ---- NMT (Lample et al., 2018) 17.2 21.0 19.7 20.0 21.2 19.5 PBSMT (Lample et al., 2018) 17.9 22.9 --22.0 23.7 PBSMT + NMT (Lample et al., 2018) 20 uments as two long sentences and use them as a bilingual sentence pair to train NMT models. | 2) We leverage the topic consistency of two weakly paired documents and learn the sentence translation model by constraining the word distribution-level alignments. | neutral |
train_91957 | Different language versions of Wikipedia pages about the same entity/event are usually created by different people speaking different native languages, and therefore most sentences in two weakly aligned documents are not aligned. | the challenge is how to mine such sentence pairs from those document pairs. | neutral |
train_91958 | Ablation Study Our method leverages weakly paired documents in two ways. | if we translate an article from English to French sentence-by-sentence, the word distribution of the translated article should be generally similar to the word distribution of the corresponding article in French. | neutral |
train_91959 | We employ policy gradient methods to train the first agent with the target language loglikelihood as reward. | +LM ein alter mann in einer jacke beobachtet einen tisch . | neutral |
train_91960 | (2018) show that up to a certain layer performance of representations obtained from a deep LM improves on a constituent labeling task, but then decreases, while with representations obtained from an MT encoder performance continues to improve up to the highest layer. | for MLM, representations initially acquire information about the context around the token, partially forgetting the token identity and producing a more generalized token representation. | neutral |
train_91961 | Future work should move beyond the restrictive assumption by exploring new methods that can, e.g., 1) increase the isomorphism between monolingual spaces (Zhang et al., 2019) by distinguishing between language-specific and language-pairinvariant subspaces; 2) learn effective non-linear or multiple local projections between monolingual spaces similar to the preliminary work of Nakashole (2018); 3) similar to Vulić and Korhonen (2016) and Lubin et al. | even in these settings when the comparison to the weakly supervised FULL-SUPER+SYM is completely fair (i.e., (c) Swedish→ L2 Figure 3: A comparison of BLI scores on "easy" (i.e., similar) language pairs between the fully UNSUPERVISED model and a weakly supervised model (seed dictionary size |D 0 | = 200 or |D 0 | = 500) which relies on the self-learning procedure with the symmetry constraint (FULL+SL+SYM). | neutral |
train_91962 | Current unsupervised adversarial approaches show that it is possible to build a mapping matrix that aligns two sets of monolingual word embeddings without high quality parallel data, such as a dictionary or a sentence-aligned corpus. | (2018) and Chen and Cardie (2018), when setting β to less than 0.01, the orthogonalization usually performs well. | neutral |
train_91963 | Our approach is based on the intuition that mapping across distant languages is better done at the concept level than at the word level. | the performance of directly trained models is limited by their vocabulary size. | neutral |
train_91964 | A small distance reflects a high probability for an entity pair to be aligned as equivalent entities. | for DBP100K, the dimensionalities are set to 100, 50, and 50, respectively. | neutral |
train_91965 | As shown in Table 2, MAN and HMAN consistently outperform all baselines in all scenarios, especially HMAN. | we adopt SGD to update parameters and the numbers of epochs are set to 2,000 and 50,000 for MAN and HMAN, respectively. | neutral |
train_91966 | Following previous work Wang et al., 2018), we adopt the same split settings in our experiments, where 30% of the ILLs are used as training and the remaining 70% for evaluation. | the coverage of ILLs among existing KGs is quite low (Chen et al., 2018): for example, less than 20% of the entities in DBpedia are covered by ILLs. | neutral |
train_91967 | This suggests that our model can capture the characteristics of the source dataset via pretraining when using small supervision from language adaptation (i.e., small α). | the evaluation results on public benchmark datasets and comparison against current state-of-the-art approaches demonstrate the effectiveness of our approach. | neutral |
train_91968 | For instance, Bowman (2013) poses generalization tasks in which entire reasoning patterns are held out for testing. | to assign these labels, we translate each premisehypothesis pair into first-order logic and use Prover9 (McCune, 2005(McCune, -2010. | neutral |
train_91969 | We now formalize the idea of recursive tree-structured composition and this intuitive notion of fairness. | this enables us to create provably fair NLI tasks in Section 4. | neutral |
train_91970 | It would not be possible to use pretrained word vectors, due to the artificial nature of our dataset. | the essence of natural logic reasoning is recursive composition up a tree structure where the premise and hypothesis are composed jointly, so this bottleneck proves extremely problematic. | neutral |
train_91971 | We add the new task of also predicting why the actions are needed, in the form of the actions' effects (blue) and subsequent actions that depend on those effects (green). | this analysis suggests several ways that the dependency graph computation could be improved. | neutral |
train_91972 | The 2014 ProRead system (Scaria et al., 2013;Berant et al., 2014) included dependency relationships between events that it extracted, but assessed dependencies based on surface language cues, hence could not explain why those dependencies held. | they are unaware if they predict effects that have no apparent purpose in the process, possibly indicating a prediction error (e.g., the erroneous predictions in red in Figure 1). | neutral |
train_91973 | We found that time-constrained AMT annotators performed well (i.e., > 70%) accuracy for k ≤ 3 but struggled with examples involving longer stories, achieving 40-50% accuracy for k > 3. | overall, we find that the GAT baseline outperforms the unstructured text-based models across most testing scenarios (Table 2), which showcases the benefit of a structured feature space for robust reasoning. | neutral |
train_91974 | To generate these stories, we first design a knowledge base (KB) with rules specifying how kinship relations resolve, and we use the following steps to create semi-synthetic stories based on this knowledge base: Step 1. | unlike previous benchmarks in this domain-which are generally transductive and focus on leveraging and extracting knowledge graphs as a source of background knowledge about a fixed set of entities-CLUTRR requires inductive logical reasoning, where every example requires reasoning over a new set of previously unseen entities. | neutral |
train_91975 | Enabling an automated system to hold a coherent task-based conversation with a human remains one of computer science's most complex and intriguing unsolved problems (Weizenbaum, 1966). | at the same time, the machine-oriented context of the interaction, i.e. | neutral |
train_91976 | Beyond the corpus and the methodologies used to create it, we present several baseline models including state-of-the-art neural seq2seq architectures together with perplexity and BLEU scores. | learning from purely human-human based corpora presents challenges of its own. | neutral |
train_91977 | We combine these ELMO and GloVe embeddings via concatenation. | name, SSn number) or they can be unique to a domain-specific intent (e.g. | neutral |
train_91978 | As a sanity check, we also include a most frequent class (MFC) baseline. | past dialogue datasets do not make this distinction at speech act schema level. | neutral |
train_91979 | (2018); throughout the rest of the paper, we use the BERTbased architecture in our experiments. | by round 3, however, workers struggle to trick the system, earning an average score of only 1.6 out of 5. | neutral |
train_91980 | We can find that the GECOR model beats the baseline model in all respects. | for these cases, we tested our models and made statistical analysis on the three versions of data as shown in column 3, 4 and 5 of Table 3 (EM, EM 1, EM 2). | neutral |
train_91981 | Both probabilities from the two modes contribute to the final probability distribution over the extended vocabulary (the vocabulary plus the words from the dialogue context) which is calculated as follows: which is used to predict the final output word. | we only integrate the GECOR model with the copy mechanism into the dialogue system. | neutral |
train_91982 | the closest gas station is located at 200 alester ave 7 miles away would you like directions there Mem2Seq there is a valero 1 miles away HMNs there is a gas station located 2 miles away at 200 alester ave We test the hidden size in [64,128,256] and set dropout rate in [0.1, 0.2]. | attention weights in the last hop of the two memories,P kb and P his will be the probability of the target word from those memories. | neutral |
train_91983 | The gating mechanism applied is adopted from Bidirectional GRU (Cho et al., 2014a) in our case. | we adopt GRU as our controller. | neutral |
train_91984 | Then the representation of each node is updated with graph convolution operation with normalization factor (Kipf and Welling, 2017) as below: where g l−1 j ∈ R 2d h is the j-th token's representation evolved from the preceding GCN layer while h l i ∈ R 2d h is the product of current GCN layer, and A ij is degree of the i-th token in the tree. | (2014) containing twitter posts, while the other four (LAP14, REST14, REST15, REST16) are respectively from SemEval 2014 task 4 (Pontiki et al., 2014), SemEval 2015 task 12 (Pontiki et al., 2015) and SemEval 2016 task 5 (Pontiki et al., 2016), consisting of data from two categories, i.e. | neutral |
train_91985 | This is because the DMI can infer relational representations that capture some common latent relations between aspect and opinion words. | for the second stage, the domain classification loss is minimized by the domain discriminator parameters θ d while maximized by the feature learning parameters θ f via GRL (i.e., the features are (Pontiki et al., 2014). | neutral |
train_91986 | To solve that, the full model AD-SAL performs a local semantic alignment to dynamically focus on aligning aspect words that contribute more to the domain-invariant feature space. | at the final hop, we adopt a domain discriminator for each word with a gradient reversal layer (Ganin et al., 2016) to perform domain adversarial learning over its correlation vector (alignment). | neutral |
train_91987 | In such settings, a sentence S with K aspects will be copied to form K instances. | w a 1 ∈ R d×d , w a 2 ∈ R d×d and z a ∈ R d are the weight matrices. | neutral |
train_91988 | Multi-task learning can learn better hidden states of the sentence, and better aspect embeddings. | (2016) propose an attention-based LSTM network for aspect level sentiment classification. | neutral |
train_91989 | For each domain, we train our model on the training set without using any aspect labels, and only use the seed words G via the teacher. | although averages of seed words were used as "anchors" in the "Tandem anchoring" algorithm, we observed that the learned topics did not correspond to our aspects of interest. | neutral |
train_91990 | Thus, using the distillation loss for training, the student learns to use both seed words and non-seed words to predict aspects. | the aspect embeddings A k are initialized by clustering the vocabulary embeddings using kmeans with K clusters. | neutral |
train_91991 | Aspect labels (9class for product reviews and 12-class for restaurant reviews) are available for each segment 6 of the validation and test sets. | in ABAE, the K topics learned to reconstruct the segments are not necessarily aligned with the K aspects of interest. | neutral |
train_91992 | Stance of tweets is annotated as support, deny, query, and comment. | the embeddings of a sentence are represented as l×(dp+dw) . | neutral |
train_91993 | Garcia et al., 1997;Cofield et al., 2010;Lazarus et al., 2015;Chiu et al., 2017). | (2017) adopted a simplified taxonomy from (Sumner et al., 2014) for causal relations in research findings, and used SVM to develop a fourcategory classifier with a 0.718 F1-score, suggesting room for further performance improvement. | neutral |
train_91994 | While MSA represents the standardized and literary variety of Arabic, there are several Arabic dialects spoken in North Africa and the Middle East in use on Twitter. | mTSL Except for the directness, mTSL usually outperforms STSL or is comparable to it. | neutral |
train_91995 | This may make the process of optimizing the objective function more difficult and hinder the model to learn representative embeddings for the nodes. | for the vertex classification task, a logistic regression model is first trained to classify the embeddings into different categories based on the provided labels of nodes. | neutral |
train_91996 | First, we analyze the document counts for each decade bin, shown in Figure 8. | datasets used by political scientists are mostly homogeneous in terms of subject (e.g., immigration) or document type (e.g., constitutions). | neutral |
train_91997 | Typically, researchers first represent locations as earth grids (Wing and Baldridge, 2011;Roller et al., 2012), regions (Miyazaki et al., 2018;Qian et al., 2017), or cities (Han et al., 2013). | acc@161: The percentage of predicted cities which are within a 161 km (100 miles) radius of true locations to capture near-misses. | neutral |
train_91998 | Awry is the model previously proposed by Zhang et al. | our predictor consists of a multilayer perceptron (MLP) with 3 fully-connected layers, leaky ReLU activations between layers, and sigmoid activation for output. | neutral |
train_91999 | To address the growing problem of online hate, an extensive body of work has focused on developing automatic hate speech detection models and datasets (Warner and Hirschberg, 2012;Waseem and Hovy, 2016;Davidson et al., 2017;Schmidt and Wiegand, 2017;ElSherief et al., 2018a,b;Qian et al., 2018a,b). | for binary hate speech detection, we experimented the following four different methods. | neutral |