Dataset preview: four columns per row (a minimal loading sketch follows the rows below).

id (string, lengths 7 to 12) | sentence1 (string, lengths 6 to 1.27k) | sentence2 (string, lengths 6 to 926) | label (4 classes) |
---|---|---|---|
train_94000 | The small value of a i implies that there is a breakpoint between w i and w i+1 , so the constituent prior C is obtained from the sequence a as follows. | one prior attempt formulated this problem as a reinforcement learning (RL) problem (Yogatama et al., 2017), where the unsupervised parser is an actor in RL and the parsing operations are regarded as its actions. | neutral |
train_94001 | All values of a on top few layers are very close to 1, suggesting that those are not good breakpoints. | whether two words belonging to the same constituent is determined by "Constituent Prior" that guides the self-attention. | neutral |
train_94002 | (2017) rely on BiLSTMs and improve initial predictions from an competitive baseline and obtain state-of-art-results on English-to-French translation. | our model slightly reduces the number of R violations, while He et al. | neutral |
train_94003 | The basic factorized model got it wrong, assigning A1 to the argument 'state'. | this work is similar to energy network approach (Belanger and McCallum, 2016), while a global score function is provided, and approximate inference steps are used. | neutral |
train_94004 | In the baseline confusion matrix, we see the errors are fairly balanced for all the roles we consider here. | learning those constraints in a soft way might be beneficial. | neutral |
train_94005 | The forward pass for each document, n, starts from τ = 1 and computes: r − n,τ −1,ι e E ln T ι,j + ll n,τ (j), where r − n,0,ι = 1 where ι ='O' and 0 otherwise. | the VB algorithm is described in Algorithm 1, making use of update equations for the variational factors given below. | neutral |
train_94006 | 6 Despite this and their phylogenetic and geographical relatedness, they share very little vocabulary: only 6.5% of North Sámi tokens appear in Finnish data, and these words are either proper nouns or closed class words such as pronouns or conjunctions. | a popular approach is to train a parser on a related high-resource language and adapt it to the low-resource language. | neutral |
train_94007 | The latter serves as (Straka, 2018) for Galician and Uppsala for Kazakh. | when the number of training data is the lowest (T 10 ), data augmentations improves performance up to 9.3% LAS. | neutral |
train_94008 | For example, our framework improves the UAS score of Urdu by 15.7%, and Tamil by 7.3%. | here we use the oracle setting to reduce the noise introduced by corpus-statistics estimation errors. | neutral |
train_94009 | Natural language processing (NLP) techniques have achieved remarkable performance in a variety of tasks when sufficient training data is available. | with the constrained inference, especially the postposition constraint (C2), the proposed inference algorithm bring significant improvement. | neutral |
train_94010 | In the following, we derive the constrained inference algorithm for corpus-statistics constraints. | we explore several types of corpus linguistic statistics and compile them into corpus-wise constraints to guide the inference process during the test time. | neutral |
train_94011 | We add two variations of our model ORACLE-DISCRIM and PRETRAIN-ENC for the transfer scenario to see the performance improvements gained from replacing different components of our model with an oracle/near oracle. | then a decision is made on whether this subtree fails to match the relevant words in the utterance and needs to be replaced. | neutral |
train_94012 | Moreover, we present a detailed analysis of our proposed model and the performance of its different components through an oracle study. | the 'Adapt" module is further broken down into two modules "Aligner" and "Discriminator". | neutral |
train_94013 | Unlike prior studies, the outputs of the auxiliary task do not directly determine the entity boundaries but rather serve as features for assisting the main task. | unlike prior studies, the outputs of the auxiliary task do not directly determine the entity boundaries but rather serve as features for assisting the main task. | neutral |
train_94014 | As is shown in Figure 5, to enable TextCNN to predict scores for each word, and to make a fair comparison with SAC, we remove the pooling layer of TextCNN and adopt the same convolutional structure as SAC. | we chose CNN instead of RNN in SAC and we set the max kernel size to 3. | neutral |
train_94015 | Let the set of transitions of a derivation be: T (y) = T (a 1 . | lexical frequency was added as a covariate of noninterest, to statistically factor out effects of general word frequency (generally used in psycholinguistics), that may correlate with other types of expectations. | neutral |
train_94016 | We thank Sam Bowman from NYU University and Alon Talmor from Tel Aviv University for providing us the annotation information of the MNLI and COMMONSENSEQA datasets. | a common crowdsourcing practice is to recruit a small number of high-quality workers, and have them massively generate examples. | neutral |
train_94017 | Hence, our goal is to learn a single classifier that can adapt to the output of any selector operating at any budget. | these approaches process the entire text and encode words and phrases in order to perform target tasks. | neutral |
train_94018 | Given a commonsense head-relation-tail triple x = (h, r, t), we are interested in determining the validity of that tuple as a representation of a commonsense fact. | (2017) and Trinh and Le (2018) demonstrate a similar approach to using language models for tasks requiring commonsense, such as the Story Cloze Task and the Winograd Schema Challenge, respectively (Mostafazadeh et al., 2016;Levesque et al., 2012). | neutral |
train_94019 | As the goal of our experiments is to demonstrate the ability of our approach to reduce the number of parameters, we only consider rational baselines: the same rational RNNs trained without group lasso. | our heavily regularized models perform substantially better than the unigram baselines, gaining between 1-2% absolute improvements in four out of five cases. | neutral |
train_94020 | We use a slightly different notation in this section: for a word w the ith component of its embedding is given by E w,i . | a list of occupation names 3 by Bolukbasi et al. | neutral |
train_94021 | In this paper, we propose to apply metalearning algorithms in general language represen-tations learning. | to this end, we perform transfer learning experiments on a new natural language inference dataset, namely Scitail. | neutral |
train_94022 | At the same time, because f is invertible the average number of bits that can be encoded is given by the entropy H(q). | the small but positive KL value for arithmetic coding with k = 300 comes from the slight distributional difference when the long tail is truncated. | neutral |
train_94023 | To ensure that these texts seem more natural to human eavesdroppers we introduce two parameters to trade off quality for compression: we modulate the LM distribution by a temperature τ , and we truncate the distribution to a top-k tokens at each position to ensure generations do not include tokens from the long tail. | the compression objective is to maximize H(q). | neutral |
train_94024 | ROUGE is widely used to automatically evaluate summarization systems. | we build upon this line of work and show that cosine similarity between the reference and summary embedding works well, and better than ROUGE on recent datasets, for comparing single document summarization systems. | neutral |
train_94025 | Another difference between DUC'01 and DUC'02 is that the number of evaluated articles for each system is considerably larger than that in DUC'01. | rOUGE precision is a more suitable metric for evaluating the extent of unnecessary content and verbosity. | neutral |
train_94026 | Additionally, we show the example of summaries generated by our model and baseline model in Table 3. | abstractive summarization (Rush et al., 2015;Nallapati et al., 2016;See et al., 2017;Chen and Bansal, 2018) generates summaries word-by-word after digesting the main content of the document. | neutral |
train_94027 | In this study, we investigated several aspects of incorporating pseudo data for GEC. | aspect (i): multiple methods for generating pseudo data D p are available (Section 3). | neutral |
train_94028 | Our MTM has higher F1 than other models in both intra-and cross-lingual evaluations, while discovering coherent topics and meaningful topic links. | joint inference also allows insights from high-resource languages to uncover low-resource language patterns. | neutral |
train_94029 | On the relation extraction side, the recently developed deep learning approaches have also proved useful, by successfully combining information about the meaning of the arguments, their context and their grammatical connections (Zeng et al., 2014;Liu et al., 2015;dos Santos et al., 2015). | we further analyze the performance of the relation classifier in two scenarios: i) when the sentence includes information regarding a single person; and ii) when a given sentence includes information about two or more different persons. | neutral |
train_94030 | five times more simple sentences than complex ones. | we have presented a dataset of natural language answers to a questionnaire designed to obtain a patient's medical history. | neutral |
train_94031 | Table 6: The precision, recall, and F-score of our models for entity and relation classification. | aMT's complex sentences are much denser in information than the complex sentences in the GCS data -14.2 vs 6.1 relations per sentence. | neutral |
train_94032 | While most existing systems provide semantically correct responses given goals to present, they struggle to match the variation and fluency in the human language. | for example, it outperforms Slug (Juraska et al., 2018), the best model in E2E-NLG competition, by 2.2% in BLEU score. | neutral |
train_94033 | The dimension 2 (dim2) usually generates longer and informative responses. | following the techniques described above, ELBO then has the form: where and the KL-divergence term can be derived as * : where α, β are parameters of Dirichlet distribution q φ (z|x) and p(z) respectively, K is the dimension of z, and ψ is the Digamma function. | neutral |
train_94034 | Recurrent neural networks (RNNs) (Bengio et al., 2003) have achieved great success on many natural language processing tasks. | dir-VHREd is significantly better than others. | neutral |
train_94035 | At the scarce data levels (10% and 30%) , the pseudo-labelling model is not producing pseudo training points that help improve DST predictions. | the first is the DSt joint goal accuracy, defined as an average joint accuracy over all slots per turn (Williams et al., 2016). | neutral |
train_94036 | Since FT-AttRNN is only trained on new data, it is oftentimes overwhelmed by new knowledge and results in forgetting the old knowledge. | fT-Cp-AttRNN can be treated as a naive solution to achieve both goals by almost duplicating the model again and again. | neutral |
train_94037 | Each turker is given 100 utterances from training set in one dataset; as well as the number of groups G for this dataset. | the performance of FT-Lr-AttRNN is even lower than FT-AttRNN most of the time. | neutral |
train_94038 | (2015) proposed adding contextual signals to the joint IC-SL task. | they were followed by Shi et al. | neutral |
train_94039 | Following (Wu et al., 2017), we employ R n @ks, mean average precision (MAP), mean reciprocal rank (MRR) (Voorhees et al., 1999) and precision at position 1 (P@1) as evaluation metrics. | we do see improvement on the two data sets. | neutral |
train_94040 | This makes the prediction of those slots more challenging for the system. | gCAS's act-slot pair performance is far from perfect. | neutral |
train_94041 | In-scope accuracy suffers for all models using the undersampling scheme when compared to training on the full dataset using the oos-train and oos-threshold approaches shown in Table 2. | our dataset also covers 150 intent classes over 10 domains, capturing the breadth that a production taskoriented agent must handle. | neutral |
train_94042 | Similar experiments were conducted for evaluating unknown intent discovery models in Lin and Xu (2019). | out-of-scope recall improves compared to oos-train on Full but not OOS+. | neutral |
train_94043 | Suppose the original utterance is "What is the offer? | we show that this input-aware model performs on par with the input-agnostic one (where the controller outputs do not depend on the source inputs), and may need more epochs to expose the model to the many diverse policies it generates. | neutral |
train_94044 | (2018); Alaux et al. | we can see the number of auxiliary languages as a mean to qualitatively measure closeness of languages, by looking on the average number of auxiliary languages used for each source-target pair. | neutral |
train_94045 | The RL time is the average time needed to obtain the three RL models in Figure 2. generated action sequences for simultaneous translation, which leads to faster training and better policies than previous methods, without the need to retrain the underlying NMT model. | wait-training uses 8 GPUs, while others use 1 GPU. | neutral |
train_94046 | fore all WRITE actions usually has large latency. | in the following experiments, we report results with 4 http://www.statmt.org/wmt15/translation-task.html = 50 and = 3. | neutral |
train_94047 | The multilingual version of BERT (which is trained on Wikipedia articles from 100 languages and equipped with a 110,000 shared wordpiece vocabulary) has also demonstrated the ability to perform 'zeroresource' cross-lingual classification on the XNLI dataset (Conneau et al., 2018). | if language-adversarial training encourages language-independent features, then the English documents and their translations should be close in the embedding space. | neutral |
train_94048 | For +RPEHead (or +MPRHead), with the increasing in the dimension of d r , BLEU scores are gradually increasing, but BLEU scores begin to decrease when d r is more than 320 (or 256). | we propose a recurrent positional embedding approach based on part of word embedding instead of numerical indices of words to capture order dependencies between words in a sentence. | neutral |
train_94049 | In the Transformer network architecture, positional embeddings are used to encode order dependencies into the input representation. | (5), the final sentence representation S M is represented as: Note that for both RPEHead and MPRHead models, the RPEs will be jointly learned with the existing Transformer architecture. | neutral |
train_94050 | Indeed, its curves in the two figures show either a stable (It-En) or even a slightly downward trend (De-En) that confirms the known limitations of NMT to preserve sentiment traits of the source sentences (see Section 1). | to condition the generic models and obtain the Reinforce and MO-Reinforce systems, we use the polarity-labeled German/Italian tweets in our de-velopment sets, with reward as defined in Section 2. | neutral |
train_94051 | The most popular such method is known as byte-pair encoding (BPE), which merges iteratively the most frequent pair of symbols. | for ConvS2S, we used 4 layers and an embedding size of 256, dropout of 0.2, an initial step-size of 0.25. | neutral |
train_94052 | 3 A popular inference-time approach is constrained decoding (Anderson et al., 2016;Hokamp and Liu, 2017;Hasler et al., 2018;Post and Vilar, 2018), which modifies beam search to require that user-specified words or phrases to be present in the output hypotheses. | table 3 summarizes key exploratory results for the baseline, Ct, and CD approaches on the development set. | neutral |
train_94053 | (2016) use translation probabilities from a lexicon (like SMT phrase tables) in conjunction with NMT probabilities. | with statistical machine translation (SMT; Koehn et al., 2007), where there are several established methods of incorporating external knowledge, 1 recent work is still examining how best to incorporate bilingual lexicons into NMT systems. | neutral |
train_94054 | Semi-supervised approaches for NMT are often based on automatically creating pseudoparallel sentences through methods such as backtranslation (Irvine and Callison-Burch, 2013;Sennrich et al., 2016) or adding an auxiliary autoencoding task on monolingual data (Cheng et al., 2016;Currey et al., 2017). | we duplicate the number of parallel sentences by 5 times in the training data augmented with the reordered pairs. | neutral |
train_94055 | The first method is to use a fixed-latency policy, such as the wait-k policy . | world bank plans to remit and reduce debts of po-or-est countries Figure 5: Chinese-to-English example on dev set. | neutral |
train_94056 | Therefore, absolute position (Vaswani et al., 2017) or relative position (Shaw et al., 2018) are generally used to capture the sequential order of words in the sentence. | we conduct probing tasks 3 (Conneau et al., 2018) to evaluate structure knowledge embedded in the encoder output in the variations of the Base model that are trained on En⇒De translation task. | neutral |
train_94057 | Results for #1 demonstrate that NMT systems trained on 18k parallel sentences can achieve only poor results for Bn and Ja, whereas reasonably high BLEU scores (> 20) are achieved for the other target languages. | we can safely conclude that the gain is due to multilingualism. | neutral |
train_94058 | (2018a) that Transformers perform WSD better than RNNS2Ss. | gT is mainly funded by the Chinese Scholarship Council (NO. | neutral |
train_94059 | Ghader and Monz (2017) have shown that nouns have different attention distributions from other word types. | neural machine translation (nMT) models (Kalchbrenner and Blunsom, 2013;Sutskever et al., 2014;Cho et al., 2014;Bahdanau et al., 2015;Luong et al., 2015) have access to the whole source sentence for the prediction of each word, which intuitively allows them to perform word sense disambiguation (WSD) better than previous phrase-based methods, and Rios et al. | neutral |
train_94060 | It is also noticed from the table that 'Cascade' multi-task model achieves better performance than 'Noncascade' model, which implies that the unimpeded information from morpheme processing to POS tagging helps predict accurate POS tags. | it is inferred from the error analysis that developing a more accurate morpheme processing model is required to improve the performance of the proposed Korean morphological analyzer. | neutral |
train_94061 | Such an approach reduces the number of POS tags and is helpful in segmenting eojeols into morphemes precisely. | in Korean, n i=1 where is a concatenate operator. | neutral |
train_94062 | Most languages, especially those that are severely impacted by diacritics, rarely have such amounts of diacritized datasets. | the performance of Zalmout and Habash (2017)'s model falls in between BiLStM and A-tCN. | neutral |
train_94063 | All in all, the decision whether to use A-TCN or BiLSTM is a tradeoff between accuracy and efficiency. | orife (2018) uses seq2seq modeling which generate diacritized sentences that are not of the same length as the input and can generate words not present in the original sentence (hallucinations). | neutral |
train_94064 | Until recently, few approaches based on crosslingual joint training and cross-lingual supervised pre-training for DNNs have been proposed. | to the best of our knowledge, data selection has not yet been explored for DNNbased CLTL in SLU. | neutral |
train_94065 | The VSE baselines in the first five rows are trained with English and German descriptions independently. | recent advancement in VSE models explores methods to enrich the English-Image corpora. | neutral |
train_94066 | In particular, we (i) leverage ultra-fine-grained semantic labels (e.g., "golden gate bridge" vs. "bridge") for featurization (Juan et al., 2019); and, (ii) focus on scenarios in which object detection modules trained on Visual Genome (VG) (Krishna et al., 2017) are applied to out-of-domain images: image captioning on the Conceptual Captions dataset (Sharma et al., 2018), and VQA on the VizWiz dataset (Gurari et al., 2018). | moreover, this 1.8% improvement is a weighted average across answer types; the per-answer-type numbers indicate that 8 evalai.cloudcv.org/web/challenges/challengepage/102/overview our approach achieves even better improvements on two of the more difficult answer types, "number" (+4.5%) and "rest" (+3.3%). | neutral |
train_94067 | Object detection plays an important role in current solutions to vision and language tasks like image captioning and visual question answering. | most notably, the domain of images can be very different from Visual Genome, unlike in popular benchmarks such as mSCOCO (Lin et al., 2014). | neutral |
train_94068 | Nevertheless, these features are considerably complementary. | nevertheless, these features are considerably complementary. | neutral |
train_94069 | Prior metrics (e.g., BLEU4) only provide an overall quality score, which is difficult to infer specific description mistakes in a caption. | rule-based metrics (see Table 1). | neutral |
train_94070 | Prior metrics (e.g., BLEU4) only provide an overall quality score, which is difficult to infer specific description mistakes in a caption. | we measure relevance using standard cosine similarity. | neutral |
train_94071 | Comparison Results are shown in Tab. | we propose weakly Supervised Language Localization Networks (wSLLN) which requires only video-sentence pairs during training with no information of where the activities temporally occur. | neutral |
train_94072 | To our knowledge, this kind of approaches have not been applied to syntactic parsing. | we use a hard-sharing architecture where the BILSTMs are fully shared and followed by an independent feedforward layer (followed by a softmax) to predict the corresponding label for each of the tasks. | neutral |
train_94073 | 4 Multitask learning We previously argued that gaze-annotated data is unlikely to be available at inference time. | this parser obtains similar results to competitive transition-and graph-based parsers such as BISt (Kiperwasser and Goldberg, 2016) and can be taken as a strong baseline to test the effect of eye-movement data for dependency parsing. | neutral |
train_94074 | If o i is positive, then the head of w i is the o i -th token to the right that has the part-of-speech tag p i . | we denote LSTM θ (w) as the abstraction of a long short-term memory network, that processes an input sentence w=[w 1 , ..., w n ] to generate an output of hidden representations a BILSTM can be seen as BILSTM In this work we will stack two layers of BILSTMs before decoding the output. | neutral |
train_94075 | ELMo CI uses only the context-insensitive character embeddings produced by ELMo. | a model trained with 25 codes 6 achieves greater than 60% recall at labeling the ground truth trees for WSJ-10, indicating the codes represent some syntactic patterns although not as effectively as when using K-means. | neutral |
train_94076 | This suggests that 1,000 utterances are not enough to achieve optimal results. | the baseline model tags only three items in our test data as reparandum, while the finetuned model identifies thirty-one such instances. | neutral |
train_94077 | Training only using 1,000 annotated in-domain ConvBank data reaches 75.65% and 58.29% accuracy on unlabeled and labeled edges respectively. | models trained on datasets based on news articles and web data do not perform well on spoken human-machine dialog, and currently available annotation schemes do not adapt well to dialog data. | neutral |
train_94078 | First, we reinterpret the ad-hoc tree score in previous spanbased parsing work (Stern et al., 2017;Gaddy et al., 2018;Kitaev and Klein, 2018) as a joint distribution over the labels. | global decoding methods (Durrett and Klein, 2015;Lee et al., 2016) can incorporate non-local features and decode a tree with the maximum global score. | neutral |
train_94079 | With capsule networks, the words in a context source sentence is taken as lowlevel capsules and the information of different perspectives is treated as high-level capsules. | (2017) to build the partwhole relationship through the iterative routing procedure. | neutral |
train_94080 | In this method, the context sentences are considered in the form of hierarchical attentions retrieved by the current generation. | we take the sentence-aligned document-delimited News Commentary v11 corpus 2 as our training set. | neutral |
train_94081 | We found that the growth of Meteor and BLEU scores are opposite. | we choose three historical sentences as our final setting. | neutral |
train_94082 | Results on the sequence-level Transformer and our DocNMTs show that the captured contextual features provide helpful semantic information for enhancing the translation quality. | the choice of utilizing how many historical sentences is a trade-off between both scores. | neutral |
train_94083 | However, fine-tuning requires training and maintaining a separate model for every language, for every domain. | bLEU scores are computed on the checkpoint with the best validation performance, on true-cased output and references. | neutral |
train_94084 | While LHUC is competitive with full fine-tuning and light-weight adapters for extremely small fractions, the lack of capacity limits the applicability of the approach when larger quantities of adaptation data are available. | experiments on domain adaptation demonstrate that our proposed approach is on par with full fine-tuning on various domains, dataset sizes and model capacities. | neutral |
train_94085 | As the state-of-the-art Transformer model is welldesigned in the model structure, an analysis of the integration in Transformer is thus necessary. | and the context information assigned to the j th word in the sentence S i is detailed as: where α i,j is the attention weight assigned to the word and d ctx i,j is the corresponding context information distributed to the word. | neutral |
train_94086 | Experimental results on two Chinese reading comprehension dataset CMRC 2018 (Cui et al., 2019) and DRCD (Shao et al., 2018) show that by utilizing English resources could substantially improve system performance and the proposed Dual BERT achieves state-of-the-art performances on both datasets, and even surpass human performance on some metrics. | the settings of the proposed approaches are listed below in detail. | neutral |
train_94087 | Specifically, we first tokenize the question and Figure 2: An illustration of MTMSN architecture. | the passage using the WordPiece vocabulary (Wu et al., 2016), and then generate the input sequence by concatenating a [CLS] token, the tokenized question, a [SEP] token, the tokenized passage, and a final [SEP] token. | neutral |
train_94088 | This paper considers the reading comprehension task in which some discrete-reasoning abilities are needed to correctly answer questions. | we also provide an in-depth ablation study to show the effectiveness of our proposed methods, analyze performance breakdown by different answer types, and give some qualitative examples as well as error analysis. | neutral |
train_94089 | The impressive performance on the negation type confirms our judgement, and suggests that the model is able to find most of negation operations. | various kinds of prediction strategies are required to successfully find the answers. | neutral |
train_94090 | For instance, they could not successfully transfer models between technical and other nontechnical domains. | hyperparameters, and evaluation measures. | neutral |
train_94091 | A counterexample is Quora, which only contains question titles. | this can be beneficial for a large number of cQA forums that do not contain enough annotated duplicate questions or question-answer pairs to use existing training methods. | neutral |
train_94092 | MQAN also includes pointer-generator networks (See et al., 2017), which allow it to copy tokens from the input text depending on the attention distribution of an earlier layer. | if not otherwise noted, we use the same number of training instances as in the original training splits with duplicates. | neutral |
train_94093 | There are several variants of GCCA (Kettenring, 1971); we follow Bach and Jordan (2002) and solve a multiview version of Equation 1: For stability, we add τ σ j I j to every covariance matrix Σ j,j , where τ is a hyperparameter (here: τ = 0.1), I j is the identity matrix and σ j is the average variance of x j . | 1 To overcome this problem, two avenues have been explored: The first is supervised domainadversarial training on a label-rich source forum (Shah et al., 2018), which works best when 10 0 10 1 10 2 10 3 10 4 10 5 10 6 #D #Q source and target domains are related. | neutral |
train_94094 | Recall that we restricted the choice of source domains to the 12 CQADupStack forums. | the second is unsupervised DQD via representation learning (Charlet and Damnati, 2017;Lau and Baldwin, 2016), which requires only unlabeled questions. | neutral |
train_94095 | We are grateful to the anonymous reviewers for their helpful feedback. | (1) Compression We train this model using the Adam optimizer (Kingma and Ba, 2014) and binary cross-entropy loss, with accuracy as the training metric. | neutral |
train_94096 | This is because our approaches are centered around word usage. | to verify whether human evaluators are in agreement with our characterization model, we conducted a user study using Mturk (Amazon, 2005). | neutral |
train_94097 | By choosing perceived offensiveness and discrepancies in perception as proxy measures for MAS, we hope to shift studies of MAS from a judgment call on an individual's intent to a description of how it affects its targets, which can be described as an objective measure of human behavior that can be validated through study. | this moderate agreement is on-par with other difficult annotation tasks, such as those for connotation frames (Rashkin et al., 2016), which had 0.52 percentage agreement for rating the polarity of the sentence towards a target, dimensions of social relationships (Rashid and Blanco, 2017), which had κ val- ues as low as 0.59, and Word Sense Disambiguation (Passonneau et al., 2012), which reported α values for some words below 0.30 at determining meaning. | neutral |
train_94098 | On the other hand, methods similar to the proposed model could be developed for such purposes and not shared with the broader community. | (b) Embeddings obtained using AM loss. | neutral |
train_94099 | For the purpose of training the embedding f θ we compose it with a discriminative classifier g φ : R D → R Y with parameters φ predicting the author of an episode, where Y is the number of authors in the training set. | prior work in this area has primarily focused on classifying an author as a member of a closed and typically small set of authors (Stamatatos, 2009;Schwartz et al., 2013;Shrestha et al., 2017). | neutral |
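Below is a minimal sketch of loading and inspecting one row of this split with the Hugging Face `datasets` library. The repository ID `user/dataset-name` is a placeholder, since the actual ID is not shown in this preview, and the comments simply mirror the column ranges in the header above.

```python
# Minimal sketch, assuming the Hugging Face `datasets` library and a
# placeholder repository ID; substitute the real dataset ID from this page.
from datasets import load_dataset

ds = load_dataset("user/dataset-name", split="train")  # hypothetical repo ID

# Each row carries the four fields shown in the preview table above.
row = ds[0]
print(row["id"])         # e.g. "train_94000"
print(row["sentence1"])  # string, roughly 6 to 1,270 characters
print(row["sentence2"])  # string, roughly 6 to 926 characters
print(row["label"])      # one of 4 label classes, e.g. "neutral"
```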