id (stringlengths 7-12) | sentence1 (stringlengths 6-1.27k) | sentence2 (stringlengths 6-926) | label (stringclasses, 4 values)
---|---|---|---|
train_94700 | For a given change B, the input to condition inference is the set of pairs ⟨u, k⟩, where u is a phone trigram in some underlying form and the label k can be positive (⊤), negative (⊥), or unknown (?). | by contrasting examples in the first column, we infer that the insertion happens when the suffix /z/ occurs after a strident (like /s/ in /mIs/); otherwise, /z/ and /d/ are devoiced whenever they occur after a voiceless obstruent (like /p/ in /zIp/). | neutral
train_94701 | baselines on constructed Chinese character variation graph to get graph based character embeddings. | the formulations of the gate function are listed as follows: [equations omitted], and the output can be: [equation omitted], where ω is the scale parameter, and s_l is a weight parameter for the combination of each layer, which can be learned through the training process. | neutral
train_94702 | Given the context of mention m, we form its representation from involved contextualized word vectors with a mention-aware attention mechanism, where C is the number of contextual words and a^c_i is defined as a^c_i = Softmax(e^c_i), where ⊕ represents concatenation, and v^c ∈ R^{d_a} and W^c ∈ R^{d_a×(2d_r+1)} are trainable parameters. | our Model* is a variant where we replace ELMo embeddings with Bert (large cased model) contextualized word representations. | neutral
train_94703 | Although they occur in the same sentence, our model is able to focus on different context words for different mentions with the mention-aware attention mechanism. | we calculate the attention for context words in a mention-aware manner, allowing the model to focus on different parts of the sentence for different mentions. | neutral |
train_94704 | (2018c) introduced a new dataset called Multi-Axis Temporal RElations for Start-points (MATRES). | the proposed Siamese network (Fig. | neutral |
train_94705 | Our model is trained with Adam (Kingma and Ba, 2014). | the underlying assumptions behind most NER systems are that an entity should contain a contiguous sequence of words and should not overlap with each other. | neutral |
train_94706 | The underlying assumptions behind most NER systems are that an entity should contain a contiguous sequence of words and should not overlap with each other. | • Empirical results show that our system achieves a significant improvement compared with previous methods, even in the absence of external features that previous methods used. | neutral |
train_94707 | Since their method directly predicts a relation label for each surface pattern, it is more robust to the sparsity of surface patterns among a specific higher arity entity tuple. | player (Andre Johnson, Texans, 2009 season). | neutral |
train_94708 | Here, all entities in each candidate tuple (e_1, ..., e_n) are mentioned in the same text section T in a given set of documents. | these methods train the neural networks in a supervised manner using distant supervision (Mintz et al., 2009) and, therefore, may suffer from the lack of sufficient positive labels when a well-populated knowledge base is not available. | neutral
train_94709 | After that, u is used to compute the probabilities of it being a valid name for each type: where s is the sigmoid function and the k-th dimension of O^G_u indicates the probability of u being a valid mention of type y_k. | by capturing mention regularity entailing gazetteers, the region-based models can be enhanced with more accurate name knowledge, and thereby the need of fully-annotated training data can be reduced. | neutral
train_94710 | To this end, this paper proposes Gazetteer-Enhanced Attentive Neural Networks (GEANN), whose architecture is shown in Figure 1. | for example, a region encoder should know "George Washington" is a valid PER name because "George" is a common first name and "Washington" is a common last name. | neutral |
train_94711 | Thus, we used a random sample of 105 articles (for each year 5). | another goal is to train a neural network with a larger and better balanced training set to use the model to study a larger corpus. | neutral |
train_94712 | From a humanities perspective, a VA constitutes an interesting phenomenon of enculturation (Holmqvist and Płuciennik, 2010) that deserves to be studied more in-depth, based on larger corpora. | attributing a particular property to a person by naming another person, who is typically well-known for the respective property, is called a Vossian antonomasia (VA). | neutral
train_94713 | The best approach, using a BLSTM, reached a precision of 86.9% and a recall of 85.3%. | we can only determine precision and recall of our new approaches based on the baseline data set described in Section 3. | neutral |
train_94714 | Then, the model calculates the j-th attention vector ŝ_j using the context vector o_j: where W_e is a weight matrix and tanh is the hyperbolic tangent function. | this section describes the NER model using a character-level NLM (Akbik et al., 2018), which is our baseline comparison with our approach. | neutral
train_94715 | For r ∈ {r_1, ..., r_N}, o_r is calculated by averaging. [Table 2: Accuracies (%) on few-shot DA.] | they are from the same domain, yet in a real-world scenario, we might train models on one domain and perform few-shot learning on a different one. | neutral
train_94716 | These results contain two types of errors: idiosyncratic casing in the gold data and failures of the truecaser. | prior solutions have included models trained on lowercase text, or models that automatically recover capitalization from lowercase text, known as truecasing. | neutral |
train_94717 | (2003) proposed a statistical, wordlevel, language-modeling based method for truecasing, and experimented on several downstream tasks, including NER. | experiment 5 shows that if the training data is also truecased, then the performance is good, especially in situations where the test data is known to contain no case information. | neutral |
train_94718 | Fact is a proposition describing objective facts that can be verified using objective evidence and therefore captures the evidential facts in persuasions: Empire Theatres in Canada has a "Reel Babies" showing for certain movies. | revealing the role of EUs no significance is observed in a positive vs. negative case in Table 2. | neutral |
train_94719 | From the figure, we can see that the test accuracy drops from around 0.8 to 0.5 when the noise rate increases from 0 to 0.5, but our NETAB outperforms CNN. | following (Kim, 2014), we also randomly select 10% of the test data for validation to check the model during training. | neutral
train_94720 | (2019) proposed a joint model that determines stance and sentiment simultaneously. | finally, we show that the proposed AT-JSS-Lex model achieves remarkable improvements in performance over strong baselines and prior works on the SemEval-2016 stance detection dataset. | neutral |
train_94721 | Based on massive amounts of data, recent pretrained contextual representation models have made significant strides in advancing a number of different English NLP tasks. | ruder and Plank (2018) propose a novel multitask tri-training method that reduces the time and space complexity of classic tri-training for sentiment analysis. | neutral |
train_94722 | (2017), researchers have developed a variety of methods for learning the structured/unstructured parts (i.e., style/content) of latent representations. | we use 512 hidden states and 128-dimensional word vectors. | neutral |
train_94723 | This shows how, in contrast to the state of the art (Kozlowski et al., 2018;Garg et al., 2018;Lauretig, 2019;Zhao et al., 2018), informative priors allow easy integration with other, more complex, probabilistic models. | lack of interpretability and the unsupervised nature of word embeddings have limited their use within computational social science and digital humanities. | neutral |
train_94724 | This conclusion is based on the observation that a diagnostic classifier trained over the supposedly debiased data representations could still predict gender, age and race above chance level in their experimental setup. | the mean of sentences extracted from the model trained with adversarial training is closest to the neutral value 5, but independent t-tests show that the differences between all classes are insignificant (p > 0.05). | neutral
train_94725 | We also construct an artificial dataset from PAN16 TWIT where the main task labels are preserved but the demographic label is randomly shuffled (PAN16 RAND), allowing us to run experiments with no PREVALENT or SAMPLE-SPECIFIC gender correlations, only ACCIDENTAL. | the results show that humans had difficulties determining the gender of the author. | neutral |
train_94726 | • A novel deep learning framework augmented with socio-linguistic features to detect sarcasm targets in sarcastic texts. | to the best of our knowledge, only Joshi et al. | neutral |
train_94727 | For instance, in aspect based sentiment analysis, which deals with the identification of sentiment expressed toward different aspects or dimensions of the entities present in the text, it is very important to identify the sarcasm targets and sentiments toward them in the texts. | (2017) present a compilation of past works including the datasets, approaches, issues and trends in automatic sarcasm detection. | neutral |
train_94728 | (2011) and the WNBA and NBA basketball commentaries of Aull and Brown (2013), but we emphasize that FOOTBALL is the first large-scale sports commentary corpus annotated for race. | such prior scholarship forms conclusions from small datasets and subjective manual coding of race-specific language. | neutral
train_94729 | It works both at a character-and a word-level, thereby effectively handling incomplete or missing words. | the predicted sequence alters accordingly to ἀρτεμιδώρ, thereby illustrating the importance of context in the prediction process. | neutral |
train_94730 | To simplify comparisons, all AG accentuation was discarded, as inputting accents was time-consuming for the human evaluations described in the following paragraph. | we provide a set of the Top 20 predictions decoded using beam search. | neutral
train_94731 | While Yang et al. | to show the effectiveness of the lexical centrality of our tensor embedding method, we conduct an experiment on SemEval 2017 task 6B (Potash et al., 2017) consisting of tweeted responses to specific thematic prompts generated as part of a TV show. | neutral
train_94732 | In this way, we can rank the degree of humor effectively via lexical centrality (Radev et al., 2015), namely, regarding the distance to the lexical center as an indicator of the degree of humor. | their system uses an n-gram language model trained on a 6.2GB subset of the News Commentary Corpus and the News Crawl Corpus. | neutral |
train_94733 | Adam (Kingma and Ba, 2014) is used for model optimization. | first, the interactions between words in news title are important for understanding the news. | neutral |
train_94734 | Massive news articles are generated everyday and it is impossible for users to read all news to find their interested content (Phelan et al., 2011). | in this paper, we propose a neural news recommendation approach with multi-head selfattention (NRMS). | neutral |
train_94735 | Family accounts for 42% of the dataset. | with the rapid growth of social media applications such as Facebook and Twitter, a significantly increasing number of individuals are using these social media public platforms to release humorous texts. | neutral |
train_94736 | We can find that sentences with high probability are not always highly rated by the human evaluations. | human evaluation results show that the proposed method can generate significantly more natural anagrams than baseline methods. | neutral |
train_94737 | It is also impractical if applied to the anagram generation task. | the generated anagrams contain natural sentences. | neutral |
train_94738 | Table 4 shows the results of trying to generate famous anagrams. | many anagram generation software and web services exist, including the Internet Anagram Server and the Anagram Artist. | neutral
train_94739 | In this setting, we want to incorporate the consistency between the claim (C) and perspective (P ) representations. | for example, given the claim "Make all museums free of charge" is opposed by the perspective "State funding should be used elsewhere". | neutral |
train_94740 | However, our model is not able to capture that the negation phrase 'do not require' opposes the claim. | there is an abundance of contentious claims on the Web including controversial statements from politicians, biased news reports, rumors, etc. | neutral |
train_94741 | Our consistency-aware model BERT CONS outperforms all the other baselines. | our model is not able to capture that the negation phrase 'do not require' opposes the claim. | neutral |
train_94742 | These text classification systems were chosen because both are commonly used in industry. | the classifiers are calibrated on the development set to a precision close to 0.90 and maximum recall. | neutral |
train_94743 | For a patient's length of ICU stay of T hours, we have time series observations x_t at each time step t (1-hour interval) measured by instruments, along with doctor's notes n_i recorded at irregular time stamps. | pre-trained word and sentence embeddings have also shown good results for sentence similarity tasks (Chen [Figure 1: Doctor notes complement measured physiological signals for better ICU management.] | neutral
train_94744 | Davis and Goadrich (2006) suggest AUCPR for imbalanced class problems. | we dropped all clinical notes which doesn't have any chart time associated and also dropped all the patients without any notes. | neutral |
train_94745 | Analysis : In Table 3, we analyze our models on Politics, the largest dataset. | the Instant Messaging systems (e.g., Slack) often require users to manually organize messages in threads. | neutral |
train_94746 | For Bulgarian, the best results are obtained by Pilot 0, followed by Pilot 1. | in the last row (e), we report the percentage of non-tying comparisons where Pilot 2 was judged better than Pilot 0, that is, "P2 better ignoring ties than P0 (%)" equals #(P2 better than P0) / (#(P2 better than P0) + #(P2 worse than P0)) × 100%. Figures 3 and 4 provide a graphical representation of Tables 10 and 11, respectively. | neutral
train_94747 | The statistics regarding the selected entries are shown in Table 6. | 1 BabelNet has been applied in many natural language processing tasks, such as multilingual lexicon extraction, crosslingual word-sense disambiguation, annotation, and information extraction, all with good performance (Elbedweihy et al., 2013;Jadidinejad, 2013;Navigli et al., 2013;Ehrmann et al., 2014;Moro et al., 2014). | neutral |
train_94748 | 1 BabelNet has been applied in many natural language processing tasks, such as multilingual lexicon extraction, crosslingual word-sense disambiguation, annotation, and information extraction, all with good performance (Elbedweihy et al., 2013;Jadidinejad, 2013;Navigli et al., 2013;Ehrmann et al., 2014;Moro et al., 2014). | babelNet is both a multilingual encyclopedic dictionary, with lexicographic and encyclopedic coverage of terms, as well as a semantic network comprising 14 million entries which connects concepts and named entities in a very large network of semantic relations. | neutral |
train_94749 | In (Saif et al., 2014) word contexts are adopted to generate sentiment orientation for words. | here, we compare the contribution of the DPLs with a well-known lexicon, i.e. | neutral |
train_94750 | Notice that the Best-System 11 here reported is measured over the full test set. | the approach is based on the possibility to acquire comparable representations (e.g. | neutral |
train_94751 | We compare the DPLs with another Italian polarity lexicon, called SENTIX in (Basile and Nissim, 2013). | sA deals with the problem of deciding whether a portion of a text, e.g. | neutral |
train_94752 | In our case, a topic corresponds to a layer of the multiplex network and the nodes represent the users (Boccaletti et al., 2014). | there are communities that are only present in some topics, such as communities 3, 7, 8, 11, 12 or 19. | neutral |
train_94753 | In (Hatzivassiloglou and McKeown, 1997) the authors model the corpus as a graph of adjectives joined by conjunctions. | to capture in-domain word polarities smaller domain focused dataset might work better (García-Pablos et al., 2015). | neutral |
train_94754 | For this purpose, we employ leave-one-out within the languages, for which the value of the target feature is recorded in WALS. | in training, the dependent features are not used. | neutral |
train_94755 | We first show the estimation accuracy for each feature both for the trained classifier and the majority baseline in Figure 3. | we (i) regard one such language as a test instance and the remaining languages as training instances, (ii) represent both the training and test instances with the features other than the target feature, (iii) see whether the value of the target feature in the test instance is correctly estimated or not, (iv) iterate this process for all those languages to calculate the estimation accuracy. | neutral |
train_94756 | The goal of this paper is firstly to present a thorough analysis of the adequacy of currently used taggers for historical Dutch and secondly to explore methods for generating higher accuracy tags. | using a dictionary derived from a parallel corpus (which finds a translation for the words in the corpus, rather than merely respelling them) results in an accuracy of 0.91 for within-domain text, but does not generalise well to different domains (see Table 3). | neutral
train_94757 | p < 0.05) are bolded. | the corpus was built in the framework of an interdisciplinary study jointly carried out by computational linguistics and experimental pedagogists and aimed at tracking the development of written language competence over the years and students' background information. | neutral |
train_94758 | the average occurrences of these errors made by students born in Italy and abroad, we can claim that students born abroad make more errors than their mates in both the first and second year (see Table 8). | interestingly, the statistical distribution of some typologies of errors is correlated with the student background information we collected. | neutral |
train_94759 | This model was trained with all possible features available (including those derived from manual annotations) from each four turn snippet taken into account. | an example of a snippet provided for annotation is shown in Table 1. | neutral |
train_94760 | In all experiments, radial basis function kernels have been used. | the reported research is partly funded by the EU FP7 Metalogue project, under grant agreement number: 611073. | neutral
train_94761 | The Italian LUNA Human-Human Corpus ) is a collection 572 dialogues in the hardware/software help desk domain. | due to the greater drop in cross-domain evaluation of the 'legacy' models and better in-domain performance of the mapped ISO annotation, we conclude that the transfer of legacy annotation to the ISO standard is beneficial. | neutral |
train_94762 | We observe that half of the chains have only two mentions, and that roughly 5.7% of the chains gather 10 mentions or more. | coreference resolution on biomedical texts took its place as an independent task in the BioNLP field; see for instance the Protein/Gene coreference task at BioNLP 2011 (Nguyen et al., 2011). | neutral |
train_94763 | words (Word2vec rare) the baseline is significantly outperformed for one parameter set, while the performance for the other parameter sets confirms this tendency. | as this study also indicates, this varying constitution of test sets may lead to very different results testing a model on them. | neutral |
train_94764 | The sieves are described next: 1. | note that this evaluation considers solely named entities (whereas our adapted system produces mixed chains, our gold reference contains only named entities). | neutral |
train_94765 | Clark and Manning (2015), Durrett and Klein (2014) or Björkelund and Kuhn (2014). | ziering (2011) improved the scores of SU-CRE by integrating linguistic features. | neutral |
train_94766 | In (Dakwale et al., 2012) scheme there is no provision to mark a relation between mentions of a chain, while MUC scheme 3 [1] has provision to mark limited relation types between mentions. | when a pronoun refers to time or time referring/representing an event or a clause in given discourse, for that we decided to mark "Anaphora-T" relation (T stands for Temporal) between those two mentions. | neutral |
train_94767 | Most of the variables useful for open-domain pronoun resolution are irrelevant here, particularly gender, person, and animacy, since the entities and events mentioned are invariably referred to as neuter, 3rd person, and inanimate in English (it, its, them, etc.). | finally, domain general systems are able to make assumptions that do not hold in this domain. | neutral
train_94768 | This violates the system's general assumption that a full antecedent will precede the anaphor, and by several paragraphs. | for example, a phosphorylated ASPP2 protein matches the ASPP2 and the phosphorylated protein but not the activated ASPP2. | neutral |
train_94769 | Moreover, although RoR-siRNA alone was able to increase the p53 level, it did not cause p53 phosphorylation or acetylation. | linking full mentions as referring to the same real-world entity constrains later sieves and aids in assembly. | neutral |
train_94770 | Error categories should be adapted to be more user-oriented than research-oriented: the number of categories should be limited, in order to focus on central error types only; the phrasing of categories should be changed in order to be usable by trainees with a less extensive knowledge of syntactic concepts and jargon; remediation possibilities should be systematically included and diversified in order to provide a clearer illustration of error types. | #NAME? | neutral |
train_94771 | • not to correct author's mistakes. | in the case of 45 students, more than one essay is available which could provide interesting material for learner profiling. | neutral |
train_94772 | SW1203 contains in total 144 essays, where 35 students have written all the 3 essays. | annotating learner data manually is an extremely time-consuming and costly enterprise. | neutral |
train_94773 | Alderson (2007) agrees with Little to say that "the methodologies being used [to compile these descriptions] are unclear or suspect". | as was the case for the coverage approach of Laufer and Ravenhorst-Kalovski (2010), this list only describes a global learning goal and does not contain any indications for appropriateness at different levels of learner proficiency. | neutral |
train_94774 | In addition, many existing libraries define a complex class hierarchy, making it difficult for some users to use or adapt the modules. | most of these libraries are not designed specifically for NLP tasks. | neutral |
train_94775 | This could be attributed to the fact that the rule to transform the FLELex distributions into a single level (i.e. | as for the lexical words, the model was less accurate, having correctly classified between 81.1% and 91.3% of them. | neutral |
train_94776 | the A1 level, which was no different in the 51-text sample (Figure 3). | most studies on word substitution have relied upon the combined use of a synonym database and a strategy to rank substitution candidates by difficulty. | neutral
train_94777 | We can thus safely state that a graded lexical resource such as FLELex could be used to correctly distinguish between the complexity of two synonyms and hence to automatically substitute complex words with their easier synonyms observed in the resource. | we would like to thank the study participants for the time they spent giving us an essential feedback on their lexical competence, which enabled us to gain some valuable insights. | neutral |
train_94778 | All learners signed an agreement of collecting their data to be used for research purpose in the lab. | in many cases, the human learners were skipping the conditional phrase of the utterance of the robot and gave a response by mimicking the part of the sentence that gives a short answer. | neutral |
train_94779 | It is relatively complex compared to traditional evaluation schemes, by involving an interplay between the development of systems to be evaluated and of oracles, and can be expected to need a few rounds of evaluation to reach maturity. | this new evaluation protocol for adaptive systems is not only expected to drive progress for such systems, but also to pave the way for a specialisation of actors along the value chain of their technological development. | neutral
train_94780 | In order to limit biases due to oracles with poor performance, each oracle should be itself evaluated and given more or less weight in the abovementioned average depending on its performance. | this can also be seen as a richness and an opportunity to be representative of real usage. | neutral |
train_94781 | Conversely, if R_Comp is low, little is likely to be gained by taking the union of system outputs. | in ablation, it is important to see the differences in system behaviours. | neutral
train_94782 | Slightly harder is the case of cheating precision. | 1 This is a harmonic weighted mean of precision and recall. | neutral |
train_94783 | When evaluating using these metrics, focus is given to true positives, false positives and false negatives. | this makes the evaluation sets fairly skewed. | neutral |
train_94784 | We split the source corpora at the token level, assigning the same number of tokens to each KSC. | within each KSC set, the number of words in the corpora varies between 111k and 114k. | neutral |
train_94785 | All three platforms were developed and have grown around the data that, sometimes by little more than coincidence, happened to be around at the time of development at the respective sites. | participants were also asked about their knowledge and experience in different areas directly relevant to working with oral corpora. | neutral |
train_94786 | The resulting picture is very diverse: interplatform comparisons (e.g. | however, a noticeable proportion of users revealed that they were either not familiar with the respective functions of the platforms or that they found them difficult to use. | neutral |
train_94787 | There are 3 labels (begin, inside, outside). | lastly, these embeddings outperform CSLM for all tasks. | neutral
train_94788 | These embeddings achieve respectively 71.4% and 70.7% of overall accuracy. | in that work, we have shown that the combination with auto-encoders yields significant improvement on the ASR error detection task. | neutral |
train_94789 | The entire Odin framework is available as part of the Processors NLP library. | from the languages that support syntax, Stanford's Tregex matches patterns over constituency trees (Levy and Andrew, 2006). | neutral |
train_94790 | On the other hand, the surface-based grammar misses the ubiquitination event involving "TRAF6" (and the negative regulation of this ubiquitination), because the last two tokens of the sentence are not explicitly handled by the rules. | for brevity, we omit examples with explicit taxonomies in this section. | neutral |
train_94791 | After a manual study of 500 "borderline" documents, a quality threshold of 91% has been experimentally set. | in order to do so, we used a slightly modified version of the reliability computation algorithm presented in section 8. | neutral |
train_94792 | The metadata record comprises the corpus name, year, title and authors. | the number of algorithms is 21 instead of 25. | neutral |
train_94793 | The motivation for developing the lexicon is threefold: firstly, the lexicon is a [Figure 4: Learning curves for the TOEFL, BLESS, and SimLex-999 tests for the three different models (RI-dir-w-e-c, CBoW and PMI) using Wikipedia as data.] | an artifact of only using addition for accumulating the distributional vectors in RI is that all vectors will tend towards the dominant eigenvector of the distributional space (this is the case also for standard DSMs built without RI). | neutral
train_94794 | the difference between these two data. | the difference between these two data. | neutral |
train_94795 | In this paper, we argue that the KTimeML has some limitations, and propose a revised version of the KTimeML. | the domains of the Wikipedia documents are personage, music, university, and history. | neutral |
train_94796 | println (t); } The class "TuplesDb" also has public methods to express many kinds of queries. | the hypernymy relations extraction consists of two phases. | neutral
train_94797 | They were asked to determine the relevance on a three-point scale: definitely relevant, possibly relevant and not relevant. (Footnote 8: http://qt21.metashare.ilsp.gr/repository/browse/qtlpenglish-greek-corpus-for-the-medicaldomain/665f3832a93211e3b7d800155dbc020119068d540) | the average precision was also higher than the average recall for both tools. | neutral
train_94798 | There are also connections with research on health and medicine (cancer development, brain development, lymphocyte development, vaccine development). | the #Likelihood method (329 occurrences) measures the associative strength of terms thus enhancing the exclusive links with other terms, while the #Cosine similarity method (934 occurrences) analyses the similarity on the basis of the occurrences they have in common. | neutral |
train_94799 | Moreover, "The Commission shall make available the information communicated by Member States to the European Parliament and the Council and shall ensure that it is also available to consumers and suppliers who request it." | as a next step, we will concentrate on the most significant extension of our linguistic coverage, and start with the inclusion of the surface syntactic representations of Duty elements by means of subcategorised finite and infinite clauses as noted in section 4.2. | neutral |
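
For readers who want to work with these rows programmatically, below is a minimal sketch of loading and inspecting a split like the one previewed above with the Hugging Face `datasets` library. The repository id `user/acl-nli-pairs` is a placeholder, since the actual dataset name is not shown on this page; the column names follow the table header above.

```python
# Minimal sketch, assuming a Hugging Face dataset with the schema shown above
# (id, sentence1, sentence2, label). The repository id is hypothetical.
from collections import Counter

from datasets import load_dataset

ds = load_dataset("user/acl-nli-pairs", split="train")  # placeholder id

# Inspect one row; each field is a string per the header's type info.
row = ds[0]
print(row["id"], row["label"])
print(row["sentence1"][:80])
print(row["sentence2"][:80])

# `label` is a string class with 4 possible values; check the distribution.
print(Counter(ds["label"]))
```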