id (string, length 7–12) | sentence1 (string, length 6–1.27k) | sentence2 (string, length 6–926) | label (4 classes) |
---|---|---|---|
train_99700 | Although many useful smoothing techniques are developed to estimate these unseen sequences, it is still important to make full use of contextual information in training data. | a number of useful smoothing techniques such as back-off (Katz,1987), Kneser-Ney (Kneser & Ney,1995), modified Kneser-Ney (Chen & Goodman,1999) have been developed to estimate the probabilities of unseen sequences. | neutral |
train_99701 | These can be separated into classes PACLIC 28 ! | here a c denotes the addressee of the current context. | neutral |
train_99702 | This language has a number of means for indicating politeness. | the particles can combine with other terms with honorific content, and need not match them perfectly in register, as will be shown in the next section. | neutral |
train_99703 | c. Nothing else is a macro-context. | wa-marked declaratives can be rendered into interrogatives without any problem as in (11). | neutral |
train_99704 | ⟨w3, w3⟩ ⟨w4, w4⟩ s1: If these extra world pairs are added to both blocks of the partition specified in the derived context, then the resulting relation does not obey mutual exclusivity, as illustrated in (37). | the difference is the speaker's intention in the discourse. | neutral |
train_99705 | In the current case, however, the act is a question (i.e., an inquisitive update), creating a partition over those multiple contexts, as depicted in (48). | in (15), the speaker is indicating that she is willing to make assertions only about John and the alternative speech acts about other individuals are cancelled. | neutral |
train_99706 | We observed the attributes in the conversation and calculated the appearance probability of the attributes. | we cannot divide the chunk of utterances as independent conversations. | neutral |
train_99707 | Second, the system confirms whether the terminal expression of the verb will degrade the translation quality. | the input utterance "もっと買わなきゃだ (It is necessary to buy it more)" is processed by the four steps and then the untranslatable colloquial expression "買わなきゃだ" is detected. | neutral |
train_99708 | The acceptability did not change in 1,163 (52.2%) utterances, and the acceptability decreased in 60 (2.7%) utterances. | our research proposed a procedure for detecting the untranslatable colloquial expressions automatically. | neutral |
train_99709 | Not every sentence marked with these categories is a comma splice since the error tags cover other error types as well, e.g., fused sentences, missing conjunctions for NPs, missing complementizer "that", etc. | we did not include comma splices introduced by fusing sentences together. | neutral |
train_99710 | Thing of result only if do finish complete then know "The result can only be known after you have finished doing it." | and this is exactly the function of post-preterite tense in English. | neutral |
train_99711 | In (9a) the focus is on the fact that the sixth bottle is empty, while in (10a) the focus is on the burst of the balloon. | he claims that tense is a concept related to both speech event and narrated event. | neutral |
train_99712 | Since "verb+guo2" represents an event as a whole rather than a process of realization, "x" and "a" coincide in the diagram. | "guo1" is the unmarked category, giving no specific information about the time when the verbal action takes place, while "guo2" is the marked category signifying the sense of "happened in the past". | neutral |
train_99713 | (2) A language L favors N1 attachment if: a. L has no alternative construction for expressing the N1 interpretation (Frazier and Clifton, 1996); b. L has flexible word order (Gibson et al., 1996); c. L allows constituents (e.g., adverbs) to intervene between a verb and its direct object (Miyamoto, 1999); d. L exhibits consistent use of relative pronouns (Hemforth et al., 2000); e. L has pseudo-RCs (Grillo, 2012); f. L allows constituents (e.g., adjectives) to intervene between the modified noun and the RC (schematically: N adjective RC, the modifier-straddling hypothesis, MSH, Cuetos and Mitchell, 1988). | the most lenient languages such as Romance languages have both triggers. | neutral |
train_99714 | Wald chi-square was used to calculate p-values (function Anova in the package car; Fox and Weisberg, 2011). | in order to measure the influence of the context surrounding the complex NP, tokens were classified according to whether information inside the complex NP was enough to determine attachment (internally disambiguated; e.g., voice of men that was uttered) or whether it was also necessary to consult the context surrounding the complex NP (externally disambiguated). | neutral |
train_99715 | Segments with thî: preceded by khɔ̌:ŋ "of" within a three-word window were extracted from the six writing genres of the Thai National Corpus (Aroonmanakun et al., 2009), namely fiction (which contains 7,469,530 words), newspaper (5,029,019 words), academic text (8,894,650 words), nonacademic text (5,342,092 words), law (1,190,516 words) and miscellanea (4,000,160 words). | the corpus count suggests that surrounding context can play a role in RC attachment and the bias is often but not exclusively to N1. | neutral |
train_99716 | the "world" set), then the meaning of "students" is regarded as a set student ✓ W containing all entities being students, and the meaning of "(someone) likes noodles" is regarded as a subset of W which contains all entities who like noodles. | instead of numbers in the original DCS tree to denote different dimensions (i.e. | neutral |
train_99717 | Or even if he does the shop assistant does not detect his expectation about the vase. | this paper addresses the influences of common ground, context and information structure on the linguistic production and interpretation processes with a special reference to counter-expectation in thai. | neutral |
train_99718 | This means that: 1) B had a prior expectation, ✏ M ( 3 ) ✏ + M p; 2) A's first assertion indicates that p is no longer supported by the expectation state, This contradicts the secondary component of the semantics of A-NOT-A question, Table 4, among the four Cantonese polar questions considered in this paper, only AA4 questions are simplex questions while HO2, ME1 and A-NOT-A questions have multidimensional semantics. | this paper thus offers a solution to this problem in the framework of inquisitive semantics (Groenendijk & Roelofsen, 2009), where meaning of sentences are given based on support-conditions. | neutral |
train_99719 | Yuan & Hara (2013) argue that the assertion of 'p or not p' in effect indicates the ignorance of the speaker, hence the neutrality requirement. | yuan & Hara (2013) argue that the assertion of 'p or not p' in effect indicates the ignorance of the speaker, hence the neutrality requirement. | neutral |
train_99720 | In addition, they seem to impose different restrictions of animacy on the object B. | given this, the agentive fan is to a certain extent similar to the verb start in English. | neutral |
train_99721 | That is, the [-m] role, as an under-specified role, cannot bear the ACC feature. | the disparity mainly lies in the mental use of fan "be annoyed", which is not only unattested in the corpus of taiwan Mandarin but also reported as weird by our informants. | neutral |
train_99722 | It should be further noted that the above mentioned inanimate entities can never function as the subject of the causative fan "annoy". | the latter use of fan, as that in (6), is unattested in Taiwan Mandarin. | neutral |
train_99723 | Metonymy, on the other hand, refers to a process which uses a salient entity that is easy to understand as the referent point that links to a less salient entity (Langaker, 1999). | lakoff and Johnson (1980) state that conceptual metaphor is a language phenomenon in which a speaker understands a particular concept through the use of another concept. | neutral |
train_99724 | it shows the words that have a statistically significant association with the keyword. | section 2 provides some background about corpus-based discourse analysis and discusses some limitations of the automated techniques that are commonly used. | neutral |
train_99725 | Then, the context similarity for each extracted noun and the noun in the input sentence is In this method, we hypothetically define the prephrase and post-phrase of the target noun as the context; nouns used in a similar context are extracted from the corpus. | they regard English word 5-gram as one phrase, and they generate feature vectors using pointwise mutual information (PMI) scores. | neutral |
train_99726 | However, in this study, we evaluate the performance of our generation method independently from the quality of automatic word or phrase alignment algorithms. | in our study, the class for SVM is each phrase in a paraphrase set, which contains a source phrase. | neutral |
train_99727 | This approach constructs k(k-1)/2 classifiers, where k is the number of classes, with a training data set from two classes. | these methods tend to encounter two problems when applied to agglutinative languages (e.g., Korean, Japanese, and Turkish), which are morphologically rich languages. | neutral |
train_99728 | One may think that it is sufficient to simply enumerate frequent-enough words or N -grams and use them as a whitelist to avoid suppressing their occurrences. | lCP i is defined as the length of the maximum common prefix of two lexicographically neighboring suffixes T SA i 1 ..n and T SA i ..n , where T is the string in concern and SA is the suffix array of T . | neutral |
train_99729 | 558913) and the Newly Recruited Junior Academic Staff funding of the Hong Kong Polytechnic University (Grant Account A-PL27). | finally, we will briefly discuss the contribution and future directions of this line of research. | neutral |
train_99730 | We restrict to Pattern 1 in reconstructing of source bigrams because this Pattern contains more information of context and crossing-language alignment. | then we translate unseen bigrams based on proportional analogy and filter the outputs using an Support Vector Machine (SVM) classifier. | neutral |
train_99731 | Challenging not only because speech recognition and speech synthesis are involved, but also because of the lack of dialectal Arabic parallel corpora. | at the best of our knowledge, PaDIC, up to now, is the largest corpus in the community working on dialects and especially those concerning Maghreb. | neutral |
train_99732 | The experiments for the random baseline were performed 1,000 times. | we eventually used the following transition probability parameter to avoid the zero frequency problem: where α a denotes a transition probability parameter where all the leaf nodes have the same amount probability and α s b denotes the transition probability parameter that is pre-trained using the above equations. | neutral |
train_99733 | If so, all the probabilities of all the paths from the root node to each leaf node would have been the same. | in addition, we deleted word senses that appeared only once through pre-processing. | neutral |
train_99734 | Neg_Sentiment_Fea is feature measures shallow sentiment of sentences, we manually chose a list NEG_SENTIMENT = {'no ', 'not', 'never', 'little', 'few', 'nobody', 'neither', 'seldom' 'hardly', 'rarely', 'scarcely'} to judge the sentiment, the appearance of word in this list indicating an opposed meaning, if only one word in the list appeared only once in this pair of sentences, we think that this pair of sentences expressing opposite meaning. | in our experiments, we also used plenty of style-related features, we call it "literal-based" features. | neutral |
train_99735 | In terms of implementation, we used Scikit-learn 5 toolkit (Pedregosa, Varoquaux et al., 2011) to do the classification and the parameter settings for three SVR models are shown in the following table, we chose these parameters by experiences, Clf-2 and Clf-3 used the same setting, and a better result may be achieved through fine tuning: 'rbf' 0.16 100 0.1 Table 3 Parameter settings of our three classifiers After the prediction of the similarity scores of sentences, we conducted a post-processing step to boost and correct results, we truncate at the extre- Table 4 reported the results of our method on SemEval 2015 Task 2a, from which we can know that our method outperformed the winning system by a big margin on the headlines, but only slightly better on the images. | we can get vector representations 1 and 2 of the two sentences. | neutral |
train_99736 | Secondly, what's more difficult is to recruit highly diverse subjects due to the difficulties in subject recruitment and the spacial limitations of laboratory-based experiments. | we also investigated the uncertainty of semantic transparency judgment among raters, we found that it had a regular relation with semantic transparency magnitude and this may further reveal a general cognitive mechanism of human judgment. | neutral |
train_99737 | Since in our compound stimuli, there are completely transparent compounds and completely opaque compounds, so ideally, two kinds of results should share and cover the same scale from 0 to 1. | the adjusted normalization method has an advantage over the standardization method, that is the adjusted normalization will yield results from 0 to 1 and this scale is accord with the definition of semantic transparency value. | neutral |
train_99738 | Input sentences are split into words by a morphological analyzer MeCab 1 (we used this analyzer throughout the paper). | the upper two examples show utterance pairs that were classified correctly by all methods, while the two examples at the bottom were correctly classified by only the (RNN + DP + diff) method. | neutral |
train_99739 | As shown in the above, position feature (PF) contains two elements, and relative-dependency feature (Relative-Dep) contains three elements. | the main challenge is the fact that important information can appear at any position in the sentence. | neutral |
train_99740 | Furthermore, we analyze the performance influenced by other different features, and finally the F1-measure is improved to 0.545. | there are inadequate free texts locally for extracting features, as Freebase is a well-structured knowledge repository with billions of triples. | neutral |
train_99741 | We compute principal component scores for r dimensions by applying SVD to B. | the computation time of APPROX-PMI+PCA was much smaller than that of EXACT-PMI and APPROX-PMI. | neutral |
train_99742 | We form noun phrases whose scores are greater than a threshold. | the similarity computation between patterns is crucial for unsupervised relation extraction. | neutral |
train_99743 | The plots show that, for similar words, the learned weights for the corresponding lexical features are only slightly similar; but after the lexical features are reduced to low-dimensional embedding features, the learned weights for the corresponding features are more strongly correlated. | the plots show that, for similar words, the learned weights for the corresponding lexical features are only slightly similar; but after the lexical features are reduced to low-dimensional embedding features, the learned weights for the corresponding features are more strongly correlated. | neutral |
train_99744 | By appropriate representation learning (usually by back-propagations in neural network), these embeddings can replace traditional sparse features and perform quite well together with neural network. | it is not the case for our neural network as the output layer of our network only has two neurons. | neutral |
train_99745 | Next, we explore the possibility of using a recurrent neural network (RNN) to induce multilingual NLP tools, without using word alignment information. | the combined model is built for each considered language using cross-validation on the test corpus. | neutral |
train_99746 | In Figure 2, the input vector is obtained from Equation 2 and dA1 is applied with the weight matrix of the first layer W 1 to calculate the first hidden layer. | as can be seen from Table 1, the number of nodes has little or no effect on accuracy, whereas changing the number of layers helps to improve the performance. | neutral |
train_99747 | 1-of-K treats different words as discrete symbols. | using an SdA, the result for Bagof-Features was 76.9% and that of distributed word representation was 81.7%. | neutral |
train_99748 | We can speculate that Polish people feel involved in this war, as a Polish engineer was kidnapped and killed by Pakistani extremists. | this may be caused by the following reasons: first, Bosnian Wikipedia is not as widely used as some other languages (ranked 69 in number of Wikipedia articles 11 ), which will result in a limited number of edits on these articles; second, the Civil war in tajikistan has little relevance to the people speaking, e.g., Italian or Portuguese, because of the geographical distance, which may result in limited attention to this topic. | neutral |
train_99749 | They show that surface patterns are much more accurate (from 45% to 85%) than deeper linguistic information. | a window size of five is used to verify whether negations are present. | neutral |
train_99750 | they are incoherent), conflict of the polarity may not indicate the sarcasm or irony. | formally, the polarity feature can be represented as a (key, val) pair, where the key is <pos, dict>, or <dict>. | neutral |
train_99751 | In addition, Sutheebanjard and Premchaiswadi (2010); Lertcheve and Aroonmanakun (2009) mentioned a similar extracting pattern. | the MA set has only the positive and negative sense and does not have the neutral sense of 'stock price'. | neutral |
train_99752 | The training set contains no neutral sentiment. | section 2 reviews related work. | neutral |
train_99753 | In Arabic language, sentiment resources are in general rare. | with the expanding growth of social networks services, user generated content web has emerged from being a simple web space for people to express their opinions and to share their knowledge, to a high value information source for business companies to discover consumer feedbacks about their products or even to decide future marketing actions. | neutral |
train_99754 | This analysis can unify the blocking effect observed not only in Chinese but also in Japanese and Korean, and more clearly accounts for why there is a blocking effect in these languages. | it has been observed in the literature (Clements 1975, Sells 1987, Kuno 1987, Stirling 1993, Pearson 2013, among others) that a logophoric pronoun commonly manifests the three properties listed in (1). | neutral |
train_99755 | That is, the camera is placed at equal distance from both John and Mary. | log-receive gift 'Amai heard from Kofij that shei/hej had received a gift.' | neutral |
train_99756 | It can always have the source as its antecedent. | to (33), on the other hand, the sentence in (34) does not occur in the logophoric environment. | neutral |
train_99757 | The following sentence is compatible with this idea. | it is worth noting that there is no attitude predicate in (27). | neutral |
train_99758 | 10 There are at least two possible ways for (10) to be derived under the analysis in question. | an alternative analysis would reattach the cops to the matrix TP or CP, where the cops could ccommand they. | neutral |
train_99759 | This extraction, however, violates the Sentential Subject Constraint, resulting in the RRC effect. | at this point, the strategy may be successfully applied: The parser integrates John as a subject, postulating a trace in the specifier position of the vP such that the trace can be assigned a theta-role by the verb saw, the theta-role being transmitted through a chain to the subject John. | neutral |
train_99760 | Here, however, the final attachment site in the jar (=β) does not c-command the original attachment site into my mouth (=α); this results in a high-cost reanalysis. | hence, the dislocated NP is properly licensed. | neutral |
train_99761 | RVC pattern is therefore not used in Cantonese corresponding sentences. | rVC-patterns in causatives are strictly prohibited. | neutral |
train_99762 | The idea is that the P-type MSC in (4b) is derived by PR from a single subject construction (SSC) where the first NP is licensed as a Possessor (cf. | these MSCs must be licensed differently. | neutral |
train_99763 | Spaces are used for easier reading and generally put between phrases, but there are no clear rules for using spaces in Khmer language. | first, we trained a CRf model from 5,000 manually segmented Khmer sentences. | neutral |
train_99764 | The operation types are (i) generation of a sequence of source and/or target words (ii) insertion of gaps as explicit target positions for reordering operations, and (iii) forward and backward jump operations which perform the actual reordering. | in all-but-one of the evaluations involving Japanese and Korean the HPB-SMT approach gave rise to the highest scores. | neutral |
train_99765 | Word segmentation is helpful in Chinese natural language processing in many aspects. | because CSLM outperforms bNLM in probability estimation accuracy and bNLM outperforms C-SLM in computational time. | neutral |
train_99766 | Traditionally, Back-off N-gram Language Models (BNLM) (Chen and Goodman, 1996;Chen and Goodman, 1998;Stolcke, 2002) are being widely used for probability estimation. | most of the other factors are fixed when we discuss one single factor. | neutral |
train_99767 | Since these two approaches still adopt head-dependent structure as the backbone of the translation rule, the freedom of generating translation candidates is limited. | in the future, we will explore more powerful features to better score the translation candidates. | neutral |
train_99768 | Both of them report improvement of about 0.9 point in BLEU score over the baseline on their dataset. | there are two typical problems for this approach. | neutral |
train_99769 | Conversion is done by (i) collecting conversion candidates from various utterances on the Web (e.g., Twitter postings), which are annotated with their authors' personal attributes (this paper deals especially with gender, age, and area of residence), and (ii) using syntactic and semantic filters to suppress the generation of ill-formed utterances. | we conducted two experiments to investigate the performance of our proposed method of converting sentence-end expressions. | neutral |
train_99770 | Table 6 lists the examples of the surviving candidates that are characteristic of the female attribute. | these utterances all convey the meaning that corresponds to 'Do you want to go to school?' | neutral |
train_99771 | We conducted two experiments to investigate the performance of our proposed method of converting sentence-end expressions. | the annotation of the authors' personal attributes to the postings was done based on the self-declarations by the authors. | neutral |
train_99772 | Eventually, some interesting findings are obtained. | in the following study, we will expand the research scope and test more interactions between synaesthesia and the cognitive modelling, through which we hope that our studies could contribute to bridging the research on synaesthesia in linguistics and that in neuroscience eventually. | neutral |
train_99773 | Both 聲 and 音 are in the auditory domain, which can denote the sound, thus, both of them can inherit common semantic information from the upper concept, namely sound. | similarly, Hong and Huang (2004), Hong and Huang (2005) also pointed that semantic/cognitive features embedded in perceptual near synonyms can influence their usages in the language. | neutral |
train_99774 | 聲 is much easier to be described through synaesthesia than 音, and also employs higher symmetry on synaesthetic selection of modifiers. | 聲 seems to be much easier and more common to be described than 音 through synaesthesia. | neutral |
train_99775 | This paper is devoted to a fine-grained study on the interaction between synaesthesia and the near synonyms 聲 and 音 of the auditory domain in Mandarin Chinese. | 聲 and 音, as a pair of near synonyms in the auditory domain, have both similar properties and different characteristics on synaesthesia. | neutral |
train_99776 | In the third and fourth procedures, the chat logs are exported by the corpus developers in charge of proofreading the annotations. | for the rest of the criteria, the workers answered using 6-point Likert scale: 6 for excellent, 5 for good, 4 for rather good, 3 for rather poor, 2 for poor, and 1 for terrible. | neutral |
train_99777 | In addition, there are three hypotheses about the production process of speech and gesture: the Free Imagery Hypothesis (Krauss et al., 1996(Krauss et al., , 2000de Ruiter, 2000), the Lexical Semantic Hypothesis (Schegloff, 1984;Butterworth & Hadar, 1989), and the Interface Hypothesis (Kita & Ö yzürek, 2003). | this hypothesis predicts that gestures do not convey the information which is not encoded in the accompanying speech. | neutral |
train_99778 | In addition, Chinese consists of a large number of homophones, which allows a syllable to correspond to many different characters with various meanings. | within a neuronal embodiment account (e.g., Pulvermueller, 2013), we could argue that it is due to differences in the neural encoding of the two modes of training. | neutral |
train_99779 | 2005, Stanojević andSima'an 2014). | the validity of this method, which makes the evaluation of sentence-bysentence comprehension possible, has been acknowledged (Ross 1998). | neutral |
train_99780 | These properties have been used to statistically classify learners' output into a range of proficiency levels (Thewissen 2013). | their learner corpus was composed of data from 30 native English speakers and 90 EFL learners classified into three levels according to tOEIC scores. | neutral |
train_99781 | Their learner corpus was composed of data from 32 EFL learners classified into three levels according to Test of English for International Communication (TOEIC) scores. | the present study proposes to annotate listening comprehension data for individual sentences, which is expected to offer a finergrained analysis for the identification of learners' linguistic problem areas. | neutral |
train_99782 | This study proposes a rule-based, signal-processing agent-based model to reveal the dynamics of language development in early infants. | as the examples shown in Table 1, infants are gradually aware of what they hear, where the sounds come from (sound localization) and what are the differences between them (consonant and vowel distinctions). | neutral |
train_99783 | So, the Rule R1: if the SA is composed of more than one token and a common SC is shared between tokens, then the relevant SC is the shared one. | the third SB characterizes the intransitive syntactic construction "SV" ((S) followed by a (V)). | neutral |
train_99784 | 脈 mai "meridian": "The blood vessels, distributed all over the human body and animal body, carry blood everywhere." | we choose to analyze the four atypical body parts: 血, 肉, 骨, 脈, each of which is defined by the online dictionary compiled by the Ministry of Education, Taiwan (MOE Dictionary) as 1 血 xie "blood": "The red fluid in the veins/vessels of higher organisms, which starts from the heart and circulates throughout the body. | neutral |
train_99785 | (14) has the extra than phrase 'than the page limit' within the izyooniclause. | 3 Note that the proposition is now type <s,t> due to the situation semantics. | neutral |
train_99786 | Their primary goal is to account for the positive implication of izyooni-comparatives. | at the same time, it suffers from the same problem that Shimoyama (2012) does, namely, overgeneration. | neutral |
train_99787 | Not: 'That paper is 2 pages long.' | in doing so, we need to come up with non-event semantics, because many izyooni-comparatives, including (1) and (13), are not eventive. | neutral |
train_99788 | To my knowledge, there are three previous studies of izyooni-comparatives. | 35 other analyses are somewhat compatible with the lack of positive implication in (35). | neutral |
train_99789 | On the first set of all DDs, three out of four slopes of the line-of-best-fit have positive slopes, but the slope for Adjunct island type has a negative slope. | the prosecutor"s dish was lost because it was not based on fact.? | neutral |
train_99790 | Crucially, there are no significant interaction effects of GAP-POSITION and STRUCTURE for the Adjunct and Subject island types. | we performed the two pairwise comparisons on the embedded GAP-POSITION condition and the non-island STRUCTURE condition to test for each independent effect of STRUCTURE and GAP-POSITION. | neutral |
train_99791 | Some other nominal categories, such as entity nouns and dongzuo-marked nouns, allow individualization in two different dimensions. | dongzuo-marked nominal is an event noun while xingwei-marked nominal tends to be an entity noun. | neutral |
train_99792 | (SC) any raise_the_investment DE action 'It is not recommendable for investors to increase their investment in any forms.' | such a difference may not be applicable to all the entity nouns. | neutral |
train_99793 | This is one of examples showing the property-sharing constraint. | since the topic is the highest position of the forward looking center ranking, the last candidate is likely to be the antecedent of the zero object. | neutral |
train_99794 | The accuracy of the proposed method reached 73.37%. | the object of the sentence B is not translated in German in either MT system. | neutral |
train_99795 | In this paper, we prepare negative words related to trouble information about a target product, such as "bad", "wrong" and "failure", as a negative word set. | imperfective forms The target tweets in this paper are written in Japanese. | neutral |
train_99796 | We introduce three characteristics and a scoring method to the bootstrapping. | for the TS extraction process, we judge the presence of TEs in each sentence. | neutral |
train_99797 | This is also important future work. | trouble information is a fraction of a percent of all tweets on Twitter. | neutral |
train_99798 | We acquire only TEs with high confidence values as new TEs. | combined with the difficulty of the concrete definition of non-trouble tweets, collecting non-trouble tweets with high coverage is also a difficult task. | neutral |
train_99799 | In our research, we use the words in tweets to calculate the speech components (as mentioned in §2.2 Step 5) for each user, and these speech components can be used for inferring Societas personal values with TwitterSocietas Model. | input: speech keywords KW for value component j, word set {W C} for all tweets, and the binary set V j of value component j. | neutral |