id (stringlengths 7–12) | sentence1 (stringlengths 6–1.27k) | sentence2 (stringlengths 6–926) | label (stringclasses, 4 values)
---|---|---|---
train_99000 | does correctly extract their objects. | for instance, many verbs share the [..PP V NP] structure. | neutral |
train_99001 | We first describe in Section 2.1 the DTW algorithm used by Church and Gale in [1]. | we then detail the results of the Arcade II evaluation campaign [2], for which we took part to two tasks: alignment between 5 european languages, and alignment between French and 6 languages not based on the latin alphabet. | neutral |
train_99002 | They are (gh), (ddh), (dh), (bh), (dzny), (dzh). | it is necessary to set up principles in advance while compiling a dictionary. | neutral |
train_99003 | Dequan Zheng et al. | documents re-ranking is a method to sort the initial retrieved documents without doing a second retrieval. | neutral |
train_99004 | It is convenient to find out the possible Eg candidates which contain the 'v' concept according to the knowledge-base. | the eigen semantic chunk (EK) is not only constituted of the verb, but also the noun and other words. | neutral |
train_99005 | Generally the Eg roots in the head verb. | according to the denial rules, some 'v' concepts are rejected as Eg candidates. | neutral |
train_99006 | In some applications of opinion mining in text, it is important to distinguish what an author is talking about from the subjective stance towards the topic. | we totally collect 86 popular film terms from the corpus and online film glossary 3 . | neutral |
train_99007 | Now, it has been extensively used in text filtering, public opinion tracking, customer relation management, etc. | the proper nouns, such as titles of films are not included in the targets. | neutral |
train_99008 | Overall, the resulting method is able to produce the correct underlying forms of the surface forms of 96% of the tokens in the test data, having a 4% error which is due to the d-r alternation rule. | the resulting method is able to produce the correct underlying forms of the surface forms of 96% of the test data, having a 4% error, which is attributed to d-r alternation. | neutral |
train_99009 | The monophthong formants were measured using LPC method. | this is believed to be caused by acoustic model of monophthongs. | neutral |
train_99010 | From the observation of three subjects, tone acquisition order accords with previous researchers and TS rules acquisition order are TS3/TS4>TS2>TS1, which shows that the order of TS acquisition is influenced by tone acquisition. | two decades after Chao's work, Li and thompson (1977) conduct the first systematic study in acquisition of tone of Mandarin. | neutral |
train_99011 | Furthermore, because this correction was applied only to the all-adverb group, there was no change in the number of tagged adverbs. | adjectives were chosen that had affirmative and negative characteristics to become a seed, and they were used to extract other adjectives that co-occurred to judge the subjective characteristics of each in view of the co-occurrence. | neutral |
train_99012 | 20wf We distinguish well-formedness values from activation values since we assume that the former is necessary for gaining the whole well-formedness of phrases and this is not altered once the value is obtained, while the former is changed by the conditions in (6). | to capture the linear-order effects, we formulate a memory-based model that predicts fine-grained degrees of acceptability. | neutral |
train_99013 | In our memory-based model, (1b) is predicted to be unacceptable for the reason outlined in the end of section 4. | the function g is defined so that, with an auxiliary assumption, we would be able to model the degree of deactivation through the passage of time. | neutral |
train_99014 | The parser synt provides a command line interface, which is suitable for all forms of batch processing. | the third source of information for VerbaLex is the syntactic lexicon of verb valencies denoted as BRIEF, which originated at FI MU Brno in 1996 [17]. | neutral |
train_99015 | Since the project is exposed to these risks by its design, a priority for the project members is therefore to tackle this issue from the very beginning. | we consider that a step further could consist in identifying more precisely the nature of these "associations". | neutral |
train_99016 | We have increased the entries of our dictionary from 33,438 entries to 120,769 entries. | from a total of 217,913 unique names, they give 619 distinct family names and 75,581 distinct given names. | neutral |
train_99017 | In [5], some pruning steps have been applied to delete some false unknown words. | we need to extract more words to be added to the dictionary. | neutral |
train_99018 | would become Whole BNF descriptions are collections of named rules. | there are four components making up the BNF: (i) terminal symbols, e.g. | neutral |
train_99019 | For the calculation in the natural language on line, it is necessary to introduce several formulas as follows: σ 1) W2(σ 2) …Wk(σ k) We now describe an example to illustrate the application of Theorem 1. δ is defined as follows: In this paper, we have considered some basic probabilistic models of computation by inputting strings of words, and some new conclusions have been discovered. | a major issue to decide upon is how to represent the BNF syntax as a data structure. | neutral |
train_99020 | the largest clustering algorithm applicant templates training process, to speed up the pace of Hmm parameters optimized, through reference extensive documentation that the largest clustering only have a greater impact on the first iterative process, but follow that the iterative accelerate effects are not obvious, on the contrary, an increase of the complexity and fewer training data may increase errors. | isolated word speech recognition system, with broad application prospects, such as computer control, industrial control orders, family services, banking services, personal information identification, personal items such as mobile communications, the application of this technology will greatly facilitate the daily lives of people. | neutral |
train_99021 | Secondly, the questions in training set are numbered, so they cannot cover all kinds of query way. | if a word has two different first sememes at least, the different first sememe belongs to different classification. | neutral |
train_99022 | So the similarity score SIM1 = 0.8. | we have evaluated the presented new approaches using an English-Arabic parallel corpus. | neutral |
train_99023 | Then word probability which represents the probability of selecting a target word among all other target words in the same sense division is computed [2]. | an automated approach is presented for resolving target-word selection, based on "word-to-sense" and "sense-to-word" source-translation relationships, using syntactic relationships (subject-verb, verb-object, adjectivenoun). | neutral |
train_99024 | Let's take "工" as example and add E-HowNet definitions for each class, shown as table 1: Table 1. | morphology of Chinese allows new words to be generated by compounding and affixation. | neutral |
train_99025 | For instance, for the sense class of "labor|工人", there are three different definition patterns lead by the features of "telic", "gender", and "age" in the same class. | the definition can be glossed as: "a sound recorder is a machine which functions as the instrument of sound-recording activity". | neutral |
train_99026 | According to the opinion of semantic network, the words with concrete concept has the largest concept differentiation, the next comes the synthesis concept between concrete concept and abstract concept. | they are at the bottom of differentiation level graph of concrete concepts, for reason of it has the least differentiation. | neutral |
train_99027 | For example,the phrases of positive-sequence, such as "长| 兔 毛 "," 高 | 鞋 跟 "belong to the combination of Modificatory Combination firstly and Noun Conglomeration Combination secondly. | the concept of individual person is the person concept except the concept of aggregate person. | neutral |
train_99028 | The above paragraph can be summarized as rule 1 below. | the example of inverted sequence, such as "金 丝猴"、"长颈鹿"、"高跟鞋"belong to the category of new words and these facts negate the rule of decreasing differentiation. | neutral |
train_99029 | In constructing n-gram models, we always face two problems. | for specific domains, n-gram models usually suffer from the data sparseness problem, because large amounts of domain-specific data are usually not available. | neutral |
train_99030 | The experimental result is acceptable and conforms to human's intuition. | 4:When father nodes of all concept nodes are found, we can easily describe the relationship of the sememes through concept-sememe tree. | neutral |
train_99031 | 3:The method of searching the father node from concept nodes of the second type: to any concept node (Node(i)), find the nearest colon j which is ahead of Node(i), if the number of the symbol "{" is one more than the number of "}" between the area of Node(i) and colon j, so the concept node which is ahead of colon j is the father node of Node(i). | associated words usually have no similarity such as the word pair "吃 (eat)" and "食物(food)" etc. | neutral |
train_99032 | We'll refer the phrase "target text" below to the text currently being segmented. | is the probability of character which is estimated by the total occurrences of the word normalized by N, namely: equation 5 is represented as follows: 3 WS under Chinese Sign Language Environment In the course of WS, efficiency will be reduced gradually with the increase of the length of sentence, so it performs the preprocessing to the target text got from the Web. | neutral |
train_99033 | Consider the following sentence: (6) watasi-ga nihon-e iku toki-ni purezento-o katta. | the subject word hana and the topicalized word zoo have a part-whole relation. | neutral |
train_99034 | For each text, the precision and recall are computed to evaluate the quality of the summary. | our calculation of importance of a sentence also depends on these two components. | neutral |
train_99035 | It is propounded that a general nongap topic has certain semantic linkage with a comment and thus is theta-marked and occupies an A position. | when an overt pronominal like ta 'it' is substituted for the EC (e i ) in (1), the acceptability of the sentence seems to be lower. | neutral |
train_99036 | Kappa statistics is used to check the inter-rater agreement between human annotators (Cohen 1960, Carletta 1995. | the context contains keori which is ambiguous among 'street' and 'distance' and contains collocations that occurs in both sense of keori. | neutral |
train_99037 | The well-known Distributional hypothesis (Harris 1964) claims that a sense of an ambiguous word is dependent on collocations around it. | the Kappa value of our experimental results is 0.88 which proves the reliable agreements. | neutral |
train_99038 | The four dictionaries produced by four different companies who claim that the dictionaries they produce include authoritative dictionaries like Oxford and Longman. | we cannot simply assume that dictionaries 3 and 4 are better than the Longman one. | neutral |
train_99039 | Our approach is based on large-scale bilingual corpora, and the computation is implemented by applying the theories of VSM and lexical mutual information to traditional IR. | it is always a tough job to build a large-scale high-quality bilingual corpus. | neutral |
train_99040 | It is important to note that DLRs can similarly be type-based, such as wordnets or precision grammars, or token-based, such as treebanks or sensebanks. | systems generally employ tagging and/or parsing to form token-level subcategorisation frame hypotheses, based on manually-constructed templates. | neutral |
train_99041 | For the substructures for temporal entities and relations, ISO-TimeML already has a partial formal semantics including a small set of translation rules into ITL, which can be reused adapting them to the use of events rather than temporal intervals (except when temporal entities are to be interpreted). | for ease of reference we have labelled the relevant substructures in the example in boldface. | neutral |
train_99042 | For a singleton set {a} we have the obvious correspondence: SR'({a}) = SR(a) for every semantic role predicate SR. | the distributivity and relative scopes of quantified NPs in a sentence is nearly always only partially specified by the sentence and even not even fully by information from the context in which the sentence is used. | neutral |
train_99043 | It needs to be repaired, and repairs cost. | any given process, that is, any sequence of events linked by change relations, is a scale whose partial ordering is induced by the predicate change. | neutral |
train_99044 | In this paper, to avoid an overgrowth of notation, I will write where, strictly speaking, I should, in the ontologically promiscuous notation of Hobbs (1985a), write change(e 1 , e 2 ) ∧ p (e 1 , x) ∧ q (e 2 , x) This says that there is a change from the situation of p being true of x to the situation of q being true of x. | we decompose this goal into two subgoals-containing the coffee in the cup and moving the cup. | neutral |
train_99045 | A person's salary at a particular point on the money scale: John's salary reached $75,000 this year. | a very common pattern involves a change of location: there is a change from the situation of x being at y in s to x being at z in s. Here, at is the abstract figure-ground relation, so any domain conceptualized in terms of that automatically inherits the vocabulary provided by a theory of change. | neutral |
train_99046 | Chelswu-Nom what movie watch-Q 'What movie does Chelswu watch?' | the same contrasts are found in Korean,as in (15). | neutral |
train_99047 | S. Lee (2006) proposes that bare NPs without Acc markers are more restricted or "marked" from the perspective of neo-Gricean pragmatics, utilizing Levinson's (2000) pragmatic Case, on the other hand, gives rise to "hard" semantic/pragmatic effects such as specificity, definiteness, D-Linking, and the like. | long.time-at M.-Nom a/a.certain man-(Acc) meet-Past-Dec '(Long time ago) Mary met a man.' | neutral |
train_99048 | (6) a. Mary-ka Chelswu-∅ manna-ss-e. Mary-Nom Chelswu meet-Past-Dec 'Mary met Chelswu.' | in this case, a speaker uses ka-marked nominal X in order to convey the following meaning: the meaning of 'X (and only X) ...' or 'it is X that ...' the nominal with -ka is generally a discourse-new information. | neutral |
train_99049 | In this case, additional morphology is required in order to achieve the specific interpretation. | it is naturally expected that malfunctioning items will be returned to the store within a warranty period. | neutral |
train_99050 | The current system, without a dictionary, is fairly successful in parsing texts transcribed wherever applicable by Joyo Kanji (frequently used Chinese characters), but it is not able to parse successfully sentences containing successive phrases transcribed in Hiragana, because it recognizes the beginning of a phrase by finding a character that is not a Hiragana. | in addition, it resorts to another feature characteristic of Japanese (a head-final language) that the functional part is located at the end of a phrase. | neutral |
train_99051 | At present, the current system is equipped with a heuristic means of handling verb phrases, for instance, by listing each possible form. | it fails to parse correctly such texts as those found in books for young children that are written without using Chinese characters. | neutral |
train_99052 | It is also important that as the size increases, the precision also increases. | semantic orientation syntactic piece any phrase ⇒キレイ (beautiful) any phrase ⇒使い-やすい (easy to use) any phrase ⇒美味しい (good taste) 飲み-やすい (easy to drink) ⇒any phrase any phrase ⇒良い-ない (no good) any phrase ⇒使い-にくい (hard to use) any phrase ⇒まずい (bad taste) いまひとつ (unattractive) ⇒any phrase 不具合-が (trouble) ⇒any phrase negative Table 6: An example of the generalized dictionary positive Table 7 shows the results of each domain using extended dictionary. | neutral |
train_99053 | Actually, even though we used general corpus consists of weblog, precision and recall have increased. | we think that this is an encouraging result since there is an possibility to further extend the dictionary with keeping this precision high, according to the discussion above. | neutral |
train_99054 | Adding to that, all linguistic translation aids are suggested automatically and simultaneously in "proactive" behavior. | this paper has described the two levels of functionality in the experimental computer-aided translation environment BEYtrans. | neutral |
train_99055 | A first experimental version of BEYTrans has been completed and deployed on the Web (BEYTRANS, 2007). | to them in an asynchronous and "proactive" manner. | neutral |
train_99056 | From the basic types we build simple types by taking adjoints or repeated adjoints. | the typing in the pregroup formalism assumes that the verb is the central point of the sentence. | neutral |
train_99057 | For customization of our POS tagger, we tuned about 6,000 lexical and about 1,500 tri-grams. | it means that about 87.60% of all translations were understandable. | neutral |
train_99058 | The POS tagging module of English-to-Korean patent machine translation system is based on lexicalized HMM (Pla & Molina, 2005). | the number of the sentence that were rated equal to or higher than 3 points was 438. | neutral |
train_99059 | Methods One calculates the position of (w, z) where it is of shortest distance from (0, 0). | these methods are elaborated below. | neutral |
train_99060 | For example, in expressing potential forms, we use a modal representation to modify the head, which is a verb: ( The meaning of a modal differs when occurring in different relative positions with the negation marker. | for example, the word 小子 xiaozi 'lad' refers to someone who is young. | neutral |
train_99061 | The complete inventory of modal meaning representations is as follows: In the following table we give an example for each modal meaning: Some words have modal representation in E-HowNet simply because they are modals. | a word in E-HowNet can be defined with simple concepts, sememes, or a mixture of simple concepts and sememes interacting with features. | neutral |
train_99062 | An examination of this corpus showed that the corpus also contained documents in languages that are closely-related to the identified minority language. | these documents are used as inclusion and exclusion terms for the query. | neutral |
train_99063 | Sets of relevant and non-relevant documents are taken as initial inputs. | the terms that were selected by the query generator for the relevant set are most likely unique to each of the target languages. | neutral |
train_99064 | (22) a. I khi chhai-chi-a be bo saN. | as shown in (22b) and (23b), long 'always' appears to c-command bo 'not' so that long is structurally higher than bo . | neutral |
train_99065 | First I induce core grammatical functions and semantic interpretations of SFPs. | a feature-driven approach is not favored. | neutral |
train_99066 | This situation is not preferred by the system. | if a sentence lacks a major part such as an NP or a VP, the Syntactic Refiner will combine it with its adjacent sentences. | neutral |
train_99067 | A well-known syntactic property of V-e-kata compounds is the intervention of other morpho-syntactic elements. | a possible solution is some sort of backward operation from syntax to the lexicon. | neutral |
train_99068 | Furthermore, these verbs are particularly important from the perspective of lexical semantics, in that they all have the meaning component of MOVE or GO which is the foundation of all motion events in Talmy's (2000) cognitive theory of lexical semantics. | such elements as a topic marker, delimiters like -man 'only' and -to 'also', plural/manner markers, and even a case marker can intervene between the V-e and -kata (K-H Kim 1996, C-S Suh 1996, C-H Lee 2006). | neutral |
train_99069 | 1 the semantic relation not exist rel selects possibility as its ARG1 value. | their combinatorial possibilities with respect to the complement and predicate types call for a much finer-grained syntax. | neutral |
train_99070 | The possible revised lexical rule would contain (6) as the VP specification as the value of COMPS. | others include passive constructions and topicalized structures. | neutral |
train_99071 | For example, in Prolog the normal binary tree structure is often implemented recursively: is the well-known Prolog notation for the tree diagram in which S goes to NP and VP. | in order to process Japanese sentences efficiently we need to take into account when lexical rules are applied together with transition rules. | neutral |
train_99072 | In many works on reduplication patterns, CR has been unfairly regarded as a subclass of mere repetition or lexical duplications that simply function as an intensifier. | zimmermann's (2006) Contrastive Focus Hypothesis characterizes contrastivity in the sense of speaker's assumptions about the hearer's expectation of the focused element. | neutral |
train_99073 | The difference is clearly shown in (2). | pragmatic approach to focus makes no specific reference to linguistic patterns. | neutral |
train_99074 | Because of the high degree of world immanence, eventualities are available to immediately subsequent with ta 1 . | as far as the adjectives are concerned, only intransitive adjectives are observed, for examples, wu 2 liao 2 boring, chi 2 duen 4 slow (in thought or action), za 2 complicated, bu 2 iao 4 lian 3 shameless, bu 4 shuang 3 angry, wue 2 xian 3 dangerous, jin 3 zhang 1 nervous. | neutral |
train_99075 | Moreover, the motivations for the change are also another issue which deserve further research. | the adequacy of the assumption needs further study. | neutral |
train_99076 | It is not specific that the motion verb denoting to "arrive at" becomes an excessive structural particle because there are many southern dialects displaying such a phenomenon. | since we failed to find dao's use as an extent marker in ancient Mandarin, it might be until this modern time it is used to indicate excessive extent. | neutral |
train_99077 | Comparing with the great number of papers about de, there is no considerable amount of studies regarding the excessive structural particle dao, possibly because of the lack of long history. | dao could appear independently as a final particle (see section 5). | neutral |
train_99078 | Each dialogue consists of one speaker's turn that includes at least one utterance expressing the speaker's anger 2 , and its preceding and following turns that support the fact that the utterance expresses the anger. | these expressions do not affect the informative meaning of the utterances but convey additional emotional meanings or attitudes along with it. | neutral |
train_99079 | The speech signals for anger and joy have quite similar pitch and intonation. | as a preliminary result from the user study, we constructed patterns of sentences and their speech acts carrying particular emotions. | neutral |
train_99080 | Such associative information is apparently sufficient to support system users in improving their associative thinking and creativity by encouraging them to move beyond literal, direct and superficial aspects to richer, freer, and more inspired conceptual associations. | also in contrast to the aCD, which only examined nouns, the JWaD is surveying words of all word classes. | neutral |
train_99081 | The Japanese standard word order is "渋谷でのキャンペーン" ("渋谷"=Shibuya (location name)), "での"=in, "キャンペーン"=campaign), "けいばとは何" ("けいば"="horse race", "とは"=be, "何"=what), and "えひめへようこそ" ("えひめ"=Ehime(location name), "へ"=to, "ようこそ"=welcome), respectively. | adjectives (40 types, 64867 tokens in total) adverbs (5 types, 23258 tokens in total) Prepositions (6 types, 1816 tokens in total) Verbs (3 types, 741 tokens in total) | neutral |
train_99082 | Using only katakana expressions, we compiled a list of expressions in order of frequency (Tables 3 and 4). | the adjectives and adverbs may be special expressions. | neutral |
train_99083 | The answer is the list of nes of quadruples in L. If related-to of a question is 'of', we also search for quadruples whose object is embedded in category (case (d) as discussed in Section 3.2). | these two conditions are used in the pattern generation procedure as described in Figure 5. | neutral |
train_99084 | Occurrences: An occurrence of a 〈person, category〉 tuple is defined as a 4-tuple: where middle is a string surrounded by person and category. | a middle of an occurrence is not necessarily reliable, (Nguyen and Shimazu) proposed a method to retain reliable ones based on two criteria: repetition and diversity as follows: Repetition of a middle (repetition(middle)) is the number of times the middle appears between the person and category of (person, category) tuples of same person. | neutral |
train_99085 | Syntax parsers can benefit from speakers' intuition about constituent structures indicated in the input string in the form of parentheses. | many syntactic boundaries may well fall within a group. | neutral |
train_99086 | With regard to multiple subjects, a number of linguists, e.g. | the derivation in (1c) allows the NPs to compose non-standard constituent cluster, as shown in (2b): (2) In this paper, we discuss the nature of non-standard constituent cluster in Japanese. | neutral |
train_99087 | Long-distance dependencies (LDD) are resolved on f-structures using LDD path frequencies acquired from the f-structure annotated treebank and automatically acquired subcategorization frames (O'Donovan et al. | table 2 shows the numbers of the core arguments in the Gold Standard f-structures and the numbers of zero pronouns of each core argument (SUBJ, OBJ, OBL). | neutral |
train_99088 | The experiments show that the morphology-based approach and the probability-based approach improve the f-scores of the annotation algorithm in terms of the pred-only f-scores of the sentence as a whole. | the results of zero pronoun identification for OBL are lower than that for OBJ, because of the ambiguity of "ni" marked NPs. | neutral |
train_99089 | The communication of the sentence analyzer, which uses the CGI (Common Gateway Interface), is to send the sentence to the analyzer and receive the result of it. | for expert users, our system adopts hot keys for every function to improve the efficiency, for example, CTRL-I for insertion of a morpheme or a chunk. | neutral |
train_99090 | *Mary-ka nuwkuwnka-eykey [John-i etten umsik-ul -Nom someone-to -Nom some food-Acc cohahanta-ko] malhayss-ciman, kunye-nun [nuwkuw-eykey like-Comp] said-but -Top whom-to etten umsik-inci] kiekhaci mos hanta which food-Q remember not do 'Mary said to someone that [John liked some food], but Mary cannot remember to whom which food.' | the fact that the locality restriction holds even in rightward movement-forbidding languages does not provide a direct argument against Lasnik's account for the restriction in English. | neutral |
train_99091 | In other words, it can take cyclic movement which consists of QR followed by wh-movement. | when we suppose that examples like (4) in English are genuine multiple sluicing which is derived by deleting a TP after multiple wh-fronting occurs out of it, an immediate problem facing us is why the example without TP deletion as in (6) is ungrammatical: (6) *They didn't tell me [which] [for which] got something The following pair also makes the same case. | neutral |
train_99092 | There are, however, the multiple sluicing constructions where two indefinite expressions in the antecedent clause are apparently existential, as in (38), which is cited from Lasnik (2007) If existentially quantified expressions can take freer scope than universally quantified ones as argued by Pesetsky (1987) and Reinhart (1997), the unacceptability of (38) with the complex TP deleted is unexpected, which raises a problem with our proposed analysis. | we will investigate whether examples like (4) are analyzed on a par with corresponding examples in Bulgarian and Serbo-Croatian. | neutral |
train_99093 | Nonetheless, the strength dynamic has an added benefit in terms of the lexical category information. | speakers control the prosody of an utterance in order to signal linguistic and affective information. | neutral |
train_99094 | Lastly, Thai sentences sometimes contain discontinuous sentence constituents in their construction. | in addition, we utilize a prosodic encoding scheme that integrates both syntactic and rhythmic constraints. | neutral |
train_99095 | The duration of a foot will differ somewhat depending upon the phonetic structure of the syllables comprising it. | while a noun, the head of a noun phrase, always precedes its modifying adjectives and determiners, the verb phrase exhibits less consistency. | neutral |
train_99096 | Multiple sentence hypotheses are thus processed simultaneously by pruning the network through the propagation of various constraints. | the naturalness issue can be attributed to the lack of sophisticated prosody-generating scheme. | neutral |
train_99097 | (2003), Culotta and Sorensen (2004), Bunescu and Mooney (2005). | among the three different instance representations except CPT, the T-CPT (highlighted in bold font) achieves slightly better performance of 2.0/0.4 units in F-measure than the other two representations B-CPT and E-CPT respectively. | neutral |
train_99098 | There are many metrics to evaluate summaries but precision and recall measures are used extensively in ideal summary based evaluation of summarization systems. | we reviewed some ideas about making this construction task automatic. | neutral |
train_99099 | NMF is a dimensional reduction method and an effective document clustering method, because a term-document matrix is high-dimensional and sparse, from Xu et al. | by the sign of the i-th value of y, we can judge whether the i-th data element belongs to cluster A or b. y D z 2 / 1 = Note that Eq.4 is the object function when the number of clusters is two. | neutral |