id (string, 7–12 chars) | sentence1 (string, 6–1.27k chars) | sentence2 (string, 6–926 chars) | label (4 classes)
---|---|---|---|
train_99400 | Several approaches for this step have been proposed such as the longest common substring (Agichtein et al., 2001), substrings in suffix trees (Ravichandran and Hovy, 2002;Ruiz-Casado et al., 2007) and edit distance based alignment (Ruiz-Casado et al., 2007). | finally, unlike other approaches, the Espresso approach only considers the relationships between the targets and the patterns. | neutral |
train_99401 | In Sections 5, we noted that the CEP RA achieved a stable performance on different relationships. | such an evaluation is usually done using a sentence collection different from the one used for producing the extraction patterns. | neutral |
train_99402 | However, there may be some overly-general patterns that are characterized by high coverage and low precision. | in this section, we focus on the main concept about estimating pattern's reliability with consistency measurement between different sentence collections and measure the seed's quality by reliable patterns, described in the following sub-sections. | neutral |
train_99403 | To explain the intuition behind our approach, consider an "oracle collection", S O , which contains all sentences describing the targeted relation that we can find on the Web. | there may be some overly-general patterns that are characterized by high coverage and low precision. | neutral |
train_99404 | Nevertheless, the systems have the problem of extracting names without clue words or in ambiguous context. | for the organization names, although they do not appear in the regular patterns like person names, the way they appear in the text seems like a pattern. | neutral |
train_99405 | Moreover, one word can have more than one part of speech. | moreover, in Thai, lots of names, especially location and organization names, are in the same forms as general phrases. | neutral |
train_99406 | Apart from the highest number of samples in the corpus, the person names usually appear in regular patterns such as occurring with the titles, the first names separated from the last names by spaces, etc. | apart from presenting the NER systems, this study is conducted to find out if the more informative answers can improve the system performance of Thai NER like in Chinese or not. | neutral |
train_99407 | Traditionally, anaphoricity determination has been tackled independently of coreference resolution using a variety of techniques. | "himself" in 2-2) often works as a short-distance anaphor and is locally bound by its antecedent (e.g. | neutral |
train_99408 | The bi-gram "实际(…)出发" bears a higher co-occurrence frequency from the corpus based on statistical models. | the approach, aimed at low frequency word pairs which lead up to poor performance in pure statistical-based associate measure approaches, proposes distinct models reflecting diverse linguistic knowledge to improve both precision rate and recall rate. | neutral |
train_99409 | (Naxi) what NOM read finish 'What did Aka finish reading?' | with that in Japanese, however, the marking on the arguments in Naxi is optional. | neutral |
train_99410 | Responding to the increasing social demand for CI interpreters and English users of certain CI skills, a number of universities and colleges in China have offered CI as a compulsory core course for English majors. | male learners might be less driven by the fixed time for interpreting and less affected by the environment. | neutral |
train_99411 | The performance of different strategies for formula (2) will be experimentally investigated in section 6.3. | but the size of RG65 is limited, to train and test algorithms on fixed subsets of it will causes overfitting and bias. | neutral |
train_99412 | Early research efforts have been devoted to design the knowledge-based measures, in which word synonym set in thesaurus plays an elementary role to word similarity calculation. | the average correlation coefficient between this strategy and human judgments of Rubenstein-Goodenough's is 0.8499, which does not bring a better correlation than using S 5 alone. | neutral |
train_99413 | Note also that a substring which has not been treated before, e.g. | an English->Malay MT system with 100,000 translation examples annotated in the S-SSTC has been constructed based on the implementation frame as described above. | neutral |
train_99414 | The dobj link between "clean" and "surface" indicates that "surface" is a direct object of "clean", and we could rely on such dobj links to identify VN pairs in the corpus. | the 100 th most frequent English verb is "lack", and it appeared 47 times. | neutral |
train_99415 | When providing five recommendations, only about 88% of the time the recommendations of our system can include the correct translation. | we did not consider the cases that our systems could not answer in computing the statistics in Table 4. | neutral |
train_99416 | Methods that employed more information could choose translations more precisely, but were less likely to respond to test cases. | pr("增加" | "add") = 48/135=0.356 and pr("加上" | "add") = 2/135=0.015. | neutral |
train_99417 | Here, social-relationship is assumed to be represented by a combination of {relative social position among in the people in same group} and {in-group/out-group relationship} in our study. | this system uses the framework of the system proposed by Shirado, but our system is more practical than the previous one due to the following points: 1. | neutral |
train_99418 | The system uses judgment rules constructed from textbooks of Japanese honorifics. | so, more practical system to learn linguistic honorifics is needed. | neutral |
train_99419 | According to Lin and Peck, motion morphemes are first classified into scalar change (e.g., luo "fall", hui "return", jin "enter", equivalent to Talmy"s (1975) path verbs) and nonscalar change motion morphemes (e.g., gun "roll", equivalent to Talmy"s (1975) manner-of-motion verbs) depending on whether they lexicalize a scale or not. | a more refined explanation is necessary for the order of motion morphemes in Chinese. | neutral |
train_99420 | Thanks to subevent analysis it is possible to isolate, and thus to represent, the contribution of the verb. | the relevant distinction here is that between individual-level (IL) and stage-level (SL) adjectives. | neutral |
train_99421 | Hengeveld 1992: 32), verbs with predicative complement are part of the predicate, determining at least partly the content of the event and argument structure. | in this paper, i try to show the advantages of analyzing events in terms of subevent structure, by taking into consideration the case of a specific class of verbs: transitive and intransitive verbs which obligatorily require the presence of a predicative complement (e.g., English seem, consider). | neutral |
train_99422 | The CM-group participants did not make significant progress on any of the four categories, though they received higher scores averagely in the posttest. | regarding the sentences containing metaphoric/metonymic expressions, the CM group showed no significant progress whereas the MM group showed significant differences between two tests. | neutral |
train_99423 | We used as Optical Character Recognition (OCR) such as ABBYY FineReader software, which is able to recognize Malay texts. | we proposed a method for building rules and gazetteers for Iban language. | neutral |
train_99424 | We tested the accuracy of NERSIL by comparing its results against human annotated texts, which have been labeled by local Iban native speaker. | we built a JAPE rule that can recognize the pattern <Name><Father's affiliation><Father's name>. | neutral |
train_99425 | Our parser shows a competitive performance in rule-based parsing. | the scalability problem indicates the difficulty in incorporating other types of knowledge for use in parsing. | neutral |
train_99426 | Since the corpus is not exactly aligned, we aligned nearly 400,000 sentences across 11 languages properly. | in order to be able to apply the previous proposed method to various languages, we want to segment in a fully automatic and universal way sentences in different languages into sub-sentential units like chunks. | neutral |
train_99427 | 4.1.3 was less than or equal to 0.29, a very low figure. | one is that we used very large corpora covering a much wider range of areas so that we could extract a broader range of parallel translation expressions. | neutral |
train_99428 | In the first level of evaluation, we choose the emotion with the highest emotion value from the predicted emotion vector as the predicted document emotion. | under the relation assumption between document emotions and word emotions, we introduce term relevance and term frequency in the process. | neutral |
train_99429 | According to Goldberg, the semantic meaning of (5) can only be derived by taking into account of the entire construction. | while induct has at least five senses, the senses fall into three semantic domains, as designated in < >, namely, social, creation and communication. | neutral |
train_99430 | The organizations in training data are different with those in test data, which leads that we could not train a classifier to a certain organization. | the words in homepage are more related and indicative to the organization. | neutral |
train_99431 | By mid-80s, CSLI (Center for the Study of Language and Information) was founded by these philosophers and others in linguistics, psychology, and computer science at Stanford University and at the research centers surrounding the university, namely Xerox PARC and SRI International. | this picture seems to well represent the current situation of the world in which we live by exchanging information in the most efficient way with a tiny mobile gadget. | neutral |
train_99432 | In short, a nonthematic argument is not needed. | the grammatical space of GFs and the related voice types in Indonesian are not the same as in English. | neutral |
train_99433 | As the grammar becomes larger, the rules and related constraints become more complex. | the parallelism is captured by having the same feature attribute VOICE-TYPE, whereas typological variation is captured by allowing different languages having to have different voice values. | neutral |
train_99434 | Some of them, which appear in fixed forms, should be registered in a dictionary. | various parsers, based either on phrase structure grammars or on dependency structures, have been developed, applying various machine learning techniques on syntactically annotated corpora. | neutral |
train_99435 | For example, searching for keywords relating to earthquakes or influenza allows for impressive results to be achieved in earthquake detection or influenza outbreak analysis (Sakaki et al., 2010;Ritterman et al., 2009). | as such, we might conclude that social media data is a foe of NLP, in that it challenges traditional assumptions made in NLP research on the nature of the target text and the requirements for real-time responsiveness. | neutral |
train_99436 | Some of these underlying errors were ultimately domain-dependent and due to nonstandard language in our (spoken) corpus. | to measure frame coverage, we used BFN frame classes mapped from the assigned EngGram frame categories, checking if the frame in question was associated with a BFN sense for the verb in question. | neutral |
train_99437 | Table 4, hángmó bsài 'model airplane competition' first requires the creation of a model airplane (the agentive role), and then the function of different models (the telic role) is compared. | according to temporal properties, the partial orderings of qualia roles are: agentive < Formal, Constitutive o Formal, and , N 1 can involve in more than one event. | neutral |
train_99438 | In an [N 1 + bsài 'competition'] compound, N 1 specifies the subject of the competition. | when N 1 is an event nominal, pure selection is usually at work, while when N 1 is a pure event noun or an entity, type coercion happens. | neutral |
train_99439 | It remains unanswered and should be investigated in the future how effective this method is when the DA methods used changes or when the number of DA methods increases. | ), and CaboCha 2 as a syntactic parser. | neutral |
train_99440 | This fact makes it very difficult for WSD approaches to beat the corpus baseline." | twenty-eight word types were used in the experiments in total. | neutral |
train_99441 | We focused on the fact that these degrees of confidence are output from classifiers as the probability, and we can carry out ensemble learning by comparing them. | the method with more training data, i.e., Random Sampling, should be used for these instances. | neutral |
train_99442 | Green cows do not necessarily exist in the real world, but we can figure them out by drawing a picture. | the second one is to look at the representative cases in which SPS can be obviously vs. hardly captured, and to set up a working hypothesis building upon the findings. | neutral |
train_99443 | Unfortunately, large parallel corpora are not always available for some language pairs, or for some specific domains. | we employ the phrase-based Moses which uses different feature functions, such as direct phrase translation probability, inverse phrase translation probability, direct lexical weighting, inverse lexical weighting, phrase penalty, language model, distance penalty, word penalty, distortion weights et al. | neutral |
train_99444 | Hypothesis 3 can effectively exploit information redundancy and propagate the high-confidence results from posts with relatively simpler linguistic structures to those posts with more complicated structures. | for each unique target-target pair or unique target-issue pair in the training data, we count the frequency of the sentiment labels in the training data, f p for positive and f n for negative. | neutral |
train_99445 | (2010) identified the attitudes of participants toward one another in an online discussion forum using a signed network representation of participant interaction. | we have applied a state-of-the-art English entity extraction system (Li et al., 2012;Ji et al., 2005) that includes name tagging and coreference resolution to detect name variants from each document (e.g. | neutral |
train_99446 | The initial state of semantic tree growth is specified by the AXIOM, which sets out an initial node to be subsequently developed. | secondly, "a" is constructed so that it reflects the predicates in the whole proposition. | neutral |
train_99447 | The notion of keeping a low profile is revealed by the expression 知 白 守 黑 zhī-báishǒu-hēi, a line of classical drama in TM. | the color white on the other hand is located in the north-east which is the position of death in Chinese 風水 Fēng-Shuǐ. | neutral |
train_99448 | Speakers of Taiwanese Mandarin (TM), Taiwanese Hakka (TH) and Taiwanese Southern Min (TSM) also share some similarities in the usages of the color terms black and white. | black and white are universally perceptible to all mankind and are the only two colors at stage one in berlin and Kay's (1969) sequence of color evolution. | neutral |
train_99449 | Another approach (C. Banea.et.al, 2010) used multilingual space and meta classifiers to build high precision classifiers for subjectivity classification. | in future, we want to apply this approach on bigger datasets and also extend it to multiple class problems. | neutral |
train_99450 | Second, can language specific tools like POS taggers, Named Entity recognizers dependency can be minimized as they vary with language. | this would explain why SVM is outperformed for small training set sizes and for small feature spaces with large training sets. | neutral |
train_99451 | (5) a. Yumi 0 -ka 1 cip 2 -ey 3 ka 4 -n 5 il 6 -un 7 chamulo 8 yukamsulep 9 -ta 10 . | in order to annotate subjective expressions, all three attributes of the private state should be properly represented. | neutral |
train_99452 | As a fundamental resource for sentiment corpus construction in Korean, this work takes advantage of the Multiperspective Question Answering (MPQA) Opinion Corpus which began with the conceptual structure for private states in and developed manual annotation instructions. | the MPQA corpus distinguished different ways that private states may be expressed, i.e. | neutral |
train_99453 | (2010a)) was mainly due to the different nature of the chats as well as the higher number of participants. | after experimenting with various features, we found that contextual and high-frequency terms w.r.t. | neutral |
train_99454 | More recently, the SemEval-2010 keyword extraction task used 100 and 144 scientific articles with author and reader-assigned keywords for testing and training, respectively. | automatic topic detection would introduce errors and manual topic detection would involve high cost and time. | neutral |
train_99455 | Whereas the baseline (Section 4.1) misses this relation between 'Joseph' and 'Potiphar', coreference information would enable a link to be established between the two. | these networks present an overview of the "who", "what", and "where" in large text corpora, visualizing associations between people and places. | neutral |
train_99456 | A crucial difference between the mono-vs. non-mono-clausal approaches lies in the treatment of the RDed element. | (See J.-S. Lee 2007a,b, 2008a,b, 2010, Chung 2008a, Lee and Yoon 2009, C.-H. Lee 2009 Among various issues around the RDC are the basic word order in Koran and the grammatical relation the RDed element in the post-verbal position assumes with the rest of the construction. | neutral |
train_99457 | In the future, we will try to select more features in the adaptive process, and find their influences for the performance of adaptive classifier. | there are little information contain in one tweet. | neutral |
train_99458 | Sensei-ga boku-o home-ta teacher-NOM 1SG-ACC praise-PST 'The teacher praised me. ' | we will turn to this in the next subsection. | neutral |
train_99459 | * Taroo Moreover, the contrast in (8) below between the causative and inchoative forms may also help elucidate what the restriction is intended to capture: as (7) shows, while both the causative and inchoative alternants are fine in the active, they show a stark contrast when indirect passives are formed from them. | the scalar focus particle -mo 'even' is also added to the NQ and, as indicated, the lack of it slightly degrades the acceptability for some reason or other. | neutral |
train_99460 | To obtain this kind of ambiguity, there are two conditions to be met (Inoue 1976): (i) a verb must be such that it does not necessarily select an agent (e.g., causative/inchoative verbs); (ii) there must be a "proximate" relation (e.g., inalienable possession relation) between the subject and the object. | (Perlmutter and Postal, 1984) Second, these verbs are variable in syntactic behavior, depending on the syntactic context where they appear. | neutral |
train_99461 | Finally, when quantifying the noun, Sortal classifiers clas-sify the type of referent that is being counted, as in (1) and (2) * . | for example, in a sentence without a classifier, (N) is generated to indicate that there was no classifier. | neutral |
train_99462 | These rules addressed the indefinite determiner and numerative classifier phrase in addition to the usual numeral-classifier phrase. | with regards to cross-linguistic interests, the NTU multilingual Corpus (Tan and Bond, 2011) contains more corpora linked to other classifier languages such as Thai, Vietnamese, Indonesian and Korean. | neutral |
train_99463 | They are correlated with the difference not only in the GNC but also in the semantic or pragmatic interpretation. | we claim that sentences (27) and (28) above are the instances of such a construction. | neutral |
train_99464 | The reason why 'NtoP' After analyzing the major errors, we observed that sentiment detection regarding target product names at the instance level is crucial to the class PtoN. | the system assigns +1 to the clause/sentence in the 'Positive' class and -1 to the clause/sentence in the 'Negative' class with respect to a product instance. | neutral |
train_99465 | One might think the product name in (3) should be classified as the atemporal class because it refers to the brand and the model name of the given product. | the following case of errors as shown in (16) would be handled properly if the feature values for one product name are shared with its adjacent product name. | neutral |
train_99466 | Agreement between the noun and its coordinated dependents (when the noun is comple-mented by two coordinated adjectives, such adjectives should both agree in gender with the noun.) | by writing safe rules we have given priority to precision over recall. | neutral |
train_99467 | When dealing with agreement, teachers of Spanish as a foreign language and students usually focus on gender, considered the most difficult type of agreement to learn, probably because from the beginning, it requires much effort for the learner to know which is the inherent gender value of every noun than to choose the right number value depending on the context (although there are some morphological hints, gender is arbitrary and must be memorized). | second, agreement errors in texts can be identified and corrected straightforwardly by a native speaker, unlike other type of errors like article and preposition usage, for example, where annotator agreement may be problematic. | neutral |
train_99468 | This defect makes the comparison between candidates be "unfair" and thus less reliable. | in dependency parsing, graph-based models are prevalent for their state-of-the-art accuracy and efficiency, which are gained from their ability to combine exact inference and discriminative learning methods. | neutral |
train_99469 | So, when using parallel corpus composed of surface forms of words for training, SPE doesn't work well. | rule-based approach is better than statistical approach in the aspect of translation accuracy. | neutral |
train_99470 | The system explored two modifications to extract answer: baseline method (Baseline) using word tokenization and CRF method in the question analysis phase (KLB). | [1] Automatic question answeringthe ability of computers to answer simple or complex questions, posed in ordinary human languageis the most exciting. | neutral |
train_99471 | This result illustrated that our approach is completely reasonable. | this is not efficient to build a real system, thus we proposed building a two layer system (combine both of above strategy) and achieved result which illustrates that hybrid system is completely reasonable. | neutral |
train_99472 | These strategies are described in greater detail below, and summarized in table4 Baseline: this is a basic approach to compare with our proposed method which it only uses keywords taken from question to make query for Lucence. | the statistic relation impacts on the system precision and executed time is depended on network speed. | neutral |
train_99473 | (Hatori and Suzuki, 2011) applied the phrasebased SMT model to predict Japanese Pronunciation, however, the differences between our work and theirs lie in a visual aspects. | a Chinese character seldom has multiple pronunciations, but the same pronunciation may refer to quite a lot of Chinese characters, usually, dozens of characters. | neutral |
train_99474 | Finally, the example sentences were manually annotated emotion tags to construct a corpus. | wakamono Kotoba has various forms of expressions. | neutral |
train_99475 | In fact, it might decrease the accuracy. | the weights of other features, i.e. | neutral |
train_99476 | Significant improvements in Chinese word segmentation techniques have been obtained recently and reported accuracy rates (compared to those of human Golden Standard) have reached 98%. | we also merged the 10 phrase translation tables for each value of parameter i into one phrase translation table that we name i-merge. | neutral |
train_99477 | Otherwise, there is a need for a further way to rank these answer candidates to produce the most likely as the answer. | this is like a simple voting scheme, where each sub-question votes for an answer candidate, and the candidate with the most votes win. | neutral |
train_99478 | This is where the Switching Technique comes in. | then, in Section 4, combinations of the proposed thai sentence paraphrasing techniques used in some of the fourteen thai sentence paraphrase patterns are identified along with one particular combination explicitly illustrated in details. | neutral |
train_99479 | Then, the adverb is promoted to a new verb "สนุ กสนาน-vi/enjoy" while the old verb is demoted to a modifier for the new verb constituting the Promotion/Demotion Technique. | in this case, the Lexical Replacement pattern forces the process to specifically choose the trees not just only whose TLCSs are identical to the TLCS input but also whose syntactic structures are the same as that of the TLCS input tree. | neutral |
train_99480 | In order to annotate multiple referents, in the proposed scheme the chunk address/id of these multiple referents is specified in the 'ref ' attribute separated by a delimiter(comma). | the MAtE/GNOME project has another important scheme suitable for different types of dialogue annotations (Poesio and Artstein, 2008). | neutral |
train_99481 | In this paper we present a scheme for annotating anaphoric relations in the Hindi Dependency Tree-Bank. | it is computationally efficient to annotate the referent of NP6 as NP4 rather than NP1 since it is more nearer to NP6, hence reducing the search space. | neutral |
train_99482 | Reordering is of essential importance for phrase based statistical machine translation (SMT). | firstly, we aim to develop the phrase-based translation model to translate from English to Vietnamese. | neutral |
train_99483 | Agirre and Stevenson (2006) summarised from many WSD studies the different knowledge sources available or extracted from various lexical resources and corpora, and their realisation as different features in individual systems. | table 4: Partial confusion matrix for "degree" For the "degree" example, only Sense 1, 4 and 7 could be considered to have a reasonable number of training examples. | neutral |
train_99484 | However, it is the most frequent sense and might therefore have an advantage. | from the training instances, unigrams w -3 , w -2 , w -1 , w 1 , w 2 , and w 3 , bigrams w -3 w -2 , w -2 w -1 , w 1 w 2 , and w 2 w 3 , and trigrams w -3 w -2 w -1 , w -1 w 0 w 1 , and w 1 w 2 w 3 , were extracted as features. | neutral |
train_99485 | Yizi jiu zai ziji de pangbian ne. | a second later he realized he was watching himself through the surveillance camera system and it was his own pants that were on fire. | neutral |
train_99486 | For now, the explanation of the difference between the hierarchy concerning the availability of non-de se mode in Chinese and other language (e.g., Japanese) is still not very clear to us, and we thus leave it for future research. | we do not endorse Anand's (2006) claim on the de se and non-de se distinction of LD ziji, for we observe that LD ziji used in intensional contexts, especially in reported speech, is not obligatorily interpreted de se, either. | neutral |
train_99487 | Based on the 10-million-word Sinica Corpus, this work investigates the distribution of aspectual markers in SVCs in order to find whether there is a systematic preference for either V1 or V2 to be marked as the head. | this is found not true for Chinese (Li, 1991;Law, 1996;Matthews, 2006;Paul, 2008;among others). | neutral |
train_99488 | The phenomenon should rather be taken to be an epiphenomenon mirroring some interaction of the lexical semantics of dare-mo with its environment. | the accent on Greek emphatic n-word KANENA. | neutral |
train_99489 | Although human beings are normally assumed invariably to die sooner or later, it is easy to conceive worlds such that at least some people are immortal in them. | c. Hito-wa dare-mo yume yabure, furikaeru. | neutral |
train_99490 | We develop two models: (a) Language dependent Classifier which takes a feature vector that incorporates diagnostic scores (as shown in Table 6) and (b) Language Independent Classifier which takes feature vector with binary values as shown in Table 3. | in order to show that the tripartite classification model handles the distribution better than the bipartite model, we use a mathematical formulation which maximizes the inter-class scatter and minimizes the intra-class scatter. | neutral |
train_99491 | In Hindi as well, syntactic behavior of intransitive verbs, in many cases, depends on which subclass the verb belongs to. | the data is first partitioned into k equally (or nearly equally) sized segments or folds. | neutral |
train_99492 | This learned model can also be applied for the classification of new intransitive verbs. | unaccusatives do not allow ergative subjects (as in (2b)). | neutral |
train_99493 | We employed the phonemic approach; the Alphabet Queries were transformed into phonemes and then are transliterated. | the second is the case where non-Japanese people write English product names and this would be solved by translation. | neutral |
train_99494 | Therefore, the dictionary based approach is not so powerful for transliteration comparing with that for translation. | moreover, according to Table 3, the precisions when the CAF was used are higher than when the CF was used except the case when the CRF was used. | neutral |
train_99495 | We also noticed that the lower recall produced the lower F-score for those dialogue acts which are hard to detect. | 4 We ran 15-fold cross validation, using our 15 dialogues. | neutral |
train_99496 | With library forum chats, we found that adding keywords to contextual features improved performance over all three learners, since some terms occur only in specific dialogue acts. | hi, bye for GREETING, or excellent for EXPRESSION. | neutral |
train_99497 | This is based on the intuition that a given nominal will only co-occur with demonstratives that agree in gender. | the fact that we saw higher performance across the STEM data set than the INFL dataset was surprising, particularly for precision (χ 2 = 3.87, p < 0.05), somewhat less so for recall (χ 2 = 7.46, p < 0.01). | neutral |
train_99498 | Through this small experiment of this paper, we hope to provide a practical method for linguists to deal with data in a more skilled and dynamic way. | the prior polarity lexicons thus constructed are used as training data for cloud-based prediction model in extracting more polar words and determining the overall sentiment of Plurk texts. | neutral |
train_99499 | With those translation knowledge, we first cross-lingually align Japanese and Chinese news articles. | especially, as a topic model, this paper employs DTM, but not LDA, since it can consider correspondence between topics of consecutive dates. | neutral |
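Each row above uses a simple pipe-delimited layout with a trailing `|`. As a minimal sketch, the helper below (the name `parse_row` is illustrative, not part of any dataset tooling, and it assumes the sentence fields contain no literal pipe characters) splits one such row into the four columns:

```python
def parse_row(line: str) -> dict:
    """Split one pipe-delimited row into the four dataset columns."""
    # Rows end with a trailing "|", so drop the resulting empty cell.
    cells = [c.strip() for c in line.split("|") if c.strip()]
    ex_id, sentence1, sentence2, label = cells
    return {
        "id": ex_id,
        "sentence1": sentence1,
        "sentence2": sentence2,
        "label": label,
    }


row = "train_99400 | first premise text | second hypothesis text | neutral |"
print(parse_row(row)["label"])  # → neutral
```

This naive split is only safe because the table's field values never contain `|`; a real loader for the underlying dataset should use the published file format instead.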