id | sentence1 | sentence2 | label |
---|---|---|---|
train_800 | This would ordinarily call for a more conservative approach to avoid changes that might have unintended consequences. | evaluation methodologies must evolve to reflect the shifting interests of the research community to remain relevant. | contrasting |
train_801 | for question classification (Zhang and Lee, 2003), but these, too are inadequate when dealing with definitional answers expressed by long and articulated sentences or even paragraphs. | shallow semantic representations, bearing a more "compact" information, could prevent the sparseness of deep structural approaches and the weakness of BOW models. | contrasting |
train_802 | This reduces data sparseness with respect to a typical BOW representation. | sentences rarely contain a single predicate; it happens more generally that propositions contain one or more subordinate clauses. | contrasting |
train_803 | question classification, is evident. | the SSTK applied to one PAS extracted from a text fragment may not be meaningful since its representation needs to take into account all the PASs that it contains. | contrasting |
train_804 | Results of a perception study show that contextual cues are stronger predictors of discourse function than acoustic cues. | acoustic features capturing the pitch excursion at the right edge of okay feature prominently in disambiguation, whether other contextual cues are present or not. | contrasting |
train_805 | In our previous study (Reitter et al., 2006b), we found more syntactic priming in the task-oriented dialogues of the Map Task corpus than in the spontaneous conversation collected in the Switchboard corpus. | we compared priming effects across two datasets, where participants and conversation topics differed greatly. | contrasting |
train_806 | From the whole interaction (very rarely more than five turns), 87% accuracy can be achieved (36% of dialogues had been hand-labeled "problematic"). | the most predictive features, which related to automatic speech recognition errors, are neither available in the human-human dialogue we are concerned with, nor are they likely to be the cause of communication problems there. | contrasting |
train_807 | Apart from subject-related factors, communicative strategies will play a role. | linguistic repetition serves as a good predictor of how well interlocutors will complete their joint task. | contrasting |
train_808 | Of course, this correlation does not necessarily indicate a causal relationship. | participants in Map Task did not receive an explicit indication about whether they were on the "right track". | contrasting |
train_809 | Their experiments employed up to 20 annotators, and they allowed for the explicit annotation of ambiguity. | our annotators were instructed to choose the single most plausible interpretation in case of perceived ambiguity. | contrasting |
train_810 | Such a data set certainly contains some spurious or dubious links, while lacking some correct but more difficult ones. | we argue that it constitutes a plausible subset of anaphoric links that are useful to resolve. | contrasting |
train_811 | Note that the five-character window surrounding " " is the same in both cases, making the tagging decision for that character difficult given the local window. | the correct decision can be made by comparison of the two three-word windows containing this character. | contrasting |
train_812 | Collins (2002) proposed the perceptron as an alternative to the CRF method for HMM-style taggers. | our model does not map the segmentation problem to a tag sequence learning problem, but defines features on segmented sentences directly. | contrasting |
train_813 | Rather than a finite β with a symmetric Dirichlet distribution, in which draws tend to have balanced clusters, we now have an infinite β. | most draws will have weights which decay exponentially quickly in the prior (though not necessarily in the posterior). | contrasting |
train_814 | As can be seen in figure 2(b), this model does correct the systematic problem of pronouns being considered their own entities. | it still does not have a preference for associating pronominal references to entities which are in any way local. | contrasting |
train_815 | Since we do not have labeled cross-document coreference data, we cannot evaluate our system's crossdocument performance quantitatively. | in addition to observing the within-document gains from sharing shown in section 3, we can manually inspect the most frequently occurring entities in our corpora. | contrasting |
train_816 | With parallel corpora, the reason that research has been limited to only a few languages at a time -and usually just two at a time, as in the LSI work cited above -is more likely to be rooted in the widespread perception that good parallel corpora are difficult to obtain (see for example Asker 2004). | recent work (Resnik et al. | contrasting |
train_817 | One advantage of a 'massively parallel' multilingual corpus is perhaps self-evident: within the LSI framework, the more languages are mapped into the single conceptual space, the fewer restrictions there are on which languages documents can be selected from for cross-language retrieval. | several questions were raised for us as we contemplated the use of a massively parallel corpus. | contrasting |
train_818 | For some versions, a few of the verse translations are incomplete where a particular verse has been skipped in translation; this also explains the fact that the numbers of Hebrew and Greek text chunks together do not add up to 31,226. | the number of such verses is negligible in comparison to the total. | contrasting |
train_819 | Moreover, these are the three languages of the five which are written in the Roman alphabet. | we believe the explanation for the poorer results for language pairs involving either Arabic, Russian, or both, can be pinned down to something more specific. | contrasting |
train_820 | Recent automatic evaluation metrics typically frame the evaluation problem as a comparison task: how similar is the machine-produced output to a set of human-produced reference translations for the same source text? | as the notion of similarity is itself underspecified, several different families of metrics have been developed. | contrasting |
train_821 | (Claveau et al., 2003) for example use Inductive Logic Programming to learn if a given verb is a qualia element or not. | their approach does not go as far as learning the complete qualia structure for a lexical element as in our approach. | contrasting |
train_822 | Changes in the arguments or tense of the verb sometimes change the basic situation types: If SE type could be determined solely by the verb constellation, automatic classification of SEs would be a relatively straightforward task. | other parts of the clause often override the basic situation type, resulting in aspectual coercion and a derived situation type. | contrasting |
train_823 | Evert and Baroni (2006) attempt, for the first time, to evaluate LNRE models on unseen data. | rather than splitting the data into separate training and test sets, they evaluate the models in an extrapolation setting, where the parameters of the model are estimated on a subset of the data used for testing. | contrasting |
train_824 | The original RePortS algorithm assumes morphology to be concatenative, and specializes on prefixation and suffixation, like most of the above approaches, which were developed and implemented for English (Goldsmith, 2001; Schone and Jurafsky, 2000; Neuvel and Fulop, 2002; Yarowsky and Wicentowski, 2000; Gaussier, 1999). | many languages are morphologically more complex. | contrasting |
train_825 | The next best system obtained an F-score of 69%. | the algorithm does not perform as well on other languages (Turkish, Finnish, German) due to low recall (see (Keshava and Pitler, 2006) and (Demberg, 2006), p. 47). | contrasting |
train_826 | In his account, competition between mid-vowel diphthongization (e.g., s[e]ntir 'to feel', s[je]nto 'I feel') and non-diphthongization (e.g., p[e]dir 'to ask', p[i]do 'I ask') leads to paradigmatic gaps in lexemes for which the applicability of diphthongization has low reliability (e.g., abolir 'to abolish', *ab[we]lo, *ab[o]lo 'I abolish'). | this approach both overpredicts and underpredicts the existence of gaps crosslinguistically. | contrasting |
train_827 | First, it predicts that gaps should occur whenever the analogical forces determining word forms are contradictory and evenly weighted. | variation between two inflectional patterns seems to more commonly result from such a scenario. | contrasting |
train_828 | With no analogical pressure, gaps are robustly attested (τ = 6.32). | the new gaps are not restricted to the 1sg, and under this scenario, learners are unable to generalize to a novel pairing of lexeme + IPS. | contrasting |
train_829 | When there is weak analogical pressure, weighting for morphophonological similarity has little effect on the persistence and spread of gaps. | when there is relatively strong analogical pressure, morphophonological similarity helps atypical frequency distributions to persist, as shown in Figure 1. | contrasting |
train_830 | The major advantage the letter-based transducer presented in Section 3 has over the Viterbi substring decoder is its word unigram language model, which allows it to reproduce words seen in the training data with high accuracy. | the Viterbi substring decoder is able to encode contextual information in the transliteration model because of its ability to consider larger many-to-many mappings. | contrasting |
train_831 | It may seem surprising that later stages of a pipeline, already constrained to be consistent with the output of earlier stages, can profitably inform the earlier stages in a second pass. | the richer models used in later stages of a pipeline provide a better distribution over the subset of possible solutions produced by the early stages, effectively resolving some of the ambiguities that account for much of the original variation. | contrasting |
train_832 | The oracle rate decreases under all of the constrained conditions as compared to the baseline, demonstrating that the parser was prevented from finding some of the best solutions that were originally found. | the improvement in F-score shows that the constraints assisted the parser in achieving high-quality solutions despite this degraded oracle accuracy of the lists. | contrasting |
train_833 | First, although the English subjectivity lexicon contains inflected words, we must use the lemmatized form in order to be able to translate the entries using the bilingual dictionary. | words may lose their subjective meaning once lemmatized. | contrasting |
train_834 | Moreover, the lexicon sometimes includes identical entries expressed through different parts of speech, e.g., grudge has two separate entries, for its noun and verb roles, respectively. | the bilingual dictionary does not make this distinction, and therefore we have again to rely on the "most frequent" heuristic captured by the translation order in the bilingual dictionary. | contrasting |
train_835 | In this study we develop a learning model based around the concept of iteratively predicting labels for unlabelled training samples, the basic paradigm for both co-training and self-training. | we generalise by framing the task in terms of the acquisition of labelled training data, from which a supervised classifier can subsequently be learned. | contrasting |
train_836 | We expect the recognition and disambiguation of faces to improve if many image-text pairs that treat the same person are used. | our approach is also valuable when there are few image-text pairs that picture a certain person or object. | contrasting |
train_837 | In addition, head and hand/forearm movements are used to detect group-action based segments (McCowan et al., 2005; Al-Hames et al., 2005). | many other features that we expect to signal segment boundaries have not been studied systematically. | contrasting |
train_838 | In addition, many of the non-lexical feature classes, including those that have been identified as indicative of segment boundaries in previous work (e.g., prosody) and those that we hypothesized as good predictors of segment boundaries (e.g., motion, context), are not beneficial for recognizing boundaries when used in isolation. | these non-lexical features are useful when combined with lexical features, as the presence of the non-lexical features can balance the tendency of models trained with lexical cues alone to overpredict. | contrasting |
train_839 | If the co-reference system works perfectly, the system should find a social network involving four people: {John, Tim, Mary, Mark}, and the ties: motherOf(Mary, John), and brotherOf(Mark, Tim). | if the co-reference system mistakenly links "John" to "his" in the second clause and links "Tim" to "his" in the first clause, then we will still have a network with four people, but the ties will be: motherOf(Mary, Tim), and brotherOf(Mark, John), which are completely wrong. | contrasting |
train_840 | Indeed, annotation experiments using very fine-grained categories show low annotation reliability (Müller, 2006). | there is no debate over the importance nor the definition of distinguishing pronouns that refer to nouns from those that do not. | contrasting |
train_841 | As expected, the precision of the extracted attributes as an average over all classes is best when the input instance sets are hand-picked (M), as opposed to automatically extracted (E). | the loss of precision from M to E is small at all measured ranks. | contrasting |
train_842 | Compared to traditional IE, the recall of our Open IE system is admittedly lower. | in a targeted extraction scenario, Open IE can still be used to reduce the number of hand-labeled examples. | contrasting |
train_843 | Quirk and Corston-Oliver (2006) investigated the impact of parsing accuracy on statistical MT. | this work was only concerned with a single dependency parser, and did not focus on parsers based on different frameworks. | contrasting |
train_844 | Also, this framework allows for defining arbitrary similarity functions between two matching items, and we could match arbitrary concepts (such as dependency relations) gathered from a sentence pair. | most other metrics (notably BLEU) limit themselves to matching based only on the surface form of words. | contrasting |
train_845 | The 101 correct NO responses represent 12% of the 864 possible correct NO responses. | the systems responded correctly for 50% (2449/4920) of the cases when YES was the reference answer and for 61% (2345/3816) of the cases when UNKNOWN was the reference answer. | contrasting |
train_846 | As Figure 1 shows, when we increase the threshold, allowing more candidate phrase pairs to be hypothesized as valid translations, we observe that the phrase table size increases monotonically. | we notice that the translation performance improves gradually. | contrasting |
train_847 | Removing PP1 from the baseline phrase table (comparing the first group of scores) or adding PP1 to the new phrase table (the second group of scores) overall results in no or marginal performance change. | adding phrase pairs extracted by the new method only (PP3) can lead to significant BLEU score increases (comparing row 1 vs. 3, and row 2 vs. 4). | contrasting |
train_848 | A larger window size can lead to better results, as shown in Table 3 and Table 4, since more contextual knowledge is used to model measure word generation. | enlarging the window size does not bring significant improvements. The major reason is that even a small window size is already able to cover most measure word collocations, as indicated by the position distribution of head words in Table 1. | contrasting |
train_849 | We have shown that VB is both practical and effective for use in MT models. | our best system does not apply VB to a single probability model, as we found an appreciable benefit from bootstrapping each model from simpler models, much as the IBM word alignment models are usually trained in succession. | contrasting |
train_850 | Most statistical parsers achieve a high robustness with respect to out-of-grammar sentences by allowing for arbitrary derivations and rule expansions. | they are not suited to reliably decide on the grammaticality of a given phrase, as they do not accurately model the linguistic constraints inherent in natural language. | contrasting |
train_851 | (2005) pursued a similar approach. | their grammar-based language model did not make use of a probabilistic component, and it was applied to a rather simple recognition task (dictation texts for pupils read and recorded under good acoustic conditions, no out-of-vocabulary words). | contrasting |
train_852 | In particular it always has much lower insertion rates reflecting its superior ability to remove utterances that are not typically part of the report. | the probabilistic model suffers from a slightly higher deletion rate due to being overzealous in this regard. | contrasting |
train_853 | Ando and Lee's (2000) kanji segmenter.) | modelling only partial words helps the segmenter handle long, infrequent words. | contrasting |
train_854 | Datasets around 30K words are traditional for this task. | a child learner has access to much more data, e.g. | contrasting |
train_855 | Intuitively, compared with co-occurrence-based thesauri, hand-crafted thesauri, such as WordNet, could provide more reliable terms for query expansion. | previous studies failed to show any significant gain in retrieval performance when queries are expanded with terms selected from WordNet (Voorhees, 1994;Stairmand, 1997). | contrasting |
train_856 | Another drawback of stemming is that it usually enhances recall, but may hurt precision (Kraaij and Pohlmann, 1996). | general Web search is basically a precision-oriented task. | contrasting |
train_857 | 2) Only the head word in the noun phrase varies in form and needs to be expanded. | both assumptions may be questionable. | contrasting |
train_858 | If the expansion terms used are those that are variant forms of a word, then query expansion can produce the same effect as word stemming. | if we add all possible word alterations, query expansion/reformulation will run the risk of adding many unrelated terms to the original query, which may result in both heavy traffic and topic drift. | contrasting |
train_859 | In this paper, one of the proposed methods will also use a bigram language model of the query to determine the appropriate alteration candidates. | in our approach, alterations are not limited to head words. | contrasting |
train_860 | For simplicity, we use a linear regression model here. | we denote an instance in the feature space as X, and the weights of the features as w. Then the linear regression model is defined as w^T X (3), where w^T is the transpose of w. We will have a technical problem if we set the target value to the performance change directly: the range of w^T X is (−∞, +∞), while the range of performance change is [−1, 1]. | contrasting |
train_861 | When the query is very long, the first feature always obtains a value of log(0.5), so it does not have any discriminative ability. | the second feature helps because it can capture some co-occurrence information no matter how long the query is. | contrasting |
train_862 | For example, in Table 1, Q2 preserves certain useful information of Q1 in the aspects of both question topic (Berlin) and question focus (fun club) although it loses some useful information in question topic (Hamburg). | questions Q3-Q5 are not related to Q1 in question focus (although being related in question topic, e.g. | contrasting |
train_863 | Our approach follows Langkilde-Geary (2002) and Callaway (2003) in aiming to leverage the Penn Treebank to develop a broad-coverage surface realizer for English. | while these earlier, generation-only approaches made use of converters for transforming the outputs of Treebank parsers to inputs for realization, our approach instead employs a shared bidirectional grammar, so that the input to realization is guaranteed to be the same logical form constructed by the parser. | contrasting |
train_864 | Among syntax-based translation models, the tree-based approach, which takes as input a parse tree of the source sentence, is a promising direction being faster and simpler than its string-based counterpart. | current tree-based systems suffer from a major drawback: they only use the 1-best parse to direct the translation, which potentially introduces translation mistakes due to parsing errors. | contrasting |
train_865 | Compared with their string-based counterparts, tree-based systems offer some attractive features: they are much faster in decoding (linear time vs. cubic time, see ), do not require a binary-branching grammar as in string-based models (Zhang et al., 2006), and can have separate grammars for parsing and translation, say, a context-free grammar for the former and a tree substitution grammar for the latter. | despite these advantages, current tree-based systems suffer from a major drawback: they only use the 1-best parse tree to direct the translation, which potentially introduces translation mistakes due to parsing errors (Quirk and Corston-Oliver, 2006). | contrasting |
train_866 | Ideally, a model would account for this ambiguity by marginalising out the derivations, thus predicting the best translation rather than the best derivation. | doing so exactly is NP-complete. | contrasting |
train_867 | D: However, it is implemented in the Member States is still too slow. | t: the implementation measures in Member States remains too slow. | contrasting |
train_868 | We need to do inside-outside parsing as coarse-to-fine parsers do. | we use the outside probability or cost information differently. | contrasting |
train_869 | This figure indicates the advantage of the two-pass decoding strategy in producing translations with a high model score in less time. | model scores do not directly translate into BLEU scores. | contrasting |
train_870 | Simply by exploring more (200 times the log beam) after-goal items, we can optimize the Viterbi synchronous parse significantly, shown in Figure 3(left) in terms of model score versus search time. | the mismatch between model score and BLEU score persists. | contrasting |
train_871 | We show that the "dominance charts" proposed by Koller and Thater (2005b) can be naturally seen as regular tree grammars; using their algorithm, classical underspecified descriptions (dominance graphs) can be translated into RTGs that describe the same sets of readings. | rTGs are trivially expressively complete because every finite tree language is also regular. | contrasting |
train_872 | The problem of computing the best tree is NPcomplete (Sima'an, 1996). | if the weighted regular tree automaton corresponding to the wRTG is deterministic, every tree has only one derivation, and thus computing best trees becomes easy again. | contrasting |
train_873 | To improve results, some systems utilize additional manually constructed semantic resources such as WordNet (WN) (Beamer et al., 2007). | in many domains and languages such resources are not available. | contrasting |
train_874 | Many other works manually develop a set of heuristic features devised with some specific relationship in mind, like a WordNet-based meronymy feature (Bedmar et al., 2007) or size-of feature (Aramaki et al., 2006). | the most prominent feature type is based on lexico-syntactic patterns in which the related words co-appear. | contrasting |
train_875 | Either there is a failure to distinguish between these two structures, because the network fails to keep track of the fact that John is subject in one and object in the other, or there is a failure to recognize that both structures involve the same participants, because John as a subject has a distinct representation from John as an object. | symbolic representations can naturally handle the binding of constituents to their roles, in a systematic manner that avoids both these problems. | contrasting |
train_876 | Vector addition does not increase the dimensionality of the resulting vector. | since it is order independent, it cannot capture meaning differences that are modulated by differences in syntactic structure. | contrasting |
train_877 | Another simplification concerns K which can be ignored so as to explore what can be achieved in the absence of additional knowledge. | this reduces the class of models to p = f(u, v). This still leaves the particular form of the function f unspecified. | contrasting |
train_878 | This reduces the class of models to p = f(u, v). However, this still leaves the particular form of the function f unspecified. | now, if we assume that p lies in the same space as u and v, avoiding the issues of dimensionality associated with tensor products, and that f is a linear function, for simplicity, of the cartesian product of u and v, then we generate a class of additive models, p = Au + Bv, where A and B are matrices which determine the contributions made by u and v to the product p. if we assume that f is a linear function of the tensor product of u and v, then we obtain multiplicative models, p = Cuv, where C is a tensor of rank 3, which projects the tensor product of u and v onto the space of p. Further constraints can be introduced to reduce the free parameters in these models. | contrasting |
train_879 | The combined model is best overall with ρ = 0.19. | the difference between the two models is not statistically significant. | contrasting |
train_880 | Only the leaf nodes of the prior's feature tree are considered, and, if no match can be found between the tuning and prior's training datasets' features, a N(0, 1) prior is used instead. | in the new approximate hierarchical model, even if a certain feature in the tuning dataset does not have an analog in the training dataset, we can always back-off until an appropriate match is found, even to the level of the root. | contrasting |
train_881 | These methods extract lists of phrases, which are analogous to the keyphrases we use as input to our algorithm. | our approach is distinguished in two ways: first, we are able to predict keyphrases beyond those that appear verbatim in the text. | contrasting |
train_882 | This is a reasonable assumption in the Corel dataset, where the annotations have similar lengths and the words reflect the salience of objects in the image (the multinomial model tends to favor words that appear multiple times in the annotation). | in our dataset the annotations have varying lengths, and do not necessarily reflect object salience. | contrasting |
train_883 | To give a concrete example, let us assume that for a given image our model has produced five annotations, w1, w2, w3, w4, and w5. | according to the LDA model neither w2 nor w5 are likely topic indicators. | contrasting |
train_884 | Naturally, parse errors result in (slightly) mislocated scopes but we had the general impression that state-of-the-art parsers could be used efficiently for this issue. | this approach requires a human expert to define the scope for each keyword separately using the predicate-argument relations, or to determine keywords that act similarly and their scope can be located with the same rules. | contrasting |
train_885 | Space constraints do not allow us to present either the derivation or a detailed description of the sampling algorithm. | note that the conditional distribution used in sampling decomposes into two parts: where v', r' and z' are vectors of assignments of sliding windows, context (global or local) and topics for all the words in the collection except for the considered word at position i in document d; y is the vector of sentiment ratings. | contrasting |
train_886 | finer-grained semantics are needed for the objects than subjects of verbs. | the parsing strategy is very simple, as we just substitute words by their semantic class and then train statistical parsers on the transformed input. | contrasting |
train_887 | While it could be handled with extra categories, such as (s/(vp/np))/(s/np) for what, this is exactly the sort of strong-arm tactic that inclusion of the standard B, T, and S rules is meant to avoid. | the standard CCG analysis for English auxiliary verbs is the type exemplified in (16) (Steedman, 2000, 68), interpreted as a unary operator over sentence meanings (Gamut, 1991;Kratzer, 1991): this type is empirically underdetermined, given a widely-noted set of generalizations suggesting that auxiliaries and raising verbs take no subject argument at all (Jacobson, 1990, a.o.). | contrasting |
train_888 | Thus, there is neither syntactic nor semantic evidence that hacer takes an object argument. | on this basis, we assign hacer the category (23): (23) hacer (s\np)/s : λP λx.cause P x Spanish has examples of cross-conjunct extraction in which hacer hosts clitics: This shows another instance of the schema in (1), which is undefined for any of the combinators in 3 The preceding data motivates adding D rules (we return to the distribution of the modalities below): To illustrate with example (10), one application of >D allows you and can to combine when the auxiliary is given the principled type assignment s/s, and another combines what with the result. | contrasting |
train_889 | The flat structure described by the Penn Treebank can be seen in this example: (NP (NN lung) (NN cancer) (NNS deaths)) CCGbank (Hockenmaier and Steedman, 2007) is the primary English corpus for Combinatory Categorial Grammar (CCG) (Steedman, 2000) and was created by a semi-automatic conversion from the Penn Treebank. | ccG is a binary branching grammar, and as such, cannot leave NP structure underspecified. | contrasting |
train_890 | Because this AND-OR tree represents only two different parses, the original parse and the depassivized version, only one OR node in the tree has more than one child -the root node, which has two choices, one for each parse. | the AND nodes immediately above "I" and "chance" each have more than one OR-node parent, since they are shared by the original and depassivized parses. | contrasting |
train_891 | Lexical chain is only based on similarities between lexical items in contiguous sentences. | in our approach, the linkage is based on the existing conversation structure. | contrasting |
train_892 | Figure 1(b) shows the fragment quotation graph of the conversation shown in Figure 1(a) with all the redundant edges removed. | if threading is done at the coarse granularity of entire emails, as adopted in many studies, the threading would be a simple chain from E6 to E5, E5 to E4 and so on. | contrasting |
train_893 | With a strict notion of equivalence, there are no comparable rules. | the class S → PP NP ADVP VP, with 198 members, is highly similar, indicating more confidence in this correct rule. | contrasting |
train_894 | On the surface, our model may seem as a special case of Cohen and Smith in which α = 0. | there is a crucial difference: the morphological probabilities in their model come from discriminative models based on linear context. | contrasting |
train_895 | In the reference transcriptions and alignments used for scoring ASR systems, contractions are treated as two separate words. | aside from speech rate, our prosodic features were collected using word-by-word timestamps from a forced alignment that used a transcription where contractions are treated as single words. | contrasting |
train_896 | Based on the original annotations, all human name translations were much better than our SMT system. | based on our re-annotation, the results are quite different: our system has a higher NEWA score and better name translations than 3 out of 4 human annotators. | contrasting |
train_897 | There is no principled reason why this could not be done, i.e., why one could not design an HDP framework that simultaneously learns both the fragments (as in an adaptor grammar) and the states (as in an iHMM or iPCFG). | inference with these more complex models will probably itself become more complex. | contrasting |
train_898 | In the context of tagging, there are several studies that utilized word clusters to prevent the data sparseness problem (Kazama et al., 2001;Miller et al., 2004). | these methods cannot produce the MN clusters required for constructing gazetteers. | contrasting |
train_899 | (1999) and Torisawa (2001) showed that the EM-based clustering using verb-MN dependencies can produce semantically clean MN clusters. | the clustering algorithms, especially the EM-based algorithms, are computationally expensive. | contrasting |
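Each row above follows a four-field pipe-delimited layout: `id | sentence1 | sentence2 | label |`. Below is a minimal sketch of parsing such a row into a record, assuming (as in all rows shown) that the sentences themselves contain no literal `|` characters; the sample row text is abbreviated for illustration.

```python
def parse_row(line: str) -> dict:
    """Split one pipe-delimited table row into its four fields.

    Assumes sentence fields contain no literal '|' characters,
    which holds for the rows in this table.
    """
    # The trailing '|' would yield an empty final cell, so strip it
    # (along with surrounding whitespace) before splitting.
    cells = [c.strip() for c in line.strip().strip("|").split("|")]
    if len(cells) != 4:
        raise ValueError(f"expected 4 fields, got {len(cells)}: {line!r}")
    ex_id, sent1, sent2, label = cells
    return {"id": ex_id, "sentence1": sent1, "sentence2": sent2, "label": label}


# Abbreviated sample row in the table's format.
row = ("train_800 | This would ordinarily call for a more conservative approach. | "
       "evaluation methodologies must evolve. | contrasting |")
record = parse_row(row)
```

The same function can be mapped over every data line (skipping the header and `---|` separator rows) to rebuild the full split, e.g. for filtering by label.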