id (stringlengths 7–12) | sentence1 (stringlengths 6–1.27k) | sentence2 (stringlengths 6–926) | label (stringclasses, 4 values)
---|---|---|---|
train_99200 | The sentence can be corrected as proposed. | in (Albert et al., 2009b), we present a system for classifying errors on the basis of syntactic criteria, i.e. | neutral |
train_99201 | The project emerges from the simple observation that these writers often encounter lexical, grammatical, and stylistic difficulties which might hinder the comprehension of their message, as well as undermine their credibility and professionalism (Ellis, 1994). | with text editors, but in the spirit of tutoring systems, we want to leave Copyright 2009 by Marie Garnier, Arnaud Rykner, and Patrick Saint-Dizier decisions as to the proper corrections up to the writer, providing him/her with arguments for and against a given correction. | neutral |
train_99202 | Moreover, our data format and data size are different from earlier research. | our entire 1324-sentence corpus consists of 1152 (87%) active sentences and 172 (13%) passive. | neutral |
train_99203 | µ T i µ i = 1). | in practice, we can determine the best number of topics by drawing such a figure. | neutral |
train_99204 | As a regression model, the discriminative information captured in the model can help find a topic space in which class labels can be generated with minimal errors. | distances among documents in the same category (intra-class variance) should be small while distances among documents in different categories (inter-class distance) should be large, and large-margin rules. | neutral |
train_99205 | LogisticLDA is a supervised topic model for multi-class classification problems by extending LDA model. | we adopt an easier to compute algorithm proposed by T.P. | neutral |
train_99206 | From the distance distribution statistics we find that the average distance of head-modifying words to the phrase boundary is only 0.75 when including phrase-internal relations, indicating that modifiers of the phrase are usually not far away. | in Section 6 we describe the features used in the experiments, and the pre-processing required. | neutral |
train_99207 | Note that words may be repeated in a review article. | we attempt to distinguish sentences with these characteristics from those without. | neutral |
train_99208 | Interestingly, the same thing happens with female corpus. | dubois and Crouch (1975:289), for example, criticized that her investigative method is introspective, asystematic, uncontrolled, and unverifiable observation. | neutral |
train_99209 | Xiao and McEnery (2005:70) compared two reference corpora, the 100-million-word British National Corpus and the one million-word Freiburg-LOB Corpus, and achieved almost identical key word lists, thus concluding that the size of the reference corpus is not very important in making a key word list. | the results become a lot poorer as we go further down from Line d. Tables 5 and 6 show that in both cases the reference of similar size to the target corpus gives the best, or one of the best results, especially when the reference corpus is smaller than the target. | neutral |
train_99210 | We will first consider the dispersion values of the female key words, focusing on those that appear in Lines b.~d. | it is necessary to measure how evenly the key words are presented in the list, namely, the dispersion. | neutral |
train_99211 | We have also proved that the properties discovered from large n-grams are not in themselves sufficient: we must acquire 'clean' knowledge by effective confidence estimation and parameter tuning. | due to the noise produced from n-grams and the limited use of specific contexts, our method had more loss in precision. | neutral |
train_99212 | Thanks to the global search over the whole dependency tree the graph-based models realized by MSTParser gain the best performance among the competitors on the English dataset. | the results on the development set indicated that the k-best (k>1) models did not surpass the 1-best one remarkably. | neutral |
train_99213 | The method using uniqueness could not improve the ranking of sentences in the topic of metabolic syndrome, because expressions in an argument point of the topic are not diverse. | similarly, statements that are known as reasons support a user's judgment on the credibility of other statements corresponding to the conclusion. | neutral |
train_99214 | In general, there are roughly two types of techniques for automatic summarization: sentence extraction and sentence compression. | as we assume that a noun is an argument point, the uniqueness of sentence 4. | neutral |
train_99215 | # ¡ % $ ' & ) ( 0 2 1 4 3 5 1 6 ! | we regard reasons and conclusions as hubs and authorities, respectively, and apply the iterative calculation between hubs and authorities to ones between reasons and conclusions. | neutral |
train_99216 | The numbers in the topic "LASIK operation is painful" are 535 and 178 respectively. | the retrieved documents are segmented into passages, and some passages including various argument points are selected and inputted to the STATEMENT MAP generator. | neutral |
train_99217 | Using semantic relations in the STATEMENT MAP, the system searches for passages representing situations that opposing statements can coexist according to the contrast structure between statements. | we assume that a user enters a statement, and he/she would like to verify its credibility, such as a query in information retrieval. | neutral |
train_99218 | The sentences with Less Logophoric antecedents obtained an even lower score (mean = 3.26). | antecedents of exempt anaphors are optimal if they can be associated with a logophoric role. | neutral |
train_99219 | Anaphors like Icelandic LDA in subjunctive complements, LD-bound Chinese 'ziji' (according to Liu 2001, though not Pollard andXue 2001) and English exempt anaphors fall into this category and show the following properties: 1) The LDA need not have c-commanding antecedent; 2) The LDAs have an antecedent outside the sentence; 3) Both sloppy and strict readings are allowed in VP ellipsis when the anaphor is LD-bound. | are genuine LDas also sensitive to logophoric factors when they are bound by LD antecedents? | neutral |
train_99220 | If so, i) 'caki' will not require c-commanding antecedents and allow discourse antecedents; ii) 'caki' will not show a preference for either the strict or the sloppy reading in VP-ellipsis contexts; iii) 'caki' may not be sensitive to logophoricity. | unlike Type II anaphors, Type III anaphors behave like a core anaphor in a local domain. | neutral |
train_99221 | In is-a relations, Case 1 could not adequately extract single patterns satisfying the seed term pairs from the mass corpus. | the reliable term pairs are selected by the reliabilities of term pairs, and these term pairs are used to extract new patterns. | neutral |
train_99222 | Suppose that we have the prenex normal form (6a) and (6b) from interpretation (2a) and (2b), respectively. | among them, we will explore the use of epsilon terms as a syntactic counterpart of choice function, as proposed by Meyer Viol et al. | neutral |
train_99223 | (2005), we assume that final scope determination requires two elements: formulae with epsilon terms like (9) and scope statements in the form of Scope(S<x<y), where S is interpreted as some temporal index. | (Abusch, 1993) Sentence (3) allows the intermediate scope reading, meaning that, for every professor x, there is a certain book y such that for every student z, x rewarded z who read y on x's reading list, in which some book takes scope over the preceding universally quantified expression every student in the matrix clause but still co-varies along with the choice of professor. | neutral |
train_99224 | At rank 2, two superlexical patterns, [ , Bill, a letter] and [ , sent, , a letter], are relatively productive. | let me explain this using a concrete case. | neutral |
train_99225 | Admittedly, such a view was not traditional either in linguistics or in related fields in cognitive science. | clearly, it needs to be examined. | neutral |
train_99226 | Recall that a pattern at rank k is the defined unification of patterns at rank k − 1. | the grammar-based models of language are free from such problems. | neutral |
train_99227 | 4 Notable exception is the phenetics-oriented phonology advocated by J. Pierrehumbert and her colleagues. | 4 the situation is changing. | neutral |
train_99228 | In addition, the causes of both jingya and zhenjing tend to involve non-motion events, and the experiencers do not usually participate in the cause events. | identifying the experiencer and the causes of an emotion is very challenging in NLP. | neutral |
train_99229 | Though this present study has provided baseline information for researchers and language instructors with the move sequences and linguistic strategies to achieve discourse function by advice givers, there needs further exploration for comprehensive investigation. | the written discourse has been less discussed. | neutral |
train_99230 | Distances between video clips is computed in a similarly than in classic DSM approaches. | this paper does not address on the later aspect of this work. | neutral |
train_99231 | In the NGT model the search process can be modified with a scale factor that affects the frequency of outputs, or even the DA language model (similar to the scale factor of the HMM-based model). | with this enhancement, the NGT model keeps information on the DA history and is competitive with respect to the HMM-based model, as the results in Section 4 will show. | neutral |
train_99232 | In case there is an available segmentation, the maximisation step is overridden and the values r and s r 1 are fixed to that provided by the segmentation. | dA sequences until turn t only affect the first t turns in the dialogue (i.e., Pr(W the terms in the product in Equation 2 are rewritten as: This model in Equation 3 can be simplified with some assumptions: the current dA depends only on the previous n − 1 dA and the sequence of words of the current segment depends only on the current dA. | neutral |
train_99233 | The corpus was divided into 5 partitions to carry out a cross-validation approach. | the probability for the child node associated to e i = w i @u j is given by No change in the computation of the probability of the child node is produced when the output is empty (i.e., p(,@b|Yes)=0.7 p=0 p=0.0392 p=0.0392 p=0.0392 p=0.0392 . | neutral |
train_99234 | The recent work with Timebank has disclosed that six-class classification of temporal relations is a very complicated task, even for human annotators. | pT is a portion of parse tree that is enclosed by the shortest path between two event arguments. | neutral |
train_99235 | On Timebank, AD polynomial composite kernel achieved the best result (i.e., over 6.2% improvement). | in some studies, several heuristics have been employed to resolve the low precision problem (Chklovski and Pantel, 2005;Torisawa, 2006). | neutral |
train_99236 | Finally, we integrate the translation f into TL's of Synset and Hypernyms (Step (4) and (5)). | an interesting approach presented by Mihalcea (2005) describes how to apply a graph-based algorithm (i.e., random walk algorithm) and WordNet semantic relations to solve all-word WSD task. | neutral |
train_99237 | Its common Chinese translation is "工廠". | models compared are described as follows: Baseline: For any given translation pair, the most frequent sense is returned. | neutral |
train_99238 | For example, other potential features can be integrated into the classification framework, such as the translations of the glosses or the definitions of the word senses. | the association between a direct hyponym (an outcome) of a synset and the translation f is governed by the conditional probability as ( procedure PropagateTranslation(e, Sense, f) (1) Synset = GetSynset(e, Sense) (2) Cnt = GetTagCount(e, Sense) where outcomes is a set of all direct hyponyms of the synset, feature i is a binary-valued function, and λ i is the weight of the feature function feature i . | neutral |
train_99239 | Selective binding works for some of the prenominal possessive modification in Japanese when NP 1 -no phrases modify one of the qualia of NP 2 , that is, selectively bind an event contained in the quale. | if the following noun is a non-relational common noun (CN) such as car, John's composes with car which is a regular (e, t) type predicate, namely, a function from individuals to truth-values (Montague, 1973), and the relation between John and car is contextually supplied (16a). | neutral |
train_99240 | On the other hand, husinsya da 'is a suspect' in (17) concerns the speaker's judgment. | as a categorical judgment, this draws attention first to the neko 'cat', and then says of the neko that it is sleeping there. | neutral |
train_99241 | I find it a defect of Kuno's analysis, as it seems quite unlikely for a language to have such a 'design flaw' in its expressive capacity. | 4 a major portion of existing analyses of wa amount to the following, two competing hypotheses. | neutral |
train_99242 | Pattern I: CANDIDATE INSTANCE (divided into) Pattern II: CANDIDATE INSTANCE (include) After extracting the hyponyms of a candidate instance using these two patterns, if many of these hyponyms exist in the goal semantic class's candidate instance set, this candidate instance perhaps is a collective instance. | the same candidate instance perhaps simultaneously exists in candidate set of the goal semantic class and that of an interference class. | neutral |
train_99243 | Pseudogapping shares its characteristics with gapping and vP ellipsis. | dOM list was devised to allow elements in sentences to change their positions. | neutral |
train_99244 | However, pseudogapping also occurs in subordination or comparative structures, as in (2) and (3). | according to Takahashi (2003), both (12a) and (12b) are grammatical. | neutral |
train_99245 | And the latter has its own MODE value, INDEX value, and non-empty RESTR. | roughly, the definition of pseudogapping is generally assumed to be the deletion of vP except an auxiliary verb and a argument or arguments. | neutral |
train_99246 | With RSS, updated episodes are automatically downloaded from the web and can be stored in any type of player. | with this function, as well as listening to a podcast the user can also view the text of the podcast. | neutral |
train_99247 | In western languages, acquisition of nouns precedes acquisition of verbs. | this is where endo-system view is called for. | neutral |
train_99248 | (9) Clitic-initial Phrases as (Wrong) Headwords: Nominal Entries ka-kyeyyak ('a provisional agreement'), kwu-ceyto ('the old system'), ki-sip (man wen) ('hundreds of thousands won'), nal-sayngsen ('raw fish'), no-kyoswu ('an old professor'), ta-mokcek ('multi-purpose'), pan-man (nyen) ('five-thousand years'), pemsahoycek ('pan-national'), pi-sayngsancek ('unproductive'), swu-chen (kay) ('several thousand pieces'), … The clitic words in (9) come before nouns and modify these nouns and, hence, can be categorized as adnominals. | not all sense units show the same degree of autonomy (Croft & Cruse 2004). | neutral |
train_99249 | Then, we encoded each sense of the terms into the n-tuple format. | (6)) is registered as a headword, we have to register kakong and hata as well. | neutral |
train_99250 | Since clitics are not phonologically independent, we can easily distinguish them from regular independent words. | the distinction is not always clear because the etymologically related meanings of a word can, over time, become so different that the original semantic relation can be obscured. | neutral |
train_99251 | Among the headwords consisting of an independent word and a clitic word, some contain the clitic in the final position. | many scholars assume that they have to be concepts rather than senses. | neutral |
train_99252 | The following section explains the grammatical functions of PL. | personal pronouns are generally closed-class and are unaffected by borrowing or code-switching. | neutral |
train_99253 | Rik Fahimer kache Gargir biruddhe kOmplein koreche (Rik-Fahim-Gen-near-Gargi Gen-against-complain-did) 'Rik has complained to Fahim against Gargi.' | not all Benglish verbs (e.g. | neutral |
train_99254 | During SVM-based training phase, the current token word with three previous and three next words and their corresponding POS along with negation or intensifier were selected as context feature for that word. | the parsed results are used in the baseline and supervised systems. | neutral |
train_99255 | The results on the development set are shown using confusion matrix in Table 7. | given a POS tagset POS, we generate new emotion classes, 'emo_ntrl-C'|C Є POS. | neutral |
train_99256 | Section 5 describes different features that include contextual information of the words, several word-level orthographic features, semantic feature and various features extracted from the gazetteers. | experimental results also justify our assumption that MOO can perform superior to the single objective approach for voting combination selection. | neutral |
train_99257 | In Kibrik 2010 is adopted a multi-factorial approach allowing to describe the integration of activation factors (that is called activation score, AS) in each moment of discourse stream. | c) RhD=3 Кто почувствовал, что засыпает? | neutral |
train_99258 | Какой номер был у бригады скорой? | we claim that the differences can be explained by the specifics of Russian material. | neutral |
train_99259 | This means that an EPP feature must be present in every phase head that intervenes between the base and scope positions of a wh-phrase in constructions such as (2-3a). | (b) Search for any subnumerations contained within the selected subnumeration. | neutral |
train_99260 | The attachment of -er transforms hang into an agent noun. | a chain can receive some sort of justification if its elision can render a remnant viable, as in an answer fragment. | neutral |
train_99261 | When GA converges to a local optimum, i.e., when f max − f decreases, µ c and µ m both will be increased. | the µ c and µ m will get lower values for high fitness solutions and get higher values for low fitness solutions. | neutral |
train_99262 | Baseline 4: C[−3, +3], P re 4 , Suf 4 , and the features (iv)-(ix). | on termination, this location contains the best feature combination. | neutral |
train_99263 | An example of such class is oodiikanaa (to wait). | further most of the attention is given to Punjabi which is written and spoken in India. | neutral |
train_99264 | Since the positive sentiment of 좋 coh is not in fact an actual thing but rather the author's hope, the value of 좋 coh is weakened to +0.5 from +1. | one typical example is a contextual intensifier. | neutral |
train_99265 | As noted earlier, these phrasal determinants are quite flexible in their distributional possibilities. | further examples clearly show this fact: i chayk-i [ku etten chayk-pota] cwungyo-hata this book-NOM the which book-than importance-do 'This book is more important than any book.' | neutral |
train_99266 | It originates from [(S radical)-ta-ko ha-n-ta '-DEC-COMP say-PRES-DEC. | in this sense, the surprise marker -ney seems to be closer to evidential and the declarative ending -ta also may be considered in this respect, but not the assertive -e. | neutral |
train_99267 | She offers a modal analysis of this typologically interesting evidential. | the following kind of S-initial reflexive anaphor caki is natural with an abstract higher semantic antecedent. | neutral |
train_99268 | That is, core contains both onset and nucleus, and syllable contains only both core and coda. | this paper also employed the type nf to solve the ambiguity problems. | neutral |
train_99269 | In this section, some actual implementation examples are shown. | the parser correctly selects the left syllable structure for the word 'nala'. | neutral |
train_99270 | Bird and Klein (1994) was virtually the first paper which tried to implement phonological analyses in typed feature systems. | the syllable structure for this word is as shown in Figure 12. | neutral |
train_99271 | Replacing focus and informational focus differ from selecting focus in that in neither case, the focused argument is previously mentioned or assumed as a possible value of the variable in the presupposed open proposition. | (in press) and found that speakers are less likely to mention the complementizer that when the complement clause is predictable. | neutral |
train_99272 | this computer-Acc you-Nom fix-Pst-Int 'Did you fix this computer?' | unlike in the subexperiment on direct object, the main effect of the factor focus type did not reach significance in the subexperiment on subject (F1(2, 177) = 6.22, p = .538; F2(2, 57) = .733, p = .485). | neutral |
train_99273 | We also evaluated the annotation quality of the corrected corpora, against ground-truth (OntoNotes does not include event answer-keys so we only focused on the ACE and carbon sequestration corpora). | for each novel event type, we identify relevant and salient sentences involving new event triggers, and then correct errors based on uncertainty estimation, and finally map semantic roles into event argument roles based on semantic frame descriptions. | neutral |
train_99274 | Measures of lexical association examine first-order similarity between features (Grefenstette, 1994). | ruch (2006) developed a system to assign headings from the MeSH and the Gene Ontology. | neutral |
train_99275 | For example, with respect to the utterance "Please pick it up" in Figure 1 (b), the system identifies that the word "it" in the utterance is the phrase "remote controller" which was recognized by the LVCSR in the previous utterance. | in the alignment process of word pairs, we select pairs which have the minimum value of the edit distance. | neutral |
train_99276 | The most critical problem of the anaphora resolution in speech understanding is insertion and deletion errors in dialogue logs, namely the existence of noise words and the lack of the antecedent. | extracting keywords and understanding an utterance using them reduce speech recognition errors (Bouwman et al., 1999;Komatani and Kawahara, 2000). | neutral |
train_99277 | (2004) have obtained high accuracy by using some speech recognizers' outputs. | we used Julius as the LVCSR and Julian as the DSSR (Lee et al., 2001). | neutral |
train_99278 | In this scoring, we set a small value to « as compared with ¬. | it needs to handle spontaneous speech utterances. | neutral |
train_99279 | The data in this study is drawn from Tamil, German and English. | shā in Chinese falls in the implied-result verb category because it accepts both patterns (A) and (B). | neutral |
train_99280 | Many sentiment words are not included in the current Chinese sentiment dictionaries. | we can see that the agreement between annotators is not high, this is because of some words are really ambiguous whether to express sentiment. | neutral |
train_99281 | Typical existing algorithms for sentiment dictionary extension use patterns to construct a graph that reflects the relation between words, clustering algorithm is then applied on this graph to get positive and negative clusters. | the existing dictionaries in Chinese are insufficient, for example, the intersection rate of two popular Chinese sentiment dictionaries HowNet and NTUSD is less than 10%. | neutral |
train_99282 | Thus, in these cases, no agent is explicit on the surface. | here, we note that the causing sub-event temporally precedes the result sub-event. | neutral |
train_99283 | Regarding examples (3) and (4), "V-te-a-ru" also expresses an eventuality to be aspectually atelic. | parsons (1990) later presented an event semantics in which English sentences were analyzed as containing existential quantification over eventualities; agent and theme arguments were also introduced by independent predicates. | neutral |
train_99284 | Even if the "V-te-a-ru" construction indicates some implicit agents, the agent cannot be overtly expressed as a subject, which takes the nominative case-marker "ga", i.e., the agent is excluded from subject positions as shown in examples (8a-c) 4 / 5 . | this kind of analysis might rest on the assumption that these sentences contain some hidden agents that are revealed in combination with the intentional active verbs. | neutral |
train_99285 | (10) a. Boku -wa zyuken-benkyoo-ga/wo zyuubun-ni si-te-a-ru. | "V-te-a-ru" constructions with a 'preparative' reading allow EXP-agent alternation: At the semantic level, e.g., in example (9c), experiencer [x], which can be realized as the topic, is an agent [y] in the causing sub-event [e 1 ], and the agentive process of sleeping causes the experiencer to have a physical state [e 2 ]. | neutral |
train_99286 | (iii)*Kare-no-kotoba-wo sinzi-te-a-ru. | (17) basically refers to an event template that is similar to (16) with the exception of the EXP-agent alternation, which is crucial to reconstruct the causing sub-event into an experiential sub-event. | neutral |
train_99287 | They are useful for term boundary identification. | in FTL, a basic classification algorithm is required to construct C 1 and C 2 . | neutral |
train_99288 | Many domain specific terms are descriptive and very long. | secondly, the applied classifier can cover more kinds of terms since different types of features are integrated in each classifier. | neutral |
train_99289 | Equivalently F is conservative iff for any set X, Y we have F (X)(Y ) = F (X)(X ∩ Y ). | indeed it is easy to show that if D is intersective then D(S)(P ) is equivalent to D(S 1 )(S ∩ P ) for any S 1 such that S ⊆ S 1 . | neutral |
train_99290 | The determiner Det denotes the function D which takes two sets, S and P as arguments and gives a truth value as result. | in the next section i show why and how the class of complex demonstratives should be extended. | neutral |
train_99291 | For instance I suppose that the meaning of the demonstrative this is such that sentences in (8) can be considered as logically equivalent: (8a) This teacher is bald. | most researchers are concerned with the question of whether the sentence of the form This CN VP, completed by speaker demonstration, expresses a proposition if the object demonstrated does not have the property expressed by CN. | neutral |
train_99292 | In philosophy it goes back to Ancient Greeks and refers to the study of nature, or more generally, to the study of what might exist (Smith, 2004). | even in the same language some terms can be treated differently. | neutral |
train_99293 | It was found that if two terms A and B are in this relation then they occur in a context like A is a (kind of) B, so a search for such pattern may yield a list of possible hyponyms. | this is performed by means of statistical semantic similarity measurement and clustering. | neutral |
train_99294 | on node 10, node 20, node 30 and node 40. | in addition, as the Japanese and Chinese kinship systems are different from that of Korean, and the Japanese and Chinese systems are not the same either, we have shown that the framework can handle the kinship systems of several different languages. | neutral |
train_99295 | Denoual's (2007) experiments attempt to translate all unknown words in a Japanese-English task and have reported that translation adequacy (NIST score) improves but fluency (BLEU score) remains stable or is decreased. | mitigating problems in analogy-based EBMT with SMT and vice-versa have shown considerable improvement over the individual approach. | neutral |
train_99296 | This approach, a type of analogical learning, was attractive because of its simplicity; and the paper reported considerable success with the method using various language pairs. | this is expected to work well with our current experimental setup. | neutral |
train_99297 | 'Now before moving orange candle in the paper bag, blue nail put in the jar.' | in our pilot developmental study on Russian material, twenty children (from 4 to 7 year-olds), twenty teenagers (from 12 to 15 year-olds), twenty students (from 19 to 25 year-olds), and twenty adults (from 40 to 50 year-olds) acted out the events of four complex adverbial sentence types that were less complicate than sentences for our main experiment described above, e.g. | neutral |
train_99298 | In the temporal domain, the relevant features are 'Time', 'Simultaneous' and 'Prior'. | hence, sentences with 'Main-Sub' order are more difficult for understanding than sentences with 'Sub-Main' order. | neutral |
train_99299 | I expect that implementation is feasible using a method of supervised machine learning such as SVM. | for every parse, all constants except for the anchor (usually on the diagonal) are abstracted by replacing them by labels such as S, O, V, and P. Replacement of constants c 1 , c 2 , . | neutral |
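For reference, below is a minimal sketch of how rows with the schema above (id, sentence1, sentence2, label) could be loaded and inspected with the Hugging Face `datasets` library. The repository id `user/dataset-name` is a hypothetical placeholder, since the actual dataset id is not shown in this preview.

```python
# Minimal sketch, assuming a Hugging Face dataset with the four columns shown
# in the preview table. "user/dataset-name" is a placeholder repository id.
from datasets import load_dataset

ds = load_dataset("user/dataset-name", split="train")

# Each example is a dict mirroring the preview's columns.
for example in ds.select(range(3)):
    print(example["id"], "->", example["label"])
    print("  sentence1:", example["sentence1"][:80])
    print("  sentence2:", example["sentence2"][:80])
```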