id (stringlengths 7–12) | sentence1 (stringlengths 6–1.27k) | sentence2 (stringlengths 6–926) | label (stringclasses: 4 values)
---|---|---|---|
train_99800 | As shown in Table 5, Weibo users tend to have the value "Curiosity" more than Japanese Twitter users. | furthermore, we proposed a dynamic way to update the TwitterSocietas Model automatically based on Twitter data. | neutral
train_99801 | 5 Related Work The earlier work by (Zamal et al., 2012) proposed an approach to inferring Twitter users' latent attributes, including gender, age, and political affiliation, from Twitter data. | the wording on Twitter changes frequently, and the keywords used for inference in this model may be out of date. | neutral
train_99802 | 4 Moreover, a Naive Bayes classifier was employed as an alternative choice for classifying Twitter data. | a data mining approach like microblog analysis, which is considered to have a sampling bias problem, as checked by (Mislove et al., 2011), can get objective "answers" for the "questions" without subjective biases. | neutral
train_99803 | A new value v_new of the feature is obtained by multiplying by the weight; Chen's method uses a correlation coefficient to define the weight. | in a classification problem, let (1). We can solve the classification problem by estimating the probability P(c|x). | neutral
train_99804 | Test data is also unlabeled data, but the former is smaller than the latter. | although the class distribution of the labeled training data is uniform in each domain, the class distribution of the test data, which fits the real-world problem, was set to be different in each domain. | neutral
train_99805 | The weight Chen defined can be regarded as measuring the similarity between the label distribution P_s of a feature f in the source domain and the label distribution P_t of the feature f in the target domain. | here, according to the similarity between the label distribution of the feature on the source domain and the estimated label distribution of the feature on the target domain, we set the weight on the features to reconstruct the training data. | neutral
train_99806 | In this experiment, the amount of unlabeled data is 1.5 times the amount of test data. | when the difference between the domains is small, it is realistic to simply regard the problem of domain adaptation as a data sparseness problem. | neutral
train_99807 | In general, supervised learning is used to create a classifier, usually with a learning algorithm such as a support vector machine (SVM), from labeled training data; it is then possible to identify the label of the test data using this classifier. | as a result, we define the new value v_new of the feature as follows: if v_new is a negative number after subtracting 1, v_new = 0. | neutral
train_99808 | 2012 determining the similarity of two words. | intelligence officials in Washington warned lawmakers a week ago to expect a terrorist attack in Saudi Arabia, it was reported today. | neutral
train_99809 | (2012) is based on the integration of eight metrics (TER, TERp, BADGER, SEPIA, BLEU, NIST, METEOR, and MAXSIM). | the translation edit rate (TER) metric (Snover et al., 2006) supports standard operators, including shift, substitution, deletion, and insertion. | neutral
train_99810 | It contains 5801 sentence pairs, including 4076 for training and the remaining 1705 for testing. | the accuracy was very low (72%). | neutral
train_99811 | Machine learning with unbalanced data usually leads to generation of a wrong classifier. | <m p>The music is incredibly powerful sound!</> Positive (p) about the aspect "Music (m)" <s n>It lacks a feeling of accomplishment after finishing.</> Negative (n) about the aspect "Satisfaction (s)" <g,s p,p>Since the graphics was beautiful, we got the satisfaction from just watching them.</> Combined tags are acceptable: Positive (p) about the aspects "Graphics (g)" and "Satisfaction (s)" The vector value of a word is computed as follows: where num_{ij} and sent(asp_i) denote the frequency of a word w_j in an aspect asp_i and the number of sentences belonging to an aspect asp_i, respectively. | neutral
train_99812 | Applying multi-label learning such as (Zhang and Zhou, 2007) to the task is one of the most interesting approaches although we used a binary classifier based on SVMs. | all sentences in a review do not always contribute to the prediction of a specific aspect in the review. | neutral |
train_99813 | appear together in the case of French and Italian restaurants, signaling, perhaps, a long wait to get seated at such restaurants. | these aspect seed words are chosen manually and are, again, domain-specific. | neutral
train_99814 | Saurí (2008) developed a rule-based method to recognize the factuality of events, whose negation recognition can be regarded as a subtask. | saurí and Pustejovsky (2007) defined three markers: polarity particles such as "not" or "no," modal particles such as "may" or "likely," and situation selecting predicates such as "prevent" or "suggest." | neutral |
train_99815 | The comparisons between the "ML" and "ML+all" and "Marneffe12" and "ML+all" suggest that n-gram clusters successfully generalize complex negations forms by their cluster IDs. | it seems quite difficult to write rules that cover the complex negation forms exemplified above. | neutral |
train_99816 | "Sentiment Analysis" (Pang and Lee, 2008) has been researched on various corpus for years, such as product reviews (Wang et al., 2010), movie reviews (Whitelaw et al., 2005). | sina Weibo is a Chinese leading social network akin to Twitter. | neutral |
train_99817 | Social media websites, such as Twitter and Facebook, have generated a great amount of public opinions on a variety of issues, especially hot events and emergencies. | we apply the best C2 model to predict the polarity for each tweet in Corpus. | neutral |
train_99818 | In order to make sure whether there is a relationship between the real mood curve and the stock index curve, we annotated another larger dataset (denoted as Sample). | public mood on social events always goes to extremes. | neutral |
train_99819 | In this talk, I will describe my forays in the analysis and processing of text corpora largely written in the Filipino language. | this fact is proof that the socio-political landscape of the times shapes the language of its people. | neutral |
train_99820 | Acknowledging that language plays a vital role in the formation of a national identity, and hence, should be cultivated towards intellectualization, I propose that ICT could effectively be used to monitor language usage and provide added insights for language planners, in understanding the interplay between language and socio-political developments. | it has shown rapid and significant changes in its vocabulary, orthography and grammar, thanks to the Philippines' rich colonial history and the conscious efforts in the national and institutional levels to standardize the grammar and orthography of the language. | neutral |
train_99821 | As a measure of the working memory capacity, the Japanese version of the reading span test was conducted (Osaka and Osaka, 1994). | moreover, identifying if readers can resolve a bridging reference with their own knowledge is important for user-oriented information extraction and document summarization. | neutral |
train_99822 | The TOTAL is the total duration that the gaze spends within the area of interest. | this subsection presents information structure annotation. | neutral |
train_99823 | Whereas the definiteness affected only TOTAL of the eye-tracking data, the specificity affected SELF, FPT, RPT, and TOTAL. | the animacy is a category about whether referents are alive. | neutral |
train_99824 | We investigated the reading time (logtime) of NPs that were annotated with the information structure labels. | this allowed more natural reading because each participant could freely return and reread earlier parts of the text on the same screen; participants were not allowed to return to previous screens. | neutral
train_99825 | (26) a. before INT(ne(ϕ))↓: B^c_S B_A ϕ ∧ ¬B^c_S ϕ; b. after INT(ne(ϕ))↓: P B^c_S B_A ϕ. | the effects of no, yo, and ne on CCPs of FIs and, as a consequence, speech act felicity conditions are shown below. | neutral
train_99826 | 1179 and Longman Spoken and Written English Corpus (Biber et al., 1999, pp. | bensal (2012) noted that active voice is more preferred than the passive voice, for it allows the speaker to express himself/herself in a more direct and emphatic manner. | neutral |
train_99827 | The motivation of such a study is both to find an effective model of stylometric study without annotation and processing, as well as to test the effectiveness of the linked data approach to stylometric studies. | the tone motifs and word length motifs are both lexical features that can be linked from other lexical resources and do not require annotated texts. | neutral
train_99828 | Both of these parsers are language-independent, which allows any language to be used in the parser without any compromise in accuracy. | this research used 100 Indonesian sentences from IDENTIC (Larasati, 2012) as the treebank. | neutral
train_99829 | In the third scenario, we compared the performances of ensemble parsers that use different algorithm combinations. | the voting system with the unweighted scheme has slightly higher accuracy than the others (0.01%), because the resulting graphs are not reparsed, which makes the individual dependency accuracy better than those that use a reparsing algorithm. | neutral
train_99830 | thought 'Ken thought that Naomi was a fool.' | it is interesting that kare-ga 'he-NOM' co-occurs with NP-no koto-o. | neutral |
train_99831 | Interestingly, (30b) licenses RTO as shown in (31). | we can give the feature specification of the sentence in (31) as (32). | neutral |
train_99832 | For example, let A = ⟨a, b⟩. Returning to the word order of (10a), the following feature structures (14a) and (14b) can be applied. | it is licensed in (4b) with futot-teiru 'being fattened', though the predicate is neither an adjective nor a nominal + copula da form. | neutral
train_99833 | Since our method is an initial step of an NLP task, in this experiment we use the translation accuracy of the built SMT system as the evaluation of our method. | in our method, we take the following strategy: 1. | neutral
train_99834 | Moreover, (Pham and Le-Hong, 2017) used a combination of Bi-LSTM, CNN, and CRF that achieved the same performance as (Le-Hong, 2016). | these additional features consist of part-of-speech (POS) and chunk tags that are available in the dataset, and regular expression types that capture common organization and location names. | neutral
train_99835 | (1) Ken-ga kami-ga nagai K-NOM hair-NOM long 'Ken's hair is long.' | it must be worked out how displacement of nonleftmost ga-NPs in <NP_1, NP_2, …, NP_n> (in the sense of (9)) is precluded. | neutral
train_99836 | Recently, some research also showed that treating the parse tree as latent variables (Loehlin, 1998) can benefit the BTG tree inference but for preordering (see Figure 2). | the BTG terminal rule (t : X → f/e) is used to translate the source phrase f into the target phrase e, while the straight and inverted rules (S : X → [X_1 X_2] and I : X → ⟨X_1 X_2⟩) are used to concatenate two neighbouring phrases with a straight or inverted order as follows: where • stands for concatenation between strings. | neutral
train_99837 | They manually examined these posts and obtained a list of 28 tags relating to eating disorders and anorexia. | the tags are rarely used and are overpowered by the tags found in Figure 4.4b, which they frequently use. | neutral
train_99838 | In this case, people who are lying rarely use that kind of words because at the time they are lying, they have to think carefully in order to make their lies as perfect as possible. | after that, we translate the transcription using automatic machine translation for Indonesian-English. | neutral
train_99839 | We may also regard the "newly" discovered [tʃ] as an allophone of /tsʰ/ in the Cantonese inventory, especially for the 20s generation. | as for the effect of vowel context, the substitution of /tʃ/ only occurred after the vowels /ɔ/ and /u/ (p<.001), which are all back vowels (see Table 2). | neutral
train_99840 | For the control group, 100% of the sounds were pronounced as /tʰ/ as predicted, and we did not see any of the tokens with a palatalized sound change. | descriptive and pedagogical literature has shown the phonetic similarity of these alveolar and post-alveolar affricates [5] (which is an underlying support to the clear-cut differences of these sounds), the real Cantonese speech by the young generation suggests something different. | neutral
train_99841 | 2006, Han and Storoshenko 2012, Kim 2013, as exemplified in (3). | like-Prs-Decl-Comp think-Prs-Decl 'I_i think Chelswu_j likes me_{*i}/himself_j.' | neutral
train_99842 | Below we display the relevant examples. | sohng (2004) argues that caki has inherent Φ-features with a third person. | neutral |
train_99843 | Like the English will, in Korean and Mandarin Chinese, -(u)l kes-i and hui are used to express prediction. | from the corpus-based investigation, it is noticed that hui tends to entail a causal relationship, often indicating generality and habituality abundantly in the causal construction but also in the conditional construction, albeit fewer in number. | neutral |
train_99844 | Chang 2000, Hsieh (2002), Liu (1996: 40-51), etc. | for simplicity and clarity, the scope of the investigation of this paper is limited to conditional and causal complex phrases, since -(ul) kes-i is often realized in single phrases as a continuity of causal or conditional statements, as in "Drinking two grams of cyanide causes death", which is approximately the same as saying "If somebody drinks two grams of cyanide, they will die" (Puente, et al. | neutral |
train_99845 | (23) yinwei yidan huan ganbing, baiyanqiu Because-once-have-liver:disease, whites de bufen jiu hui chuxian huangdan Part-area-then-Mod-appear-jaundice 'Because once (you) get liver disease, the whites of the eyes will become yellow.' | in Mandarin Chinese, when expressing a causal conjunction with a causal connective yinwei, hui cannot have an epistemic meaning that expresses the speaker's epistemic assumption but still encodes a linkage between propositions in which q is contingent on p as in (13): (13) yinwei you ai, cai hui qidai because-exist-love, only:then-Mod-expect 'We expect because there is love.' | neutral |
train_99846 | Crucially, the nominal bases involved are singular (or non-plural) forms, as in (14). | i remain agnostic about whether the information in the qualia structure is part of lexical knowledge or not. | neutral |
train_99847 | I will argue in section 3 that they are stative participles in the sense of Embick (2003, 2004). | the alternative with non-syllabic -ed poses no problems. | neutral
train_99848 | Specifically, on the assumption that syllabic -èd appears as a result of Root-determined contextual allomorphy, leggèd is predicted not to appear in denominal adjectives owing to the VI rule in (19)b. | second, we argue that the adjectivizing suffix -ed has no contextually determined allomorphs in denominal adjectives. | neutral
train_99849 | First, the -ed suffix of denominal adjectives behaves in the same way as that of adjectival and verbal passives in displaying phonologically conditioned allomorphy, as shown in (16). | i follow Arregi and Nevins's (2014) analysis of pluralia tantum nouns, where these nouns are assumed to have their n head specified for [−group], and, if Num is present in structure, they must appear with the Num head specified as [−singular]. | neutral |
train_99850 | unpredictable from their putative nominal bases. | the adjectives with syllabic -èd in Marantz (2001). | neutral
train_99851 | Since the Major Subject (MS) regularly alternates with the Possessor of a sentence with a single Subject, it was natural to restrict the range of non-Subject GRs in that way. | this could be due to a couple of reasons. | neutral |
train_99852 | So treating it as a guiding principle is the maximal value / maximal expectation. | we find that 6.5% (304 sentences) of the sentences which contain bale and 5.2% (452 sentences) of the sentences which contain eryi cannot be used interchangeably. | neutral
train_99853 | Benveniste (1971) argues that language is the instrument of communication and is taken over by the man who is speaking, within the condition of intersubjectivity. | chu (1986) and Liu (2000) believe that eryi expresses the mood of limitation. | neutral
train_99854 | Section 4 presents the data analysis and results. | people can use a lot of methods to express their mood and tone. | neutral |
train_99855 | More specifically, while denotation represents the semantic meaning of the word, connotation often refers to the style and interpersonal emphasis of its usage in a specific context. | after the frequently used verbs were identified, the usage notes of dictionaries and thesauruses demonstrating the fine differences of the verbs were employed in a two-part representation for lexical differentiation (DiMarco et al., 1992). | neutral
train_99856 | English and Italian source sensory domains in frequency-decreasing ordering (%), adapted from Strik Lievers (2015). On the other hand, the frequency of target modes in Korean synesthetic transfers is similar to the finding of Strik Lievers. | 2013, she found large-scale data results and more clearly presented that the so-called principle of directionality just reflects the "frequency" of synesthetic connection types, adding a few interesting interpretations about the motivation of English and Italian synesthetic mappings. | neutral
train_99857 | This author also believes that it could be the journalist's communicative strategy to accommodate a wider audience particularly the foreign ones who cannot decipher the meanings of local terms. | this paper demonstrates a trend of nativization of English in a rural area as seen in a local daily. | neutral |
train_99858 | When asked about the benefits of using Facebook groups in the class, reasons why they liked it, and what challenges they encountered in using it, students reported a variety of responses, as shown in Table 4 below. | social media such as social networking sites (SNSs) are now part of the lifestyle of today's learners, who are techno-savvy and adept at maneuvering networked systems. | neutral
train_99859 | A parallel corpus is a valuable component needed in SMT to train models, optimize the model parameters, and test the translation quality. | since Japanese and Korean have the most similar characteristics in grammar structures (Kim and Dalrymple, 2013), these additional techniques will also be explored as additional processes. | neutral
train_99860 | For ID-KR translation, the dictionary translation helps to translate the untranslated verb, such as "tidak yakin" as "아니다 확신하는"; this translation is incorrect as a phrase. | it has been reported that the direct MT model gives better performance compared to the pivot MT model (Costa-jussà et al., 2013). | neutral
train_99861 | Tokenization for the Indonesian corpus is based on spaces, with the addition of tokenization for words containing the prefixes "ku-" and "kau-" or the suffixes "-ku", "-mu" and "-nya". | the translation process is employed by using n-gram matching (from 3-gram to 1-gram). | neutral
train_99862 | pianpianneng 偏偏能, pianpianmeiyou 偏偏没有, pianpianxian 偏偏先, pianpianhen 偏偏有些. | if the subject is sent by a bidding team to represent them at the important final interview (instead of other team members). | neutral
train_99863 | In terms of filler-gap dependencies, there exists a gap position, which is the argument of an embedded verb, and an antecedent (or filler), which indicates the sentence-initial wh-phrase in (1). | this indicates that the speaker obeys the wh-LF-movement to ban the embedded wh-word from having a matrix scope. | neutral
train_99864 | Following Saito (1989), the scope for wh-words is the entire sentence in nonisland sentences. | although her study proved the existence of wh-island, the preference to yes/no-reading would not be accepted as reliable if participants considered the tested sentences as unacceptable. | neutral |
train_99865 | We also develop an annotation system that cooperates with a crowdsourcing service. | this approach is extremely work intensive. | neutral |
train_99866 | NLP researchers have built corpora for various NLP tasks through crowdsourcing. | our ultimate goal is the acquisition of real-world causal knowledge by exploiting Wikipedia as an encyclopedia. | neutral
train_99867 | Methodologies used in research concerning automatic document categorization differ from language to language, depending on the structure and morphological rules of the specific language. | a 10-fold cross validation scheme was used to validate the performance of the multinomial SVM classifier. | neutral
train_99868 | Based on Table 2, the classifier was able to yield relatively high F-Scores, except that of Terrorism which yielded an F-Score of only 78.78%. | feature selection is language-dependent. | neutral |
train_99869 | The separation frequency of Mainland 把关 baguan (45.74%) is significantly higher than that of Taiwan counterpart (1.19%), with a likelihood ratio of 38.437, indicating that 把关 baguan is about 38 times more likely to be used separately in Mainland than in Taiwan. | the results prove that empirically compared to separable VO compounds, inseparable ones are more likely to be used in a transitive way. | neutral |
train_99870 | separation usages can only be detected in Mainland corpus). | if a VO sequence is less lexicalized, its probability of taking an object is higher. | neutral |
train_99871 | For the middle layer, we use Long Short-Term Memory (LSTM) for each model of biRNN with 200 hidden states, and an FFNN with unit sizes of 200 and 100 from the near side of the input layer. | by using these approaches, word segmentation is not necessary, and the vocabulary to be handled is reduced; however, there are few studies that use the attention mechanism in character-based approaches. | neutral
train_99872 | The score for the t-th character, score_t, is calculated as follows: By using score_t, we can calculate the weight W_t and the MeanVector attention a_m in a similar way to that in section 2.2.1. | negative samples were randomly sampled, so some positive samples are included in the data. | neutral
train_99873 | (2015) and Dhingra et al. | overall, our method (MeanVector attention with multitask learning) achieved an F-measure of 0.627, which is 0.037 higher than the baseline method. | neutral
train_99874 | Assuming the words "delay" and "train" are included in the keywords, the tweet "xxx line is delayed by accident," which can be used as a news source, can be extracted. | these systems analyze tweets as information sources and extract useful information to assess the damage caused by large-scale disasters. | neutral |
train_99875 | However, extracting useful information for news writers from the vast amount of social media information is laborious. | it has been reported that by using multi-task learning, the model can be generic and accurate (Luong et al., 2015b; Søgaard et al., 2016). | neutral
train_99876 | Our method analyzes each character in a tweet by using a Recurrent Neural Network (RNN) and then decides whether the tweet includes important information. | our method is a character-based approach, not a word-based one. | neutral
train_99877 | Then, by using Word2Vec (Mikolov et al., 2013), each unit is converted into a 200-dimensional distributed representation. | for example, the method learned tweet including phrases related to fire like a "対岸 の火事 (the fire on the other side)" as a negative example. | neutral |
train_99878 | When the features were added, the positive and negative criteria were clarified. | a lot of effort is required to find valuable information from among the large number of tweets sent every day. | neutral |
train_99879 | In characteristic phrases, we exclude titles that contained common verbs or adjectives such as "生 きる (live)" and single-character titles such as "江 (Gou)." | when largescale incidents or accidents occur, secondary tweets such as retweets often occur, so the absolute number of tweets judged to be positive samples increases. | neutral |
train_99880 | Enlarging the training data (increasing from 130k to 456k parallel sentences) improved both SMT and NMT models. | for neural machine translation, one of the basic frameworks is the encoder-decoder (Cho et al., 2014; Sutskever et al., 2014). | neutral
train_99881 | To do this effectively, we use KyTea. | all verbs and adjectives in the corpus are separated into the stem and the desinence. | neutral
train_99882 | We show an example of KyWSD execution in Figure 2. | if the tag is the part of speech, KyTea learns a general morphological analysis model. | neutral |
train_99883 | This problem has been ignored in conventional Japanese WSD. | a sense of the word not appearing in the training data is assigned by the dictionary. | neutral |
train_99884 | Moreover, as explained above, ignoring senses not in the sense list provided by that task, there are 6,986 correct senses for 8,953 answered instances. | in this paper, we introduced the Japanese all-words WSD system called KyWSD, which we produced and launched. | neutral |
train_99885 | A word is given a sense using KyWSD. | kyWSD works under the operating systems supported by KyTea: Linux, Windows, and Mac OS. | neutral
train_99886 | Second, tones can distinguish singulars from plurals grammatically (See Table 3). | my current study tried to answer three research questions, namely (1) how many surface contrastive tones does Twic East Dinka have; (2) What functions do tones have in phonology, morphology, syntax or semantics and (3) What are the contexts of tone sandhi in Twic East Dinka? | neutral |
train_99887 | Fifth, a suffix [ɛ] with two kinds of tones indicates whether objects are visible and can be pointed at (i.e., deixis, distance information) (see (81)-(84)). | this tone sandhi happens only when both N1 and N2 are singular. | neutral |
train_99888 | One disadvantage of chunk-wise translation is that it cannot capture the context beyond each chunk's boundary. | this approach utilizes dialect embeddings, namely vector representations of Japanese dialects, to inform the model of the input dialect. | neutral
train_99889 | As we saw in the previous section, the dialects in geographically close regions are generally more similar to each other than those in other regions. | the use of similar dialects has been found to be helpful in learning translation models for particular dialects. | neutral |
train_99890 | Regarding the dialect label order used for the input, our preliminary experiments indicated that the best models were obtained using input sequence (d) (Table 1) for dialect-to-standard translation and input sequence (c) for standard-to-dialect translation. | the multi-dialect model drastically improved the translation performance. | neutral
train_99891 | If the majority of the models predict that the tag is different from the given tag of that word, that particular tag can be considered erroneous. | there is a need to build a model that reduces human effort. | neutral
train_99892 | The relative clauses (MSa) tend to require a shorter reading time than the appositional clauses (MSb) in TOTAL. | unlike self-paced reading, during eye-tracking, all segments were shown simultaneously. | neutral
train_99893 | This paper presents a contrastive analysis between reading time and clause boundary categories in the Japanese language. | we modelled the false class boundaries. | neutral |
train_99894 | Figure 1 illustrates how the word representations of soft cheese, soft drink, and soft iron change from the initial representation of the word soft as a starting point. | in the example shown in Figure 1, since there are cheese, drink and iron as context-words of the target word soft, our approach generates word representations for each pair such as soft cheese, soft drink, and soft iron. | neutral |
train_99895 | Moreover, the dimension was reduced to 200. | the use of the concatenation of the original data and the encoded data is a feature-based method. | neutral
train_99896 | The simplest way to calculate the ratio is to directly estimate P_S(x) and P_T(x), but in the case of complex models, the problem will be more complicated. | instance-based methods have not been studied as much as feature-based methods. | neutral
train_99897 | The problem of entailment is considered a problem of classification with two classes, a class "YES" and a class "NO." | the system utilizes techniques from Information Retrieval and Natural Language Processing to process a collection of Arabic text documents as its primary source of knowledge. | neutral |
train_99898 | In this paper, we propose to construct a semantic and logical representation of Arabic texts (the question and the passages of texts). | we have obtained an accuracy of 74%, which is very encouraging compared to the size of the tag set used till now. | neutral |
train_99899 | The content criterion assesses the generated online persona and the context of the story. | the thoughts element contains the user-generated content in a post's "What's on your mind?" | neutral |