Columns: id (string, 7–12 chars), sentence1 (string, 6–1.27k chars), sentence2 (string, 6–926 chars), label (string, 4 classes)
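The rows below follow the four-field layout declared in the header. A minimal sketch of how such a flat line dump could be regrouped into records — the helper name `parse_records` and the truncated sample strings are illustrative assumptions, not part of the dataset:

```python
def parse_records(lines):
    """Group consecutive lines into records of (id, sentence1, sentence2, label)."""
    fields = ["id", "sentence1", "sentence2", "label"]
    records = []
    # Step through the dump four lines at a time, ignoring a trailing partial record.
    for i in range(0, len(lines) - len(lines) % 4, 4):
        records.append(dict(zip(fields, lines[i:i + 4])))
    return records

sample = [
    "train_99600",
    "The list of key words is then used to query ...",
    "table of the most associated documents ...",
    "neutral",
]
print(parse_records(sample)[0]["label"])  # prints: neutral
```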
train_99600
The list of key words is then used to query the Twitter search API to collect the related tweets.
table of the most associated documents to the training set is obtained.
neutral
train_99601
The two moves are complementary as strategies adopted to describe registers and registerial variation; and they need to be linked up through a chain of inter-stratal realizations (cf.
the realization of rhetorical nexuses is gradually "pushed down" in the lexicogrammar from cohesive sequences of clauses and clause complexes to clauses, phrases and groups.
neutral
train_99602
A third reason is that some keywords may be linked to more than one item in the T10N list.
this paper will take a careful look at this question again, using similar but more fine-tuned sets of data and different sets of tools for keyword extraction and interpretation.
neutral
train_99603
In the discussion on causatives above, an unambiguous trigger would be input that showed the availability of the low adverbial reading: this input is compatible only with the Verb-selecting hypothesis and not with the Root-selecting hypothesis.
further, the model is able to learn on the basis of as few as 4 tokens of input.
neutral
train_99604
!John is grumpy (high reading) !Bill is grumpy (low reading) b.
the learning mechanism will wait until an unambiguous trigger occurs in the input before setting any parameter value.
neutral
train_99605
The verbal passive participle given can assign two theta-roles.
in short, i claim that the passive morpheme in the pseudo-passive is the adjectival passive-en, which is empirically supported by the fact that they display the properties of adjectival passives.
neutral
train_99606
It appears that given (23), there is no room for theta-role percolation.
advantage was taken of Mary's honesty.
neutral
train_99607
If take advantage of is a constituent, the preposition of cannot assign Case to Mary's honesty, and furthermore, nor can the passive morpheme -en assign Case to it.
if so, the Theme role of upon percolates, and it must be identified as Agent when sit upon is merged with the Agent-assigning v. The Theme and the Agent cannot refer to the same object: one cannot sit upon oneself.
neutral
train_99608
A plausible approach to this peculiarity is to argue that in (1a) sit upon is a constituent, and the hat is the complement of sit upon, not upon (Radford 1988, Drummond & Kush 2011).
(34b) and (35b) show that the pseudopassive is permitted if the passive describes the characteristic of the raised Theme although it is not affected, (34) a.
neutral
train_99609
It is construed as an E-type pronoun, as found in the similar structural context of (8a-b) in English.
in our analysis, the wh-expression nwu-ka 'who' in the second conjunct clause of (20) changes into an empty pronoun that is construed as a sloppyidentity one in the interpretive component.
neutral
train_99610
b. John asked me why Mary bought what, but John didn't ask me how Susan did.
it is also to be noted that the phonological suppression takes place only at the right edge of the sentence.
neutral
train_99611
Following the work of Dinu et al.
such is not always the case.
neutral
train_99612
In the future, we plan to expand this approach to include more topics, and even apply it to other applications in NLP.
those models have encountered bottlenecks due to knowledge shortage, data sparseness problem, and inability to make generalizations.
neutral
train_99613
These approaches can achieve substantial performance without much human involvement.
a word with a large LLR value is closely associated with the topic.
neutral
train_99614
Such information is usually represented as rules or templates.
the advancement of our system is foreseeable.
neutral
train_99615
The relevant data are repeated here as (35).
irrespective of case-marking, uniformity in our analysis remains intact.
neutral
train_99616
As has been discussed in Section 2, the suffixbased method to identify nominalizations has certain drawbacks.
converted nominalization is not derived through a productive derivational rule that can be easily generalized to other words by adding suffixes to word bases.
neutral
train_99617
2In what way, if any, does Chinese Media English differ from British Media English in terms of the quantitative use of nominalizations?
nominalizations are slightly more frequent but not significantly so in social life in CME (t=0.431,p=0.666).
neutral
train_99618
The t-test results in Table 5 confirm that all the five categories in BME use significantly more converted nominalizations than those in CME.
in the present study, we will adopt a syntactic approach and a different methodology to identify and retrieve nominalizations, and extend the scope of previous studies well beyond registers and genres to different English varieties.
neutral
train_99619
The pivot sentence for this task is the Change sentence because it indicates the direction of change of the quantity in the Given sentence in terms of an effective increase or decrease. The last task is to combine the results of the first two stages and generate the corresponding equation.
again, in the case of PPW sentences, we notice that the sentences are not necessarily Part, Part and Whole, but the 'parts' may even be relegated to the same sentence.
neutral
train_99620
For sentence type classification, the baseline is the majority class among sentence types since the sentences are classified independently.
how many dogs does henry have left to walk?
neutral
train_99621
Not only are the performance gains over the baseline substantial, but the performance gains of the solver when compared with state-of-the-art MWP solvers such as WolframAlpha (Barendse, 2012) are also substantial.
the baseline for J-S problems is 36.12% (majority class is Change sentence) and for PPW is 62.47% (majority class is Part sentence).
neutral
train_99622
To indicate the third person experiencer's internal states, their predicate forms must be marked with some morphemes of evidentiality.
in Spanish, internal states are expressed with adjectives (e.g.
neutral
train_99623
The Japanese language possesses another group of adjectival words, which are called "adjectival nouns" (Martin 1975) or "nominal adjectives" (Kuno 1973). he-TOP glad it.appears.that 'It appears that he is glad.'
the internal state of a person at the speech time, first person or third person, can be expressed as the resulting state of that thought process using the resultative aspect marker te-iru.
neutral
train_99624
(9) (chaň / khaw) dii-cai 5 I / he glad 'I am/He is glad.'
when the sentence (21b) above is embedded in a sentence with the third person subject, it becomes apparent that what soo-da precludes is not the first person, but the conceptualizer, which corresponds to the upper/main clause subject as shown in (22) below (modified from Ohye 1975:202).
neutral
train_99625
The work of Dr. Heisig [1][2] has made great strides in identifying common primitives within Chinese and Japanese characters.
further to this, for complex characters, it is possible that there will be more than six positions of primitives.
neutral
train_99626
Figure 3 shows the box-plot summarizing the vector cosine scores.
as can be observed, aPant always outperforms the baseline.
neutral
train_99627
The results can be found in Table 5.
they occur with different contexts.
neutral
train_99628
This method can be certainly applied for the study of other semantic relations.
these models are characterized by a major shortcoming.
neutral
train_99629
As a general method of automatic multi-document summarization, we often use the important sentence extraction method which obtains the most proper combination of important sentences in target documents for a summary, avoiding redundancy in the generated summary.
here, sigm(•) is a sigmoid function and is used to decrease mutation rate as generation gets close to g max .
neutral
train_99630
As for document summarization using combinatorial optimization techniques, many studies employ explicit solution techniques such as branch and bound method, dynamic programming, integer linear programming, and so on (Mcdonald, 2007;Yih et al., 2007;Gillick et al., 2008;Takamura et al., 2009;Lin et al., 2010).
there is a problem with them in terms of calculation efficiency.
neutral
train_99631
As expected, RISING → MID and RISING → LOW were also mapped with descending note transitions at a statistically significant level (p<0.01), providing further support for grouping RISING with HIGH.
the results also provide further evidence for the decomposability of contour tones in Thai.
neutral
train_99632
As for Thai, three important pioneering studies have revealed that Thai, like most tonal languages, is characterized by parallelism between the transition of lexical tones and the transitions between two adjacent musical notes.
from the phonological perspective, many phonologists, e.g.
neutral
train_99633
(as an answer to "who did ...?"))
this paper focuses on emphasis in Japanese advertisement sentences and defines accent phrases as the prediction unit, while words have been used as the unit for predicting emphasis in the conversation domain (Hovy et.
neutral
train_99634
90% of emphasized accent phrases occurred within 0 to 4 accent phrases from the previous emphasized location.
though later accent phrase locations showed higher likelihood of emphasized accent phrase, the likelihood values do not differ significantly.
neutral
train_99635
Based on results, a seven-state HMM and 16-component Gaussian Mixtures with diagonal covariance matrices yields the best accuracy result at 84.15%.
based on the discussed works, in tonal languages exploiting tone information in an ASR system directly contributes to its performance.
neutral
train_99636
Without tone information, it is difficult to differentiate paired alphabets from the other group.
tone information might not be steady.
neutral
train_99637
For the evaluation, we test our automatically built corpus on the opinion tweet extraction and tweet polarity classification tasks.
our method also produces much larger data since we do not rely solely on emoticon-containing tweets to collect training data.
neutral
train_99638
In this case, we run experiment using the two different seed corpus construction techniques.
we can express the complete log-likelihood of the parameters, log L_c(θ|T), as follows: The last equation is used in each iteration to check whether or not the parameters have converged.
neutral
train_99639
We introduce three modifications to the original TextTiling algorithm to appropriately apply it to a virtual document composed of a stream of tweets.
for example, a user X posted a tweet as follows.
neutral
train_99640
Our stream data are much shorter than those assumed in the original TextTiling algorithm.
• Two target tweets, t i and t j , adjacent to each other tend to have the same source tweet (s(t i ) = s(t j )).
neutral
train_99641
201 word-based window was 6, and 26.9, obtained when the size of the post-based window was 1.
it was frequently observed that the number of words is less than the window size at the end of the stream when using the word-based window.
neutral
train_99642
take on the value x, the presuppositional content is integrated; in a case like this one, where an antecedent expression exists, it is eliminated from the representation.
1 PACLIC 28 Anaphoric demonstratives, on the other hand, are coreferential with a noun phrase in the preceding discourse and keep track of the referents already introduced to the discourse (and are not present in the discourse situation), as in (1).
neutral
train_99643
Complex subjects modified by a relative in OSV like [[noun-ga-verb]-noun-ga] were eliminated by hand in order to control the data.
the first aim of my study is to investigate the relationship between discourse-old information and OSV word order in Japanese.
neutral
train_99644
This difference must derive from the data difference; her analysis includes VPinternal and VP-external scrambling while the scope of my analysis is only VP-external scrambling.
note that RD can process type (a) and some parts of (b), but cannot deal with type (c).
neutral
train_99645
In her written Japanese data, heaviness accounts for about 70% of the scrambled sentences while referentiality makes up about 25%.
it is conceivable that scrambled direct objects in OSV are both heavy and discourse-old.
neutral
train_99646
In section 4 we examine the sources of disagreement among the annotators and in 5 we summarize the recommendations for reliable annotation.
(2009) raters found more than one valid construction for more than 18% of noun phrases.
neutral
train_99647
The preliminary "confusion" error tag should be broken down into two tags to indicate confusion between definite and indefinite article (CA), and confusion between article and another type of determiner (CD).
we recommend simply not annotating the noun phrase if it is impossible to determine the acceptability of the article usage.
neutral
train_99648
There was not much change in precision in the Wiki dataset when can't tell was included as a rare annotation (such as no>0) or a common annotation (such as (no+ct)>0), so we assume that the populations of rare instances gathered are not different between the two.
the Emails dataset 1 consists of 34 positive and 66 negative instances, and simulates a server's contents in which most pairs are negative (common class).
neutral
train_99649
The original classification was calculated as the mean of a pair's judgments.
in the last pair, the turn gives a quote from the article and requests a source, and the edit adds a source to the quoted part of the article, but the source clearly refers to just one part of the quote.
neutral
train_99650
Redundant annotation is usually nonproblematic because individual crowdsource judgments are inconsequentially cheap in a class-balanced dataset.
this technique renders the corpus useless for experiments including token similarity (or ngram similarity, semantic similarity, stopword distribution similarity, keyword similarity, etc) as a feature; a machine learner would be likely to learn the very same features for classification that were used to identify the rare-class instances during corpus construction.
neutral
train_99651
The sources of the corpus were 9 movie scripts and 12 drama episodes.
selecting one right subject among possible candidates can also be regarded as a disambiguation issue.
neutral
train_99652
The values of f-measure for the subject types 'tú', 'yo', 'él', 'ella' were higher than the other subject types.
whether an antecedent exists or not may not be crucial in zero subject resolution.
neutral
train_99653
We proposed 11 linguistically motivated features for ML (Machine Learning).
the 5 low-ranked features may be regarded as not significant for classifying subject types.
neutral
train_99654
As mentioned, B is typically much smaller than A: |P_B| ≪ |P_A|, and the merged set is at least as large as the old one.
for SMT, incremental training research mainly focuses on updating the alignment probabilities from the parallel data.
neutral
train_99655
This approach allows even faster updates, and in some settings yields comparable results to retraining the model.
source language phrases cannot be translated and placed in the same order in the generated translation in the target language, but phrase movements have to be considered.
neutral
train_99656
Then, a perception test was designed.
the perception study was done in the same laboratory.
neutral
train_99657
For the /θ/ contrast, the spectral energy concentration was analyzed for the characterization of /s/ or /f/ contrast (here, some productions were too short and taken as /t/ tokens).
also, Sichuanese speakers replace English /r/ by /z/ but Cantonese by /w/.
neutral
train_99658
The difference is not significant (see Figure 2).
for the production of /r/, the same task was administered and the analysis consisted of checking the F3 and waveform of /r/ (ibid.).
neutral
train_99659
#NAME?
we argue that it is this difference that leads to the differences in the occurrence patterns of -le and the English simple past.
neutral
train_99660
This pattern corresponds exactly to the universal learner tendency described by Claim 1 of the Aspect Hypothesis that learners tend to restrict perfective past marking to Achievements and Accomplishments.
the status of -le, -guo, -zai and -zhe as the most important aspect markers is unquestionable (Wang, 1985).
neutral
train_99661
The Aspect Hypothesis as summarized in its simplest form by Andersen (2002: 79) makes the following three claims: 1) [Learners] first use past marking (e.g., English) or perfective marking (Chinese, Spanish, etc.)
there is a special kind of Achievements, the so-called [verb+completive morpheme] verb compound, in which -le is quite often rendered unnecessary by the completive morpheme.
neutral
train_99662
We explore the task of automatically classifying documents based on their different readability levels.
generally, most of the frequent words are shorter in length.
neutral
train_99663
313 tured articles from each of the sites and pre-process in similar way as the training corpus.
the training model is used to classify the candidate news articles.
neutral
train_99664
There are two more important aspects we have to consider further.
thai WS consists of 44 letters for consonants, 18 symbols for vowels and 4 tone marks (For more comprehensive descriptions, see Diller, 1996).
neutral
train_99665
Learners can learn these letters after they are familiarized with the 28 letters in the left table of figure 5.
as mentioned before, choice (1) may not be proper because mixing H and L consonants may cause confusion, as syllables have different tones.
neutral
train_99666
In figure 2, vowels (in black) can be represented by a combination of one to three components ( in grey is an initial consonant; in grey is a final consonant).
different processes and strategies are actually involved in L1 and L2 writing systems (L1WS and L2WS) and should not be ignored by the learning material designers or the language teachers (Bassetti, 2006).
neutral
train_99667
WN11 and FB13) sampled from WordNet and Freebase.
distance Model (Bordes et al., 2011), Hadamard Model, Single Layer Model (Socher et al., 2013), Bilinear Model (Sutskever et al., 2009; Jenatton et al., 2012) and Neural Tensor Network (NTN) (Socher et al., 2013)), our model TransM still achieves better performance as shown in Table 8.
neutral
train_99668
Bordes et al (2013b;2013a) conduct experiments on each subset respectively.
bordes et al (2013b;2013a) conduct experiments on each subset respectively.
neutral
train_99669
The size of WN18 dataset is smaller than FB15K, with much fewer relationships but more entities.
the result shows that TransE can only achieve less than 20% accuracy when predicting the entities on the MANY-side, even though it can process ONE-TO-ONE triplets well.
neutral
train_99670
In the manual manner, we manually select the Web pages that describe the labels.
the procedure for learning the DBN with three RBMs is shown in Figure 4.
neutral
train_99671
Now the identifier corresponding to w1H will be H_PRP as मैं is a Hindi personal pronoun.
some examples from this dataset are given below.
neutral
train_99672
• In addition to n-grams of characters, we use frequency of usage of a word in English and in Hindi languages as features for word-level language identification.
∀ word hw ∈ Hindi Dictionary: score(hw) = log(freq(hw))/M; hin_score(hw) = score(hw) … we get an English score and a Hindi score for each word in Dataset 1.
neutral
train_99673
We use candidate analogies to rank the two candidate antecedents for the target pronoun in a given source sentence.
in the case of multiple dependents, we only select the rightmost one.
neutral
train_99674
Our approach is a pure example-based strategy, which requires no training data.
(2012) examined the use of page counts and found the stability issue.
neutral
train_99675
In the source sen-tence (17), the subject of the adjective clumsy is more likely to be an animate noun (e.g., cat) than an inanimate noun (e.g., glass).
for a computer program, this pronoun resolution becomes extremely difficult, requiring the use of world knowledge and the ability to reason.
neutral
train_99676
The current study cannot provide any elaborate answer to this, but would note that recent studies have found commonalities across categories in English, such as the measurement of predicates in various constructions (Wellwood, Hacquard, & Pancheva, 2012;Champollion, 2010;Krifka, 1998).
in a broader sense, the current analysis shows the benefit of a simpler syntax-semantics mapping mechanism in language.
neutral
train_99677
The current study cannot provide any elaborate answer to this, but would note that recent studies have found commonalities across categories in English, such as the measurement of predicates in various constructions (Wellwood, Hacquard, & Pancheva, 2012;Champollion, 2010;Krifka, 1998).
structure (14) shows that gwo3 must not be a head.
neutral
train_99678
For instance, that's true has never been found to answer a previous question in the corpus, while 3% of that's right can perform this function.
it is noted that the top three functions of that's right are exactly those functions analyzed and discussed in the literature, that is, agreement, assessments and affirmative answers.
neutral
train_99679
that's right, that's true, that's correct) and their variations in the Switchboard Dialogue Act Corpus.
in order to see whether their previous contexts could offer useful cues to differentiate the occurrence of that's right and that's true, a specific view is taken into the previous contexts when they act as accept and assessment/appreciation, because the two functions together make up a large proportion of the total occurrence.
neutral
train_99680
They were prohibited from using dictionaries or any other reference books.
we propose an automatic method that statistically measures listenability for EFL learners.
neutral
train_99681
Learner features must show the listening proficiency.
learners have different proficiencies; thus, individual differences of listening proficiency should be considered.
neutral
train_99682
In most of the sentences collected here, there are pitch peaks in the beginning of an utterance, but the last two or three syllables show a gradual decline until the boundary tone.
i am grateful for the financial support of this project provided by the Bilinski Educational Foundation, the University of Hawai'i Arts and Sciences Advisory Council and the Department of Linguistics Endowment Fund.
neutral
train_99683
Sarcasm is known as "the activity of saying or writing the opposite of what you mean, or of speaking in a way intended to make someone else feel stupid or show them that you are angry" (Macmillan, 2007).
if one pair of w1 and w2 satisfies our rules among all combinations of w1 and w2 in multiple sentences in a tweet, we regard the overall tweet as coherent.
neutral
train_99684
A variety of methods have been proposed based on various kinds of techniques, including statistical models, sentiment analysis, pattern recognition, supervised or unsupervised machine learning.
it can be classified as a sarcastic sentence.
neutral
train_99685
As we will describe in 4.4.2, the total scores will also be used as weights in the feature vector in the classification process.
some expanded concepts may be irrelevant with the context of the tweet.
neutral
train_99686
Reddy and Knight (2011) developed a languageindependent model based on a Markov process that finds the rhyme schemes in poetry and the model stanza dependency within a poem.
the candidate scoring the highest
neutral
train_99687
In this study, a fluent, easily sung lyric has been generated from the previous word w_{l−1}, the mora length m_l of the word, and the hidden state C_i^verse or C_{i,j}^line.
(2009) generated melodic lyrics in a phonetic language (in their case, Tamil).
neutral
train_99688
The dataset contains 24,000 songs, 136,703 verses/choruses, 411,583 lines, and 61,118 words.
we have (b), (c): the second and third proposed models are implemented for generating a consistent lyric.
neutral
train_99689
Therefore, given-new ordering is expected to mitigate the processing cost of STOPOV, OACCSV, and OTOPSV.
planned comparison revealed the effect of the morphological factor to be significant in OSV (F1(1, 126) = 12.29, p < .001; F2(1, 22) = 11.58, p < .005) but not in SOV (F1(1, 126) = 0.47, n.s.
neutral
train_99690
money-ACC give-IMP-QT say-PAST 'The robber, pointing a pistol at me, said, "give me money". '
even in given-new condition, the processing cost of the scrambled word order was higher than that of the canonical counterpart.
neutral
train_99691
(9) Hypothesis based on Markedness Principle for Discourse-Rule Violations: SNOMOACCV and STOPOACCV are not penalized when they violate given-new ordering because they are unmarked options.
there are choices between topic marker and case marker: SNOM vs STOP and OACC vs. OTOP.
neutral
train_99692
Some scholars treat yone as a sequence of the two discourse particles yo and ne.
(relevant examples: (18)-(24)) b.
neutral
train_99693
He was talking fast about something with a young man who looked like an assistant, but stopped the conversation when he caught sight of me.
(relevant examples: (9)-(14)) c. In an utterance that does not meet any of the conditions described above, yone must be chosen, or at least is strongly preferred.
neutral
train_99694
), verbs (odůvodnit 'to give reasons', vysvětlit 'to explain', znamenat 'to mean' etc.)
in the data, the individual connective key words (like reason, due to, because of, condition etc.)
neutral
train_99695
We may see that relations of secondary connectives form 5 % of the total number.
from this reason, they were expelled from the Czech Republic these days.'
neutral
train_99696
It is also shown in Table 3 how discourse relations in our method correspond to those in K&B and those in SDRT.
we applied our method to 128 sentences from BCCwJ (Maekawa, 2008).
neutral
train_99697
The temporal requirement Precedence(π2,π4) is that realization of a pain precedes the inference of the strain, which is also as expected.
usages such as that illustrated in (2) and interactions between temporal relations and causal relations were not analyzed.
neutral
train_99698
(2005) characterized causal expressions in Japanese text and built a Japanese corpus with tagged causal relations.
we have to show how our simplified setting is still free from that contradiction.
neutral
train_99699
Finally, we conclude this study in section 5.
here, ROUGE-1 and ROUGE-2 are metrics based on unigram and bigram matching between the reference summary and a generated summary, respectively.
neutral