Dataset schema:

  id         string        7 to 12 characters
  sentence1  string        6 to 1.27k characters
  sentence2  string        6 to 926 characters
  label      string class  4 values
train_99900
The story generator selects relevant assertions to form the persona description.
we decided to rely on the F1 score computed with macro-averaging (Van Asch, 2013) to utilize both precision and recall, since the F1 score is the harmonic mean of these two evaluation metrics; computing the F1 score with macro-averaging instead of micro-averaging (the traditional computation) gives equal weight to the classes.
neutral
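The macro- versus micro-average distinction in train_99900's second sentence is easy to reproduce; a minimal sketch, assuming scikit-learn and hypothetical toy labels:

```python
# Minimal sketch of macro- vs. micro-averaged F1, assuming scikit-learn.
# Macro-averaging computes F1 per class and then averages the class scores,
# giving every class equal weight; micro-averaging pools all decisions,
# so frequent classes dominate the score.
from sklearn.metrics import f1_score

y_true = [0, 0, 0, 0, 1, 2]   # hypothetical, imbalanced toy labels
y_pred = [0, 0, 0, 1, 1, 2]

print(f1_score(y_true, y_pred, average="macro"))  # ~0.84: equal class weight
print(f1_score(y_true, y_pred, average="micro"))  # ~0.83: instance-weighted
```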
train_99901
Among the four registers, fiction contains the highest proportion of predicative adjectives where descriptive use of a state of mind or emotion is common (e.g., afraid, aware, glad, happy).
the corpus results are summarized in table 4.
neutral
train_99902
Large corpora are bound to contain noise in one way or another, which may arise from a complete mismatch between the source and the target sentence, an incompatible alignment at the sentence level, or at other times simply very poor translation.
the first translation for each of the example source phrases obviously demonstrates a more contextually dependent rendition.
neutral
train_99903
Our results show that increasing rate of category sharpness differs between musicians and non-musicians.
lastly, duration asserts a stronger effect on between-and within-category discrimination patterns among tonal listeners.
neutral
train_99904
For English musicians, a regression model with an extra quadratic term turned out to be significantly different than the model with only the slope and intercept terms, as suggested by a likelihood ratio test (χ2(1) = 8.75, p = 0.003).
(2014), our results indicate that musical training can in fact promote pitch processing and categorical perception in tone languages.
neutral
train_99905
From the formulae, category boundary sharpness for both English musicians and non-musicians increased as stimulus duration increased, but English musicians showed a steeper slope, indicating a faster increment of sharpness of category boundary with increased duration.
the significant effects of pitch direction were found for most duration values, but only among pairs involving the vowel [i] (FEMI vs. REMI and FENI vs. RENI).
neutral
train_99906
As for the minimal duration required to produce and perceive pitch direction, our results showed that the minimum time required to perceive pitch direction was usually shorter than what is needed to produce it, similar to the findings reported for Chinese and English listeners (Chen et al., 2017), lending further support to the claim that physical constraints affect speech production more than perceptual constraints affect speech perception (Janse, 2003).
although category boundary sharpness increased with duration among both English musicians and non-musicians, English musicians showed a faster increment than nonmusicians, indicating that musical experience draws greater benefits from extra stimulus duration.
neutral
train_99907
The study thus hypothesized that if the increase in pitch range and level was caused by hyper-articulation, the speech rate should have slowed down simultaneously.
the tonal contrast t4 and t6 showed the most evident process of differentiation over time, starting with a merging state and ending up with clear separation.
neutral
train_99908
BLEU is the most well-known automatic evaluation metric for assessing the performance of machine translation systems.
this shows that the automatic evaluation by linguistic test points is similar to the automatic translation metrics.
neutral
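train_99908's first sentence describes BLEU; a minimal sketch of a sentence-level BLEU computation, assuming NLTK and hypothetical toy sentences:

```python
# Minimal sketch of sentence-level BLEU, assuming NLTK is installed.
# BLEU scores n-gram overlap between a candidate translation and its
# reference(s), with a brevity penalty for overly short candidates.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["the", "cat", "sat", "on", "the", "mat"]]  # list of references
candidate = ["the", "cat", "is", "on", "the", "mat"]

# Smoothing avoids zero scores when a higher-order n-gram has no match.
score = sentence_bleu(reference, candidate,
                      smoothing_function=SmoothingFunction().method1)
print(score)
```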
train_99909
Kalyuga (2005) has proposed several types of structure for Russian emotion verbs as a whole.
first, the large majority of the hàipà instances comprise a nominative experiencer and an accusative stimulus (I). The stimulus can come before the verb through the use of the accusative marker duì/duìyú (II).
neutral
train_99910
We applied a similar approach to tag the TSS to enhance the performance of scientific RC.
we execute the problem setting in the computational linguistics domain, but we believe that this setting can provide a useful guide to other domains, such as RC in the biomedical domain.
neutral
train_99911
We hypothesize that TSS can be utilized to improve the performance of scientific RC.
the entity X tends to be a research activity, such as "analysis", "survey" and "discussion" etc.
neutral
train_99912
Questions in these systems are queried in a single question format, such that there is only one question per utterance.
understanding each part of the text written or spoken by the user is essential to QA systems.
neutral
train_99913
One such instance from Ubuntu dialogue corpus is: why would you recommened archlinux ?
most of these systems suffer in question-answering accuracy, especially when speakers embed multiple questions within the same utterance.
neutral
train_99914
Further, SDS can be performed in two ways, abstractive and extractive, where the former method aims at producing denser summaries that appear to be written by a human, while the latter picks out key sentences, phrases, and words from the original document and stitches them together to form a coherent summary.
the word embeddings used are pretrained and out-of-vocabulary words are replaced by "unknown" token while computing a sentence vector.
neutral
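The sentence-vector step in train_99914's second sentence (pretrained embeddings, OOV words replaced by an "unknown" token) might look like the following minimal sketch; the tiny embedding table is a hypothetical stand-in for real pretrained vectors:

```python
# Minimal sketch: look up each word in a pretrained embedding table,
# fall back to the "unknown" token for out-of-vocabulary words, and
# average the vectors into one sentence vector.
import numpy as np

embeddings = {                      # hypothetical stand-in for real vectors
    "unknown": np.zeros(4),
    "the": np.array([0.1, 0.2, 0.0, 0.3]),
    "cat": np.array([0.5, 0.1, 0.4, 0.0]),
}

def sentence_vector(tokens, emb):
    vecs = [emb.get(t, emb["unknown"]) for t in tokens]  # OOV -> "unknown"
    return np.mean(vecs, axis=0)

print(sentence_vector(["the", "cat", "meowed"], embeddings))
```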
train_99915
• β: the most specific concept belonging to B(D, T, I), also called the bottom element.
approaches belonging to the CLR family are based essentially on the idea of considering the query as a new entry for the formal context.
neutral
train_99916
The relation between a document and the information that presents it is a binary relation (the information presents the document or not).
this set of documents (noted R_d); to do this, we look for documents that satisfy the query Q = t_5 ∧ t_6 ∨ t_2. [Footnote: http://sourceforge.net/projects/conexp/ ; Figure 3 caption: there is one concept that contains the query terms in its intent (surrounded by a green circle).]
neutral
train_99917
There are several approaches for information retrieval based on concept lattice, we cite: information retrieval systems for medical documentation (Cole and Eklund, 1996;Cole and Eklund, 1999), software documentation (Linding, 1995;Priss, 2000) and Bioinformatic Databases (Smal-Tabbone et al., 2005).
it separates the conjunction operators from those of disjunction or negation.
neutral
train_99918
to intensify the surprise emotion.
then we needed to identify whether it expresses emotion(s).
neutral
train_99919
(4) J'étais en train de lire seul(e). ('I was reading alone.')
as for the progressive aspect, both French and English generally mark it lexically (Li and Shirai, 2000;ayoun and Salaberry, 2008), although this can be done in a number of ways.
neutral
train_99920
Bayona (2009) also notes that the factors that most facilitate L3 learning are: typological similarity to previously acquired languages (L1 or L2), proficiency in the L2, and recency (how recently the L2 has been activated).
the result that some learners transferred more from their L1 and more from their L2 is congruent with L3 acquisition studies such as Chin (2009) and Jin (2009) that suggest that L3 learners will transfer from either their L2 or L1, depending on proficiency in the language, recency, and typological similarity.
neutral
train_99921
However, other studies contest that the evidence for the CPH is not as exact as thought and suggest that, for example, the number of years that one studies an L2 is more influential on final acquisition (Strid, 2017;Birdsong and Molis, 2001;Marinova-Todd, 2003;etc.).
the CPH is a controversial theory in SLA, with many researchers claiming that earlier exposure to an L2 (i.e.
neutral
train_99922
Finally, our results are inconclusive as to whether onset age of an L2 or years spent studying the L2 had a stronger overall impact on learners (i.e.
the perfective aspect is marked in the same way in French and English, but differently in Burkinabe languages, but the progressive aspect is more similar between Burkinabe languages and English for present progressive marking.
neutral
train_99923
An alternative explanation could therefore be related to how focus marking interacts with pre-existing prosodic boundaries, or else the tendency for boundaries to be strengthened at specific locations as a result of focus. Finally, we found that the classifier did not pattern with respect to focus in the same way across the three acoustic measures.
fixed factors included only fOCUS condition.
neutral
train_99924
The plot for T3 also suggests that third tone sandhi applied between the numeral and classifier, in that there is a dramatic lowering toward the end of noun, but not between the numeral and classifier.
its duration was not affected by focus condition, but it showed a higher mean F0 and greater intensity in the Num-Focus and NP-Focus conditions as compared to Noun-Focus.
neutral
train_99925
Since many recent approaches to computational metaphor processing have used context-based features as input to the classification task (Shutova et al., 2012;Jang et al., 2015;Jang et al., 2016), the degree of success will largely be influenced by which part of the above life cycle the metaphor is in.
for the bLSTM, the input layer of the network consisted of 89 time steps, being the length of the longest sentence in the corpus.
neutral
train_99926
Finally, the attention-based bLSTM with Word2Vec features proved to be the most effective of all the classifiers, with features from the skip-gram algorithm providing the best result, an F-score of 84.61.
depending on the degree to which the metaphor is conventionalized in the language, it is expected to be far more difficult to detect using computational methods, since its domain essentially becomes indistinguishable from the rest of the context.
neutral
train_99927
We regard it as important that a system communicates with its user via a language based on his/her personality to bridge the inconsistencies between the system's requirements and the user's attention and actions.
these blogs were written by 81 university students in the following four themes: sight-seeing in Kyoto, cellphones, sports or gourmet food.
neutral
train_99928
The behavior and subjectivity tags are attached only within each DE range.
we regarded this extended self as self-behavior.
neutral
train_99929
The coordinate compound is an indispensable part of Chinese.
(2) AB ≈ A + B. The second one is AB ≈ A + B.
neutral
train_99930
We observed almost the same average percentage of promises in both the winning and the losing speeches.
we manually transcribed each speech while listening to the speaker for best accuracy.
neutral
train_99931
They were of special interest to the parties as well as to the people.
political discourses hold a great importance in the field of Critical Discourse Analysis.
neutral
train_99932
Kondo et al. (2018) developed an alignment table between UniDic and the WLSP.
this example is assigned an article number for each SUW of (Nagoya), (tower), (Plaza), and (Hall).
neutral
train_99933
The model generated from the dataset probably estimates approximately 0.75 as the trivia score of many instances.
we use the categories on wikipedia for this process.
neutral
train_99934
As a criterion, we use nDCG@k (k = 5, 10).
they handled trivia such as "A statue of Buddha with the Afro haircut exists." (the example with the maximum trivia score).
neutral
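train_99934 uses nDCG@k (k = 5, 10) as its ranking criterion; a minimal sketch of the metric, with hypothetical graded relevance scores:

```python
# Minimal sketch of nDCG@k: DCG discounts each gain by log2(rank + 1),
# and nDCG normalizes by the DCG of the ideal (descending-sorted) ranking.
import math

def dcg_at_k(gains, k):
    return sum(g / math.log2(i + 2) for i, g in enumerate(gains[:k]))

def ndcg_at_k(gains, k):
    ideal = dcg_at_k(sorted(gains, reverse=True), k)
    return dcg_at_k(gains, k) / ideal if ideal > 0 else 0.0

# Relevance of ranked items, top first (hypothetical scores).
print(ndcg_at_k([3, 2, 0, 1, 2], k=5))
```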
train_99935
sentences with high trivia scores for the purpose.
generally, the combination is rare.
neutral
train_99936
Table 4 shows the experimental result of the ranking task.
we need features for the machine learning methods.
neutral
train_99937
The upper bound of the proposed method is shown in Table 3.
the update for the i-th parameter, θ_{t,i}, at time step t, is defined as follows.
neutral
train_99938
Generally, the proposed model tends to be robust for compounds of different character types (e.g., Famiポート (Fami Port) multimedia vending machine), whereas Neubig et al.
b_1 denotes a bias vector, and h_t denotes the resulting hidden vector.
neutral
train_99939
Table 5 shows a comparison of four examples for the current study and KyTea 0.4.6.
the proposed method correctly identifies the word.
neutral
train_99940
Aligned pairs (target-side token, SPM prediction):
- Verb inflection: (calls, called), (release, released), (win, won), (condemns, condemned), (rejects, rejected), (warns, warned)
- Paraphrasing to shorter form: (rules, agreement), (ends, closed), (keep, continued), (sell, issue), (quake, earthquake), (eu, european)
- Others: (tourists, people), (dead, killed), (dead, died), (administration, bush), (aircraft, planes), (militants, group)
Figure 7: The SPM aligns "welcomes" with "welcomed."
The SPM predicts the probability distribution over the source vocabulary q_j at each time step j.
neutral
train_99941
The attention distribution a_t is calculated as in (Bahdanau et al., 2014), where v, W_h, W_s, and b_a are learnable parameters.
because our model is based on their model, we describe their model in greater detail in this subsection.
neutral
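The equation elided after the Bahdanau et al. (2014) citation in train_99941's first sentence cannot be recovered from this record alone; for reference, the standard additive-attention formulation matching the listed parameters (v, W_h, W_s, b_a) is:

```latex
% Standard additive attention (Bahdanau et al., 2014), as commonly used
% in pointer-generator summarizers; the source paper's exact notation
% may differ slightly.
e_i^t = v^\top \tanh(W_h h_i + W_s s_t + b_a), \qquad
a^t = \operatorname{softmax}(e^t)
```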
train_99942
These summaries are written by human editors.
the development and test pairs were extracted from the data of January 2016 to December 2016, which included 100 pairs of articles and summaries per month.
neutral
train_99943
Then, in the case of the parallel type, the third sentence explains the first sentence; however, its content is different from that of the second sentence.
p_gen is used as a soft switch to select a word from the vocabulary distribution P_vocab or a word from the attention distribution a_t.
neutral
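train_99943's second sentence describes the p_gen soft switch of pointer-generator networks; the final output distribution it gates is conventionally written (following See et al., 2017) as:

```latex
% Pointer-generator final distribution: p_gen interpolates between
% generating from the vocabulary and copying via attention.
P(w) = p_{\mathrm{gen}} \, P_{\mathrm{vocab}}(w)
     + (1 - p_{\mathrm{gen}}) \sum_{i \,:\, w_i = w} a_i^t
```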
train_99944
First, we need to train structure-specific summarization models for the parallel and sequence types.
based on the above discussion, it is suggested that the first sentence should be correctly generated in order to evaluate the summary structure appropriately.
neutral
train_99945
This, in effect, even counter-balances the differences in the normalised frequencies of 嗰 go3 across the three corpora.
while face-to-face conversations, public discussions, and interpreted speeches can all be considered spontaneous, they differ in at least the following respects: interlocutor relationship, expected audience, speaker freedom, and cognitive processes, which may partially account for the observed differences in the use of demonstratives.
neutral
train_99946
Fear is typically triggered when a person thinks that some bad things are going to happen.
the emotion keyword(s) of each instance is indicated as <emo id=0> X </emo>, with its pre-event and post-event being manually annotated as well.
neutral
train_99947
We used the Adam optimizer to learn the weights of the network with a default learning rate of 0.001.
alignments using graphone bigrams are similar to parallel corpora alignments.
neutral
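The optimizer setup in train_99947's first sentence maps directly onto common frameworks; a minimal sketch, assuming PyTorch and a hypothetical stand-in model:

```python
# Minimal sketch of training with Adam at the default learning rate 0.001,
# assuming PyTorch; `model` is a hypothetical stand-in network.
import torch
import torch.nn as nn

model = nn.Linear(10, 2)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

loss = model(torch.randn(4, 10)).pow(2).mean()  # dummy loss for illustration
optimizer.zero_grad()   # clear gradients from the previous step
loss.backward()         # backpropagate
optimizer.step()        # apply the Adam update
```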
train_99948
Their system performed with better accuracy in terms of phoneme error rate.
the letter 's' is aligned with a null phoneme '_' that is not pronounced.
neutral
train_99949
Both of them fail to be a felicitous continuation of (8a).
but see discussion in Potts (2005) and Koev (2017).
neutral
train_99950
The subscripts u and m in the translation lines stand for 'unknown to the speaker' and 'mentioned to the speaker previously' 1 respectively.
c. # …daanhai ngo m-zi hai bingo but I not-know be who '…But I don't know who he is.'
neutral
train_99951
(32) Semantics of m-zi and WH (first version): ⟦m-zi_i / WH_i⟧^g = g(i) ∈ D_cf. (33) A special compositional rule for EIs and RIs: ⟦m-zi_i / WH_i XP⟧ = ⟦m-zi_i / WH_i⟧(⟦XP⟧^F). For RIs, we suggest that the phonetic features of WH are acquired during derivation.
(49d) appears to be a counterexample.
neutral
train_99952
For the implementation of a GLM model, an initial model was constructed first.
to that end, we used a collection of multi-factorial analysis methods to compare native scholars' L1 corpora, respectively with two varieties of non-Anglophone scholars' nontranslated L2 corpora (L1 English vs. Quasi-L2 English vs. L2 English).
neutral
train_99953
3 It was also widely accepted that translated versions may 'under-represent' linguistic features of their counterparts which lack "obvious equivalents" in original texts (Mauranen, 2007).
the CIs of the three sub-corpora groups did not overlap as well, which means the three groups can be separable according to the different behaviors of each sub-corpus.
neutral
train_99954
In other words, the first syllable of each disyllabic word was rated on a 101point scale, which evaluated the degree of application of tone sandhi as well as the naturalness of tone production.
participants were presented with characters or phonetic symbols on the screen when hearing and pronouncing the words.
neutral
train_99955
Consistent with native speakers (Beijing speakers obtained 92.23 for real words and 91.42 for wug words, a significant difference, p < .001), Cantonese speakers of TG and CG both significantly differed in producing real words and wug words.
the same sets of stimuli were used in both preand post-training recording sessions.
neutral
train_99956
In current study, L1 Cantonese speakers were invited to participate in pre-and post-training recording sessions.
odd trials were naturally produced sandhi stimuli and even trials were the synthesized stimuli without tone sandhi.
neutral
train_99957
Until now, most work on error correction using NMT model aimed at correction for English text.
for example, Web search engines such as Google and Bing typically perform spelling check on queries, in order to retrieve documents better meeting the user's information need.
neutral
train_99958
The rating score increased (towards 2.5 or 3) as the duration lengthened.
the predictions for the production study were that a) t3S syllables with low vowels should be produced more t3-like, b) t3S syllables with longer duration should be produced more t3-like, and c) a null hypothesis was assumed for syllables with different rimes.
neutral
train_99959
ProsodyPro generated the intrinsic duration of each syllables and turned all the syllable durations into a normalized time.
duanmu (2007) maintained the view that all Mandarin stressed syllables (i.e., syllables carrying lexical tones) occupy two rime slots, that is, although a simple rime, V in a stressed CV is counted as occupying two rime slots, and in this view, the length of V in a stressed CV syllable is considered the same as the length of VG and VN in CVG and CVN syllables.
neutral
train_99960
This result proves that the information in the exposition is vital for choosing a correct option.
climax: She exited the airport and was struck by the heat.
neutral
train_99961
To discuss how these matching features influence the model, we design a series of experiments and give further analysis on the learning curves of the test set.
this lacks an overall representation about the distilled exposition itself.
neutral
train_99962
where the Japanese functional expression is in bold will be expected to be corrected as "行きましょう。", because the correct usage of Japanese verb conjugation rules in this phrase depends on the Japanese functional expression "ましょう (Let's)".
・BCCWJ: The Balanced Corpus of Contemporary Written Japanese (BCCWJ) is a corpus created for comprehending the breadth of contemporary written Japanese.
neutral
train_99963
・Lang-8 Learner Corpora: this is a large-scale error-annotated learner corpus, covering 80 languages.
we proposed a character-based neural sequence-to-sequence model for the task of correcting grammatical errors on Japanese functional expressions.
neutral
train_99964
This paper studies Heavy NP Shift (HNPS) from the perspective of parsing using Minimalist Grammar.
in the example, the parser starts building the derivation tree from a CP (step 1-2); "un-"merges the CP into C and T P with an EPP movement landing site (step 2-5).
neutral
train_99965
Given the processing predictions, complexity metrics favor rightward movement analysis over the rest.
since structural properties of a sentence predict how hard it is for humans to process, it is unclear what processing predictions these analyses make, nor is it clear whether these predictions are borne out in observed human processing preferences.
neutral
train_99966
In the phonological system of Hong Kong Cantonese, there are seven long vowels /a, i, u, y, ɛ, ɔ, oe/ (Shi et al., 2015).
flege and Liu (2001) also revealed the important roles that motivation and L2 input played beyond the critical period impact in the process of nonnative language learning.
neutral
train_99967
Character and subject intercept as well as subject slope for vowel type were included as random effects.
here two questions arise: Why was significance only revealed for AOA but not for AOL?
neutral
train_99968
Regarding the relationship of accent ratings and learning factors, the amount of Cantonese use is proven to be the most determinant indicator for learners' perceived accent, followed by AOA, LOR and Urdu use.
the accent rating result for the new vowel /ae/ did not clearly support the model, and the acoustic results for new vowel demonstrated that the experienced learners could articulate /ae/ in the same way as English natives did, unlike the inexperienced learners.
neutral
train_99969
Table 1 presents the global polarity distribution for each language in the MDSU corpus.
in the future, we plan to attempt non-linear transformation methods and more task-oriented deep networks.
neutral
train_99970
This is also the reason why the most typically used methods of MSA have been based on the above-mentioned MT.
table 2 shows the cosine distance decrease situation in the test sets for the two kinds of transformation methods.
neutral
train_99971
(2013) proved the effectiveness of employing gender information, but their classifiers are not designed for multilingual settings.
they created separate models for each language, whereas we developed a single parametersharing model for all languages.
neutral
train_99972
This study calculated the average consistency (AveCon) for all the 406 Chinese character (CC) families.
average consistency is also strongly affected by each family's powerful syllable and its K-C correspondent rate (r = 0.980, p < 0.001).
neutral
train_99973
Adequacy 4: The meanings of the two sentences are the same.
higher marks indicate better output) based on the criteria shown in Table 3.
neutral
train_99974
In Figure 1, the complex word " (rich)" is substituted by " (much)".
"LexSub" obtains only three candidates at the maximum for one complex word from the paraphrasing dictionary described in 4.1.
neutral
train_99975
In Japanese, substituting words surrounding a complex word together with the complex word is sometimes necessary.
this model predicts the previous token t_k considering the future sequence (t_{k+1}, ..., t_N).
neutral
train_99976
(2015) proposed a model called distillation and were able to train a model which was more compact than the original one.
first, this model takes two sequences: words and POS tags.
neutral
train_99977
In this paper, we trained the system on a training set containing word problems with single operations and showed that memory network based architecture was able to learn to generate such equations.
system could not identify the direction of transfer for verb 'borrow' which resulted in an erroneous prediction of order of equation components.
neutral
train_99978
GOCE applies the appropriate content extraction algorithm based on the classified genre of the page.
otherwise, we extract the whole text of the parent node.
neutral
train_99979
Our evaluation showed that the coherence features result in higher accuracy than that of state-of-theart methods on different granularity of German text.
the computer-translated text detection task has attracted the interest of numerous researchers.
neutral
train_99980
The scarcity of large corpora in reading disambiguated words is a major limitation in linguistic analysis and the initiation of a statistical approach to word reading disambiguation.
the rightmost column of table 3 shows the adoption rates of the sentences obtained by the simulated conventional dataset construction procedure with BCCWJ, and the average of the adoption rates was calculated.
neutral
train_99981
At the moment, it is considered that easier tasks might be appropriate for the use of crowdsourcing than writing pronunciation tagged sentences.
although longer expressions such as " (person who is cooking)" and " (people who have knowledge)" mean the same as the shorter expressions of " " and " " , they may not be as commonly found in natural documents as compared to the shorter expressions.
neutral
train_99982
Homographs are words that share the same written form as other words but have a different meaning, and can be classified into either homonyms or heteronyms, depending on their pronunciation, e.g., homonyms have the same pronunciation, such as lie (untruth) [laɪ] and lie (to recline) [laɪ], whereas heteronyms have different pronunciation, such as desert (region) [ˈdezərt] and desert (to leave) [dɪˈzɜːrt].
statistical tests showed that the difference in the average correct rate was statistically significant (p<0.01).
neutral
train_99983
The corpora are used to construct models for disambiguation and the disambiguation accuracy is strongly affected by the size of the corpora.
four sets were used as training data and the remaining set was used as test data for 5-fold cross validation.
neutral
train_99984
In the WordNet, each word belongs to a synonym set, "synset," which includes several synonyms.
limits are anticipated for the number and varieties of sentences that one person can write.
neutral
train_99985
We extracted 82,892 instances whose format is the same as that in Figure 3.
(1) Bag-of-Words (BOW) for morphemes in an argument and its head verb.
neutral
train_99986
The instances of GDA were divided into development (85%), training (5%), and test (10%) data.
this is because large-scale corpora annotated with these three labels have been developed (Kyoto Corpus (Kawahara et al., 2002) and NAIST Text Corpus (Iida et al., 2007)) and widely used.
neutral
train_99987
Because of the flexibility of the neural-network structures, transfer learning can be easily implemented by changing the network structure.
we used simple neural-network architectures to focus on clarifying what input features are effective, which learning models are powerful, and how different labeled corpora can be effective for the target label set.
neutral
train_99988
Each subtask has three datasets: train, dev, and test.
we used the GloVe model of 300 dimensions.
neutral
train_99989
For the classification subtasks EI-oc and V-oc, we use the additional metric Pearson correlation calculated only for some emotion like low emotion, moderate emotion, or high emotion.
each expert, in itself, can be a separate Regression/Classification model like a Multi-layer Perceptron (MLP) model or a Long Short-Term Memory (LSTM) model or any other model that best suits the data and task at hand.
neutral
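train_99989's second sentence sketches a mixture-of-experts setup where each expert is an independent model; a minimal soft-gating sketch, assuming PyTorch, with small MLP experts as hypothetical stand-ins (an LSTM expert would work the same way):

```python
# Minimal mixture-of-experts sketch: a softmax gate weights the outputs
# of several independent expert networks.
import torch
import torch.nn as nn

class MixtureOfExperts(nn.Module):
    def __init__(self, in_dim, n_experts=3):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(in_dim, 16), nn.ReLU(), nn.Linear(16, 1))
            for _ in range(n_experts))
        self.gate = nn.Linear(in_dim, n_experts)

    def forward(self, x):                                  # x: (B, in_dim)
        weights = torch.softmax(self.gate(x), dim=-1)      # (B, E)
        outputs = torch.stack([e(x) for e in self.experts], dim=-1)  # (B, 1, E)
        return (outputs * weights.unsqueeze(1)).sum(dim=-1)          # (B, 1)

moe = MixtureOfExperts(in_dim=8)
print(moe(torch.randn(4, 8)).shape)  # torch.Size([4, 1])
```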
train_99990
Intuitively speaking, the TPS of a given dependency type in a given corpus indicates the average occurrence of that dependency type in a sentence in the same corpus.
the raw texts without tags (downloaded collectively as a data-only file from the website of ANC: http://www.anc.org/MASC/Download.html) were parsed through the Stanford Lexicalized Parser v.3.7.1 after some parts of the texts (titles, headers, dates, unconventional punctuations, etc.)
neutral
train_99991
Finally, we evaluated the F1 score of different labels individually to identify which type of entity is more difficult for the model to identify.
final representation of a word is obtained by concatenating the left context →h_t and the right context ←h_t. We use a 150-dimensional LSTM in the forward and backward directions so that a 300-dimensional vector representation is learned for each word.
neutral
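The bidirectional word representation in train_99991's second sentence (150 dimensions per direction, concatenated into 300) is the standard BiLSTM encoding; a minimal sketch, assuming PyTorch and a hypothetical 100-dimensional input embedding:

```python
# Minimal BiLSTM sketch: 150 hidden units per direction; the forward and
# backward states are concatenated into a 300-dimensional word vector.
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=100, hidden_size=150,
               bidirectional=True, batch_first=True)

words = torch.randn(1, 9, 100)   # (batch, sentence length, embedding dim)
outputs, _ = lstm(words)         # left and right contexts, concatenated
print(outputs.shape)             # torch.Size([1, 9, 300])
```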
train_99992
If the word sequence is provided in the reverse direction as well to a CRF model, identifying that the word Medical is part of an entity type Organization (I-ORG) will help the model to realize that the ambiguous word Jones is also part of Organization entity (B-ORG).
weights c are the parameters to be learned.
neutral
train_99993
Problem Statement: We are given a general-purpose dependency parser PAR (e.g., Stanford Parser), a large-scale corpus CORP, and a content word W. Our goal is to induce a set of grammar patterns, p_1, ..., p_k, for W present in CORP using PAR.
learners can discover the patterns of a word from the use of language by themselves (cf.
neutral
train_99994
Typical linguistic search engines such as COCA and COCT usually fashion a bottom-up approach by providing a wealth of examples for inductive learning.
(Hunston, 2000) for English pattern grammar) of a Chinese word, corresponding examples, and the frequency, helping CSL learners to use a word authentically in writing.
neutral
train_99995
By simplifying PoS, we reduce the granularity of patterns and thus make the use of patterns easier to understand.
this approach is not optimal because it encourages a more bottom-up rather than top-down display of concordance lines and the users can only examine one example after another without a structure to follow, making discovery of patterns bottomup a slow and ineffective learning process.
neutral
train_99996
Table 4 presents the numbers of Type 1 and Type 2 spoonerisms in development and testing sets.
we have also shown the effectiveness of using a Vietnamese language model and dictionary for detecting spoonerisms in Vietnamese sentences.
neutral
train_99997
GREET refers to an animate object, usually a person, which is being greeted in a chat.
lSTM architecture is used as the main layer.
neutral
train_99998
This syntactic information plays a pivotal role in solving the SRL problem (Punyakanok et al., 2008) as it addresses SRL's long-distance dependencies.
semantic Role Labeling (sRL) has been extensively studied, mostly for understanding English formal language.
neutral
train_99999
The attention mechanism firstly collects the context information by multiplying trainable weights with all the vectors from every time step of the last LSTM output.
by only using WE + POS as the features, combined with DBLSTM, the model can achieve a compelling result.
neutral
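The attention step in train_99999's first sentence (trainable weights scoring every time step of the final LSTM output) is a common attention-pooling pattern; a minimal sketch, assuming PyTorch and a hypothetical hidden size:

```python
# Minimal attention-pooling sketch: a trainable weight vector scores each
# time step of the LSTM output; the softmax-weighted sum is the context.
import torch
import torch.nn as nn

class AttentionPooling(nn.Module):
    def __init__(self, hidden_dim):
        super().__init__()
        self.w = nn.Linear(hidden_dim, 1, bias=False)    # trainable weights

    def forward(self, lstm_out):                         # (batch, time, hidden)
        scores = torch.softmax(self.w(lstm_out), dim=1)  # (batch, time, 1)
        return (scores * lstm_out).sum(dim=1)            # (batch, hidden)

pool = AttentionPooling(hidden_dim=64)
print(pool(torch.randn(2, 7, 64)).shape)  # torch.Size([2, 64])
```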