Columns:
  id          string, length 7 to 12
  sentence1   string, length 6 to 1.27k
  sentence2   string, length 6 to 926
  label       string, 4 classes
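A minimal sketch of how rows with this schema might be read, assuming the split is exported as a JSON Lines file; the file name train.jsonl and the field access pattern are assumptions, not part of this preview:

import json

# Each row carries the four string fields listed above:
# id, sentence1, sentence2, and label (one of 4 classes, e.g. "neutral").
def iter_rows(path="train.jsonl"):  # hypothetical file name
    with open(path, encoding="utf-8") as f:
        for line in f:
            row = json.loads(line)
            yield row["id"], row["sentence1"], row["sentence2"], row["label"]

# Example usage: print the first example and stop.
for rid, s1, s2, label in iter_rows():
    print(rid, label, "|", s1[:60], "|", s2[:60])
    break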
train_93100
However, the canonical word order in Tunisian verbal sentences is SVO (Subject-Verb-Object) (Baccouche, 2003).
the word order is generally reversed when passing to TD.
neutral
train_93101
For example, for the root خرج: xrj, we can apply different patterns which give different lemmas with different meanings: أَخْرَج / to eject. 3-Lemma: The lemma is a fundamental concept in the processing of texts in at least some languages.
we can distinguish, in the same speech, MSA words, TD words and MSA-TD words, such as a word with an MSA component (root) and dialectal affixes.
neutral
train_93102
Nowadays in Tunisia, the Arabic Tunisian Dialect (TD) is increasingly used in interviews, news and debate programs instead of Modern Standard Arabic (MSA).
the canonical word order in Tunisian verbal sentences is SVO (Subject-Verb-Object) (Baccouche, 2003).
neutral
train_93103
We label the features as simple, progressive and perfect.
it is hence important to produce the right inflections and auxiliary verbs.
neutral
train_93104
(2012) use small datasets with fewer than 100K sentence pairs.
we also observe a similar trend for several source phrases in both Arabic-English and Chinese-English corpora.
neutral
train_93105
• Faster decoding: The compact grammars naturally result in faster decoding, and we observed up to 20-30% speedup in translation, including the time spent loading the model.
we use the Q-Q plot to study the behaviour of two probability distributions, as explained below, considering the Chinese phrase 联合国 (United Nations) as a representative example.
neutral
train_93106
This is because the additional corpus expands the diversity of the topic model, especially for NIST08, which contains a large part of web data, generating a more accurate topic distribution.
we integrate our LM into the SMT system to utilize the topic distribution of the test document as a trigger for each topic-specific language model.
neutral
train_93107
We first estimate the topic distribution for each document in the training data and assign those topic probabilities to each sentence; then, we train a topic-specific n-gram LM for each topic based on those topic probabilities.
the "bag-of-words" assumption is an unrealistic oversimplification in language model case because it ignores the order of words which is critical in estimating n-gram probabilities.
neutral
train_93108
Reserve a random portion R of L, and the remaining set L = L − R is used for training.
figure 2 shows the performance of supervised polarity classifiers for 20 topics based on different imbalanced classification methods.
neutral
train_93109
As illustrated in Figure 1, all the classes of the training corpus are biased.
finally, section 6 summarizes the work, draws some conclusions, and suggests related future work.
neutral
train_93110
Eight sentiment resources (see Table 1) were standardized to generate eight sentiment dictionaries (D_j, j = 1, …, d).
implications for utilizing multiple dictionaries are discussed.
neutral
train_93111
Finally we include our proposed Balanced Proximity-based Model, denoted as BPM, which formulates semantic proximity and positional proximity into a unified language smoothing framework, with flexible intra-document smoothing and inter-document smoothing.
pLM mainly utilizes positional information while no semantic association is considered.
neutral
train_93112
p(w|i, d) is the language model at position i.
to control the impact of out-of-document vocabulary, we add a parameter µ ∈ [0, +∞) here: Analogously, for the positional-based smoothing, the smoothed count by positional proximity is We apply the best positional-proximity-based density function of Gaussian projection ψ(i, j) in (Lv and Zhai, 2009).
neutral
train_93113
In each experiment, we first use the baseline model (KL-divergence) to retrieve 2,000 documents for each query, and then use the smoothing methods (or a baseline method) to re-rank them.
to the simple strategy which smoothes all documents with the same background, corpus structures have recently been exploited for more accurate smoothing.
neutral
train_93114
So, the general probability can be simplified to: where h_k is a lexical word, h_{k+1} is the head word of h_k, and o_k ∈ {⟨L⟩, ⟨R⟩} is retained in the history to show the dependency orientation.
we further propose an expectation-maximization (EM) algorithm for estimating the probability of arbitrary-order dependency N-grams, by considering all possible dependency structures of a sentence (Fig.
neutral
train_93115
The convergence trend along with the iteration times can be observed.
we collect from the raw corpora all possible lexical dependency N-grams without any cut-off thresholds for models of every order.
neutral
train_93116
, n}) can be done by taking the gradients of α_i with respect to the training objective and directly plugging it into a gradient-based optimization framework, such as the limited memory variable metric (LMVM) used by Smith et al.
unlike its more popular sibling, the ℓ1,2 norm (used in group lasso), which seeks feature sparsity at the group level, the ℓ2,1 norm encourages sparsity within feature groups.
neutral
train_93117
The latter becomes an issue where input texts are not fully known or come from different sources, as on the web.
one way to evaluate our approach is to compare it to each pipeline. In practice, relying on such a fixed pipeline involves the danger of choosing a slow one.
neutral
train_93118
Online adaptation always did better than the random baseline, but not better than the optimal baseline except for training size 1.
we employed the UIMA tokenizer 2 to generate tokens and sentences, and the TreeTagger for part-of-speech tagging and chunking (Schmid, 1995).
neutral
train_93119
van Noord (2009) trades parsing efficiency for parsing effectiveness by learning a heuristic filtering of useful parses.
due to the algorithms' input constraints, however, k is normally much lower in practice.
neutral
train_93120
Supervised methods recast keyphrase extraction as a binary classification task (Witten et al., 1999), whereas unsupervised methods apply different kinds of techniques such as language modeling (Tomokiyo and Hurst, 2003), clustering (Liu et al., 2009) or graph-based ranking (Mihalcea and Tarau, 2004).
the completeness of the graph has the benefit of providing a more exhaustive view of the relations between topics.
neutral
train_93121
Conditional random fields (CRFs) (Lafferty et al., 2001) have shown empirical success in the label sequence labelling problem.
the example shown in Figure 2 clearly depicts the way features are mapped from a tree-structured intent representation of an NL query.
neutral
train_93122
Some cases of cyberbullying lead the bullied students to assault or kill either themselves or the student who wrote the bullying entry on the BBS.
the paper outline is as follows.
neutral
train_93123
We used the terms shown in Table 11 to detect such expressions.
as shown, there are many cases where the analyzer should not propagate due to scope, and there are also many cases where the analyzer should propagate as −.
neutral
train_93124
Coverage for the dictionary of Japanese functional expressions also becomes a problem.
for instance, PR is interpreted as probable and PS as possible.
neutral
train_93125
In the same way, an event "kaet" (go home) in (1b) is labeled with PR+ and "hassei-suru" (occurrence) in (1c) is labeled with CT−.
in our experiment, we focus on the propagation, so we do not use gold contextual factuality.
neutral
train_93126
To avoid the randomness in data, we adopt five-fold cross-validation on it.
the other two baseline approaches are listed as follows: Lin's similarity (Lin, 1998) (LIN98).
neutral
train_93127
This is because it is difficult to assign an absolute score of semantic similarity to a pair of words, especially when seasoned linguists are not available.
similar to ds_PMI3wd(•, •) based FUsE, the performance of LIN98 and JCs drops while shrinking the number of relation instances.
neutral
train_93128
In future studies, we will experiment with more elaborate combination similarity fusion mechanisms other than linear combination.
we introduce a semantics layer upon the distribution layer to exploit semantic relations.
neutral
train_93129
Such phenomena usually flip the entailment relation.
we adapt the original algorithm from two aspects.
neutral
train_93130
Because break points are the boundaries of words, we first collect the known words in the corpus, and take their boundary points as the positive samples.
a total of 56 features, in the highest-frequency word dataset with 6-gram learning samples, were calculated and sorted by the F-score algorithm proposed in Chen and Lin's SVM feature selection research (Y.-W. Chen & Lin, 2006).
neutral
train_93131
For each percentage, the result of random selection is the average of three runs.
we would also like to evaluate our approach on other NLP tasks, and test its performance with other machine learning algorithms.
neutral
train_93132
(e.g., 10%, 20%) of source-domain data is selected, the OOV rate of the test data is much lower when Cov is used.
when feature augmentation on CWS has higher improvement, usually it also brings higher improvement on POS tagging when comparing across different test data (e.g., the improvement on BC is higher than NW for CWS, and the same is true for POS tagging).
neutral
train_93133
We thus needed evaluation measures to decide about the quality of a cluster analysis.
mass across the classes of which it is a member.
neutral
train_93134
As an auxiliary analysis we also extract the converged scores of prosody nodes and rank them in order to analyze their effectiveness.
a second variant of this baseline selects the utterances that appear in the beginning of the document.
neutral
train_93135
The rest of the paper is organized as follows: we discuss the sentiment analysis of health-related online messages, then we introduce our data; next we discuss the Subjectivity Lexicon and the features we use to represent the data, the analysis of the manual annotation and the machine learning classification results, before we conclude the presentation.
out of these 21, only 10 words were found unique with their part of speech.
neutral
train_93136
Users express their sentiments differently on forums compared to the way they express opinions when providing reviews or sharing messages on social networks.
the performance has increased by 4.2% on average among the three classifiers.
neutral
train_93137
Using H(n), the probability distribution of the network can be represented as P(n) = exp{−H(n)}/Z, where Z is a normalization factor.
fixed seed words are used with different β values.
neutral
train_93138
Finally, the test set contains 63 songs written by five non-suicide artists and 46 songs written by 4 suicide artists.
in our search for lyricists who committed suicide, we looked for lyricists who met the following prerequisite: the suicide had to be relatively unambiguous.
neutral
train_93139
(2011) and Stirman & Pennebaker (2001), we expected to find differences in the use of the passive construction and in the proportions of the first-person pronouns to the rest of the pronouns.
even if we figured out a way to split the corpus into sets of the appropriate numbers of songs while taking into consideration that the artists for each set must be unique, there are further factors that could skew the results.
neutral
train_93140
The specific morpho-syntactic behaviour of the kan-affixed verb is very much determined by the type of stem it attaches to, and its resulting behaviour varies from stem type to stem type (Kroeger, 2007;Vamarasi, 1999;Arka, 1993).
we use pairwise precision (pP), recall (pR), and F-score (pF1) to evaluate our generated clusters, relative to the gold-standard word classes, as described by Schulte im Walde (2006).
neutral
train_93141
For our case study, we aim to discover groups of like stems that, when used predicatively in the same morphological context, give rise to the same syntactic behaviour.
Lapata and Brew (2004) develop a semi-supervised system that generates, for a given verb and its syntactic frame, a probability distribution over the Levin verb classes.
neutral
train_93142
The TYPES-HDP system, on the other hand, barely exceeds the Majority Class baseline with the ON-ALL experiment, and fails to do so with the ON-VERBS experiment.
we aim to induce classes of stems that exhibit the same syntactico-semantic behaviour when they have the same morphological marking.
neutral
train_93143
When the number of instances is large enough, the statistical model will effectively incorporate these entity type constraints as long as entity types are extracted as features.
we can consider further efforts to enlarge and balance the initial set from the view of non-relation approximation.
neutral
train_93144
To use multiple parse trees for a single path, we also developed a linear classification model whose output score approximates the difference between the log probabilities of the path being derived from positive and negative relations.
first, the last tokens of sequences, namely arguments, are dropped, because of the observation that this makes it easy to convert the components of patterns into sequences and their subsequences in a systematic way.
neutral
train_93145
This work was supported by the National Research Foundation (NRF) of Korea funded by the Ministry of Education, Science and Technology (MEST) (No.
the independence constraint may not capture the nature of dependency paths, but makes it cheaper to learn lexical rules.
neutral
train_93146
They find that the use of such generalized sequences improves the performance of the task of identifying opinions from product reviews.
we find that the use of PCFGs learned by our pseudo-count method improves the performance of classifiers in a statistically significant manner, compared to a baseline classifier with ngrams encoding partial structures of paths.
neutral
train_93147
One possible explanation is that the wider the beam is the more erroneous parse trees are likely to affect the final decision of classifiers.
likewise, the biological event extraction research, a branch of information extraction, stresses the importance of the role of dependency paths in identifying event-argument relations due to the resemblance of event-argument relations to dependency relations (Björne et al., 2008).
neutral
train_93148
For this reason, the lower bound ( ) of ( ) is: Instead of ( ), we use ( ) at the risk of degrading the performance of the resulting model, since it is apparently easier to handle than ( ).
the core component alone can be used to detect the THEMEs of positive regulation events (e.g., "IFN-induced IP-10"), but the subordinate component alone cannot.
neutral
train_93149
The edge weights are determined by various factors such as semantic similarity or syntactic similarity between nodes.
for each phrase p, we extract three vectors !
neutral
train_93150
"in a nutshell " "on the whole" "generally speaking" "in general" "in brief" "broadly speaking" CP q Once the factors are selected, we have to determine the weights of the factors, (i.e., !
we train the weights of factors such that the performance is optimal for a given developing data set.
neutral
train_93151
For each intervention appearing in a sentence, we identify all the terms that are connected to it via specific dependency chains using the following rule: 1.
for the preliminary annotation and analysis, we used the same data as the task-oriented coverage analysis work described in (Sarker et al., 2012).
neutral
train_93152
This is because, IC only relies on the cosine similarity between textual features of tweets to form clusters, while IC-Time enforces a time constraint in the similarity measure to reflect the time locality of events, which thus leads to better clustering accuracy.
for this purpose, we design an efficient single-pass clustering algorithm which clusters the stream of tweets in an incremental manner.
neutral
train_93153
(1) Remove leaf nodes which are not link nodes; (2) If the node is the only child of its parent node, delete the parent node and directly connect the node with its ancestor node; (3) If the node has two child nodes, and the first child node is the link node, while the other is not, then delete the node and connect the two child nodes with its ancestor node.
classification and clustering: Expected Cross Entropy is selected for feature selection.
neutral
train_93154
In addition, as the vertex number increases to a certain value, the running time of searching for the maximum subgraphs becomes unacceptable.
libxml2 is used to parse the web pages into the DOM trees.
neutral
train_93155
We observed that our Ranking algorithm Rel-Div performs on par with (and sometimes even better than) M-Div and M-Div-NI.
a part of the graph for the query "sun" is depicted in Figure 1. In order to build a learning model for b_q, it is important to define a good set of features that characterize the node's relevance to the query.
neutral
train_93156
• Empirical study: We report on our investigation of applying supervised learning algorithms and leveraging different feature representations, whose results will be used as a foundation for a larger case-based reasoning approach.
for example, if a user types "!flash", then the bot would output "To install flash see https://help.ubuntu....mats/flash -See also !Restricted and !Gnash".
neutral
train_93157
We describe an approach to automatically distinguish bot-answerable questions, which would mitigate this problem.
F_0.5 is more appropriate here than the standard F-score because it places more emphasis on precision.
neutral
train_93158
The differences between the correlations are statistically significant at p < 0.05 (using Student's t-test) except for the differences between the LSA+SYN system and the baseline, and between the LSA+SYN+SEM system and the LSA+SEM system.
the length of the essays is another issue since longer essays tend to capture more information in their representative vectors which provides the scope for a better similarity matching with the semantic space.
neutral
train_93159
The line CH in Figure 1 shows the definition of chunks.
table 7 shows the error changes when different features are added.
neutral
train_93160
Table 4 shows the Chinese SRL results after adding the N-best dependency parsing related features.
we do not explain "standard" features, however, we give a detailed description of the features used in this work.
neutral
train_93161
Their method relies on the synset-category mappings of (Ponzetto and Navigli, 2009), extending it with information obtained from the hyperlink structure of the Wikipedia articles.
there have been many attempts to extend WordNet with concepts from Wikipedia.
neutral
train_93162
Based on our linguistic analysis, we formalize a set of temporal elements that are the blocks used to construct time entities, such as century, year, month, day, hour etc.
tempEval-2 corpus includes training and test parts, as shown in table 2. We analyse the annotation scheme based on training data and then add some additional rules on durations, such as shi-nian (ten years), shi-tian (ten days), and some approximate expressions, e.g.
neutral
train_93163
the min-guo period established in 1912 after the Qing dynasty.
we use two different corpora: Sinica (Chen et al., 1996) and TempEval-2 from the SemEval-2010 competition (Pustejovsky and Verhagen, 2009).
neutral
train_93164
The notation * denotes that, except for the term "concern", there are other terms that occur only once among the 6 ranking models, which are listed as follows: breach, profit, violat, regain, uncomplet, accid, abl, integr, doubt, grantor; similarly, for the notation ∧, the terms are: incorrectli, fault, nondisclosur, misus, breakag, defalc, excit, unclear, sentenc, overdu, omit, inforc, irrevoc, unencumb, further, variant, precipit, libel, loss.
in the following discussions, we conduct some analyses on the words learned from the ranking models.
neutral
train_93165
But these (Pradhan et al., 2012; Ng, 2010) are mainly in non-Indian languages.
we train CRF with the following set of features.
neutral
train_93166
For training and development datasets, anaphoric annotations were provided by the organizers.
the classifier has to decide, given the features, whether the anaphor and the candidate antecedent are coreferent or not.
neutral
train_93167
(1997), Bejan and Harabagiu (2010), Chen et al.
here, we show one ordering in which the parent(s) of a component are considered after the component itself.
neutral
train_93168
For that reason, a number of recent research papers have focused on these questions via analyzing the inner workings of entity coreference resolvers (e.g., Stoyanov et al.
no attempts have been made to address these questions in the context of event coreference.
neutral
train_93169
Indeed, previous studies indicate that over 70% of all queries contain entities (Guo et al., 2009;Yin and Shah, 2010).
the confusion matrix for NERQ-2S shows that errors basically concern highly ambiguous terms.
neutral
train_93170
Consequently, CaboCha, which takes words/characters and their POS tags as features for discriminative training using an SVM model (Kudo and Matsumoto, 2002b), can still tend to correctly include these single-Kanji-character words into one chunk.
abbreviations, food names and event names are formed and shared on Twitter and Facebook.
neutral
train_93171
In order to increase the chance that the first retrieved web pages contain the possible answers, we apply a novel query expansion approach using answer patterns.
our base question answering module uses the given question phrase as a search engine query.
neutral
train_93172
One missing keyword may lead to a change in the tweet's classification result.
constructing such a classifier is a challenging task, as tweets are short and informal.
neutral
train_93173
Reflecting the rapid growth in the use of opinionated texts on the Web, such as customer reviews, opinion mining has been explored to facilitate utilizing opinions mainly for improving products and decision-making purposes.
because this difference is not important in our research, we usually use the term "evaluation".
neutral
train_93174
However, even a list of 1,000 cognates is a hard constraint for some language pairs.
in a follow-up experiment, we use the full size of each training set.
neutral
train_93175
Evaluation Metrics In order to estimate the cognate production quality without having to rely on repeated human judgment, we evaluate COP against a list of known cognates.
in the remainder of the paper, we refer to our method as COP (COgnate Production).
neutral
train_93176
Especially the production of Farsi cognates works very well, although the training data has not been filtered.
an alternative approach is to manually extract or learn production rules that reflect the regularities (Gomes and Pereira Lopes, 2011;Schulz et al., 2004).
neutral
train_93177
In order to filter the resulting list of words, we transliterate Russian and Greek into the Latin alphabet and apply a string similarity filter.
a student is more likely to understand a word if there is a similar word in a language she already knows (Ringbom, 1992).
neutral
train_93178
The classification accuracy on f_Lex, f_Lex + f_Syn, and f_Lex + f_Syn + f_Sem features has increased by 1.36%, 2.74% and 0.69%, respectively.
first, that is the only standard taxonomy that exists in Bengali QC so far.
neutral
train_93179
Depending on the overall reject rate, our system could get a significant increase in the complete recognition time while the returned accuracy of our system is promising compared to that of the single SVM classifier.
in other experiments using less training data as presented in table 4, we trained the SVM classifier based on the combination of DiffPosNeg and BoW features.
neutral
train_93180
Although NBs are very fast classifiers requiring a small amount of training data, there is a loss of accuracy due to the NBs' conditional independence assumption.
remaining documents, which are detected as "hard to be correctly classified" by the NB classifier when the rejection decision is used, are forwarded to an SVM classifier at the second stage for processing, where the hard documents are represented by additional bag-of-words and topic-based features.
neutral
train_93181
This improvement is slight as the amount of clicked Wikipedia links is small with respect to the whole collection.
answers, if the search query is close to a QA-system-like question.
neutral
train_93182
In order to fill in the missing relationships at the phrasal category level, a mapping of phrasal tags for Chinese (Zh), English (En), French (Fr), German (De), and Portuguese (Pt) is presented.
in addition, glue rules are added to allow partial translation fragments to be combined monotonically.
neutral
train_93183
In particular, we observed that the parsing accuracy (either on the original or universal tag-set) for Chinese language is lower compared with other languages, which possibly led to poorer alignment relationships at tree level.
it consists of twelve different tags, including: NOUN (noun), VERB (verb), ADJ (adjective), ADV (adverb), PRON (pronoun), DET (determiner and article), ADP (preposition and postposition), NUM (numeral), CONJ (conjunction), PRT (particle), "."
neutral
train_93184
We used a 5-gram language model for all the languages based on the SRILM toolkit (Stolcke, 2002).
syntactic information is being integrated either on the source or target or both side(s) in training translation models for handling the translation task.
neutral
train_93185
Among all, PR+tfidf achieves the best performance.
in each iteration, a word's interestingness score is the linear combination of its interest preference score and the sum of the propagation of its inbound words' previous PageRank scores.
neutral
train_93186
A different sense of one word has a different DEF description.
based on these, we will optimize parameters for various applications.
neutral
train_93187
→ The court deemed it necessary that she respond to the summons.
the preposition "at" has been replaced by "in" before a place, as in "at central London".
neutral
train_93188
From these two rules, the grammar extractor won't be able to derive hate → hates.
grammar correction can be considered a translation problem from incorrect text to correct text.
neutral
train_93189
The results show that, as expected, the performance of the parser over this corpus decreases, making the extraction of SCFs even more difficult.
the treebanks used in this work contain up to 25 different dependency relations.
neutral
train_93190
A particularity of this approach is that, to enable the comparison of context vectors, it requires the existence of a seed bilingual dictionary to translate source context vectors.
in the financial domain, translating action into deed or lawsuit would probably introduce noise in context vectors.
neutral
train_93191
We discuss the learning for the POMDP that uses our IDG.
we rewrite the distributions in Eq.
neutral
train_93192
Then we keep these words as the nodes in bilingual derivation trees, and continue to generate parent nodes by combining these child nodes bottomup.
the sizes of these corpora are listed in table 1.
neutral
train_93193
In the baseline system, translation model (LDC) was trained on the LDC corpora that had been cleaned and thought to be less noisy.
we cannot always get the translation performance improved by simply enlarging our training data.
neutral
train_93194
We propose a per field EM formulation for finding the importance of the expansion terms, in line with traditional PRF.
hence, the Title field has a larger impact as compared to the Body field.
neutral
train_93195
One complication is that the question text may have more than one sentence with a question mark after it-in fact, each thread contains 2.2 sentences ending with question marks, on average.
we are interested in learning a reranking model that is generally applicable to question answering systems.
neutral
train_93196
For example, the future prefix (سـ, s, "I will") is transformed in TD to (باش, bA$, "I will").
we calculate the number of words for which the analyzer attributes at least one correct analysis.
neutral
train_93197
Thus, on the P&N model, the average conditional entropy per feature given the class (how surprising the feature is when we know the answer) increases by 8.8% when the oracle is unavailable.
in all of the feature sets we see a marked drop moving from micro-average (average over instances) to macro-average (average over connective types); P&N, for instance, goes from 93.0% to 85.3%.
neutral
train_93198
These errors percolate leading to erroneous text-level discourse processing.
on the P&N model, the average conditional entropy per feature given the class (how surprising the feature is when we know the answer) increases by 8.8% when the oracle is unavailable.
neutral
train_93199
Rare terms that occur only in δ documents or fewer are eliminated by replacing them with the average vectors of the documents that contain them.
assuming amortized constant time for set testing and insertion (Sedgewick, 2002), that the update is smaller than the corpus, i.e., |U| < |D|, and that the number of documents is greater than the number of terms, i.e., n > m, the update algorithm can be executed with a complexity of O((|D| + |U|) · nnz · k + mk), where nnz is the expected number of non-zero elements in any document vector.
neutral