id: stringlengths (7 to 12)
sentence1: stringlengths (6 to 1.27k)
sentence2: stringlengths (6 to 926)
label: stringclasses (4 values)
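The rows below all follow this schema. As a rough illustration only (the field names are taken from the header above; the dataclass, its name, and the loading mechanism are assumptions for illustration, not part of the dataset), a record could be represented like this:

```python
# A minimal sketch (not an official loader) of the record layout described by the
# column header above: each row has an id, a sentence pair, and one of 4 label
# classes. The sample row is copied verbatim from the first record below.
from collections import Counter
from dataclasses import dataclass


@dataclass
class Example:
    id: str         # e.g. "train_97200"; lengths range from 7 to 12 characters
    sentence1: str  # first sentence, 6 to roughly 1.27k characters
    sentence2: str  # second sentence, 6 to 926 characters
    label: str      # one of 4 classes; every row shown in this preview is "neutral"


rows = [
    Example(
        id="train_97200",
        sentence1="Baseline-2 in the lower part of the table is the best "
                  "individual model out of all six.",
        sentence2="when scoring with a target left-to-right MTU Markov model "
                  "(L2RT), we can score each partial hypothesis exactly at each step.",
        label="neutral",
    ),
]

# Quick sanity check: count how many of the loaded rows carry each label.
print(Counter(row.label for row in rows))  # Counter({'neutral': 1})
```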
train_97200
Baseline-2 in the lower part of the table is the best individual model out of all six.
when scoring with a target left-to-right MTU Markov model (L2RT), we can score each partial hypothesis exactly at each step.
neutral
train_97201
Maximum likelihood models can be estimated from millions of sentences of bitext, but optimize a mismatched objective, predicting events observed in word aligned bitext instead of optimizing translation quality.
for each of these representation groups, all location groups (Between, Stack and Phrase) are employed.
neutral
train_97202
To affect reordering, each sparse feature template is re-applied with each hypothesis extension.
given the appropriate tuning architecture, the sparse feature approach is actually simpler in many ways than the maximum entropy approach.
neutral
train_97203
The impact of these reordering features is reduced slightly in the presence of more carefully tuned translation and language models, but they remain a strong contributor to translation quality.
there have not been nearly so many examples of helpful sparse features, especially for phrase-based systems.
neutral
train_97204
The second type consists of a verb and an attached prepositional phrase, retaining only the head noun of the embedded noun phrase.
the New Event Detection task differs from our event recognition task because we want to find all stories describing a certain type of event, not just new events.
neutral
train_97205
See Figure 1 for an example tagged sentence.
(12) can be decomposed into: where y_ca and y_ea respectively denote the Chinese and English named entity tags in a word alignment pair a, and λ_{y_c y_e} = P(y_c, y_e) / (P(y_c) P(y_e)) is the pointwise mutual information (PMI) score between a Chinese named entity tag y_c and an English named entity tag y_e.
neutral
train_97206
Then we randomly sampled sentences from the collected sentences as Neg so that |Neg| was about twice as large as |Pos|: 5,000,000 for English, 1,400,000 for Japanese, and 600,000 for Chinese.
the parent dependency subtrees are adjacent to the candidate phrases and represented by their surface form (f_{8,1}), base form (f_{8,2}), or POS (f_{8,3}).
neutral
train_97207
Proposed_def is our method, which used TrDat for acquiring patterns (Section 2.1.2) and training.
the basic idea is actually language-independent.
neutral
train_97208
Second, evaluating with Freebase held-out data is biased.
also notice that our models not only learn to predict Freebase relations, but also approximately 4k surface pattern relations.
neutral
train_97209
By contrast, there the columns are words, and the rows are contextual features such as "words in a local window."
hence we also use various combinations, such as: Our models are parametrized through weights and latent component vectors.
neutral
train_97210
Notice that we can accurately predict the Xscientist-at-Y surface pattern relation in table 2, as well as the more general person/company (employedBy) relation in table 1.
for us even entailment rules are just a by-product of our goal to improve prediction, and it is this goal we directly optimize for and evaluate.
neutral
train_97211
Our approach has a fundamentally different objective: we are not (primarily) interested in clusters of patterns or their semantic representation, but in predicting patterns where they are not observed.
our objective is to complete the matrix, whereas their objective is to learn better latent embeddings of words (which by themselves again cannot capture any sense of asymmetry).
neutral
train_97212
This is a sign of either a large variance in usage or some data set specific tendency, and in either case we can not make confident claims as to this feature's association with any native language.
german speakers disprefer a prepositional phrase followed by a comma at the beginning of the sentence, and Chinese speakers use this pattern more frequently than the other L1s.
neutral
train_97213
Future work could use these sequential cueing operations to investigate further claims of the dynamic recruitment hypothesis.
in any case, the source of this discrepancy presents an attractive target for future research.
neutral
train_97214
McElree (2001; 2006) has found that retrieval of any non-focused (or in this case, unconnected) element from memory leads to slower processing.
by applying the F and L rules to the observed sign and context, the parser is able to generate a consequent context.
neutral
train_97215
For example, the act of making breakfast may be interrupted by a phone call.
integrating two disjoint connected components should be expected to incur a processing cost due to the need to recall the current state of the superordinate sequence to continue the parse.
neutral
train_97216
This research has been carried out in the framework of the TermWise Knowledge Platform (IOF-KP/09/001) funded by the Industrial Research Fund, KU Leuven, Belgium.
we record the lemmatized form when available, and the original form otherwise.
neutral
train_97217
Including the baseline parsers, this gave us 24 parsers to evaluate on their respective test sets.
the oldest one is the Talbanken or MAMBA treebank (Einarsson, 1976), which was later reprocessed for modern use (Nilsson et al., 2005).
neutral
train_97218
Again, we see that guided parsing is less effective if the guide uses an annotation style that is hard to parse.
additionally, we trained parsers using both methods at the same time; we refer to these parsers as combined.
neutral
train_97219
the treebank annotation style) is different.
there is no point in trying to use domain adaptation methods assuming a covariate shift, e.g.
neutral
train_97220
single pass over the data: in contrast, EM requires a few tens of passes (certainly more than 10 passes, from the results in table 1).
the algorithm takes two inputs in addition to the set of skeletal trees.
neutral
train_97221
Collins (2003) reports an accuracy of 88.2 F1, which is comparable to the results in this paper.
to EM, the inside-outside algorithm is not required; however, various operations such as calculating smoothing terms in the spectral method add some overhead.
neutral
train_97222
The most suitable image is selected from this set using a graph-based algorithm which makes use of textual information from the metadata associated with each image and features extracted from the images themselves.
those approaches used queries that were much smaller (e.g.
neutral
train_97223
Formally, φ_t (the word distribution for tuple t) has a Dirichlet(ω^(t)) prior, where for each word w in the vector, the entry ω^(t)_w is a log-linear function of ω^(B), a corpus-wide precision scalar (the bias); ω_w, a corpus-specific bias for word w; and ω^(k)_{t_k,w}, a bias parameter for word w for component t_k of the kth factor.
if the b value is near 0 for a particular triple, then it will have very low prior probability.
neutral
train_97224
In addition to providing useful output for this important public health task, our prior-enriched model provides a framework for the application of f-LDA to other tasks.
this approach has limitations in that most documents are missing labels (less than a third of our corpus contains one of the labels in Table 1) and many messages discuss several components, not just the one implied by the tag.
neutral
train_97225
We set this problem as a variant of the Textual Entailment (TE) recognition task (Mehdad et al., 2010b;Adler et al., 2012;Berant et al., 2011).
in this way, we can take advantage of the weights to make a more conservative decision in pruning the entailment chains.
neutral
train_97226
In all cases, there is a significant improvement (p < 0.05) after applying the aggregation phase over the extracted phrases (Extrac-tion+Aggregation).
we only use TFxIDF (Salton and McGill, 1986), position of the first occurrence (Frank et al., 1999) and phrase length as our features.
neutral
train_97227
Among the topic model based methods, TSM achieves the best results on all the three metrics.
as shown in Table 5, topic modelling based methods (i.e., BayesSeg, PLDA and TSM) outperform those using either TF or TF-IDF, which is consistent with previously reported results (Misra et al., 2009; Riedl and Biemann, 2012).
neutral
train_97228
In this paper we take a generative approach lying between PLDA and SITS.
we present a new hierarchical Bayesian model for unsupervised topic segmentation.
neutral
train_97229
This is different to other topic modelling approaches that run LDA as a precursor to a separate segmentation step (Misra et al., 2009;Riedl and Biemann, 2012).
instead of explicitly learning the segmentation, STMs just leverage the existing structure of documents from the given segmentation.
neutral
train_97230
Acknowledgments Funded by NSF awards IIS-1218209 and IIS-0910611.
we performed a study to evaluate the agreement of the three metrics with human judgment.
neutral
train_97231
Given K sequences of length N each, we can have O(NK) distinct words.
we must be able to estimate h_pair(n) efficiently.
neutral
train_97232
Our application is similar to automatic speech recognition in that there is a single correct output, as opposed to machine translation where many outputs can be equally correct.
due to the incremental nature of the algorithm and due to the lack of a principled objective function, it is not guaranteed to find the globally optimal alignment for the captions.
neutral
train_97233
The performance of the model improves significantly as the WER reduces with adaptation.
the new features were y_i z_i, y_i z_{i−1}, and y_i z_{i+1}.
neutral
train_97234
It reflects a syntactic tendency of class-specific words to occur utterance-initially, which shows the feasibility of the online AD system.
we observe that the partial replacement of words with POS tags indeed improves over the baseline model performance, by 1.5 points on ASR output and by 1.1 points on transcripts.
neutral
train_97235
This data can therefore be used for modeling H-C speech.
as reflected in Table 3, the H language model that leaves out the Fisher data actually performed better.
neutral
train_97236
We address the goal of achieving a system that balances translation accuracy and latency.
all the passes used the same LM.
neutral
train_97237
We also experiment with inserting text segmenters of various types between ASR and MT in a series of real-time translation experiments.
constrained model adaptation (cMA) was applied to the warped features and the adapted features were recognized in the final pass with the VTLN model.
neutral
train_97238
ϕ_R, which selects the first SF s ∈ S(v) such that C_R(v, s) > 0 when traversing the trees T_1.
in order to do so, we have compared the upper bound of the number of SF errors that can be corrected when using reranking and our approach.
neutral
train_97239
Another early attempt (Tillmann and Zhang, 2006) used phrase pair and word features in a block SMT system trained using stochastic gradient descent for a convex loss function, but did not compare to MERT.
changing the weight drastically for a feature that is non-zero for only one out of a million sentences has very little effect on translation metrics.
neutral
train_97240
An exception is the LSAspec from Kireyev (2009), based on latent semantic analysis (Deerwester et al., 1990), which is defined as the ratio of a term's LSA vector length to its document frequency and thus can be interpreted as the rate of vector length growth.
we use the set of all keywords for evaluation; otherwise, more complicated evaluation metrics for each dataset would be needed.
neutral
train_97241
4 Applications We first investigate CTI in a well defined setting.
then we could test our metric based on other search engines such as Google or Bing.
neutral
train_97242
Given a term t, define its universal context set U(t) = {c_i}, and the source of c_i is S(c_i) = {d_ij}.
for keyword extraction, a topic with a rich literature, to the best of our knowledge, has no publicly available large scale datasets, which makes SemEval2010 the best available.
neutral
train_97243
The numbers above are interesting because they provide intrinsic evaluation of the concept induction procedure, but they do not tell us much about their usability.
note that the experiment is performed in three domains for which such translations are manually annotated.
neutral
train_97244
The algorithm is executed a number of times (see Section 5.1 for parametrization of the algorithms) to learn all concepts in the set of summaries, and at each iteration a single concept is formed.
the obtained results were not assessed in a real information extraction scenario.
neutral
train_97245
This dendrogram closely relates to the language groupings described in (Heine & Nurse, 2000).
the result of this grouping is usually illustrated as a dendrogram, a tree diagram used to show the arrangement of the groups produced by a clustering algorithm (Heeringa & Gooskens, 2003), whereas multidimensional scaling additionally visualizes the language proximities in a 2-dimensional space.
neutral
train_97246
In the current research, we investigate the use of Levenshtein distance on orthographic transcriptions for the assessment of language similarities.
in conclusion, the paper discusses the results.
neutral
train_97247
The rule counts are then used to compute labeling probabilities P(s | t) and P(t | s) over left-hand-side usages of each source label s and each target label t. These are simple maximum-likelihood estimates: if #(s_i, t_j) represents the combined frequency counts of all rules with s_i::t_j on the left-hand side, the source-given-target labeling probability is P(s_i | t_j) = #(s_i, t_j) / Σ_{s∈S} #(s, t_j); the computation for target given source is analogous, with denominator Σ_{t∈T} #(s_i, t).
we can thus write the extracted rule as: while the SAMT label formats can be trivially converted into joint labels X::t, X::t_1+t_2, X::t_1/t_2, X::t_1\t_2, and X::X, they cannot be usefully fed into the label collapsing algorithm because the necessary conditional label probabilities are meaningless.
neutral
train_97248
This division into 10 folds is done for reasons explained earlier in Section 2.1.
this is especially true for language pairs such as Urdu-English which have significantly different sentence structures.
neutral
train_97249
Our aim is to go beyond this limitation of the TSP model and use a richer set of features instead of using pairwise features only.
a detailed ablation test shows that, of all the features used, the POS triplet features are the most informative for reordering.
neutral
train_97250
These include Hierarchical models (Chiang, 2007) and syntax based models (Yamada and Knight, 2002;Galley et al., 2006;Liu et al., 2006;Zollmann and Venugopal, 2006).
we use machine aligned data in addition to manually aligned data for training the TSP model as it leads to better performance.
neutral
train_97251
An explicit representation of the model would have required nearly a terabyte of memory, but its implicit representation using the parallel text required only a few gigabytes.
second, we propose novel data structures and algorithms for phrase extraction (§4) and scoring (§5) that are amenable to GPU parallelization.
neutral
train_97252
Note that in order to save space, the values stored in the arrays are sentence-relative positions (e.g., token count from the beginning of each sentence), so that we only need one byte per array entry.
finally, we arrive at line 7 in Algorithm 3, where we must compute feature values for each extracted phrase pair.
neutral
train_97253
GPUs have previously been applied to DNA sequence matching using suffix trees (Schatz et al., 2007) and suffix arrays (Gharaibeh and Ripeanu, 2010).
this can be slow because on-demand extraction of phrase tables is computationally expensive.
neutral
train_97254
Across different τ, we find that the first iteration provides most of the gain while the subsequent iterations provide additional, smaller gains with occasional performance degradation; thus the translation performance is not always monotonically increasing over iterations.
at a 50% level of pruning, there is a loss of about 0.
neutral
train_97255
The most popular algorithm for this weight optimisation is the line-search based MERT (Och, 2003), but recently other algorithms that support more features, such as PRO (Hopkins and May, 2011) or MIRA-based algorithms (Watanabe et al., 2007;Chiang et al., 2008;Cherry and Foster, 2012), have been introduced.
whilst the differences between promix and perplexity minimisation are not large on the nc test set (about +0.5 BLEU) the results have been demonstrated to apply across many language pairs.
neutral
train_97256
ELISSA uses a rule-based approach (with some statistical components) that relies on the existence of a DA morphological analyzer, a list of hand-written transfer rules, and DA-MSA dictionaries to create a mapping of DA to MSA words and construct a lattice of possible sentences.
the lack of standard orthographies for the dialects and their numerous varieties pose new challenges.
neutral
train_97257
(2010), who use it as a feature to predict how well a parser will perform when applied across domains.
i have worked and continue to work in both of these areas, so I make this argument not as a criticism of others, but in a spirit of self-reflection.
neutral
train_97258
(2001) proposed to replace non-standard words with "the contextually appropriate word or sequence of words."
twitter users in the USA contain an equal proportion of men and women, and a higher proportion of young adults and minorities than in the population as a whole (Smith and Brewer, 2012).
neutral
train_97259
This prevents us from using very large minibatches.
we instead explore the idea of "minibatch" for online large-margin structured learning such as perceptron and MIRA.
neutral
train_97260
On one hand, researchers have been developing modified learning algorithms that allow inexact search (Collins and Roark, 2004;Huang et al., 2012).
consider the Lagrangian. Actually, this relaxation is not necessary for the convergence proof.
neutral
train_97261
First, except for the largest minibatch size of 48, minibatch learning generally improves the accuracy of the converged model, which is explained by our intuition that optimization with a larger constraint set could improve the margin.
the communication overhead and update time are not included.
neutral
train_97262
First, except for the largest minibatch size of 48, minibatch learning generally improves the accuracy of the converged model, which is explained by our intuition that optimization with a larger constraint set could improve the margin.
online learning for NLP typically involves expensive inference on each example for 10 or more passes over millions of examples, which often makes training too slow in practice; for example, systems such as the popular (2nd-order) MST parser (McDonald and Pereira, 2006) usually require on the order of days to train on the Treebank on a commodity machine (McDonald et al., 2010).
neutral
train_97263
For example, consider the following sentences: (1) a.
an unlexicalised parser is also likely to be less biased to domains or genres manifested in the text used to train its original ranking model.
neutral
train_97264
Experiments with a wide variety of distributional word similarity measures revealed that WeightedCosine (Rei, 2013), a directional similarity measure designed to better capture hyponymy relations, performed best.
unlexicalised parsers avoid using lexical information and select a syntactic analysis using only more general features, such as POS tags.
neutral
train_97265
This leaves room for improvement in designing a system that can more easily adapt to previously unseen data.
this is largely due to the assumption that a domain-specific, tagged training set will not be available for most target domains.
neutral
train_97266
In summary our contributions are: (a) We automatically construct a bilingual lexicon of NEs paired with the transliteration/translation decisions in two domains.
we construct a classification-based framework to automate this decision, evaluate our classifier both in the limited news and the diverse wikipedia domains, and achieve promising accuracy.
neutral
train_97267
For tokens having Fr score between 0.5 and 0.6, the decision is not obvious.
we tackle this problem in the reverse direction (translating/transliterating English NEs into Arabic).
neutral
train_97268
Phillies defeat Dodgers to take the National League Championship series.
this problem often arises in general multi-document summarization.
neutral
train_97269
The CATiB filter also resolves some POS ambiguity given information in the CATiB POS tag.
every additional morphological filter has a positive impact, and the improvement in accuracy for full Buckwalter with each new filter ranged between 0.22% and 1.18% absolute, except for the case filter, which adds almost 5%.
neutral
train_97270
We apply our baselines, TADA, TADA+filters and TADA+filters+MLE to the blind test set (see Table 2).
we will present here a preliminary error analysis of TADA's output to motivate the morphological filters presented next (Section 4.3).
neutral
train_97271
Our decoder for text normalization effectively integrates multiple normalization operations.
to avoid spurious candidates, we only generate w′ if |w| ≥ 3 and |w′| − |w| ≤ 4.
neutral
train_97272
We randomly add, delete, and substitute punctuation symbols in formal texts with equal probabilities.
also, fixing different types of informal characteristics often depends on each other.
neutral
train_97273
In practice, this hypothesis producer can propose many spurious candidates w′ for an informal word w. As such, after we replace w by w′ in the hypothesis, we require that some 4-gram containing w′ and its surrounding words in the hypothesis appears in a formal corpus.
for example, given the un-normalized English test message "yeah must sign up , im in lt25", our English-Chinese MT system translated it into "对[yeah] 必须[must] 签署[sign up] , im 在[in] lt25"; our normalization decoder normalized it into "yeah must sign up , i 'm in lt25 ."
neutral
train_97274
A simple cross-validation approach can be used in case of very small data.
20}, and measured the perplexity of the data given to the model after convergence.
neutral
train_97275
Both algorithms were implemented in Java, and the code for both is almost identical, except for the set of instructions which computes the dynamic programming equation for propagating the beliefs up in the tree.
• For each a ∈ N, we have a parameter π_a, which is the probability of a being the root symbol of a derivation.
neutral
train_97276
The minimal r required for an exact tensor decomposition can be smaller than m^2.
in the probabilistic domain, approximation by means of regular grammars is also exploited by Eisner and Smith (2005), who filter long-distance dependencies on-the-fly.
neutral
train_97277
Recently, Ott et al.
(2011), we use Amazon's Mechanical Turk service to produce the first publicly available dataset of negative deceptive opinion spam, containing 400 gold standard deceptive negative reviews of 20 popular Chicago hotels.
neutral
train_97278
Second, the effect of deception on the pattern of pronoun frequency was not the same across positive and negative reviews.
in this section we discuss our efforts to extend Ott et al.
neutral
train_97279
A two-tailed binomial test suggests that JUDGE 1 and JUDGE 2 both perform better than chance (p = 0.0002, 0.003, respectively), while JUDGE 3 fails to reject the null hypothesis of performing at-chance (p = 0.07).
first, as might be expected, negative emotion terms were more frequent, according to LIWC (Pennebaker et al., 2007), in our fake negative reviews than in the fake positive reviews.
neutral
train_97280
In this work we distinguish between two kinds of deceptive opinion spam, depending on the sentiment expressed in the review.
we create and divide 400 HITs evenly across the 20 most popular hotels in Chicago, such that we obtain 20 reviews for each hotel.
neutral
train_97281
The nonWikiRev systems perform inconsistently, heavily dependent on the characteristics of the test set in question.
firstly, a model trained directly on the corrections performs well across test sets.
neutral
train_97282
For each language, we choose up to 8,000 source language words among those that occur in the monolingual data at least three times and that have at least one translation in our dictionary.
to that work, we use a seed bilingual lexicon for supervision and multiple monolingual signals proposed in prior work.
neutral
train_97283
The drop can be mostly explained by the fact that the two sentiment lexicons we use for evaluation are finite (i.e.
we found that the gradable adjectives are a proper subset of predicative adjectives, which is in line with the observation by (Bolinger, 1972, 21) that gradable adjectives (which he calls degree words) readily occur predicatively whereas nongradable ones tend not to.
neutral
train_97284
Figure 3: A translation example of the base HPB system (above) and the system with constraints (below).
• Consistent with the conclusion in Koehn et al.
neutral
train_97285
(2009) presented models that learn phrase boundaries from an aligned dataset.
in this paper, we propose a two-level approach to exploiting predicate-argument structure reordering in a hierarchical phrase-based translation model.
neutral
train_97286
• Flattening parse trees further improves 0.4∼0.5 BLEU points on average for systems with our syntactic constraints.
they only considered the reordering between arguments and their predicates.
neutral
train_97287
Krippendorff (2004) recommends that an α of 0.8 is necessary to claim high-quality agreement, which is achieved by the MaxDiff methodology.
the unadjudicated agreement for the dataset was 67.3 measured using pair-wise agreement.
neutral
train_97288
For example, the time for tagging one sentence in English NER was reduced from 5.6 ms to 1.6 ms, shown in Table 6.
it is also necessary to build compound embedding features since they can better deal with rare words and ambiguous words.
neutral
train_97289
Supervised learning methods have achieved great successes in the field of Natural Language Processing (NLP).
despite this, the same conclusions held for chunking.
neutral
train_97290
Only users can give a rating about their satisfaction level, i.e., how they like the system and the interaction with the system.
2010 derived turn level ratings from an overall score applied by the users after the dialogue.
neutral
train_97291
(2011b) not only contains user ratings but also expert ratings which makes it a perfect candidate for our research presented in this paper.
in the data used for the experiments, the number of occurrences of the ratings was not balanced (i.e., equal for all classes), which has been identified as the most likely reason for this effect.
neutral
train_97292
We present alternative approaches to each question below.
to scale to large corpora, we propose a novel BATCH DISSIMILARITY method.
neutral
train_97293
For the larger corpora, BJAC reduces V_max by over 50% compared to the baseline, and by 23% compared to Z&I.
training LVMs on massive corpora introduces computational challenges, in terms of both time and space complexity.
neutral
train_97294
Nanba and Okumura (1999) came up with a simple schema composed of only three categories: Basis, Comparison, and Other.
all the dependency relations that appear in the citation context.
neutral
train_97295
To study the impact of using citation context in addition to the citing sentence on classification performance, we ran two polarity classification experiments.
takes a value of 1 if the current sentence starts with a conjunctive adverb (Furthermore, Accordingly, etc.).
neutral
train_97296
running the same experiments on new subjects.
running the same experiments on new subjects.
neutral
train_97297
For example a problem instance could be learning to distinguish articles about Macintosh and motorcycles MAC-MOTORCYCLES (evaluated on the 20 Newsgroups test section) using labeled data from IBM-BASEBALL (the training section).
meta-analysis is applicable to experiments with multiple datasets.
neutral
train_97298
The lower MT scores and slower learning curve of the MTurk systems are both due to the lower quality of the translations, and to the mismatch with the professional development set translations (we discuss this issue further in §4.3).
we also show that adding a Mechanical Turk reference translation of the development set improves parameter tuning and output evaluation.
neutral
train_97299
The four experimental systems have reordering models that are trained on the first 25,000 sentences of the parallel news data that have been parsed with each of the tree-to-dependency conversion schemes.
the number of relation types used in the conversion schemes proves important.
neutral