Columns:
  id          string, length 7–12
  sentence1   string, length 6–1.27k
  sentence2   string, length 6–926
  label       string, 4 classes
train_20900
The rich get richer dynamic of the DP leads to the use of a compact set of rules that is an effective feature set for NLI (Swanson and Charniak, 2012).
this same property makes rare rules harder to find.
contrasting
train_20901
It is speculative to assign causes to the discriminative rules we report, and we leave quantification of such statements to future work.
the strength of the signal, as evidenced by actual counts in data, and the high level interpretation that can be easily assigned to the TSG rules is promising.
contrasting
train_20902
For an adult speaker of a language, word segmentation from fluid speech may seem so easy that it barely needs to be learned.
pauses in speech and word boundaries are not well correlated (Cole & Jakimik, 1980), word boundaries are marked by a conspiracy of partially-informative cues (Johnson & Jusczyk, 2001), and different languages mark their boundaries differently (Cutler & Carter, 1987).
contrasting
train_20903
This results in a 22,081-word corpus, 10% fewer tokens than in the original.
it does not substantially change the lexicon; the number of distinct word types only drops from 811 to 806.
contrasting
train_20904
In all four languages, we see a similar picture: the shared features approach is generally better when one of the treebanks is very small, while the guided parsing approach is better when the treebanks are more similar in size.
for most training set sizes the combination of the two methods achieves a higher performance than either of them individually.
contrasting
train_20905
the treebank annotation style) is different.
domain adaptation and cross-treebank training can be seen as instances of the more general problem of multitask learning (Caruana, 1997).
contrasting
train_20906
In this work, f_1, f_2, and f_s are identical: all of them correspond to the feature set described by Carreras (2007).
it is certainly imaginable that f_s could consist of specially tailored features that make generalization easier.
contrasting
train_20907
Since the Talbanken treebank is twice as large as the Syntag treebank and has a surface-oriented representation that is easier to parse, this parser is useful as a guide for the Syntag parser: the improvements of the guided and combined Syntag parsers are statistically significant.
it is harder to improve the Talbanken parser, for which the baseline is much stronger.
contrasting
train_20908
Toutanova and Johnson (2008) (also, Ravi and Knight (2009)) use a simple method for predicting possible tags for unknown words: a set of the 100 most common suffixes is extracted, and then models of P(tag|suffix) are built and applied to unknown words.
these models suffer with an extremely small set of labeled data.
contrasting
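The P(tag|suffix) scheme described in this pair can be sketched as relative-frequency estimation over a tagged lexicon; a minimal sketch (the toy lexicon and suffix list below are hypothetical, not from the source):

```python
from collections import Counter, defaultdict

def build_suffix_models(tagged_words, suffixes):
    """Estimate P(tag | suffix) by relative frequency over a tagged lexicon."""
    counts = defaultdict(Counter)
    for word, tag in tagged_words:
        for s in suffixes:
            if word.endswith(s):
                counts[s][tag] += 1
    models = {}
    for s, c in counts.items():
        total = sum(c.values())
        models[s] = {tag: n / total for tag, n in c.items()}
    return models

def predict_tags(word, suffixes, models):
    """Return the tag distribution of the longest matching suffix, if any."""
    for s in sorted(suffixes, key=len, reverse=True):
        if word.endswith(s) and s in models:
            return models[s]
    return {}

tagged = [("running", "VBG"), ("walking", "VBG"), ("king", "NN")]
suffixes = ["ing", "ng"]
models = build_suffix_models(tagged, suffixes)
dist = predict_tags("sing", suffixes, models)  # falls back to the "ing" model
```

As the pair notes, such relative-frequency models become unreliable when the labeled lexicon is extremely small.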
train_20909
(2010) that good bigrams are those which have high coverage of new words: each newly covered node contributes additional (partial) counts.
by using the weights instead of full counts, we also account for the confidence assigned by LP.
contrasting
train_20910
In an unsupervised setting, there is no reason f-LDA would actually infer parameters corresponding to the three factors we have been describing.
the forums include metadata that can help guide the model: the messages are organized into forums corresponding to drug type (factor 1), and some threads are tagged with labels corresponding to routes of administration and other aspects (factors 2 and 3).
contrasting
train_20911
One option is to fix ω as η, forcing the component weights to match the provided weights.
in our case η will only be an approximation of the optimal component parameters since it is estimated from incomplete data (only some messages have tags) and the η vectors are learned using an approximate model (see below).
contrasting
train_20912
This is a two-dimensional model, since we explicitly model pairs such as (MEPHEDRONE,SNORTING) or (SALVIA,EFFECTS).
we also created word distributions for triples such as (SALVIA,ORAL,EFFECTS) by taking a mixture of the corresponding pairs: in this example, we estimate the unigram distribution from salvia documents tagged with either "Oral" or "Effects."
contrasting
train_20913
There are certainly some differences between our training set and our phrases.
the collected training samples were the closest available dataset to our purpose.
contrasting
train_20914
The metric described above only considers word overlap and ignores other semantic relations (e.g., synonymy, hypernymy) between words.
annotators write labels of their own and may use words that are not directly from the conversation but are semantically related.
contrasting
train_20915
On the blog corpus, our key phrase extraction method (Extraction-BL) fails to beat the other baselines (Lead-BL and Freq-BL) in the majority of cases (except R-f1 for Lead-BL).
in the email dataset, it improves the performance over both baselines in both evaluation metrics.
contrasting
train_20916
Incorporating a better source of prior knowledge in the generalization phase (e.g., YAGO or DBpedia) is also an interesting research direction towards a better phrase aggregation step.
we plan to apply a ranking strategy to select the top candidate phrases generated by our framework.
contrasting
train_20917
In this paper, we describe an improved method for combining partial captions into a final output based on weighted A * search and multiple sequence alignment (MSA).
to prior work, our method allows the tradeoff between accuracy and speed to be tuned, and provides formal error bounds.
contrasting
train_20918
Most of the previous research on real-time captioning has focused on Automated Speech Recognition (ASR) (Saraclar et al., 2002;Cooke et al., 2001;Pražák et al., 2012).
experiments show that ASR systems are not robust enough to be applied for arbitrary speakers and in noisy environments (Wald, 2006b;Wald, 2006a;Bain et al., 2005;Bain et al., 2012;Cooke et al., 2001).
contrasting
train_20919
The algorithm is designed to be used online, and hence has high speed and low latency.
due to the incremental nature of the algorithm and due to the lack of a principled objective function, it is not guaranteed to find the globally optimal alignment for the captions.
contrasting
train_20920
Despite the significant reduction in the search space, the A * search may still need to explore a large number of nodes, and may become too slow for real-time captioning.
we can further improve the speed by following the idea of weighted A * search (Pohl, 1970).
contrasting
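The weighted A* idea referenced in this pair (Pohl, 1970) inflates the heuristic term, f(n) = g(n) + w·h(n), trading solution quality for speed; a minimal sketch on a toy graph (the graph, edge costs, and heuristic values are hypothetical, not from the source):

```python
import heapq

def weighted_astar(graph, h, start, goal, w=1.0):
    """A* search with inflated heuristic f(n) = g(n) + w * h(n).
    w = 1 is standard A*; larger w expands fewer nodes, and the
    returned path cost is at most w times the optimum."""
    frontier = [(w * h[start], 0.0, start, [start])]
    best_g = {start: 0.0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nbr, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(nbr, float("inf")):
                best_g[nbr] = g2
                heapq.heappush(frontier, (g2 + w * h[nbr], g2, nbr, path + [nbr]))
    return None

graph = {"s": [("a", 1), ("b", 4)], "a": [("g", 5)], "b": [("g", 1)]}
h = {"s": 3, "a": 4, "b": 1, "g": 0}
cost, path = weighted_astar(graph, h, "s", "g", w=1.0)
```

Raising `w` above 1 makes the search greedier, which matches the speed/accuracy tradeoff discussed in the surrounding rows.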
train_20921
WER has several nice properties: 1) it is easy to estimate, and 2) it tries to preserve word ordering.
WER does not account for the overall 'readability' of text and thus does not always correlate well with human evaluation (Wang et al., 2003; He et al., 2011).
contrasting
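The WER discussed in this pair is word-level edit distance normalized by reference length; a minimal sketch (the example strings are hypothetical):

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + insertions + deletions) / len(reference),
    computed via word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j]: edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / len(ref)

wer = word_error_rate("the cat sat on the mat", "the cat sat on mat")  # one deletion
```

Because the metric only counts edits, a hypothesis with scrambled but complete words can still score well, which is the readability concern raised above.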
train_20922
The widely-used BLEU metric has been shown to agree well with human judgment for evaluating translation quality (Papineni et al., 2002).
unlike WER, BLEU imposes no explicit constraints on the word ordering.
contrasting
train_20923
The algorithm can be tailored to real-time by using a larger heuristic weight.
we can produce better transcripts for offline tasks by choosing a smaller weight.
contrasting
train_20924
We evaluated our models on a set of retellings from 70 non-overlapping subjects with a mean age of 88.5, half of whom had received a diagnosis of MCI (test set).
to the unsupervised word-alignment based method, the method outlined here required manual story element labels of the retellings.
contrasting
train_20925
In offline scenarios where high latencies are permitted, several adaptation strategies (speaker, language model, translation model), denser data structures (Nbest lists, word sausages, lattices) and rescoring procedures can be utilized to improve the quality of end-to-end translation.
realtime speech-to-text or speech-to-speech translation demand the best possible accuracy at low latencies such that communication is not hindered due to potential delay in processing.
contrasting
train_20926
Ideally, one would like to train the models on entire talks.
such corpora are not available in large amounts.
contrasting
train_20927
We leverage the IWSLT TED campaign by using identical development (dev2010) and test data (tst2010).
English-Spanish is our target language pair as our internal projects cater mostly to this pair.
contrasting
train_20928
For example, the input text we use will be chunked into the NC we and the VC use, which will be translated incorrectly as nosotros•usar; the infinitive usar is selected rather than the properly conjugated form usamos.
there is a marked improvement in translation accuracy with increasingly larger chunk sizes (lgchunk1, lgchunk2, and lgchunk3).
contrasting
train_20929
The non-linguistic segmentation using fixed word length windows also performs well, especially for the longer length windows.
longer windows (win15) increase the latency and any fixed length window typically destroys the semantic context.
contrasting
train_20930
Possible fixes to this problem include using a proper sentence-level metric such as METEOR (Denkowski and Lavie, 2011) or a pseudo-corpus from the last few updates (Chiang et al., 2008).
in light of the result from section 3.1 that tuning on the dev set is still better than tuning on a held-out portion of the training data, we observe that tuning a corpus level metric on a highquality dev set from the same domain as the test set probably leads to the best translation quality.
contrasting
train_20931
Even someone who does not completely understand them can gain some meaning by observing common words such as "Oracle", "database" and "programming".
checking the Google results for "Oracle Database" or "programming languages", we will find little relatedness between them and C_p.
contrasting
train_20932
It achieves 19.8% by micro-averaged F1 score, ranking 11th out of the 19 systems submitted to the competition (Kim et al., 2010).
by adding the structural features used by HUMB into CTI, we can improve the performance by around 6%, making our results close to that of HUMB.
contrasting
train_20933
However, this will not affect our method, because what it essentially measures is a term's informativeness among a list of terms appearing in the same context.
for keyword extraction, a topic with a rich literature, there are, to the best of our knowledge, no publicly available large-scale datasets, which makes SemEval-2010 the best available.
contrasting
train_20934
Here, we concentrate on developing and evaluating automatic procedures to learn the main concepts of a domain and at the same time auto-annotate texts so that they become available for training information extraction or text summarization applications.
it would be naive to think that in the current state of the art we would be able to learn all knowledge from text automatically (Poon and Domingos, 2010;Biemann, 2005;Buitelaar and Magnini, 2005).
contrasting
train_20935
We have found that the clustering-based procedure is very competitive when presented with gold chunks.
the iterative learning procedure performs very well when presented with automatic chunks in all tested domains and the two languages.
contrasting
train_20936
score(k) = (Σ_{w∈k} TextRank(w)) / (length(k) + 1). The small vocabulary size as well as the high redundancy within the set of related sentences are two factors that make keyphrase extraction easier to achieve.
a large number of the generated keyphrases are redundant.
contrasting
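The keyphrase scoring formula in this pair can be sketched as follows (the TextRank values below are hypothetical placeholders; computing actual TextRank scores is out of scope here):

```python
def keyphrase_score(phrase, textrank):
    """score(k) = sum of TextRank(w) over words w in k, divided by (length(k) + 1).
    The +1 in the denominator slightly penalizes longer candidate phrases."""
    words = phrase.split()
    return sum(textrank.get(w, 0.0) for w in words) / (len(words) + 1)

textrank = {"keyphrase": 0.9, "extraction": 0.6, "easy": 0.2}
score = keyphrase_score("keyphrase extraction", textrank)  # (0.9 + 0.6) / 3
```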
train_20937
It was shown to correlate significantly with human judgments (Clarke and Lapata, 2006) and behave similarly to BLEU (Unno et al., 2006).
this metric is not entirely reliable as it depends on parser accuracy and the type of dependency relations used (Napoles et al., 2011).
contrasting
train_20938
This model learns the cost of putting a word immediately before another word and finds the best reordering by solving an instance of the Traveling Salesman Problem (TSP).
for efficiently solving the TSP, the model is restricted to pairwise features which examine only a pair of words and their neighborhood.
contrasting
train_20939
If the exact segmentation of the source sentence were known, then the model could have used the information that the word gaadi {car} appears in a segment whose head is the noun chaalak {driver}, and hence it's not unusual to put gaadi {car} before bola {said} (because the construct "NP said" is not unusual).
since the segmentation of the source sentence is not known in advance, we use a heuristic (explained later) to find the segmentation induced by a reordering.
contrasting
train_20940
(2011) used only manually aligned data for training the TSP model.
we use machine aligned data in addition to manually aligned data for training the TSP model as it leads to better performance.
contrasting
train_20941
Translation models in statistical machine translation can be scaled to large corpora and arbitrarily-long phrases by looking up translations of source phrases "on the fly" in an indexed parallel corpus using suffix arrays.
this can be slow because on-demand extraction of phrase tables is computationally expensive.
contrasting
train_20942
The feature parameters Φ can be roughly divided into two categories: dense features that measure the plausibility of each translation rule from a particular aspect, e.g., the rule translation probabilities p(f|e) and p(e|f); and sparse features that fire when a certain phenomenon is observed, e.g., when a frequent word pair co-occurred in a rule.
to λ, feature parameters in Φ are usually modeled by generative models for dense features, or by indicator functions for sparse ones.
contrasting
train_20943
An alternative approach is to use log-linear interpolation, so that the interpolation weights can be easily optimised in tuning (Bertoldi and Federico, 2009; Banerjee et al., 2011).
this effectively multiplies the probabilities across phrase tables, which does not seem appropriate, especially for phrases absent from one table.
contrasting
train_20944
All of the NLP challenges of MSA (e.g., optional diacritics and spelling inconsistency) are shared by DA.
the lack of standard orthographies for the dialects and their numerous varieties pose new challenges.
contrasting
train_20945
We did not use a language model to pick the best path; instead we kept the ambiguity in the lattice and passed it to our SMT system.
in this paper, we run ELISSA on untokenized Arabic, we use feature, lemma, and surface form transfer rules, and we pick the best path of the generated MSA lattice through a language model.
contrasting
train_20946
The Penn Treebank is composed of professionally-written news text from 1989, when minorities comprised 7.5% of the print journalism workforce; the proportion of women in the journalism workforce was first recorded in 1999, when it was 37% (American Society of Newspaper Editors, 1999).
Twitter users in the USA contain an equal proportion of men and women, and a higher proportion of young adults and minorities than in the population as a whole (Smith and Brewer, 2012).
contrasting
train_20947
The internal differences within these social media -at least as measured by the distinctions drawn in Table 2 -are much smaller than the differences between these corpora and the PTB standard.
in the long run, the effectiveness of this approach will be limited, as it is clear from Figure 1 that social media is a moving target.
contrasting
train_20948
Online learning algorithms such as perceptron and MIRA have become popular for many NLP tasks thanks to their simpler architecture and faster convergence over batch learning methods.
while batch learning such as CRF is easily parallelizable, online learning is much harder to parallelize: previous efforts often witness a decrease in the converged accuracy, and the speedup is typically very small (∼3) even with many (10+) processors.
contrasting
train_20949
(2012) still requires 5-6 hours to train a very fast parser.
with the increasing popularity of multicore and cluster computers, there is a growing interest in speeding up training via parallelization.
contrasting
train_20950
Dot product both sides of Equation 2 with the unit oracle vector u.
from the two bounds we have that within at most t ≤ R²/δ² minibatch updates MIRA will converge.
contrasting
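The elided two-bound argument in this pair appears to be the standard perceptron-style convergence proof; a hedged reconstruction (the induction steps are assumed, not quoted from the source):

```latex
% Lower bound: each update gains margin at least \delta against the oracle u
\mathbf{u} \cdot \mathbf{w}_t \ge t\,\delta
% Upper bound: each update grows the squared norm by at most R^2
\|\mathbf{w}_t\|^2 \le t\,R^2
% Combining the two, with \|\mathbf{u}\| = 1:
t\,\delta \;\le\; \mathbf{u} \cdot \mathbf{w}_t \;\le\; \|\mathbf{w}_t\| \;\le\; \sqrt{t}\,R
\quad\Longrightarrow\quad t \le R^2 / \delta^2
```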
train_20951
Similar intuitions have been used to motivate the acquisition of bilexical features from background corpora for improving parser accuracy.
previous work has focused on including these statistics as auxiliary features during supervised training.
contrasting
train_20952
(2011) extracted ngram counts from Google queries and a large corpus to improve the MSTParser.
to previous work, we refer to our approach as self-learning because it differs from self-training by utilising statistics found using an initial parse ranking model to create a separate unsupervised reranking component, without retraining the baseline unlexicalised model.
contrasting
train_20953
Finally, we know that w 1 and w 2 are in a sentence together but cannot assume that there is a dependency relation between them.
we can choose to think of each sentence as a fully connected graph, with an edge going from every lemma to every other lemma in the same sentence.
contrasting
train_20954
While their work also combines sentiment analysis with collaborative filtering, the purpose is to improve the accuracy of item recommendation.
we borrow the idea and technique of collaborative filtering to improve user relation mining from online text.
contrasting
train_20955
The S-Joint model performs significantly better, correctly guessing 2-3 items for each output avatar.
as we will see in the manual evaluation, many of the non-matching parts it produces are still a good fit for the query.
contrasting
train_20956
To obtain the supertagged dependency linkage, the most intuitive way is through a LTAG parser (Schabes et al., 1988).
this could be very slow as it has time complexity of O(n^6).
contrasting
train_20957
The PR algorithm limits the length of the summary to approximately 140 characters, the maximum length of a Twitter post.
often the summary sentence produced has extraneous parts that are included because they appear frequently in the posts being summarized, but these parts make the summary malformed or too wordy.
contrasting
train_20958
For example, to create parallel Chinese-English training texts for translation of social media texts, it takes three minutes on average to translate an informally written social media text of eleven words from Chinese into English.
it takes thirty seconds to normalize the same message, a six-fold increase in speed.
contrasting
train_20959
Most previous work on normalization of social media text focused on word substitution (Beaufort et al., 2010;Gouws et al., 2011;Han and Baldwin, 2011;Liu et al., 2012).
we argue that some other normalization operations besides word substitution are also critical for subsequent natural language processing (NLP) applications, such as missing word recovery (e.g., zero pronouns) and punctuation correction.
contrasting
train_20960
All the above work focused on normalizing words.
our work also performs other normalization operations such as missing word recovery and punctuation correction, to further improve machine translation.
contrasting
train_20961
We manually analyzed the effect of our text normalization decoder on MT.
for example, given the un-normalized English test message "yeah must sign up , im in lt25", our English-Chinese MT system translated it into "对[yeah] 必须[must] 签署[sign up] , im 在[in] lt25"; our normalization decoder normalized it into "yeah must sign up , i 'm in lt25 ."
contrasting
train_20962
(2003) was the first to evaluate a model trained on incorrect usage as well as artificial errors for the task of correcting several different error types, including prepositions.
with limited training data, system performance was quite poor.
contrasting
train_20963
On a test set with few corrections (NUCLE), training on well-edited text (and without using thresholds) performs particularly poorly.
when evaluating on the FCE test set which contains far more errors, training on well-edited text performs reasonably well (though statistically significantly worse than training on all of the Wikipedia errors).
contrasting
train_20964
This approach significantly increases the number of entries in the ReverseDictionary.
there is an issue with this approach.
contrasting
train_20965
For the extraction of gradable adjectives, we rely, on the one hand, on the part-of-speech labels JJR (comparative) and JJS (superlative).
we also consider adjectives being modified by either more or most.
contrasting
train_20966
We also experimented with other related weaklysupervised extraction methods, such as mutual information of two adjectives at the sentence level (or even smaller window sizes).
using conjunctions largely outperformed these alternative approaches so we only pursue conjunctions here.
contrasting
train_20967
Since they occur very frequently, one might exclude some of them by just ignoring the most frequent adjectives.
there are also other types of adjectives, especially pertainyms (political, federal, economic, public, American, foreign, local, military, financial and national) that appear on this list which could not be excluded by that heuristic.
contrasting
train_20968
Argument-Argument Reordering estimates the reordering probability between two arguments, i.e., argument-argument pattern on the source side (AA-S) and its counterpart on the target side (AA-T).
because arguments are driven and pivoted by their predicates, we also include the predicate in the patterns of AA-S and AA-T. Let's revisit Figure 2(a).
contrasting
train_20969
(2009) presented models that learn phrase boundaries from an aligned dataset.
semantics motivated SMT has also seen an increase in activity recently.
contrasting
train_20970
As stressed, using derivation tree fragments allows the comparison to abstract away from interference by irrelevant modifiers, an issue with Dickinson and Meurers (2003).
in the context of IAA, this advantage of KBM plays out in a different way, in that it allows for a precise pinpointing of the inconsistencies.
contrasting
train_20971
First, the choice in sense inventory plays an important role in gathering high-quality annotations; fine-grained inventories such as WordNet often contain several related senses for polysemous words, which untrained annotators find difficult to correctly apply in a given context (Chugur et al., 2002;McCarthy, 2006;Palmer et al., 2007;Rumshisky and Batiukova, 2008;Brown et al., 2010).
many agreement studies have restricted annotators to using a single sense, which can significantly lower inter-annotator agreement (IAA) in the presence of ambiguous or polysemous usages; indeed, multiple studies have shown that when allowed, annotators readily assign multiple senses to a single usage (Véronis, 1998;Murray and Green, 2004;Erk et al., 2009;Passonneau et al., 2012b).
contrasting
train_20972
If the third annotator agrees with either of the first two, the instance is marked as a case of agreement.
the unadjudicated agreement for the dataset was 67.3 measured using pair-wise agreement.
contrasting
train_20973
Compound features are difficult to build on dense embeddings.
they are easy to induce from the sparse embedding clusters proposed in this paper.
contrasting
train_20974
Moreover, in the PARADISE framework, only quality measurement for the whole dialogue (or system) is allowed.
this is not suitable for using quality information for online adaptation of the dialogue (cf.
contrasting
train_20975
Only users can give a rating about their satisfaction level, i.e., how they like the system and the interaction with the system.
user ratings are expensive as elaborated in Section 1.
contrasting
train_20976
The time complexity of LVM training has been addressed through parallel training algorithms (Wolfe et al., 2008;Chu et al., 2006;Das et al., 2007;Newman et al., 2009;Ahmed et al., 2012;Asuncion et al., 2011), which reduce LVM training time through the use of large computational clusters.
the memory cost for training LVMs remains a bottleneck.
contrasting
train_20977
Assigning the dissimilar documents earlier helps ensure that more greedy node selections are informed by these impactful assignments.
DISSIMILARITY has a prohibitive time complexity of O(TN^2), because we must compare T nodes to an order of N documents for a total of N iterations.
contrasting
train_20978
It is also important to consider the additional time required to execute the partitioning methods themselves.
in practice this additional time is negligible.
contrasting
train_20979
Zhu & Ibarra (1999) present theoretical results and propose techniques for the general partitioning task we address.
to that work, we focus on the case where the data to be partitioned is a large corpus of text.
contrasting
train_20980
These corrections include (A/Â/Ǎ/Ā), y/ý and h/h transformations.
MADA-ARZ, as a codafication technique, uses the context of the word, which makes it a contextual modeling approach unlike CEC and MLE.
contrasting
train_20981
The lower MT scores and slower learning curve of the MTurk systems are both due to the lower quality of the translations, and to the mismatch with the professional development set translations (we discuss this issue further in §4.3).
by interpolation of the MT scores, we find that the same MT performance can be obtained by using twice the amount of MTurk translated data as professional data.
contrasting
train_20982
In theories stretching back to Karttunen (1976), indefinites function primarily to establish new discourse entities, and should be able to participate in coreference chains, but here the association with such chains is negative.
interactions explain this fact (see Table 4 and our discussion of it).
contrasting
train_20983
Each syllable-size segment contains the representation of exactly one vowel phoneme, so that the number of segments matches the number of syllables.
the hyphenation need not correspond exactly to the actual syllable breaks.
contrasting
train_20984
For a syllabification to be accepted, all its syllables must satisfy the four constraints.
if this results in rejection of all possible syllabifications, the constraints are gradually relaxed starting from the weakest.
contrasting
train_20985
If we had a sufficiently large training set of pronunciation-respelling pairs, we could train a machine learning algorithm to directly generate respellings for any strings of English phonemes.
such a training set is not readily available.
contrasting
train_20986
It is similar to the phonemic respelling approach described in Section 4.3 in that it converts each phoneme to a letter sequence.
the mappings depend on adjacent phonemes, as well as on the CV pattern of the current syllable.
contrasting
train_20987
The main difference between the L2P model described in this section and the P2L model from Section 5.2 is that the input and output data are reversed.
the L2P model is not simply a mirror image of the P2L model.
contrasting
train_20988
Neither the context-sensitive respeller nor dictionary lookup seem to contribute much to eSpeak's performance.
disabling the P2L generator produces a significant drop in word accuracy, while removing the L2P correctness filter almost doubles the phoneme error rate.
contrasting
train_20989
We then calculate pairwise accuracy as the percentage of pairs (w_p, w_n) (w_p ∈ P_seeds and w_n ∈ N_seeds) where PN_{w_p} > PN_{w_n}.
this metric does not address the case where the degree of a word in one stylistic dimension is overestimated because of its status on a parallel dimension.
contrasting
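The pairwise accuracy metric described in this pair can be sketched as follows (the PN scores and seed words below are hypothetical):

```python
from itertools import product

def pairwise_accuracy(pos_seeds, neg_seeds, pn):
    """Fraction of (w_p, w_n) pairs with PN(w_p) > PN(w_n),
    taken over all positive/negative seed combinations."""
    pairs = list(product(pos_seeds, neg_seeds))
    correct = sum(1 for wp, wn in pairs if pn[wp] > pn[wn])
    return correct / len(pairs)

pn = {"good": 0.8, "great": 0.6, "bad": 0.3, "awful": 0.7}
acc = pairwise_accuracy(["good", "great"], ["bad", "awful"], pn)  # 3 of 4 pairs ordered correctly
```

As the pair notes, this metric checks only relative ordering within one dimension, so it cannot detect a score inflated by a parallel stylistic dimension.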
train_20990
These algorithms use examples from each domain to learn a general model that is also sensitive to individual domain differences.
many data sets include a host of metadata attributes, many of which can potentially define the domains to use.
contrasting
train_20991
The use of the word "slew", in this case, has negative connotations, but only if the whole statement is construed by the perspective of the reader to represent an opinion.
if Lloyd Hession is a provider of new network access control solutions, then the use of "open" may convert this negative context into a positive context.
contrasting
train_20992
While these types of heuristics support longer-distance syntactic relations, they tend to focus on cases where some form of semantic compositionality holds.
consider this sentence from the IT business press: The contract is consistent with the desktop computing outsourcing deals Citibank awarded EDS and Digital Equipment in 1996.
contrasting
train_20993
The reason f_fac outperforms f_SC is that f_SC primarily controls for over-coverage of any element not in the subset via the α saturation hyper-parameter.
it does not ensure that every non-selected element has good representation in the subset.
contrasting
train_20994
While the previous studies use in-domain data sets for simulation, it is quite common to collect large amounts of OOD text data from the web.
given the nature of web data, some kind of selection mechanism is needed to ensure quality.
contrasting
train_20995
Doing the same rank-based comparison for the CMs this time, we observe that the syllable- and morph-based models have the same average rank of 1.5, whereas the word-based model has 2.8.
a closer look reveals that the syllable-based CM paired with NO-LM is an outlier because the NO-LM approach allows variety at the output, but when the unit of the confusion model is as small as syllables, it produces too much variety, which deteriorates the discriminative model.
contrasting
train_20996
IB1 achieves an optimal ranking after just one iteration and thereafter scores get worse due to semantic drift.
pruning helps avoid semantic drift for IB2, which attains an optimal score after 2 iterations and achieves relatively constant scores for several iterations.
contrasting
train_20997
In contrast, pruning helps avoid semantic drift for IB2, which attains an optimal score after 2 iterations and achieves relatively constant scores for several iterations.
during iteration 9 an incorrect pattern is kept and this at once leads to a drastic loss in accuracy, showing that semantic drift is only deferred and not completely eliminated.
contrasting
train_20998
In other words, iterative bootstrapping is not benefitting from the information provided by the additional labeled data, and thus has a poor learning performance.
for our method based on hitting times, the performance continually improves as the seed set size is increased.
contrasting
train_20999
In such a tagging scheme, individual binary classifiers for each tag are independent of one another.
recent studies have found merit in segmenting each message into functional units and assigning a single DA to each segment (Hu et al., 2009).
contrasting