Dataset schema (sentence-pair classification):

  id         string, length 7 to 12
  sentence1  string, length 6 to 1.27k
  sentence2  string, length 6 to 926
  label      string, 4 classes
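The rows below are stored flat: each example occupies four consecutive lines (id, sentence1, sentence2, label). A minimal sketch of parsing this layout into records, assuming exactly that four-line grouping; the names `PairExample` and `parse_rows` are hypothetical, not part of the dataset's tooling:

```python
from dataclasses import dataclass

@dataclass
class PairExample:
    """One row of the sentence-pair dataset: an id, two sentences, and a label."""
    id: str
    sentence1: str
    sentence2: str
    label: str

def parse_rows(lines):
    """Group flat lines (id, sentence1, sentence2, label repeated) into examples.

    Assumes the input is a list of stripped lines in exactly that order;
    trailing incomplete groups are dropped.
    """
    examples = []
    for i in range(0, len(lines), 4):
        chunk = lines[i:i + 4]
        if len(chunk) == 4:
            examples.append(PairExample(*chunk))
    return examples

# Example input mirroring the first record below.
rows = [
    "train_97700",
    "First, we experiment with the model for joint learning of two classifiers.",
    "we give a pseudocode in Algorithm 1 that trains this model in an online fashion.",
    "neutral",
]
examples = parse_rows(rows)
```

In practice one would load such a dataset through a dataset library rather than hand-parsing, but the four-field record structure is the same either way.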
train_97700
First, we experiment with the model for joint learning of two classifiers coupled with thread-level inference (Section 3.1).
we give a pseudocode in Algorithm 1 that trains this model in an online fashion using feedback from the loopy belief propagation (LBP) inference algorithm (to be described later in Section 3.1.1).
neutral
train_97701
The dataset is split into training, development and test sets, with 2,600, 300, and 329 questions, and 16,541, 1,645, and 1,976 answers, respectively.
subsequent work has further merged Potential under BAD ( , and has used for evaluation F 1 with respect to the Good category (or just accuracy).
neutral
train_97702
We also included some rules to consider special anonymization tokens in the SMS dataset (Chen and Kan, 2013).
in this work, we focus on tackling these issues while making the following two main contributions: • We build a new corpus of SMS data that is fully annotated with noun phrase information.
neutral
train_97703
All models were built by us using Java, and were optimized with L-BFGS.
the texts are highly informal and noisy, with misspelling errors and without grammatical structures.
neutral
train_97704
Only exact matches are considered correct.
this in turn will give us new and more realistic data which we can use to extend the corpus and to improve the semantic parser.
neutral
train_97705
Second, we show that a parser read off the corpus achieves promising parsing accuracy and can be used to adapt SMT to multilingual database access.
issuing a query that is executable against the OSM database still requires detailed knowledge of database internals, something that cannot be expected from a layman user.
neutral
train_97706
In the disC-trained models, the error signal within a time step (i.e.
table 1 displays some statistics of our dataset.
neutral
train_97707
DENSIFIER differs from this work in that it does not need a text corpus, but can transform existing, publicly available word embeddings.
we then perform stochastic gradient descent (SGD).
neutral
train_97708
This suggests that a single dimension is sufficient to encode all sentiment information needed for sentiment lexicon creation.
for frequency, we exploit the fact that word2vec stores words in frequency order; thus, the ranking provided by word2vec is our lexicon resource for frequency.
neutral
train_97709
Simple morphological features ( §4.4) also seem to be meaningful.
for that, we consider the view annotation in the MPQA corpus.
neutral
train_97710
We also add problem-specific constraints that specify which clusters cannot be merged together, but instead of manually creating the cannot-links between specific words, our cannot-link constraints are automatically calculated during the clustering process.
a few methods have been proposed to identify implicit features, e.g., using co-occurrence associations between implicit and explicit features (Su et al., 2006;Hai et al., 2011;Zhang and Zhu, 2013), or leveraging lexical relations of words in dictionaries (Fei et al., 2012).
neutral
train_97711
paraphrases and semantic classes of heads and modifiers, but also proposed useful new features tailored to our task.
for example, heads having the hypernym Verbrechen (crime) are typically contained in compounds whose modifiers represent neither a holder nor a target, such as Steuervergehen (tax offense) or Autodiebstahl (car theft).
neutral
train_97712
As an implementation, we use SVM light (Joachims, 1999).
the aim has been to predict verbs for those compounds that match those abstract relations (e.g.
neutral
train_97713
(The Pearson correlation coefficient between the two sets of sentiment scores for each lexicon was also at least 0.98.)
the created lexicons capture sentiment associations at a fine level of granularity.
neutral
train_97714
The annotators were presented with four terms at a time, and asked which term is the most positive (or least negative) and which is the most negative (or least positive).
as the difference in sentiment starts getting larger, the frequency with which the two terms are chosen as most positive begins to diverge.
neutral
train_97715
Such a verb resource can be useful to aid KB relation extraction.
3), where we do not incorporate constraints and classify each test instance into a single relation.
neutral
train_97716
Then, we apply convolution and pooling to each of the contexts separately.
we refer to this subset as NEwS ⊂ .
neutral
train_97717
We call this model CNNpieceExt.
the extraction of structured information from natural language text is challenging because one relation can be expressed in many different ways.
neutral
train_97718
Parallel approaches do exist (Snoek et al., 2012;González et al., 2015), but we find it easy enough to harness parallel computation in decoding tuning sets and by decoupling BLEU measurements from speed measurements.
the maximum number of hypotheses allowed to survive histogram pruning in each decoding stack.
neutral
train_97719
We address this by distributing computational paths according to different translation pairs over multiple GPUs, following (Ding et al., 2014).
since this attention mechanism was introduced to the encoder-decoder network for machine translation, neural machine translation, which is purely based on neural networks to perform full end-to-end translation, has become competitive with the existing phrase-based statistical machine translation in many language pairs Gulcehre et al., 2015;Luong et al., 2015b).
neutral
train_97720
So far we have considered a conditional model of the target given the source, modelling p(t|s).
these correspond to local diagonal alignments or one-to-many alignments, respectively.
neutral
train_97721
These sums represent the total alignment score for the surrounding source words, similar to fertility in a traditional latent variable model, which is the sum over binary alignment random variables.
it is well established for latent variable translation models that the alignments improve if p(s|t) is also modelled and the inferences of both directional models are combined -evidenced by the symmetrisation heuristics used in most decoders , and also by explicit joint agreement training objectives (Liang et al., 2006;Ganchev et al., 2008).
neutral
train_97722
All of the exposition and results in this paper use this factorization, though many of the techniques we present later could be applied easily to the other factorizations described in .
indeed, in Section 5.3 we present quantitative and qualitative analysis of our results which further confirms this hypothesis: the LSTM and USchema models each perform better on different pattern lengths and are characterized by different precision-recall tradeoffs.
neutral
train_97723
It assumes that all vectors of entities and relations lie on a single vector space.
in Figure 1-(a), e 1 , e 2 , and e 3 are placed linearly.
neutral
train_97724
That is, when a triple (h, r, t) is given, h and t plays different roles.
for this, it introduces projection vectors for entities and relations, and then constructs the mapping matrices by multiplying these entity and relation projection vectors.
neutral
train_97725
Due to the simplicity of role-specific projection of entity vectors, it can be applied to various translation-based embeddings.
the existing embeddings treat them equally and embed them into a space in the same way.
neutral
train_97726
For this purpose, we adopt a head and a tail space mapping matrices of M h ∈ R n×n and M t ∈ R n×n .
the improvement in FB15K is remarkable.
neutral
train_97727
We also aim for the goal that similarity values of all found important word in- teractions should be maximized.
neural networks and distributed representations can alleviate such sparsity, thus neural network-based models are widely used by recent systems for the STS problem (He et al., 2015;Tai et al., 2015;Yin and Schütze, 2015).
neutral
train_97728
We are grateful for support from NSF Award 1464553 and the DARPA LORELEI Program.
our model has two main parts: an encoder and a decoder.
neutral
train_97729
(source) language-independent, emphasising general effects of the process of translation.
(2013), we estimate surprisal in three ways, at the word, part-of-speech and syntax levels, based on ngram language models and language models trained on unlexicalised part-of-speech sequences and flattened syntactic trees.
neutral
train_97730
While we describe how Translationese and Interpretese are different and characterize how they differ, the contribution of our work is not just examining an interesting, important dialect.
our hypothesis is that tactics used by interpreters roughly fall in two non-exclusive categories: (i) delay minimization, to enable prompt translation by arranging target words in an order similar to the source; (ii) memory footprint minimization, to avoid overloading working memory by reducing communicated information.
neutral
train_97731
The input layer consists both source and target language word, which is in one-hot representation.
(3) If current source/target word is not aligned to any target/source words, we introduce a null token in its opposite side, and annotate this word pair as f ollow reordering type.
neutral
train_97732
In order to include more context information for determining reordering, we propose to use a recurrent neural network, which has been shown to perform considerably better than standard feed-forward architectures in sequence prediction (Mikolov et al., 2011).
for other orientation types, such as LR and MSLR are also widely used, whose definition can be found on Moses official website 1 .
neutral
train_97733
Then we employ an n-gram language model (LM) to score these candidates and select the one with the lowest perplexity as final result.
another option is to pre-process the input sentence by inserting possible DPs with the DP generation model (Section 2.2) so that the DP-inserted input (Input ZH+DPs) is translated.
neutral
train_97734
For example, Chen and Ng (2013) propose an SVM classifier using 32 features including lexical, syntax and grammatical roles etc., which are very useful in the ZP task.
the general method is to make the input with N -best DPs into a confusion network.
neutral
train_97735
Above, u and v are the parameters of the model, and h a and h p are learned feature embeddings of the local mention context and the pairwise affinity between a mention and an antecedent, respectively.
yet, state-of-the-art performance can be achieved with systems treating each mention prediction independently, which we attribute to the inherent difficulty of crafting informative clusterlevel features.
neutral
train_97736
Singleton detection examines whether a phrase belongs to a coreference chain regardless of being anaphor or antecedent.
it is a very challenging task in natural language processing and it is still far from being solved, i.e.
neutral
train_97737
With different approximation methods, (4) will have different equivalent forms, e.g.
using only word embeddings is not sufficient to represent complex lexical features (e.g.
neutral
train_97738
Finally, in the TOEFL benchmark, all contexts except for SUB, perform comparably.
13 3) Coreference Resolution (COREF) We used the Berkeley Coreference System (Durrett and Klein, 2013), which achieves near state-of-the-art results with a log-linear supervised model.
neutral
train_97739
Since the size of English vocabulary W may be up to 10 6 scale, hierarchical softmax and negative sampling (Mikolov et al., 2013b) are applied during training to learn the model efficiently.
moreover, Chinese characters are more ambiguous than words.
neutral
train_97740
We propose a similaritybased method to learn Chinese word and character embeddings jointly.
using CBOW to learn Chinese word embeddings directly may have some limitations.
neutral
train_97741
In this work, we advocate another approach and show that it is simpler and more effective to ignore unattached words and many-to-many alignments: we claim that training a parser from a corpus of highquality annotated (albeit partially) data will result in better parsing performances than a parser trained from fully-annotated but noisy data.
this solution comes at the expense of deleting words or creating fake dependencies in the target sentence, which may introduce unreliable annotations in the target data.
neutral
train_97742
Our results so far are sobering: shortly after a static model is deployed performance degrades to a model using two orders of magnitude less training data (compare the drop in §6.2 with Figure 1).
the hour of the day has much more significant impact on accuracy; some times of the day are significantly easier and harder than the average.
neutral
train_97743
Therefore, we conclude that the gains we obtained through interlocking the phrases, could not have been obtained by simply increasing the amount of searching performed by the baseline system.
their experiments on translating Arabic-English text from the news domain were encouraging.
neutral
train_97744
This supports the hypothesis that reading patterns can help to distinguish good from bad translations.
combinations with BLEU When we combined BLEU with the translation jumps, we observed an increment in the τ to 0.37.
neutral
train_97745
One of the issues arises from inexact algorithms adopted in order to solve the hard joint search problem.
if we use the existing parsers to only predict unlabeled trees, we also obtain speed improvement, even for the highly speed-optimzed Stanford Neural Parser.
neutral
train_97746
This framework looks appealing in order to test our assumption that segmentation and parsing are mutually informative, while leaving the exact flow of information to be learned by the system itself: we do not postulate any priority between the tasks nor that all attachment decisions must be taken jointly.
the good scores for Sequoia could be explained by the larger MWE coverage.
neutral
train_97747
Such a lexical analysis is particularly relevant to perform deep semantic analysis.
the lexical dimension tends to help syntactic predictions.
neutral
train_97748
Word-sentiment associations are commonly captured in sentiment lexicons.
the oracle 'majority label' baseline assigns to all instances the most frequent polarity label in the dataset.
neutral
train_97749
Our improvements are primarily due to better performance on unseen words.
we do not use this data for training, but only for evaluation, so our experiments use unsupervised (or weakly supervised) domain adaptation.
neutral
train_97750
Using unlabeled data to estimate a target distribution for importance sampling, or for semi-supervised learning (Søgaard, 2013), as well as wide-coverage, crowd-sourced tag dictionaries to obtain more robust predictions for out-of-domain data have been succesfully used for domain adaptation Hovy et al., 2015a;Li et al., 2012).
we also see an average increase in performance on known words of 1% for both systems.
neutral
train_97751
Both extracted bitext sets also contained many duplicate sentence pairs.
this incurs a computational cost that could be significant for large collections such as Gigaword.
neutral
train_97752
More specifically, we use the formulaic similarity between He and Eu: He(p, q) ≡ Eu(x, y), when ∀i : i = 1, n of x i and y i , x i = √ p i and y i = √ q i , and compute He distance using Eu based, approximate NN computation approaches such as k-d trees 1 (Bentley, 1975).
the primary experimental comparison that we perform is between no bitext at all and a system trained with some bitext.
neutral
train_97753
SMT systems generate scored candidates and select a sentence having the highest score as the translation result.
in the experimental results of system combination , recall increases but precision declines with respect to original SMT results.
neutral
train_97754
(R. Riggs) We have analyzed stylistic patterns in quotations.
(A. Einstein) Cross-Sentence Conjunction: For both Quotations 1 and Non-Quotes 1, the high-level syntax feature with the highest χ 2 value was CC + NP + VP + ..
neutral
train_97755
News: The vice president threatened action against any who question the legality of delaying the swearing-in of President Hugo Chvez, who is still in Cuba.
by incorporating the crossgenre knowledge to tweets, we are able to formulate the task of event extraction on tweets as the task of cross-genre extraction for tweets and news articles.
neutral
train_97756
Our approach is implemented as a single stateful feature function in Moses (Koehn et al., 2007), which we will submit back to the community.
we experiment with word penalties based on either morphemes or desegmented words.
neutral
train_97757
We choose to train GloVe (Pennington et al., 2014) vectors on a corpus comprised of Gigaword and Wikipedia to learn dense representations of 2000 dimensions for English and French.
we review the monolingual models, before introducing our novel bilingual formulation.
neutral
train_97758
Some MCI patients may even recover, but all AD patients transition through the MCI stage before developing frank dementia (Petersen et al., 2001).
the web site offers a basic interface to display the scans and to transcribe their contents.
neutral
train_97759
The size of the longest and shortest sentences, min-max sentence length, were used as features as well as the average of the length of all sentences occurring in the description.
such tests may be insensitive to early linguistic decline, when anomalies are already detectable by patients' families (Key-DeLyria, 2013).
neutral
train_97760
When ensembleing very similar runs, such as runs submitted by the same team, the diversity of the systems is compromised and may lower the ensemble quality.
the 0-hop query from which it was derived must both exist and be correct.
neutral
train_97761
(2015), which is neither as fast nor as easy to implement.
nCE in its standard form is not suitable for GPUs, as the computations are not amenable to dense matrix operations.
neutral
train_97762
MRR is calculated by: where S is the set of similes.
we decided not to use these candidate generation methods.
neutral
train_97763
See Table 1 for the query terms with the most albums returned.
automatic evaluation metrics are useful to quickly benchmark progress.
neutral
train_97764
But the uses only need to perform one type of actions, which might be more suitable to be performed by a single human translator.
this result demonstrates that picking the critical error to be revised is critical in our PR framework.
neutral
train_97765
The stack method can be quite costly, given that it increases the size of several matrices, either in the recurrent unit (for input) or the output mapping for word generation.
this information can be highly informative, for instance, keywords, titles or descriptions, often include central topics which will be helpful in modelling or understanding the document text.
neutral
train_97766
We define context words to mean the surrounding words of a given word.
our training data for word embeddings is Wikipedia for English, downloaded on November 29, 2014.
neutral
train_97767
The average number of embeddings for a word is 1.86 for K = 10.
we propose to exploit only context information to distinguish different concepts behind words in this paper.
neutral
train_97768
To compute the similarity between the vector representation of the input sentences, our network uses two methods: (i) computing the similarity score obtained using a similarity matrix M (explored in (Yu et al., 2014)), and (ii) directly modelling interactions between intermediate vector representations of the input sentences via fully-connected hidden layers (used by (Hu et al., 2014)).
to the best of our knowledge, two of the most effective methods for engineering features are: (i) kernel methods, which naturally map feature vectors or directly objects in richer feature spaces; and more recently (ii) approaches based on deep learning, which have been shown to be very effective.
neutral
train_97769
4.1) as the feature vector of SVM and + means that two embeddings were concatenated into a single vector.
the latter typically requires considerable effort especially when dealing with highly semantic tasks such as QA.
neutral
train_97770
It would be interesting to see how such states capture the meaning of questions.
the encoders pre-trained in this manner are subsequently fine-tuned according to the discriminative criterion described already in Section 3.
neutral
train_97771
When comparing between our models, Figure 2 shows that Transfer+EM consistently improves over the Direct Transfer, while the gains are more profound in the low-supervision scenario.
we select 14 prototypes (the most frequent word from each category) for the baseline, while our method only uses ten translation pairs.
neutral
train_97772
Among the two baseline systems, MEMM performs slightly better than SVM, showing a small benefit to structured prediction.
there are three broad types of templates: five lexical feature templates, eight affix feature templates, and three orthographic feature templates.
neutral
train_97773
As expected, the Early Modern English dataset (PPCEME) is considerably more challenging than the Modern British English dataset (PPCMBE): the baseline accuracy is 7% worse on the PPCEME than the PPCMBE.
many of the most frequent errors on in-vocabulary (IV) tokens are caused by mismatches in the tagsets or annotation guidelines, and may be difficult to address without labeled data in the target domain.
neutral
train_97774
SCL performs slightly better than Brown clustering and word2vec on IV tokens, but worse on OOV tokens.
to the PTB, there is no distinction between opening quotation mark and closing quotation mark in the PPCEME.
neutral
train_97775
The arithmetic mean of 50 samples after 5,000 iterations, with an interval of 100 iterations.
our idea is materialized in two Bayesian generative models.
neutral
train_97776
We show that an SVM classifier fails to separate creoles from non-creoles.
these features can be regarded as (statistical) universals table 3: top-5 feature types for each source according to per-feature factors of FACt.
neutral
train_97777
In his Bayesian generative model, each feature of a language has a latent variable which determines whether it is derived from an areal cluster or the tree.
in this way, we represent a first step toward understanding the complex process of creole genesis through statistical models.
neutral
train_97778
Second, we propose to model creole genesis with mixture models, which makes more sense than tree-building techniques.
this feature value only gained the ratio of 67.3%.
neutral
train_97779
This implies heterogeneous behavior of features in creole genesis.
we obtained 64 creoles and 541 non-creoles.
neutral
train_97780
AFFIXES: prefixes and suffixes of the word.
language identification and normalization are critical for POS tagging , which in turn is critical further down the pipeline for shallow parsing as evident in Table 5.
neutral
train_97781
5 Next, ignoring any language-specific factors, we would expect to observe a trend according to which the larger the corpus, the higher the correlation score.
we believe that this, in combination with word similarity tasks from the previous section, can give a reliable picture of the generic quality of word embeddings studied in this work.
neutral
train_97782
2015assume their sense graph to be an ontology, this graph can be based on any inventory of word-sense and sense-sense relationships.
), based on evidence that this model shows consistently strong performance on a wide array of tasks Levy et al., 2015).
neutral
train_97783
First, many essay-level within-task constraints are not enforced.
under Exact matching OUR system's performances on the RI-P and RI-R metrics are significant (p < 0.02), while under Approx matching, they are highly significant (p < 0.002).
neutral
train_97784
The reason is that we apply distant supervision to derive a robust resource from the metadata of debate portals only.
thereby, our approach serves as a starting point for bringing argumentation mining to practical applications like search engines.
neutral
train_97785
In (Wachsmuth et al., 2015), we studied the generality of sentiment-related argumentative structures across domains.
such approaches neither tend to be effective on documents from other domains, nor do they scale to applications that deal with huge document collections, such as search engines.
neutral
train_97786
Aside from debate portals, one such resource is given by Wikipedia talk pages.
their approach concentrates mainly on exploiting the debate portals for improving the classification of segment roles, with minor impact on argumentativeness.
neutral
train_97787
Afterwards the similarity is calculated as the cosine similarity between the block vectors.
while limited contextual features such as revision location have been utilized in prior work, such features are computed from the revision being classified but typically not its neighbors.
neutral
train_97788
Additional examples are provided in Table 3.
in general, positive interpretations generated from numbered roles (ARG 0 -ARG 4 ) are scored higher than the ones generated from modifiers (ARGM-ADV, ARGM-CAU, ARGM-DiR, etc.).
neutral
train_97789
We assigned 20% of instances to the test split, totalling 378 instances (Section 6).
obtain an accuracy of 65.5 using supervised machine learning and features derived from gold-standard linguistic information, and Blanco and Moldovan (2014) report an F-measure of 64.1.
neutral
train_97790
This suggests that when the hypothesis is an entailment of the premise, the mLSTM tends to forget the previous matching results.
we observe that good wordlevel matching results are generally "forgotten" but important mismatches, which often indicate a contradiction or a neutral relationship, tend to be "remembered."
neutral
train_97791
For α 2 , x is the frequency of r in training data, with k = .5.
exact inference will become intractable and we would need to resort to methods such as variational inference or sampling.
neutral
train_97792
Our Contribution: We propose an extension of the distributional idea for unsupervised iSRL to loosen the need for annotated training data.
we label the first one with SEMAFOR 1 , a FrameNet-style semantic parser.
neutral
train_97793
Hence, we introduce attention mechanism to extract such words that are important to the meaning of the sentence and aggregate the representation of those informative words to form a sentence vector.
w it with t ∈ [1, T ] represents the words in the ith sentence.
neutral
train_97794
We carry out experiments with three different classification methods: SVMs with averaged embeddings, the Convolutional Neural Network of Kim (2014), and a Long Short Term Memory recurrent neural network (Hochreiter and Schmidhuber, 1997).
all hyperparameters of the network are the same as used in the original paper (Kim, 2014): stochastic dropout (Srivastava et al., 2014) with p = 0.5 on the penultimate layer, 100 filters for each filter region with filter regions of width 2,3 and 4.
neutral
train_97795
Previous best classification results on TREC data is achieved by SVM using unigrams, bigrams, whword, head word, POS tags, hypernyms, WordNet synsets and a bunch of hand-coded rules.
the forget gate is able to decay the information according to the context rather than a fixed decaying weight in tensor product based CNNs.
neutral
train_97796
Experiments show that DSCNN consistently outperforms traditional CNNs, and achieves state-of-the-art results on several sentiment analysis, question type classification and subjectivity classification datasets.
cNN based models, as the second category, utilize convolutional filters to extract local features (Kalchbrenner et al., 2014;Kim, 2014) over embedding matrices consisting of pretrained word vectors.
neutral
train_97797
We consider three sets of word embeddings for our experiments: (i) word2vec 2 is trained on 100 billion tokens of Google News dataset; (ii) GloVe (Pennington et al., 2014) 3 is trained on aggregated global word-word co-occurrence statistics from Common Crawl (840B tokens); and (iii) syntactic word embedding trained on dependency-parsed corpora.
the model comprises k weight vectors w 1 , w 2 , ...w k , each of which is associated with an instantiation of a specific filter size.
neutral
train_97798
While other three baseline methods all share the whole model between source/target domains but differ in the training schemes and performance.
there is no guarantee that two models having similar parameters yields similar output distributions.
neutral
train_97799
For each task, we take the whole source domain training set D s and 10% sentences of the target domain training set D t as training data.
a linear combination is then applied to each label-wise MMD to form La-MMD and the coefficient is set as µ y = 1. regularization term is to generally control overfitting: We will provide the model convergence and hyperparameter study in Section 5.1.
neutral