sentence1 (string, 16–446 chars) | sentence2 (string, 14–436 chars) |
---|---|
in this paper , we examine user adaptation to the system ’ s lexical and syntactic choices in the context of the deployed . | we confirm prior results showing that users adapt to the system ’ s lexical and syntactic choices . |
adwords features can be used for defining message persuasiveness . | adwords gives us an appropriate context for evaluating persuasive messages . |
for faster training , we employ an efficient parallel training strategy proposed by mcdonald et al . | we make use of a distributed training strategy for the structured perceptron that was first introduced in mcdonald et al . |
toutanova and moore addressed the phonetic substitution problem by extending the initial letter-to-phone model . | toutanova and moore improved this approach by extending the error model with phonetic similarities over words . |
but the neural networks leverage these features for improving tagging . | this observation is evidence that the neural network can find good representations for pos tagging . |
in general , the use of modifier structures and the associated semantic interpretation component permits a good treatment of scoping problems involving coordination . | the system 's semantic interpretation component can in particular deal with scoping problems involving coordination . |
relation extraction is the task of predicting attributes and relations for entities in a sentence ( zelenko et al. , 2003 ; bunescu and mooney , 2005 ; guodong et al. , 2005 ) . | relation extraction ( re ) is the task of extracting semantic relationships between entities in text . |
we describe an endto-end generation model that performs content selection and surface realization . | similar to cite-p-11-1-0 , we also present an endto-end system that performs content selection and surface realization . |
sentiment detection and classification has received considerable attention . | sentiment classification has seen a great deal of attention . |
the λ f are optimized by minimum-error training . | these models can be tuned using minimum error rate training . |
xiao et al present a topic similarity model based on lda that produces a feature that weights grammar rules based on topic compatibility . | based on topic models , xiao et al present a topic similarity model for hpb system , where each rule is assigned with a topic distribution . |
morphologically , arabic is a non-concatenative language . | moreover , arabic is a morphologically complex language . |
text categorization is the classification of documents with respect to a set of predefined categories . | text categorization is a crucial and well-proven method for organizing the collection of large scale documents . |
in this work , we are interested in selective sampling for pool-based active learning , and focus on uncertainty sampling . | in this work , we are interested in uncertainty sampling for pool-based active learning , in which an unlabeled example x with maximum uncertainty is selected for human annotation at each learning cycle . |
word alignment is a crucial early step in the training of most statistical machine translation ( smt ) systems , in which the estimated alignments are used for constraining the set of candidates in phrase/grammar extraction ( cite-p-9-3-5 , cite-p-9-1-4 , cite-p-9-3-0 ) . | word alignment , which can be defined as an object for indicating the corresponding words in a parallel text , was first introduced as an intermediate result of statistical translation models ( cite-p-13-1-2 ) . |
semantic parsing is the task of mapping natural language utterances to machine interpretable meaning representations . | semantic parsing is the mapping of text to a meaning representation . |
first of all , multiple source languages can be involved to increase the statistical basis for learning , a strategy that can also be used in the case of annotation projection . | one idea is to use multiple source languages to increase the statistical ground for the learning process , a strategy that can also be used in the case of annotation projection . |
for our experiments reported here , we obtained word vectors using the word2vec tool and the text8 corpus . | we learn our word embeddings by using word2vec 3 on unlabeled review data . |
a 4-gram language model was trained on the target side of the parallel data using the srilm toolkit from stolcke . | the pre-processed monolingual sentences will be used by srilm or berkeleylm to train a n-gram language model . |
we split each document into sentences using the sentence tokenizer of the nltk toolkit . | for this step we used regular expressions and nltk to tokenize the text . |
for example , figure 1 shows a part of the variation n-grams found in the german tiger corpus . | as an example for these hierarchical relationships , figure 1 shows a german noun phrase taken from the german tiger corpus . |
relation extraction ( re ) is the task of identifying instances of relations , such as nationality ( person , country ) or place of birth ( person , location ) , in passages of natural text . | relation extraction is the task of finding relations between entities in text , which is useful for several tasks such as information extraction , summarization , and question answering ( cite-p-14-3-7 ) . |
the combination of ck with this additional knowledge follows the best settings from wiegand and klakow . | following wiegand and klakow , this corpus is chosen as a training set . |
for the vector of document , the th element of is closely related to the generation probability of based on the language model induced by document . | the value at each dimension of the vector is closely related to the generation probability based on the language model of the corresponding document . |
cite-p-24-3-1 presented a conditional variational framework for generating specific responses . | cite-p-24-3-10 introduced latent responding factors to model multiple responding mechanisms . |
the reranking parser of charniak and johnson was used to parse the bnc . | all of the english sentences were parsed using the charniak parser . |
a 5-gram language model was created with the sri language modeling toolkit and trained using the gigaword corpus and english sentences from the parallel data . | for both languages , we used the srilm toolkit to train a 5-gram language model using all monolingual data provided . |
in the seminal work by rubenstein and goodenough , similarity judgments were obtained from 51 test subjects on 65 noun pairs written on paper cards . | for english , rubenstein and goodenough obtained similarity judgements from 51 subjects on 65 noun pairs , a seminal study which was later replicated by miller and charles , and resnik . |
mitchell et al were the first to employ distributional semantic models to predict neural activation in the human brain using data obtained via functional magnetic resonance imaging . | mitchell et al were the first to demonstrate that distributional semantic models encode some of the patterns found in the fmri data . |
psl is a new model of statistical relation learning and has been quickly applied to solve many nlp and other machine learning tasks in recent years . | psl is a new statistical relational learning method that has been applied to many nlp and other machine learning tasks in recent years . |
daumé and jagarlamudi use contextual and string similarity to mine translations for oov words in a high resource language domain adaptation for a machine translation setting . | daumé and jagarlamudi , zhang and zong , and irvine et al use new-domain comparable corpora to mine translations for unseen words . |
twitter sentiment classification , which identifies the sentiment polarity of short , informal tweets , has attracted increasing research interest ( cite-p-15-1-20 , cite-p-15-1-19 ) . | twitter sentiment classification has attracted increasing research interest in recent years ( cite-p-15-1-20 , cite-p-15-1-19 ) . |
we used the char representation strategy proposed by ma and hovy where char embeddings are combined using a convolutional neural network . | in order to capture long-range syntactic information for accurate disambiguation in pre-parsing phase , we build a lstm-crf model inspired by the neural network proposed in ma and hovy . |
in the remainder of the paper , section 2 describes the sentence planning task . | in the remainder of the paper , section 2 describes the sentence planning task in more detail . |
for the mnist , timit , and cifar dataset , that the generalization gap is not due to overfitting or overtraining , but due to different generalization capabilities of the local minima . | keskar et al . observe for the mnist , timit , and cifar dataset , that the generalization gap is not due to overfitting or overtraining , but due to different generalization capabilities of the local minima the networks converge to . |
for the support vector machine , we used svm-light . | as a classifier , we chose support vector machines . |
we use the attention-based nmt model introduced by bahdanau et al as our text-only nmt baseline . | we implement the attention model introduced by bahdanau et al which was the main technique for a sequence decoding in the last few years . |
boyd-graber et al integrate a model of random walks on the wordnet graph into an lda topic model to build an unsupervised word sense disambiguation system . | boyd-graber et al incorporate the synset structure in wordnet into lda for word sense disambiguation , where each topic is a random process defined over the synsets . |
keyphrases also offers a programming framework for developing new extraction algorithms . | the uima-based architecture of dkpro keyphrases allows users to easily evaluate keyphrase extraction configurations . |
we measure machine translation performance using the bleu metric . | we evaluate the translation quality using the case-sensitive bleu-4 metric . |
linear combinations of word embedding vectors have been shown to correspond well to the semantic composition of the individual words . | it has previously been shown that word embeddings represent the contextualised lexical semantics of words . |
finally , we extract the semantic phrase table from the augmented aligned corpora using the moses toolkit . | in order to do so , we use the moses statistical machine translation toolkit . |
results revealed that morphological and spontaneous speech-based features have an essential role in distinguishing mci patients from healthy controls . | our results suggest that it is primarily morphological and speech-based features that help distinguish mci patients from healthy controls . |
for input representation , we used glove word embeddings . | our word embeddings is initialized with 100-dimensional glove word embeddings . |
dependency parse correction , attachments in an input parse tree are revised by selecting , for a given dependent , the best governor from within a small set of candidates . | dependencies in an input parse tree are revised by selecting , for a given dependent , the best governor from within a small set of candidates . |
and proved to be helpful for both inexperienced and experienced users . | results showed that the system was effective for inexperienced and experienced users . |
we also used word2vec to generate dense word vectors for all word types in our learning corpus . | for this experiment , we used word2vec on the same frwac corpus to obtain a dense matrix in which each word is represented by a numeric vector . |
the system described in this paper is the grandchild of the first transition-based neural network dependency parser ( cite-p-22-3-1 ) , which was the university of geneva ’ s entry in the conll 2007 multilingual dependency parsing shared task ( cite-p-22-1-7 ) . | the system described in this paper is a combination of a feature-based hierarchical lexicon and word grammar with an extended two-level morphology . |
the popular method is to regard word segmentation as a sequence labeling problem . | one mainstream method is regarding word segmentation task as a sequence labeling problem . |
we used the malt parser to obtain source english dependency trees and the stanford parser for arabic . | we used the stanford parser to generate dependency trees of sentences . |
in the loss-augmented setting , the need of finding the max-violating constraint has severely limited the expressivity of effective loss functions . | in this setting , loss functions need to be factorizable together with the feature representations for finding the max-violating constraints . |
a sentiment lexicon is a list of words and phrases , such as “ excellent ” , “ awful ” and “ not bad ” , each of them is assigned with a positive or negative score reflecting its sentiment polarity and strength ( cite-p-18-3-8 ) . | sentiment lexicon is a set of words ( or phrases ) each of which is assigned with a sentiment polarity score . |
with english gigaword corpus , we use the skip-gram model as implemented in word2vec 3 to induce embeddings . | we use the word2vec cbow model with a window size of 5 and a minimum frequency of 5 to generate 200-dimensional vectors . |
bidirectional rnns capture dependencies from both directions , thus providing two different views of the same sentence . | bidirectional rnns capture dependencies from both directions , thus provide two different views of the same sentence . |
underspecification is nowadays the standard approach to dealing with scope ambiguities in computational semantics . | underspecification is the standard approach to dealing with scope ambiguities in computational semantics . |
in this paper , a novel language model , the binarized embedding language model ( belm ) is proposed to solve the problem . | the first contribution in this paper is that a novel language model , the binarized embedding language model ( belm ) is proposed to reduce the memory consumption . |
we built a 5-gram language model from it with the sri language modeling toolkit . | for both languages , we used the srilm toolkit to train a 5-gram language model using all monolingual data provided . |
we use glove 300-dimension embedding vectors pre-trained on 840 billion tokens of web data . | in a second baseline model , we also incorporate 300-dimensional glove word embeddings trained on wikipedia and the gigaword corpus . |
for language model , we use a trigram language model trained with the srilm toolkit on the english side of the training corpus . | for improving the word alignment , we use the word-classes that are trained from a monolingual corpus using the srilm toolkit . |
marcu and echihabi 2002 ) proposed a method to identify discourse relations between text segments using naïve bayes classifiers trained on a huge corpus . | marcu and echihabi proposed a method for cheap acquisition of training data for discourse relation sense prediction . |
a trigram language model with modified kneser-ney discounting and interpolation was used as produced by the srilm toolkit . | the model was built using the srilm toolkit with backoff and good-turing smoothing . |
word sense disambiguation ( wsd ) is a problem of finding the relevant clues in a surrounding context . | word sense disambiguation ( wsd ) is a task to identify the intended sense of a word based on its context . |
only henrich and hinrichs enrich the output of morphological segmentation with information from germanet to disambiguate such structures . | only henrich and hinrichs enrich the output of morphological segmentation with information from the annotated compounds of germanet to disambiguate such structures . |
sentence , our method utilizes dependency structures and japanese dependency constraints to determine the word order of a translation . | to resolve the problem of generating a grammatically incorrect sentence , our method uses dependency structures and japanese dependency constraints to determine the word order of a translation . |
knight and marcu proposed a sentence compression method using a noisy-channel model . | we presented a maximum entropy model to extend the sentence compression methods described by knight and marcu . |
word alignment is the task of identifying corresponding words in sentence pairs . | word alignment is the task of identifying translational relations between words in parallel corpora , in which a word at one language is usually translated into several words at the other language ( fertility model ) ( cite-p-18-1-0 ) . |
in this paper , we present a latent variable model for one-shot dialogue response , and investigate what kinds of diversity . | in this paper , we present a latent variable model to generate responses to input utterances . |
proposed discriminative models are capable of incorporating domain knowledge , by adding diverse and overlapping features . | we show that discriminative models outperform the existing generative models by incorporating diverse features . |
our baseline system is a standard phrase-based smt system built with moses . | our phrase-based mt system is trained by moses with standard parameters settings . |
much recent work on language generation has made use of discourse representations based on rhetorical structure theory . | many recent studies in natural language processing have paid attention to rhetorical structure theory , a method of structured description of text . |
we trained a smt system on 10k french-english sentences from the europarl corpus . | we trained the syntax-based system on 751,088 german-english translations from the europarl corpus . |
sentence compression is the task of compressing long , verbose sentences into short , concise ones . | sentence compression is the task of compressing long sentences into short and concise ones by deleting words . |
for smt decoding , we use the moses toolkit with kenlm for language model queries . | we use kenlm 3 for computing the target language model score . |
tang et al proposed a user-product neural network to incorporate both user and product information for sentiment classification . | tang et al was first to incorporate user and product information into a neural network model for personalized rating prediction of products . |
we used the kenlm language model toolkit with character 7-grams . | we used 4-gram language models , trained using kenlm . |
we use 300 dimensional glove embeddings trained on the common crawl 840b tokens dataset , which remain fixed during training . | we use 300-dimensional glove vectors trained on 6b common crawl corpus as word embeddings , setting the embeddings of outof-vocabulary words to zero . |
by simplifying the previously-proposed instance-based evaluation framework we are able to take advantage of crowdsourcing services . | our framework simplifies a previously proposed “ instance-based evaluation ” method that involved substantial annotator training , making it suitable for crowdsourcing . |
we use srilm toolkit to train a trigram language model with modified kneser-ney smoothing on the target side of training corpus . | we use srilm for training a trigram language model on the english side of the training corpus . |
conditional random fields are undirected graphical models trained to maximize the conditional probability of the desired outputs given the corresponding inputs . | conditional random fields are discriminatively-trained undirected graphical models that find the globally optimal labeling for a given configuration of random variables . |
we used a 5-gram language model with modified kneser-ney smoothing , built with the srilm toolkit . | firstly , we built a forward 5-gram language model using the srilm toolkit with modified kneser-ney smoothing . |
semantic role labeling ( srl ) is the task of labeling predicate-argument structure in sentences with shallow semantic information . | semantic role labeling ( srl ) has been defined as a sentence-level natural-language processing task in which semantic roles are assigned to the syntactic arguments of a predicate ( cite-p-14-1-7 ) . |
the reordering model was trained with the hierarchical , monotone , swap , left to right bidirectional method and conditioned on both the source and target language . | the reordering model was trained with the hierarchical , monotone , swap , left to right bidirectional method and conditioned on both source and target language . |
in this paper , novel syntactic sub-kernels are generated from the generalized kernel for the task of relation extraction . | using the generalized kernel , we will also propose a number of novel syntactic sub-kernels for relation extraction . |
our baseline system is a standard phrase-based smt system built with moses . | our baseline is an in-house phrase-based statistical machine translation system very similar to moses . |
a 4-gram language model was trained on the target side of the parallel data using the srilm toolkit from stolcke . | the language model was constructed using the srilm toolkit with interpolated kneser-ney discounting . |
we use the moses software to train a pbmt model . | we use moses , an open source toolkit for training different systems . |
wei and gulla modeled the hierarchical relation between product aspects . | wei and gulla , 2010 ) modeled the hierarchical relation between product aspects . |
with the user and product attention , our model can take account of the global user preference and product characteristics . | with the user and product attention , our model can take account of the global user preference and product characteristics in both word level and semantic level . |
other authors also report better results by using p-grams with the length in a range , rather than using p-grams of fixed length . | other authors also report better results by using n-grams with the length in a range , rather than using n-grams of fixed length . |
in our study , we build a conditional probability model which will be described in detail . | in our study , we build a conditional probability model which will be described in detail in section 3.2.1 . |
we use treetagger with the default parameter file for tokenization , lemmatization and annotation of part-of-speech information in the corpus . | we extract the part-of-speech tags for both source and translation sentences using treetagger . |
we incorporate active learning into the biomedical named-entity recognition system to enhance the system ' s performance . | we incorporated mmr-based active machine-learning idea into the biomedical named-entity recognition system . |
we used word2vec , a powerful continuous bag-of-words model to train word similarity . | we trained a continuous bag of words model of 400 dimensions and window size 5 with word2vec on the wiki set . |
experimental results on four typical attributes showed that wikicike significantly outperforms both the current translation based methods and the monolingual extraction methods . | our experimental results demonstrate that wikicike outperforms the monolingual knowledge extraction method and the translation-based method . |
we present the first neural endto-end solutions to computational am . | we present the first study on neural endto-end am . |
in multimodal semantics , we evaluate on well known conceptual similarity and relatedness tasks and on zero-shot learning . | we use standard evaluations for multimodal semantics , including measuring conceptual similarity and cross-modal zero-shot learning . |
word sense disambiguation ( wsd ) is the task of determining the meaning of a word in a given context . | word sense disambiguation ( wsd ) is a key enabling-technology that automatically chooses the intended sense of a word in context . |
studies have also shown that the learned embedding captures both syntactic and semantic functions of words . | it has been empirically shown that word embeddings can capture semantic and syntactic similarities between words . |
zoph et al train a parent model on a highresource language pair in order to improve low-resource language pairs . | zoph et al use multilingual transfer learning to improve nmt for lowresource languages . |
compressing deep models into smaller networks has been an active area of research . | compressing deep learning models is an active area of current research . |
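
The two columns above are plain strings, so the dataset can be consumed with the usual Hugging Face `datasets` workflow. Below is a minimal sketch of loading and inspecting a few pairs; the repository path is a placeholder (this page does not show the dataset's actual ID), and the `train` split name is an assumption.

```python
from datasets import load_dataset

# Placeholder repository path -- substitute the dataset's real ID.
ds = load_dataset("org/citation-paraphrase-pairs", split="train")

# Each record mirrors one row of the table above: two string fields.
for example in ds.select(range(3)):
    print(example["sentence1"])
    print(example["sentence2"])
    print("---")
```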