sentence1 (string, lengths 16-446)
sentence2 (string, lengths 14-436)
arabic-english and chinese-english show that our method produces significant gains in translation quality .
we conduct large-scale translation quality experiments on arabic-english and chinese-english .
we process the book text using freely available components of the dkpro framework .
for preprocessing the corpus , we use the stanford pos-tagger and parser included in the dkpro framework .
we have presented a novel approach to generating spoken dialogue strategies .
we report on a novel approach to generating strategies for spoken dialogue systems .
the most representative study is the group of patterns proposed by hearst .
the fundamental work for the pattern-based approaches is that of hearst .
for the twitter data set , we obtain a median error of 479 km , which improves on the 494 km error .
we predict the location of wikipedia pages to a median error of 11.8 km and mean error of 221 km .
for cos , we used the cbow model 6 of word2vec .
we used nwjc2vec 10 , which is a 200 dimensional word2vec model .
on the dataset of 100 songs , we showed that emotion recognition can be performed using either textual or musical features , and that the joint use of lyrics and music can improve significantly over classifiers that use only one dimension at a time .
through comparative experiments , we show that emotion recognition can be performed using either textual or musical features , and that the joint use of lyrics and music can improve significantly over classifiers that use only one dimension at a time .
in the second pass , the detailed information , such as name and address , is identified in certain blocks ( e . g . blocks labelled with personal information ) , instead of searching globally in the entire resume .
in the first pass , the general information is extracted by segmenting the entire resume into consecutive blocks and each block is annotated with a label indicating its category .
this dataset was created and employed for the sentiment analysis in twitter task in the 2013 editions of the semeval 4 workshop .
this system obtained highest scores in two recent international competitions on sentiment analysis of tweets -semeval-2013 task 2 and semeval-2014 task 9 .
word sense disambiguation is the task of assigning sense labels to occurrences of an ambiguous word .
word sense disambiguation is the process of selecting the most appropriate meaning for a word , based on the context in which it occurs .
relation extraction is the problem of populating a target relation ( representing an entity-level relationship or attribute ) with facts extracted from natural-language text .
relation extraction ( re ) is the task of extracting instances of semantic relations between entities in unstructured data such as natural language text .
huang et al presented an rnn model that uses document-level context information to construct more accurate word representations .
reisinger and mooney and huang et al also presented methods that learn multiple embeddings per word by clustering the contexts .
bleu is a precision based measure and uses n-gram match counts up to order n to determine the quality of a given translation .
the bleu is a classical automatic evaluation method for the translation quality of an mt system .
we evaluate the translation quality using the case-insensitive bleu-4 metric .
we adopted the case-insensitive bleu-4 as the evaluation metric .
inspired by this , yang et al introduced hierarchical attention networks where the representation of a document is hierarchically built up .
most recently , yang et al introduced hierarchical attention networks for document classification .
the language model used was a 5-gram with modified kneser-ney smoothing , built with srilm toolkit .
the language models are 4-grams with modified kneser-ney smoothing which have been trained with the srilm toolkit .
our trigram word language model was trained on the target side of the training corpus using the srilm toolkit with modified kneser-ney smoothing .
we use sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus .
functional uncertainty machinery can be obtained without going beyond the power of mildly context-sensitive grammars .
it is also shown that the analyses provided by the functional uncertainty machinery can be obtained without requiring power beyond mildly context-sensitive grammars .
in this paper , we work on candidate generation at the character level , which can be applied to spelling error correction .
in this paper , we have proposed a new method for approximate string search , including spelling error correction , which is both accurate and efficient .
as justified in , a 6-tag set enables the crfs learning of character tagging to achieve a better segmentation performance than others .
our previous work shows that a 6-tag set enables the crfs learning of character tagging to achieve a better segmentation performance than others .
we further used a 5-gram language model trained using the srilm toolkit with modified kneser-ney smoothing .
we used the target side of the parallel corpus and the srilm toolkit to train a 5-gram language model .
we use the logistic regression implementation of liblinear wrapped by the scikit-learn library .
we build all the classifiers using the l2-regularized linear logistic regression from the liblinear package .
the language model used was a 5-gram with modified kneser-ney smoothing , built with srilm toolkit .
the language model was a kneser-ney interpolated trigram model generated using the srilm toolkit .
friedman et al ( 2002 ) use manual analysis to detect and characterize two biomedical sublanguages .
friedman et al use semiautomatic and manual analyses to detect and characterize two biomedical sublanguages .
since sarcasm is a refined and indirect form of speech , its interpretation may be challenging for certain populations .
sarcasm is defined as ‘ the use of irony to mock or convey contempt ’ 1 .
information extraction ( ie ) is a fundamental technology for nlp .
information extraction ( ie ) is the process of identifying events or actions of interest and their participating entities from a text .
our second approach is a neural state-transition system that explicitly learns the copy action .
the second approach is a neural state-transition system over a set of explicit edit actions , including a designated copy action .
for the tree-based system , we applied a 4-gram language model with kneser-ney smoothing using srilm toolkit trained on the whole monolingual corpus .
the targetside 4-gram language model was estimated using the srilm toolkit and modified kneser-ney discounting with interpolation .
the weights of the different feature functions were optimised by means of minimum error rate training .
all features were log-linearly combined and their weights were optimized by performing minimum error rate training .
word sense disambiguation ( wsd ) is a key enabling-technology .
word sense disambiguation ( wsd ) is a task to identify the intended sense of a word based on its context .
we measure the overall translation quality using 4-gram bleu , which is computed on tokenized and lowercased data for all systems .
we used the bleu score to evaluate the translation accuracy with and without the normalization .
in this paper we develop a framework for inducing non-linear features in the form of regression .
in this paper we show how to automatically induce non-linear features for machine translation .
for our baseline we use the moses software to train a phrase based machine translation model .
we used the phrasebased smt system moses to calculate the smt score and to produce hfe sentences .
this baseline uses pre-trained word embeddings using word2vec cbow and fasttext .
in this work , we employ the toolkit word2vec to pre-train the word embedding for the source and target languages .
to compute statistical significance , we use the approximate randomization test .
in order to determine whether the results are statistically significant , we use the approximate randomization test .
pstfs provides a simple and unified mechanism for building high-level parallel nlp systems .
pstfs serves as an efficient programming environment for implementing parallel nlp systems .
the language models were created using the srilm toolkit on the standard training sections of the ccgbank , with sentenceinitial words uncapitalized .
a 5-gram language model was built using srilm on the target side of the corresponding training corpus .
we parse the senseval test data using the stanford parser generating the output in dependency relation format .
we extract syntactic dependencies using stanford parser and use its collapsed dependency format .
coreference resolution is the problem of identifying which noun phrases ( nps , or mentions ) refer to the same real-world entity in a text or dialogue .
coreference resolution is a set partitioning problem in which each resulting partition refers to an entity .
our first layer was a 200-dimensional embedding layer , using the glove twitter embeddings .
we used 200 dimensional glove word representations , which were pre-trained on 6 billion tweets .
named entity recognition ( ner ) is the task of detecting named entity mentions in text and assigning them to their corresponding type .
named entity recognition ( ner ) is a fundamental task in text mining and natural language understanding .
pos induction is a popular topic and several studies ( cite-p-13-1-4 ) have been performed .
consequently , pos induction is a vibrant research area ( see section 2 ) .
we use the pre-trained glove 50-dimensional word embeddings to represent words found in the glove dataset .
we use glove vectors with 100 dimensions trained on wikipedia and gigaword as word embeddings , which we do not optimize during training .
in this paper , we have explored using an automatic metric to decrease the cost of human evaluation .
in this work , we focus on using existing automatic metrics to decrease the cost of human evaluations .
more recently , features drawn from word embeddings have been shown to be effective in various text classification tasks such as sentiment analysis and named entity recognition .
word embeddings have also been effectively employed in several tasks such as named entity recognition , adjectival scales and text classification .
xue introduced a systematic study to tap the implicit functional information of ctb .
a systematic study to tap the implicit functional information of ctb has been introduced by xue .
we built a linear svm classifier using svm light package .
our framework was built with the cleartk toolkit with its wrapper for svmlight .
cv-em is a cross-validating instance of the well known em algorithm .
s-em is based on naïve bayesian classification and the em algorithm .
the translation outputs were evaluated with bleu and meteor .
the translation quality is evaluated by bleu and ribes .
we use the pre-trained word2vec embeddings provided by mikolov et al as model input .
we first use the popular toolkit word2vec 1 provided by mikolov et al to train our word embeddings .
sentiment analysis is a much-researched area that deals with identification of positive , negative and neutral opinions in text .
sentiment analysis is the natural language processing ( nlp ) task dealing with the detection and classification of sentiments in texts .
the penn discourse treebank is the largest available annotated corpus of discourse relations over 2,312 wall street journal articles .
the penn discourse treebank is the largest manually annotated corpus of discourse relations on top of one million word tokens from the wall street journal .
as compared to traditional multi-domain learning methods that are tuned to use a single “ best ” attribute .
experimentally , they outperform the multi-domain learning baseline , even when it selects the single “ best ” attribute .
the nnlm weights are optimized as the other feature weights using minimum error rate training .
the model parameters are trained using minimum error-rate training .
on the ecb + corpus , our model obtains better results than models that require significantly more pre-annotated information .
we achieve these gains despite the fact that our model requires significantly less pre-annotated or pre-detected information in terms of the internal event structure .
peng et al achieved better results by using a conditional random field model .
shen et al extended the hmm-based approach to make it discriminative by making use of conditional random fields .
in this setup , the classifier only correctly labelled 4 out of the 147 ironic tweets .
in this setup , the classifier only correctly labelled 4 out of the 147 ironic tweets as ironic .
coreference resolution is a key task in natural language processing ( cite-p-13-1-8 ) aiming to detect the referential expressions ( mentions ) in a text that point to the same entity .
coreference resolution is the next step on the way towards discourse understanding .
for instance , mihalcea et al studied pmi-ir , lsa , and six wordnet-based measures on the text similarity task .
mihalcea et al developed several corpus-based and knowledge-based word similarity measures and applied them to a paraphrase recognition task .
in a study examining online route descriptions to an imaginary follower based on a two-dimensional map , klippel et al found that participants tended to chunk decision points without directional change together .
klippel et al showed that in a 2d scenario in which the route was only gradually revealed in the form of a moving dot on a map , participants still made use of chunking .
coherence is a central aspect in natural language processing of multi-sentence texts .
since coherence is a measure of how much sense the text makes , it is a semantic property of the text .
we used kappa statistics to evaluate the annotations made by the annotators in the second phase .
we used the kappa statistics to measure inter-annotator agreement on unseen data which two experts annotated independently .
the translation quality is evaluated by case-insensitive bleu-4 metric .
the translation quality is evaluated by bleu and ribes .
glorot et al proposed a deep learning approach which learns to extract a meaningful representation for each review in an unsupervised fashion .
glorot et al first propose a deep learning approach which learns to extract a meaningful representation for each review in an unsupervised fashion .
in this paper , we show the benefits of tightly coupling asr and search tasks and illustrate techniques .
in this paper , we have presented techniques for tightly coupling asr and search .
the nmt decoder selects a phrase from the phrase memory or a word from the vocabulary of the highest probability to generate .
then the nmt decoder scores phrases in the phrase memory and selects a proper phrase or word with the highest probability .
besides concentrating on isolated components , a few approaches have emerged that tackle concept-to-text generation .
a few approaches have emerged more recently that combine content selection and surface realization .
relation extraction ( re ) is the task of identifying instances of relations , such as nationality ( person , country ) or place of birth ( person , location ) , in passages of natural text .
relation extraction is a crucial task in the field of natural language processing ( nlp ) .
resolving cross-narrative temporal relationships between medical events is essential to the task of generating an event timeline from across unstructured clinical narratives .
cross-narrative temporal ordering of medical events is essential to the task of generating a comprehensive timeline over a patient ’ s history .
other terms used in the literature include implied meanings , implied alternatives and semantically similars .
other terms used in the literature include implied meanings , implied alternatives and semantically similar .
szarvas extended their methodology to use n-gram features and a semi-supervised selection of the keyword features .
szarvas extended the methodology of medlock and briscoe to use n-gram features and a semi-supervised selection of the keyword features .
relation extraction is the task of finding relational facts in unstructured text and putting them into a structured ( tabularized ) knowledge base .
relation extraction is a traditional information extraction task which aims at detecting and classifying semantic relations between entities in text ( cite-p-10-1-18 ) .
in this paper , we present an empirical study of how to represent and induce the connotative interpretations that can be drawn from a verb predicate .
in this paper , we presented a novel system of connotative frames that define a set of implied sentiment and presupposed facts for a predicate .
in this paper , we describe how we use these various categories of auxiliary features to improve performance .
in this paper , we use swaf to more effectively combine several vqa models .
the significance tests were performed using the bootstrap resampling method .
the significance test was performed using the bootstrap resampling method proposed by koehn .
bunescu and mooney successfully demonstrated the use of shortest path dependencies between two entities to extract located relation .
bunescu and mooney designed a kernel along the shortest dependency path between two entities by observing that the relation strongly relies on sdps .
we use gibbs sampling for inference to both the parametric and nonparametric model .
hence , we use gibbs sampling by casella and george to estimate the underlying distributions .
these methods were normally trained based on only document-level sentiment supervision .
however , these methods were normally trained under document-level sentiment supervision .
relation extraction ( re ) is the task of recognizing the assertion of a particular relationship between two or more entities in text .
relation extraction ( re ) is the task of extracting semantic relationships between entities in text .
state of the art statistical parsers are trained on manually annotated treebanks that are highly expensive to create .
current state-of-the-art statistical parsers are trained on large annotated corpora such as the penn treebank .
we use glove vectors with 100 dimensions trained on wikipedia and gigaword as word embeddings .
we used the 200-dimensional word vectors for twitter produced by glove .
the authors propose a linear regression model to predict the valence value for content words .
the proposed models address the sentiment shifting effect of sentiment , negation , and intensity words .
in this work , we present a minimal neural model for constituency parsing .
this paper presents a minimal but surprisingly effective span-based neural model for constituency parsing .
experiments ( section 5 ) show a statistically significant improvement of +0.7 bleu points over a state-of-the-art forest-based tree-to-string system even with fewer translation rules .
medium-scale experiments show an absolute and statistically significant improvement of +0.7 bleu points over a state-of-the-art forest-based tree-to-string system even with fewer rules .
metonymy is a figure of speech , in which one expression is used to refer to the standard referent of a related one ( cite-p-18-1-13 ) .
metonymy is a pervasive phenomenon in language and the interpretation of metonymic expressions can impact tasks from semantic parsing ( cite-p-13-1-10 ) to question answering ( cite-p-13-1-4 ) .
we compared naïve bayes , linear svm , and rbf svm classifiers from the scikit-learn package .
within this subpart of our ensemble model , we used a svm model from the scikit-learn library .
extractive summarization is a task to create summaries by pulling out snippets of text from the original text and combining them to form a summary .
extractive summarization is a widely used approach to designing fast summarization systems .
in this paper , we describe our participation in the first shared task on automated stance detection .
in this paper , we presented our approach on automated stance detection based on stacked classifications .
task-oriented dialog systems help users to achieve specific goals with natural language .
end-to-end task-oriented dialog systems usually suffer from the challenge of incorporating knowledge bases .
we induce a topic-based vector representation of sentences by applying the latent dirichlet allocation method .
we use the term-sentence matrix to train a simple generative topic model based on lda .
to estimate the optimal α j values , we train our maxent model using the sequential conditional generalized iterative scaling technique .
to train our models , we have used the sequential conditional generalized iterative scaling technique .
könig and brill propose a hybrid classifier that utilizes human reasoning over automatically discovered text patterns to complement machine learning .
könig and brill proposed a hybrid classifier that uses human reasoning over automatically discovered text patterns to complement machine learning .
we used the open source moses phrase-based mt system to test the impact of the preprocessing technique on translation results .
for phrase-based smt translation , we used the moses decoder and its support training scripts .
we used a 5-gram language model trained on 126 million words of the xinhua section of the english gigaword corpus , estimated with srilm .
for both languages , we used the srilm toolkit to train a 5-gram language model using all monolingual data provided .
in this paper , we propose a new model based on the cbow , hence .
in this paper , we propose a new model based on the cbow , hence we focus attention on it .
we apply byte-pair encoding with 30,000 merge operations on the english sentences .
we use byte pair encoding with 45k merge operations to split words into subwords .
the function word feature set consists of 318 english function words from the scikit-learn package .
we use a set of 318 english function words from the scikit-learn package .
all the feature weights and the weight for each probability factor are tuned on the development set with minimum-error-rate training .
the log-linear model is then tuned as usual with minimum error rate training on a separate development set coming from the same domain .
semantic role labeling ( srl ) is the process of assigning semantic roles to strings of words in a sentence according to their relationship to the semantic predicates expressed in the sentence .
semantic role labeling ( srl ) is a task of automatically identifying semantic relations between predicate and its related arguments in the sentence .
we use the 300-dimensional skip-gram word embeddings built on the google-news corpus .
for english , we used the pre-trained word2vec by on google news .