columns : sentence1 ( string , length 16-446 characters ) and sentence2 ( string , length 14-436 characters ) ; each example below is a pair of lines , sentence1 followed by sentence2 .
we prepared pretrained word embeddings using the skip-gram model .
we pre-trained word embeddings using word2vec over tweet text of the full training data .
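A hedged sketch of the skip-gram pretraining step the pair above describes, using gensim; the corpus file name and all hyperparameters are illustrative assumptions, not values from the source.

    from gensim.models import Word2Vec

    # hypothetical corpus: one tokenized tweet per line
    sentences = [line.split() for line in open("tweets.txt", encoding="utf-8")]

    # sg=1 selects the skip-gram architecture; hyperparameters are placeholders
    model = Word2Vec(sentences, vector_size=100, window=5, sg=1, negative=5, min_count=5)
    model.wv.save_word2vec_format("tweet_vectors.txt")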
blitzer et al investigate domain adaptation for pos tagging using the method of structural correspondence learning .
blitzer et al proposed structural correspondence learning to identify the correspondences among features between different domains via the concept of pivot features .
we use the moses smt toolkit to test the augmented datasets .
we used moses , a phrase-based smt toolkit , for training the translation model .
coreference resolution is the process of linking together multiple referring expressions of a given entity in the world .
coreference resolution is a multi-faceted task : humans resolve references by exploiting contextual and grammatical clues , as well as semantic information and world knowledge , so capturing each of these will be necessary for an automatic system to fully solve the problem .
the conll data set was taken from the wall street journal portion of the penn treebank and converted into a dependency format .
the latter was taken from the wall street journal portion of the penn treebank and converted into a dependency format .
as a measure of the working memory capacity , the japanese version of the reading span test was conducted .
as a measure of the working memory capacity , the japanese version of a reading span test was conducted .
katz and giesbrecht make use of latent semantic analysis to explore the local linguistic context that can serve to identify multiword expressions that have non-compositional meaning .
katz and giesbrecht use distributional semantics and lsa as a model of context similarity to test whether the local context of a mwe can distinguish its idiomatic use from literal use .
in this paper , we have studied polarity-bearing topics generated from the jst model and shown that by augmenting the original feature space with polarity-bearing topics , the in-domain supervised classifiers learned from augmented feature representation achieve the state-of-the-art performance .
we study the polarity-bearing topics extracted by jst and show that by augmenting the original feature space with polarity-bearing topics , the in-domain supervised classifiers learned from augmented feature representation achieve the state-of-the-art performance of 95 % on the movie review data and an average of 90 % on the multi-domain sentiment dataset .
fry ( 1955 , 1958 ) showed that intensity was a less effective cue than duration in the perception of linguistic stress patterns .
from a series of experiments , fry ( 1955 , 1958 ) showed that duration is a consistent correlate of stress at the word level in english and that it is a more effective cue than intensity .
subjectivity in natural language refers to aspects of language used to express opinions , feelings , evaluations , and speculations and it , thus , incorporates sentiment .
in natural language , subjectivity refers to expression of opinions , evaluations , feelings , and speculations and thus incorporates sentiment .
word reordering knowledge needs to be incorporated into attention-based nmt .
we aim to capture word reordering knowledge for the attention-based nmt by incorporating distortion models .
to address this problem , we proposed the application of the online learning protocol to leverage user feedback and to tailor qe .
to tackle this issue we propose an online framework for adaptive qe that targets reactivity and robustness to user and domain changes .
transliteration is the task of converting a word from one writing script to another , usually based on the phonetics of the original word .
transliteration is the conversion of a text from one script to another .
the penn discourse treebank , developed by prasad et al , is currently the largest discourse-annotated corpus , consisting of 2159 wall street journal articles .
the penn discourse treebank is the largest resource to date that provides a discourse annotated corpus in english .
hierarchical phrase-based translation was proposed by chiang .
hierarchical phrase-based translation was first proposed by chiang .
the promt smt system is based on the moses open-source toolkit .
it is a standard phrase-based smt system built using the moses toolkit .
the systems were tuned using a small extracted parallel dataset with minimum error rate training and then tested with different test sets .
their weights are optimized using minimum error-rate training on a held-out development set for each of the experiments .
sentiment classification is a useful technique for analyzing subjective information in a large number of texts , and many studies have been conducted ( cite-p-15-3-1 ) .
sentiment classification is a special task of text categorization that aims to classify documents according to their opinion of , or sentiment toward a given subject ( e.g. , if an opinion is supported or not ) ( cite-p-11-1-2 ) .
we trained a linear log-loss model using stochastic gradient descent learning as implemented in the scikit-learn library .
for the feature-based system we used logistic regression classifier from the scikit-learn library .
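A minimal sketch of the two scikit-learn setups this pair mentions: SGDClassifier with log loss for the SGD-trained linear log-loss model, and LogisticRegression for the feature-based classifier. The toy data and bag-of-words features are assumptions.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression, SGDClassifier

    texts = ["great movie", "terrible plot"]   # placeholder training data
    labels = [1, 0]
    X = CountVectorizer().fit_transform(texts)

    # feature-based logistic regression classifier
    clf = LogisticRegression().fit(X, labels)

    # linear log-loss model trained with stochastic gradient descent
    sgd = SGDClassifier(loss="log_loss").fit(X, labels)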
but it also eliminates the need to directly predict the direction of translation of the parallel corpus .
an additional advantage of our approach is that it does not require an annotation of the translation direction of the parallel corpus .
for back-translation , we train a phrase-based smt system for each system in the reverse direction .
for preposition and determiner errors , we construct a system using a phrase-based statistical machine translation framework .
in this work , we use the margin infused relaxed algorithm with a hamming-loss margin .
we select the cutting-plane variant of the margin-infused relaxed algorithm with additional extensions described by eidelman .
wan et al use a dependency grammar to model word ordering and apply greedy search to find the best permutation .
both wan et al and our system use approximate search to solve the problem of input word ordering .
word sense disambiguation ( wsd ) is the task of determining the meaning of a word in a given context .
word sense disambiguation ( wsd ) is the nlp task that consists in selecting the correct sense of a polysemous word in a given context .
the language models were trained using the srilm toolkit .
the srilm toolkit was used to build the 5-gram language model .
distributed representations for words and sentences have been shown to significantly boost the performance of an nlp system .
previous work has shown that unlabeled text can be used to induce unsupervised word clusters which can improve the performance of many supervised nlp tasks .
semantic parsing is the task of transducing natural language ( nl ) utterances into formal meaning representations ( mrs ) , commonly represented as tree structures .
semantic parsing is the task of converting a sentence into a representation of its meaning , usually in a logical form grounded in the symbols of some fixed ontology or relational database ( cite-p-21-3-3 , cite-p-21-3-4 , cite-p-21-1-11 ) .
reasoning is the process of thinking in a logical way to form a conclusion .
reasoning is a crucial part of natural language argumentation .
the most widely used approach works at the word level .
previous such work operates at the word level .
for building the baseline smt system , we used the open-source smt toolkit moses , in its standard setup .
we compare the final system to moses , an open-source translation toolkit .
in this study , we proposed a method for disambiguating verbal word senses using term weight learning .
this paper describes an unsupervised learning algorithm for disambiguating verbal word senses using term weight learning .
word sense disambiguation ( wsd ) is a problem of finding the relevant clues in a surrounding context .
word sense disambiguation ( wsd ) is the task of determining the meaning of an ambiguous word in its context .
nonterminals n , productions r , start symbol math-w-4-1-0-54 .
truncation size is set to math-w-14-8-0-55 .
this requires part-of-speech tagging the glosses , for which we use the stanford maximum entropy tagger .
we use the stanford log-linear part-of-speech tagger to produce pos tags for the english side .
our algorithm induces a forest of alignments from which we can efficiently extract the k-best .
our algorithm yields a forest of word alignments , from which we can efficiently extract the k-best .
we extract dependency structures from the penn treebank using the head rules of yamada and matsumoto .
we generate dependency structures from the ptb constituency trees using the head rules of yamada and matsumoto .
finally , we use the bigram similarity dataset from mitchell and lapata which has 3 subsets , adjective-noun , noun-noun , and verbobject , and dev and test sets for each .
specifically , we used the dataset from mitchell and lapata which contains similarity judgments for adjective-noun , noun-noun and verb-object phrases , respectively .
most relevant to our work is the state of the art in modal sense classification in ruppenhofer and rehbein .
we reconstruct the modal sense classifier of ruppenhofer and rehbein to compare against prior work .
translation performances are measured with case-insensitive bleu4 score .
translation quality is measured by case-insensitive bleu on newstest13 using one reference translation .
the knowledge representation system kl-one was the first dl .
the knowledge representation system kl-one was the first dl .
for all three classifiers , we used the word2vec 300d pre-trained embeddings as features .
we use the word2vec skip-gram model to train our word embeddings .
we applied our algorithm to construct a semantic parser for freebase .
to test this capability , we applied the trained parser to natural language queries against freebase .
we convert the question into a sequence of learned word embeddings by looking up the pre-trained vectors , such as glove .
we initialize the embedding weights by the pre-trained word embeddings with 200 dimensional vectors .
the models are built using the sri language modeling toolkit .
the language models were trained using the srilm toolkit .
as textual features , we use the pretrained google news word embeddings , obtained by training the skip-gram model with negative sampling .
for our purpose we use word2vec embeddings trained on a google news dataset and find the pairwise cosine distances for all words .
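A sketch of the pairwise cosine distances over pretrained GoogleNews word2vec vectors; the word list is a placeholder, and gensim is one way (not necessarily the authors' way) to do the lookup.

    from itertools import combinations
    from gensim.models import KeyedVectors

    wv = KeyedVectors.load_word2vec_format("GoogleNews-vectors-negative300.bin", binary=True)
    for a, b in combinations(["king", "queen", "car"], 2):   # placeholder word list
        print(a, b, 1.0 - wv.similarity(a, b))   # cosine distance = 1 - cosine similarity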
to evaluate performance we use the second half of the data set released by zeichner , berant , and dagan as a test set .
we use the data set released by zeichner , berant , and dagan , which contains 6,567 entailment rule applications annotated for their validity by crowdsourcing .
experimental results show that our proposed method outperforms the state-of-the-art methods .
the experimental results show that our method achieves better performance than the state-of-the-art methods .
we use the sri language modeling toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus .
for the language model , we use a trigram language model trained with the srilm toolkit on the english side of the training corpus .
in clark and curran we describe a discriminative method for estimating the parameters of a log-linear parsing model .
in clark and curran we describe efficient methods for performing the calculations using packed charts .
the key to our solution is the inversion transduction grammars , a type of synchronous context free grammar limiting reordering to adjacent source spans .
most related to our approach , wu used inversion transduction grammars-a synchronous context-free formalism -for this task .
here we use the discourse relation expansion as defined in the penn discourse treebank .
we use lists of discourse markers compiled from the penn discourse treebank and from to identify such markers in the text .
estimated on a large set of description-tag pairs , we build a word trigger method ( wtm ) to suggest tags .
based on this perspective , we build a simple word trigger method ( wtm ) for social tag suggestion .
our model is based on the standard lstm encoder-decoder model with an attention mechanism .
we use opennmt , which is an implementation of the popular nmt approach that uses an attentional encoder-decoder network .
the srilm toolkit was used to build this language model .
the language models were trained using the srilm toolkit .
co-training has been successfully applied to various applications , such as statistical parsing and web pages classification .
co-training has been applied to a number of nlp applications , including pos-tagging , parsing , word sense disambiguation , and base noun phrase detection .
morfessor 2.0 is a new implementation of the morfessor baseline algorithm .
morfessor is a family of probabilistic machine learning methods for finding the morphological segmentation from raw text data .
in this paper , we have shown the evolution of action recognition datasets and tasks from simple ad-hoc labels .
in this paper , we provide a unified view of action recognition tasks , pointing out their strengths and weaknesses .
semantic relatedness is a very important factor for coreference resolution , as noun phrases used to refer to the same entity should have a certain semantic relation .
semantic relatedness is the task of quantifying the strength of the semantic connection between textual units , be they words , sentences , or documents .
erk introduced a distributional similarity-based model for selectional preferences , reminiscent of that of pantel and lin .
in erk , a distributional similarity-based model for selectional preferences is introduced , reminiscent of that of pantel and lin .
in this paper we develop a baseline approach to identify and verify simple claims about statistical properties .
in this paper we developed a distantly supervised approach for identification and verification of simple statistical claims .
jeong , lin , and lee use semi-supervised boosting to tag the sentences in e-mail and forum discussions with speech acts by inducing knowledge from annotated spoken conversations .
jeong et al use semi-supervised learning to transfer dialogue acts from labeled speech corpora to the internet media of forums and e-mail .
we utilize minimum error rate training to optimize feature weights of the paraphrasing model according to ndcg .
we tune phrase-based smt models using minimum error rate training and the development data for each language pair .
in this paper , we study the problem of obtaining partial annotation from freely available data .
in this paper , we investigate techniques for adopting freely available data to help improve the performance on chinese word segmentation .
neural networks have been successfully applied to nlp problems , specifically , sequence-to-sequence models applied to machine translation and word-to-vector models .
recurrent neural networks have successfully been used in sequence learning problems , for example machine translation , and language modeling .
we train probabilistic parsing models for resource-poor languages by transferring cross-lingual knowledge from resource-rich language .
we train probabilistic parsing models for resource-poor languages by maximizing a combination of likelihood on parallel data and confidence on unlabeled data .
a 5-gram language model of the target language was trained using kenlm .
an english 5-gram language model is trained using kenlm on the gigaword corpus .
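For the KenLM-trained 5-gram models in this pair, a sketch of querying such a model from Python; the model file is hypothetical (e.g. produced by KenLM's lmplz -o 5).

    import kenlm   # Python bindings for https://github.com/kpu/kenlm

    # hypothetical model file, e.g. from: lmplz -o 5 < corpus.txt > en.5gram.arpa
    model = kenlm.Model("en.5gram.arpa")
    print(model.score("this is a test", bos=True, eos=True))   # log10 probability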
the paper presents an application of structural correspondence learning ( scl ) ( cite-p-14-1-4 ) .
the paper presents an application of structural correspondence learning ( scl ) to parse disambiguation .
we trained a 5-gram language model on the english side of each training corpus using the sri language modeling toolkit .
we trained a 4-gram language model on the xinhua portion of gigaword corpus using the sri language modeling toolkit with modified kneser-ney smoothing .
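SRILM training recurs throughout these pairs; a sketch of the usual ngram-count invocation, wrapped in Python. Paths are hypothetical; -kndiscount with -interpolate gives interpolated modified Kneser-Ney smoothing.

    import subprocess

    # assumes SRILM's ngram-count is on PATH; corpus and output paths are hypothetical
    subprocess.run([
        "ngram-count", "-order", "5",
        "-kndiscount", "-interpolate",   # interpolated modified Kneser-Ney
        "-text", "xinhua.en", "-lm", "en.5gram.lm",
    ], check=True)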
to tackle this problem , hochreiter et al introduced an architecture , called long short-term memory , that allows temporal information to be preserved even if the correlated events are separated by a long time lag .
to tackle this problem , hochreiter and schmidhuber proposed long short term memory , which uses a cell with input , forget and output gates to prevent the vanishing gradient problem .
as classifier , we use the l2-regularized logistic regression from the liblinear package , which we accessed through weka .
we use the multi-class logistic regression classifier from the liblinear package for the prediction of edit scripts .
relation extraction is the task of finding semantic relations between entities from text .
relation extraction is a fundamental task in information extraction .
the log-linear feature weights are tuned with minimum error rate training on bleu .
feature weights are tuned using minimum error rate training on the 455 provided references .
headden , johnson and mcclosky introduced the extended valence grammar and added lexicalization and smoothing .
headden iii et al introduce the extended valence grammar and add lexicalization and smoothing .
chelba and acero use the parameters of the source domain maximum entropy classifier as the means of a gaussian prior when training a new model on the target data .
chelba and acero use the parameters of the maximum entropy model learned from the source domain as the means of a gaussian prior when training a new model on the target data .
table 4 shows the comparison of the performances on bleu metric .
the table also shows the popular bleu and nist mt metrics .
then we train word2vec to represent each entity with a 100-dimensional embedding vector .
we then used word2vec to train word embeddings with 512 dimensions on each of the prepared corpora .
by casting the pseudo-word searching problem into a parsing framework , we search for pseudowords .
by casting the pseudo-word searching problem into a parsing framework , we search for pseudowords in polynomial time .
in contrast to previous statistical learning approaches , we directly translate math word problems .
in contrast to these approaches , we study the feasibility of applying deep learning to the task of math word problem solving .
in this paper , we propose a method to jointly model and exploit the context compatibility and the topic coherence .
in this paper , we propose a generative model , called the entity-topic model , to effectively join the above two complementary directions together .
the vectors are given by a word2vec model and a glove model trained on german data .
the vectors can be pretrained by neural language models .
kwon et al drew a two-dimensional plot of 59 features ranked by means of forward selection and backward elimination .
kwon et al drew a two-dimensional plot of 59 features ranked by forward selection and backward elimination .
for the language model , we used srilm with modified kneser-ney smoothing .
we build an open-vocabulary language model with kneser-ney smoothing using the srilm toolkit .
we use srilm to train a 5-gram language model on the xinhua portion of the english gigaword corpus 5th edition with modified kneser-ney discounting .
we train a 4-gram language model on the xinhua portion of the english gigaword corpus using the srilm toolkits with modified kneser-ney smoothing .
relation extraction ( re ) is the task of recognizing the assertion of a particular relationship between two or more entities in text .
relation extraction ( re ) is the task of assigning a semantic relationship between a pair of arguments .
user affect parameters do produce useful models of student learning .
user affect parameters can increase the usefulness of these models .
in this study , we focus on the problem of cross-lingual sentiment classification , which leverages only english training data for supervised sentiment classification of chinese product reviews .
in this study , we focus on improving the corpus-based method for cross-lingual sentiment classification of chinese product reviews by developing novel approaches .
the expectation-maximization algorithm can be used to train probabilities if the state behaviour is fixed .
for unsupervised learning one can consider the labels as missing data and estimate their values using the expectation maximization algorithm .
we used word2vec to convert each word in the world state and query to its vector representation .
we initialize our word vectors with 300-dimensional word2vec word embeddings .
plagiarism is a very significant problem nowadays , specifically in higher education institutions .
plagiarism is a major issue in science and education .
sentiment analysis is a technique to classify documents based on the polarity of opinion expressed by the author of the document ( cite-p-16-1-13 ) .
sentiment analysis is the computational analysis of people ’ s feelings or beliefs expressed in texts such as emotions , opinions , attitudes , appraisals , etc . ( cite-p-11-3-3 ) .
we used a 5-gram language model with modified kneser-ney smoothing , built with the srilm toolkit .
we apply the sri language modeling toolkit to train a 4-gram language model with kneser-ney smoothing .
we used a 5-gram language model with modified kneser-ney smoothing , built with the srilm toolkit .
we used kenlm with srilm to train a 5-gram language model based on all available target language training data .
coreference resolution is a set partitioning problem in which each resulting partition refers to an entity .
coreference resolution is a well known clustering task in natural language processing .
dependency parsing is a crucial component of many natural language processing systems , for tasks such as text classification ( özgür and güngör , 2010 ) , statistical machine translation ( cite-p-13-3-0 ) , relation extraction ( cite-p-13-1-1 ) , and question answering ( cite-p-13-1-3 ) .
dependency parsing is the task of assigning dependency structures to a given sentence math-w-4-1-0-14 .
the similarity-based model showed error rates down to 0.16 , far lower than both em-based clustering and resnik ’ s wordnet model .
in the evaluation , the similarity-based model shows lower error rates than both resnik ’ s wordnet-based model and the em-based clustering model .
the target-side language models were estimated using the srilm toolkit .
the language models were trained using the srilm toolkit .
semantic role labeling ( srl ) is a kind of shallow semantic parsing task and its goal is to recognize some related phrases and assign a joint structure ( who did what to whom , when , where , why , how ) to each predicate of a sentence ( cite-p-24-3-4 ) .
semantic role labeling ( srl ) is a task of automatically identifying semantic relations between predicate and its related arguments in the sentence .
we apply the stanford coreference resolution system .
we use the stanford rule-based system for coreference resolution .
neural models , with various neural architectures , have recently achieved great success .
recently , neural networks have become popular for natural language processing .
with regard to surface realisation , decisions are often made according to a language model of the domain .
surface realisation decisions in a natural language generation system are often made according to a language model of the domain .