columns : sentence1 ( string , lengths 16-446 ) ; sentence2 ( string , lengths 14-436 )
we use a maximum entropy classifier which allows an efficient combination of many overlapping features .
for each connective we built a specialized classifier , by using the stanford maximum entropy classifier package .
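a minimal sketch of such a per-connective maximum entropy classifier , using scikit-learn's logistic regression ( equivalent to a maxent model ) as a stand-in for the stanford package ; the overlapping feature dicts and sense labels are hypothetical .

```python
# Maxent over many overlapping, sparse binary features; scikit-learn's
# LogisticRegression stands in for the Stanford maxent package.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical features for one connective (e.g. "while").
train_features = [
    {"prev_word=said": 1, "pos=IN": 1, "clause_len>5": 1},
    {"prev_word=,": 1, "pos=IN": 1, "sent_initial": 1},
]
train_labels = ["temporal", "comparison"]

clf = make_pipeline(DictVectorizer(sparse=True),
                    LogisticRegression(max_iter=1000))
clf.fit(train_features, train_labels)
print(clf.predict([{"pos=IN": 1, "sent_initial": 1}]))
```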
experiments show that our system was able to outperform other logic-based systems .
the results of our experiments on two datasets show that our system was able to outperform other logic-based systems .
a 4-gram language model was trained on the target side of the parallel data using the srilm toolkit from stolcke .
we use srilm toolkit to build a 5-gram language model with modified kneser-ney smoothing .
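both sentences describe the same srilm recipe ; the sketch below approximates it in python with nltk's kneser-ney model ( note that nltk implements interpolated , not srilm's modified , kneser-ney ) , trained on a hypothetical toy corpus .

```python
# 5-gram language model with interpolated Kneser-Ney smoothing via NLTK,
# approximating the SRILM pipeline described above.
from nltk.lm import KneserNeyInterpolated
from nltk.lm.preprocessing import padded_everygram_pipeline

corpus = [["we", "use", "the", "srilm", "toolkit"],
          ["we", "train", "a", "language", "model"]]

order = 5
train_ngrams, vocab = padded_everygram_pipeline(order, corpus)
lm = KneserNeyInterpolated(order)
lm.fit(train_ngrams, vocab)

# Probability of a word given a (possibly shorter-than-4-word) history.
print(lm.score("toolkit", ["use", "the", "srilm"]))
```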
our experiments are conducted with the dialogue state tracking challenge 2 dataset , which is on restaurant information domain .
as for data set , we use the dialogue state tracking challenge 2 dataset , which is in a restaurant information domain .
in this work , we present a simple method to extend an existing ccg parser to parse a set of sentences .
in this work , we solve the inconsistency problem above by adapting the inter-sentence model of cite-p-12-3-3 to ccg parsing .
the weights λm in the log-linear model were trained using minimum error rate training with the news 2009 development set .
the nnlm weights are optimized as the other feature weights using minimum error rate training .
the experimental results reveal that our approach achieves significant improvement .
experimental results show that the proposed approach consistently achieves great success .
vector space models represent the meaning of a target word as a vector in a high-dimensional space .
vector space models of word meaning represent words as points in a high-dimensional semantic space .
the model uses non-negative matrix factorization in order to find latent dimensions .
our model uses non-negative matrix factorization ( nmf ) in order to find latent dimensions .
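a minimal sketch of the nmf step with scikit-learn on a hypothetical word-context count matrix ; w gives each word's coordinates on the latent dimensions .

```python
# Non-negative matrix factorization of a word-context count matrix.
import numpy as np
from sklearn.decomposition import NMF

# Rows: target words; columns: context features (toy co-occurrence counts).
counts = np.array([[5, 0, 1, 0],
                   [4, 1, 0, 0],
                   [0, 3, 0, 4],
                   [0, 2, 1, 5]], dtype=float)

nmf = NMF(n_components=2, init="nndsvd", random_state=0)
W = nmf.fit_transform(counts)   # words x latent dimensions
H = nmf.components_             # latent dimensions x contexts
print(W.round(2))
```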
recently , the field has been influenced by the success of neural language models .
this approach has already been used with great success in the domain of language models .
information extraction ( ie ) is the task of extracting information from natural language texts to fill a database record following a structure called a template .
information extraction ( ie ) is the process of finding relevant entities and their relationships within textual documents .
collobert et al adjust the feature embeddings according to the specific task in a deep neural network architecture .
collobert et al employ a cnn-crf structure , which obtains competitive results to statistical models .
the data to be annotated in wssim-1 were taken primarily from semcor and the senseval-3 english lexical sample .
the sentences that we use from the gws dataset were originally extracted from the english senseval-3 lexical sample task .
the parameters for each phrase table were tuned separately using minimum error rate training .
the feature weights for each system were tuned on development sets using the moses implementation of minimum error rate training .
we compare our method with previous work on sentiment classification using standard svm .
we use the same set of binary features as in previous work on this dataset .
evaluation results show consistent improvements over the raw first-stage mt system output .
evaluation results show significant improvements over the first-stage raw mt system .
in recent years , log-linear model has been a mainstream method to formulate statistical models for machine translation .
the phrase-based translation approach has been a popular and widely used strategy for statistical machine translation since och et al proposed the log-linear model .
an english 5-gram language model is trained using kenlm on the gigaword corpus .
the 5-gram kneser-ney smoothed language models were trained by srilm , with kenlm used at runtime .
with a powerful customizable design , the association cloud platform can be adapted to any specific domain .
with a powerful customizable design , the association cloud platform can be adapted to any specific domain , including those with complex specialized terms .
to get a dictionary of word embeddings , we use the word2vec tool and train it on the chinese gigaword corpus .
in this run , we use a sentence vector derived from word embeddings obtained from word2vec .
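a small sketch of the word2vec training step with gensim's skip-gram implementation ( gensim 4 api ) ; the two toy sentences stand in for a corpus such as gigaword .

```python
# Skip-gram word embeddings with negative sampling via gensim.
from gensim.models import Word2Vec

sentences = [["the", "cat", "sat", "on", "the", "mat"],
             ["the", "dog", "sat", "on", "the", "rug"]]

model = Word2Vec(sentences,
                 vector_size=100,  # embedding dimensionality
                 sg=1,             # 1 = skip-gram, 0 = CBOW
                 negative=5,       # negative sampling
                 window=5,
                 min_count=1)
print(model.wv["cat"][:5])
```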
the problem we tackle in this paper is to generate an extractive summary ( usually , we will simply say summary ) from its citation summary .
the problem we tackle in this paper is to generate an extractive summary ( usually , we will simply say summary ) from its citation summary .
feature weights are tuned using minimum error rate training on the 455 provided references .
feature weights were set with minimum error rate training on a development set using bleu as the objective function .
coreference resolution is a set partitioning problem in which each resulting partition refers to an entity .
additionally , coreference resolution is a pervasive problem in nlp and many nlp applications could benefit from an effective coreference resolver that can be easily configured and customized .
tsuruoka and tsujii proposed a bidirectional pos tagger , in which the order of inference is handled with the easiest-first heuristic .
tsuruoka and tsujii proposed the easiest-first approach which greatly reduced the computation complexity of inference while maintaining the accuracy on labeling .
common training criteria include the maximum likelihood , averaged structured perceptron , and max-margin .
other training criteria , such as maximum likelihood or max-margin , could also be employed .
update summarization is a form of multi-document summarization where a document set must be summarized in the context of other documents assumed to be known .
update summarization is a new challenge in multi-document summarization focusing on summarizing a set of recent documents relatively to another set of earlier documents .
in this paper , we present an unsupervised model to automatically extrapolate text recaps of tv shows .
in this paper , we explore a new problem of text recap extraction for tv shows .
yu and hatzivassiloglou have reported a similarity based method using words , phrases and wordnet synsets for sentiment sentence extraction .
yu and hatzivassiloglou identified the polarity of opinion sentences using semantically oriented words .
we employ support vector machines to perform the classification .
as a supervised classifier , we use support vector machines with a linear kernel .
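a minimal sketch of such a linear-kernel svm classifier with scikit-learn ; the texts , labels , and tf-idf features are hypothetical stand-ins for the features used in these papers .

```python
# Linear-kernel SVM classification over simple text features.
from sklearn.svm import LinearSVC
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

texts = ["great movie , loved it", "terrible plot , fell asleep"]
labels = ["pos", "neg"]

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(texts, labels)
print(clf.predict(["loved the plot"]))
```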
coreference resolution is the task of grouping all the mentions of entities in a document into equivalence classes so that all the mentions in a given class refer to the same discourse entity .
coreference resolution is the task of determining which mentions in a text refer to the same entity .
relation extraction ( re ) is the task of recognizing relationships between entities mentioned in text .
relation extraction ( re ) is the process of generating structured relation knowledge from unstructured natural language texts .
recently , vaswani et al proposed a model called transformer , which completely relies on attention and feed-forward layers instead of rnn architecture .
recently , vaswani et al propose a novel sequence-to-sequence generation network , the transformer , which is entirely based on attention .
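a compact sketch of an attention-plus-feed-forward encoder in the spirit of the transformer , built from pytorch's stock layers ; all sizes are illustrative .

```python
# Attention-only encoder: self-attention + feed-forward, no RNN.
import torch
import torch.nn as nn

d_model, n_heads, n_layers = 64, 4, 2
layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                   dim_feedforward=256, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

x = torch.randn(8, 20, d_model)  # (batch, sequence length, model dim)
out = encoder(x)
print(out.shape)                 # torch.Size([8, 20, 64])
```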
systems using content-based filtering use the content information of recommendation items .
content based recommendation systems use the textual information of news articles and user generated content to rank items .
socher et al and socher et al present a framework based on recursive neural networks that learns vector space representations for multi-word phrases and sentences .
socher et al utilized parsing to model the hierarchical structure of sentences and used unfolding recursive autoencoders to learn representations for single words and phrases acting as non-leaf nodes in the tree .
for the implementation of the discriminative sequential model , we chose the wapiti toolkit .
we used wapiti , which is a simple and fast discriminative sequence labeling toolkit , to train the sequential models .
similar to chen et al , we use uncertainty-based sampling but combine it with an svm model .
in contrast to chen et al , we opt for simple , readily available features derived from cooccurrences .
we used a 5-gram language model with modified kneser-ney smoothing , built with the srilm toolkit .
for both languages , we used the srilm toolkit to train a 5-gram language model using all monolingual data provided .
the approach described here makes use of a neural network algorithm that is typically used to generate word embeddings .
due to the success of word embeddings in word similarity judgment tasks , this work also makes use of global vector word embeddings .
using a large monolingual corpus , we train a word-embedding space e^n of dimensionality n for all words in v using the skipgram model .
based on the distributional hypothesis , we train a skip-gram model to learn the distributional representations of words in a large corpus .
coreference resolution is the process of linking together multiple expressions of a given entity .
coreference resolution is the task of clustering referring expressions in a text so that each resulting cluster represents an entity .
we use a support vector machine -based chunker yamcha for the chunking process .
we used the chunker yamcha , which is based on support vector machines .
we describe bayesum , an algorithm for performing query-focused summarization .
we present bayesum ( for “ bayesian summarization ” ) , a model for sentence extraction in query-focused summarization .
we took up to 40 test examples for each target word ( some words had fewer test examples ) , yielding 913 test examples .
for the test set we took up to 40 test examples for each target word ( some words had fewer test examples ) , yielding 913 test examples in total , out of which 239 were positive .
annotation of conversation can power adaptive intervention in collaborative learning settings .
automated annotation of social behavior in conversation is necessary for large-scale analysis of real-world conversational data .
in this paper , we proposed an integration of distanced n-grams into the original dclm model .
in , the dclm model was proposed to tackle the data sparseness and to extract the large-span information for the n-gram model .
relation extraction is the task of finding relational facts in unstructured text and putting them into a structured ( tabularized ) knowledge base .
relation extraction ( re ) is the task of assigning a semantic relationship between a pair of arguments .
extensive experiments have leveraged word embeddings to find general semantic relations .
recent works on word embedding show improvements in capturing semantic features of the words .
irony is a profoundly pragmatic and versatile linguistic phenomenon .
irony is a particular type of figurative language in which the meaning is often the opposite of what is literally said and is not always evident without context or existing knowledge .
blei and lafferty defined correlated topic models by replacing the dirichlet in latent dirichlet allocation models with a logistic normal ( ln ) distribution .
brody and lapata extend the latent dirichlet allocation model to combine evidence from different types of contexts .
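both extensions start from latent dirichlet allocation ; below is a brief scikit-learn sketch of the base lda model on a hypothetical three-document corpus .

```python
# Latent Dirichlet allocation over a toy bag-of-words corpus.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = ["stock market trading prices",
        "match team score goal",
        "market prices fall on trading news"]

counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)  # document-topic proportions
print(doc_topics.round(2))
```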
tang et al designed a deep memory network which consisted of multiple computational layers , each of which was an attention model over an external memory .
tang et al proposed a deep memory network with multiple attention-based computational layers to improve the performance .
we evaluate our approach on the english portion of the conll-2012 dataset .
we train and evaluate our model on the english corpus of the conll-2012 shared task .
chiang gives a good introduction to stsgs , which originate from the syntax-directed translation schemes of aho and ullman .
chiang and knight give a good introduction to stsgs , which originate from the syntax-directed translation schemes of aho and ullman .
in the task-based evaluation , the enriched model derived from the triples of background knowledge performs better by 3.02 % , which demonstrates the effectiveness of our framework .
moreover , we conduct a task-based evaluation by incorporating these triples as additional features into document classification , which enhances the performance by 3.02 % .
we trained the statistical phrase-based systems using the moses toolkit with mert tuning .
we trained a phrase-based smt engine to translate known words and phrases using the training tools available with moses .
the feature extractor φ is a multi-layer perceptron over token embeddings , initialized by pre-trained word2vec vectors .
the word embeddings are initialized with 100-dimensions vectors pre-trained by the cbow model .
following , we use the word analogical reasoning task to evaluate the quality of word embeddings .
we use the skipgram model with negative sampling implemented in the open-source word2vec toolkit to learn word representations .
co-training model can learn a performance-driven data selection policy to select high-quality unlabeled data .
the q-agent in our model can learn a good data selection policy to select high-quality unlabeled data for co-training .
this corpus was compiled at the university of twente and subsequently parsed by the alpino parser at the university of groningen .
it was compiled at the university of twente and later parsed by the alpino parser at the university of groningen .
we preprocessed the corpus with tokenization and true-casing tools from the moses toolkit .
finally , we extract the semantic phrase table from the augmented aligned corpora using the moses toolkit .
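a small sketch of the tokenization and truecasing preprocessing steps using sacremoses , the python port of the moses scripts ; training a truecaser on a single sentence is only for illustration .

```python
# Moses-style tokenization and truecasing via sacremoses.
from sacremoses import MosesTokenizer, MosesTruecaser

mt = MosesTokenizer(lang="en")
tokens = mt.tokenize("The SRILM toolkit was used.", return_str=True)

mtc = MosesTruecaser()
mtc.train([tokens.split()])          # learn casing statistics
print(mtc.truecase(tokens, return_str=True))
```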
three out of the four categories of features can be inferred from an image-question pair .
we proposed four different categories of auxiliary features , three of which can be inferred from an image-question pair .
automatic text summarization is a seminal problem in information retrieval and natural language processing ( luhn , 1958 ; baxendale , 1958 ; edmundson , 1969 ) .
automatic text summarization is a rapidly developing field in computational linguistics .
word sense disambiguation ( wsd ) is the task of determining the meaning of a word in a given context .
word sense disambiguation ( wsd ) is a key enabling technology that automatically chooses the intended sense of a word in context .
in an enc-dec model , a long input sequence results in performance degradation due to loss of information in the front portion of the input sequence .
the rnn encoder–decoder model suffers from poor performance when the length of the input sequence is long .
sentiment classification is the task of classifying an opinion document as expressing a positive or negative sentiment .
sentiment classification is the fundamental task of sentiment analysis ( cite-p-15-3-11 ) , where we are to classify the sentiment of a given text .
classifier we use the l2-regularized logistic regression from the liblinear package , which we accessed through weka .
we build all the classifiers using the l2-regularized linear logistic regression from the liblinear package .
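a minimal sketch of the l2-regularized logistic regression setup with the liblinear solver in scikit-learn ; the feature matrix and labels are hypothetical .

```python
# L2-regularized logistic regression with the liblinear solver.
from sklearn.linear_model import LogisticRegression

X = [[1.0, 0.0, 2.0],
     [0.0, 1.0, 0.5],
     [1.5, 0.0, 1.0],
     [0.0, 2.0, 0.0]]
y = [1, 0, 1, 0]

clf = LogisticRegression(penalty="l2", C=1.0, solver="liblinear")
clf.fit(X, y)
print(clf.predict_proba([[1.0, 0.5, 1.0]]))
```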
for word embeddings , we consider word2vec and glove .
we use pre-trained glove vector for initialization of word embeddings .
in reasoning about ( 12 ) , r can attribute to q the belief expressed in ( 19 ) , combined with a belief that kathy will be at the hospital at time t2 .
in reasoning about ( 12 ) , r can attribute to q the belief expressed in ( 19 ) , combined with a belief that kathy will be at the hospital at time t2 .
however , the classical algorithm by dale and haddock was recently shown to be unable to generate satisfying res in practice .
however , the classical algorithm by dale and haddock was shown to be unable to generate satisfying res in practice .
transliteration is a process of rewriting a word from a source language to a target language in a different writing system using the word ’ s phonological equivalent .
phonetic translation across these pairs is called transliteration .
the target-side language models were estimated using the srilm toolkit .
the language model was trained using srilm toolkit .
we use a minibatch stochastic gradient descent algorithm together with the adam optimizer .
to train the network , we make use of stochastic gradient descent and the adam optimization algorithm .
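a minimal pytorch sketch of minibatch stochastic gradient training with the adam optimizer ; the linear model and random data are placeholders .

```python
# Minibatch SGD training loop with the Adam optimizer.
import torch
import torch.nn as nn

model = nn.Linear(10, 2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    x = torch.randn(32, 10)                  # a random minibatch
    y = torch.randint(0, 2, (32,))
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()                          # stochastic gradient
    optimizer.step()                         # Adam update
```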
in order to measure translation quality , we use bleu and ter scores .
to evaluate the evidence span identification , we calculate f-measure on words , and bleu and rouge .
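a short sketch of scoring output with bleu and ter via the sacrebleu package ; the hypothesis and reference are toy examples ( rouge and word-level f-measure would come from other libraries ) .

```python
# Corpus-level BLEU and TER with sacrebleu.
from sacrebleu.metrics import BLEU, TER

hypotheses = ["the cat sat on the mat"]
references = [["the cat is sitting on the mat"]]  # one reference stream

print(BLEU().corpus_score(hypotheses, references))
print(TER().corpus_score(hypotheses, references))
```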
information extraction ( ie ) is the nlp field of research that is concerned with obtaining structured information from unstructured text .
information extraction ( ie ) is a technology that can be applied to identifying both sources and targets of new hyperlinks .
chinese is a language without natural word delimiters .
more importantly , chinese is a language that lacks the morphological clues that help determine the pos tag of a word .
based on the derived hierarchy , we generate a hierarchical organization of consumer reviews on various product aspects .
based on the derived hierarchy , we can generate a hierarchical organization of consumer reviews as well as consumer opinions on the aspects .
on the previously studied special case of single object reference , we achieve state-of-the-art performance , with over 35 % relative error reduction over previous state of the art .
additionally , on the previously studied special case of single object reference , we show a 35 % relative error reduction over previous state of the art .
for the decoder , we use a recurrent neural network language model , which is widely used in language generation tasks .
we model the generative architecture with a recurrent language model based on a recurrent neural network .
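a compact pytorch sketch of a recurrent neural network language model of the kind used as the decoder here ; vocabulary size and dimensions are illustrative .

```python
# RNN language model: embed tokens, run a GRU, predict next-token logits.
import torch
import torch.nn as nn

class RNNLM(nn.Module):
    def __init__(self, vocab_size=1000, emb_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, hidden=None):
        emb = self.embed(tokens)             # (batch, time, emb_dim)
        states, hidden = self.rnn(emb, hidden)
        return self.out(states), hidden      # logits per time step

lm = RNNLM()
logits, _ = lm(torch.randint(0, 1000, (4, 12)))
print(logits.shape)                          # torch.Size([4, 12, 1000])
```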
in both pre-training and fine-tuning , we adopt adagrad and l2 regularizer for optimization .
for the optimization process , we apply the diagonal variant of adagrad with mini-batches .
firstly , we built a forward 5-gram language model using the srilm toolkit with modified kneser-ney smoothing .
we trained a specific language model using srilm from each of these corpora in order to estimate n-gram log-probabilities .
we use srilm toolkit to build a 5-gram language model with modified kneser-ney smoothing .
for language model , we use a trigram language model trained with the srilm toolkit on the english side of the training corpus .
table 4 shows translation results in terms of bleu , ribes , and ter .
table 4 shows the bleu scores of the output descriptions .
blitzer et al apply structural correspondence learning for learning pivot features to increase accuracy in the target domain .
blitzer et al proposed structural correspondence learning to identify the correspondences among features between different domains via the concept of pivot features .
self-disclosure , the act of revealing oneself to others , is an important social behavior .
self-disclosure is an important and pervasive social behavior .
we use the open source moses phrase-based mt system to test the impact of the preprocessing technique on translation quality .
we use moses , a statistical machine translation system that allows training of translation models .
for example , knight and graehl employ cascaded probabilistic finite-state transducers , one of the stages modeling the orthographicto-phonetic mapping .
for example , knight and graehl address the problem through cascaded finite state transducers , with explicit representations of the phonetics .
in this paper , we propose a new approach based on the skipgram model , where each word is represented as a bag of character n-grams .
in this paper , we investigate a simple method to learn word representations by taking into account subword information .
experiments on english-chinese and english-french show that our approach is significantly better than previous combination methods , including sentence-level constrained translation .
experiments on english-chinese and english-french show that compared with previous combination methods , our approach produces significantly better translation results .
in their model , citing articles “ vote ” on each cited article ’ s topic distribution .
in their model , citing articles “ vote ” on each cited article ’ s topic distribution in retrospect , via a network flow model .
similarity between their hidden representations shows comparable performance with the state-of-the-art supervised models and in some cases outperforms them .
furthermore , the unsupervised version of our autoencoder shows comparable performance with the supervised baseline models and in some cases outperforms them .
in this section we concentrate on some unsupervised methods .
in this section we concentrate on some unsupervised methods as related works .
finally , we can plug the acquired list of closed-class words into a minimally supervised tagging system , which requires the input of such a lexicon only .
finally , we plug this newly acquired closed-class lexicon into a minimally supervised tagging system , which requires as input exactly such a lexicon .
substitution and feature structures for tags .
substitution , and feature structure representation for tags .
we used the logistic regression implementation in scikit-learn for the maximum entropy models in our experiments .
we implement logistic regression with scikit-learn and use the lbfgs solver .
coreference resolution is a set partitioning problem in which each resulting partition refers to an entity .
coreference resolution is the process of linking together multiple expressions of a given entity .
we use the standard stanford-style set of dependency labels .
we use the collapsed tree formalism of the stanford dependency parser .
we use the svm implementation available in the liblinear package .
we use the liblinear tool as our svm implementation .
for word embedding , we used pre-trained glove word vectors with 300 dimensions , and froze them during training .
we also used pre-trained word embeddings , including glove and 300d fasttext vectors .
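a brief sketch of initializing and freezing an embedding layer from pre-trained glove vectors in pytorch ; the vocabulary is hypothetical and the file name assumes a local copy of the stanford glove release .

```python
# Initialize an embedding matrix from GloVe vectors and freeze it.
import numpy as np
import torch
import torch.nn as nn

vocab = {"the": 0, "cat": 1, "sat": 2}   # hypothetical vocabulary
dim = 300
matrix = np.random.uniform(-0.05, 0.05, (len(vocab), dim)).astype("float32")

with open("glove.6B.300d.txt", encoding="utf-8") as f:  # assumed local copy
    for line in f:
        parts = line.rstrip().split(" ")
        if parts[0] in vocab:
            matrix[vocab[parts[0]]] = np.asarray(parts[1:], dtype="float32")

embedding = nn.Embedding.from_pretrained(torch.from_numpy(matrix),
                                         freeze=True)  # frozen in training
```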
to the best of our knowledge , this is the first work to use dnn technology for automatic math word problem solving .
this is the first work to apply deep learning technologies to math word problem solving .
transliteration is the process of converting terms written in one language into their approximate spelling or phonetic equivalents in another language .
transliteration is the conversion of a text from one script to another .
in this paper we present supwsd , whose objective is to overcome the aforementioned drawbacks , and facilitate the use of supervised wsd software .
in this demonstration we present supwsd , a java api for supervised word sense disambiguation ( wsd ) .