sentence1 : string , lengths 16-446
sentence2 : string , lengths 14-436
we use 300d glove vectors trained on 840b tokens as the word embedding input to the lstm .
for the word-embedding based classifier , we use the glove pre-trained word embeddings .
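as a concrete illustration of the glove-based setups in pairs like the one above, here is a minimal sketch of loading pre-trained glove vectors from their space-separated text format; the file name is an assumption.

```python
# minimal sketch: load pre-trained glove vectors (space-separated text
# format) into a {word: vector} dict; the file name is an assumption.
import numpy as np

def load_glove(path="glove.840B.300d.txt"):
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

embeddings = load_glove()
print(embeddings["the"].shape)  # (300,)
```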
in two experiments , we demonstrate that ( 1 ) our construction process accurately associates a novel sense with its correct hypernym and ( 2 ) the resulting resource has an immediate benefit for existing wordnet-based applications .
in two experiments we demonstrated that the crown construction process is accurate and that the resulting resource has a real benefit to wordnet-based applications .
patwardhan and riloff presented an information extraction system that finds relevant regions of text and applies extraction patterns within those regions .
patwardhan and riloff presented an information extraction system that finds relevant regions of text and applies extraction patterns within those regions .
from this perspective , our model can be seen as a proof of concept that it is possible to have rich feature-based conditioning .
we consider our model as a proof of concept that probabilistic structure-building models can include rich featural interactions .
word alignment is the problem of annotating parallel text with translational correspondence .
word alignment is a critical first step for building statistical machine translation systems .
from a corpus of 1.4m sentences , we learn about 250k simple propositions about american football in the form of predicate-argument structures .
we use an unsupervised model to infer domain-specific classes from a corpus of 1.4m unlabeled sentences , and apply them to learn 250k propositions about american football .
sentiment analysis is the process of identifying and extracting subjective information using natural language processing ( nlp ) .
sentiment analysis is a fundamental problem aiming to give a machine the ability to understand the emotions and opinions expressed in a written text .
we transfer the parameters corresponding to observations to initialize the training process .
our method transfers observation parameters trained on clustered text to initialize the training process .
djuric et al leveraged word embedding representations to improve machine learning based classifiers .
mikolov et al proposed a distributed word embedding model that allowed meaningful information to be conveyed in vectors derived from neural networks .
we reparsed the sentences using the charniak and johnson parser rather than using the gold-parses that ge marked up .
we used the first-stage parser of charniak and johnson for english and bitpar for german .
part of this proposal is concerned with the efficient discovery of web documents for a particular domain .
the first part of this proposal is concerned with the efficient discovery of publications in the web for a particular domain .
semantic parsing is the task of automatically translating natural language text to formal meaning representations ( e.g. , statements in a formal logic ) .
semantic parsing is the task of converting a sentence into a representation of its meaning , usually in a logical form grounded in the symbols of some fixed ontology or relational database ( cite-p-21-3-3 , cite-p-21-3-4 , cite-p-21-1-11 ) .
the decoding algorithm is a crucial part of statistical machine translation .
decoding algorithm is a crucial part in statistical machine translation .
we use the berkeley parser to parse all of the data .
we parsed all sentences with the berkeley parser .
the trigram language model is implemented in the srilm toolkit .
we implement an in-domain language model using the sri language modeling toolkit .
this paper proposes a technique for inserting linefeeds into transcribed texts of japanese monologue speech .
this paper proposed a method for inserting linefeeds into discourse speech data .
statistical machine translation systems employ a word-based alignment model .
most statistical machine translation systems employ a word-based alignment model .
using word2vec , we compute word embeddings for our text corpus .
we use word2vec to map words in our source and target corpora to n-dimensional vectors .
lemmatization is the process to determine the root/dictionary form of a surface word .
lemmatization is the process of determining the dictionary form of a word ( e.g . swim ) given one of its inflected variants ( e.g . swims , swimming , swam , swum ) .
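a small illustration of this definition, using nltk's wordnet lemmatizer; the wordnet data must be downloaded once via nltk.download.

```python
# minimal sketch with nltk's wordnet lemmatizer; pos="v" asks for the
# verb lemma, mapping inflected variants back to the dictionary form.
# requires: nltk.download("wordnet")
from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()
for form in ["swims", "swimming", "swam", "swum"]:
    print(form, "->", lemmatizer.lemmatize(form, pos="v"))
# all four forms map to "swim"
```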
we use the pre-trained word2vec embeddings provided by mikolov et al as model input .
we also obtain the embeddings of each word from word2vec .
evaluation at the sentence / segment level has turned out far more challenging than at the corpus / system level .
sentence level evaluation in mt has turned out far more difficult than corpus level evaluation .
dependency parsing is a very important nlp task and has wide usage in different tasks such as question answering , semantic parsing , information extraction and machine translation .
dependency parsing consists of finding the structure of a sentence as expressed by a set of directed links ( dependencies ) between words .
twitter is a huge microblogging service with more than 500 million tweets per day from different locations of the world and in different languages ( cite-p-8-1-9 ) .
twitter is a microblogging service that has 313 million monthly active users .
sentiment analysis is a fundamental problem aiming to give a machine the ability to understand the emotions and opinions expressed in a written text .
sentiment analysis is a collection of methods and algorithms used to infer and measure affection expressed by a writer .
the evaluation methodology for such knowledge has been problematic , hindering further research .
however , evaluation of such knowledge has been problematic , hindering further developments .
relation extraction is a key step towards question answering systems by which vital structured data is acquired from underlying free text resources .
relation extraction is the task of finding relations between entities in text , which is useful for several tasks such as information extraction , summarization , and question answering ( cite-p-14-3-7 ) .
word2vec is a method for obtaining distributed representations of words using neural networks with one hidden layer .
word2vec is a group of shallow neural networks generating representations of words in a continuous vector space depending on contexts they appear in .
our system currently works with the attentive qalstm model .
we use a seq2seq model with soft attention as our qg model .
semantic role labeling ( srl ) is the task of labeling the predicate-argument structures of sentences with semantic frames and their roles ( cite-p-18-1-2 , cite-p-18-1-19 ) .
semantic role labeling ( srl ) is the task of automatically annotating the predicate-argument structure in a sentence with semantic roles .
with shared parameters , the model is able to learn a general way to act in slots , increasing its scalability to large domains .
this , combined with an information sharing mechanism between slots , increases the scalability to large domains .
we use the moses toolkit to train our phrase-based smt models .
we implemented our method in a phrase-based smt system .
keyphrase extraction can be performed automatically by generating a list of keyphrase candidates , ranking these candidates , and selecting the top-ranked candidates as keyphrases .
keyphrases can be extracted automatically by generating a list of keyphrase candidates , ranking these candidates , and selecting the top-ranked candidates as keyphrases .
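a minimal sketch of this generate-rank-select pipeline, using tf-idf as a simple ranking function; the toy corpus and the cutoff k are assumptions.

```python
# generate candidate n-grams, rank them by tf-idf, keep the top-ranked
# ones as keyphrases; corpus and k are assumptions for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = ["neural machine translation with attention",
          "statistical machine translation systems",
          "attention mechanisms for translation"]

vectorizer = TfidfVectorizer(ngram_range=(1, 3), stop_words="english")
tfidf = vectorizer.fit_transform(corpus)

def top_keyphrases(doc_index, k=3):
    row = tfidf[doc_index].toarray().ravel()
    terms = vectorizer.get_feature_names_out()
    ranked = sorted(zip(row, terms), reverse=True)
    return [term for score, term in ranked[:k] if score > 0]

print(top_keyphrases(0))
```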
for word embeddings , we consider word2vec and glove .
our word embeddings are initialized with 100-dimensional glove word embeddings .
mlslda also can be viewed as a sentiment-informed multilingual word sense disambiguation ( wsd ) algorithm .
mlslda also can be viewed as a sentiment-informed multilingual word sense disambiguation ( wsd ) algorithm .
we used the 300-dimensional glove word embeddings learned from 840 billion tokens in the web crawl data , as general word embeddings .
we use glove word embeddings , which are 50-dimension word vectors trained with a crawled large corpus with 840 billion tokens .
waseem et al propose breaking abusive language identification into further subtasks .
waseem et al proposed a typology for various sub-types of abusive language .
we have developed willex , a tool that helps grammar developers to work efficiently .
we developed a debug tool , willex , which uses xml-tagged corpora and outputs information about grammar defects .
the feature weights are tuned to optimize bleu using the minimum error rate training algorithm .
the weights used during the reranking are tuned using the minimum error rate training algorithm .
the feature weights of the translation system are tuned with the standard minimum-error-rate training to maximize the system's bleu score on the development set .
the minimum error rate training procedure is used for tuning the model parameters of the translation system .
math-w-4-4-0-24 and math-w-4-4-0-27 represent the number of entities .
h i and math-w-2-7-0-62 are projected vectors of entities .
we use long short-term memory networks to build another semantics-based sentence representation .
we adopt a long short-term memory network for the word-level and sentence-level feature extraction .
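as a hedged sketch of such an lstm-based sentence representation, here is a minimal pytorch encoder that uses the final hidden state; the vocabulary size and dimensions are assumptions.

```python
# minimal pytorch sketch of an lstm sentence encoder; the final hidden
# state serves as a semantics-based sentence representation.
import torch
import torch.nn as nn

class SentenceEncoder(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=300, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)

    def forward(self, token_ids):              # (batch, seq_len)
        outputs, (h_n, c_n) = self.lstm(self.embed(token_ids))
        return h_n[-1]                         # (batch, hidden_dim)

encoder = SentenceEncoder()
sentence = torch.randint(0, 10000, (1, 7))     # one 7-token sentence
print(encoder(sentence).shape)                 # torch.Size([1, 128])
```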
we evaluate all models on the semeval lexical substitution task test set .
to do this we examine the dataset created for the english lexical substitution task in semeval .
our 5-gram language model is trained by the sri language modeling toolkit .
a 4-gram language model is trained on the monolingual data by srilm toolkit .
we used the moses pbsmt system for all of our mt experiments .
for all experiments , we used the moses smt system .
an implementation of this model is currently under development , within an incremental approach .
a prototype , stk , partially implementing the model , is currently under development , within an incremental approach .
coreference resolution is the task of grouping all the mentions of entities 1 in a document into equivalence classes so that all the mentions in a given class refer to the same discourse entity .
coreference resolution is the next step on the way towards discourse understanding .
we also use glove vectors to initialize the word embedding matrix in the caption embedding module .
for the word-embedding based classifier , we use the glove pre-trained word embeddings .
all the weights of those features are tuned by using minimum error rate training .
the component features are weighted to minimize a translation error criterion on a development set .
word sense disambiguation ( wsd ) is the problem of assigning a sense to an ambiguous word , using its context .
word sense disambiguation ( wsd ) is a key enabling technology that automatically chooses the intended sense of a word in context .
these models were implemented using the package scikit-learn .
the standard classifiers are implemented with scikit-learn .
results were obtained by training and evaluating each system on the full wsj portion of the penn treebank corpus .
all four algorithms were compared on two domains taken from the penn treebank annotated corpus .
this paper describes an automated system for assigning quality scores .
this work describes an automated quality-monitoring system that addresses these problems .
we use the skll and scikit-learn toolkits .
we implemented linear models with the scikit learn package .
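a minimal scikit-learn sketch of the kind of linear text classifier these lines describe; the toy data is an assumption.

```python
# bag-of-words features fed to a linear classifier, as a pipeline;
# texts and labels are toy assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["great movie", "terrible plot", "loved it", "waste of time"]
labels = [1, 0, 1, 0]

clf = make_pipeline(CountVectorizer(), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["really great plot"]))
```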
to cope with this problem we use the concept of class proposed for a word n-gram model .
to avoid this problem we use the concept of class proposed for a word n-gram model .
we use srilm toolkit to train a trigram language model with modified kneser-ney smoothing on the target side of training corpus .
we implemented this model using the srilm toolkit with the modified kneser-ney discounting and interpolation options .
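srilm itself is a c++ command-line toolkit; as a rough python stand-in, this sketch fits an interpolated (not modified) kneser-ney trigram model with nltk's lm module on an assumed toy corpus.

```python
# interpolated kneser-ney trigram model via nltk's lm module; this is
# an analogue of the srilm setup, not srilm itself.
from nltk.lm import KneserNeyInterpolated
from nltk.lm.preprocessing import padded_everygram_pipeline

sentences = [["we", "train", "a", "language", "model"],
             ["we", "train", "a", "trigram", "model"]]

train_ngrams, vocab = padded_everygram_pipeline(3, sentences)
lm = KneserNeyInterpolated(3)
lm.fit(train_ngrams, vocab)
print(lm.score("model", ["a", "trigram"]))  # p(model | a trigram)
```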
in this paper we explore the capabilities of a disambiguation algorithm .
the disambiguation algorithm presented in this paper is implemented in semlinker , an entity linking system .
newman et al found that aggregate pairwise pmi scores over the top-n topic words correlated well with human ratings .
newman , et al surveyed a number of similarity metrics and found that mean point-wise mutual information correlated best to human judgements .
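a minimal sketch of the coherence measure described here: mean pairwise pmi over the top-n topic words, with co-occurrence counted at the document level; the corpus and word list are assumptions, and real implementations add smoothing to avoid zero counts.

```python
# mean pairwise pmi over top topic words, document-level co-occurrence;
# toy corpus and topic words are assumptions.
import math
from itertools import combinations

docs = [{"game", "team", "score"}, {"team", "coach", "score"},
        {"market", "stock", "price"}]
top_words = ["team", "score", "game"]

def p(*words):
    return sum(all(w in d for w in words) for d in docs) / len(docs)

def mean_pmi(words):
    pairs = list(combinations(words, 2))
    return sum(math.log(p(a, b) / (p(a) * p(b))) for a, b in pairs) / len(pairs)

print(mean_pmi(top_words))
```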
however , opinion frames were difficult to implement because the recognition of opinion targets was very challenging .
however , opinion frames were difficult to implement because the recognition of opinion targets was very challenging in general text .
( socher et al , 2012 ) uses a recursive neural network in relation extraction .
socher et al train a composition function using a neural network-however their method requires annotated data .
we used europarl and wikipedia as parallel resources and all of the finnish data available from wmt to train five-gram language models with srilm and kenlm .
we used kenlm with srilm to train a 5-gram language model based on all available target language training data .
we measured the overall translation quality with the help of 4-gram bleu , which was computed on tokenized and lowercased data for both systems .
we measure the overall translation quality using 4-gram bleu , which is computed on tokenized and lowercased data for all systems .
the model weights were trained using the minimum error rate training algorithm .
the model parameters are trained using minimum error-rate training .
we propose using reservoir sampling in the rejuvenation step to reduce the storage complexity of the particle filter .
we have proposed reservoir sampling for reducing the storage complexity of a particle filter from linear to constant .
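a minimal sketch of reservoir sampling (algorithm r), which keeps a uniform sample of fixed size k from a stream, so storage is constant rather than linear in the stream length.

```python
# algorithm r: item i (0-indexed) replaces a random reservoir slot with
# probability k/(i+1), giving a uniform sample in constant memory.
import random

def reservoir_sample(stream, k):
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)
        else:
            j = random.randint(0, i)   # inclusive on both ends
            if j < k:
                reservoir[j] = item
    return reservoir

print(reservoir_sample(range(1_000_000), 5))
```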
we then lowercase all data and use all unique headlines in the training data to train a language model with the srilm toolkit .
we train an english language model on the whole training set using the srilm toolkit and train mt models mainly on a 10k sentence pair subset of the acl training set .
we used latent dirichlet allocation to perform the classification .
for this feature , we use the latent dirichlet allocation .
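a minimal lda sketch with scikit-learn; the tiny corpus and the number of topics are assumptions.

```python
# fit lda on bag-of-words counts and read off per-document topic
# proportions; corpus and n_components are toy assumptions.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = ["the team won the game", "the coach praised the team",
        "stocks fell as markets closed", "the market price dropped"]

counts = CountVectorizer(stop_words="english").fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)
print(doc_topics.round(2))
```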
in order to limit the size of the vocabulary of the unmt model , we segmented tokens in the training data into sub-word units via byte pair encoding .
in order to limit the size of the vocabulary of the nmt models , we segmented tokens in the parallel data into sub-word units via byte pair encoding using 30k operations .
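a minimal sketch of learning bpe merges in the style of sennrich et al.: repeatedly merge the most frequent adjacent symbol pair; production systems use tools such as subword-nmt with on the order of 30k merge operations.

```python
# learn byte pair encoding merges: each word starts as characters, and
# the most frequent adjacent pair is merged at every step.
from collections import Counter

def learn_bpe(words, num_merges):
    vocab = Counter(tuple(w) for w in words)
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for symbols, freq in vocab.items():
            for pair in zip(symbols, symbols[1:]):
                pairs[pair] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        merged = {}
        for symbols, freq in vocab.items():
            out, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    out.append(symbols[i] + symbols[i + 1])
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            merged[tuple(out)] = freq
        vocab = merged
    return merges

print(learn_bpe(["low", "lower", "lowest", "low"], 3))
```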
the present paper proposes a method by which to translate outputs of a robust hpsg parser into semantic representations of typed dynamic logic ( tdl ) , a dynamic plural semantics .
the present paper proposed a method by which to translate hpsg-style outputs of a robust parser ( cite-p-13-1-9 ) into dynamic semantic representations of tdl ( cite-p-13-1-1 ) .
we introduce a new dataset for tv show recap extraction .
we present a new dataset , tvrecap , for text recap extraction on tv shows .
in this paper , we describe an improved method for combining partial captions into a final output .
in this paper , we have introduced a new a* search-based msa algorithm for aligning partial captions into a final output stream in real-time .
distributional semantics is based on the theory that semantically similar words occur within the same textual contexts .
word embeddings are usually trained assuming that semantically-similar words occur within the same textual contexts .
understanding of irony often relies on context .
in online discourse , examples of irony are very common .
we use the pre-trained glove vectors to initialize word embeddings .
our word embeddings are initialized with 100-dimensional glove word embeddings .
for this task , we used the svm implementation provided with the python scikit-learn module .
we employed the machine learning tool of scikit-learn 3 , for training the classifier .
lui and cook , 2013 , present a dialect classification approach to identify australian , british , and canadian english .
lui and cook , 2013 , study english dialect identification and present several classification approaches to classify australian , british and canadian english .
then , we trained word embeddings using word2vec .
we use 300 dimension word2vec word embeddings for the experiments .
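a minimal sketch of training 300-dimensional word2vec embeddings with gensim (version >= 4.0, where the parameter is vector_size); the toy corpus and remaining hyperparameters are assumptions.

```python
# train skip-gram (sg=1) word2vec embeddings with gensim on a toy
# corpus; hyperparameters are assumptions for illustration.
from gensim.models import Word2Vec

sentences = [["we", "train", "word", "embeddings"],
             ["word", "embeddings", "capture", "similarity"]]

model = Word2Vec(sentences, vector_size=300, window=5, min_count=1, sg=1)
print(model.wv["word"].shape)            # (300,)
print(model.wv.most_similar("word", topn=2))
```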
we use srilm toolkit to train a trigram language model with modified kneser-ney smoothing on the target side of training corpus .
we build a trigram language model per prompt for the english data using the srilm toolkit and measure the perplexity of translated german answers under that language model .
the contributions combined significantly improve unlabeled dependency accuracy from 90.82 % to 92.13 % .
the two contributions together significantly improve unlabeled dependency accuracy from 90.82 % to 92.13 % .
when features such as part-of-speech tags are used , as in the work of jarvis et al , the method relies on a part-of-speech tagger which might not be available for some languages .
when features such as part-of-speech tags are used , as in the work of jarvis , bestgen , and pepper , the method relies on a part-of-speech tagger that might not be available for some languages .
we use the moses package for this purpose , which uses a phrase-based approach by combining a translation model and a language model to generate paraphrases .
we use the open source moses phrase-based mt system to test the impact of the preprocessing technique on translation quality .
in this paper , we introduce risk mining , which is the task of identifying a set of risks .
in this paper , we introduced the task of risk mining , which produces patterns that are useful in another task , risk alerting .
semantic parsing is the task of mapping a natural language ( nl ) sentence into a complete , formal meaning representation ( mr ) which a computer program can execute to perform some task , like answering database queries or controlling a robot .
semantic parsing is the task of transducing natural language ( nl ) utterances into formal meaning representations ( mrs ) , commonly represented as tree structures .
neural network-based models have achieved impressive improvements over traditional back-off n-gram models .
minimum translation unit models based on recurrent neural networks lead to substantial gains over their classical n-gram back-off models .
table 2 shows a comparison of our extraction performance to that of kozareva .
table 6 shows a performance comparison of our system to that of kozareva et al and that of wang and cohen .
the translation results are evaluated with case insensitive 4-gram bleu .
translation quality is evaluated by case-insensitive bleu-4 metric .
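a minimal sketch of case-insensitive corpus-level bleu-4 with nltk; real evaluations typically use standard tools such as sacrebleu or multi-bleu.perl, and the example sentences are assumptions.

```python
# corpus-level bleu-4 (default 4-gram weights) on lowercased tokens;
# each hypothesis here has a single reference.
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

references = ["The cat sat on the mat .", "He reads the report ."]
hypotheses = ["the cat sat on a mat .", "he read the report ."]

refs = [[r.lower().split()] for r in references]
hyps = [h.lower().split() for h in hypotheses]
print(corpus_bleu(refs, hyps, smoothing_function=SmoothingFunction().method1))
```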
we report the mt performance using the original bleu metric .
we evaluated the system using bleu score on the test set .
in this paper , we propose to use a hierarchical bidirectional long short-term memory ( bi-lstm ) network .
in this paper , we propose an attention-based hierarchical neural network for discourse parsing .
named entity ( ne ) transliteration is the process of transcribing a ne from a source language to a target language based on phonetic similarity between the entities .
named entity ( ne ) transliteration is the process of transcribing a ne from a source language to some target language based on phonetic similarity between the entities .
in general , text classification is a multi-class problem ( more than 2 categories ) .
in general , text classification is a standard tool for managing large document collections .
in addition , we use an english corpus of roughly 227 million words to build a target-side 5-gram language model with srilm in combination with kenlm .
further , we apply a 4-gram language model trained with the srilm toolkit on the target side of the training corpus .
we use the cbow model for the bilingual word embedding learning .
we use pretrained 300-dimensional english word embeddings .
for a subset of the wsj treebank , this evaluation reaches 79 % f-score .
our parser plus stochastic disambiguator achieves 79 % f-score under this evaluation regime .
they learned text embeddings using the neural language model from le and mikolov and used them to train a binary classifier .
le and mikolov extended the word embedding learning model by incorporating paragraph information .
sentiment analysis is the task of automatically identifying the valence or polarity of a piece of text .
sentiment analysis is a much-researched area that deals with identification of positive , negative and neutral opinions in text .
we use the group average agglomerative clustering package within nltk .
we use the nltk stopwords corpus to identify function words .
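a small sketch of using nltk's stopword list as a function-word filter; it requires nltk.download("stopwords") once.

```python
# treat nltk's english stopword list as a function-word inventory and
# filter a token sequence with it.
from nltk.corpus import stopwords

function_words = set(stopwords.words("english"))
tokens = "we use the nltk stopwords corpus to identify function words".split()
print([t for t in tokens if t in function_words])
```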
we make use of moses toolkit for this paradigm .
we obtained a phrase table out of this data using the moses toolkit .
socher et al later introduced the recursive neural network architecture for supervised learning tasks such as syntactic parsing and sentiment analysis .
socher et al introduced a family of recursive neural networks to represent sentence-level semantic composition .
we use publicly-available 300-dimensional embeddings trained on part of the google news dataset using skip-gram with negative sampling .
we use large 300-dim skip gram vectors with bag-of-words contexts and negative sampling , pre-trained on the 100b google news corpus .
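a minimal sketch of loading these pre-trained google news skip-gram vectors with gensim; the binary file must be downloaded separately and the path is an assumption.

```python
# load the 300-dim google news word2vec binary with gensim; the file
# path is an assumption and the download is a separate step.
from gensim.models import KeyedVectors

wv = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)
print(wv["news"].shape)                  # (300,)
print(wv.most_similar("news", topn=3))
```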
in this paper , we focus on enhancing the expressive power of the modeling , which is independent of research on enhancing translation .
in this paper , we propose a non-linear modeling of translation hypotheses based on neural networks .
in particular , we define an efficient tree kernel derived from the partial tree kernel , suitable for encoding structural representation of comments into support vector machines .
more precisely , we define several dependency trees exploitable by the partial tree kernel and compared them with stk over constituency trees .
su et al conduct translation model adaptation with monolingual topic information .
su et al use the topic distribution of in-domain monolingual corpus to adapt the translation model .