sentence1 ( stringlengths 16-446 )
sentence2 ( stringlengths 14-436 )
as word vectors the authors use word2vec embeddings trained with the skip-gram model .
we use the skip-gram model to train the embeddings on review texts for k-means clustering .
as a result of this , dependency annotation for hindi is based on paninian framework for building the treebank .
dependency annotation for hindi is based on paninian framework for building the treebank .
semantic role labeling ( srl ) is defined as the task to recognize arguments for a given predicate and assign semantic role labels to them .
semantic role labeling ( srl ) is a kind of shallow sentence-level semantic analysis and is becoming a hot task in natural language processing .
for our baseline , we used a small parallel corpus of 30k english-spanish sentences from the europarl corpus .
for our spanish experiments , we randomly sample 2,000 sentence pairs from the spanish-english europarl v5 parallel corpus .
to train our model we use markov chain monte carlo sampling .
we perform inference using point-wise gibbs sampling .
the 'grammar ' consists of a lexicon where each lexical item is associated with a finite number of structures for which that item is the 'head ' .
this grammar consists of a lexicon which pairs words or phrases with regular expression functions .
for english we used part-of-speech tags obtained with treetagger .
we used the treetagger for lemmatisation as well as part-of-speech tagging .
for the n-gram lm , we use the srilm toolkit to train a 4-gram lm on the xinhua portion of the gigaword corpus .
a 4-gram language model is trained on the xinhua portion of the gigaword corpus with the srilm toolkit .
for the language model , we used srilm with modified kneser-ney smoothing .
for language models , we use the srilm linear interpolation feature .
we use mt02 as the development set for minimum error rate training .
we used minimum error rate training to tune the feature weights for maximum bleu on the development set .
we have designed the features used by our readability metric based on the cognitive aspects of our target users .
we also plan on refining our cognitively motivated features for measuring the difficulty of a text for our users .
we used a 5-gram language model trained on 126 million words of the xinhua section of the english gigaword corpus , estimated with srilm .
we further used a 5-gram language model trained using the srilm toolkit with modified kneser-ney smoothing .
finkel et al suggested crf to train the model for parsing english .
finkel et al used this approach to speed up training of a log-linear model for parsing .
this paper goes in the same direction : we are interested in exploiting the output structure at the thread level to make more consistent global assignments .
in particular , we focus on exploiting the output structure at the thread level in order to make more consistent global decisions .
this modification has been shown to improve the performance of both lesk and second-order vectors on the tasks of word sense disambiguation and semantic similarity .
it has been applied to both word sense disambiguation and semantic similarity , and generally found to improve on original lesk .
that yields an interpretation that is conceptually simple , motivated by the preservation of monotonicity , and is computationally no harder than the original rounds-kasper logic .
this is an interpretation of negation that is intuitively appealing , formally simple , and computationally no harder than the original rounds-kasper logic .
tree kernels are very effective in capturing the cross-lingual structural similarity .
this is motivated by the decent effectiveness of tree kernels in expressing the similarity between tree structures .
the n-gram model was a 3-gram model with kneser-ney smoothing trained using kenlm with its default settings .
the language models used are 5-gram kenlm models with singleton tri-gram pruning and trained with modified interpolated kneser-ney smoothing .
we propose several strategies to acquire pseudo cfgs only from dependency annotations .
to overcome this limitation , we propose several strategies to acquire pseudo grammars only from dependency annotations .
we perform chinese word segmentation , pos tagging , and dependency parsing for the chinese sentences with stanford corenlp .
to obtain our base representation we parse the sentences using the stanford corenlp suite which can provide both phrase-structure and sentiment annotation .
the model weights are automatically tuned using minimum error rate training .
the decoding weights are optimized with minimum error rate training to maximize bleu scores .
recently , deep learning structures such as cnns and lstms have been used to extract high-level features .
more recently , deep learning was used to extract higher-level multimodal features .
word sense disambiguation ( wsd ) is a difficult natural language processing task which requires that for every content word ( noun , adjective , verb or adverb ) the appropriate meaning is automatically selected from the available sense inventory .
word sense disambiguation ( wsd ) is a key enabling technology that automatically chooses the intended sense of a word in context .
for the character-based model we use publicly available pre-trained character embeddings derived from glove vectors trained on common crawl .
we use the pre-trained glove 50-dimensional word embeddings to represent words found in the glove dataset .
for all classifiers , we used the scikit-learn implementation .
we trained the five classifiers using the svm implementation in scikit-learn .
this approach relies on word embeddings for the computation of semantic relatedness with word2vec .
the distributed word representation by word2vec factors word distance and captures semantic similarities through vector arithmetic .
elfardy and diab proposed a supervised method for identifying whether a given sentence is prevalently msa or egyptian using the arabic online commentary dataset .
they later proposed a supervised approach for identifying whether a given sentence is prevalently msa or egyptian using the arabic online commentary dataset .
we use the sri language modeling toolkit for language modeling .
we used the sri language modeling toolkit with kneser-ney smoothing .
we use 300 dimensional glove embeddings trained on the common crawl 840b tokens dataset , which remain fixed during training .
for this , we utilize the publicly available glove word embeddings , specifically ones trained on the common crawl dataset .
dyer et al propose a stack-lstm for transition-based parsing .
dyer et al introduced the stack lstm for transition-based parsing .
yates and etzioni proposed a simple probabilistic method for identifying open ie triples which have a similar meaning .
transitivity constraints were also enforced by yates and etzioni , who proposed a clustering algorithm for learning undirected synonymy relations .
all the feature weights and the weight for each probability factor are tuned on the development set with minimum-error-rate training .
the log-linear parameter weights are tuned with mert on a development set to produce the baseline system .
word spacing is one of the important tasks in korean information processing .
automatic word spacing is one of the important tasks in korean language processing and information retrieval .
semantic parsing is the task of translating natural language utterances into a machine-interpretable meaning representation ( mr ) .
semantic parsing is the task of mapping natural language to machine interpretable meaning representations .
ambiguity is a central issue in natural language processing .
ambiguity is the task of building up multiple alternative linguistic structures for a single input ( cite-p-13-1-8 ) .
for shorter hypotheses , we introduced a bounded length reward mechanism which allows a modified version of our beam search algorithm to remain optimal .
to counter neural generation 's tendency for shorter hypotheses , we also introduce a bounded length reward mechanism which allows a modified version of our beam search algorithm to remain optimal .
correa and sureka , finally , found that compared to closed questions , deleted questions had a slightly higher number of characters in the question body .
the current results thus are mostly in line with the findings of correa and sureka who found that deleted questions have a higher number of characters in the question body than closed questions .
a 4-gram language model was trained on the monolingual data by the srilm toolkit .
trigram language models were estimated using the sri language modeling toolkit with modified kneser-ney smoothing .
charniak et al ( 1996 ) perform a comparison of single tagging to multi-tagging .
charniak et al investigated multi-pos tagging in the context of pcfg parsing .
we trained a trigram language model on the chinese side , with the srilm toolkit , using the modified kneser-ney smoothing option .
we trained two 5-gram language models on the entire target side of the parallel data , with srilm .
translation quality is measured by case-insensitive bleu on newstest13 using one reference translation .
translation performance is measured using the automatic bleu metric , on one reference translation .
pattern clusters can be used to recognize new examples of the same relationships .
pattern clusters can be used to extract instances of the corresponding relationships .
we trained kneser-ney discounted 5-gram language models on each available corpus using the srilm toolkit .
our trigram word language model was trained on the target side of the training corpus using the srilm toolkit with modified kneser-ney smoothing .
davidov et al utilize hashtags and smileys to build a large-scale annotated tweet dataset automatically .
davidov et al propose utilizing twitter hashtag and smileys to learn enhanced sentiment types .
bendersky et al also used top search results to generate structured annotation of queries .
bendersky et al proposed a joint framework for annotating queries with pos tags and phrase chunks .
we implement the weight tuning component according to the minimum error rate training method .
we perform the mert training to tune the optimal feature weights on the development set .
twitter is a widely used microblogging environment which serves as a medium to share opinions on various events and products .
twitter is a microblogging service that has 313 million monthly active users .
we propose a new model to drop the independence assumption , by instead modelling correlations between translation decisions , which we use to induce translation .
we propose a new model to address this imbalance , based on a word-based markov model of translation which generates target translations left-to-right .
this study suggests that while sophisticated coherence models can potentially contribute to disentanglement , they would benefit greatly from improved low-level resources for internet chat .
the results of this study suggest that topic models can help with disentanglement , but that it is hard to learn useful topics for irc chat .
although coreference resolution is a subproblem of natural language understanding , coreference resolution evaluation metrics have predominately been discussed in terms of abstract entities and hypothetical system errors .
coreference resolution is the task of determining when two textual mentions name the same individual .
this paper reports on our systems in the semeval-2 japanese word sense disambiguation ( wsd ) task .
the paper reports the participating systems in semeval-2 japanese wsd task .
cao and li ( 2002 ) proposed a method of compositional translation estimation for compounds .
cao and li , 2002 , also proposed a method of compositional translation estimation for compounds .
for the translation and target language model , an improvement of 2.5 bleu on the development data and 1.5 bleu on the test data was observed .
using a continuous space model for the translation model and the target language model , an improvement of 1.5 bleu on the test data is observed .
named entity recognition ( ner ) is the task of identifying named entities in free text—typically personal names , organizations , gene-protein entities , and so on .
named entity recognition ( ner ) is the process by which named entities are identified and classified in an open-domain text .
hu et al enabled a neural network to learn simultaneously from labeled instances as well as logic rules .
hu et al , 2016 , explored a distillation framework that transfers structured knowledge coded as logic rules into the weights of neural networks .
twitter is a very popular microblogging site .
twitter is a popular microblogging service , which , among other things , is used for knowledge sharing among friends and peers .
we choose modified kneser-ney as the smoothing algorithm when learning the n-gram model .
as a countbased baseline , we use modified kneser-ney as implemented in kenlm .
when applied to the thread reconstruction task , our model achieves promising results .
we also show a novel application of our model in forum thread reconstruction .
we use srilm for training the 5-gram language model with interpolated modified kneser-ney discounting .
for language model , we used sri language modeling toolkit to train a 4-gram model with modified kneser-ney smoothing .
finally , we conduct paired bootstrap sampling to test the significance in bleu scores differences .
following , we use the bootstrap resampling test to do significance testing .
we report bleu and ter evaluation scores .
we report meteor and sentence level bleu-4 scores .
from the review fragments , we develop a multi-criteria optimization approach for answer generation by simultaneously taking into account review .
we have further formulated the answer generation from retrieved review sentences as a multi-criteria optimization problem .
the decoding weights were optimized with minimum error rate training .
the weights of the different feature functions were optimised by means of minimum error rate training .
in this paper , we improve domain-specific word alignment through statistical alignment .
this paper proposes an alignment adaptation approach to improve domain-specific ( in-domain ) word alignment .
zeng et al exploit a convolutional neural network to extract lexical and sentence level features for relation classification .
zeng et al introduce a convolutional neural network to extract relational facts with automatically learning features from text .
incorporating more features into the learning-based system yields a minor improvement over the rule-based system .
particularly , the learning-based system enriched with more features does not yield much improvement over the rule-based system .
the latter approach represents word contexts as vectors in some space and uses similarity measures and automatic clustering in that space .
the latter represents word contexts as vectors in some space and uses similarity measures and automatic clustering in that space .
in this paper , we describe a method for assessing student answers , modeled as a paraphrase identification problem .
this can be seen as a paraphrase identification problem between student answers and reference answers .
to evaluate the quality of the generated summaries , we compare our dtm-based comparative summarization methods with five other typical methods under rouge metrics .
similar to the evaluation for traditional summarization tasks , we use the rouge metrics to automatically evaluate the quality of produced summaries given the goldstandard reference news .
word alignment is the task of identifying word correspondences between parallel sentence pairs .
word alignment , which can be defined as an object for indicating the corresponding words in a parallel text , was first introduced as an intermediate result of statistical translation models ( cite-p-13-1-2 ) .
we ran mt experiments using the moses phrase-based translation system .
for all experiments , we used the moses smt system .
word embeddings are initialized with 300d glove vectors and are not fine-tuned during training .
the word-embeddings were initialized using the glove 300-dimensions pre-trained embeddings and were kept fixed during training .
the task of ne extraction of the irex workshop is to recognize eight ne categories in table 1 .
the task of ne extraction of the irex workshop is to recognize eight ne types in table 1 .
bleu is a precision based measure and uses n-gram match counts up to order n to determine the quality of a given translation .
bleu is one of the most popular metrics for automatic evaluation of machine translation , where the score is calculated based on the modified n-gram precision .
this approach attempts to improve translation quality by optimizing an automatic translation evaluation metric , such as the bleu score .
the first and most effective method is to simply use an objective measure of translation quality , such as bleu .
also related , riedel et al try to generalize over open ie extractions by combining knowledge from freebase and globally predicting which unobserved propositions are true .
yao et al and riedel et al present a similar task of predicting novel relations between freebase entities by appealing to a large collection of open ie extractions .
we follow earlier work in using number of edges pushed as the primary , hardware-invariant metric for evaluating performance of our algorithms .
we follow pauls and klein in using the number of items pushed as a machine- and implementation-independent measure of speed .
finally , we experiment with adding a 5-gram modified kneser-ney language model during inference using kenlm .
finally , we used kenlm to create a trigram language model with kneser-ney smoothing on that data .
in this paper , in order to overcome the data sparsity problem , we propose the use of word embeddings .
in this work , we propose to use word embeddings to fight against the data sparsity problem of word pairs .
we use the europarl english-french parallel corpus plus around 1m segments of symantec translation memory .
we build a french tagger based on english-french data from the europarl corpus .
the word embeddings are initialized with the publicly available word vectors trained through glove and updated through back propagation .
pretrained 100-dimensional word vectors in the embedding layer are obtained using the glove method trained on a corpus of pubmed open source articles , and are updated during the training process .
word alignment is the task of identifying word correspondences between parallel sentence pairs .
word alignment is a key component of most endto-end statistical machine translation systems .
a 4-gram language model is trained on the xinhua portion of the gigaword corpus with the srilm toolkit .
furthermore , we train a 5-gram language model using the sri language toolkit .
in natural language , a word often assumes different meanings , and the task of determining the correct meaning , or sense , of a word in different contexts is known as word sense disambiguation ( wsd ) .
word sense disambiguation ( wsd ) is a key enabling-technology that automatically chooses the intended sense of a word in context .
ravuri and stolcke proposed an rnn architecture for intent determination .
ravuri and stolcke first proposed an rnn architecture for intent determination .
crfs are undirected graphical models that use markov network distribution to learn the conditional probability .
they are undirected graphical models trained to maximize a conditional probability .
previous approaches mainly focus on the use of knowledge resources like lexical semantic databases or thesauri as background information in order to resolve possible semantic relations .
one focuses on the use of knowledge resources like wordnet or thesauri as background information in order to quantify semantic relations between words .
wordnet domains was created by extending the princeton wordnet with domains labels .
the wordnet domains resource assigns domain labels to synsets in wordnet .
word alignment is the task of identifying word correspondences between parallel sentence pairs .
word alignment is the task of identifying corresponding words in sentence pairs .
kalchbrenner et al ( 2014 ) propose a cnn framework with multiple convolution layers , with latent , dense and low-dimensional word embeddings as inputs .
to capture the relation between words , kalchbrenner et al propose a novel cnn model with a dynamic k-max pooling .
in this paper , we aim to investigate a more challenging task of cross-language review rating prediction , which makes use of only rated reviews in a source language ( e . g . english ) .
in this paper , we study a new task of cross-language review rating prediction and propose a new co-regression algorithm to address this task .
we present a robust method for mapping dependency trees to logical forms that represent underlying predicate-argument structures .
we address this by introducing a robust system based on the lambda calculus for deriving neo-davidsonian logical forms from dependency trees .
extensive experiments show that our approach can effectively utilize the syntactic knowledge from another treebank .
moreover , an indirect comparison indicates that our approach also outperforms previous work based on treebank conversion .
abstract meaning representation is a popular framework for annotating whole sentence meaning .
the abstract meaning representation is a readable and compact framework for broad-coverage semantic annotation of english sentences .
tang et al and zhuang et al formalized the problem of social relationship learning into a semi-supervised framework , and proposed partially-labeled pairwise factor graph model for learning to infer the type of social ties .
tang et al and zhuang et al formalized the problem of social relationship learning as a semi-supervised framework , and proposed partially-labeled pairwise factor graph model for inferring the types of social ties .
working instead with the linear structure of raw text , collobert et al trained a neural language model to induce word vectors in the hidden layer of their network .
collobert et al trained a neural net language model on a snapshot of the english wikipedia and published the feature vectors induced for each word in the first hidden layer of the network .
peters et al propose a deep neural model that generates contextual word embeddings which are able to model both language and semantics of word use .
recently , peters et al introduced elmo , a system for deep contextualized word representation , and showed how it can be used in existing task-specific deep neural networks .
relation extraction ( re ) is the task of assigning a semantic relationship between a pair of arguments .
relation extraction ( re ) is the task of extracting instances of semantic relations between entities in unstructured data such as natural language text .
as a baseline model we develop a phrase-based smt model using moses .
for our baseline we use the moses software to train a phrase based machine translation model .
for classification , our solution uses a match-lstm to perform word-by-word matching of the hypothesis with the premise .
instead , we use an lstm to perform word-by-word matching of the hypothesis with the premise .