sentence1 : string ( lengths 16–446 )
sentence2 : string ( lengths 14–436 )
in this paper , we investigated a compositional and a context-based approach .
according to the second observation , we use an existing context-based approach .
conditional random fields are undirected graphical models that are conditionally trained .
conditional random fields are discriminative structured classification models for sequential tagging and segmentation .
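As an illustration of such conditionally trained sequence models, here is a minimal sketch of linear-chain CRF tagging using the sklearn-crfsuite package; the package choice, toy feature function, and NER-style tags are assumptions for illustration, not from the cited work.

```python
# Minimal CRF sequence-tagging sketch; sklearn-crfsuite is an assumed
# dependency, and the features/tags below are toy placeholders.
import sklearn_crfsuite

def token_features(sent, i):
    word = sent[i]
    return {
        "word.lower": word.lower(),
        "word.istitle": word.istitle(),
        "prev_word": sent[i - 1].lower() if i > 0 else "<BOS>",
        "next_word": sent[i + 1].lower() if i < len(sent) - 1 else "<EOS>",
    }

train_sents = [["John", "lives", "in", "Paris"]]
train_tags = [["B-PER", "O", "O", "B-LOC"]]

X = [[token_features(s, i) for i in range(len(s))] for s in train_sents]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, train_tags)
print(crf.predict(X))
```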
sentiment analysis is a technique to classify documents based on the polarity of opinion expressed by the author of the document ( cite-p-16-1-13 ) .
sentiment analysis is the task of identifying the polarity ( positive , negative or neutral ) of review .
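As a concrete sketch of polarity classification over review documents, here is a minimal scikit-learn pipeline; the TF-IDF plus logistic regression setup and the two-document toy corpus are assumptions for illustration only.

```python
# Minimal polarity-classification sketch; the pipeline and tiny corpus
# are illustrative placeholders, not the cited systems.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = ["great acting and a moving plot", "dull , predictable and far too long"]
labels = ["positive", "negative"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(docs, labels)
print(clf.predict(["a moving story and great direction"]))
```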
we propose a method that can leverage unlabeled data to learn a matching model .
we propose a new method that can effectively leverage unlabeled data for learning matching models .
the iwslt phrase-based baseline system is trained on all available bilingual data , and uses a 4-gram lm with modified kneser-ney smoothing , trained with the srilm toolkit .
the baseline system was trained on all available bilingual data and used a 4-gram lm with modified kneser-ney smoothing , trained with the srilm toolkit .
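A hedged sketch of how such a language model is typically built with SRILM's ngram-count tool, invoked here from Python; the corpus path and output file name are placeholders, and SRILM is assumed to be on PATH.

```python
# Build a 4-gram LM with modified Kneser-Ney smoothing via SRILM;
# "train.txt" and "lm.4gram.arpa" are placeholder file names.
import subprocess

subprocess.run(
    [
        "ngram-count",
        "-order", "4",           # 4-gram model, as in the baseline above
        "-kndiscount",           # modified Kneser-Ney discounting
        "-interpolate",
        "-text", "train.txt",    # tokenized training corpus (placeholder)
        "-lm", "lm.4gram.arpa",  # output ARPA-format language model
    ],
    check=True,
)
```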
detecting stance in tweets is a new task proposed for semeval-2016 ( cite-p-12-1-20 ) .
detecting stance in tweets is a new task proposed for semeval-2016 task 6 , involving predicting stance for a dataset of tweets on the topics of abortion , atheism , climate change , feminism and hillary clinton .
we evaluate the performance of different translation models using both bleu and ter metrics .
we report results for this system alone , as well as for each of our three encoding schemes , using the bleu metric .
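For illustration, a minimal sketch of corpus-level BLEU and TER scoring with the sacrebleu package; the package choice is an assumption (the cited systems may have used other implementations such as mteval or tercom), and the hypothesis/reference strings are toy placeholders.

```python
# Corpus-level BLEU and TER with sacrebleu (assumed dependency).
from sacrebleu.metrics import BLEU, TER

hyps = ["the cat sat on the mat"]
refs = [["the cat is sitting on the mat"]]  # one reference stream

bleu = BLEU().corpus_score(hyps, refs)
ter = TER().corpus_score(hyps, refs)
print(bleu.score, ter.score)
```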
in this paper , we describe the system submitted to the semeval-2010 task 11 on event detection .
in this paper , we propose a modular approach for the semeval-2010 task on chinese event detection .
rooth et al use an em-based clustering technique to induce a clustering based on the co-occurrence frequencies of verbs with their subjects and direct objects .
rooth et al have proposed a soft-clustering method to determine selectional preferences , which models the joint distribution of nouns n and verbs v by conditioning them on a hidden class c .
on the simlex999 word similarity dataset , our model achieves a spearman ’ s ρ score of 0.517 , compared to 0.462 of the state-of-the-art word2vec model .
on simlex999 , our model is superior to six strong baselines , including the state-of-the-art word2vec skip-gram model , by as much as 5.5–16.7 % in spearman ’ s ρ score .
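A minimal sketch of the SimLex-999-style evaluation behind those ρ scores: rank-correlate model similarity scores with human ratings using scipy's spearmanr. The word-pair ratings and model scores below are toy placeholders.

```python
# Spearman rank correlation between gold similarity judgments and
# model-produced similarity scores (toy numbers, for illustration).
from scipy.stats import spearmanr

human_ratings = [9.2, 8.5, 1.3]    # gold similarity judgments per word pair
model_scores = [0.81, 0.74, 0.12]  # e.g. cosine similarities per word pair

rho, p_value = spearmanr(human_ratings, model_scores)
print(f"Spearman's rho = {rho:.3f}")
```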
dependency parsing is a basic technology for processing japanese and has been the subject of much research .
dependency parsing is a fundamental task for language processing which has been investigated for decades .
we represent input words using pre-trained glove wikipedia 6b word embeddings .
we use different pretrained word embeddings such as glove 1 and fasttext 2 as the initial word embeddings .
the hierarchical phrase-based model is capable of capturing rich translation knowledge with the synchronous context-free grammar .
the hierarchical phrase-based translation model , which adopts a synchronous context-free grammar , is considered to be prominent in capturing global reorderings .
to measure the importance of the generated questions , we use lda to identify the important sub-topics from the given body of texts .
to measure the importance of the generated questions , we use lda to identify the important subtopics from the given body of texts .
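A minimal sketch of fitting an LDA topic model with gensim to surface such subtopics; the two-document corpus and hyperparameters are toy placeholders, not the cited setup.

```python
# LDA topic modeling with gensim (assumed dependency); toy corpus.
from gensim import corpora
from gensim.models import LdaModel

texts = [["question", "generation", "topic"],
         ["topic", "model", "latent", "dirichlet"]]

dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]

lda = LdaModel(corpus, num_topics=2, id2word=dictionary, passes=10)
for topic_id, words in lda.print_topics():
    print(topic_id, words)
```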
these classes can be used to improve the performance of end-to-end lbr systems via textual enrichment .
the classes and probabilistic model can be used in textual enrichment to improve the performance of end-to-end lbr systems .
yang and kirchhoff proposed a backoff model for phrase-based smt that translated word forms in the source language by hierarchical morphological phrase level abstractions .
yang and kirchhoff use phrase-based backoff models to translate words that are unknown to the decoder , by morphologically decomposing the unknown source word .
in this work , we provide an evaluation metric that uses the degree of overlap between two whole-sentence semantic structures .
however , there is no widely-used metric to evaluate whole-sentence semantic structures .
the method proposed in this paper automatically extracts equivalent parts from feature structures and collapses them into a single packed feature structure .
this method automatically extracts equivalent parts of feature structures and collapses them into a single packed feature structure .
and we were able to show that increasing the depth up to 29 convolutional layers steadily improves performance .
we were able to show that performance improves with increased depth , using up to 29 convolutional layers .
we used srilm , the sri language modeling toolkit , to train several character models .
we used a 5-gram language model with modified kneser-ney smoothing , built with the srilm toolkit .
meng et al propose a generative cross-lingual mixture model to leverage unlabeled bilingual parallel data .
like lu et al , meng et al , 2012 also proposed their cross-lingual mixture model to leverage an unlabeled parallel dataset .
we propose a replicability analysis framework for a statistically sound analysis of multiple comparisons between algorithms .
we proposed a statistically sound replicability analysis framework for cases where algorithms are compared across multiple datasets .
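A hedged sketch of one ingredient of such an analysis: correcting p-values from multiple per-dataset comparisons with the Holm procedure via statsmodels. This is an assumption for illustration; the cited framework is more elaborate, combining corrections with replicability counts.

```python
# Holm correction over per-dataset significance tests (toy p-values).
from statsmodels.stats.multitest import multipletests

pvals = [0.01, 0.04, 0.03, 0.20]  # one test per dataset comparison
reject, corrected, _, _ = multipletests(pvals, alpha=0.05, method="holm")
print(reject, corrected)
```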
we consider a simple linguistic constraint that a verb should not have multiple subjects / objects as its children .
we consider a simple constraint that a verb should not have multiple subjects/objects as its children .
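A minimal sketch of checking that constraint on a dependency tree represented as (head index, dependent index, relation) triples; the UD-style relation labels are an assumption for illustration.

```python
# Flag trees where any head has more than one subject or object child.
from collections import Counter

def violates_constraint(edges):
    """True if any head has more than one nsubj or more than one obj."""
    counts = Counter((head, rel) for head, _, rel in edges
                     if rel in {"nsubj", "obj"})
    return any(c > 1 for c in counts.values())

edges = [(1, 0, "nsubj"), (1, 2, "obj"), (1, 3, "obj")]  # verb at index 1
print(violates_constraint(edges))  # True: two objects under one verb
```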
recaps of shows help the audience absorb the essence of previous episodes , and grab their attention with upcoming plots .
recaps not only help the audience absorb the essence of previous episodes , but also grab people ’ s attention with upcoming plots .
sentiment analysis is a research area that performs computational analysis of people ’ s feelings or beliefs expressed in texts , such as emotions , opinions , attitudes , appraisals , etc . ( cite-p-12-1-3 ) .
sentiment analysis is the task of identifying the polarity ( positive , negative or neutral ) of review .
for sdp , where the target representations are no longer trees , kuhlmann and jonsson proposed to generalize the mst model to other types of subgraphs .
for semantic dependency parsing , where the target representations are not necessarily trees , kuhlmann and jonsson proposed to generalize the mst model to other types of subgraphs .
critics note that many of the statistical metrics do not generalize at all beyond two words , but pmi , the log ratio of the joint probability to the product of the marginal probabilities , is a prominent exception .
many of the statistical metrics do not generalize at all beyond two words , but pmi , the log ratio of the joint probability to the product of the marginal probabilities , is a prominent exception .
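A worked sketch of PMI exactly as described above, the log ratio of the joint probability of a word pair to the product of its marginals, estimated here from raw counts; the toy counts are placeholders.

```python
# Pointwise mutual information from corpus counts (toy numbers).
import math

def pmi(count_xy, count_x, count_y, total):
    p_xy = count_xy / total
    p_x = count_x / total
    p_y = count_y / total
    return math.log2(p_xy / (p_x * p_y))

# e.g. a bigram seen 30 times in a 1M-token corpus
print(pmi(count_xy=30, count_x=500, count_y=800, total=1_000_000))
```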
much of the current research into probabilistic parsing is founded on probabilistic contextfree grammars .
the use of generative probabilistic grammars for parsing is well understood .
we train a trigram language model with modified kneser-ney smoothing from the training dataset using the srilm toolkit , and use the same language model for all three systems .
for the fluency and grammaticality features , we train 4-gram lms using the development dataset with the sri toolkit .
the feature weights λ_m are tuned with minimum error rate training .
the weight parameter λ is tuned by a minimum error-rate training algorithm .
barzilay and mckeown propose a text-to-text generation technique for synthesizing common information across documents using sentence fusion .
in summarization , barzilay and mckeown present a sentence fusion technique for multidocument summarization which needs to restructure sentences to improve text coherence .
we build upon our previous approach for joint concept disambiguation and clustering .
we build upon our previous markov logic based approach for joint concept disambiguation and clustering .
the target language model was a trigram language model with modified kneser-ney smoothing trained on the english side of the bitext using the srilm tookit .
the system used a tri-gram language model built with the sri toolkit using modified kneser-ney interpolated smoothing .
for example , minimum bayes risk decoding over an n-best list finds a translation that has the lowest expected loss with respect to all the other hypotheses , and it shows improvement over maximum a posteriori decoding .
for example , minimum bayes risk decoding over n-best list tries to find a hypothesis with lowest expected loss with respect to all the other translations , which can be viewed as sentence-level consensus-based decoding .
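A minimal sketch of MBR decoding over an n-best list: pick the hypothesis with the lowest expected loss against the others, weighting by posterior probability. The loss function here is a placeholder (1 minus unigram-set overlap) standing in for 1 minus sentence-level BLEU.

```python
# MBR decoding over an n-best list with a toy overlap-based loss.
def loss(h, h2):
    a, b = set(h.split()), set(h2.split())
    return 1.0 - len(a & b) / max(len(a | b), 1)

def mbr_decode(nbest):
    """nbest: list of (hypothesis, posterior_probability) pairs."""
    def risk(h):
        return sum(p2 * loss(h, h2) for h2, p2 in nbest)
    return min(nbest, key=lambda hp: risk(hp[0]))[0]

nbest = [("the cat sat", 0.5), ("a cat sat", 0.3), ("the dog ran", 0.2)]
print(mbr_decode(nbest))
```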
to test this hypothesis , we use a latent dirichlet allocation model .
we use the term-sentence matrix to train a simple generative topic model based on lda .
we train a 4-gram language model on the xinhua portion of the gigaword corpus using the sri language toolkit with modified kneser-ney smoothing .
we train a kn-smoothed 5-gram language model on the target side of the parallel training data with srilm .
we use the moses toolkit with a phrase-based baseline to extract the qe features for x_l , x_u , and the test set .
we used the moses mt toolkit with default settings and features for both phrase-based and hierarchical systems .
mikolov et al proposed a distributed word embedding model that allowed to convey meaningful information on vectors derived from neural networks .
mikolov et al and mikolov et al introduce efficient methods to directly learn high-quality word embeddings from large amounts of unstructured raw text .
koo et al used a clustering algorithm to produce word clusters on a large amount of unannotated data and represented new features based on the clusters for dependency parsing models .
koo et al used the brown algorithm to learn word clusters from a large amount of unannotated data and defined a set of word cluster-based features for dependency parsing models .
the system works in the following way : it suggests an extension of the current translation prefix .
the system suggests full-sentence extensions of the current translation prefix .
a related approach is the query-by-example work seen in the past in interfaces to database systems ( cite-p-6-1-0 ) .
another related approach is the unification space model of kempen & cite-p-5-1-1 , which unifies through a process of simulated annealing , and also uses a notion of unification strength .
we used the moses decoder , with default settings , to obtain the translations .
for training the translation model and for decoding we used the moses toolkit .
the conll-x shared task was a large-scale evaluation of data-driven dependency parsers , with data from 13 different languages and 19 participating systems .
the conll-x shared task made a wide selection of standardized treebanks for different languages available for the research community and allowed for easy comparison between various statistical methods on a standardized benchmark .
on the remaining tweets , we trained a 10-gram word length model , and a 5-gram language model , using srilm with kneser-ney smoothing .
for the language model , we used the sri language modeling toolkit to train a trigram model with modified kneser-ney smoothing on the 31,149 english sentences .
thanks to the emergence of distributed representations of words , words are transformed to vectors that capture precise semantic word relationships .
with word embeddings , each word is linked to a vector representation in a way that captures semantic relationships .
named entity recognition ( ner ) is the task of identifying and typing phrases that contain the names of persons , organizations , locations , and so on .
named entity recognition ( ner ) is a frequently needed technology in nlp applications .
we compare our approach to the lcseg algorithm and use sentences as segmentation unit .
we compare our approach to the lcseg algorithm which uses lexical chains to estimate topic boundaries .
we present indonet , a lexical resource created by merging wordnets of 18 different indian languages .
we present indonet , a multilingual lexical knowledge base for indian languages .
kennedy and inkpen explore negation shifting by incorporating negation bigrams as additional features into machine learning approaches .
kennedy and inkpen use syntactic analysis to capture language aspects like negation and contextual valence shifters .
in this and our other n-gram models , we used kneser-ney smoothing .
we use a pbsmt model where the language model is a 5-gram lm with modified kneser-ney smoothing .
arabic is a morphologically rich language where one lemma can have hundreds of surface forms ; this complicates the task of sa .
moreover , arabic is a morphologically complex language .
language models were built using the sri language modeling toolkit with modified kneser-ney smoothing .
the model was built using the srilm toolkit with backoff and good-turing smoothing .
in natural language , a word often assumes different meanings , and the task of determining the correct meaning , or sense , of a word in different contexts is known as word sense disambiguation ( wsd ) .
word sense disambiguation ( wsd ) is the task of automatically determining the correct sense for a target word given the context in which it occurs .
we initialize the embedding layer weights with glove vectors .
we initialize the word embedding matrix with pre-trained glove embeddings .
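A minimal sketch of loading such a GloVe embedding matrix from the standard text release format (one word followed by its vector per line); the file name, 300-d size, and random out-of-vocabulary initialization are assumptions for illustration.

```python
# Initialize an embedding matrix from a GloVe text file (placeholder path).
import numpy as np

def load_glove_matrix(path, vocab, dim=300):
    matrix = np.random.uniform(-0.05, 0.05, (len(vocab), dim))  # OOV init
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            word, vec = parts[0], parts[1:]
            if word in vocab:
                matrix[vocab[word]] = np.asarray(vec, dtype=np.float32)
    return matrix

vocab = {"the": 0, "cat": 1}
embeddings = load_glove_matrix("glove.6B.300d.txt", vocab)
```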
we use the stanford part-of-speech tagger and chunker to identify noun and verb phrases in the sentences .
we use the stanford part of speech tagger to annotate each word with its pos tag .
the purpose of our work is to improve the performance of statistical machine translation systems .
ultimately , the purpose of this work is to improve the quality of machine translation systems .
for example : semeval-2014 ; semantic evaluation exercises .
for example : semeval-2014 ; semantic evaluation exercises .
we apply the rules to each sentence with its dependency tree structure acquired from the stanford parser .
we use the stanford dependency parser to extract nouns and their grammatical roles .
we used the target side of the parallel corpus and the srilm toolkit to train a 5-gram language model .
firstly , we built a forward 5-gram language model using the srilm toolkit with modified kneser-ney smoothing .
as a baseline system for our experiments we use the syntax-based component of the moses toolkit .
the baseline system for our experiments is the syntax-based component of the moses open-source toolkit of koehn et al and hoang et al .
second , these approaches are trained on the syntactic trees of the target language , which enables them to directly link the quality of newly induced categories .
finally , the mappings can be further constrained by typological properties of the target language that specify likely tag sequences .
we use a general , statistical framework in which arbitrary features extracted from a phrase pair can be incorporated to model the translation in a unified way .
first , we present a general , statistical framework for modeling phrase translations via mrfs , where different features can be incorporated in a unified manner .
in particular , the vector-space word representations learned by a neural network have been shown to successfully improve various nlp tasks .
distributed word representations induced through deep neural networks have been shown to be useful in several natural language processing applications .
our baseline is a phrase-based mt system trained using the moses toolkit .
moses is used as a baseline phrase-based smt system .
in this paper , we present discrex , the first approach for applying distant supervision to cross-sentence relation extraction .
in this paper , we propose the first approach for applying distant supervision to cross-sentence relation extraction .
we demonstrate superior performance over rule-based methods , as well as a significant reduction in the number of queries that yield null search .
we show that the proposed method offers superior accuracy over rule-based methods , as well as significant improvement in search recall .
argumentation features such as premise and support relation appear to be better predictors of a speaker ’ s influence rank compared to basic content .
our results show that although content alone is predictive of a speaker ’ s influence rank , persuasive argumentation also affects such indices .
we trained a tri-gram hindi word language model with the srilm tool .
we trained two 5-gram language models on the entire target side of the parallel data , with srilm .
we propose a fast and scalable method for semi-supervised learning of sequence models , based on anchor .
we proposed an efficient semi-supervised sequence labeling method using a generative log-linear model .
here we suggest borrowing the mean reciprocal rank metric from the information retrieval domain .
therefore , we use the mean reciprocal rank , a standard metric used for evaluating ranked retrieval systems .
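A worked sketch of mean reciprocal rank as used above: the average over queries of 1/rank of the first relevant item, with 0 when nothing relevant is retrieved.

```python
# Mean reciprocal rank over ranked result lists (toy example).
def mean_reciprocal_rank(ranked_lists, relevant):
    total = 0.0
    for results, gold in zip(ranked_lists, relevant):
        rr = 0.0
        for rank, item in enumerate(results, start=1):
            if item in gold:
                rr = 1.0 / rank
                break
        total += rr
    return total / len(ranked_lists)

print(mean_reciprocal_rank([["a", "b", "c"], ["x", "y"]],
                           [{"b"}, {"x"}]))  # (1/2 + 1/1) / 2 = 0.75
```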
deep neural networks have shown great success in many nlp tasks such as machine translation , reading comprehension , sentiment classification , etc .
recently , attentive neural networks have shown success in several nlp tasks such as machine translation , image captioning , speech recognition and document classification .
relation extraction is a fundamental task in information extraction .
relation extraction ( re ) is the task of determining semantic relations between entities mentioned in text .
it has been shown that ebm practitioners often do not pursue evidence-based answers to clinical questions because of the time required .
consequently , practitioners often fail to provide evidence-based answers to clinical queries , particularly at point of care .
djuric et al propose an approach that learns low-dimensional , distributed representations of user comments in order to detect expressions of hate speech .
djuric et al , 2015 also build a binary classifier to classify between hate speech and clean user comments on a website .
in the first phase , the sentence-plan-generator ( spg ) generates a potentially large sample of possible sentence plans .
in the first phase , the sentence-plan-generator ( spg ) generates a potentially large sample of possible sentence plans for a given text-plan input .
with the more fine-grained feedback increasingly available on social media platforms ( e . g . laughter , love , anger , tears ) , it may be possible to distinguish different types of popularity .
with the more fine-grained feedback increasingly available on social media platforms ( e.g . laughter , love , anger , tears ) , it may be possible to distinguish different types of popularity as well as levels , e.g . shared sentiment vs. humor .
in this paper we use paragraph vector , proposed by le and mikolov , to build unsupervised language models .
we used a paragraph vector model to obtain these phrase embeddings .
we also compare word n-grams using the jaccard coefficient as previously done by lyon et al , and the containment measure .
after compiling two sets of n-grams , we compared them using the jaccard coefficient , following lyon et al , as well as using the containment measure .
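A minimal sketch of the two overlap measures above applied to word n-gram sets: Jaccard is symmetric, while containment normalizes by the size of only one document's n-gram set.

```python
# Jaccard coefficient and containment over word n-gram sets.
def ngrams(tokens, n):
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def containment(a, b):
    """Fraction of a's n-grams also found in b."""
    return len(a & b) / len(a) if a else 0.0

a = ngrams("the cat sat on the mat".split(), 2)
b = ngrams("the cat sat on a mat".split(), 2)
print(jaccard(a, b), containment(a, b))
```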
goldwater and griffiths propose a bayesian approach for learning the hmm structure .
goldwater and griffiths employ a bayesian approach to pos tagging and use sparse dirichlet priors to minimize model size .
we use the moses toolkit to create a statistical phrase-based machine translation model built on the best pre-processed data , as described above .
we use the moses mt framework to build a standard statistical phrase-based mt model using our old-domain training data .
we investigate an effective adaptation of phrase-based mt to map a twitter phrase to a medical concept .
we have introduced our approach that adapts a phrase-based mt technique to normalise medical terms in twitter messages .
word segmentation is the first step prior to word alignment for building statistical machine translation ( smt ) systems on language pairs without explicit word boundaries such as chinese-english .
word segmentation is a fundamental task for chinese language processing .
for testing purposes , we used the wall street journal part of the penn treebank corpus .
we ran experiments on the wall street journal portion of the english penn treebank data set , using a standard data split .
in both setups , the sentence-external features do not improve over a baseline that captures basic morphosyntactic properties of the constituents .
surprisingly , in both settings , the sentence-external features perform poorly compared to the sentence-internal ones , and do not improve over a baseline model capturing the syntactic functions of the constituents .
the phrase structure trees produced by the parser are further processed with the stanford conversion tool to create dependency graphs .
sentences are passed through the stanford dependency parser to identify the dependency relations .
many words have multiple meanings , and the process of identifying the correct meaning , or sense of a word in context , is known as word sense disambiguation ( wsd ) .
word sense disambiguation ( wsd ) is the task to identify the intended sense of a word in a computational manner based on the context in which it appears ( cite-p-13-3-4 ) .
the experimental results showed that our method is 16.94 and 450 times faster than the traditional polynomial kernel in terms of training and testing , respectively .
the experimental results showed that our method is close to the performance of the polynomial kernel svm and better than the linear kernel .
the reordering rules are based on parse output produced by the stanford parser .
the base pcfg uses simplified categories of the stanford pcfg parser .
in order to cluster lexical items , we use the algorithm proposed by brown et al , as implemented in the srilm toolkit .
to determine the word classes , one can use the algorithm of brown et al , which finds the classes that give high mutual information between the classes of adjacent words .
we compared sn models with two different pre-trained word embeddings , using either word2vec or fasttext .
since our dataset is not so large , we make use of pre-trained word embeddings , which are trained on a much larger corpus with word2vec toolkit .
then we use the stanford parser to determine sentence boundaries .
we use the stanford parser for obtaining all syntactic information .
this transformation at most cubes the grammar size , but we show empirically that the size increase is only quadratic .
this transformation at most doubles the grammar ’ s rank and cubes its size , but we show that in practice the size increase is only quadratic .
in our current study , we propose a method for extracting search subtasks from a given collection of queries .
in this work , we focus on extracting subtasks from a given collection of on-task search queries .
we use the word2vec tool with the skip-gram learning scheme .
the word embeddings are pre-trained by skip-gram .
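A minimal sketch of training skip-gram embeddings with gensim's Word2Vec (sg=1 selects skip-gram); the two-sentence corpus and hyperparameters are toy placeholders.

```python
# Skip-gram word embeddings with gensim (assumed dependency).
from gensim.models import Word2Vec

sentences = [["the", "cat", "sat", "on", "the", "mat"],
             ["the", "dog", "ran", "in", "the", "park"]]

model = Word2Vec(sentences, sg=1, vector_size=100, window=5, min_count=1)
print(model.wv["cat"][:5])
```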
we perform chinese word segmentation , pos tagging , and dependency parsing for the chinese sentences with stanford corenlp .
we use stanford corenlp to dependency parse sentences and extract the subjects and objects of verbs .
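A hedged sketch of the same extraction using Stanza, the Stanford NLP group's Python library, rather than the Java CoreNLP toolkit itself; the library swap and example sentence are assumptions for illustration.

```python
# Dependency-parse a sentence and pull out subjects/objects of verbs.
# Requires stanza.download("en") to have been run once beforehand.
import stanza

nlp = stanza.Pipeline("en", processors="tokenize,pos,lemma,depparse")
doc = nlp("The committee approved the proposal.")

for sent in doc.sentences:
    for word in sent.words:
        if word.deprel in ("nsubj", "obj"):
            head = sent.words[word.head - 1].text  # heads are 1-indexed
            print(f"{word.deprel}: {word.text} -> head: {head}")
```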
in recent years , neural machine translation based on encoder-decoder models has become the mainstream approach for machine translation .
neural machine translation has recently become the dominant approach to machine translation .
the grammar consists of head-dependent relations between words and can be learned automatically from a raw corpus using the reestimation algorithm which is also introduced in this paper .
this grammar consists of a lexicon which pairs words or phrases with regular expression functions .
the target-side language models were estimated using the srilm toolkit .
the probabilistic language model is constructed on google web 1t 5-gram corpus by using the srilm toolkit .
we converted the pcfg trees into dependency trees using the collins head rules .
to convert phrase trees to dependency structures , we followed the commonly used scheme .
coreference resolution is the task of clustering a sequence of textual entity mentions into a set of maximal non-overlapping clusters , such that mentions in a cluster refer to the same discourse entity .
coreference resolution is the process of linking together multiple referring expressions of a given entity in the world .