lastly , wu et al implemented a special dependency parser for opinion mining that used phrases as the primitive building blocks .
for opinion mining , wu et al also utilized a dependency structure based on mwus , although they restricted mwus with predefined relations .
we used the srilm toolkit to train a 4-gram language model on the xinhua portion of the gigaword corpus , which contains 238m english words .
we used a 5-gram language model with modified kneser-ney smoothing implemented using the srilm toolkit .
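the two sentences above describe a standard srilm recipe ; as a minimal sketch , the same kind of model could be trained by calling srilm's ngram-count from python . the corpus and output file names are placeholders , while the flags are standard srilm options for an interpolated modified kneser-ney model .

```python
import subprocess

# minimal sketch: file names are placeholders; the flags are standard
# srilm ngram-count options for an interpolated modified kneser-ney lm.
subprocess.run([
    "ngram-count",
    "-order", "5",                  # 5-gram model
    "-kndiscount", "-interpolate",  # modified kneser-ney with interpolation
    "-text", "train.tok.txt",       # tokenized training corpus (assumed name)
    "-lm", "train.5gram.arpa",      # output language model in arpa format
], check=True)
```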
twitter is a famous social media platform capable of spreading breaking news , thus most rumour-related research uses twitter feeds as a basis for research .
twitter is a popular microblogging service which provides real-time information on events happening across the world .
we apply the rules to each sentence with its dependency tree structure acquired from the stanford parser .
the dts are based on collapsed dependencies from the stanford parser in the holing operation .
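the papers above used the java stanford parser ; as an illustrative modern equivalent ( not the original tool ) , stanza , the stanford nlp group's python toolkit , produces comparable dependency trees :

```python
import stanza

# one-time model download: stanza.download("en")
nlp = stanza.Pipeline(lang="en", processors="tokenize,pos,lemma,depparse")
doc = nlp("we apply the rules to each sentence .")
for sent in doc.sentences:
    for word in sent.words:
        # head == 0 marks the root of the dependency tree
        print(word.id, word.text, word.head, word.deprel)
```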
we used the srilm toolkit to create 5-gram language models with interpolated modified kneser-ney discounting .
we used a 4-gram language model which was trained on the xinhua section of the english gigaword corpus using the srilm toolkit with modified kneser-ney smoothing .
in addition , we can use neural word embeddings pre-trained on a large-scale corpus for neural network initialization .
we use distributed word vectors trained on the wikipedia corpus using the word2vec algorithm .
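a minimal sketch of such training with gensim's word2vec implementation ; the corpus file name and hyperparameters are illustrative assumptions :

```python
from gensim.models import Word2Vec
from gensim.models.word2vec import LineSentence

# assumes wiki.tok.txt holds one tokenized sentence per line (placeholder).
model = Word2Vec(LineSentence("wiki.tok.txt"),
                 vector_size=300,  # embedding dimensionality
                 sg=1,             # 1 = skip-gram, 0 = cbow
                 window=5, min_count=5, workers=4)
model.wv.save("wiki.word2vec.kv")
```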
this paper proposed hybrid models of lexical semantics that combine distributional and knowledge-based approaches .
this paper proposes hybrid models of lexical semantics that combine the advantages of these two approaches .
in this paper , we propose the use of autoencoders based on long short term memory neural networks for capturing long distance relationships between phonemes in a word .
finally , based on recent results in text classification , we also experiment with a neural network approach which uses a long-short term memory network .
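a minimal pytorch sketch of such an lstm sequence autoencoder over phoneme ids ; all names and dimensions are illustrative , not the authors' exact architecture :

```python
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    """encode a phoneme sequence into a fixed vector, then reconstruct it."""

    def __init__(self, vocab_size, emb_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.decoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, x):                      # x: (batch, seq) phoneme ids
        emb = self.embed(x)
        _, state = self.encoder(emb)           # state summarizes the word
        dec_out, _ = self.decoder(emb, state)  # teacher-forced reconstruction
        return self.out(dec_out)               # per-position phoneme logits
```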
we use the glove word vector representations of dimension 300 .
as the word embeddings , we used the 300-dimensional vectors pre-trained by glove .
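a minimal sketch of loading such vectors from the public text-format glove release ( the file name is from that release ; the parsing is straightforward ) :

```python
import numpy as np

def load_glove(path):
    """parse the plain-text glove format: a word followed by its values."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

glove = load_glove("glove.6B.300d.txt")  # 300d file from the public release
print(glove["the"].shape)  # (300,)
```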
we use the srilm toolkit to train a trigram language model with modified kneser-ney smoothing on the target side of the training corpus .
we used trigram language models with interpolated kneser-ney discounting trained using the sri language modeling toolkit .
mrlsa provides an elegant approach to combining multiple relations between words .
in comparison , mrlsa models multiple lexical relations holistically .
a 4-gram language model is trained on the monolingual data with the srilm toolkit .
the language model is implemented as an n-gram model using the srilm toolkit with kneser-ney smoothing .
when the heuristic is non-admissible , the parsing speed improves even further , at the risk of returning suboptimal solutions .
by using a non-admissible heuristic , the speed improves by orders of magnitude , at the expense of parsing quality .
conditional random fields are conditional models in the exponential family .
conditional random fields are probabilistic models for labelling sequential data .
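for reference , a linear-chain crf defines the conditional probability of a label sequence y given an input sequence x as

```latex
p(\mathbf{y}\mid\mathbf{x}) =
\frac{1}{Z(\mathbf{x})}
\exp\!\Big(\sum_{t=1}^{T}\sum_{k}\lambda_k\, f_k(y_{t-1}, y_t, \mathbf{x}, t)\Big),
\qquad
Z(\mathbf{x}) = \sum_{\mathbf{y}'}
\exp\!\Big(\sum_{t=1}^{T}\sum_{k}\lambda_k\, f_k(y'_{t-1}, y'_t, \mathbf{x}, t)\Big)
```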
a hierarchical phrase-based translation model reorganizes phrases into hierarchical ones by reducing sub-phrases to variables .
in the hierarchical phrase-based translation method , the translation rules are extracted by abstracting some words from an initial phrase pair .
the 4-gram language model was trained with the kenlm toolkit on the english side of the training data and the english wikipedia articles .
we used kenlm with srilm to train a 5-gram language model based on all available target language training data .
feature weights were set with minimum error rate training on a development set using bleu as the objective function .
all features were log-linearly combined and their weights were optimized by performing minimum error rate training .
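for reference , the log-linear combination and the mert criterion mentioned above can be written as follows : the decoder picks the highest-scoring translation under weighted feature functions , and mert tunes the weights to maximize bleu on a development set .

```latex
\hat{e} = \arg\max_{e} \sum_{m=1}^{M} \lambda_m\, h_m(e, f),
\qquad
\hat{\boldsymbol{\lambda}} = \arg\max_{\boldsymbol{\lambda}}
\mathrm{BLEU}\big(\{\hat{e}(f_s;\boldsymbol{\lambda})\}_{s=1}^{S}, \{r_s\}_{s=1}^{S}\big)
```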
in this section , we will describe the ibm constraints .
in the following , we will call these the itg constraints .
for the fluency and grammaticality features , we train 4-gram lms using the development dataset with the sri toolkit .
we further used a 5-gram language model trained using the srilm toolkit with modified kneser-ney smoothing .
we chose the three models that achieved at least one best score in the closed tests from emerson , as well as the sub-word-based model of zhang et al for comparison .
we chose the three models that achieved at least one best score in the closed tests from emerson , as well as the sub-word-based model of zhang , kikui , and sumita for comparison .
the embedded word vectors are trained over large collections of text using variants of neural networks .
these word vectors can be randomly initialized from a uniform distribution , or be pre-trained from a text corpus with embedding learning algorithms .
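a minimal numpy sketch of both initialization strategies ( the vocabulary and the pre-trained dictionary are toy placeholders ) :

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat"]                    # toy vocabulary
pretrained = {"the": np.zeros(300, np.float32)}  # stand-in for real vectors
dim = 300

# random initialization from a uniform distribution ...
emb = rng.uniform(-0.25, 0.25, size=(len(vocab), dim)).astype(np.float32)

# ... then overwrite the rows of words that have pre-trained vectors.
for i, word in enumerate(vocab):
    if word in pretrained:
        emb[i] = pretrained[word]
```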
erk and padó incorporate inverse selectional preferences into their contextualization function .
erk and padó employ selectional preferences to contextualize occurrences of target words .
transliteration is the process of converting terms written in one language into their approximate spelling or phonetic equivalents in another language .
transliteration is a process of translating a foreign word into a native language by preserving its pronunciation in the original language , otherwise known as translation-by-sound .
we use the lexicon created by hu and liu , which consists of 2,006 positive words and 4,783 negative words .
the negative words and positive words come from the dictionary provided by hu and liu .
as stated above , we aim to build an unsupervised generative model for named entity clustering , since such a model could be integrated with unsupervised coreference models like haghighi and klein for joint inference .
to gauge the performance of our model , we compare it with a bayesian model for unsupervised coreference resolution that was recently proposed by haghighi and klein .
dependency parsing is a valuable form of syntactic processing for nlp applications due to its transparent lexicalized representation and robustness with respect to flexible word order languages .
dependency parsing is a core task in nlp , and it is widely used by many applications such as information extraction , question answering , and machine translation .
we use pre-trained vectors from glove for word-level embeddings .
we use pre-trained glove vectors for the initialization of word embeddings .
using these paradigms , we perform a comprehensive evaluation of explanation methods for nlp ( § 3 ) .
we have attempted to include all important local methods for nlp in our experiments ( see §3 ) .
the 5-gram kneser-ney smoothed language models were trained by srilm , with kenlm used at runtime .
models were built and interpolated using srilm with modified kneser-ney smoothing and the default pruning settings .
a good ranking is one in which the perfectmatch and the relevant questions are both ranked above the irrelevant ones .
a good ranking is one that ranks all good comments above potentiallyuseful and bad ones .
besides , chinese is a topic-prominent language : the subject is usually covert and the usage of words is relatively flexible .
more importantly , chinese is a language that lacks the morphological clues that help determine the pos tag of a word .
knowledge bases like freebase , dbpedia , and nell are extremely useful resources for many nlp tasks .
knowledge graphs such as freebase , yago and wordnet are among the most widely used resources in nlp applications .
koehn and knight used similarity in spelling as another kind of cue that a pair of words may be translations of one another .
koehn and knight use similarity in spelling as another kind of cue that a pair of words may be translations of one another .
parallel sentences are used in many natural language processing applications , particularly for automatic terminology extraction and statistical machine translation .
sentence and word aligned parallel corpora are extensively used for statistical machine translation and in multilingual natural language processing applications .
popovic and ney investigate improving translation quality from inflected languages by using stems , suffixes and part-of-speech tags .
popovic and ney investigated improving translation quality from inflected languages by using stems , suffixes and part-of-speech tags .
on real-world tasks , our method achieves 7 times speedup on citation matching , and 13 times speedup on large-scale author disambiguation .
on real-world tasks , our method achieves 7 times speedup on citation matching , and 13 times speedup on large-scale author disambiguation .
ever since the pioneering article of gildea and jurafsky , there has been an increasing interest in automatic semantic role labeling .
since gildea and jurafsky pioneered statistical semantic role labeling , there has been a great deal of computational work using predicate-argument structures for semantics .
stance detection is the task of automatically determining from text whether the author is in favor of the given target , against the given target , or whether neither inference is likely .
stance detection is the task of assigning stance labels to a piece of text with respect to a topic , i.e. whether a piece of text is in favour of “ abortion ” , neutral , or against .
as with , we train the language model on the penn treebank .
we use the penn wsj treebank for our experiments .
a problem text is split into fragments where each fragment is represented as a transition between two world states in which the quantities of entities are updated or observed .
a problem text is split into fragments where each fragment corresponds to an observation or an update of the quantity of an entity in one or two containers .
wordseye is a system for automatically converting natural language text into 3d scenes representing the meaning of that text .
wordseye is a system for automatically converting natural language text into 3d scenes representing the meaning of that text .
the model was built using the srilm toolkit with backoff and kneser-ney smoothing .
in addition , a 5-gram lm with kneser-ney smoothing and interpolation was built using the srilm toolkit .
goldwater and mcclosky show improvements in a czech to english word-based translation system when inflectional endings are simplified or removed entirely .
goldwater and mcclosky use morphological analysis on the czech side to get improvements in czech-to-english statistical machine translation .
the idea behind our method is to utilize certain layout structures and linguistic patterns .
the idea behind our method is to utilize certain layout structures and linguistic patterns .
relation extraction is the task of recognizing and extracting relations between entities or concepts in texts .
relation extraction is a core task in information extraction and natural language understanding .
paraphrase identification ( pi ) is the task of recognizing whether two sentences are paraphrases of each other .
paraphrase identification ( pi ) may be defined as “ the task of deciding whether two given text fragments have the same meaning ” ( lintean & rus 2011 ) .
we used the google news pretrained word2vec word embeddings for our model .
for all three classifiers , we used the word2vec 300d pre-trained embeddings as features .
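a minimal sketch of loading the public google news vectors with gensim ( the file name is from the original release ; the path is a placeholder ) :

```python
from gensim.models import KeyedVectors

wv = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin.gz", binary=True)
print(wv["king"].shape)  # (300,)
```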
we substitute our language model and use mert to optimize the bleu score .
we report bleu scores to compare translation results .
we use the word2vec tool with the skip-gram learning scheme .
we perform pre-training using the skip-gram nn architecture available in the word2vec tool .
collobert et al adapted the original cnn proposed by lecun and bengio for modelling natural language sentences .
collobert et al use a convolutional neural network over the sequence of word embeddings .
the log-linear weights for the baseline systems are optimized using mert provided in the moses toolkit .
the weights associated with the feature functions are optimally combined using minimum error rate training .
we used the srilm toolkit to create 5-gram language models with interpolated modified kneser-ney discounting .
we used kenlm with srilm to train a 5-gram language model based on all available target language training data .
for the evaluation of machine translation quality , some standard automatic evaluation metrics have been used , like bleu , nist and ribes in all experiments .
for evaluation of machine translation quality , standard automatic evaluation metrics are used , like bleu and ribes in all experiments .
sentiment analysis is a technique to classify documents based on the polarity of opinion expressed by the author of the document ( cite-p-16-1-13 ) .
sentiment analysis is a research area in the field of natural language processing .
results indicate that it is important not to restrict the model to local dependencies .
these results demonstrate that this model benefits greatly from the inclusion of long-range dependencies .
neural models have shown great success on a variety of tasks , including machine translation , image caption generation , and language modeling .
recently , neural networks , and in particular recurrent neural networks have shown excellent performance in language modeling .
yang et al proposed a hierarchical rnn model to learn attention weights based on the local context using an unsupervised method .
yang et al introduced an attention mechanism using a single matrix and outputting a single vector .
the usage of deep-learning methods such as deep belief networks and autoencoders has also been explored for qa retrieval .
neural networks such as dbns and more sophisticated neural pipelines have been explored for cqa retrieval .
dependency parsing is a topic that has engendered increasing interest in recent years .
dependency parsing is a central nlp task .
n-gram language models are trained over the target side of the training data , using srilm with modified kneser-ney discounting .
a kneser-ney smoothed 5-gram language model is trained on the target side of the parallel data with srilm .
a trigram language model with modified kneser-ney discounting and interpolation was used as produced by the srilm toolkit .
language models were built using the sri language modeling toolkit with modified kneser-ney smoothing .
we presented a new corpus for context-dependent semantic parsing .
we also develop a semantic parser for this corpus .
one of the first challenges in sentiment analysis is the vast lexical diversity of subjective language .
sentiment analysis is a growing research field , especially on web social networks .
lexical simplification is the task of identifying and replacing cws in a text to improve the overall understandability and readability .
lexical simplification is the task of modifying the lexical content of complex sentences in order to make them simpler .
the language model is trained and applied with the srilm toolkit .
language models were built using the srilm toolkit .
evaluating using diverse data demonstrated the effectiveness of our techniques .
experimental results show that our techniques are promising .
in experiments on 21 language pairs from four different language families , we obtain up to 58 % higher accuracy than without transfer .
we conduct experiments on 21 language pairs from four language families , emulating a low-resource setting .
to verify sentence generation quantitatively , we evaluated the sentences automatically using bleu score .
we evaluated the translation quality using the case-insensitive bleu-4 metric .
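a minimal sketch of such scoring with sacrebleu ( the hypothesis and reference strings are toy examples ; lowercase=True gives case-insensitive bleu ) :

```python
import sacrebleu

hyps = ["the cat sat on the mat"]           # system outputs, one per segment
refs = [["the cat is sitting on the mat"]]  # one reference stream
bleu = sacrebleu.corpus_bleu(hyps, refs, lowercase=True)  # case-insensitive
print(round(bleu.score, 2))
```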
the sg model is a popular choice to learn word embeddings by leveraging the relations between a word and its neighboring words .
the skip-gram model aims to find word representations that are useful for predicting the surrounding words in a sentence or document .
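for reference , the skip-gram objective of mikolov et al maximizes the average log-probability of context words within a window of size c , with the softmax prediction

```latex
\frac{1}{T}\sum_{t=1}^{T} \sum_{\substack{-c \le j \le c \\ j \neq 0}}
\log p(w_{t+j} \mid w_t),
\qquad
p(w_O \mid w_I) =
\frac{\exp\!\big({v'_{w_O}}^{\top} v_{w_I}\big)}
     {\sum_{w=1}^{W} \exp\!\big({v'_{w}}^{\top} v_{w_I}\big)}
```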
later , xue et al combined the language model and translation model to a translation-based language model and observed better performance in question retrieval .
previous work consistently reported that the word-based translation models yielded better performance than the traditional methods for question retrieval .
täckström et al explore the use of mixed type and token annotations in which a tagger is learned by projecting information via parallel text .
a different approach to cross-lingual pos tagging is proposed by täckström et al who couple token and type constraints to guide learning .
we also compare our model to an end-to-end lstm model by miwa and bansal which comprises a sequence layer for entity extraction and a tree-based dependency layer for relation classification .
the bilstm-gcn encoder part of our model resembles the bilstm-treelstm model proposed by miwa and bansal , as they also stack a dependency tree on top of sequences to jointly model entities and relations .
we use srilm for n-gram language model training and hmm decoding .
we also use a 4-gram language model trained using srilm with kneser-ney smoothing .
in our corpus , about 26 % of questions do not need context , 12 % need type 1 context , and 32 % need type 2 context .
in our corpus , about 26 % of questions do not need context , 12 % need type 1 context , 32 % need type 2 context , and 30 % need type 3 context .
to address the cost of the inference step , we apply an efficient sampling procedure via stochastic gradient langevin dynamics .
we first consider the stochastic gradient langevin dynamics sampler to generate posterior samples .
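for reference , the sgld update of welling and teh combines a stochastic gradient step on a minibatch of n out of N examples with gaussian noise scaled to the step size :

```latex
\Delta\theta_t = \frac{\epsilon_t}{2}
\Big(\nabla \log p(\theta_t)
   + \frac{N}{n}\sum_{i=1}^{n} \nabla \log p(x_{t_i} \mid \theta_t)\Big)
 + \eta_t,
\qquad \eta_t \sim \mathcal{N}(0, \epsilon_t)
```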
a multiword expression ( mwe ) is a phrase or sequence of words which exhibits idiosyncratic behaviour ( cite-p-10-1-7 , cite-p-10-1-0 ) .
a multiword expression ( mwe ) is a combination of words with lexical , syntactic or semantic idiosyncrasy ( cite-p-14-3-12 , cite-p-14-1-0 ) .
word sense disambiguation is the task of identifying the intended meaning of a given target word from the context in which it is used .
word sense disambiguation is the process of determining which sense of a homograph is correct in a given context .
a translation model is induced between phonemes in two wordlists by combining the maximum similarity alignment with the competitive linking algorithm of melamed .
the translation model is induced by combining the maximum similarity alignment with the competitive linking algorithm of melamed .
surdeanu et al describe an extended model , where each entity pair may link multiple instances to multiple relations .
surdeanu et al propose a two-layer multi-instance multi-label framework to capture the dependencies among relations .
our system is based on the conditional random field .
our model is a structured conditional random field .
tsvetkov et al create synthetic translation options to augment the phrase-table .
tsvetkov et al create synthetic translation options to augment a standard phrase-table .
the feature weights are tuned to optimize bleu using the minimum error rate training algorithm .
the feature weights for the log-linear combination of the features are tuned using minimum error rate training on the devset in terms of bleu .
for efficiency , we follow the hierarchical softmax optimization used in word2vec .
next we consider the context-predicting vectors available as part of the word2vec project .
a skip-gram model from mikolov et al was used to generate a 128-dimensional vector for a particular word .
mikolov et al proposed a distributed word embedding model whose vectors , derived from neural networks , convey meaningful information .
another authoring assistant was developed in the a-propos project .
another assistant for an authoring environment was developed in the a-propos project .
for building our statistical ape system , we used maximum phrase length of 7 and a 5-gram language model trained using kenlm .
for building our ape b2 system , we set a maximum phrase length of 7 for the translation model , and a 5-gram language model was trained using kenlm .
we take full advantage of questions ’ textual descriptions to address the data sparseness and cold-start problems .
by incorporating textual information , rcm can effectively deal with the data sparseness problem .
our parser produces a full syntactic parse of any sentence , while simultaneously producing logical forms for portions of the sentence that have a semantic representation within the parser ’ s predicate vocabulary .
our parser produces a full syntactic parse of every sentence , and furthermore produces logical forms for portions of the sentence that have a semantic representation within the parser ’ s predicate vocabulary .
in the above examples , the classifier “ hiki ” is used to count the noun “ inu ( dog ) ” .
in the above examples , the classifier “ hiki ” is used to count the noun “ inu ( dog ) ” , while “ satsu ” is used for “ hon ( book ) ” .
in our work , we use latent dirichlet allocation to identify the sub-topics in the given body of texts .
to measure the importance of the generated questions , we use lda to identify the important subtopics from the given body of texts .
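a minimal sketch of such topic extraction with gensim's lda implementation ( the documents and topic count are toy placeholders ) :

```python
from gensim import corpora
from gensim.models import LdaModel

docs = [["question", "retrieval", "model"],
        ["topic", "model", "text"]]          # toy tokenized documents
dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]
lda = LdaModel(corpus, num_topics=2, id2word=dictionary, passes=10)
for topic_id, words in lda.show_topics(num_words=3, formatted=False):
    print(topic_id, [w for w, _ in words])
```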
in order to do so , we use the moses statistical machine translation toolkit .
we use moses , a statistical machine translation system that allows training of translation models .
the language model is implemented as an n-gram model using the irstlm toolkit with kneser-ney smoothing .
the n-gram based language model is developed by employing the irstlm toolkit .
however , hand-labeled training data is expensive to produce , low in coverage of event types , and limited in size , which makes it hard for supervised methods to extract events at a large scale .
however , these supervised methods depend on the quality of the training data and labeled training data is expensive to produce .
we use the skll and scikit-learn toolkits .
we used standard classifiers available in the scikit-learn package .
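a minimal scikit-learn sketch of such a standard classifier pipeline ( the training texts and labels are toy placeholders ) :

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["great movie", "terrible film"]  # toy training data
labels = [1, 0]
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["what a great film"]))
```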
this approach works well for many applications , such as phrase similarity , multi-document summarization , and word sense induction , even though it disregards the order of the words .
this approach works well for many applications , such as phrase similarity and multi-document summarization , even though it disregards the order of the words .
we used the srilm toolkit to train a 4-gram language model on the xinhua portion of the gigaword corpus , which contains 238m english words .
we train a 4-gram language model on the xinhua portion of the english gigaword corpus with the srilm toolkit .
we applied our algorithms to word-level alignment using the english-french hansards data from the 2003 naacl shared task .
we analyzed 447 hand-aligned french-english sentences from the naacl 2003 alignment workshop .
combining two of these features , we finally outperform the continuous embedding features by nearly 2 points of f1 score .
moreover , the combination of the approaches provides additive improvements , outperforming the dense and continuous embedding features by nearly 2 points of f1 score .
the weights of the different feature functions were tuned by means of minimum error-rate training executed on the europarl development corpus .
the ape system for each target language was tuned on comparable development sets , optimizing ter with minimum error rate training .
next , the output of the max-pooling layer is passed to a dropout layer .
dropout is performed at the input of each lstm layer , including the first layer .
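a minimal pytorch sketch of the dropout placement described above ( sizes and rates are illustrative ; nn.LSTM applies dropout between stacked layers , so input dropout is added explicitly ) :

```python
import torch
import torch.nn as nn

input_dropout = nn.Dropout(p=0.5)  # dropout at the input of the first layer
lstm = nn.LSTM(input_size=300, hidden_size=256, num_layers=2,
               dropout=0.5,        # dropout between the two stacked layers
               batch_first=True)

x = torch.randn(8, 20, 300)        # (batch, seq, features)
out, _ = lstm(input_dropout(x))
```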