Columns: sentence1 (string, lengths 16–446) and sentence2 (string, lengths 14–436).
we further used a 5-gram language model trained using the srilm toolkit with modified kneser-ney smoothing .
furthermore , we train a 5-gram language model using the sri language toolkit .
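Both sentences in the pair above describe the same recipe: training a 5-gram language model with the SRILM toolkit using modified Kneser-Ney smoothing. A minimal sketch of how that is typically driven from Python, assuming SRILM's ngram-count binary is installed and on PATH (the file names are hypothetical):

```python
import subprocess

# Sketch: estimate a 5-gram LM with modified Kneser-Ney smoothing via
# SRILM's ngram-count CLI. "corpus.txt" and "train.lm" are hypothetical.
subprocess.run([
    "ngram-count",
    "-order", "5",          # 5-gram model
    "-kndiscount",          # modified Kneser-Ney discounting
    "-interpolate",         # interpolate higher- and lower-order estimates
    "-text", "corpus.txt",  # tokenized training text, one sentence per line
    "-lm", "train.lm",      # output ARPA-format language model
], check=True)
```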
a residual connection is employed around each of two sub-layers , followed by layer normalization .
we add a residual connection around each of the two sub-layers , followed by layer normalization .
system tuning was carried out using both k-best mira and minimum error rate training on the held-out development set .
parameter tuning was carried out using both k-best mira and minimum error rate training on a held-out development set .
coreference resolution is a field in which major progress has been made in the last decade .
coreference resolution is a well known clustering task in natural language processing .
sentiment analysis is an nlp task that deals with extraction of opinion from a piece of text on a topic .
sentiment analysis is the task of automatically identifying the valence or polarity of a piece of text .
we use the skll and scikit-learn toolkits .
for training our system classifier , we have used scikit-learn .
similarly to sagae and tsujii , the system presented by damonte , cohen , and satta extends standard approaches for transition-based dependency parsing to amr parsing , allowing re-entrancies .
on the other hand , sagae and tsujii propose a transition-based counterpart for dag parsing which makes it possible to parse multi-headed relations .
mann and thompson introduce rhetorical structure theory , which was originally developed during the study of automatic text generation .
mann and thompson introduce rhetorical structure theory , which was originally developed during the study of automatic text generation .
analytics over large quantities of unstructured text has led to increased interest in information extraction technologies .
the rise of “ big data ” analytics over unstructured text has led to renewed interest in information extraction ( ie ) .
we estimated 5-gram language models using the sri toolkit with modified kneser-ney smoothing .
furthermore , we train a 5-gram language model using the sri language toolkit .
the data sets used are taken from the conll-x shared task on multilingual dependency parsing .
we use the treebanks from the conll shared tasks on dependency parsing for evaluation .
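The CoNLL-X treebanks referenced in the pair above are plain-text files with one token per line, ten tab-separated columns, and blank lines between sentences. A minimal reader sketch (the field names follow the CoNLL-X convention; the path is hypothetical):

```python
def read_conll(path):
    """Minimal CoNLL-X reader: yields one sentence at a time as a list of
    token rows; each row holds the 10 tab-separated fields (ID, FORM,
    LEMMA, CPOSTAG, POSTAG, FEATS, HEAD, DEPREL, PHEAD, PDEPREL)."""
    sentence = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line:              # blank line ends a sentence
                if sentence:
                    yield sentence
                    sentence = []
            else:
                sentence.append(line.split("\t"))
    if sentence:                      # trailing sentence without blank line
        yield sentence
```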
for example , riaz and girju and do et al have proposed unsupervised metrics for learning causal dependencies between two events .
for example , riaz and girju and do et al introduced unsupervised metrics to learn causal dependencies between events .
srl models have also been trained using graphical models and neural networks .
distributed word representations have been shown to improve the accuracy of ner systems .
deep neural networks have gained recognition as leading feature extraction methods for word representation .
advances in neural network and deep learning based language processing have led to more powerful continuous vector representations of words .
we relax this assumption by extending the model to be non-parametric , using a hierarchical dirichlet process .
to test this hypothesis , we extended our model to incorporate bigram dependencies using a hierarchical dirichlet process .
we perform minimum error rate training to tune various feature weights .
we use minimum error rate training to maximize bleu on the complete development data .
we train a 4-gram language model on the xinhua portion of the english gigaword corpus using the srilm toolkit with modified kneser-ney smoothing .
for both languages , we used the srilm toolkit to train a 5-gram language model using all monolingual data provided .
in this paper , we present a linguistically motivated rule-based system for the detection of negation and speculation scopes .
this paper presents scopefinder , a linguistically motivated rule-based system for the detection of negation and speculation scopes .
we apply the sri language modeling toolkit to train a 4-gram language model with kneser-ney smoothing .
for the semantic language model , we used the srilm package and trained a tri-gram language model with the default good-turing smoothing .
we trained the l1-regularized logistic regression classifier implemented in liblinear .
we train and evaluate an l2-regularized logistic regression classifier with the liblinear solver as implemented in scikit-learn .
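An L2-regularized logistic regression classifier with the liblinear solver, as both sentences above describe, is readily available in scikit-learn. A minimal sketch with hypothetical tf-idf features and toy training data:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-ins for real (texts, labels) training data.
texts = ["a positive example", "a negative example"]
labels = [1, 0]

# L2 penalty with the liblinear solver, chained after tf-idf features.
clf = make_pipeline(
    TfidfVectorizer(),
    LogisticRegression(penalty="l2", solver="liblinear", C=1.0),
)
clf.fit(texts, labels)
print(clf.predict(["another example"]))
```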
bond et al use grammars to paraphrase the source side of training data , covering aspects like word order and minor lexical variations but not content words .
bond et al use grammars to paraphrase the whole source sentence , covering aspects like word order and minor lexical variations , but not content words .
the bleu-4 metric implemented by nltk is used for quantitative evaluation .
the bleu metric has been used to evaluate the performance of the systems .
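Corpus-level BLEU-4, as used in the pair above, is available in NLTK. A minimal sketch with toy tokenized sentences (the smoothing function is an optional choice, not something the sentences above specify):

```python
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

# Each hypothesis is paired with a list of tokenized references; the
# default weights score 1- through 4-grams equally, i.e. BLEU-4.
references = [[["the", "cat", "sat", "on", "the", "mat"]]]
hypotheses = [["the", "cat", "is", "on", "the", "mat"]]

score = corpus_bleu(references, hypotheses,
                    smoothing_function=SmoothingFunction().method1)
print(f"BLEU-4: {score:.4f}")
```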
declarative knowledge can be learned from data with very limited human involvement .
however , declarative knowledge is still created in a costly manual process .
we first removed all sgml mark-up , and performed sentence-breaking and tokenization using the stanford corenlp toolkit .
for part of speech tagging and dependency parsing of the text , we used the toolset from stanford corenlp .
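One way to reproduce the Stanford CoreNLP preprocessing described above from Python is stanza's CoreNLP client. A minimal sketch, assuming a local CoreNLP installation with CORENLP_HOME set (the input text is a toy example):

```python
from stanza.server import CoreNLPClient

text = "The quick brown fox jumped over the lazy dog."

# Tokenization, sentence splitting, POS tagging, and lemmatization via a
# locally launched CoreNLP server.
with CoreNLPClient(annotators=["tokenize", "ssplit", "pos", "lemma"],
                   be_quiet=True) as client:
    ann = client.annotate(text)
    for sentence in ann.sentence:
        for token in sentence.token:
            print(token.word, token.pos, token.lemma)
```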
luke is a knowledge base editor that has been enhanced to support entering and maintaining the semantic mappings needed by a natural language interface to a knowledge base .
luke is a knowledge editor designed to support two tasks ; the first is editing the classes and relations in a knowledge base .
we define the position of m4 to be right after m3 ( because “ the ” is after “ held ” in left-to-right order ) .
in figure 1 we define the position of m4 to be right after m3 ( because “ the ” is after “ held ” in left-to-right order on the target side ) .
wikipedia is a free , collaboratively edited encyclopedia .
wikipedia is a constantly evolving source of detailed information that could facilitate intelligent machines — if they are able to leverage its power .
lagrangian relaxation is a classical technique in combinatorial optimization .
lagrangian relaxation is a classical technique in combinatorial optimization ( cite-p-15-1-10 ) .
chen et al ( 2012 ) used lexical and parser features for detecting comments from youtube that are offensive .
chen et al introduced a lexical syntactic feature architecture to detect offensive content and identify potential offensive users in social media .
word-based and phrase-based models eventually work on the word level .
the word-based approach assumes one-to-one aligned source and target sentences .
brown clusters have been used to good effect for various nlp tasks such as named entity recognition and dependency parsing .
brown clusters have previously been shown to improve statistical dependency parsing , as well as other nlp tasks such as chunking and named entity recognition .
garrette et al introduced a framework for combining logic and distributional models using probabilistic logic .
garrette et al propose a framework for combining logic and distributional models in which logical form is the primary meaning representation .
base nps provide an accurate and fast bracketing method , running in time linear in the length of the tagged text .
the base np bracketer is very fast , operating in time linear in the length of the text .
we use the sri language modeling toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus .
for the language model , we used the sri language modeling toolkit to train a trigram model with modified kneser-ney smoothing on the 31 , 149 english sentences .
we divide the sentences into three types according to triplet overlap degree , including normal , entitypairoverlap and singleentityoverlap .
we divided the sentences into three types according to triplet overlap degree , including normal , entitypairoverlap and singleentityoverlap .
all the language models are built with the sri language modeling toolkit .
a 4-gram language model is trained with the srilm toolkit .
we use the maximum entropy model for our classification task .
we use the maximum entropy model as a classifier .
for all three systems , we used the stanford corenlp package to perform lemmatization and pos tagging of the input sentences .
to generate these trees , we employ the stanford pos tagger and the stack version of the malt parser .
socher et al first proposed a neural reranker using a recursive neural network for constituent parsing .
socher et al built a recursive neural network for constituent parsing .
this paper proposes a novel pu learning ( mpipul ) technique to identify deceptive reviews .
in this paper , a novel pu learning technique ( mpipul ) is proposed to identify deceptive reviews .
in this paper , we propose a linguistic approach to preference acquisition that aims to infer preferences from dialogue moves .
to this end , we propose a new annotation scheme to study how preferences are linguistically expressed in dialogues .
we combine the global hyperlink structure of wikipedia with a local bag-of-words probabilistic model .
in this paper , we extend the model with a global model which takes the hyperlink structure of wikipedia into account .
for the out-of-domain testsets , we obtained statistically significant overall improvements , but we were hampered by the small sizes of the testsets .
for the out-of-domain testsets , we obtained statistically significant overall improvements , but we were hampered by the small sizes of the testsets in evaluating unseen/wh words .
zhang approaches the relation classification problem with bootstrapping on top of svm .
the third system approaches the relation classification problem with bootstrapping on top of svm , as proposed by zhang .
the most common word embeddings used in deep learning are word2vec , glove , and fasttext .
some of the commonly used word representation techniques are word2vec , glove , neural language model , etc .
to exploit these kind of labeling constraints , we resort to conditional random fields .
specifically , we adopt linear-chain conditional random fields as the method for sequence labeling .
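A linear-chain CRF for sequence labeling, as described in the pair above, can be sketched with the sklearn-crfsuite package (one of several CRF implementations; the features and data below are toy examples):

```python
import sklearn_crfsuite

# Each training instance is a list of per-token feature dicts plus the
# matching list of labels; these features are deliberately minimal.
def token_features(sent, i):
    return {
        "word.lower": sent[i].lower(),
        "is_title": sent[i].istitle(),
        "prev_word": sent[i - 1].lower() if i > 0 else "<BOS>",
        "next_word": sent[i + 1].lower() if i < len(sent) - 1 else "<EOS>",
    }

sents = [["John", "lives", "in", "Paris"]]
labels = [["B-PER", "O", "O", "B-LOC"]]

X = [[token_features(s, i) for i in range(len(s))] for s in sents]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1,
                           max_iterations=100)
crf.fit(X, labels)
print(crf.predict(X))
```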
we use 300-dimensional word embeddings from glove to initialize the model .
for the mix one , we also train word embeddings of dimension 50 using glove .
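Initializing a model with pre-trained GloVe vectors, as in the pair above, usually means filling an embedding matrix from the released text files. A minimal sketch (the vocabulary and file path are hypothetical):

```python
import numpy as np

dim = 300
vocab = {"<pad>": 0, "<unk>": 1, "cat": 2, "dog": 3}  # word -> row index

# Small random init; rows found in GloVe get overwritten below.
embeddings = np.random.uniform(-0.05, 0.05,
                               (len(vocab), dim)).astype("float32")
with open("glove.6B.300d.txt", encoding="utf-8") as f:
    for line in f:
        parts = line.rstrip().split(" ")
        word, vec = parts[0], parts[1:]
        if word in vocab:
            embeddings[vocab[word]] = np.asarray(vec, dtype="float32")
# "embeddings" can now be copied into the model's embedding layer;
# words missing from GloVe keep their small random initialization.
```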
in this study , we explore the role of linguistic context in predicting generalized quantifiers .
this study explored the role of linguistic context in predicting quantifiers .
a 4-gram language model was trained on the monolingual data by the srilm toolkit .
for both languages , we used the srilm toolkit to train a 5-gram language model using all monolingual data provided .
in this paper we propose a graph-theoretic model for tweet recommendation that presents users with items .
in this paper we present a graph-theoretic approach to tweet recommendation that attempts to address these challenges .
however , spanish is the third most used language on the internet , with a total of 7.7 % ( more than 277 million users ) and a huge internet growth of more than 1,400 % .
spanish is the third most used language on the internet , after english and chinese , with a total of 7.7 % of internet users ( more than 277 million users ) and a huge user growth of more than 1,400 % .
we evaluated our model on the fine-grained sentiment analysis task presented in socher et al and compared to their released system .
finally , we examined our clustering method on the sentiment analysis task from socher et al sentiment treebank dataset and showed that it improved performance versus comparable models .
in this work , we propose a parsing method that handles both disfluency and asr errors .
in this work , we have proposed a novel joint transition-based dependency parsing method with disfluency detection .
we use corpus-level bleu score to quantitatively evaluate the generated paragraphs .
we evaluate the translation quality using the case-sensitive bleu-4 metric .
poon and domingos present a markov logic network approach to unsupervised coreference resolution .
poon and domingos introduced an unsupervised system in the framework of markov logic .
in this paper we investigate distributed training strategies for the structured perceptron .
in this paper we have investigated distributing the structured perceptron via simple parameter mixing strategies .
this paper presents an alternative way of pruning unnecessary edges .
this paper presents an efficient k-best parsing algorithm for pcfgs .
coreference resolution is the task of determining which mentions in a text are used to refer to the same real-world entity .
coreference resolution is a field in which major progress has been made in the last decade .
bayesum can be understood as a justified query expansion technique in the language modeling for ir framework .
that is , bayesum is a statistically justified query expansion method in the language modeling for ir framework ( cite-p-21-3-6 ) .
our learning-based approach , which uses features that include handcrafted rules and their predictions , outperforms its rule-based counterpart by more than 20 % , achieving an overall accuracy of 78.7 % .
in an evaluation on 147 switchboard dialogues , our learning-based approach to fine-grained is determination achieves an accuracy of 78.7 % , substantially outperforming the rule-based approach by 21.3 % .
studies suggest that all translated texts , irrespective of source language , share some so-called translation universals .
numerous studies suggest that translated texts are different from original ones .
we investigate the applicability of part-of-speech tagging to typical english-language queries .
in addition , we considered the applicability of part-of-speech tags to the question of query reformulation .
zhang and clark proposed an incremental joint segmentation and pos tagging model , with an effective feature set for chinese .
zhang and clark used a segment-based decoder for word segmentation and pos tagging .
we trained the statistical phrase-based systems using the moses toolkit with mert tuning .
we used the moses toolkit to train the phrase tables and lexicalized reordering models .
our baseline system is phrase-based moses with feature weights trained using mert .
the baseline of our approach is a statistical phrase-based system which is trained using moses .
these issues increase the perplexity and sparsity of nlp models .
these issues increase sparsity in nlp models and reduce accuracy .
for several test sets , supervised systems were the most successful in sts 2012 .
the approach remains equally successful on sts 2014 data .
manual evaluation of machine translation is too time-consuming and expensive to conduct .
manual evaluation of translation quality is generally thought to be excessively time consuming and expensive .
lexical chains are a representation of lexical cohesion as sequences of semantically related words .
lexical chains are sequences of semantically related words that can indicate topic shifts .
we use the sentiment pipeline of stanford corenlp to obtain this feature .
we used the dependency parser from the stanford corenlp .
for sentences , we tokenize each sentence by stanford corenlp and use the 300-d word embeddings from glove to initialize the models .
we initialize the embedding weights by the pre-trained word embeddings with 200 dimensional vectors .
summarising the content of these comments allows users to interact with the data at a higher level , providing a transparency to the underlying data .
summarisation of the comments allows interaction at a higher level and can lead to an understanding of the overall discussion .
for text-level absa ; the latter was introduced for the first time as a subtask .
in addition , se-absa16 included for the first time a text-level subtask .
coreference resolution is the task of grouping all the mentions of entities 1 in a document into equivalence classes so that all the mentions in a given class refer to the same discourse entity .
coreference resolution is the task of grouping mentions to entities .
in this paper , we present a fast and lightweight multilingual dependency parsing system for the conll 2017 ud shared task , which is composed of bilstms .
in this paper , we present our multilingual dependency parsing system mengest for conll 2017 ud shared task .
on these tasks , our models also surpass all the baselines , including a morphology-based model .
the experimental results demonstrate that our models outperform the baselines on five word similarity datasets .
coreference resolution is the task of grouping all the mentions of entities 1 in a document into equivalence classes so that all the mentions in a given class refer to the same discourse entity .
coreference resolution is the task of partitioning a set of mentions ( i.e . person , organization and location ) into entities .
language models were built using the srilm toolkit .
the language model was trained using the srilm toolkit .
by applying these ideas to japanese why-qa , we improved precision by 4.4 % against all the questions in our test set .
through our experiments on japanese why-qa , we show that a combination of the above methods can improve why-qa accuracy .
in our experiments , we used the srilm toolkit to build 5-gram language model using the ldc arabic gigaword corpus .
we used the srilm toolkit to create 5-gram language models with interpolated modified kneser-ney discounting .
in this paper , we have studied the impact of argumentation in the speaker 's discourse .
in this work we study the use of semantic frames for modelling argumentation in speakers ' discourse .
most of the following works focused on feature engineering and machine learning models .
most of the following work focused on feature engineering and machine learning models .
we provide bounds on the error made by the marginals of the relaxed graph in place of the full one .
we also contribute a bound on the error of the marginal probabilities by a subgraph with respect to the full graph .
the language model used was a 5-gram with modified kneser-ney smoothing , built with the srilm toolkit .
the 5-gram kneser-ney smoothed language models were trained by srilm , with kenlm used at runtime .
a context-free grammar augmented with λ-operators is learned given a set of training sentences and their correct logical forms .
a semantic parser is learned given a set of training sentences and their correct logical forms using standard smt techniques .
topic modelling is a popular statistical method for clustering documents .
topic modeling is an unsupervised method to cluster documents based on context information .
the 5-gram target language model was trained using kenlm .
we trained an english 5-gram language model using kenlm .
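Training and querying a 5-gram KenLM model, as in the pair above, can be sketched as follows (assuming the kenlm binaries and Python bindings are installed; file paths are hypothetical):

```python
import subprocess
import kenlm

# lmplz reads tokenized text on stdin and writes an ARPA model to stdout.
with open("corpus.txt", "rb") as src, open("model.arpa", "wb") as out:
    subprocess.run(["lmplz", "-o", "5"], stdin=src, stdout=out, check=True)

# Query the model from Python; score() returns a log10 probability.
model = kenlm.Model("model.arpa")
print(model.score("this is a test sentence", bos=True, eos=True))
```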
we trained the statistical phrase-based systems using the moses toolkit with mert tuning .
our phrase-based mt system is trained by moses with standard parameters settings .
mohammad and yang have shown that there are marked differences across genders in how they use emotion words in work-place email .
for the gender identification task , mohammad and yang show that there are marked differences across genders in how they use emotion words in work-place email .
sentiment classification is a useful technique for analyzing subjective information in a large number of texts , and many studies have been conducted ( cite-p-15-3-1 ) .
sentiment classification is the task of classifying an opinion document as expressing a positive or negative sentiment .
the model we explore is based on four disjunct classes , which have been very regularly observed in scientific reports .
our simple model is based on four classes , which have been reported to be very stable in scientific reports of all kinds .
we selected conditional random fields as the baseline model .
we used conditional random fields for the machine learning task .
commonly used models such as hmms , n-gram models , markov chains and probabilistic finite state transducers all fall in the broad family of pfsms .
commonly used models such as hmms , n-gram models , markov chains , probabilistic finite state transducers and pcfgs all fall in the broad family of pfsms .
we use the skipgram model with negative sampling implemented in the open-source word2vec toolkit to learn word representations .
to obtain these features , we use the word2vec implementation available in the gensim toolkit to obtain word vectors with dimension 300 for each word in the responses .
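The skip-gram model with negative sampling, trained via gensim's word2vec implementation as the second sentence above describes, can be sketched like this (toy corpus; gensim >= 4 uses vector_size where older versions used size):

```python
from gensim.models import Word2Vec

# Toy stand-in for a real tokenized corpus.
corpus = [["the", "cat", "sat", "on", "the", "mat"],
          ["the", "dog", "lay", "on", "the", "rug"]]

model = Word2Vec(sentences=corpus,
                 vector_size=300,  # embedding dimensionality
                 sg=1,             # 1 = skip-gram (0 = CBOW)
                 negative=5,       # negative sampling with 5 noise words
                 window=5,
                 min_count=1)      # keep every word in this tiny corpus
print(model.wv["cat"].shape)       # (300,)
```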
for the machine learning component of our system we use the l2-regularised logistic regression implementation of the liblinear 3 software library .
we use the multi-class logistic regression classifier from the liblinear package for the prediction of edit scripts .
we use a popular word2vec neural language model to learn the word embeddings on an unsupervised tweet corpus .
we use the pre-trained word2vec embeddings provided by mikolov et al as model input .
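Loading the pre-trained word2vec embeddings released by Mikolov et al., as the second sentence above describes, is typically done via gensim's KeyedVectors (the file name is the conventional one for the Google News vectors; adjust the path as needed):

```python
from gensim.models import KeyedVectors

# Binary-format pre-trained vectors; loading can take a while.
wv = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)
print(wv["language"].shape)              # (300,)
print(wv.most_similar("language", topn=3))
```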
peters et al proposed the embeddings from language models , which obtains contextualized word representations .
peters et al show how deep contextualized word representations model both complex characteristics of word use , and usage across various linguistic contexts .
we use the scikit-learn machine learning library to implement the entire pipeline .
we use scikit-learn to implement the classifiers and accuracy scores to measure the predictability .
we used the pre-trained google embedding to initialize the word embedding matrix .
we used 300 dimensional skip-gram word embeddings pre-trained on pubmed .
we use the srilm toolkit to train a trigram language model with modified kneser-ney smoothing on the target side of the training corpus .
we train an english language model on the whole training set using the srilm toolkit and train mt models mainly on a 10k sentence pair subset of the acl training set .