sentence1
stringlengths
16
446
sentence2
stringlengths
14
436
morphological disambiguation is a useful first step for higher level analysis of any language but it is especially critical for agglutinative languages like turkish , czech , hungarian , and finnish .
morphological disambiguation is a well studied problem in the literature , but lstm-based contributions are still relatively scarce .
we evaluate the performance of different translation models using both bleu and ter metrics .
we optimise the feature weights of the model with minimum error rate training against the bleu evaluation metric .
dp beam search for phrase-based smt was described by koehn et al , extending earlier work on word-based smt .
belz and kow proposed another smt based nlg system which made use of the phrase-based smt model .
we investigate the need for bigram alignment models and the benefit of supervised alignment techniques .
we have investigated the need for bigram alignment models and the benefit of supervised alignment techniques in g2p .
hamilton et al report almost perfect accuracy for the procrustes transformation when detecting the direction of semantic change .
hamilton et al measured the variation between models by observing semantic change using diachronic corpora .
word sense disambiguation ( wsd ) is the task of determining the meaning of a word in a given context .
word sense disambiguation ( wsd ) is a fundamental task and long-standing challenge in natural language processing ( nlp ) .
for our experiments , we use 40,000 sentences from europarl for each language pair following the basic setup of tiedemann .
we perform the analysis with data from 110 different language pairs drawn from the europarl project .
given a set of question-answer pairs as the development set , we use the minimum error rate training algorithm to tune the feature weights 位 m i in our proposed model .
to optimize the feature weights for our model , we use viterbi envelope semiring training , which is an implementation of the minimum error rate training algorithm for training with an arbitrary loss function .
garg and henderson proposed a stack long short-term memory approach to supervised dependency parsing .
garg and henderson used rbm in a similar approach to dependency parsing .
sentences are tagged and parsed using the stanford dependency parser .
dependency parses are obtained from the stanford parser .
princeton wordnet 1 is an english lexical database that groups nouns , verbs , adjectives and adverbs into sets of cognitive synonyms , which are named as synsets .
princeton wordnet is an english lexical database that groups nouns , verbs , adjectives and adverbs into sets of cognitive synonyms , which are named as synsets .
we define the position set of math-w-7-11-0-40 , denoted by math-w-7-11-0-44 , as the set of all positions math-w-7-11-0-53 .
given an alphabet math-w-2-6-2-60 , we write math-w-2-6-2-64 for the set of all ( finite ) strings over math-w-2-6-2-76 .
the standard approach to word alignment from sentence-aligned bitexts has been to construct models which generate sentences of one language from the other , then fitting those generative models with em .
the standard approach to word alignment is to construct directional generative models , which produce a sentence in one language given the sentence in another language .
we used the moses toolkit to build mt systems using various alignments .
we used the moses toolkit for performing statistical machine translation .
documents show that our model is effective in exploiting both source and target document context , and statistically significantly outperforms the previous work in terms of bleu and meteor .
the experimental results and analysis demonstrate that our model is effective in exploiting both source and target document context , and statistically significantly outperforms the previous work in terms of bleu and meteor .
dredze et al , show that domain adaptation is hard for dependency parsing based on results in the conll 2007 shared task .
dredze et al showed that many of the parsing errors in domain adaptation tasks may come from inconsistencies between the annotations of training resources .
neural machine translation has become the primary paradigm in machine translation literature .
neural machine translation has recently gained popularity in solving the machine translation problem .
favorable compares with a tomita parser and a chart parser parsing time when run on the same grammar and lexicon .
the parsing time favorable compares with a tomita parser and a chart parser parsing time when run on the same grammar and lexicon .
word sense disambiguation ( wsd ) is the problem of assigning a sense to an ambiguous word , using its context .
word sense disambiguation ( wsd ) is the task of determining the correct meaning or sense of a word in context .
in this paper we describe the system submitted for the semeval 2014 sentiment analysis in twitter task ( task 9 subtask b ) .
in this paper we describe the system submitted for the semeval 2014 task 9 ( sentiment analysis in twitter ) subtask b .
in an example shown above , β€œ sad ” is an emotion word , and the cause of β€œ sad ” is β€œ .
in an example shown above , β€œ sad ” is an emotion word , and the cause of β€œ sad ” is β€œ i lost my phone ” .
both the structure and semantic constraints from knowledge bases can be easily exploited during parsing .
it has been shown that structure and semantic constraints are effective for enhancing semantic parsing .
semantic role labeling ( srl ) is a major nlp task , providing a shallow sentence-level semantic analysis .
semantic role labeling ( srl ) is the task of identifying the predicate-argument structure of a sentence .
drezde et al applied structural correspondence learning to the task of domain adaptation for sentiment classification of product reviews .
blitzer et al investigate domain adaptation for sentiment classifiers , focusing on online reviews for different types of products .
for training we use the adam optimizer with default values and mini-batches of 10 examples .
we train the model using the adam optimizer with the default hyper parameters .
for other researchers who wish to use our indexing machinery , it has been made available as free software .
our own implementation will be made available to other researchers as open source .
the phrase-based translation model has demonstrated superior performance and been widely used in current smt systems , and we employ our implementation on this translation model .
our work can be applied to any statistical machine translation paradigm and we will present results on a standard phrase-based translation system and a hierarchical phrase-based translation system .
shen et al , 2008 ) presents a string-to-dependency model , which restricts the target side of each hierarchical rule to be a well-formed dependency tree fragment , and employs a dependency language model to make the output more grammatically .
shen et al , 2008 ) extends the hierarchical phrase-based model and present a string-to-dependency model , which employs string-to-dependency rules whose source side are string and the target as well-formed dependency structures .
the grammatical framework for the krg is head-driven phrase structure grammar , a non-derivational , constraintbased , and surface-oriented grammatical architecture .
the lingo grammar matrix is situated theoretically within head-driven phrase structure grammar , a lexicalist , constraint-based framework .
we then use the stanford sentiment classifier developed by socher et al to automatically assign sentiment labels to translated tweets .
we then use the stanford sentiment classifier to automatically assign sentiment labels to translated tweets .
the scores of participants are in table 10 in terms of bleu and f 1 scores .
performance is measured based on the bleu scores , which are reported in table 4 .
we use the pre-trained glove 50-dimensional word embeddings to represent words found in the glove dataset .
the word vectors of vocabulary words are trained from a large corpus using the glove toolkit .
coreference resolution is the task of clustering a sequence of textual entity mentions into a set of maximal non-overlapping clusters , such that mentions in a cluster refer to the same discourse entity .
coreference resolution is the task of automatically grouping references to the same real-world entity in a document into a set .
a 5-gram language model was built using srilm on the target side of the corresponding training corpus .
further , we apply a 4-gram language model trained with the srilm toolkit on the target side of the training corpus .
relation extraction is a fundamental task in information extraction .
relation extraction ( re ) is the process of generating structured relation knowledge from unstructured natural language texts .
we propose a novel framework for speech disfluency detection based on integer linear programming ( ilp ) .
we present a novel two-stage technique for detecting speech disfluencies based on integer linear programming ( ilp ) .
to rerank the candidate texts , we used a 5-gram language model trained on the europarl corpus using kenlm .
after standard preprocessing of the data , we train a 3-gram language model using kenlm .
coreference resolution is the problem of identifying which mentions ( i.e. , noun phrases ) refer to which real-world entities .
coreference resolution is the task of clustering referring expressions in a text so that each resulting cluster represents an entity .
summarization is a classic text processing problem .
summarization is the process of condensing a source text into a shorter version while preserving its information content .
ambiguity is a problem in any natural language processing system .
ambiguity is the task of building up multiple alternative linguistic structures for a single input ( cite-p-13-1-8 ) .
using statistics from both standard and learner corpora , it generates plausible distractors .
focusing on prepositions , the system generates distractors based on error statistics compiled from learner corpora .
we review prior work on topic modeling for document collections and studies of social media like political blogs .
in this paper we applied several probabilistic topic models to discourse within political blogs .
we use the logistic regression classifier in the skll package , which is based on scikit-learn , optimizing for f 1 score .
for the feature-based system we used logistic regression classifier from the scikit-learn library .
zou et al learn bilingual word embeddings by designing an objective function that combines unsupervised training with bilingual constraints based on word alignments .
the bilingual embedding research origins in the word embedding learning , upon which zou et al utilize word alignments to constrain translational equivalence .
we used glove vectors trained on common crawl 840b 4 with 300 dimensions as fixed word embeddings .
we used the 300-dimensional glove word embeddings learned from 840 billion tokens in the web crawl data , as general word embeddings .
in this paper , we propose a method for slu based on generative and discriminative models .
in this paper , we propose discriminative reranking of concept annotation to jointly exploit generative and discriminative models .
we used 5-gram models , estimated using the sri language modeling toolkit with modified kneser-ney smoothing .
we also use a 4-gram language model trained using srilm with kneser-ney smoothing .
our baseline is an in-house phrase-based statistical machine translation system very similar to moses .
our translation system is an in-house phrasebased system analogous to moses .
bleu has long been shown not to correlate well with human judgment on translation quality .
bleu exhibits a high correlation with human judgments of translation quality when measuring on large sections of text .
relation extraction is a subtask of information extraction that finds various predefined semantic relations , such as location , affiliation , rival , etc. , between pairs of entities in text .
relation extraction is the task of detecting and characterizing semantic relations between entities from free text .
we conducted baseline experiments for phrasebased machine translation using the moses toolkit .
we used the moses toolkit for performing statistical machine translation .
conjecture and empirically show that entailment graphs exhibit a β€œ tree-like ” property , i . e . , that they can be reduced into a structure similar to a directed forest .
we first identify that entailment graphs exhibit a β€œ tree-like ” property and are very similar to a novel type of graph termed forest-reducible graph .
we used 300-dimensional pre-trained glove word embeddings .
we use pre-trained vectors from glove for word-level embeddings .
automatic evaluation metrics , such as the bleu score , were crucial ingredients for the advances of machine translation technology in the last decade .
during the last decade , automatic evaluation metrics have helped researchers accelerate the pace at which they improve machine translation systems .
twitter is a huge microblogging service with more than 500 million tweets per day from different locations of the world and in different languages ( cite-p-8-1-9 ) .
twitter is a widely used microblogging platform , where users post and interact with messages , β€œ tweets ” .
we began our study by consulting the 51,558 parsed sentences of the wsj corpus .
we used sections 0 to 12 of the wsj part of the penn treebank with a total of 24,618 sentences for our experiments .
narayanan et al proposed a method for sentiment classification targeting conditional sentences .
narayanan et al discuss a pos-based approach for identifying conditional types for the task of sentiment analysis .
the nnlm weights are optimized as the other feature weights using minimum error rate training .
the feature weights are tuned with minimum error-rate training to optimise the character error rate of the output .
vectorial representations derived from large current events datasets such as google news have been shown to perform well on word similarity tasks .
vectorial representations of words derived from large current events datasets have been shown to perform well on word similarity tasks .
we used the phrase-based smt in moses 5 for the translation experiments .
we used the phrasebased translation system in moses 5 as a baseline smt system .
amr is a formalism of sentence semantic structure by directed , acyclic , and rooted graphs , in which semantic relations such as predicate-argument relations and noun-noun relations are expressed .
an amr is a graph with nodes representing the concepts of the sentence and edges representing the semantic relations between them .
we use the sentiment pipeline of stanford corenlp to obtain this feature .
for part of speech tagging and dependency parsing of the text , we used the toolset from stanford corenlp .
in this paper , i have demonstrated how to build an entailment system from mrs graph alignment , combined with heuristic β€œ robust ” .
in this paper , i examine the benefits and possible disadvantages of using rich semantic representations as the basis for entailment recognition .
that initialized em , improves parsing accuracy from 90 . 2 % to 91 . 8 % on english , and from 80 . 3 % to 84 . 5 % on german .
despite its simplicity , a product of eight automatically learned grammars improves parsing accuracy from 90.2 % to 91.8 % on english , and from 80.3 % to 84.5 % on german .
gulordava and baroni consider the identification of diachronic changes in meaning from an n-gram database , but in contrast to sagi et al and cook and stevenson , do not focus on specific types of semantic change .
gulordava and baroni identify diachronic sense change in an n-gram database , but using a model that is not restricted to any particular type of semantic change .
lmbr decoding can also be used as an effective framework for multiple lattice combination .
linearised lattice minimum bayes-risk decoding can also be used as an effective framework for multiple lattice combination .
the availability of a large typology database makes it possible to take computational approaches to this area of study .
fortunately , the publication of a large typology database made it possible to take computational approaches to this area of study .
with this method , the correlation rate reached 0 . 7667 , which represent the best score among the different submitted methods involved in the arabic monolingual sts task .
lim-lig system achieves a pearsons correlation of 0.74633 , ranking 2nd among all participants in the arabic monolingual pairs sts task organized within the semeval 2017 evaluation campaign .
to this end , we design novel features based on citation network information and use them in conjunction with traditional features for keyphrase extraction .
to this end , we design novel features for keyphrase extraction based on citation context information and use them in conjunction with traditional features in a supervised probabilistic framework .
we present a novel model of transliteration mining .
we presented a novel model to automatically mine transliteration pairs .
in this paper , we propose using a constrained word lattice , which encodes input phrases and tm constraints .
in this paper , we propose a constrained word lattice to combine smt and tm at phrase-level .
twitter is a microblogging site where people express themselves and react to content in real-time .
twitter is a huge microblogging service with more than 500 million tweets per day from different locations of the world and in different languages ( cite-p-10-1-6 ) .
active learning is a machine learning approach to achieving high-accuracy with a small amount of labels by letting the learning algorithm choose instances to be labeled .
active learning is a promising way for sentiment classification to reduce the annotation cost .
in this shared task , we employ the word embeddings model to reflect paradigmatic relationships between words .
we employ a neural method , specifically the continuous bag-of-words model to learn high-quality vector representations for words .
it is much more efficient than the viterbi algorithm when dealing with a large number of labels .
viterbi decoding is , however , prohibitively slow when the label set is large , because its time complexity is quadratic in the number of labels .
chung and gildea reported that the automatic insertion of empty categories improved the accuracy of phrased-based machine translation .
chung and gildea reported their recover of empty categories improved the accuracy of machine translation both in korean and in chinese .
we use the logistic regression implementation of liblinear wrapped by the scikit-learn library .
we used the support vector machine implementation from the liblinear library on the test sets and report the results in table 4 .
we used the bleu score to evaluate the translation accuracy with and without the normalization .
we computed the translation accuracies using two metrics , bleu score , and lexical accuracy on a test set of 30 sentences .
for decoding , we used moses with the default options .
for the phrase based system , we use moses with its default settings .
in previous work , hatzivassiloglou and mckeown propose a method to identify the polarity of adjectives .
hatzivassiloglou and mckeown proposed a method for identifying word polarity of adjectives .
in this paper , we propose a novel framework , companion teaching , to include a human teacher in the dialogue policy training loop .
we propose a novel framework , companion teaching , to include a human teacher in the online dialogue policy training loop to address the cold start problem .
however , aspect extraction is a complex task that also requires fine-grained domain embeddings .
aspect extraction is a task to abstract the common properties of objects from corpora discussing them , such as reviews of products .
rozovskaya and roth further demonstrate that the models perform better when they use knowledge about error patterns of the non-native writers .
finally , rozovskaya and roth found that a classifier outperformed a language modeling approach on different data , making it unclear which approach is best .
experiments show that our model achieves state-of-the-art f-score .
results show that our model outperforms previous state-of-the-art systems .
question answering ( qa ) is a long-standing challenge in nlp , and the community has introduced several paradigms and datasets for the task over the past few years .
question answering ( qa ) is a challenging task that draws upon many aspects of nlp .
as a model learning method , we adopt the maximum entropy model learning method .
we use the mallet implementation of a maximum entropy classifier to construct our models .
among them , twitter is the most popular service by far due to its ease for real-time sharing of information .
twitter consists of a massive number of posts on a wide range of subjects , making it very interesting to extract information and sentiments from them .
semantic role labeling ( srl ) is a task of analyzing predicate-argument structures in texts .
semantic role labeling ( srl ) is a form of shallow semantic parsing whose goal is to discover the predicate-argument structure of each predicate in a given input sentence .
and consequently key phrases tend to have close semantics .
as a matter of fact , key phrases often have close semantics to title phrases .
the syntax-based statistical machine translation models use rules with hierarchical structures as translation knowledge , which can capture long-distance reorderings .
to solve this , syntax-based models take tree structures into consideration to learn translation patterns by using non-terminals for generalization .
in argument reconstruction , the induced roles largely correspond to roles defined in annotated resources .
when estimated jointly on unlabeled data , roles induced by the model mostly corresponds to roles defined in existing resources by annotators .
by integrating the two components into an existing amr parser , our parser is able to outperform state-of-the-art amr parsers .
we show integrating the two components into an existing amr parser results in consistently better performance over the state of the art on various datasets .
the language model is trained and applied with the srilm toolkit .
this means in practice that the language model was trained using the srilm toolkit .
it has been empirically shown that word embeddings could capture semantic and syntactic similarities between words .
these embeddings provide a nuanced representation of words that can capture various syntactic and semantic properties of natural language .
in phrase-based smt models , phrases are used as atomic units for translation .
in phrase-based smt , words may be grouped together to form so-called phrases .
phrase-based statistical machine translation models have achieved significant improvements in translation accuracy over the original ibm word-based model .
phrase-based statistical translation systems are currently providing excellent results in real machine translation tasks .
in a low-resource setting , we design a multitask learning approach that utilizes parallel data of a third language , called the pivot language .
we present a multi-task learning approach that jointly trains three word alignment models over disjoint bitexts of three languages : source , target and pivot .
this paper describes limsi ’ s submission to the conll 2017 ud shared task , which is focused on small treebanks , and how to improve low-resourced parsing .
this paper describes limsi ’ s submission to the conll 2017 ud shared task ( cite-p-20-3-5 ) , dedicated to parsing universal dependencies ( cite-p-20-1-10 ) on a wide array of languages .
we estimated unfiltered 5-gram language models using lmplz and loaded them with kenlm .
for all systems , we trained a 6-gram language model smoothed with modified kneser-ney smoothing using kenlm .
we follow a previous attempt to use a sequence-to-sequence learning model augmented with the attention mechanism .
following previous work , we believe that using sequential information rather than a bag-of-words model would help improve performance .