Columns: id (string, length 7–12) · sentence1 (string, length 6–1.27k) · sentence2 (string, length 6–926) · label (string, 4 classes)
train_21300
In statistical machine translation (SMT), extending the generative noisy-channel formulation (Brown et al., 1993) as a discriminative, log-linear combination of multiple models (Och, 2003) has become the state of the art.
most of the component models are still estimated by heuristics or generative training.
contrasting
train_21301
The Direct Translation Model 2 introduced by Ittycheriah and Roukos (2007) is similar in that it also trains millions of features on the training data.
the weights are estimated based on a maximum entropy model and the underlying translation paradigm differs from the standard phrase-based model.
contrasting
train_21302
For rapid experimentation, the translation model is trained on the in-domain TED portion of the bilingual data, which is also used for maximum expected BLEU training.
we use a large 4-gram LM with modified Kneser-Ney smoothing (Kneser and Ney, 1995; Chen and Goodman, 1998), trained with the SRILM toolkit (Stolcke, 2002).
contrasting
train_21303
Previous work attacked this problem by inducing new translation rules from monolingual data with a semi-supervised algorithm.
this approach does not scale very well since it is very computationally expensive to generate new translation rules for only a few thousand sentences.
contrasting
train_21304
Specifically, for each unlabeled source phrase f, we learn a mapping W_f ∈ R^(d×d) based on the translations of m of f's labeled neighbors. Compared to the global projection, we require an additional k-NN query to find the labeled neighbors for each unlabeled source phrase.
this extra computation takes only a negligible amount of time, since the number of labeled phrases on the source side is significantly smaller than the number of phrases on the target side.
contrasting
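A minimal sketch of the local projection step in the pair above, assuming the embeddings are NumPy arrays and that W_f is fit by ordinary least squares over the m labeled neighbors; the names and dimensions are illustrative, not the authors' actual implementation:

```python
import numpy as np

def fit_local_projection(neighbor_src, neighbor_tgt):
    """Fit W_f minimizing ||S @ W - T||^2, where rows of S are the
    d-dimensional embeddings of the m labeled source-side neighbors
    and rows of T are the embeddings of their translations."""
    W, _, _, _ = np.linalg.lstsq(neighbor_src, neighbor_tgt, rcond=None)
    return W  # shape (d, d)

# Illustrative usage: m = 5 labeled neighbors in a d = 4 space.
rng = np.random.default_rng(0)
S = rng.normal(size=(5, 4))   # source-side neighbor embeddings
T = rng.normal(size=(5, 4))   # embeddings of their translations
W_f = fit_local_projection(S, T)
f = rng.normal(size=4)        # embedding of the unlabeled source phrase
projected = f @ W_f           # its predicted location in target space
```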
train_21305
Even after this simplification the running time of the SLP is vastly dominated by gathering similarity statistics and by constructing the resulting graph.
once the PMI statistics are collected and the graph is constructed, actual label propagation is very fast.
contrasting
train_21306
A fast retrieval of those colliding phrases can be done via a hash table.
since the projection is random, it is very likely that true neighbors in the d-dimensional space fall into different bins after projection (false negatives; e.g., p_1 and p_4 in Figure 3(a)).
contrasting
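The pair above can be made concrete with a toy sign-random-projection (SimHash-style) bucketing sketch; this is an assumed variant of the scheme, chosen to show how true neighbors such as p_1 and p_4 can land in different bins (false negatives):

```python
import numpy as np
from collections import defaultdict

def hash_key(x, R):
    # The sign of each random projection contributes one bit of the key.
    return tuple((x @ R > 0).astype(int))

rng = np.random.default_rng(1)
d, n_bits = 8, 4
R = rng.normal(size=(d, n_bits))      # random projection directions

phrases = {f"p{i}": rng.normal(size=d) for i in range(1, 6)}
buckets = defaultdict(list)
for name, vec in phrases.items():
    buckets[hash_key(vec, R)].append(name)

# Phrases sharing a bucket are candidate (colliding) neighbors; true
# neighbors that fall into different buckets are false negatives.
for key, names in buckets.items():
    print(key, names)
```

In practice, multiple hash tables with independent projections are typically used to recover such false negatives.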
train_21307
For English, we collect unigrams and bigrams from the monolingual data instead.
the English monolingual corpus is much larger than the tuning and testing sets for Arabic and Urdu.
contrasting
train_21308
As detailed in §4.1 below, the conventional BIO scheme used in existing supersense taggers is capable of representing most of these.
it does not allow for gappy (discontinuous) uses of an expression, such as track people down.
contrasting
train_21309
Until now, sentiment analysis has been done primarily in a bottom-up way, starting with the classification of lexical items, then resolving the polarity of the sentence, and then using discourse information to improve the lexical classification.
lexical classifiers so far produce results that are too unreliable to become a basis of a discourse-level classification.
contrasting
train_21310
BayesCat induces categories, which are represented through a distribution over target concepts, and a distribution over features (i.e., individual context words).
to BCF, it does not learn types of features.
contrasting
train_21311
Previous work has found this framework effective for machine translation (MT), making it possible to train better translation models with less effort, particularly when annotators translate short phrases instead of full sentences.
previous methods for phrase-based active learning in MT fail to consider whether the selected units are coherent and easy for human translators to translate, and also have problems with selecting redundant phrases with similar content.
contrasting
train_21312
For example, there are methods to select sentences that contain phrases that are frequent in monolingual data but not in bilingual data (Eck et al., 2005), have low confidence according to the MT system, or are predicted to be poor translations by an MT quality estimation system (Ananthakrishnan et al., 2010).
while the selected sentences may contain useful phrases, they will also generally contain many already covered phrases that nonetheless cost time and money to translate.
contrasting
train_21313
Bloodgood and Callison-Burch (2010) demonstrate results of a simulation showing that this method required less than 80% of the data required by randomly selected sentences to obtain the same accuracy.
as mentioned in the introduction, the selected full sentences include many phrases already covered in the parallel data.
contrasting
train_21314
Bloodgood and Callison-Burch (2010) showed that by translating the phrases selected by this method using a crowdsourcing website, it was possible to achieve a large improvement of BLEU score, outperforming similar sentence-based methods.
as mentioned in the introduction, this method has several issues.
contrasting
train_21315
Second, maximal phrases and their occurrence counts can be enumerated efficiently by using enhanced suffix arrays (Kasai et al., 2001) in linear time with respect to document length, removing the need to set arbitrary limits on the length of strings such as n = 4 used in previous work.
it can be easily noticed that while in the previous example p_2 is included in p_3, their occurrence counts are close but not equivalent, and thus both are maximal phrases.
contrasting
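To make the maximality criterion above concrete: a phrase is maximal if every one-word extension of it occurs strictly less often. The sketch below is a brute-force illustration with an arbitrary length cap; the enumeration in the pair above instead uses enhanced suffix arrays to run in linear time without such a cap:

```python
from collections import Counter

def maximal_phrases(tokens, max_n=6):
    # Count all n-grams up to max_n (the cap exists only to keep this
    # toy version finite; the suffix-array method needs no such limit).
    counts = Counter(
        tuple(tokens[i:i + n])
        for n in range(1, max_n + 1)
        for i in range(len(tokens) - n + 1)
    )
    maximal = []
    for phrase, c in counts.items():
        # Maximal if every one-word extension occurs strictly less often.
        extensions = [p for p in counts
                      if len(p) == len(phrase) + 1
                      and (p[:-1] == phrase or p[1:] == phrase)]
        if all(counts[p] < c for p in extensions):
            maximal.append((phrase, c))
    return maximal

text = "the big dog saw the big dog and the big cat".split()
for phrase, c in maximal_phrases(text):
    print(" ".join(phrase), c)
```

Here "big" (count 3) is not maximal because its extension "the big" also occurs 3 times, while "the big dog" (2) and "the big cat" (1) both are.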
train_21316
We used the same dataset as the English-Japanese translation task and the same tools in the simulation experiment (Section 5).
for training target language models, we interpolated one model trained with the base data and a second trained with the collected data using SRILM (Stolcke, 2002), because the hand-made data set was too small to train a full language model using only this data.
contrasting
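Linear LM interpolation as mentioned above is a weighted mixture of the two models' probabilities; a toy sketch with unigram tables standing in for the actual n-gram models (the weight lam is an assumption and would normally be tuned on held-out data):

```python
def interpolate(p_base, p_collected, lam=0.5):
    """Mix two word-probability tables: P(w) = lam*P1(w) + (1-lam)*P2(w)."""
    vocab = set(p_base) | set(p_collected)
    return {w: lam * p_base.get(w, 0.0) + (1 - lam) * p_collected.get(w, 0.0)
            for w in vocab}

# Toy unigram models standing in for the base and collected-data LMs.
p_base = {"the": 0.5, "cat": 0.3, "sat": 0.2}
p_collected = {"the": 0.4, "dog": 0.4, "sat": 0.2}
p_mixed = interpolate(p_base, p_collected, lam=0.7)
print(sorted(p_mixed.items()))
```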
train_21317
The confidence level of single words in the proposed method is lower than in the baseline method, likely because the baseline selected a smaller number of single words, and those selected were less likely to be technical terms. [Table 6: Average confidence level of manual translation corresponding to phrase length]
we can confirm that the confidence level for longer phrases in the baseline method decreases drastically, while it remains stably high in our method, confirming the effectiveness of selecting syntactically coherent phrases.
contrasting
train_21318
We suspect that this is because this headline contains three keywords "syria", "civil", "war", and also the key date information: the model was trained partly on the Libya war timeline, and therefore many features and parameters were activated in the matrix factorization framework to give a high recommendation in this testing scenario.
when evaluating the output of the joint text and vision system, we see that this error is eliminated: the selected sentence on 2011-12-02 is "Eleven killed after weekly prayers in Syria on eve of Arab League deadline".
contrasting
train_21319
for an author to annotate them.
the seed nouns are noisier due to noise in LDA.
contrasting
train_21320
If a large amount of target domain data is available, training everything from scratch (scrALL) achieves very good performance and adaptation is not necessary.
if only a limited amount of in-domain data is available, efficient adaptation is critical (DT-10% & ML-10% > scr-10%).
contrasting
train_21321
We find that conversation flow features obtain the best accuracy among all listed feature types (Flow: 63%; Flow*: 65%), performing significantly better than a 50% random baseline (binomial test p < 0.05), and comparable to audience features (60%).
the length and BOW baselines do not perform better than chance.
contrasting
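The significance claim above is a binomial test of an observed accuracy against the 50% chance level; a sketch of that computation with an illustrative test-set size, not the paper's actual count (requires SciPy ≥ 1.7):

```python
from scipy.stats import binomtest

n_examples = 100           # hypothetical number of test instances
n_correct = 63             # 63% accuracy for the Flow features
result = binomtest(n_correct, n_examples, p=0.5, alternative="greater")
print(f"p-value vs. 50% random baseline: {result.pvalue:.4f}")
```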
train_21322
SRL captures important semantic representations for actions associated with verbs, which have been shown to be beneficial for a variety of applications such as information extraction (Emanuele et al., 2013) and question answering (Shen and Lapata, 2007).
the traditional SRL is not targeted at representing verb semantics that are grounded in the physical world, such that artificial agents can truly understand the ongoing activities and (learn to) perform the specified actions.
contrasting
train_21323
The same situation occurs for the location role, as most of the locations are near the sink when the verb is wash, and near the cutting board for verbs like cut, etc.
for the patient role, there is a large difference between our approach and baseline approaches as there is a larger variation of different types of objects that can participate in the role for a given verb.
contrasting
train_21324
Yet, several experiments indicated that perceptual properties of concepts, such as concreteness and imageability, are important features for metaphor identification (Turney et al., 2011; Neuman et al., 2013; Gandy et al., 2013; Strzalkowski et al., 2013; Tsvetkov et al., 2014).
all of these methods used manually annotated linguistic resources to determine these properties (such as the MRC concreteness database (Wilson, 1988)).
contrasting
train_21325
This suggests that linguistic word embeddings already successfully capture domain and compositional information necessary for metaphor identification.
the visual PHRASECOS1 model, when applied in isolation, tends to outperform the visual WORDCOS model.
contrasting
train_21326
We thus expect that using video data along with the images as input to the acquisition of visual embeddings is likely to improve metaphor identification performance for verbal metaphors.
we leave the investigation of this issue for future work.
contrasting
train_21327
The performance of our method on the same dataset is a little lower than that of Tsvetkov et al. (2014).
we do not use any hand-annotated resources and acquire linguistic, domain, and perceptual information in a data-driven way.
contrasting
train_21328
Some notable neural network based approaches here include the works of (Klementiev et al., 2012; Zou et al., 2013; Mikolov et al., 2013; Hermann and Blunsom, 2014b; Hermann and Blunsom, 2014a; Chandar et al., 2014; Soyer et al., 2015; Gouws et al., 2015).
except for (Hermann and Blunsom, 2014a; Hermann and Blunsom, 2014b), none of these other works handle the case when parallel data is not available between all languages.
contrasting
train_21329
None of these captions actually correspond to the image as per our parallel image caption test set.
clearly the first, third, and fourth captions are semantically very relevant to this image, as all of them talk about baseball.
contrasting
train_21330
We also observed that adding visual features to textual features improves performance in some cases: multimodal features perform better than textual features alone both for object labels (CNN+O) and for image descriptions (CNN+C).
adding CNN features to textual features based on object labels and descriptions together (CNN+O+C) resulted in a small decrease in performance.
contrasting
train_21331
Common sense knowledge has been predominantly created directly from human input or extracted from text (Lenat et al., 1990; Liu and Singh, 2004; Carlson et al., 2010).
our work is focused on visual common sense extracted from annotated images. [Table residue: the angle between the centroid of a and the centroid of b lies between 315° and 45°, or 135° and 225°]
contrasting
train_21332
It may be possible to write GPU-specific code that maintains the entire parse state on GPU, but we are not aware of any such implementations.
our supertagger only uses matrix operations, and does not take any parse state as input, meaning it is straightforward to run on a GPU.
contrasting
train_21333
These models are 0.3 and 1.5 F1 more accurate than the C&C baseline respectively, which is well within the margin of improvement obtained by our model.
to standard parsing algorithms, the efficiency of our model depends directly on the accuracy of the supertagger in guiding the search.
contrasting
train_21334
Recently, bi-LSTMs have achieved high accuracies in a simpler sequence labeling task: part-of-speech tagging (Wang et al., 2015; Ling et al., 2015) on the Penn treebank, with small improvements over local models.
we achieve strong accuracies compared to (Wang et al., 2015) using a feed-forward neural network model trained on local context, showing that this task does not require bi-LSTMs.
contrasting
train_21335
It uses word segmentation first and feeds the segmented words to the subsequent task(s); this setup is named Word Unit.
this method suffers from the error propagation problem since an incorrect word segmentation would cause an error in the subsequent task.
contrasting
train_21336
If an error results in a word crossing the boundary of semantic slots, it will definitely lead to an error in SLU semantic slot filling.
when supplying the automatic 'BIES' ngrams from CWS to SLU semantic slot filling (As Features), we observe a nice gain in both cases, 94.41% for CTB6 and 94.13% for PKU.
contrasting
train_21337
For the sentence, the baseline system extracts '湖南财' as a location name.
the word segmentation separates the words '湖南' (Hunan) and '财政' (Finance), which reduces the probability score of '湖南财' being a slot value because it crosses word boundaries.
contrasting
train_21338
When CWS accuracy is high on the training data, the NER model trained with such data puts more weight on word segmentation features rather than character features.
during testing, the performance of CWS drops, resulting in more word segmentation errors, with a high chance of propagating to NER errors; even worse, a lot of these CWS errors are around NERs, since a lot of NERs are OOVs and thus are challenging to segment correctly.
contrasting
train_21339
However, most previous work relied on a substantial amount of resources such as language-specific rules, basic tools such as part-of-speech taggers, a large amount of labeled data, or a huge amount of Web ngram data, which are usually unavailable for low-resource ILs.
in this paper we put the name tagging task in a new emergent setting where we need to process a surprise IL within a very short time using very few resources.
contrasting
train_21340
Coupled with embedding approaches (Mikolov et al., 2013; Le and Mikolov, 2014; Pennington et al., 2014), neural networks can find the optimal feature combinations using techniques such as random weight initialization and back-propagation, and have established the new state-of-the-art for several tasks (Socher et al., 2013; Devlin et al., 2014; Yu et al., 2014).
neural networks are not as good at optimizing combinations of sparse features, which are still the most dominant factors in natural language processing.
contrasting
train_21341
Each high dimensional feature in F is induced for making a classification between two labels, y and ŷ, but it may or may not be helpful for distinguishing labels other than those two.
our algorithm can be modified to learn the weights of the induced features only for their relevant labels by adding the label information to F, which would change line 13 in Algorithm 1 accordingly. Introducing features targeting specific label pairs potentially confuses the classifier, especially when they are trained with the low dimensional features targeting all labels.
contrasting
train_21342
One could collect large gazetteers from knowledge graphs and phrase embeddings to obtain high gazetteer coverage.
large gazetteers cause a side-effect called "feature under-training", where the gazetteer features overwhelm the context features.
contrasting
train_21343
So far we have described a model for learning structures for a single event.
the inference of the event types for individual events may depend on other events that are mentioned in the document.
contrasting
train_21344
As pipeline approaches suffer from error propagation, researchers have proposed methods for joint extraction of event triggers and arguments, using either structured perceptron (Li et al., 2013), Markov Logic (Poon and Vanderwende, 2010), or dependency parsing algorithms (McClosky et al., 2011).
existing joint models largely rely on heuristic search to aggressively shrink the search space.
contrasting
train_21345
Our work is similar to Riedel and McCallum (2011).
there are two main differences: first, our model extracts both event mentions and entity mentions; second, it performs joint inference across sentence boundaries.
contrasting
train_21346
In the pipelined approach, it is often simple for the argument classifiers to realize that cameraman is the Target argument of the Die event due to the proximity between cameraman and died in the sentence.
as cameraman is far away from fired, the argument classifiers in the pipelined approach might fail to recognize cameraman as the Target argument for the event Attack with their local features.
contrasting
train_21347
Recurrent Neural Networks (RNNs) have shown impressive performances on many sequential modeling tasks due to their ability to encode unbounded input histories.
training simple RNNs is difficult because of the vanishing and exploding gradient problems (Bengio et al., 1994; Pascanu et al., 2013).
contrasting
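A standard remedy for the exploding-gradient side of the problem above is gradient norm clipping (Pascanu et al., 2013); a minimal NumPy sketch of the rescaling step, with an illustrative threshold:

```python
import numpy as np

def clip_gradient(grad, max_norm=5.0):
    """Rescale the gradient if its L2 norm exceeds max_norm."""
    norm = np.linalg.norm(grad)
    if norm > max_norm:
        grad = grad * (max_norm / norm)
    return grad

g = np.array([30.0, 40.0])   # norm 50, far above the threshold
print(clip_gradient(g))      # -> [3. 4.], norm rescaled to 5
```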
train_21348
Again, in many cases attention weights concentrate around the last word (bottom row).
we observe that many long distance words also receive noticeable attention mass.
contrasting
train_21349
Because they are discriminatively trained, these methods can learn representations that yield very accurate predictive models (e.g., …).
in comparison with the probabilistic graphical models that were previously the dominant machine learning approach for NLP, neural architectures lack flexibility.
contrasting
train_21350
• If the model is trained to maximize the joint likelihood of the discourse relations and the text, it is possible to marginalize over discourse relations at test time, outperforming language models that do not account for discourse structure.
to recent work on continuous latent variables in recurrent neural networks (Chung et al., 2015), which require complex variational autoencoders to represent uncertainty over the latent variables, our model is simple to implement and train, requiring only minimal modifications to existing recurrent neural network architectures that are implemented in commonly-used toolkits such as Theano, Torch, and CNN.
contrasting
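Since the discourse relations above are discrete, test-time marginalization is an explicit sum, P(text) = Σ_z P(z) P(text | z); a toy log-space sketch with placeholder distributions (not the paper's RNN):

```python
import numpy as np

def log_marginal(log_prior, log_likelihood):
    """log P(text) = logsumexp_z [ log P(z) + log P(text | z) ]."""
    scores = log_prior + log_likelihood
    m = scores.max()  # stabilize the exponentiation
    return m + np.log(np.exp(scores - m).sum())

# Toy example: 4 discourse relations.
log_prior = np.log(np.array([0.4, 0.3, 0.2, 0.1]))          # P(z)
log_likelihood = np.log(np.array([1e-3, 5e-3, 2e-3, 1e-4])) # P(text|z)
print(log_marginal(log_prior, log_likelihood))
```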
train_21351
Also, these prior models employ continuous latent variables, requiring complex inference techniques such as variational autoencoders (Kingma and Welling, 2014; Burda et al., 2016; Chung et al., 2015).
the discrete latent variables in our model are easy to sum and maximize over.
contrasting
train_21352
They can vary in terms of the number of words that contain a given phonestheme, their frequencies, the strength of their association with the core meaning of a phonestheme (for example measured as an average of human ratings for all the words that comprise the given phonesthemic cluster) and the regularity of that association (what proportion of words in the whole cluster are highly related to the predicted meaning).
psycholinguistic data is to some extent ambiguous on how these features of phonesthemes affect their productivity, learnability, and their effect on language processing, which could partly be due to the methods being employed.
contrasting
train_21353
Both studies found support for a sizable proportion of phonetic units tested.
it could still be questioned whether the comparison was sufficiently strict, given that sets of random words which do not overlap in form have an a priori lower chance of being semantically related than sets of words that share a phonestheme.
contrasting
train_21354
The fact that we obtain significant results indicates that our generated labels are meaningful not only according to automatic evaluation measures but also in terms of what speakers can perceive.
the pattern of which phonesthemic labels receive better human judgments is somewhat less clear.
contrasting
train_21355
One possible way to identify this relevant nugget is to apply sentence compression techniques (McDonald, 2006; Siddharthan, 2011; Štajner et al., 2013; Filippova et al., 2015; Filippova and Strube, 2008; Narayan and Gardent, 2014; Cohn and Lapata, 2009).
all these methods have been developed for standard texts with complete sentences, and it is not clear whether they are suited to dictionary definitions.
contrasting
train_21356
For example, (Xie et al., 2013) has proposed to use semantic frame parsers to generalize from sentences to scenarios to detect the roles of specific companies (positive or negative), where support vector machines with tree kernels are used as predictive models.
(Ding et al., 2014) has proposed to use various lexical and syntactic constraints to extract event features for stock forecasting, where they have investigated both linear classifiers and deep neural networks as predictive models.
contrasting
train_21357
NMT-based systems thus may help ameliorate the lack of large error-annotated learner corpora for GEC.
NMT models typically limit vocabulary size on both source and target sides due to the complexity of training (Sutskever et al., 2014; Luong et al., 2015; Jean et al., 2015).
contrasting
train_21358
Our work is also related to research on reference resolution in dialogue systems, such as Kennington and Schlangen (2015).
unlike Kennington and Schlangen, who explicitly train an object recognizer associated with each word of interest, with at least 65 labeled positive training examples per word, our model does not have any comparable form of supervision and our data exhibits much lower frequencies of object and word (co-)occurrence.
contrasting
train_21359
Using maximum likelihood or minimum cross-entropy assumes that the model distribution is peaked.
especially in natural language processing, where ambiguity is ubiquitous, this assumption does not hold.
contrasting
train_21360
Such work typically trains semi-supervised classifiers to determine events of interest due to the limited amount of annotated data.
a few studies are devoted to open-domain event extraction (Benson et al., 2011; Ritter et al., 2012; Petrović et al., 2010; Diao et al., 2012; Chierichetti et al., 2014; Li et al., 2014; Qiu and Zhang, 2014), in which an event category is not predefined, and clustering models are applied to automatically induce event types.
contrasting
train_21361
2015, using a set of seed events and large raw tweets for ER.
we take a fully automated approach to find seed events, since manual listing of seed DDoS events can be a costly and time-consuming process, and requires a certain level of expert knowledge.
contrasting
train_21362
From the curves we can see that the sparse representation is comparatively less efficient in picking out negative examples, since at a lower recall the model does not gain a higher precision.
LSTM-based representation demonstrates a better trade-off between recall and precision.
contrasting
train_21363
They used the techniques for traditional SMT models, under the IBM framework (Watanabe and Sumita, 2002) or the feature-driven linear models (Finch and Sumita, 2009; Zhang et al., 2013).
the target-bidirectional techniques we have developed for the unified neural network framework, target a pressing need directly motivated by a fundamental issue suffered by recurrent neural networks.
contrasting
train_21364
The remaining 14 games end before they should because the judge bot breaks down.
the players do reveal their own roles after the game ends.
contrasting
train_21365
We also considered the number of verbs, based on our intuition that heavy use of verbs can be associated with motion.
we don't expect the number of motion words to be as important in our domain.
contrasting
train_21366
A measure of language complexity is the type-token ratio (TTR).
hesitations (um, er, uh), hedges (sort of, kind of, almost), and polite forms are markers of powerless language (Sparks and Areni, 2008).
contrasting
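For concreteness, the type-token ratio mentioned above is simply the number of distinct word types divided by the total number of tokens:

```python
def type_token_ratio(tokens):
    """TTR = number of distinct types / number of tokens."""
    return len(set(tokens)) / len(tokens)

utterance = "i think i saw him i really do".split()
print(type_token_ratio(utterance))  # 6 types / 8 tokens = 0.75
```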
train_21367
We did not consider word sequence, player interaction, individual characteristics of players, or non-literal meaning.
the data set is too small for any more complex models.
contrasting
train_21368
The most extensively developed resource for English is the MRC Psycholinguistic Database (Section 2).
it is far from complete, most likely due to the inherent cost of manually entering such properties.
contrasting
train_21369
Meanwhile, some improved linear analysis methods were proposed for encoding documents with reliable similarity information (Yih et al., 2011; Chang et al., 2013).
all those works for document representation paid little attention to the variability of intra-topic documents.
contrasting
train_21370
For the generative story of our new model, we again begin by generating pairs (ℓ_i, c_i).
instead of typesetting c_i, we generate a distinct glyph character g_i as its replacement, according to a distribution P_GLYPH. Orthographic substitution patterns are language-specific, and thus P_GLYPH is as well.
contrasting
train_21371
As a second baseline, we compare to our previous work, which improved Ocular's diplomatic transcription accuracy by introducing orthographic variation directly into the LM with hand-constructed language-specific orthographic rules to rewrite the LM training data prior to n-gram estimation (Garrette et al., 2015).
this rule-based preprocessing approach is inadequate in many ways.
contrasting
train_21372
Traditionally, a bag-of-words representation of the surrounding context has shown reasonably good performance.
the information contained in the bag-of-words vector is very sensitive to the context window size.
contrasting
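A small sketch of the window-size sensitivity discussed in the pair above: the bag-of-words context vector of a target word changes substantially with the window parameter (the sentence and window values are illustrative):

```python
from collections import Counter

def bow_context(tokens, index, window):
    """Bag of words from a symmetric window around tokens[index]."""
    left = max(0, index - window)
    right = min(len(tokens), index + window + 1)
    context = tokens[left:index] + tokens[index + 1:right]
    return Counter(context)

sent = "patient developed severe rash after starting the drug".split()
i = sent.index("rash")
print(bow_context(sent, i, window=2))  # narrow context
print(bow_context(sent, i, window=5))  # wide context, very different vector
```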
train_21373
Usually a way to tackle this problem is to try different context window sizes and use the one that gives the highest validation performance.
this method cannot be easily applied to our task, because different medical events like medication, diagnosis or adverse drug reaction require different context window sizes.
contrasting
train_21374
In this sentence, the true labels are Adverse Drug Event (ADE) for "bronchiolitis obliterans" and Drugname for "ABVD chemo".
the ADE "bronchiolitis obliterans" could be mislabeled as just another disease or symptom if the entire sentence is not taken into context.
contrasting
train_21375
It is possible that with a larger dataset, LSTM might perform comparable to or better than GRU.
our experiments with reducing the hidden layer size of the LSTM-document model to control for the number of trainable parameters did not produce any significant improvements.
contrasting
train_21376
Moreover, Figure 4 seems to indicate that there is not much difference between the performances of LSTM and GRU with different data sizes.
it is clearly surprising that RNN models with a larger number of parameters can still perform better than CRF models on smaller dataset sizes.
contrasting
train_21377
RNNs are excellent in extracting relevant patterns from sequence data.
they do not explicitly enforce constraints or dependencies over the output labels.
contrasting
train_21378
Given the bag of words {John, loves, Mary}, the action SHIFT-John-NNP is still different from the action SHIFT-Mary-NNP.
the action component of the features becomes SHIFT only, and the words John/Mary must be used as lookahead configuration features for their disambiguation.
contrasting
train_21379
State-of-the-art word embeddings, which are often trained on bag-of-words (BOW) contexts, provide a high quality representation of aspects of the semantics of nouns.
their quality decreases substantially for the task of verb similarity prediction.
contrasting
train_21380
Nonetheless, the human judgment scores in these datasets reflect relatedness between words.
the recent SimLex-999 dataset (Hill et al., 2014) contains word similarity scores for nouns (666 pairs), verbs (222 pairs), and adjectives (111 pairs).
contrasting
train_21381
Even with the help of a GPU, BLSTM-RNN is still slower than the other methods.
it should be noted that the speed of our approach is acceptable compared with previous neural network language model based methods, including (Bengio et al., 2003; Mikolov et al., 2010; Mnih and Hinton, 2007), as our model uses a much simpler output layer which only has two nodes, avoiding the time-consuming computation of the big softmax output layer in a language model.
contrasting
train_21382
We achieve an F1 score of 73.4 with them.
the difference to using position embeddings with entity flags is not statistically significant.
contrasting
train_21383
Multitask learning has also been used for other classification and regression tasks in language processing, mostly for domain adaptation (Daume III, 2007; Finkel and Manning, 2009), but also more recently for tasks such as multi-emotion analysis (Beck et al., 2014), where each emotion explaining a text is defined as a task.
in all previous work the focus has been on addressing task variance coupled with data scarcity, which makes it different from the work we describe in this paper.
contrasting
train_21384
Previous work in the area has proposed a number of methods for identifying and extracting task knowledge from search query sessions (Mehrotra and Yilmaz, 2015b; Wang et al., 2013; Lucchese et al., 2011; Verma and Yilmaz, 2014; Mehrotra and Yilmaz, 2015a).
while some tasks are fairly trivial and single-shot (e.g.
contrasting
train_21385
(Jones and Klinkner, 2008) was the first work to consider the notion that there may be multiple sub-tasks associated with a user's informational needs.
they fall short of proposing a method to identify a task from queries.
contrasting
train_21386
The probability of a question given a label under this model is P(w, t | L; θ). Theoretically, we could learn both of the production rule distributions that compose P(w, t | L; θ) in this formulation.
in practice, the large number of nonterminals makes it challenging to learn a conditional probability table for the binary production rules.
contrasting
train_21387
phenomenon found that reducing the length of the templates made it even more difficult for these models to find correct parses for long questions.
to these baselines, our models do not suffer from either of these problems because the logical form derivation grammar restricts the search to correct derivations and our generative objective prefers frequently-occurring lexicon entries.
contrasting
train_21388
ASR (Adda-Decker and Adda, 2000), MT (Koehn and Knight, 2003) or IR (Monz and de Rijke, 2001), and is generally perceived as a crucial component for the processing of respective languages.
most existing systems rely on dictionaries or are trained in a supervised fashion.
contrasting
train_21389
The suffix-prefix-based approach results in Bundes-finanz-ministerium and the prefix-suffix method in Bund-esfinanz-ministerium.
for some words, the prefix-suffix method generates the correct compound split, e.g.
contrasting
train_21390
using the similar candidate units first and only applying the other candidate sets if no split was found.
preliminary experiments revealed that it was always beneficial to generate splits based on all three candidate sets and to use the geometric mean scoring as outlined above to select the best split as the decomposition of a word.
contrasting
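A sketch of geometric-mean split selection as described above, assuming each candidate is scored by the geometric mean of its parts' corpus counts (in the style of Koehn and Knight, 2003); the counts and candidates here are made up:

```python
def geometric_mean_score(parts, counts):
    """Geometric mean of the corpus counts of the split's parts."""
    product = 1.0
    for part in parts:
        product *= counts.get(part, 0)
    return product ** (1.0 / len(parts))

counts = {"bund": 80, "bundes": 120, "finanz": 300,
          "ministerium": 250, "esfinanz": 1}
candidates = [["bundes", "finanz", "ministerium"],
              ["bund", "esfinanz", "ministerium"]]
best = max(candidates, key=lambda p: geometric_mean_score(p, counts))
print("-".join(best))  # -> bundes-finanz-ministerium
```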
train_21391
This is caused by many short split candidates that are not detected due to the ml parameter.
our method still beats the KK baseline significantly.
contrasting
train_21392
Note that Dijkstra's algorithm is exact; no beam search is required as in some neural sequence models.
ŷ may not be the most probable string; extracting that from a weighted FST is NP-hard (Casacuberta and de la Higuera, 1999).
contrasting
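A minimal sketch of exact best-path search with Dijkstra's algorithm over a weighted automaton, with arc costs as negative log-probabilities (so the cheapest path is the highest-scoring one); this illustrates the search in general, not the paper's actual FST:

```python
import heapq

def best_path(arcs, start, final):
    """Dijkstra over arcs: state -> [(cost, next_state, label)].
    Costs are negative log-probabilities, hence non-negative."""
    heap = [(0.0, start, "")]
    seen = set()
    while heap:
        cost, state, string = heapq.heappop(heap)
        if state == final:
            return cost, string
        if state in seen:
            continue
        seen.add(state)
        for arc_cost, nxt, label in arcs.get(state, []):
            if nxt not in seen:
                heapq.heappush(heap, (cost + arc_cost, nxt, string + label))
    return None

# Toy automaton: 0 -> 1 -> 2 with two competing arcs at each step.
arcs = {0: [(0.2, 1, "a"), (0.9, 1, "b")],
        1: [(0.1, 2, "c"), (0.3, 2, "d")]}
print(best_path(arcs, start=0, final=2))  # -> cheapest path, string 'ac'
```

As the pair above notes, this returns the single best path ŷ; the most probable string would require summing over all paths yielding it, which is NP-hard.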
train_21393
As an alternative, machine learning models have been proposed to generate inflections from root forms as string transduction (Yarowsky and Wicentowski, 2000; Wicentowski, 2004; Dreyer and Eisner, 2011; Durrett and DeNero, 2013; Ahlberg et al., 2014; Hulden, 2014; Ahlberg et al., 2015; Nicolai et al., 2015).
these impose either assumptions about the set of possible morphological processes [Figure: inflection generation, kalb → kälber, case=nominative, number=plural] (e.g.
contrasting
train_21394
Our model predicts the sequence of characters in the inflected string given the characters in the root word (input).
our problem differs from the above setting in two ways: (1) the input and output character sequences are mostly similar except for the inflections; (2) the input and output character sequences have different semantics.
contrasting
train_21395
In an error analysis, it turned out that FF2010 (i.e., SMOR) cannot process 2% of the gold samples.
on a common processable test set, we still find our system to outperform FF2010 significantly, which indicates that the difference in performance is not just a matter of coverage.
contrasting
train_21396
Pun generation is much more complicated than target recovery as reflected in the complexity of proposed systems for humor generation.
improved understanding of puns by way of progress in the target recovery task should also lead to corresponding improvements in the task of pun generation.
contrasting
train_21397
For example, perturbed and perturbs are inflections of the verb perturb.
derivation modifies words more drastically, often changing the meaning or POS.
contrasting
train_21398
The standard measure for the supervised task is border F1, which measures how often the segmentation boundaries posited by the model are correct.
this measure assumes that the concatenation of the segments is identical to the input string (i.e., surface segmentation) and is thus not applicable to canonical segmentation.
contrasting
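Border F1 as described above can be computed from the internal boundary positions of the two segmentations; a sketch that also shows why it presupposes surface segmentation (boundary offsets are only comparable when the predicted and gold strings are identical):

```python
def boundaries(segments):
    """Internal boundary offsets of a segmentation,
    e.g. ['un', 'happi', 'ness'] -> {2, 7}."""
    positions, offset = set(), 0
    for seg in segments[:-1]:
        offset += len(seg)
        positions.add(offset)
    return positions

def border_f1(pred, gold):
    p, g = boundaries(pred), boundaries(gold)
    if not p or not g:
        return 1.0 if p == g else 0.0
    precision = len(p & g) / len(p)
    recall = len(p & g) / len(g)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Pred boundaries {2, 8} vs. gold {2, 7} -> P = R = 0.5, F1 = 0.5.
print(border_f1(["un", "happin", "ess"], ["un", "happi", "ness"]))
```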
train_21399
However, this measure assumes that the concatenation of the segments is identical to the input string (i.e., surface segmentation) and is thus not applicable to canonical segmentation.
the Morpho Challenge competition (Kurimo et al., 2010) uses a measure that samples a large number of word pairs from a linguistic gold standard.
contrasting