id: stringlengths (7-12)
sentence1: stringlengths (6-1.27k)
sentence2: stringlengths (6-926)
label: stringclasses (4 values)
train_97400
Furthermore, a news article is considered large if it has more than 300 tokens, which corresponds to the average number of words per article in our training set.
we decide to use as metrics ROUGE-1, ROUGE-2, and ROUGE-SU.
neutral
train_97401
A potential confound for using this analysis as a proxy for the quality of the alignment model is that the ASR transcript is generally an ungrammatical sentence fragment as opposed to the grammatical recipe steps, which is likely to reduce the raters' approval of ASR captions in the case when both accurately describe the scene.
we cannot use keyword spotting if the goal is to align instructional text to videos.
neutral
train_97402
For each recipe, we apply a suite of in-house NLP tools, similar to the Stanford Core NLP pipeline.
there are several pieces of related work.
neutral
train_97403
To partially combat this problem, we used computer vision to refine candidate video segments as follows.
we first align the instructional steps to the speech signal using an HMM, and then refine this alignment by using a state of the art computer vision system.
neutral
train_97404
There is by now a large literature on multimodal distributional semantic models.
we thank Adam Liska, Tomas Mikolov, the reviewers and the NIPS 2014 Learning Semantics audience.
neutral
train_97405
We iteratively scan through our dataset, one protocol and video pair (x_i, y_i) at a time.
the highest weighted features include: (write, pen), (aspirate, pipette), which agree with our intuition.
neutral
train_97406
The feature weights learned by LSSVM and its variants were smaller than that for LSP (due to regularization).
the feature weights learned by LSSVM and its variants were smaller than that for LSP (due to regularization).
neutral
train_97407
By constraining the alignment to the forced alignment, we avoid aggressive updates, which may have helped LSP-C and LSSVM-C to learn better alignments.
for LCRf, we sum over all the latent variables for estimating the expectations.
neutral
train_97408
The elements of U may be overlapping.
our goal, then, is to set θ so that the predicted label distribution matches p̂_j, for all j.
neutral
train_97409
First, we consider c and m as a single sentence and compute a single bag-of-words representation b_cm ∈ R^V .
for the translation probabilities, we built a very large phrase table of 160.7 million entries by first filtering out Twitterisms (e.g., long sequences of vowels, hashtags), and then selecting candidate phrase pairs using Fisher's exact test (Ritter et al., 2011).
neutral
train_97410
Table 6 provides examples of system output.
• Portmanteau building occurs at the phoneme level.
neutral
train_97411
Both systems are rule-based, rather than data-driven, and do not train or test their systems with real-world portmanteaux.
we also plan to research other applications for multi-input/output models.
neutral
train_97412
Most disagreements between annotators are confusions between 'partial' and 'good partial' matches.
applying our method on a large dataset yields high quality sentence alignments that would benefit data-driven learning in text simplification.
neutral
train_97413
In this paper, we propose to disambiguate NEs using a Personalized PageRank (PPR)-based random walk algorithm.
the coherence of the node e to the graph G quantifies how well node e "fits" into this graph.
neutral
train_97414
We represent possible EC positions using the word embeddings of their contexts and then map them to a low dimension space for EC detection.
the problem of EC detection can be formulated as a classification problem: for each "head word, following word" pair, what is the type of the EC?
neutral
train_97415
Incremental RR has also been studied in a number of papers, including a framework for fast incremental interpretation (Schuler et al., 2009), a Bayesian filtering model approach that was sensitive to disfluencies, a model that used Markov Logic Networks to resolve objects on a screen, a model of RR and incremental feedback (Traum et al., 2012), and an approach that used a semantic representation to refer to objects (Peldszus et al., 2012; Kennington et al., 2014).
(2011) used 14 task-specific features, three of which they found to be the most informative in their model.
neutral
train_97416
We modeled the structured data extraction task as text categorization and NER tasks and applied machine learning (SVM) on the automatically generated training datasets.
a listing can have one or more of the following property types: retail, office, industrial, land, multi-family.
neutral
train_97417
Lastly, it should be noted that an overall system performance baseline is one that measures the average performance of data entry staff in commercial real estate listing services.
the construction of listing data (for comparison with manually entered data) resulted in a strict performance measure.
neutral
train_97418
Figure 6 shows that working with the smaller, context-specific sets dramatically decreases the model's ability to recover deleted segments.
in all cases, the model recovers far more underlying forms than it finds nonzero weights.
neutral
train_97419
In addition, if u ∈ S and some phonological alternation p ∈ P maps u to a surface form s ∈ p(u) ⊆ S, then (s, u) ∈ X .
we used the Buckeye underlying forms as our underlying forms.
neutral
train_97420
We present results suggesting that these constraints simplify the search problem that the learner faces.
we found that the choice of L_1 versus L_2 regression makes little difference, and the model is insensitive to the value of the regulariser constant λ (we set λ = 1 in the experiments below).
neutral
train_97421
Hence, adding continuous-space representations of words can provide valuable information to the classifier and the classifier can learn better discriminative criteria based on such information.
the manually annotated instances (CC) include samples with this tag and therefore IMS + CC is able to associate a target word with this sense tag.
neutral
train_97422
The alternations of each feature value can be straightforwardly interpreted as the birth and death (Le Quesne, 1974) of a lexical item.
then the posterior hyperparameters are α_n = α + n/2 and the posterior predictive distribution is Student's t-distribution (Murphy, 2007), where M_hist is a collection of α, β and a history of previously observed differences.
neutral
train_97423
(2008) proposed a generative model for hierarchical clustering, which straightforwardly explains evolutionary history.
the acceptance probability is where children(h ′ ) is the set of the target node's children.
neutral
train_97424
We also conducted experiments on the test set by replacing the parsed graph with gold relation labels or/and gold concept labels. (A script to create the train/dev/test partitions is available at http://goo.gl/vA32iI; specifically we used CoreNLP toolkit v3.3.1 and parser model wsjPCFG.ser.gz trained on the WSJ treebank sections 02-21.)
since this action will be applied to every node which is kept in the final parsed graph, concept labeling could be done simultaneously through this action.
neutral
train_97425
See Gritzmann and Sturmfels (1992). Each vertex h_i corresponds to a single index vector i, which itself corresponds to a single set of selected hypotheses.
the corresponding decision boundaries in their normal fans have also been drawn with dashed lines.
neutral
train_97426
As already mentioned above, the size of the training corpus strongly affects the results.
following the idea presented in (Ballesteros et al., 2014b), a separate SVM-classifier is defined for the mapping of each linguistic category.
neutral
train_97427
In particular, as one ages they think less about the immediate present and more about the future (Friedman, 2000;Nurmi, 2005;Steinberg et al., 2009), and females tend to think a bit more about the future than males (Keough et al., 1999).
features (available here: wwbp.org/public_data/happierfuntokenizing.zip) are encoded simply as binary indicators for whether the ngram appears in the message.
neutral
train_97428
Other studies have established consistent links between temporal orientation and demographic characteristics.
unlike other areas of natural language processing where stochastic techniques dominate, rule-based systems have been quite competitive in time expressions recognition, especially in less domain dependent settings or for relaxed matching tasks (UzZaman et al., 2013).
neutral
train_97429
We controlled for individual differences and date effects (e.g.
self report questionnaires are often used for convenience, not necessarily because they are most valid (Paulhus and Vazire, 2007).
neutral
train_97430
Entities were labeled as one of three classes (person, location, or organization), and two entities were only considered a match if they both selected the same entity and the same entity class.
in general, the results suggest that a complex task such as dependency parsing suffers substantially when the input data differs from formal text in any number of ways.
neutral
train_97431
Some work has chosen to focus on specific aspects of the normalization process, such as providing good coverage (Liu et al., 2012) or building normalization dictionaries (Han et al., 2012).
the per-token error rate highlights the cost of failing to perform a single instance of a given normalization edit, independent of the frequency of the edit.
neutral
train_97432
She is trying to keep gay people out of marriage and thus preserve her heterosexual privilege.
are you saying you would be in favor of foregoing aLL the legal rights and benefits you are afforded by marriage?
neutral
train_97433
Both Arg1 and Arg2 in Row 10 make the same argument, but Arg1 includes additional argumentation.
our subsequent impression was that the clustering had not filtered out enough of the unrelated pairs (score 0-1).
neutral
train_97434
We used 10,875,982 freely available abstracts (not full text articles) from PubMed as our corpus.
they report a best F1 score of 48% but note that such a score does not seem to reflect the quality of the clustering.
neutral
train_97435
In order to evaluate the automatic clustering procedure that uses K-means++ and word vectors, we start with the gold standard provided by de Melo and Bansal (2013): as mentioned above, their data set has 88 gold standard clusters, corresponding to 346 adjectives, annotated by humans for scale ordering.
this experiment uses hand-corrected WordNet dumbbells to determine adjectives on the same scale of semantic intensity, followed by the MILP using strength counts from the Google N-gram corpus, to determine the ranking.
neutral
train_97436
We used CBOW vectors to perform clustering and derived k = 300 and d = 250 using the approach described in Section 5.
the skip-gram model predicts the neighboring words given the current word.
neutral
train_97437
Despite this, their unsupervised model only agrees with their supervised model on 55% of zero pronoun antecedents, suggesting that this hypothesis is weak.
figure 1 shows an example conversation in which zero pronouns are frequently used to refer to speaker or listener, and would be translated to English as "I" or "you."
neutral
train_97438
First, we describe the data we use for experimentation.
at that point, the focus remains on S for several utterances until "The last round.
neutral
train_97439
, c_K}, automatically generate a visual paraphrase (i_i, c_i, p_i) for each (i_i, c_i); then rerank the candidate captions by the following affinity function that merges the visual neighborhood from the paraphrase. 6 Experiments: Visual Paraphrasing Improves Image Captioning. The experimental configuration basically follows §4.
as a reference, the first row shows the performance of the INSTaNCE method ( §4).
neutral
train_97440
The results show that the global training objective achieves best scores on both MAP and GAP for classifiers and lowdimensional embedding models.
one of the main drawbacks in existing KBs is that they are incomplete and are missing important facts (West et al., 2014), jeopardizing their usefulness in downstream tasks such as question answering.
neutral
train_97441
We find that two of these are best suited for robust tagging.
we conclude that LM-based representations are more suited for tagging as they can be induced faster, are smaller and give better results.
neutral
train_97442
PDT and MTE have been annotated using two different guidelines that without further annotation effort could only be merged by reducing them to a common subset.
We conclude that systems based on the algorithm of Martin et al.
neutral
train_97443
Nor is the device necessary to represent the pronunciation of the preceding vowel; for example, SoundSpel has those words as 'maek' and 'maeking'.
for example, if the stem ends in a tt and the affix begins with an i, the consonant doubling rule implies that the free form of the morpheme ends in a single t, as in getting.
neutral
train_97444
Since we have been unsuccessful in finding such a lexicon, we extract the necessary information from two different resources: the CELEX lexical database (Baayen et al., 1995), which includes morphological analysis of words, and the Combilex speech lexicon (Richmond et al., 2009), which contains high-quality phonemic transcriptions.
the orthography of Serbo-Croatian was originally created according to the rule "write as you speak", so that the spelling can be unambiguously produced from pronunciation.
neutral
train_97445
If it turns out that the word belongs to a positive sentiment class, then its topic distribution is drawn from a biased Dirichlet prior φ^(1). We set ω_w = 1 if the word w is a positive seed word; otherwise, we set ω_w = 0.
then, we remove non-alphabet characters, numbers, pronoun, punctuation and stop words from the text.
neutral
train_97446
As a methodological aside, we discuss the (in-)significance of conclusions being drawn from comparisons done on small sized datasets.
given this SVD, write the j-th projection matrix as where T_j ∈ R^{m×m} is a diagonal matrix such that Finally, we note that the sum of projection matrices can be expressed as eigenvectors of matrix M, i.e.
neutral
train_97447
For instance, methods designed for measuring semantic similarity of WordNet synsets (Banerjee and Pedersen, 2002;Budanitsky and Hirst, 2006;Pilehvar et al., 2013) usually leverage lexicographic or structural information in this lexical resource.
tors by outperforming cosine in most cases.
neutral
train_97448
Given that the amount of contextual information gathered for a concept can be small, the resulting word-based vector can be sparse and as a consequence prone to noise, especially in the case of less frequent concepts.
given a pair of concepts, we first use the procedure described in Section 2 to obtain for each concept the two corresponding vector representations, i.e., word-based and synset-based.
neutral
train_97449
In our setting we are only interested in the positive specificity, i.e., the set of most relevant words appearing in the contextual information.
our approach combines knowledge from both resources, providing two advantages: (1) more effective measurement of similarity based on rich semantic representations, and (2) the possibility of measuring cross-resource semantic similarity, i.e., between Wikipedia pages and WordNet synsets.
neutral
train_97450
In computational biology, edit costs are defined in terms of mutation probabilities, which are irrelevant to our task.
by mixing and matching corrections from different annotators, we avoid the performance underestimation described in 1.(d).
neutral
train_97451
Once we have WAcc sys and WAcc base for each system, we can compare them to determine if the text has improved.
it is worth noting that this limitation does not extend to evaluation of error detection per se using such metrics.
neutral
train_97452
In the non-parallel data setting, only the target sequence f is observed and the source sequence e is hidden.
other models resort to approximation techniques - for example, the fertility-based word alignment models apply hill-climbing and sampling heuristics in order to efficiently estimate the posteriors (Brown et al., 1993). From the computed posteriors q_k we collect expected counts for each event, used to construct the M-step optimization objective.
neutral
train_97453
(2008) propose PostCAT which uses Posterior Regularization (Ganchev et al., 2010) to enforce posterior agreement between the two models.
the only extra work is in the Mstep, which optimizes a single (concave) objective function.
neutral
train_97454
The corpus contains a total number of 2,166 dialogues, including 15,453 utterances (10,571 for selftraining and 4,882 for testing).
in our approach, we parse all ASR-decoded utterances in our corpus using SEMAFOR, a state-of-the-art semantic parser for frame-semantic parsing (Das et al., 2010; Das et al., 2013), and extract all frames from semantic parsing results as slot candidates, where the LUs that correspond to the frames are extracted for slot filling.
neutral
train_97455
This proves that inter-slot relations help decide a coherent and complete slot set and enhance the interpretability of semantic slots.
from the knowledge management perspective, empowering the system with a large knowledge base is of crucial significance to modern spoken dialogue systems.
neutral
train_97456
Gaussier (1999) pointed out that some lexical derivations involve character-level alternations, e.g., "c" and "ç. "
these types of pairs originally existed in S Seed but were amplified by LEXVAR.
neutral
train_97457
However, in general, these works use supervised learning frameworks (Popescu et al., 2011;Ritter et al., 2012), and/or they use either a coarse representation of events, which reduces to topic modeling or classification of entire tweets (Popescu et al., 2011;Becker et al., 2011;Ritter et al., 2012), or a simplified representation of events with few arguments (Sakaki et al., 2010;Popescu et al., 2011;Benson et al., 2011;Ritter et al., 2012).
for example, for a particular earthquake, the USGS reports a depth of 22 km, while NOAA reports 25 km.
neutral
train_97458
The key idea is to explicitly search for concrete treebanks which are used to train parsing models.
the crucial factor here is to define N(D_i) and f_{D_i}(D).
neutral
train_97459
Le and Zuidema (2014)'s reranker is an exception among supervised parsers because it employs an extremely expressive model whose features are of ∞-order.
it is worth noting that, in this case, each phase with iterated reranking could be seen as an approximation of hard-EM (see Equation 2) where the first step is replaced: instead of searching over the treebank space, the search is limited to a neighbour set N(D_i) generated by the k-best parser P_i.
neutral
train_97460
The tradeoff is that feature embeddings must be recomputed for each set of feature templates, unlike word embeddings, which can simply be downloaded and plugged into any NLP problem.
in general, we find that the hyperparameters that yield good word embeddings tend to yield good feature embeddings too.
neutral
train_97461
The MEN-3k dataset is crowd-sourced and contains much diversity, with word pairs evidencing similarity as well as relatedness.
our work, which tackles the stronger form of lexical ambiguity in polysemy, falls into the latter two of three categories.
neutral
train_97462
Additionally EM+RETRO is more powerful, allowing to adapt more expressive models that can jointly learn other useful parameters -such as context vectors in the case of skip-gram.
this allows the different neighborhoods of each sense-specific vector to tease it apart from its sense-agnostic vector.
neutral
train_97463
Our findings suggest several avenues for future research.
an important consideration is the relations we use and the weights we associate with them.
neutral
train_97464
Since Brown clusters are mostly syntactic/semantic in nature and do not automatically distinguish positive or negative sentiment, we additionally performed multiple experiments to use clusters while incorporating additional sentiment information: On one hand, we try to incorporate the judgements on the Amazon near-domain dataset more directly into the clusters by using the repeated bisecting K-Means algorithm as implemented in CLUTO (Zhao and Karypis, 2005), with previous/next word, part-of-speech tag, and the score of the containing review as features.
(2010), and Emerson and Declerck (2014).
neutral
train_97465
The four topics correspond to vision, neural network, speech recognition and electronic circuits respectively.
existing topic models assume words are generated independently and lack the mechanism to utilize the rich similarity relationships among words to learn coherent topics.
neutral
train_97466
The similarity is computed based on statistics such as co-occurrence which are unable to accommodate the subtlety that whether two words labeled as similar are truly similar depends on which topic they appear in, as explained by the aforementioned examples.
to generate a document d, PtM first samples a topic proportion vector, then for each word w in d, samples a topic indicator z and generates w from the topic-word multinomial corresponding to topic z.
neutral
train_97467
Since we tag entire phrases at once, we can easily assign each word in the phrase to one of these four entity-relative positions.
this reflects a plausible training scenario, with train and dev drawn from the same pool, but with distinct tests drawn from later in time.
neutral
train_97468
This result is far better than the twenty hours required by SLDA to train on TRIPADVISOR.
similarly, in the TRIPADVISOR data, both ANCHOR and SUP ANCHOR share topics about specific destinations, but only SUP ANCHOR discovers a topic describing "disgusting" hotel rooms.
neutral
train_97469
Unsurprisingly, unsupervised models (LDA) produce the best topic quality.
for clarity, we pruned words which appear in more than 3000 documents as these words appear in every topic.
neutral
train_97470
Other words that were near the convex hull boundary in the unaugmented representation may become anchor words in the augmented representation because they capture both topic and sentiment ("anti-lock" vs. "lemon").
some of the original anchor words will remain, and some will be replaced by sentiment-specific anchor words.
neutral
train_97471
It also has a test set, but its annotation is not made public.
both representation and evaluation are now reasonably efficient.
neutral
train_97472
We thus adopted a novel approach to evaluation by simulating a grounded learning scenario using the GENIA event extraction dataset (Kim et al., 2009).
(In fact, the original distant supervision algorithm is exactly equivalent to this form, with κ = ∞.)
neutral
train_97473
which are expensive and time-consuming to acquire (Zelle and Mooney, 1993;Zettlemoyer and Collins, 2005;Zettlemoyer and Collins, 2007).
future directions include: PubMed-scale pathway extraction; application to other domains; incorporating additional complex states to address syntax-semantics mismatch; learning vector-space representations for complex states; joint syntacticsemantic parsing; incorporating reasoning and other sources of indirect supervision.
neutral
train_97474
Table 1 shows GUSPEE's results on GENIA event extraction.
annotating example sentences is expensive and time-consuming.
neutral
train_97475
If the minimum subtree contains NULL, either in an event or argument state, it signifies a non-event and would be ignored.
the sentence also discloses important contextual information, i.e., BCL regulates RFLAT by stimulating the inhibitive effect of IL-10, and likewise the inhibition of RFLAT by IL-10 is controlled by BCL.
neutral
train_97476
The magnitude indicates the strength of the association.
the Arabic sentiment system will benefit from extended sentiment lexicons and features derived specifically for the Arabic language.
neutral
train_97477
Furthermore, many sentiment resources essential to automatic sentiment analysis (e.g., sentiment lexicons) exist only in English.
details of the system are described in Section 6.
neutral
train_97478
We use the continuous bag-of-words model introduced by (Mikolov et al., 2013), and the tool word2vec 2 to obtain the word embeddings.
the results show that systems with better ROUGE-2 value indeed can assign higher weights to correct bigrams, allowing the ILP decoding process to select these bigrams, which leads to a better sentence selection.
neutral
train_97479
Second, when we use the features from only one external resource, the results from some resources are competitive compared to that from the system using internal features.
second, to estimate the bigrams' weights, in addition to using information from the test documents, such as document frequency, syntactic role in a sentence, etc., we utilize a variety of external resources, including a corpus of news articles with human generated summaries, Wiki documents, description of name entities from DBpedia, WordNet, and sentiWordNet.
neutral
train_97480
By the constraints of the algorithm, a head word x h must combine with each of its left and right dependents.
yet it appears to be completely unpredictable which will be preferred by a particular subcommunity or used in a particular application.
neutral
train_97481
We have two ways to use the score values: 1) Augmenting the feature vector φ(u, v) with these scores.
we also have a combined system (KnowComb), which uses the schema knowledge to add features for learning as well as constraints for inference.
neutral
train_97482
To be brief, we use the shorthand notation ) is a binary vector of size three.
these results show that our additional Predicate Schemas do not harm the predictions for regular mentions.
neutral
train_97483
(In our experiments, we use a 2B words corpus and a 100k vocabulary.)
the neural language models we used to report results throughout this paper are roughly 400 MB in size.
neutral
train_97484
Nowadays, it is becoming more and more common for these devices to include reasonably powerful GPUs, supporting the idea that further scaling is possible if necessary.
for some word w_i in a given corpus, let h_i denote the conditioning context w_{i−1}, …
neutral
train_97485
The most popular language model implementation is a back-off n-gram model with Kneser-Ney smoothing (Chen and Goodman, 1999).
when the n-gram model is removed, class factored models perform better (at least for fr→en and en→de), despite being only an approximation of the fully normalised models.
neutral
train_97486
(2012) propose a joint language/perception model to learn attribute names in a physical environment.
several directions of future work are very promising.
neutral
train_97487
Previous approaches for semantic interpretation include domain-specific grammars (Lemon et al., 2001) and open-domain parsers together with a domain-specific lexicon (Rosé, 2000).
this definition of the description vector relies upon the structure of the domain by factorizing the attributes of entities.
neutral
train_97488
KNOW-BOT starts with an empty dialog-level knowledge graph (dKG).
a mixed-initiative strategy utilizes focused prompts (line S4 in Figure 1) to introduce potentially related concepts.
neutral
train_97489
This allows us to determine which effects are due to differences in genre and which are due to having a smaller training set.
ppA is a form of frontotemporal dementia which is characterized by progressive language impairment without other notable cognitive impairment.
neutral
train_97490
This suggests that standard methods of sentence segmentation for spontaneous speech can be effective on PPA speech as well.
the remaining metrics in table 3 were calculated using Lu's Syntactic Complexity Analyzer (SCA) (Lu, 2010).
neutral
train_97491
Our analysis so far suggests that some syntactic units are relatively impervious to the automatic sentence segmentation, while others are more susceptible to error.
in many cases, sentences ending with VB are actually statements about the difficulty of the task, rather than narrative content; e.g., that's all i can say, i can't recall, or i don't know.
neutral
train_97492
in our case parsing performance is not directly dependent on the WER but rather on the type of errors made.
since the RoboCup corpus is rather small and n-gram models are typically learned from large amounts of data, in addition we interpolated the trigram models trained solely on the in-domain RoboCup corpus each with a large background language model trained on a broadcast news corpus, i.e.
neutral
train_97493
While in case of applying semantically motivated recognition grammars the WER increases, it must be noted that in cases in which no back off models were applied this is to some extent due to OOG utterances (as these yield several deletions compared to the reference data).
even in 3 The results were: interpolated/adapted LM: WER: 13.43%, F1: 71.22%, semantic grammar + interpolated/adapted LM backoff: WER: 13.85, F1: 84.6%, syntactic recognition grammar: WER: 18.98%, F1: 70.86%, syntactic recognition grammar + trigram back off: WER: 13.98%, F1: 71.27%.
neutral
train_97494
n-grams, while we explore template-based grammars which can capture longdistance linguistic dependencies.
we created standard trigram language models from the written training data without making use of concurrent perceptual context information using SRILM (Stolcke, 2002).
neutral
train_97495
Because it is a conditional model, LOGRESP lacks any built-in capacity for semi-supervised learning.
because ITEMRESP has no model of the data x, it receives no benefit from unannotated data. One way to make ITEMRESP data-aware is by adding a discriminative log-linear data component (Raykar et al., 2010; Liu et al., 2012).
neutral
train_97496
The two optimization problems involved in the dual are hard in general.
in training, we would like to minimize the loss function ∆ by which the model will be assessed at test time.
neutral
train_97497
While we adopt a bag-of-words approach for practical reasons (memory and run-time), our multi-task framework is extensible to other methods for sentence/document representations, such as those based on convolutional networks (Kalchbrenner et al., 2014;Shen et al., 2014;Gao et al., 2014b), parse tree structure (Irsoy and Cardie, 2014), and run-time inference (Le and Mikolov, 2014).
for concreteness, throughout this paper we will use query classification as the classification task and web search as the ranking task.
neutral
train_97498
Our Basic model outperforms DDN for both nouns and verbs, despite training on less data.
this could occur for example if we extracted our training data from a morphologically annotated corpus.
neutral
train_97499
Where previous work leveraged large, rigid rules to span paradigms, our work is characterized by small, flexible rules that can be applied to any inflection, with features determining what rule sequence works best for each pairing of a base-form with an inflection.
rather than training on 50 and 100 tables, we train on 40 and 80, but compare the results with the models trained on 50 and 100, respectively.
neutral