id (string, 7-12 chars) | sentence1 (string, 6-1.27k chars) | sentence2 (string, 6-926 chars) | label (string, 4 classes)
---|---|---|---
train_12400 | The model translates an input graph by segmenting it into subgraphs and generates a complete translation by combining subgraph translations left-to-right. | the model treats different graph segmentations equally. | contrasting
train_12401 | (Cao et al., 2014; Huck et al., 2013) propose different approaches to directly train LRM for Hiero rules. | these approaches are designed for CKY decoding and cannot be directly used or adapted for LR-Hiero decoding, which uses an Earley-style parsing algorithm. | contrasting
train_12402 | Long-distance reorderings as in this example are not uncommon, and their benefit for verbal translation is intuitively clear. | reordering comes at the price of separating the verb and its direct object. | contrasting
train_12403 | NMT has been successfully tackled by different groups using the sequence-to-sequence framework (Kalchbrenner and Blunsom, 2013; Cho et al., 2014; Sutskever et al., 2014). | multi-modal MT has just recently been addressed by the MT community in a shared task. | contrasting
train_12404 | Most existing models for multilingual natural language processing (NLP) treat language as a discrete category, and make predictions for either one language or the other. | we propose using continuous vector representations of language. | contrasting
train_12405 | The addition of sentences with feminine forms to the language model reduced the number of feminine pronouns translated as masculine or neuter. | we still observed cases where the translation did not reflect the morphological annotation in the source. | contrasting
train_12406 | As shown by the small oracle experiments with manually inserted annotations, the potential for improvement through co-reference resolution is significant. | pre-processing errors from tagging, parsing, and the actual co-reference resolution reduce the effect somewhat, especially for the less frequent feminine forms. | contrasting
train_12407 | The upper bound of this ratio is 1.0, which would mean that ACNN makes the same mistakes as GBT. | from our experiments, the ratio turned out to be 0.37, which means that GBT and ACNN make different mistakes more than 60% of the time. | contrasting
train_12408 | On the other hand, removal of the rare tokens (words appearing in fewer than 20 titles) seems to have a negligible effect on the context of the titles from the subset of 8000 titles. | the effect is a little more pronounced for the context of the titles from the subset of the 18 categories for which ACNN does better than GBT, but with a mean error difference of only half of that for the other 17 categories on which it does worse. | contrasting
train_12409 | Earlier work on topic modeling is fully unsupervised, while recently knowledge bases (KB) have begun to be incorporated into semi-supervised schemes (Wang et al., 2014; Zhai et al., 2010; Chen et al., 2014). | existing approaches have limitations. | contrasting
train_12410 | This process is easier when abstracts are structured, i.e., the text in an abstract is divided into semantic headings such as objective, method, result, and conclusion. | a significant portion of published paper abstracts is unstructured, which makes it more difficult to quickly access the information of interest. | contrasting
train_12411 | The improvement of results for the visual labels could thus be attributed to the additional hidden layer. | the performance difference for the textual labels between the baseline individual label modalities and the combined labels differs, meaning that the nDCG numbers are not directly comparable. | contrasting
train_12412 | F-Score Trigger Function: The main criterion of the NER task is the F-score. | high label accuracy does not mean a high F-score. | contrasting
train_12413 | The above work has shown that MTL can be effectively used to improve NNs by leveraging different kinds of data. | the obtained improvement over the base DNN was limited to 1-2 points, raising the question of whether this is the kind of enhancement we should expect from MTL. | contrasting
train_12414 | Finally, a recent work close to ours is (Guzmán et al., 2016), which builds a neural network for solving Task A of SemEval. | this does not approach the problem as MTL. | contrasting
train_12415 | Their results showed a competitive gain over strong baselines. | we have presented a model that can also exploit a joint question and comment representation as well as the dependencies among the different SemEval Tasks. | contrasting
train_12416 | Although these two users, especially beck-s, have more folders than others and thus present more challenges for classifiers, user embeddings have the potential to effectively introduce user-token interactions for organizing information. | the improvements based on embedding features are less apparent for williams-w3, whose folder categorization was the most unbalanced of all (i.e., a majority of emails belong to the same folder, making the prediction fairly easy with just a few signals). | contrasting
train_12417 | For the CR task, we are dealing with a binary classification problem for each pair of EVENT and/or TIMEX3. | considering all pairs of entities within a sentence would give us an unbalanced data set with a very large number of negative examples. | contrasting
train_12418 | We plan to use argumentative zoning as a first step for IR and shallow document understanding tasks like summarization. | to hierarchical segmentation (e.g. | contrasting
train_12419 | In such cases, the training data could be separated linearly by expanding all combinations of features as new ones, and projecting them onto a higher-dimensional space. | such a naive approach requires enormous computational overhead. | contrasting
train_12420 | If we could calculate the dot products from x1 and x2 directly, without considering the vectors φ(x1) and φ(x2) projected onto the higher-dimensional space, we could reduce the computational complexity considerably. | namely, we can reduce the computational overhead if we could find a function K that satisfies K(x1, x2) = φ(x1) · φ(x2), since we do not need φ itself for actual learning and classification. (In general, φ(x) is a mapping into Hilbert space.) | contrasting
train_12421 | Using such information about modifiers in the training phase presents no difficulty, since they are clearly available in a tree-bank. | they are not known in the parsing phase of the test data. | contrasting
train_12422 | The simplest and most effective way to achieve better accuracy is to increase the training data. | the proposed method that uses all candidates that form a dependency relation requires a great amount of time to compute the separating hyperplane as the size of the training data increases. | contrasting
train_12423 | Corpus-based grammar induction relies on using many hand-parsed sentences as training examples. | the construction of a training corpus with detailed syntactic analysis for every sentence is a labor-intensive task. | contrasting
train_12424 | For instance, it is difficult to induce a grammar from a corpus of raw text; but the task becomes much easier when the training sentences are supplemented with their parse trees. | appropriate supervised training data may be difficult to obtain. | contrasting
train_12425 | There has been much work done on extracting Context-Free grammars (CFGs) (Shirai et al., 1995; Charniak, 1996; Krotov et al., 1998). | extracting LTAGs is more complicated than extracting CFGs because of the differences between LTAGs and CFGs. | contrasting
train_12426 | These look a lot like the statistics a Markov Model would use. | in the maximum entropy framework it is possible to easily define and incorporate much more complex statistics, not restricted to n-gram sequences. | contrasting
train_12427 | However, we note that our parameter estimation algorithm directly uses equation (1). | ratnaparkhi (1996: 134) suggests use of an approximation summing over the training data, which does not sum over possible tags; we believe this passage is in error: such an estimate is ineffective in the iterative scaling algorithm. | contrasting
train_12428 | Ideally the third item can be estimated by using the forward-backward algorithm (Rabiner 1989) recursively for the first-order (Rabiner 1989) or second-order HMMs (Watson and Chunk 1992). | several approximations to it will be attempted later in this paper instead. | contrasting
train_12429 | The above experiments show that adding more contextual information into the lexicon significantly improves the chunking accuracy. | this improvement is gained at the expense of a very large lexicon, and we find it difficult to merge all the above context-dependent lexicons into a single lexicon to further improve the chunking accuracy because of memory limitations. | contrasting
train_12430 | Bigram language models are popular in many language processing applications, in both Indo-European and Asian languages. | when the language model for Chinese is applied in a novel domain, the accuracy is reduced significantly, from 96% to 78% in our evaluation. | contrasting
train_12431 | That is, a document that contains terms a1, a2 and a3 may be ranked higher than a document which contains terms a1 and b1. | the second document is more likely to be relevant, since correct translations of the query terms are more likely to co-occur (Ballesteros and Croft, 1998). | contrasting
train_12432 | Ballesteros, for example, used the INQUERY (Callan et al., 1995) synonym operator to group translations of different query terms. | if a term has two translations in the target language, it will treat them as equal even though one of them is more likely to be the correct translation than the other. | contrasting
train_12433 | The field of information retrieval (IR) is the traditional discipline that addresses this problem. | most of the prior work in IR deals more with document retrieval than with "information" retrieval. | contrasting
train_12434 | Prior to 1999, the other notable research work on question answering designed to work on unrestricted text (from an encyclopedia) is (Kupiec, 1993). | no large-scale evaluation was attempted, and the work was not based on a machine learning approach. | contrasting
train_12435 | It has been found that both the shallow processing techniques of IR and the more linguistically oriented natural language processing techniques are needed to perform well on the TREC-8 QA track. | for our current QA work on reading comprehension, because the answer for each question comes from the associated story, no sophisticated IR indexing and retrieval techniques are used. | contrasting
train_12436 | In an earlier version of AQUAREAS, we simply used the raw word match score m as the feature. | the learned classifiers did not perform well. | contrasting
train_12437 | Traditional (rationalist) approaches to constructing database interfaces require an expert to hand-craft an appropriate semantic parser (Woods, 1970; Hendrix et al., 1978). | such hand-crafted parsers are time-consuming to develop and suffer from problems with robustness and incompleteness, even for domain-specific applications. | contrasting
train_12438 | As expected, the accuracy of all methods grows (towards the upper bound) as more tuning corpus is added to the training set. | the relation between X+%Y-Y and %Y-Y reveals some interesting facts. | contrasting
train_12439 | Research into the automatic acquisition of subcategorization frames (SCFs) from corpora is starting to produce large-scale computational lexicons which include valuable frequency information. | the accuracy of the resulting lexicons shows room for improvement. | contrasting
train_12440 | This could make the results appear better for in-corpora experiments. | in the cross-corpora experiments, training and testing examples come from different documents. | contrasting
train_12441 | When trained as a single classifier (e.g., (Roth and Zelenko, 1998)), each t-tagged example is used as a positive example for t and a negative example for all other tags. | the SM classifier is trained on a t-tagged example of word w by using it as a positive example for t and a negative example only for the effective confusion set. | contrasting
train_12442 | Furthermore, during classification the president consults the same members that were used to prepare its training set. | in cross-validation stacking, the president is tested using members that have received more training than those that prepared its training set. | contrasting
train_12443 | The same memory-based learner was used as the president. | we experimented with several configurations, varying the neighborhood size (k) from 1 to 10, and providing the president with the m best word attributes, as in Section 1, with m ranging from 50 to 700 by 50. | contrasting
train_12444 | Our goals are more general than those of information extraction, and so this work should be helpful for that task. | our approach will not solve issues surrounding previously unseen proper nouns, which are often important for information extraction tasks. | contrasting
train_12445 | In the training set, these nouns appear only in NCs that have been labeled as belonging to relation 22. | if we look at relations 14 and 15, we find a wider range of words, and in some cases Table 1. | contrasting
train_12446 | This unfortunately eliminates acronyms like "U.S." and phrasal verbs like "throw up." | discarding some words may be worthwhile if the final list of n-grams is richer in terms of MRD headwords. | contrasting
train_12447 | These formulations suggest that several of the probabilistic algorithms we have seen include non-compositionality measures already. | since the probabilistic algorithms rely only on distributional information obtained by considering juxtaposed words, they tend to incorporate a significant amount of non-semantic information such as syntax. | contrasting
train_12448 | The problem of abbreviation processing has attracted relatively little attention in the NLP field. | technical documents use a lot of abbreviations to represent domain-specific knowledge. | contrasting
train_12449 | This may underestimate the difficulty of the Brown corpus by including sentences from the same documents in the training and test sets. | because of the variation within the Brown corpus, we felt that a single contiguous test section might not be representative. | contrasting
train_12450 | However, its priors cause it to incorrectly predict that aa belongs to class 1. | maximizing CL will push the prior for sense 1 arbitrarily close to zero. | contrasting
train_12451 | If we want to be guaranteed a non-deficient joint interpretation, we can require equality. | if we relax the equality then we have a larger feasible space which may give better values of our objective. | contrasting
train_12452 | If w occurs with s in an example where other good indicator words are present, then those other words' large weights will explain the occurrence of s, and without w having to have a large weight, its expected count with s in that instance will approach 1. | if no trigger words occur in that instance, there will be no other explanation for s other than the presence of w and the other non-indicative words. | contrasting
train_12453 | Our technique could easily be combined with these techniques, presumably leading to even better results. | since we build our decision lists from last to first, rather than first to last, the local probability is not available as the list is being built. | contrasting
train_12454 | If one wants a probabilistic decision list learner, this is clearly the algorithm to use. | if probabilities are not needed, then TBL can produce lower error rates, with still fewer rules. | contrasting
train_12455 | However, if probabilities are not needed, then TBL can produce lower error rates, with still fewer rules. | if one wants either the lowest entropies or the highest accuracies, then it appears that linear models, such as maxent or the perceptron algorithm with margin, work even better, at the expense of producing much larger models. | contrasting
train_12456 | The recent work of Pedersen (2001a) evaluated a variety of learning algorithms on the SENSEVAL-1 data set. | all of these research efforts concentrate only on evaluating different learning algorithms, without systematically considering their interaction with knowledge sources. | contrasting
train_12457 | In SENSEVAL-2, the various Duluth systems (Pedersen, 2001b) attempted to investigate whether features or learning algorithms are more important. | the relative contribution of knowledge sources was not reported, and only two main types of algorithms (Naive Bayes and decision tree) were tested. | contrasting
train_12458 | However, they reported recall of only 56.8%. | our implementation of SVM using the two knowledge sources of surrounding words and local collocations achieves recall of 61.8%. | contrasting
train_12459 | As a hill-climbing procedure, the algorithm terminates when removal of any of the rules in the ruleset fails to improve performance. | to most existing algorithms for coreference resolution, RULE-SELECT establishes a tighter connection between the classification- and clustering-level decisions for coreference resolution and ensures that system performance is optimized with respect to the coreference scoring function. | contrasting
train_12460 | Charniak (1997) calls this a "Treebank grammar" and gambles that assigning 0 probability to rules unseen in training data will not hurt parsing accuracy too much. | there are four reasons not to use a Treebank grammar. | contrasting
train_12461 | Indeed a good deal of syntax induction work has been carried out in just this framework (Stolcke and Omohundro, 1994; Chen, 1996; De Marcken, 1996; Grünwald, 1996; Osborne and Briscoe, 1997). | all such work to date has adopted rather simple prior distributions. | contrasting
train_12462 | The very rarity of these rules makes it impossible to create a table like Table 1. | rare rules can be measured in the aggregate, and the result suggests that the same kinds of transformations are indeed useful (perhaps even more useful) in predicting them. | contrasting
train_12463 | It is worthwhile to compare the statistical approach here with some other approaches: transformation models are similar to graphical models in that they allow similar patterns of deductive and abductive inference from observations. | the vertices of a transformation graph do not represent different random variables, but rather mutually exclusive values of the same random variable, whose probabilities sum to 1. | contrasting
train_12464 | Using movie reviews as data, we find that standard machine learning techniques definitively outperform human-produced baselines. | the three machine learning methods we employed (Naive Bayes, maximum entropy classification, and support vector machines) do not perform as well on sentiment classification as on traditional topic-based categorization. | contrasting
train_12465 | The bulk of such work has focused on topical categorization, attempting to sort documents according to their subject matter (e.g., sports vs. politics). | recent years have seen rapid growth in on-line discussion groups and review sites (e.g., the New York Times' Books web page) where a crucial characteristic of the posted articles is their sentiment, or overall opinion towards the subject matter: for example, whether a product review is positive or negative. | contrasting
train_12466 | They also handily beat our two human-selected-unigram baselines of 58% and 64%, and, furthermore, perform well in comparison to the 69% baseline achieved via limited access to the test-data statistics, although the improvement in the case of SVMs is not so large. | in topic-based classification, all three classifiers have been reported to use bag-of-unigram features to achieve accuracies of 90% and above for particular categories (Joachims, 1998; Nigam et al., 1999), and such results are for settings with more than two classes. | contrasting
train_12467 | This would not rule out the possibility that bigram presence is just as useful a feature as unigram presence; in fact, Pedersen (2001) found that bigrams alone can be effective features for word sense disambiguation. | comparing line (4) to line (2) shows that relying just on bigrams causes accuracy to decline by as much as 5.8 percentage points. | contrasting
train_12468 | Alternatively, we could have tried integrating frequency information into MaxEnt. | feature/class functions are traditionally defined as binary (Berger et al., 1996); hence, explicitly incorporating frequencies would require different functions for each count (or count bin), making training impractical. | contrasting
train_12469 | This serves as a crude form of word sense disambiguation (Wilks and Stevenson, 1998): for example, it would distinguish the different usages of "love" in "I love this movie" (indicating sentiment orientation) versus "This is a love story" (neutral with respect to sentiment). | the effect of this information seems to be a wash: as depicted in line (5) of Figure 3, the accuracy improves slightly for Naive Bayes but declines for SVMs, and the performance of MaxEnt is unchanged. | contrasting
train_12470 | In terms of relative performance, Naive Bayes tends to do the worst and SVMs tend to do the best, although the differences aren't very large. | we were not able to achieve accuracies on the sentiment classification problem comparable to those reported for standard topic-based categorization, despite the several different types of features we tried. | contrasting
train_12471 | We also assume the existence of a special Null word in the source language that generates words in the target language. | we define a different model that better constrains and conditions generation from Null. | contrasting
train_12472 | Thus this probability distribution provides prior knowledge of the possible translations of a word based only on its part of speech. | this distribution should not be too sharp. (Since we are only concerned with alignment here and not generation of candidate translations, the factor p(e, eT) can be ignored, and we omit it from the equations for the rest of the paper.) | contrasting
train_12473 | Following (Briscoe and Carroll, 1993), conflict resolution is based on contextual information extracted from the so-called Instantaneous Description or Configuration: a stack, representing the control memory of the LR parser, and a lookahead sequence, here limited to one symbol. | while Briscoe and Carroll invested in massive parallel computation of the possible parsing paths, with pruning and posterior ranking, we experiment with a simple greedy depth-first technique with a limited amount of backtracking, which resembles to a certain extent the commitment/recovery models from psycholinguistic research on human language processing, supported by the occurrence of "garden paths". | contrasting
train_12474 | The reason for such an increase in structure is not a particular decision of ours, but rather a consequence of using a sound grammar under the TAG grammatical formalism. | having concluded our manifesto, we understand that algorithms that try to keep precision as high as recall necessarily lose recall compared to ignoring precision; therefore, to allow a fair comparison with them and to improve the credibility of our results, we flattened the parse trees in a post-processing step, using a simple rule-based technique on top of some frequency measures for individual grammar trees gathered by (Xia, 2001); the result is presented in the bottom lines of the table. | contrasting
train_12475 | In practice, the instances of this class of error are all cases where the computer can't detect the error for certain. | for all Type B errors, once detected, the correction that needs to be made is clear, at least to a human observer with access to the annotation guidelines. | contrasting
train_12476 | To compensate, statistical models were used to separate the meaningful semantic associations from the spurious ones. | our work aims to identify "strong" syntactic heuristics that can isolate instances of general structures that reliably identify the desired semantic relations. | contrasting
train_12477 | Prima facie, the Viterbi alignment for the first sentence pair appears incorrect because we, as humans, have a natural tendency to build alignments between the smallest phrases possible. | note that the choice made by our model is quite reasonable. | contrasting
train_12478 | The model described in this paper cannot learn that the English word "not" corresponds to the French words "ne" and "pas". | our model learns to deal with negation by memorizing longer phrase translation equivalents, such as ("ne est pas", "is not"); ("est inadmissible", "is not good enough"); and ("ne est pas ici", "is not here"). | contrasting
train_12479 | And Marcu (2001) extracts phrase translations from automatically aligned corpora and uses them in conjunction with a word-for-word statistical translation system. | none of these approaches learn simultaneously the translation of phrases/templates and the translation of words. | contrasting
train_12480 | We have chosen to present MBR decoding using the IBM-3 statistical MT models implemented via WFSTs. | mBR decoding is not restricted to this framework. | contrasting
train_12481 | We have presented these alignment loss functions to explore how linguistic knowledge might be incorporated into machine translation systems without building detailed statistical models of these linguistic features. | we stress that the MBR decoding procedures described here do not preclude the construction of complex MT models that incorporate linguistic features. | contrasting
train_12482 | In principle it can assist in the production of a target text with minimal disruption to a translator's normal routine. | recent evaluations of a prototype prediction system showed that it significantly decreased the productivity of most translators who used it. | contrasting
train_12483 | Moreover, sentence (3) omits the goal argument entirely. | as Figure 2 shows, the combination of these verbalizations, as computed by our multiple-sequence alignment method, exhibits high structural similarity to the semantic input: the indicated "sausage" structures correspond closely to the three arguments of show-from. | contrasting
train_12484 | Librarians and search professionals have traditionally favored Boolean keyword search systems, which, when successful, return a small set of relevant hits. | the success of these systems critically depends on the choice of the right keywords and the appropriate Boolean operators. | contrasting
train_12485 | (2001) also iteratively reformulate queries based partly on the search results. | their mechanism for query reformulation is heuristic-based. | contrasting
train_12486 | However, those words are not necessarily useful for retrieval. | low-frequency words appearing in specific documents are often effective query terms. | contrasting
train_12487 | For example, in the case where "kankitsu (citrus)" is not listed in the dictionary, this word should be transcribed as /ka N ki tsu/. | it is possible that this word is mistakenly transcribed, such as /ka N ke tsu/ and /ka N ke tsu ke ko/. | contrasting
train_12488 | Then, in the second stage, we replace detected OOV words with identified index terms so as to complete the transcription, and re-perform text retrieval to obtain final outputs. | we do not re-perform speech recognition in the second stage. | contrasting
train_12489 | In spoken document retrieval, an open-vocabulary method, which combines recognition methods for words and syllables in target speech documents, was also proposed (Wechsler et al., 1998). | this method requires additional computation for recognizing syllables, and is thus expensive. | contrasting
train_12490 | As explained in Section 3, the basis of the query completion module is to match OOV words detected by speech recognition (Section 4) with index terms used for text retrieval (Section 5). | to identify corresponding index terms efficiently, we limit the number of documents in the first-stage retrieval. | contrasting
train_12491 | In principle, terms that are indexed in top-ranked documents (those retrieved in the first stage) and have the same sound as detected OOV words can be corresponding terms. | a single sound often corresponds to multiple words. | contrasting
train_12492 | score) when we use hierarchical structure. | the computation of the former is far more efficient than the latter. | contrasting
train_12493 | Recent work by Banko and Brill (2001) suggests that this would not necessarily be true if very large training corpora were available. | their results are limited by the simplicity of their evaluation task and individual classifiers. | contrasting
train_12494 | The simplest evaluation measure is direct comparison of the extracted thesaurus with a manually-created gold standard (Grefenstette, 1994). | on smaller corpora direct matching is often too [...] (Bernard, 1990) and Roget's (Roget, 1911) thesauri and the head-ordered Moby (Ward, 1996) thesaurus. | contrasting
train_12495 | Grefenstette (1998) observes that this approach suffers from an acute data sparseness problem if the corpus counts are obtained from a conventional corpus such as the British National Corpus (BNC) (Burnard, 1995). | as Grefenstette (1998) demonstrates, this problem can be overcome by obtaining counts through web searches, instead of relying on the BNC. | contrasting
train_12496 | Because these two LMs tightly integrate the word information jointly with the tag distribution, the trigram information is already represented. | the cSuperARV LM and Chelba's and Charniak's parser-based LMs have much lower correlations, indicating they have much lower overlap with the trigram. | contrasting
train_12497 | This assumption can be formulated as a co-occurrence model for headword prediction: that is, the probability of a headword is determined by the occurrence of other headwords within a window. | in our experiments, we instead used an interpolated probability. First, co-occurrence models do not predict words from left to right, and are thus very difficult to interpolate with trigram models for decoding. | contrasting
train_12498 | In the TREC QA track, there is no distinction made in scoring between returning a wrong answer to a question for which an answer exists and returning no answer. | to deploy a real system, we need the capability of making a trade-off between precision and recall, allowing the system not to answer a subset of questions, in hopes of attaining high accuracy for the questions which it does answer. | contrasting
train_12499 | From Tables 3 and 4, it is clear that the 7-gram and 9-gram models are quite similar to the 5-gram model, both in performance and in the distribution of correct/over-generated named entities. | variable-length models have a distribution of correct/over-generated named entities a little different from that of the 5-gram model. | contrasting
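
The preview above shows the `train` split of a sentence-pair dataset with columns `id`, `sentence1`, `sentence2`, and `label`. As a minimal loading sketch (the repo ID `user/sci-sentence-pairs` below is a hypothetical placeholder, not this dataset's actual identifier):

```python
# Minimal sketch: load the sentence-pair data and inspect "contrasting" examples.
# "user/sci-sentence-pairs" is a hypothetical placeholder; substitute the real
# Hugging Face repo ID (or point load_dataset at local files) for this dataset.
from datasets import load_dataset

ds = load_dataset("user/sci-sentence-pairs", split="train")

# Keep only the pairs labeled "contrasting" (one of the 4 label classes).
contrasting = ds.filter(lambda ex: ex["label"] == "contrasting")

# Print a short preview of the first few pairs.
for ex in contrasting.select(range(3)):
    print(ex["id"], "|", ex["sentence1"][:60], "->", ex["sentence2"][:60])
```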