| id (string, 7–12 chars) | sentence1 (string, 6–1.27k chars) | sentence2 (string, 6–926 chars) | label (string, 4 classes) |
|---|---|---|---|
| train_20100 | For the same parser performance level, it selects the fewest number of sentences for a human to check and requires the human to make the least number of corrections. | as we have seen in the earlier experiment, very few sentences in the unlabeled pool satisfy its stringent criteria, so it ran out of data before the parser was trained to convergence. | contrasting |
| train_20101 | Alternatively, the number of documents that contain the pair can also be used. | the nature of the language tests in this work makes it impractical to be applied. | contrasting |
| train_20102 | In a sequence will to fight, that trend indicates that will should be a noun rather than a modal verb. | that effect is completely lost in a CMM like (a): P(t_will \| will, t_start) prefers the modal tagging, and P(TO \| to, t_will) is roughly 1 regardless of t_will. | contrasting |
| train_20103 | to-left model, fight will receive its more common noun tagging by symmetric reasoning. | the bidirectional model (c) discussed in the next section makes both directions available for conditioning at all locations, using replicated models of P(t_0 \| t_-1, t_+1, w_0), and will be able to get this example correct. | contrasting |
| train_20104 | Since this model has roughly twice as many tag-tag features, the fact that it outperforms the unidirectional models is not by itself compelling evidence for using bidirectional networks. | it also outperforms model L+L2, which adds the t_0, t_-2 second-previous word features instead of next-word features, which gives only 96.05% (and R+R2 gives 95.25%). | contrasting |
| train_20105 | When an information seeker can readily think up a suitable term or linguistic expression to represent the information need, direct searching of text by user-generated terms is faster and more effective than browsing. | when users do not know (or can't remember) the exact expression used in relevant documents, they necessarily struggle to find relevant information in full-text search systems. | contrasting |
| train_20106 | The extracts generated are shorter in size than the original texts. | the number of sentences that E states emit cannot be predetermined. | contrasting |
| train_20107 | In the LNK task, incorrectly flagging two stories as being on the same event is considered a false alarm. | in the NED task, incorrectly flagging two stories as being on the same event will cause a true first story to be missed. | contrasting |
| train_20108 | A LNK system should minimize false alarms by identifying only linked stories, which results in high precision for LNK. | a NED system will minimize false alarms by identifying all stories that are linked, which translates to high recall for LNK. | contrasting |
| train_20109 | A corpus-based approach is able to quickly build a machine translation system for a new domain if a bilingual corpus of that domain is available. | if only a small-sized corpus is available, a low translation quality is obtained. | contrasting |
| train_20110 | These methods evaluate the quality of the translation by measuring the similarity between machine translation results and translations done by humans (called references). | the accuracy increases when multiple references are applied because one source sentence can be translated into multiple target expressions. | contrasting |
| train_20111 | However, no nodes contain only the links 'get' and 'me' in the lower sentence. | focusing on the upper phrase "get me a taxi," it contains four word links that correspond to the lower phrase "get a taxi for me", and they have the same syntactic category. | contrasting |
| train_20112 | The hope is that the same techniques will work for extracting prefixes. | even that will not handle the complex combinations of infixes that are possible in agglutinative languages like Turkish or polysynthetic languages like Inuktitut. | contrasting |
| train_20113 | In naturally occurring text, the more frequent sense for the two-sense distinction is reported to occur 92% of the time on average; this result has been found both on the CACM collection and on the WordNet SEMCOR sense-tagged corpus (Sanderson & van Rijsbergen, 1999). | the challenge for WSD programs is to work on the harder cases, and the artificially constructed SENSEVAL-1 corpus has more evenly distributed senses (Gaustad, 2001). | contrasting |
| train_20114 | In a manually built corpus, a coreference chain can include pronouns and common nouns that refer to the person. | these forms could not be automatically identified, so coreference chains in our corpus only include noun phrases that contain at least one word from the name. | contrasting |
| train_20115 | In this paper we describe CarmelTC, a novel automatic essay grading approach using a hybrid text classification technique for analyzing essay answers to qualitative physics questions inside the Why2 tutorial dialogue system (VanLehn et al., 2002). | to many previous approaches to automated essay grading (Burstein et al., 1998; Larkey, 1998), our goal is not to assign a letter grade to student essays. | contrasting |
| train_20116 | In general, the performance of existing speech recognition systems, whose designs are predicated on relatively noise-free conditions, degrades rapidly in the presence of a high level of adverse conditions. | a recognizer can provide good performance even in very noisy background conditions if the exact testing condition is used to provide the training material from which the reference patterns of the vocabulary are obtained, which is practically not always the case. | contrasting |
| train_20117 | Thus, a system could map both verbs wipe and remove onto the same action scheme. | the apparently equivalent transformations from (1a) to (1b) and from (2a) to (2b) show otherwise. | contrasting |
| train_20118 | One may say that it is not semantically ambiguous. | simple algorithms such as maximal matching [6,9] and longest matching [6] may not be able to discriminate this kind of ambiguity. | contrasting |
| train_20119 | Within this framework, any type of feature can be used, enabling the system designer to experiment with interesting feature types, rather than worry about specific feature interactions. | in a rule based system, the system designer would have to consider how, for instance, WordNet-derived (Miller, 1995) information for a particular example interacts with part-of-speech-based information and chunking information. | contrasting |
| train_20120 | Since the documents were selected by subject, one may argue that the task of clustering entities will be much easier if the entities are clearly from different genres. | if this is true, then it may account for about 85% of the entities in the person-x corpus that occur only in one domain subject. | contrasting |
| train_20121 | The form of the salience metric, and the choice of features that factor into it, is governed by our knowledge about the way speech and gesture work. | the penalty function also requires parameters that weigh the importance of each factor. | contrasting |
| train_20122 | This is an uncommon phenomenon, and as such, was penalized highly. | anecdotally it appears that the presence of a disfluency makes this phenomenon more likely. | contrasting |
| train_20123 | Their approach of seeking to maximize this probability is similar to the salience-maximizing approach that we have described. | instead of using a parametric salience function, they learn a set of conditional probability distributions directly from the data. | contrasting |
| train_20124 | However, scalability of such systems is a bottleneck due to the heavy cost of authoring and maintenance of rule sets and inevitable brittleness due to lack of coverage in the rule sets. | data-driven approaches are robust and the procedure for model building is usually simple. | contrasting |
| train_20125 | Data-driven approaches are robust and provide a simple process of developing applications given the data from the application domain. | the reliance on domain-specific data is also one of the significant bottlenecks of data-driven approaches. | contrasting |
| train_20126 | For a system at ability level 0, the odds increase by another factor of 2.72 to 7.39, giving a probability of .88. | a system with an ability of -3 would have the even odds decrease by a factor of 2.72 to .369, yielding P_sq = .27. | contrasting |
| train_20127 | Second, for obvious reasons the raw-score estimates based on the Easy sets are considerably higher than those based on the Hard sets. | table 2 also shows that the standard deviations of the number-correct estimates obtained for the Easy sets exceed those of the Hard sets as well (sometimes by over 100%). | contrasting |
| train_20128 | There are many multi-word expressions whose hypernyms are their suffixes, and if some expressions share a common suffix, it is likely to be their hypernym. | if a hypernym candidate appears in a position other than as a suffix of a hyponym candidate, the hypernym candidate is likely to be an erroneous one. | contrasting |
| train_20129 | Sentences are generally demarcated by a major fall (or rise) in f0, lengthening of the final syllable, and following pauses. | the usefulness of prosodic information in sentence-internal parsing is less clear. | contrasting |
| train_20130 | Simple statistical tests show that there is in fact a significant correlation between the location of opening and closing phrase boundaries and all of the prosodic pseudo-punctuation symbols described above, so there is no doubt that these do convey information about syntactic structure. | adding the prosodic pseudo-punctuation symbols uniformly decreased parsing accuracy relative to input with no prosodic information. | contrasting |
| train_20131 | (2003) consider selection of individual AL methods at run-time. | their AL methods are only ever based on single-model approaches. | contrasting |
| train_20132 | For example, the categories of the Model 3 Collins parser distinguish between heads, arguments, and adjuncts and they mark some long-distance dependency paths; these distinctions can guide application-specific postprocessors in extracting important semantic relations. | state-of-the-art parsing systems based on deep grammars mark explicitly and in much more detail a wider variety of syntactic and semantic dependencies and should therefore provide even better support for meaning-sensitive applications. | contrasting |
| train_20133 | Moreover, a surprising variety of problems are attackable with FSTs, from part-of-speech tagging to letter-to-sound conversion to name transliteration. | language problems like machine translation break this mold, because they involve massive reordering of symbols, and because the transformation processes seem sensitive to hierarchical tree structure. | contrasting |
| train_20134 | Section 1 informally described the root-to-frontier transducer class R. We saw that R allows, by use of states, finite lookahead and arbitrary rearrangement of non-sibling input subtrees removed by a finite distance. | it is often easier to write rules that explicitly represent such lookahead and movement, relieving the burden on the user to produce the requisite intermediary rules and states. | contrasting |
| train_20135 | In arbitrary document collections, such patterns might be too variable to be easily detected by statistical means. | research has shown that texts from the same domain tend to exhibit high similarity (Wray, 2002). | contrasting |
| train_20136 | We conjecture that this difference in performance stems from the ability of content models to capture global document structure. | the other two algorithms are local, taking into account only the relationships between adjacent word pairs and adjacent sentence pairs, respectively. | contrasting |
| train_20137 | Keller and Lapata's (2003) results suggest that web-based frequencies can be a viable alternative to bigram frequencies obtained from smaller corpora or recreated using smoothing. | they do not demonstrate that realistic NLP tasks can benefit from web counts. | contrasting |
| train_20138 | Phone recognition is known to be less accurate than word recognition. | the second method can only generate phone strings that are substrings of the pronunciations of in-vocabulary word strings. | contrasting |
| train_20139 | We have catalogued a variety of such relationships, and note here that we believe it could prove useful to address semantic interdependencies among SCUs in future work that would involve adding a new annotation layer. | in our approach, SCUs are treated as independent annotation values, which has the advantage of affording a rigorous analysis of interannotator reliability (see following section). | contrasting |
| train_20140 | This makes sense in light of the fact that a score is dominated by the higher-weight SCUs that appear in a summary. | we wanted to study more precisely at what point scores become independent of the choice of models that populate the pyramid. | contrasting |
| train_20141 | First, an SCU is a set of contributors that are largely similar in meaning, thus SCUs differ from each other in both meaning and weight (number of contributors). | factoids are semi-formal expressions in a FOPL-style semantics, which are compositionally interpreted. | contrasting |
| train_20142 | The model in (6) is simplistic in that the relationships between the features across the clauses are not captured directly. | if two values of these features for the main and subordinate clauses co-occur frequently with a particular marker, then the conditional probability of these features on that marker will approximate the right biases. | contrasting |
| train_20143 | The majority of the main clauses in our data are sentence initial (80.8%). | there are differences among individual markers. | contrasting |
| train_20144 | Knowing the linear precedence of the two clauses is highly predictive of their type: 80.8% of the main clauses are sentence initial. | this type of positional information is typically not known when fragments are synthesised into a meaningful sentence. | contrasting |
| train_20145 | Given that different Machine Translation (MT) evaluation metrics are useful for capturing different aspects of translation quality, it becomes desirable to create MT systems tuned with respect to each individual criterion. | the maximum likelihood techniques that underlie the decision processes of most current MT systems do not take into account these application-specific goals. | contrasting |
| train_20146 | The two hypothesis translations are very similar at the word level and therefore the BLEU score, PER and the WER are identical. | we observe that the sentences differ substantially in their syntactic structure (as seen from parse trees in Figure 3), and to a lesser extent in their word-to-word alignments (Figure 1) to the source sentence. | contrasting |
| train_20147 | In (Och and Weber, 1998; Och et al., 1999), a two-level alignment model was employed to utilize shallow phrase structures: alignment between templates was used to handle phrase reordering, and word alignments within a template were used to handle phrase-to-phrase translation. | phrase-level alignment cannot handle long-distance reordering effectively. | contrasting |
| train_20148 | One is the PRank algorithm, a variant of the perceptron algorithm, that uses multiple biases to represent the boundaries between every two consecutive ranks (Crammer and Singer, 2001; Harrington, 2003). | as we will show in section 3.7, the PRank algorithm does not work on the reranking tasks due to the introduction of global ranks. | contrasting |
| train_20149 | In addition, we only want to maintain the order of two candidates if their ranks are far away from each other. | we do not care about the order of two translations whose ranks are very close, e.g. | contrasting |
| train_20150 | As a result, we cannot use the PRank algorithm in the reranking task, since there are no global ranks or boundaries for all the samples. | the approach of using pairwise samples does work. | contrasting |
| train_20151 | It achieves a BLEU score of 31.7% on the Baseline, 32.8% on the Best Feature, but only 32.6% on the Top Twenty features. | it is within the range of 95% confidence. | contrasting |
| train_20152 | Much of this research has used databases of speech read by actors or native speakers as training data (often with semantically neutral content) (Oudeyer, 2002; Polzin and Waibel, 1998; Liscombe et al., 2003). | such prototypical emotional speech does not necessarily reflect natural speech (Batliner et al., 2003), such as found in tutoring dialogues. | contrasting |
| train_20153 | Although much past work predicts only two classes (e.g., negative/non-negative) (Batliner et al., 2003; Ang et al., 2002; Lee et al., 2001), our experiments produced the best predictions using our three-way distinction. | to (Lee et al., 2001), our classifications are context-relative (relative to other turns in the dialogue), and task-relative (relative to tutoring), because like (Ang et al., 2002), we are interested in detecting emotional changes across our dialogues. | contrasting |
| train_20154 | Comparing the results of these combined (speech+text) feature sets with the speech versus text results in Table 1, we find that for autotext+speech-ident and all +ident feature sets, the combined feature set slightly decreases predictive accuracy when compared to the corresponding text-only feature set. | there is no significant difference between the best results in each table (all-text+speech+ident vs. all-text+ident). | contrasting |
| train_20155 | Comparing these results with the results in Tables 1 and 2, we find that while overall the performance of contextual non-combined feature sets shows a small performance increase over most non-contextual combined or non-combined feature sets, there is again a slight decrease in performance across the best results in each table. | there is no significant difference between these best results (all-text+glob-ident vs. all-text+speech+ident vs. all-text+ident). | contrasting |
| train_20156 | Ideally, by training acoustic models on target non-native speech, one would capture its specific characteristics just as training on native speech does. | collecting amounts of non-native speech that are large enough to fully train speaker-independent models is a hard and often impractical task. | contrasting |
| train_20157 | Yet, this is acceptable given the small amount of training data for the language model and the conversational nature of the speech. | performance degrades significantly for non-native speakers, with a word error rate of 52.0%. | contrasting |
| train_20158 | For example, when referring to bus stops by street intersections, all native speakers in our training set simply used "A and B", hence the word "intersection" was not in the language model. | many non-native speakers used the full expression "the intersection of A and B". | contrasting |
| train_20159 | In fact, this is not only true for non-native speakers and lexical entrainment is often described as a negotiation process between the speakers (Clark and Wilkes-Gibbs, 1986). | while it is possible for limited-domain system designers to establish a set of words and constructions that are widely used among native speakers, the variable nature of the expressions mastered by non-native speakers makes adaptation a desirable feature of the system. | contrasting |
| train_20160 | Partial Path - For the argument identification task, path is the most salient feature. | it is also the most data-sparse feature. | contrasting |
| train_20161 | On performing the search, we found that the overall performance improvement was not much different than that obtained by resolving overlaps as mentioned earlier. | we found that there was an improvement in the CORE ARGUMENT accuracy on the combined task of identifying and assigning semantic arguments, given hand-corrected parses, whereas the accuracy of the ADJUNCTIVE ARGUMENTS slightly deteriorated. | contrasting |
| train_20162 | Although there was an increase in F1 score when the language model probabilities were jointly estimated over all the predicates, this improvement is not statistically significant. | estimating the same using specific predicate lemmas showed a significant improvement in accuracy. | contrasting |
| train_20163 | Speech form is familiar to humans, and can convey information effectively (Nadamoto et al., 2001; Hayashi et al., 1999). | little electronic information is provided in speech form so far. | contrasting |
| train_20164 | However, little electronic information is provided in speech form so far. | there is a lot of information in text form, and it can be transformed into speech by speech synthesis. | contrasting |
| train_20165 | There are several studies that compare two corpora which have different styles, for example, written and spoken corpora or British and American English corpora, and try to find expressions unique to either of the styles (Kilgarriff, 2001). | those studies did not deal with paraphrases. | contrasting |
| train_20166 | This paper has focused only on paraphrasing predicates. | there are other kinds of paraphrasing which are necessary in order to paraphrase written language text into spoken language. | contrasting |
| train_20167 | On the one hand, we have three different variants of the single-word based model IBM4. | we have two phrase-based systems, namely the alignment templates and the one described in this work. | contrasting |
| train_20168 | This method yielded word classes that offered more robust count approximations for their member words. | both methods yielded similar results when embedded in the larger system, and so we will report on the results of using Good-Turing so as to remain more directly comparable to Dagan et al. | contrasting |
| train_20169 | The MaxEnt model selected the temerity as the antecedent of its (salience value: 0.30), preferring it to the correct antecedent the endowment (salience value: 0.10). | AltaVista found no occurrences of temerity's money or its variants on the web, and thus the unnormalized and normalized counts were 0. | contrasting |
| train_20170 | However, AltaVista found no occurrences of temerity's money or its variants on the web, and thus the unnormalized and normalized counts were 0. | endowment's money and its variants had unnormalized and normalized statistics of 1583 and 1.47 × 10^-3, respectively. | contrasting |
| train_20171 | In the cases in which statistics reinforced a wrong answer, no (reasonable) manipulation of statistical features or filters can rescue the prediction. | for the cases in which statistics could help, their successful use will depend on the existence of a formula that can capture these cases without changing the predictions for examples that the model currently classifies correctly. | contrasting |
| train_20172 | Since pronouns carry little semantics of their own, resolving them depends almost entirely on context. | even though context can be helpful for resolving definite NPs, context can be trumped by the semantics of the nouns themselves. | contrasting |
| train_20173 | For example, sources in a language that is translated to English will consistently use the same terminology, resulting in greater similarity between linked documents with the same native language. | sources from radio broadcasts may be transcribed much less consistently than text sources due to recognition errors, so that the expected similarity of a radio broadcast and a text source is less than that of two text sources. | contrasting |
| train_20174 | We refer to the statistics characterizing story pairs with the same source types as source-pair specific information. | to the source-specific thresholds used by CMU, we normalize the similarity measures based on the source-pair specific information, simultaneously with combining different similarity measures. | contrasting |
| train_20175 | SVM-based systems, such as that described in (Joachims, 1998), are typically among the best performers for the categorization task. | attempts to directly apply SVMs to TDT tasks such as tracking and link detection have not been successful; this has been attributed in part to the lack of enough data for training the SVM. | contrasting |
| train_20176 | The cosine distance between the word distributions for two documents is computed as: This measure has been found to perform well and was used by all the TDT 2002 link detection systems (unpublished presentations at the TDT 2002 workshop). | to the Euclidean distance based cosine measure, the Hellinger measure is a probabilistic measure. | contrasting |
| train_20177 | That is, they use patterns to independently discover semantic relationships of words. | for infrequent words, these patterns do not match or, worse yet, generate incorrect relationships. | contrasting |
| train_20178 | Given a typical annotation rate of 5,000 words per hour, we estimated that setting up a name finder for a new problem would take four person-days of annotation work - a period we considered reasonable. | this user's problems were too dynamic for that much setup time. | contrasting |
| train_20179 | Capitalization models presented in most previous approaches are monolingual because the models are estimated only from monolingual texts. | for capitalizing machine translation outputs, using only monolingual capitalization models is not enough. | contrasting |
| train_20180 | As one can see, our features are "coarse-grained" (e.g., the language model feature). | Kim and Woodland (2004) and Roark et al. | contrasting |
| train_20181 | Unlike a real phrase aligner, the NPA need not wait for the training of the translation model to finish, making it possible to parallelize translation model training and capitalization model training. | we believe that a real phrase aligner may make phrase alignment quality higher. | contrasting |
| train_20182 | Given that storing all of these phrases leads to very large phrase tables, many research systems simply limit the phrases gathered to those that could possibly influence some test set. | this is not feasible for true production MT systems, since the data to be translated is unknown. | contrasting |
| train_20183 | Table 5 gives the percentage of these which have translations in each of the three training corpora, if we do not use paraphrasing. | after expanding the phrase table using the translations of paraphrases, the coverage of the unique test set phrases goes up dramatically (shown in Table 6). | contrasting |
| train_20184 | The computational cost of training the DTs on large quantities of data is comparable to that of training phrase tables on the same data - large but manageable - and increases linearly with the amount of training data. | currently there is a major problem with DT training: the low proportion of Chinese-English sentence pairs that can be fully segment-aligned and thus be used for DT training (about 27%). | contrasting |
| train_20185 | The way to show that one graph element does not follow from another is to make the cost of aligning them high. | since we are embedded in a search for the lowest-cost alignment, this will just cause the system to choose an alternate alignment rather than recognizing a non-entailment. | contrasting |
| train_20186 | Such a sentence is not observed frequently in corpora, and will not be used as clues to generate rules in practice. | we frequently observe sentences of the second type in corpora, and our method generates the paraphrases from the verb-verb co-occurrences taken from such sentences. | contrasting |
| train_20187 | S-VV(n, vcon, vpre, arg, arg') = P_arg(n, vcon) P_arg(n, vpre) / P(n)^2; S-NV(n, vcon, vpre) = P_coord(vcon, vpre); MI(n, vcon, vpre) = P_coord(vcon, vpre) / (P(vcon) P(vpre)); Cond(n, vcon, vpre, arg, arg') = P_coord(vcon, vpre, arg, arg') P_arg(n \| vcon) P_arg(n \| vpre) / (P_arg(n, vpre) P(n)); Rand(n, vcon, vpre, arg, arg') = random number. S-VV was obtained by approximating the probabilities of coordinated sentences, as in the case of BasicS. | we assumed the occurrences of two verbs were independent. | contrasting |
| train_20188 | These methods were discussed in (Yang and Pedersen, 1997). | with any of the three feature ranking criteria, cross-validation showed that selecting all features gave the best average validation performance. | contrasting |
| train_20189 | In general, when the test data is similar to the training data, IG (or CHI) is advantageous over F (Yang and Pedersen, 1997). | in this case when the test domain is different from the training domains, F shows advantages for adaptation. | contrasting |
| train_20190 | Moreover, it is often difficult to find experts in these languages both for the expensive annotation effort and even for language-specific clues. | comparable multilingual data (such as multilingual news streams) are increasingly available (see section 4). | contrasting |
| train_20191 | Generally, the ensemble of classifiers is generated by training on different subsets of data, rather than different features. | there is some literature within unstructured classification on combining models trained on feature subsets. | contrasting |
| train_20192 | A recent attempt to combine outputs of different alignments views the combination problem as a classifier ensemble in the neural network framework (Ayan et al., 2005). | this method is subject to the unpredictability of random network initialization, whereas ACME is guaranteed to find the model that maximizes the likelihood of training data. | contrasting |
| train_20193 | Also, when we want to combine two models for prediction, finding the Viterbi alignment argmax_z p_1(z \| x) p_2(z \| x) is intractable for HMM models (by a reduction from quadratic assignment), and a hard intersection argmax_{z_1} p_1(z_1 \| x) ∩ argmax_{z_2} p_2(z_2 \| x) might be too sparse. | we can threshold the product of two edge posteriors quite easily: z = {z_ij = 1 : p_1(z_ij \| x) p_2(z_ij \| x) > δ}. We noticed a 5.8% relative reduction in AER (for our best model) by using posterior decoding with a validation-set optimized threshold δ instead of using hard intersection of Viterbi alignments. | contrasting |
| train_20194 | Initializing the HMM with model 1 parameters alleviates this problem. | if we jointly train two HMMs starting from a uniform initialization, the HMMs converge to a surprisingly good solution. | contrasting |
| train_20195 | One of the most useful features for the basic matching model is, of course, the set of predictions of IBM model 4. | computing these features is very expensive and we would like to build a competitive model that doesn't require them. | contrasting |
| train_20196 | The example in Figure 4 shows another interesting phenomenon: the multi-fertile alignments for not and député are learned even without lexical fertility features (Figure 4b), because the Dice coefficients of those words with their two alignees are both high. | the surface association of aurait with have is much higher than with would. | contrasting |
| train_20197 | We have avoided using expensive-to-compute features like IBM model 4 predictions up to this point. | if these are available, our model can improve further. | contrasting |
| train_20198 | For example, the antecedent for the pronoun subject they in the first example of work in Table 3 should be ringers, an agent subject that is typical for Sense 1 (exert oneself in an activity). | the feature extraction module found the wrong antecedent changes, which is an unlikely fit for the intended verb sense. | contrasting |
| train_20199 | This may be due to the indiscriminately high confidence scores; or it could indicate that classifiers, which are geared at distinguishing between known classes rather than detecting objects that differ from all seen data, are not optimally suited to the task. | one further disadvantage of this approach is that, as mentioned above, it can only be applied to lemmas with more than one annotated sense. | contrasting |
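
For reference, here is a minimal sketch of loading and inspecting rows like the ones above with the Hugging Face `datasets` library. The repository id `user/contrasting-sentence-pairs` is a placeholder (the actual dataset name is not shown on this page); the code assumes only the schema documented in the table header: an `id` string, two sentence strings, and a 4-class string `label`.

```python
# Minimal sketch, assuming the `datasets` library is installed and that this
# table corresponds to the "train" split of a dataset with columns
# id, sentence1, sentence2, label. The repo id below is hypothetical.
from datasets import load_dataset

ds = load_dataset("user/contrasting-sentence-pairs", split="train")  # placeholder repo id

# Inspect the schema; per the header, label is a string column with 4 classes.
print(ds.features)

# Fetch one of the rows shown above by its id.
matches = ds.filter(lambda ex: ex["id"] == "train_20100")
row = matches[0]
print(row["sentence1"])
print(row["sentence2"], "->", row["label"])  # e.g. "... -> contrasting"
```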