Dataset columns: id (string, 7–12 chars) · sentence1 (string, 6–1.27k chars) · sentence2 (string, 6–926 chars) · label (string, 4 classes)
train_6400
We return to these results later (Section 7.2.3) to offer some reasons as to why this might be the case.
the results using CFORM confirm our hypothesis that canonical forms, which reflect the overall behavior of a verb+noun type, are strongly informative about the class of a token.
contrasting
train_6401
This method is similar to ours, in that it also uses type-based knowledge to determine the class of each token in context.
their method is supervised, whereas our methods are unsupervised.
contrasting
train_6402
In this approach, linguistic-quantitative analyses come as a later step to facilitate the interpretation of discourse types.
the bottom-up approach begins with a linguistic-quantitative analysis based on the automatic segmentation of texts into discourse units on the basis of vocabulary distributional patterns.
contrasting
train_6403
In this article, relation disambiguation experiments are only presented for Factotum, given that the others do not readily provide sufficient training data.
the other inventories are discussed because each provides relation types incorporated into the inventory used below for the definition analysis (see Section 3.5).
contrasting
train_6404
The full Cyc KB is proprietary, which has hindered its adoption in natural language processing.
to encourage broader usage, portions of the KB have been made freely available to the public.
contrasting
train_6405
This has the advantage of allowing more training data to be used in the derivation of the clues indicative of each semantic role.
if there were sufficient annotations for a particular preposition, then it would be advantageous to have a dedicated classifier.
contrasting
train_6406
Currently, FrameNet roles are always mapped to the same common inventory role (e.g., place to location).
this should account for the frame of the annotation and perhaps other context information.
contrasting
train_6407
His probabilistic model computes the probability of a preposition $p$ given a noun-noun pair $n_1$-$n_2$ and finds the most likely preposition paraphrase $p^* = \arg\max_p P(p \mid n_1, n_2)$.
as Lauer noticed, this model requires a very large training corpus to estimate these probabilities.
contrasting
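The paraphrase model quoted in record train_6407 can be made concrete in a few lines. Below is a minimal sketch, assuming maximum-likelihood estimation from corpus counts; the `counts` table and the noun pair are invented for illustration, not taken from Lauer's data.

```python
from collections import Counter

# Hypothetical counts of (preposition, head noun, modifier noun) triples;
# in Lauer's model these would be estimated from a very large corpus.
counts = Counter({
    ("of", "door", "car"): 120,
    ("in", "door", "car"): 15,
    ("for", "door", "car"): 8,
})

def best_preposition(n1: str, n2: str) -> str:
    """Return p* = argmax_p P(p | n1, n2) under maximum-likelihood estimation.

    The denominator P(n1, n2) is constant across prepositions, so maximizing
    the joint count is equivalent to maximizing the conditional probability.
    """
    candidates = {p: c for (p, h, m), c in counts.items() if (h, m) == (n1, n2)}
    if not candidates:
        raise ValueError(f"no counts for noun pair ({n1}, {n2})")
    return max(candidates, key=candidates.get)

print(best_preposition("door", "car"))  # -> "of"
```

The sparse-counts failure mode raised in the record's second sentence shows up here directly: any noun pair absent from `counts` cannot be paraphrased at all.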
train_6408
This approach works most of the time for relations such as LOCATION and TEMPORAL because in both English and Romance languages they rely mostly on prepositions indicating location and time and less on underspecified prepositions such as of or de.
a closer look at these relations shows that some of the noun-noun pairs that encode them are not symmetric and this is true for both English and Romance.
contrasting
train_6409
Moreover, each cross-linguistic study requires translated data, which is not easy to obtain in electronic form, especially for most of the world's languages.
more and more parallel corpora in various languages are expected to be forthcoming.
contrasting
train_6410
(103) some(x, mouse(x), def(y, table(y), appear(e_1, x) ∧ from(e_2, e_1, e_3) ∧ under(e_3, u, y))) On this analysis, the first preposition from is transitive, and it takes the PP under the table, or rather the event introduced by this preposition, as an argument.
to ERG, this analysis allows the last argument of a preposition like from to be an event, e_3, as well as an individual.
contrasting
train_6411
For example, Gapp (1995) defines the area of proximity as the region within ten times the size of the landmark object in each direction.
neither of these approaches considers the effect that the locations of other objects in the scene have on the proximity.
contrasting
train_6412
Finally, Regier and Carlson (2001) developed a vector sum algorithm to compute the applicability of a projective relation between a landmark and a target.
as with previous topological models, none of these models consider the influence of other objects in the context of the landmark target relationship.
contrasting
train_6413
These three chapters will most likely be of least interest to the computational linguistics community.
two topics that might be of interest are the use of wildcard queries by users and spelling correction of queries, discussed in Chapter 3.
contrasting
train_6414
The intermediate referents $e^d_{\rho,t}$ in this framework correspond to the traditional notion of compositional semantics (Frege 1892), in which meanings of composed constituents (at higher levels in the HHMM hierarchy) are derived from meanings of component constituents (at lower levels in the hierarchy).
in addition to the referents … [figure caption: A graphical representation of the dependencies in the referential semantic language model described in this article (compare with Figure 2).]
contrasting
train_6415
These hierarchic regular expressions are defined to resemble expansion rules in a context-free grammar (CFG).
unlike CFGs, HHMMs have memory limits on nesting, in the form of a maximum depth D beyond which no expansion may take place.
contrasting
train_6416
These domains also provide an ideal proving ground for a referential semantic language model, because directives in these domains mostly refer to a world model that is shared by the user and the interfaced application, and because the idiosyncratic language used in such domains makes it more resistant to domain-independent corpus training than other domains.
domains such as database query (e.g., of airline reservations), dictation, or information extraction are less likely to benefit from a referential semantic language model, because the world model in such domains is not shared by either the speaker (in database query) or by the interfaced application (in dictation or information extraction), or because these domains are relatively fixed, so the expense of maintaining linguistic training corpora in these domains can often be justified.
contrasting
train_6417
Ordinarily this is thought of as being mediated by syntax, which is covered in this article only through a relatively simple framework of bounded recursive HHMM state transitions.
the bounded HHMM representation used in this paper has been applied (without semantics) to rich syntactic parsing as well, using a transformed grammar to minimize stack usage to cases of center-expansion (Schuler et al.
contrasting
train_6418
2004) and semantic role labeling (Punyakanok, Roth, and Yih 2005), we first pursue a purely data-driven approach where the predicate of a multimodal command and its arguments are determined by classifiers trained on an annotated corpus of multimodal data.
given the limited amount of data available, this approach does not provide an improvement over the grammar-based approach.
contrasting
train_6419
Given the higher chance of error in speech recognition compared to gesture recognition, we focus on processing the speech recognition output to achieve robust multimodal understanding.
these techniques are also equally applicable to gesture recognition output.
contrasting
train_6420
Johnston (1998a, 1998b) utilized techniques from natural language processing (unification-based grammars and chart parsers) to extend the unification-based approach and enable handling of inputs with more than one gesture, visual parsing, and more flexible and declarative encoding of temporal and spatial constraints.
to the unification-based approaches, which separate speech parsing and multimodal integration into separate processing stages, Johnston and Bangalore (2000, 2005) proposed a one-stage approach to multimodal understanding in which a single grammar specified the integration and understanding of multimodal language.
contrasting
train_6421
The brittleness of using a grammar as a language model is typically alleviated by building SLMs that capture the distribution of the user's interactions in an application domain.
such SLMs are trained on large amounts of spoken interactions collected in that domain, a tedious task in itself in speech-only systems, but an often insurmountable task in multimodal systems.
contrasting
train_6422
The techniques are presented in the context of SLMs, since spoken language interaction tends to be a dominant mode in our application and has higher perplexity than the gesture interactions.
most of these techniques can also be applied to improve the robustness of the gesture recognition component in applications with higher gesture language perplexity.
contrasting
train_6423
In a grammar-based speech-only system, if the language model of ASR is derived directly from the grammar, then every ASR output can be parsed and assigned a meaning by the grammar.
using an SLM results in ASR outputs that may not be parsable by the grammar and hence cannot be assigned a meaning by the grammar.
contrasting
train_6424
Although a grammar could be written so as to be easily portable across applications, it suffers from being too prescriptive and has no metric for the relative likelihood of users' utterances.
in the data-driven approach a weighted grammar is automatically induced from a corpus and the weights can be interpreted as a measure of the relative likelihoods of users' utterances.
contrasting
train_6425
As mentioned earlier, a hand-crafted grammar typically suffers from the problem of being too restrictive and inadequate to cover the variations and extra-grammaticality of users' input.
an n-gram language model derives its robustness by permitting all strings over an alphabet, albeit with different likelihoods.
contrasting
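The robustness property described in record train_6425 comes from smoothing: a smoothed n-gram model assigns every string over the vocabulary a nonzero (if small) probability, whereas a strict hand-crafted grammar rejects anything it does not cover. A minimal sketch with add-one smoothing; the toy corpus is invented for illustration.

```python
from collections import Counter

# Toy corpus and vocabulary -- invented for illustration.
corpus = ["show me restaurants near here".split(),
          "show me hotels near the airport".split()]
vocab = {w for sent in corpus for w in sent}

bigrams = Counter((a, b) for sent in corpus for a, b in zip(sent, sent[1:]))
unigrams = Counter(w for sent in corpus for w in sent)

def bigram_prob(w_prev: str, w: str) -> float:
    # Add-one (Laplace) smoothing: every bigram over the vocabulary gets a
    # nonzero probability, so no in-vocabulary string is ruled out.
    return (bigrams[(w_prev, w)] + 1) / (unigrams[w_prev] + len(vocab))

print(bigram_prob("show", "me"))       # seen bigram: relatively likely
print(bigram_prob("airport", "show"))  # unseen bigram: small but nonzero
```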
train_6426
It is immediately apparent that the hand-crafted grammar as a language model performs poorly and a language model trained on the collected domain-specific corpus performs significantly better than models trained on derived data.
it is encouraging to note that a model trained on a derived corpus (obtained from combining the migrated out-of-domain corpus and a corpus created by sampling the in-domain grammar) comes within 10% word accuracy of the model trained on the collected corpus.
contrasting
train_6427
There is some more recent work on using structured classification approaches to transduce sentences to logical forms (Papineni, Roukos, and Ward 1997; Thompson and Mooney 2003; Zettlemoyer and Collins 2005).
it is not clear how to extend these approaches to apply to lattice input-an important requirement for multimodal processing.
contrasting
train_6428
We adopted the edit-based technique used on speech utterances to improve robustness of multimodal understanding.
unlike a speech utterance, a gesture string has a structured representation.
contrasting
train_6429
However, scalability of such systems is a bottleneck due to the heavy cost of authoring and maintenance of rule sets and inevitable brittleness due to lack of coverage.
data-driven approaches are robust and provide a simple process of developing applications given availability of data from the application domain.
contrasting
train_6430
This can be complex and expensive, involving a detailed labeling guide and instructions for annotators.
in this approach, if data is used, all that is needed is transcription of the audio, a far more straightforward annotation task.
contrasting
train_6431
Many approaches to automatic sentiment analysis begin with a large lexicon of words marked with their prior polarity (also called semantic orientation).
the contextual polarity of the phrase in which a particular instance of a word appears may be quite different from the word's prior polarity.
contrasting
train_6432
Acquiring the polarity of words and phrases is undeniably important, and there are still open research challenges, such as addressing the sentiments of different senses of words (Esuli and Sebastiani 2006b; Wiebe and Mihalcea 2006), and so on.
what the polarity of a given word or phrase is when it is used in a particular context is another problem entirely.
contrasting
train_6433
The adjective well is considered positive, and indeed it is positive in this context.
the same is not true for the words reason and reasonable.
contrasting
train_6434
1966; Hatzivassiloglou and McKeown 1997), we largely retained their original polarity.
we did change the polarity of a word if we strongly disagreed with its original class.
contrasting
train_6435
In terms of accuracy, this classifier does not perform much worse than the BoosTexter and TiMBL classifiers that use all the neutral-polar features: The SVM word+priorpol baseline classifier has an accuracy of 75.6%, and both the BoosTexter and TiMBL classifiers have an accuracy of 76.5%.
the BoosTexter and TiMBL classifiers using all the features perform notably better in terms of polar recall and F-measure.
contrasting
train_6436
Also, because the set of polar instances being classified is the same for all the algorithms, condition 1 allows us to compare the performance of the polarity features across the different algorithms.
condition 2 is the more natural one.
contrasting
train_6437
For the positive class, precisions and recalls for the word+priorpol baseline range from 63.7 to 76.7.
it is with the positive class that polarity features seem to help the most.
contrasting
train_6438
It follows that a larger lexicon will have a greater coverage of sentiment expressions.
expanding the lexicon with automatically acquired prior-polarity tags may result in an even greater proportion of neutral instances to contend with.
contrasting
train_6439
As can be seen, some of the top features are either too specific (landlocked, airspace), and thus less reliable, or too general (destination, ambition), and thus not indicative, as they may co-occur with many different types of words.
intuitively more characteristic features of country, like population and governor, occur further down the list. [Table 2 caption: The top 20 most similar words for country (and their ranks) in the similarity list of LIN, followed by the next four words in the similarity list that were judged as entailing at least in one direction.]
contrasting
train_6440
The resulting increase in precision was much smaller, of about 2%, showing that most of the potential benefit is exploited in the first bootstrapping iteration (which is not uncommon for natural language data).
computing the bootstrapping weight twice increases computation time significantly, which led us to suggest a single bootstrapping iteration as a reasonable cost-effectiveness tradeoff for our data.
contrasting
train_6441
Table 9 shows that most of the top 10 common features for country-state are now ranked highly for both words.
there are only two common features (among the top 100 features) for the incorrect pair country-party, both with quite low ranks (compare with Table 7), while the rest of the common features for this pair did not pass the top 100 cutoff.
contrasting
train_6442
In recent work on distributional similarity (Curran 2004; Weeds and Weir 2005) a variety of alternative weighting functions were compared.
the quality of these weighting functions was evaluated only through their impact on the performance of a particular word similarity measure, as we did in Section 5.
contrasting
train_6443
For example, Allen's (1995) Natural Language Understanding presented mostly a symbolic approach to NLP, whereas Manning and Schütze's (1999) Foundations of Statistical Natural Language Processing presented an exclusively statistical approach.
there is also a negative side to the wide coverage: it is probably impossible to present material in an order that would satisfy audiences from different backgrounds, in particular, linguists vs. computer scientists and engineers.
contrasting
train_6444
It's probably best not to bother with linguistic software in the early stages of linguistic description.
things change once the descriptive notation has stabilized, and a "linguistic exploration" workflow is established.
contrasting
train_6445
are established, the recognizer when it observes A can conduct a search leading to the maximizing word string W. In the case of an artificial grammar, such as the New Raleigh grammar, the model P(W) is in fact equal to the actual probability P(W).
furthermore, because the set S of word strings W over which we maximize can be listed, the difficulty of the task can be measured approximately (because the acoustic similarity of words is not taken into account) by entropy. Because the participants in the ARPA project introduced the false measure "branching factor" (which was the arithmetic mean of the out-of-state branching of their finite-state grammar), we replaced H(W) as a measure of difficulty by perplexity, defined by $PP = 2^{H(W)}$. It turned out that the New Raleigh grammar had approximate perplexity 7 whereas the Resource Management grammar had 2.
contrasting
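The definition recovered in record train_6445, $PP = 2^{H(W)}$, is straightforward to compute from a model's per-word probabilities. A minimal sketch; the probability values below are invented for illustration.

```python
import math

def perplexity(word_probs):
    """PP = 2^{H(W)}, with H estimated as the average negative log2
    probability the model assigns to each word in the test string."""
    h = -sum(math.log2(p) for p in word_probs) / len(word_probs)
    return 2 ** h

# A model that assigns uniform probability 1/7 to each word has perplexity 7
# (up to floating point) -- the sense in which perplexity generalizes the
# "branching factor" the passage criticizes.
print(perplexity([1 / 7] * 10))  # -> 7.0
```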
train_6446
My former colleagues will not tell me: Theirs is a very hush-hush operation!
we did start working on machine translation (MT) in 1987.
contrasting
train_6447
The issue is more important when dealing with small-to-moderate data sets.
even for a 130K test set (Sections 22-24 of the Wall Street Journal corpus, standardly used as a test set in POS-tagging benchmarks), it is useful to know the estimated noise rate, as it is not clear that all reported improvements in performance would come out significant.
contrasting
train_6448
"the samples match not because the classifier gets the label right, but because it overuses the same label as the human coder" (Reidsma and Carletta 2008, page 232).
if disagreements are random classification noise (the label of any instance can be flipped with a certain probability), a performance estimate based on observed data would often be lower than performance on the real data, because the noise that corrupted it was ignored by the classifier (see Figure 2(d) therein).
contrasting
train_6449
Learning in the presence of noise is an active research area in machine learning.
annotation noise is different from existing well-understood noise models.
contrasting
train_6450
It seems our method is over 10% below state of the art in precision on the MSR data.
we find that multiword expressions are consistently segmented into smaller words.
contrasting
train_6451
For instance, despite the fact that the sentences He is affected by AIDS and HIV is a virus express closely-related concepts, their similarity is zero in the VSM because they have no words in common (they are represented by orthogonal vectors).
due to the ambiguity of the word virus, the similarity between the sentences The laptop has been infected by a virus and HIV is a virus is greater than zero, even though they convey very different messages.
contrasting
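Both effects described in record train_6451 are easy to reproduce with a bag-of-words vector space model. A minimal sketch, assuming stop words are removed first (standard in the VSM; otherwise the function word "is" would already overlap); the stop-word list here is a toy one.

```python
from collections import Counter
from math import sqrt

STOP = {"he", "is", "a", "by", "the", "has", "been"}  # toy stop-word list

def bow(s: str) -> Counter:
    # Bag-of-words vector: content-word frequencies.
    return Counter(w for w in s.lower().split() if w not in STOP)

def cosine(s1: str, s2: str) -> float:
    v1, v2 = bow(s1), bow(s2)
    dot = sum(v1[w] * v2[w] for w in v1)
    norm = (sqrt(sum(c * c for c in v1.values()))
            * sqrt(sum(c * c for c in v2.values())))
    return dot / norm if norm else 0.0

# No content words in common: similarity 0 despite related meaning.
print(cosine("He is affected by AIDS", "HIV is a virus"))  # -> 0.0
# Shared but ambiguous word "virus": similarity > 0, unrelated meaning.
print(cosine("The laptop has been infected by a virus", "HIV is a virus"))
```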
train_6452
Weather forecast generation is probably the closest that NLG comes to a "standard" application domain, and hence seems a good choice for validation studies from this perspective.
though, one could also argue that weather-forecast generators are atypical in that the language they generate tends to be very simple, even by the standards of NLG systems: very limited syntax (which differs from conventional English), very small vocabulary, no real text structure above the sentence level, and so on.
contrasting
train_6453
This procedure identified the same four significant differences in Accuracy as in Table 4, namely, SUMTIME is significantly better than all systems except pCRU-greedy and Template.
it identified only three significant differences in Clarity, namely, SUMTIME is significantly better than Template and pCRU-random, and pCRU-greedy is better than pCRU-random (this is a subset of the significant differences in Clarity shown in Table 4).
contrasting
train_6454
In principle, it would have been preferable to ask the forecasters to write texts based on content tuples (the actual input to the systems), but this is not a natural task for forecasters (they write texts from data, not from content tuples).
asking them to rewrite corpus texts meant they were unable to change the content, and focused on lexical choice and syntactic structure, as intended.
contrasting
train_6455
If only texts which communicate the same content at the content tuple level are included in the analysis, then and SE are strongly correlated with clarity judgments (r > 0.95), and BLEU-4 also correlates significantly (Table 8).
no metric correlates significantly with human accuracy judgments under any analysis.
contrasting
train_6456
Both experiments required similar time commitments from similar subjects.
subjects (and the domain experts who facilitated subject recruitment) were much more enthusiastic about testing the effectiveness of a system which they might themselves use; they were less enthused about testing hypotheses about correlations between NLG evaluation metrics.
contrasting
train_6457
But if we are applying multiple hypothesis corrections, then there is a major drawback to including a large number of metrics in the study, which is that this will make it more difficult to find statistically significant correlations.
perhaps it is wrong to use such strict statistics in computational linguistics, and indeed we are aware of many reports in computational linguistics which present one-tailed p-values, do not apply multiple hypothesis corrections, present post hoc analyses as significant, and/or use parametric tests to analyse data which does not have the characteristics assumed by the parametric test.
contrasting
train_6458
Finally, evaluation with metrics is entirely reproducible.
despite the cost-effectiveness and other appealing aspects of automatic metrics in shared tasks, we do not believe that shared tasks in NLG should use automatic metrics as the sole evaluation criterion.
contrasting
train_6459
Because our results suggest that current automatic metrics are not highly accurate predictors of the quality of texts produced by NLG systems, we recommend developers be cautious in using metrics for diagnostic evaluation, and do not use metrics for automatic parameter tuning.
automatic metrics do have a potential advantage in small diagnostic evaluations, which is that they are not influenced by the individual preferences of a small number of human subjects.
contrasting
train_6460
is perhaps the best metric to use for this purpose (out of the ones we investigated).
existing metrics should not be used to evaluate the content of texts.
contrasting
train_6461
One way around this difficulty is to stipulate that all rules must be binary from the outset, as in Inversion Transduction Grammar (ITG) (Wu 1997) and the binary SCFG employed by the Hiero system (Chiang 2005) to model the hierarchical phrases.
the rule extraction method of Galley et al.
contrasting
train_6462
These two binarizations are no different in the translation-model-only decoding described previously, just as in monolingual parsing.
in the source-channel approach to machine translation, we need to combine probabilities from the translation model (TM) (an SCFG) with the language model (an n-gram), which has been shown to be very important for translation quality (Chiang 2005).
contrasting
train_6463
This is primarily of theoretical interest, as we found that they constitute a small fraction of all rules, and removing these did not affect our Chinese-to-English translation results.
non-binarizable rules are shown to be important in explaining existing hand-aligned data, especially for other language pairs such as German-English (see Section 5.2, as well as Wellington, Waxmonsky, and Melamed [2006]).
contrasting
train_6464
Any combination of two subsets of the rule's nonterminals involves the indices for the spans of each subset.
some of the indices are tied together: If we are joining two spans into one span in the new item, one of the original spans' end-points must be equal to another span's beginning point.
contrasting
train_6465
Sometimes requests match each other quite well, suggesting an approach where a new request is matched with an old one, and the corresponding response is reused.
analysis of our corpus shows that this does not occur very often, because unlike response e-mails, request e-mails exhibit a high language variability: There are many customers who write these e-mails, and they differ in their background, level of expertise, and pattern of language usage.
contrasting
train_6466
Hence, what is important when matching a request to a response is the number of (significant) terms in common, rather than their frequency.
when matching a request to a request, or a request to a request-response pair, term frequency would be more indicative of the goodness of the match, as the document also has a request component.
contrasting
train_6467
As argued in the Introduction, when it comes to obtaining information quickly on-line, this option may be preferable to having to wait for a human-generated response.
the document-level approach is an all-or-nothing approach: If there is insufficient evidence for a complete response, then no automated response is generated.
contrasting
train_6468
This is because SC_1 includes sentences from responses whose requests mention different faulty products, for example, monitor, printer, or notebook.
if there are more cases of faulty monitors in the corpus than other faulty products, then requests about repairing monitors will have a higher prediction probability than requests about repairing other products.
contrasting
train_6469
However, if there are more cases of faulty monitors in the corpus than other faulty products, then requests about repairing monitors will have a higher prediction probability than requests about repairing other products.
to prediction probability, SVM reliability reflects its overall performance (on the training data), and is independent of particular requests.
contrasting
train_6470
As seen in cluster SC_2 in Figure 3, it is possible for an SC to be strongly predicted without it being sufficiently cohesive for a confident selection of a representative sentence.
sometimes the ambiguity can be resolved through cues in the request.
contrasting
train_6471
At first glance, using the best method may appear too lenient, as in practice, we cannot always automatically select this method in advance.
these averages also suffer from the fact that in many cases only the Doc-Ret method is applicable, but its performance is poor.
contrasting
train_6472
This is because we hoped that our trial subjects would prefer Sent-Hybrid to Sent-Pred, as the former is designed to better tailor a response to a request.
we cannot determine from this result whether indeed there is no difference between the sentence-based methods, or whether such a difference simply could not be observed from our test sample of at most 80 cases, which constitutes 1.8% of the corpus used in our automatic evaluation (as indicated previously, it would be quite difficult to conduct user studies with a much larger data set).
contrasting
train_6473
Following Lekakos and Giaglis (2007), one approach for achieving this objective consists of applying supervised learning, where a winning method is selected for each case in the training set, all the training cases are labeled accordingly, and then the system is trained to predict a winner for unseen cases.
in our situation, there is not always one single winner (two methods can perform similarly well for a given request), and there are different ways to pick winners (for example, based on F-score or precision).
contrasting
train_6474
For example, for w = 0.5, the precision and recall values of Cluster 16 (Figure 8(b)) translate to F-scores of 0.895 and 0.865 for Doc-Pred and Sent-Pred, respectively, leading to a choice of Doc-Pred.
for w = 0.75, the respective F-scores are 0.897 and 0.914, leading to a choice of Sent-Pred.
contrasting
train_6475
We evaluate the meta-learning system by looking at the quality of the response produced by the method selected by this system, where, as done in Section 4, quality is measured using F-score and precision.
here we employ 5-fold cross-validation (instead of 10-fold) to ensure that we get a good spread of selected methods in each testing split.
contrasting
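For readers unfamiliar with the 5-fold setup mentioned in record train_6475, here is a minimal sketch using scikit-learn's `KFold`. The placeholder corpus and the per-fold training/evaluation steps are assumptions for illustration; the original system's implementation is not described in the passage.

```python
from sklearn.model_selection import KFold

requests = [f"request_{i}" for i in range(20)]  # placeholder corpus

for fold, (train_idx, test_idx) in enumerate(
        KFold(n_splits=5, shuffle=True, random_state=0).split(requests)):
    # Train the method-selection meta-learner on train_idx, then score the
    # chosen method's responses (F-score / precision) on test_idx.
    print(f"fold {fold}: {len(train_idx)} train, {len(test_idx)} test")
```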
train_6476
Note that the baselines do not have an estimated precision because they do not use meta-learning.
for completeness, we implement the practical system for them as well, with a threshold of 0.8 on actual precision.
contrasting
train_6477
Because the results of all the methods are comparable, no learning is required: At each stage of the "cascade of methods," the method that performs best is selected.
to these two systems, our system employs methods that are not comparable, because they use different metrics.
contrasting
train_6478
Such technologies require significant human input, and are difficult to create and maintain (Delic and Lahaix 1998).
the techniques examined in this article are corpus-based and data-driven.
contrasting
train_6479
The operator is assisted in this task by the retrieval results: The system highlights the request-relevant sentences in the ranked responses.
there is no attempt to automatically generate a single response.
contrasting
train_6480
A fine example of this cross-over is Chapter 9, "Kernel-Based Machine Translation," in which a novel approach to estimating translation models is presented.
this promise is not entirely fulfilled, as some contributions either fail to make use of machine learning or are somewhat obscure, unlikely to impact on the mainstream SMT community.
contrasting
train_6481
This is an interesting problem with relevance for commercial MT systems which must avoid nonsensical literal translations of named entities.
the treatment takes the form of a system description and fails to make use of any machine learning, thus feeling somewhat out of place in this book.
contrasting
train_6482
This model maintains a bounded store of complete but unattached constituents as a buffer, and operates on them using a variety of specialized memory manipulation operations, deferring certain attachment decisions until the contents of this buffer indicate it is safe to do so.
(the model described in this article maintains a store of incomplete constituents using ordinary stack-like push and pop operations, defined to allow constituents to be composed before being completely recognized.)
contrasting
train_6483
This is essentially an ambiguity between arc-eager (in-element) and arc-standard (cross-element) composition strategies, as described by Abney and Johnson (1991).
an ordinary (purely arc-standard) parser with an unbounded stack would only hypothesize analysis (b), avoiding this ambiguity.
contrasting
train_6484
Semantic distance can also be measured more explicitly, by using the relations in an ontology as the direct encoding of semantic association.
such approaches have generally been limited to calculating the distance between two individual concepts, rather than capturing the distance between two sets of concepts corresponding to two texts.
contrasting
train_6485
For example, researchers have developed measures of semantic distance between texts that apply distributional distances to concept vectors of frequencies rather than to word vectors (McCarthy 2000; Mohammad and Hirst 2006).
these approaches only make pairwise comparisons between the elements of the concept vectors, and do not take into account the important ontological relations among the concepts.
contrasting
train_6486
In order to capture such relations, other methods have instead integrated distributional information into an ontological method.
such approaches have heretofore been limited to measuring distance between two individual concepts.
contrasting
train_6487
If i ∈ D, b(i) is set to the negative of the normalized demand frequency, $-f_D(i)$, since demand is indicated by a value less than zero.
i may be part of both the supply and demand profiles, and then b(i) must be set to the net supply/demand at node i.
contrasting
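The supply/demand bookkeeping described in record train_6487 can be written directly. A minimal sketch, assuming the normalized supply and demand profiles $f_S$ and $f_D$ are given as dictionaries keyed by concept node; the example profiles are invented for illustration.

```python
def node_values(f_S: dict, f_D: dict) -> dict:
    """Compute b(i) for every node: +f_S(i) for pure supply nodes,
    -f_D(i) for pure demand nodes, and the net f_S(i) - f_D(i) for
    nodes that appear in both profiles."""
    return {i: f_S.get(i, 0.0) - f_D.get(i, 0.0)
            for i in set(f_S) | set(f_D)}

supply = {"animal": 0.6, "dog": 0.4}  # normalized supply profile (toy)
demand = {"dog": 0.3, "cat": 0.7}     # normalized demand profile (toy)
print(node_values(supply, demand))
# {'animal': 0.6, 'dog': 0.1, 'cat': -0.7}  (key order may vary)
```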
train_6488
Such methods estimate the appropriate probability distribution over a set of concepts to represent a given bag of nouns as a whole (Li and Abe 1998; Clark and Weir 2002).
such techniques still start with a mapping of each word to all of its immediate concepts.
contrasting
train_6489
Given the … [Table 3 caption: Average accuracies by the network flow method (NF), Manhattan distance (Man), skew divergence (skew div), and Jensen-Shannon divergence (JS) on different profiles: original ("raw"), Li …] Our experiments use semantic profiles created directly from the word frequencies, as described earlier.
research has explored the possibility of generalizing this kind of "raw" data to a semantic profile that more appropriately reflects the coherent concepts expressed in the original set of weighted concept nodes.
contrasting
train_6490
The experiment revealed that the stories in the corpus contained multiple mentions of characters (on average, 64 mentions per story, excluding pronouns).
the 14 stories contained only 22 location markers, mostly street names.
contrasting
train_6491
For example, the nature of events such as dropping, stumbling, or recognizing is that they occur instantaneously and, therefore, are achievements.
events such as playing golf or writing an essay last for some time, so they are processes.
contrasting
train_6492
Such situations are referred to as accomplishments.
playing golf or talking on the phone does not imply that the process must end with a specific conclusion and the situation is atelic.
contrasting
train_6493
As we suspected, the relationship is not simple or monotonic.
one can identify distinct peaks at 3, 25, and 100-150 semantic classes.
contrasting
train_6494
We addressed the normalization problem using a constrained linear solver and the cross-entropy problem using numerical optimization.
our experiments showed the difference in WSD performance to be less than 1% in each case.
contrasting
train_6495
Using a dictionary in this way provides an objective method for selecting experimental expressions and indicating their gold standard source words.
it results in a data set of blends that are sufficiently established in the language to appear in a dictionary.
contrasting
train_6496
We began by using CELEX, because it contains rich phonological information that some of our features draw on.
in our analysis of the results, we noted that for many expressions the correct candidate pair is not in the candidate set.
contrasting
train_6497
is much higher than for MAC-CONF (118M vs. 34M).
the average for non-source words in the candidate sets is similar across these data sets (11M vs. 9M).
contrasting
train_6498
At first glance, it may appear that this makes the task of source word identification easier for blends, since there is more source word material available to work with.
acronyms have two properties that help in their identification.
contrasting
train_6499
Also, decisions on when references are given within the body of a chapter or postponed to end-of-chapter notes are inconsistent, some technical terms (e.g., 'spurious ambiguity') are not explained at the first point at which they are mentioned, and others (e.g., 'gap degree') are not explained at all.
given that these problems are mostly textual and in general do not impede the reader's understanding, this book serves as a very useful and up-to-date survey of the burgeoning research area of dependency parsing.
contrasting