Columns:
  id         string (length 7-12)
  sentence1  string (length 6-1.27k)
  sentence2  string (length 6-926)
  label      string (4 classes)
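A minimal sketch of how records with this schema might be loaded and inspected, assuming the split is stored as JSON Lines with the four fields above; the file name train.jsonl is a placeholder:

```python
import json

# Placeholder path: adjust to wherever the split is stored.
PATH = "train.jsonl"

with open(PATH, encoding="utf-8") as f:
    records = [json.loads(line) for line in f]

# Each record pairs two sentences with a discourse label such as "contrasting".
for r in records[:3]:
    print(r["id"], r["label"])
    print("  sentence1:", r["sentence1"][:80])
    print("  sentence2:", r["sentence2"][:80])
```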
train_5900
Our proposal is similar to CIDEr in that we capture consensus information given a set of human references.
we significantly differ in that (i) we use the references to explicitly model object importance, instead of directly comparing the candidates against the references; (ii) we perform word matching in a semantic space using word embeddings rather than surface forms.
contrasting
train_5901
Human language is often multimodal, comprising a mixture of natural language, facial gestures, and acoustic behaviors.
two major challenges in modeling such multimodal human language time-series data exist: 1) inherent data non-alignment due to variable sampling rates for the sequences from each modality; and 2) long-range dependencies between elements across modalities.
contrasting
train_5902
(2018a) proposed a joint framework for generating reference reports and performing disease classification at the same time.
this method was based on a single-sentence generation model (Xu et al., 2015), and obtained low BLEU scores.
contrasting
train_5903
Hsu (2019) has observed that people often replace "determiner/article + noun" phrases (e.g., "a boy") with pronouns (e.g., "he") in AREL stories.
this observation cannot explain the story lengthening in GLAC, where each story gains on average 0.9 nouns after editing.
contrasting
train_5904
(2016), which suggests that METEOR could be useful when comparing among stories generated by the same visual storytelling model.
when comparing among machine-edited stories (row y and |), among pre-and post-edited stories (row z and }), or among any combinations of them (row~, and ), all metrics result in weak correlations with human judgments.
contrasting
train_5905
Recognizing relationships between entities is non-trivial because the space of possible relationships is immense, and because there are O(n 2 ) relationships possible when n objects are present in an image.
while image captions are easier to obtain, they are often not completely descriptive of an image (Krishna et al., 2017).
contrasting
train_5906
If there are no confluence states in the common prefix, then the method of adding the rest of the word does not differ from the method used in the algorithm for sorted data.
we need to withdraw (from the register) the last state in the common prefix path in order not to create cycles.
contrasting
train_5907
For the sake of presentational convenience, the above describes a construction working on the complete grammar.
our implementation applies the construction separately for each nonterminal in a set Ni such that recursive(Ni) = self, which leads to a separate subautomaton of the compact representation (Section 3).
contrasting
train_5908
The operation of a traditional left-to-right transducer can be simulated by a head transducer by starting at the leftmost input symbol and setting the positions of the first transition taken to α = 0 and β = 0, and the positions for subsequent transitions to α = 1 and β = 1.
we can illustrate the fact that head transducers are more expressive than left-to-right transducers by the case of a finite-state head transducer that reverses a string of arbitrary length.
contrasting
train_5909
The definition of head transducers as such does not constrain these.
for a dependency transduction model to be a statistical model for generating pairs of strings, we assign transition weights that are derived from conditional probabilities.
contrasting
train_5910
This results in bitexts in which the number of multicharacter Japanese "words" is at most the number of English words.
as noted above, evaluation of the Japanese output is done with Japanese characters, i.e., with the Japanese text in its natural format.
contrasting
train_5911
Instead of trying to improve upon determinization techniques for such automata, it might be more fruitful to try to improve these approximation techniques in such a way that more compact automata are produced.
because research into finite-state approximation is still of an exploratory and experimental nature, it can be argued that more robust determinization algorithms do still have a role to play: it can be expected that approximation techniques are much easier to define and implement if the resulting automaton is allowed to be nondeterministic and to contain ε-moves.
contrasting
train_5912
No consensus has been reached on the matter to the best of the author's knowledge.
our proposed n-tape machines move all the read heads simultaneously ensuring finite-state expressive power.
contrasting
train_5913
Since these two operations are very similar, one would not expect much of a difference if the elements in each sublexicon were the same.
the different formal renderings of roots and patterns in the two approaches result in a significant difference.
contrasting
train_5914
Since the planner doesn't know which alternative(s) is/are available, it can't choose between them; the linguistic component must make the choice.
the decision has to be made by the planner, since it can depend on and/or affect the goals the generator is trying to achieve.
contrasting
train_5915
IGEN in some ways resembles a blackboard architecture (Nii 1986a, 1986b); like a blackboard, its workspace contains several possible utterances that are described at different levels of structure and are built up incrementally by various independent components.
it doesn't have the strict hierarchical structure that usually characterizes blackboard systems; each request from the planner may correspond to several linguistic options, and the linguistic options may handle (parts of) more than one request.
contrasting
train_5916
In general, graphical objects, functioning as constant terms or as variables, introduced as antecedents or as pronouns, cannot be expressed in a DRS, since the rules constructing these structures are triggered by specific syntactic configurations of the natural language in which the information is expressed.
this limitation can be overcome if graphical information can be expressed in a language with well-defined syntax and semantics.
contrasting
train_5917
Performing a multimodal reasoning process is possible if the translation relation between expressions of different modalities is available.
for particular multimodal reasoning tasks, the translation relation between individual constants of different modalities cannot be stated beforehand and has to be worked out dynamically through a deictic inferential process, as will be argued in the rest of this paper.
contrasting
train_5918
Similarly, the expression inside(λP∃y[region(y) ∧ P(y)])(z) denotes that the dot z is inside a region y, but the first argument denotes the set of properties P that the region has, rather than denoting y directly.
whenever the full interpretation of these expressions in relation to a finite domain of graphical objects is required, they must be transformed into equivalent first-order expressions.
contrasting
train_5919
Other examples are expressions of the form λP[P(ei)], where ei is an individual constant, which denote the set of properties that one or another individual has.
this latter kind of expression could be translated if the expressiveness of L were augmented by allowing conjoined term phrases in the grammar.
contrasting
train_5920
Consider, for instance, reading a book with words and pictures: when the associations between textual and graphical symbols are realized by the reader, the message as a whole has been properly understood.
it cannot be expected that such an association can be known in advance.
contrasting
train_5921
Consider that linguistic terms serve to identify individuals, and whenever they are used, the individual they denote should exist.
as pointed out by Donnellan and commented on by Kaplan, "using a definite description referentially a speaker may say something true even though the description correctly applies to nothing" (Kaplan 1978).
contrasting
train_5922
There may be constants of P that cannot be translated into L as proper names.
consider, for instance, the line l1 in Figure 4; the translation of this line into G is λP[P(l1)], but as can be seen in Figure 14, there is no proper name that corresponds to this expression in L. This constant could be translated by rule TIp_G (b) as the denoting concepts A/THE(line).
contrasting
train_5923
specified by Computational Linguistics.
in certain cases it is useful to include as much information as possible given the size limit; for example, I want to convey as much information as possible about my research in the allowed eight pages.
contrasting
train_5924
Furthermore, it is important to communicate as much information as possible subject to the size constraint; this is a characteristic that the system tries to optimize.
it is even more important that leaflets be easy to read, and size optimization should not be at the expense of readability.
contrasting
train_5925
This fits on two A5 pages, as desired.
if "bad for your health" in the paragraph just below the graphic were changed from italic face to bold face, then this paragraph would require four lines instead of three lines.
contrasting
train_5926
The best results of all are in revision mode, although the increase in size over a four-solution pipeline (385 words versus 380 words) is small.
revision mode also is robust in the face of increased data set size (we can be confident that the size constraint will be satisfied even on a set of a million questionnaires) and "last-minute" changes to the code.
contrasting
train_5927
As expected, processing time is lowest for the single-solution pipeline and highest for revision mode.
in the context of STOP, even the 9.8 seconds required in revision mode is acceptable; under this mode a batch of 100 leaflets can still be generated in under 20 minutes.
contrasting
train_5928
Revision mode does better than the multiple-solution pipeline, but only slightly.
revision mode is robust in the face of increased data set size and changes to the code.
contrasting
train_5929
For example, in a speech system a word-level analysis component may pass several word hypotheses to a language model; and in a natural language analysis system, a morphology system may pass several possible analyses of a surface form word to a parser.
multiple-solution pipelines have not received a great deal of attention in the NLG community.
contrasting
train_5930
For example, an important constraint in STOP is that texts should be easy to read for poor readers.
the only computational mechanism we are aware of for measuring reading difficulty is reading-level formulas (such as Flesch Reading Ease), whose accuracy is doubtful (Kintsch and Vipond 1979).
contrasting
train_5931
The authors write further, "A valid alternative analysis may involve a structural topic position for sentential subjects.
... the development of such an alternative analysis presupposes a large amount of linguistic research outside the scope of the Par-Gram collaboration" (p. 99).
contrasting
train_5932
The second shows the leading paragraphs of newspaper articles being better than automatic summaries.
the third illustrates genuine utility from summarization.
contrasting
train_5933
Natural language constructions can require even more complex forms such as a^n b^n c^n, which in turn must be defined using indexed grammars and parsed with a machine that has the equivalent of multiple stacks of memory.
"suppose that we had access to hardware that would handle FSTNs [finite-state transition networks] ... ultrafast and observed that in actual occurrences of a^n b^n constructions, the value of n never exceeded 3; then we might decide to compile our RTN [recursive transition network] ... descriptions down into FSTNs subject to an n = 3 upper bound (such a compilation is possible for any given finite upper bound on the value of n)" (Gazdar and Mellish 1989).
contrasting
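A concrete rendering of the quoted idea: once the upper bound n = 3 is imposed, { a^n b^n } becomes a finite language and is therefore regular, so a plain regular expression (the finite-state analogue here) suffices. A minimal sketch, not tied to Gazdar and Mellish's actual system:

```python
import re

# With the bound n = 3, the language { a^n b^n } is finite, hence regular.
bounded = re.compile(r"^(ab|aabb|aaabbb)$")

assert bounded.match("aabb")
assert not bounded.match("aaabb")     # unbalanced: rejected
assert not bounded.match("aaaabbbb")  # n = 4 exceeds the compiled bound
```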
train_5934
After identifying some fundamental problems in SFG theory, the author explains that these problems have been fully solved in HPSG.
rather than abandoning SFG and adopting HPSG, the author prefers to try to integrate the HPSG-style solutions into the existing SFG theory.
contrasting
train_5935
For both statistical tests, recall seems to be optimal at window size 2.
at this window size, the number of words extracted is very small.
contrasting
train_5936
Fisher's exact test is preferable to the log-likelihood ratio because the extreme saw-tooth-shaped pattern is substantially reduced.
the use of Fisher's exact test does not eliminate the effect of the choice of window and complement size on the number of significant words and recall.
contrasting
train_5937
Extracting such particle-verb combinations is relatively straightforward.
when the particle follows the verb, it may be separated from the verb by many constituents of arbitrary complexity: Hij zegt de belangrijke afspraak met de programmeur voor vanmiddag af ('he says the important meeting with the programmer for this afternoon off'; i.e., he cancels the meeting).
contrasting
train_5938
Both sentences exhibit both quantifier scopings: a.
∃x∀y(love y x) b. ∀y∃x(love y x) while the dominant reading of (7a) is (8a), that of (7b) is (8b), i.e., the preference is for the first quantifier to have wider scope.
contrasting
train_5939
Each sequent has a distinguished category formula (underlined) on which rule applications are keyed: In the regulated calculus there is no spurious ambiguity, and provided there is no explicit or implicit antecedent product, i.e., provided •L is not needed, Γ ⇒ A is a theorem of the Lambek calculus iff Γ ⇒ A is a theorem of the regulated calculus.
apart from the issue regarding •L, there is a general cause for dissatisfaction with this approach: it assumes the initial presence of the entire sequent to be proved, i.e., it is in principle nonincremental; on the other hand, allowing incrementality on the basis of Cut would reinstate with a vengeance the problem of spurious ambiguity, for then what are to be the Cut formulas?
contrasting
train_5940
For zeroth-order (unigram) discourse grammars, Viterbi decoding and forward-backward decoding necessarily yield the same results.
for higher-order discourse grammars we found that forward-backward decoding consistently gives slightly (up to 1% absolute) better accuracies, as expected.
contrasting
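This claim can be checked numerically: with a zeroth-order (unigram) discourse grammar, every row of the HMM transition matrix equals the same unigram distribution, so the joint maximization (Viterbi) and the per-position posterior maximization (forward-backward) coincide. A minimal sketch with made-up probabilities; all names and values are illustrative, not from the paper:

```python
import numpy as np

def viterbi(init, trans, lik):
    # Most likely joint state sequence of an HMM.
    T, K = lik.shape
    delta = init * lik[0]
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] * trans   # scores[i, j] = delta[i] * P(j | i)
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) * lik[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

def posterior_decode(init, trans, lik):
    # Per-position argmax of the forward-backward posteriors.
    T, K = lik.shape
    alpha = np.zeros((T, K))
    beta = np.ones((T, K))
    alpha[0] = init * lik[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ trans) * lik[t]
    for t in range(T - 2, -1, -1):
        beta[t] = trans @ (lik[t + 1] * beta[t + 1])
    return [int(i) for i in (alpha * beta).argmax(axis=1)]

rng = np.random.default_rng(0)
K, T = 4, 6
p = rng.dirichlet(np.ones(K))
trans = np.tile(p, (K, 1))   # unigram grammar: identical transition rows
lik = rng.random((T, K))     # arbitrary per-utterance likelihoods
assert viterbi(p, trans, lik) == posterior_decode(p, trans, lik)
```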
train_5941
Our paradigm thus follows historical practice in the Switchboard domain, where the goal is typically the off-line processing (e.g., automatic transcription, speaker identification, indexing, archival) of entire previously recorded conversations.
the HMM formulation used here also supports computing posterior DA probabilities based on partial evidence, e.g., using only the utterances preceding the current one, as would be required for on-line processing.
contrasting
train_5942
As expected, we see an improvement (decreasing perplexities) for increasing n-gram order.
the incremental gain of a trigram is small, and higher-order models did not prove useful.
contrasting
train_5943
The first alternative approach was a standard cache model (Kuhn and de Mori 1990), which boosts the probabilities of previously observed unigrams and bigrams, on the theory that tokens tend to repeat themselves over longer distances.
this does not seem to be true for DA sequences in our corpus, as the cache model showed no improvement over the standard N-gram.
contrasting
train_5944
As we have shown here, such models offer some fundamental advantages, such as modularity and composability (e.g., of discourse grammars with DA models) and the ability to deal with noisy input (e.g., from a speech recognizer) in a principled way.
many other classifier architectures are applicable to the tasks discussed, in particular to DA classification.
contrasting
train_5945
Finally, we developed a principled way of incorporating DA modeling into the probability model of a continuous speech recognizer, by constraining word hypotheses using the discourse context.
the approach gives only a small reduction in word error on our corpus, which can be attributed to a preponderance of a single dialogue act type (statements).
contrasting
train_5946
For example, if the character b were to follow tobeornottobe it would be encoded with probabilities (1/2,1/2, 3/26), without exclusion, leading to a coding requirement of 5.1 bits.
if exclusion were exploited, both encoder and decoder would recognize that escape from order 1 to order 0 is inevitable because the order 1 model adds no characters that were not already predicted by the order 2 model.
contrasting
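For reference, the 5.1-bit figure quoted above is simply the summed information content of the three probabilities given for the no-exclusion case:

-log2(1/2) - log2(1/2) - log2(3/26) = 1 + 1 + log2(26/3) ≈ 1 + 1 + 3.1 ≈ 5.1 bits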
train_5947
For example, both results in the first row are incorrect: no space should be inserted in this case, and the four characters should stand together.
the order 3 result is to be preferred to the order 5 result because both two-character words do at least make sense individually, whereas the initial three characters in the order 5 version do not represent a word at all.
contrasting
train_5948
Of course, full-text indexes can be built from individual characters rather than words.
these will suffer from the problem of low precision--searches will return many irrelevant documents, where the same characters are used in contexts different from that of the query.
contrasting
train_5949
The word 也 ("also") is another frequent character, appearing 4,553 times in the PH corpus.
in 4,481 of those cases it appears by itself and contributes little to the meaning of the text.
contrasting
train_5950
Despite their formal elegance, implementations of these theories cannot yet handle naturally occurring texts, such as that shown in (1).
the theories aimed at characterizing the constraints that pertain to the structure of unrestricted texts and the computational mechanisms that would enable the derivation of these structures (van Dijk 1972; Zock 1985; Grosz and Sidner 1986; Mann and Thompson 1988; Polanyi 1988, 1996; Hobbs 1990) are either too informal or incompletely specified to support a fully automatic approach to discourse analysis.
contrasting
train_5951
The best that we can do in this case is to make an exclusively disjunctive hypothesis, i.e., to hypothesize that one and only one of these possible relations holds.
it is still unclear what the elements of such an exclusively disjunctive hypothesis should be.
contrasting
train_5952
Let us focus our attention again on text (4).
we have seen that a computer program may be able to hypothesize the first exclusive disjunction in (6) using only knowledge about the discourse function of the connective .
contrasting
train_5953
After all, one can use the annotated data to derive such information automatically.
during my prestudy of cue phrases, I noticed that there is a finite number of ways in which cue phrases can be used to identify the elementary units of text.
contrasting
train_5954
In fact, some psycholinguistic and empirical research of Heurley (1997) and Hearst (1997) indicates that paragraph breaks do not always occur at the same locations as the thematic boundaries.
experiments of Bruder and Wiebe (1990) and Wiebe (1994) show that paragraph breaks help readers to interpret private-state sentences in narratives, i.e., sentences about psychological states such as wanting and perceptual states such as seeing.
contrasting
train_5955
For example, if the cue phrase Besides occurs at the beginning of a sentence and is not followed by a comma, as in text (9), it usually signals a rhetorical relation that holds between the clause-like unit that contains it and the following clause(s).
if the same cue phrase occurs at the beginning of a sentence and is immediately followed by a comma, as in text (10), it usually signals a rhetorical relation that holds between the sentence to which Besides belongs and a textual unit that precedes it.
contrasting
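Since the distinction above is purely a surface test on the comma, it can be rendered as a couple of string checks. A hypothetical sketch; the function name and return labels are illustrative, not the paper's actual algorithm:

```python
def besides_scope(sentence: str) -> str:
    # Hypothetical rendering of the surface heuristic in the excerpt.
    if sentence.startswith("Besides,"):
        # Sentence-initial "Besides" followed by a comma: relation between
        # this sentence and a textual unit that precedes it.
        return "relates-to-preceding-unit"
    if sentence.startswith("Besides "):
        # Sentence-initial "Besides" without a comma: relation between the
        # clause-like unit containing it and the following clause(s).
        return "relates-to-following-clauses"
    return "no-sentence-initial-cue"

print(besides_scope("Besides, the algorithm is fast."))    # preceding unit
print(besides_scope("Besides being fast, it is simple."))  # following clauses
```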
train_5956
Usually, the discourse role of the cue phrases and and or is ignored because the surface-form algorithm that we propose is unable to distinguish accurately enough between their discourse and sentential usages.
lines 8-9 of the algorithm concern cases in which their discourse function can be unambiguously determined.
contrasting
train_5957
It is good news and bad news: people who need to build a simple question-answering system or talking robot could find the suggested approach useful.
the book does not provide any deep linguistic discussion, considering mostly the John loves Mary kind of artificial examples.
contrasting
train_5958
Second, the complexity of a grammar class is measured by the worst case: a grammar class has a complexity x if there exists some grammar in this class such that there exists an infinite series of long-enough sentences that parse in time x by this grammar.
what matters in engineering practice is the average case for a specific grammar.
contrasting
train_5959
This is due to the lack of a formal definition of style as well as to the inability of current NLP systems to incorporate stylistic theories that require complicated information.
to traditional stylistics based on formal linguistic theories, the use of statistical methods in style processing has proved to be a reliable approach (Biber 1995).
contrasting
train_5960
This parser produces trees to represent the structure of the sentences that compose the text.
it is set to "skip" or surrender attempts to parse clauses after reaching a time-out threshold.
contrasting
train_5961
Recently, Yang (1999) studied the performance of several classifiers on text categorization tasks and concluded that all the tested methods perform comparably when the training set comprises over 300 instances per category.
when the number of positive training instances per category is small (less than 10), a regression-like method called linear least-squares fit and k-nearest neighbors outperform neural networks and naive Bayes classifiers (Yang and Liu 1999).
contrasting
train_5962
It must also be pointed out that the last three text genres (i.e., G08, G09, and G10) refer to spoken language that has been transcribed either before (i.e., planned speeches, broadcast news) or after (i.e., interviews) it has been uttered.
g01 to g07 refer to written language.
contrasting
train_5963
For group A, there are significant differences in the accuracy of the two techniques.
three authors (A01, A03, and A06) are responsible for approximately 50% of the average error rate, probably because the average text length of these authors is relatively short, i.e., shorter than 800 words (see Table 5).
contrasting
train_5964
The performance of VR is not significantly affected by increasing the training set size.
the identification error rate of CWF-30, CWF-50, and that of our approach is generally reduced by increasing the number of texts used for training.
contrasting
train_5965
Our analysis is similar in spirit.
at QLF we represent the semantics of the elliptical sentence not with a free variable but by using the construct vpEllipsis.
contrasting
train_5966
It does not assign a full meaning to QLF constructs like he or every in isolation, but only in a context.
the coherence of such an approach is dependent on how fine-grained our notion of context is made to be.
contrasting
train_5967
(The CLE-QLF supervaluation semantics falls prey to this problem.)
if we know both readings are true, then we can safely assert the ambiguous expression (2).
contrasting
train_5968
One advantage of linguistic indicators is that they can be measured automatically.
individual linguistic indicators are predictively incomplete, and are therefore insufficient when used in isolation.
contrasting
train_5969
For example, stativity must be identified to detect temporal constraints between clauses connected with when.
for example, in interpreting "Phototherapy was discontinued when the bilirubin came down to 13," the discontinue event began at the end of the come event. Such models also predict temporal relations between clauses combined with other connectives such as before, after, and until.
contrasting
train_5970
1997), produced equivalent results.
this may be due to the limited size of our supervised data.
contrasting
train_5971
Note that, as for stativity, this classification performance was attained with a subset of only five linguistic indicators: no subject, frequency, temporal adverb, perfect, and not progressive.
(only two of these appeared in the example function tree for stativity shown in Figure 2: frequency and progressive.)
contrasting
train_5972
Such theoretical elegance appears at first examination to benefit efficiency at implementation time, which is what attracts computational linguists to these approaches.
the difficulties encountered in trying to extend their coverage at the same time delimit the level of their usefulness for system development, while helping the theorists further the instantiation of the paradigms implied by their theories.
contrasting
train_5973
The softening of these consonants is indicated by the -i that precedes the canonical Loc.Sg.
ending -e. for our purposes it is more straightforward to consider the Loc.Sg.
contrasting
train_5974
Unlike the substitution operation, where an entire tree is inserted below the argument node, with adjunction, only a subtree of β appears below the argument node; the remainder appears in its entirety above the root node of β.
if we view the trees as descriptions, as in Figure 2, and if we take the expansion of the foot node as the main goal served by adjunction, it is not clear why the composition should have anything to say about the domination relationship between the other parts of the two objects being combined.
contrasting
train_5975
As we said above, a frontier node of a component of a d-tree can be a d-parent but not an i-parent, and only frontier nodes of a component can serve as d-parents.
by definition, a frontier node of a d-tree can neither be a d-parent nor an i-parent.
contrasting
train_5976
Normally, substitution is used in LTAG to associate a complement to its head, and adjunction is used to associate a modifier.
adjunction rather than substitution must be used with complements involving long-distance dependencies, e.g., in wh-dependencies and raising constructions.
contrasting
train_5977
(1995) for compiling an HPSG fragment to TAG-like structures.
to traditional TAG analyses (in which the elementary tree contains the preposition and its PP, with the NP complement of the preposition as a substitution node), the PP argument of the ditransitive verb is not expanded.
contrasting
train_5978
(We return to analyses using multicomponent TAG in Section 4.2.)
we now show that all of these cases can be captured uniformly with generalized substitution (see Figure 17).
contrasting
train_5979
Specifically, there are constructions (which do not involve long-distance phenomena) for which one of the most widely developed and comprehensive theories for determining the nature of localization in elementary trees--that of Frank (1992)--cannot be used because of the nature of the TAG operation of adjunction.
the operations of DSG allow this theory of elementary lexical projections to be used.
contrasting
train_5980
As mentioned previously, while we can derive any ordering, we cannot, in DSG, obtain a flat VP structure.
our analysis has an advantage when we consider "long scrambling," in which arguments from two lexical verbs intersperse.
contrasting
train_5981
If the special mechanism produces a single (flat) VP for the new argument list, then LP rules for the simplex case can also apply to the complex case.
the DSG analysis has the advantage that it does not involve a special mechanism, and the difference between German and English complex clauses is related simply to the difference in word orders allowable in the simplex case (i.e., German but not English allows scrambling).
contrasting
train_5982
Furthermore, like DSG, TDG does not allow the conflation of immediate dominance structure specified in elementary structures.
tDG allows for more than one node to be equated in a derivation step: nodes are "marked" and all marked nodes are required to be equated with other nodes in a derivation step.
contrasting
train_5983
The book contains material that will be of value especially to experts in this field.
most of the papers in the volume will also be relevant to researchers from other branches of computational linguistics who are interested in theoretical aspects of the computation of meaning in natural language.
contrasting
train_5984
The length of the stems and suffixes in question clearly plays a role: suffixes of one letter are, all other things being equal, suspicious; the pair of stems loo and boo, appearing with the signature k.t, does not provide an example of a convincing linguistic pattern.
if the suffix is long enough, even one stem may be enough to motivate a signature, especially if the suffix in question is otherwise quite frequent in the language.
contrasting
train_5985
The changes are mainly cosmetic, e.g., nonalphabetic characters such as "$" in tag names have been replaced.
there has also been some retokenization: genitive markers have been split off and the negative marker n't has been reattached.
contrasting
train_5986
The final model has a weighting parameter for each feature value that is relevant to the estimation of the probability P(tag | features), and combines the evidence from diverse features in an explicit probability model.
to the other taggers, both known and unknown words are processed by the same [...] (footnote 17: Because of the computational complexity, we have had to exclude the system from the experiments with the very large Wotan tagset.)
contrasting
train_5987
For MBL, this is much less clear.
it would seem that the accuracy level at 1M words is a good approximation of the eventual ceiling.
contrasting
train_5988
As we have seen, the WotanLite taggers undeniably have a much higher accuracy than the Wotan ones.
this is hardly surprising, as they have a much easier task to perform.
contrasting
train_5989
Here, higher granularity on the part of the ingredients is preferable, as combiners based on Wotan taggers perform better than those based on WotanLite taggers, and ingredient performance seems to be even more useful, as BestLite yields yet better results in all cases.
this comparison is not perfect, as the combination of Wotan tags does not include TBL.
contrasting
train_5990
While this will perhaps result in somewhat artificially inflated scores, there may be no reason to think that it would benefit one system or approach more than another, and this concession might seem reasonable considering the fact that there is likely to be no completely foolproof way to perform alignments.
the potential harm that this behavior manifests in terms of the technology development process--which, to our knowledge, has never been brought to light--is that it creates a situation in which uncontroversially positive changes in system output may result in a dramatically worse score, and likewise negative changes may result in a dramatically better score.
contrasting
train_5991
This is the anticipated result, and would constitute a large increase over the zero F-score that the overgenerated event should have received when standing alone.
this is a substantial reduction from the F-score of 0.571 that the overgenerated event actually receives, a change which implicitly instructs the developer to remove the rules responsible for extracting the correct event.
contrasting
train_5992
It is difficult to answer this question, of course, since one cannot replay past phases of technology development.
we do have a case study with which to investigate this question, as we have previously performed a 13 1/2-month effort in which we focused on improving performance on the MUC-6 task.
contrasting
train_5993
In most cases, the effect was to reduce the score assigned by the standard algorithm, since the best-scoring (albeit incorrect) alignment found by the standard algorithm was often disallowed by the restrictive algorithm.
due to the fact that the heuristic search process used for alignment is less likely to find the optimal mapping with the standard criterion, there were also cases in which the effect was to increase the score.
contrasting
train_5994
It is important to note that the same spurious parses also occur in context-free parsing, specifically in Earley's algorithm.
since the only information a constituent carries in context-free grammar is the grammar symbol, the spurious derivations only produce exactly the same results as the normal ones.
contrasting
train_5995
These are known techniques, which have been applied to solve other problems in unification-based systems.
most of them only offer partial solutions to the nonminimal derivation problem.
contrasting
train_5996
This book is "intended for anyone whose job is to analyze knowledge," and practitioners will find it useful.
it is also designed for the student, and includes an extensive set of exercises at the end of each chapter and answers to selected exercises at the end of the book.
contrasting
train_5997
I think that the authors could have addressed this aspect by stressing the fact that the tasks to be accomplished by a generation system are of much greater variety than those needed for natural language understanding: • On the one hand, many aspects of pragmatics, which is one of the least understood areas in linguistics, are more important in generation than in analysis and, therefore, they have to be captured to a certain extent.
• the initial representations for a generation system are much more distant from the language surface than the results of understanding systems, and they can appear in various, mostly nonlinguistic forms.
contrasting
train_5998
As for the other topics, it is not surprising that they have not been addressed in the book, given the currently low degree of understanding of these issues.
for progress in system-building techniques, a deeper understanding of these questions will be crucial.
contrasting
train_5999
All papers have been revised, and most expanded.
they are not particularly well integrated in this massive work, which combines patches of brilliance with unnecessary wordiness and repetition--it needed a stern editor.
contrasting