Columns: id (string, length 7–12) · sentence1 (string, length 6–1.27k) · sentence2 (string, length 6–926) · label (4 classes)
train_6600
For constituent parsing, we initially compare partial outputs that have been built using the same number of actions.
because different output parse trees can contain a different number of unary-reduce actions, some candidate outputs will be completed earlier than others.
contrasting
train_6601
There are some shortcomings of this technique, which we discuss in Rieser and Lemon (2009a).
the data are useful for our purposes because our main interest here is in multimodal presentation strategies (in the presence of some input noise).
contrasting
train_6602
A data analysis shows that all of the wizards are more likely to show a graphic on the screen when the number of database hits is ≥ 4.
none of the wizards strictly follows that strategy.
contrasting
train_6603
Ai, Tetreault, and Litman (2007), for example, show that random models outperform more accurate ones if the latter fail to provide enough coverage.
user simulations used for testing should be more accurate with respect to the data in order to test under realistic conditions (e.g., Möller et al.
contrasting
train_6604
Cluster-based user simulations generate explorative user behavior which is similar but not identical to user behavior observed in the original data.
to the bigram model, where the likelihood of the next user act is conditioned on the previous system action, the likelihood for the cluster-based model is conditioned on a cluster of similar system states (see Equation 2).
contrasting
train_6605
We relate the multimodal score (a variable obtained by taking the average of four questions) 8 to the number of items presented (DB) for each modality, using curve fitting.
to linear regression, curve fitting does not assume a linear inductive bias, but it selects the most likely model (given the data points) by function interpolation.
contrasting
train_6606
The mean performance measures for testing with real users are shown in Table 10.
also see example dialogues in Appendix D. There is no significant difference for the performance of the secondary driving task.
contrasting
train_6607
This seems to contradict the previous results in Table 11, which show low error rates for the WOZ data.
this is due to the fact that most of the observations in the WOZ data set are in the region where the predictions are accurate (i.e., most of the dialogues in the WOZ data are over 14 turns long, where the curves converge).
contrasting
train_6608
On the one hand, the greedy algorithm is far quicker computationally (O(n) vs. O(n²) for the Chu-Liu-Edmonds algorithm and O(n³) for Eisner's algorithm).
it may be prone to error propagation when early incorrect decisions negatively influence the parser at later stages.
contrasting
train_6609
As a result, the chance of error propagation is reduced significantly when parsing these sentences.
if this was the only difference between the two systems, we would expect them to have equal accuracy for shorter sentences.
contrasting
train_6610
This behavior can be explained using the same reasoning as above: Shorter dependency arcs are usually created first in the greedy parsing procedure of MaltParser and are less prone to error propagation.
longer dependencies are typically constructed at the later stages of the parsing algorithm and are affected more by error propagation.
contrasting
train_6611
Theoretically, MSTParser should not perform better or worse for arcs of any length.
due to the fact that longer dependencies are typically harder to parse, there is still a degradation in performance for MSTParser, up to 20% in dependency arc precision/recall relative to predicted/gold dependency length.
contrasting
train_6612
Adpositions do tend to have a high number of siblings on average, which could explain MSTParser's performance on that category.
adjectives on average occur the furthest away from the root, have the shortest dependency length, and the fewest siblings.
contrasting
train_6613
Feature-based integration is also similar to parse reranking (Collins 2000), where one parser produces a set of candidate parses and a second-stage classifier chooses the most likely one.
feature-based integration is not explicitly constrained to any parse decisions that the first-stage parser might make.
contrasting
train_6614
11 It is thus quite clear that both models have the capacity to learn from features generated by the other model.
it is also clear that the graph-based MST model shows a somewhat larger improvement, both on average and for all languages except Czech, German, Portuguese, and Slovene.
contrasting
train_6615
trust the guide features from MST for longer dependencies (those greater than length 4) and its own base features for shorter dependencies (those less than or equal to length 4).
for dependencies of length greater than 9, the performance of Malt MST begins to degrade.
contrasting
train_6616
If the original rich feature representation of Malt is sufficient to separate the training data, regularization may force the weights of the guided features to be small (as they are not needed at training time).
an on-line learning algorithm will recognize the guided features as strong indicators early in training and give them a high weight as a result.
contrasting
train_6617
One can speculate that it will result in error propagation and, as a result, a large number of parsing errors on long dependencies as well as those close to the root.
if the algorithm is run on data that contains only deterministic local decisions and complex global decisions, such a system might not suffer from error propagation.
contrasting
train_6618
Finding the treewidth of a graph is an NPcomplete problem (Arnborg, Corneil, and Proskurowski 1987).
given a graph of n vertices and treewidth k, a simple algorithm finds the optimal tree decomposition in time O(n^(k+2)) (Arnborg, Corneil, and Proskurowski 1987), and a variety of approximation algorithms and heuristics are known for the treewidth problem (Bodlaender et al.
contrasting
train_6619
Our method involves finding the optimal tree decomposition of a graph, which is in general an NP-complete problem.
the relation to tree decomposition allows us to exploit existing algorithms for this problem, such as the linear time algorithm of Bodlaender (1996) for graphs of bounded treewidth.
contrasting
train_6620
Given friction between the moving parts, the balance will have limited sensitivity, so that for some y that is fractionally heavier than x, the difference in weight will fail to register, and similarly for a z that is fractionally heavier than y.
it may well be that there is sufficient difference between x and z for the balance to distinguish between the weight of the two.
contrasting
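The non-transitivity described in this row is easy to demonstrate with a thresholded comparison; the sensitivity threshold and the three weights below are invented purely for illustration:

```python
EPS = 0.5  # hypothetical sensitivity threshold of the balance

def registers_heavier(a: float, b: float) -> bool:
    """The balance only registers a weight difference above its sensitivity."""
    return a - b > EPS

# Three weights; each adjacent pair falls within the sensitivity threshold.
x, y, z = 1.0, 1.4, 1.8

print(registers_heavier(y, x))  # False: y vs. x does not register
print(registers_heavier(z, y))  # False: z vs. y does not register
print(registers_heavier(z, x))  # True: but z vs. x does
```

Because indistinguishability here is defined by a threshold, it fails to be transitive, which is exactly the situation the sentence describes.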
train_6621
The polarity of these words actually makes some sense in context: Sequels and movies adapted from video games or TV series do tend to be less well-received than the average movie.
these real-world facts are not the sort of knowledge a sentiment classifier ought to be learning; within the domain of movie reviews such facts are prejudicial, and in other domains (e.g., video games or TV shows) they are either irrelevant or a source of noise.
contrasting
train_6622
By default, SO-CAL assigns a zero to such texts, which is usually interpreted to mean that the text is neither positive nor negative.
in a task where we know a priori that all texts are either positive or negative, this can be a poor strategy, because we will get all of these empty texts wrong: When there are a significant number of empty texts, performance can be worse than guessing.
contrasting
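The point about empty texts can be made concrete with a toy calculation (the counts and the oracle behavior on non-empty texts below are invented for illustration, not SO-CAL's actual numbers):

```python
import random

random.seed(0)

# Hypothetical binary test set: 100 texts with gold labels, 20 of which
# contain no dictionary words ("empty" texts) and happen to be positive.
labels = ["pos"] * 50 + ["neg"] * 50
empty = set(range(20))

def accuracy(predict):
    return sum(predict(i) == labels[i] for i in range(len(labels))) / len(labels)

# Strategy 1: score empty texts as neutral -- always wrong in a binary task.
acc_neutral = accuracy(lambda i: "neutral" if i in empty else labels[i])

# Strategy 2: guess a class for empty texts -- recovers roughly half of them.
acc_guess = accuracy(lambda i: random.choice(["pos", "neg"]) if i in empty else labels[i])

print(acc_neutral)  # 0.8: all 20 empty texts are scored wrong
print(acc_guess >= acc_neutral)
```

Since the two strategies agree on every non-empty text, guessing can only match or beat the neutral assignment here, mirroring the claim that defaulting to zero can be worse than guessing.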
train_6623
The most straightforward way to investigate this problem would be to ask one or more annotators to re-rank our dictionaries, and compute the inter-annotator agreement.
besides the difficulty and time-consuming nature of the task, any simple metric derived from such a process would provide information that was useful only in the context of the absolute values of our −5 to +5 scale.
contrasting
train_6624
There are aspects of the table consistent with our shift model, for instance a general decrease in pos judgments between pos SO 3 and 5 for lower neg SO.
there are a number of discrepancies.
contrasting
train_6625
This may be partially attributed to the fact that the strong/weak designation for this dictionary is defined in terms of whether the word strongly or weakly indicates subjectivity, not whether the term itself is strong or weak (a subtle distinction).
the results suggest that the scale is too coarse to capture the full range of semantic orientation.
contrasting
train_6626
Goldsmith (2010) also overviews word segmentation.
works that explicitly aim to treat both word segmentation and morpheme segmentation in one algorithm are included.
contrasting
train_6627
Further, there has been the idea that ULM could contribute to various open questions in the field of first-language acquisition (see, e.g., Brent, Murthy, and Lundberg 1995; Batchelder 1997; Brent 1999; Clark 2001; Goldwater 2007).
the connection is still rather vague and even if ULM has matured, it is not clear what implications, if any, this has for child language acquisition.
contrasting
train_6628
This, then, is another reason for pursuing ULM: to be able to provide language technology to language communities lacking the requisite resources.
ULM, at least as understood for the purposes of this survey, requires a written language, which would still exclude a substantial majority of the world's languages (Borin 2009).
contrasting
train_6629
Any intuitively plausible theory of affixation should allow abundant combination of morphemes without respect to their phonological form, which predicts that high LSV/LSE/LSM values should emerge at morpheme boundaries.
there appears to be no reason why the converse should hold-high LSV/LSE/LSM values could emerge in other places of the word as well.
contrasting
train_6630
The group-and-abstract approaches are also characterized by the ubiquitous use of ad hoc thresholds.
there are clear advantages in that they are in principle capable of handling non-concatenative morphology and in that issues of semantics (of stems) are addressed from the beginning.
contrasting
train_6631
The general idea is to exploit the pairwise preferences induced from the data by training on pairs of patterns, rather than independently on each pattern.
given a weight vector α, the score for a pattern x (a candidate answer) is given by the inner product between the pattern and the weight vector; the error function depends on pairwise scores.
contrasting
train_6632
For example most similarity features (FG1) are correlated with BM25(W); for this reason the selection process does not choose a FG1 feature until iteration 9.
some features do not provide a useful signal at all.
contrasting
train_6633
Regardless of these differences, the analysis indicates that in our noisy setting the bag-of-words representation outperforms any individual structured representation.
the bottom part of the table tells a more interesting story: The second part of our analysis indicates that structured representations provide complementary information to the bag-of-words representation.
contrasting
train_6634
In the foreword, Wilks defines ACs as "conversationalists or confidants" that get to know their owners, assist with Internet interactions, provide company and companionship, and build their owner's biography.
when analyzing the necessary conditions for being an AC, Pulman concludes that conversation is not needed, and Boden strongly objects to ACs as confidants because of privacy concerns.
contrasting
train_6635
In the foreword, Wilks states that ACs are not robots but software agents (though later he mentions that they could be furry handbags).
Turkle, Bryson, and many others clearly talk about robots.
contrasting
train_6636
Part II begins with an identification of some linguistic shortcomings of Lambek's original grammar and introduces a number of extensions.
to the broad introduction to Lambek categorial grammar given in Part I, Part II delves into more advanced material that is less well established.
contrasting
train_6637
CLIR approaches are in general presented together with their statistical models, whose understanding does not require more than elementary calculus and probability theory.
the book does not present algorithms or data structures to implement the models, so it might not be a sufficient resource to build an effective CLIR system.
contrasting
train_6638
MTurk is becoming a popular one.
as in any scientific endeavor involving humans, there is an unspoken ethical dimension involved in resource construction and system evaluation, and this is especially true of MTurk.
contrasting
train_6639
The acceptability involves comparison with the gold standards, which usually means the manually segmented results.
to existing unsupervised approaches, we want to explore the potential of completely unsupervised approaches.
contrasting
train_6640
Therefore, the limitation on the maximum length really makes a negative impact on ESA.
when the maximum length of input character sequences is further increased to 50 or 100, the results are not very different, as shown in Table 6.
contrasting
train_6641
When the length of words (VE1) was given, the F-measure of VE was 0.77.
the F-measure was 0.57 without a given length (VE2).
contrasting
train_6642
This method provides a principled way to produce utterances matching the linguistic style of a specific corpus (e.g., of an individual author) without any overgeneration phase.
standard PCFG generation methods require a treebank-annotated corpus, and they cannot model context-dependent generation decisions, such as the control of sentence length or the generation of referring expressions.
contrasting
train_6643
The architecture for the PE method is shown in Figure 2.
with the overgenerate-score (OS) method discussed in Section 1.1, parameter estimation models predict generation decisions directly from input personality scores, in the spirit of the approach of Paiva and Evans (2005).
contrasting
train_6644
This agreement is lower than that for rule-based utterances, which could be due to the nature of the personality cues conveyed by PERSONAGE-RB's handcrafted parameters.
this difference could also result from the use of naive judges, which we believe are less consistent in their personality judgments.
contrasting
train_6645
If at least one option satisfies all of the user's requirements, this option can be found efficiently with the SR strategy.
the system does not point out trade-offs among alternatives in cases where no optimal option exists.
contrasting
train_6646
If the database does not contain any KLM flights that also match the user's other preferences (such as preferring direct flights to connecting ones), the system can recognize this conflict and present an explicit trade-off, as in I found a KLM flight but it requires a connection in Amsterdam.
there is a direct flight on BMI.
contrasting
train_6647
If the globally best option in node 7 was perfect (i.e., if it was exactly what the user was looking for), the option in node 7 would dominate all other options, and the rest of the tree would be pruned.
if there is an aspect of the globally best option which does not match the user's ideal, the user will have to make some kind of trade-off.
contrasting
train_6648
At nodes 12 and 13, both constraints arrival-time:good and price:good have to be satisfied.
they are not satisfied and therefore these two nodes are pruned.
contrasting
train_6649
Arguably, mentioning too many attributes of options will also lead to memory overload, which may ultimately reduce user satisfaction.
the system must provide enough information to fully account for what constitutes the trade-off, that is, it must give the reasons why an option is potentially relevant.
contrasting
train_6650
Therefore, they are described as flights with availability in business class.
the justification for the indirect business class flights is that they have an arrival time that matches the user query better.
contrasting
train_6651
In the questionnaire data (see Figure 15) we found a general preference for UMSR-based recommendations on all four evaluation criteria.
only differences between answers to Questions 1 and 4 ("Did the system give the information in a way that was easy to understand?
contrasting
train_6652
We conclude that combining a summarize and refine approach with user modeling is a very promising approach to improving the user experience in terms of achieving higher task success and increasing efficiency.
there are also other parts of the presentation that could be tailored to the user (e.g., in adaptive option clustering).
contrasting
train_6653
Therefore, just as items for constituency parsers encode sets of partial constituency trees, items for dependency parsers can be defined using partial dependency trees.
dependency trees cannot express the fact that a particular structure has been predicted, but not yet built; this is required for grammar-based algorithms such as those of Lombardo and Lesmo (1996) and Kahane, Nasr, and Rambow (1998).
contrasting
train_6654
When described for head automaton grammars (Eisner and Satta 1999), this algorithm appears to be more complex to understand and implement than the previous one, requiring four different kinds of items to keep track of the state of the automata used by the grammars.
this abstract representation of its underlying semantics reveals that this parsing strategy is, in fact, conceptually simpler for dependency parsing.
contrasting
train_6655
Parsing schemata cannot specify control strategies that guide deterministic parsers; schemata work at an abstraction level, defining a set of operations without procedural constraints on the order in which they are applied.
deterministic parsers can be viewed as optimizations of underlying nondeterministic algorithms, and we can represent the actions of the underlying parser as deduction steps, abstracting away from the deterministic implementation details, obtaining a potentially interesting nondeterministic dependency parser.
contrasting
train_6656
This deduction system can easily be converted into a parsing schema by associating adequate semantics with items.
we do not show this here for space reasons, because we would first have to explain the formalism of regular dependency grammars.
contrasting
train_6657
Undirected links: Like dependency formalisms, LG represents the structure of sentences as a set of links between words.
whereas dependency links are directed, the links used in LG are undirected: There is no distinction made between heads and dependents.
contrasting
train_6658
Therefore, we assume that strings are extended with this symbol.
position n + 1 corresponds to a dummy word w_(n+1) that must not be linkable to any other, and is used by the parser for convenience, as in the schema for Yamada and Matsumoto's dependency parser (Section 3.4).
contrasting
train_6659
The definition of this property of structures, called k-ill-nestedness, is more declarative than that of mildly ill-nestedness.
it is based on properties that are not local to projections or subtrees, and there is no evidence that k-ill-nested structures are parsable in polynomial time.
contrasting
train_6660
Actually, TSVM(ENCN) is very similar to CoTrain, and it combines the results of two classifiers in the same way.
the co-training approach can train two more effective component classifiers than those used in TSVM(ENCN).
contrasting
train_6661
For the three monolingual baseline methods (BaseCN1, BaseCN2, and BaseCN3), the BaseCN2 and BaseCN3 methods outperform the BaseCN1 method.
the BaseCN2 and BaseCN3 methods cannot outperform the strong cross-lingual baseline methods (e.g., TSVM(ENCN), SelfTrain(ENCN)), because the Chinese training corpus is automatically collected without human checking and thus about 10% of the reviews are mistakenly labeled.
contrasting
train_6662
Typical aspects that impact system performance such as domain variability, combinations of different resources, and the use of unsupervised approaches are also illustrated.
this chapter may still be considered incomplete as some architectures exploiting advanced machine learning techniques, for example, kernel methods (Moschitti 2004), are not reported.
contrasting
train_6663
It is not meant to be an indepth description of how to build specific NLP applications.
it highlights necessary issues to be aware of in NLP research and application building.
contrasting
train_6664
The difference, however, can be probably increased by extending the window to six or even more words.
the window can also be shortened and the difference in precision might diminish or even be reversed.
contrasting
train_6665
1 The most common is an intellectual history-how I came to make all these wonderful discoveries.
I am never completely happy with my work, as it seems a pale shadow of what I think it should have been.
contrasting
train_6666
The PER is always lower than or equal to the WER.
a shortcoming of the PER is the fact that it does not penalize a wrong word order.
contrasting
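The relationship stated in this row (PER never exceeds WER, but PER ignores word order) can be sketched with minimal stdlib implementations; the tokenization and function names are ours, not the article's:

```python
from collections import Counter

def wer(ref: str, hyp: str) -> float:
    """Word error rate: token-level Levenshtein distance over the reference length."""
    r, h = ref.split(), hyp.split()
    d = list(range(len(h) + 1))
    for i in range(1, len(r) + 1):
        prev, d[0] = d[0], i
        for j in range(1, len(h) + 1):
            cur = min(d[j] + 1, d[j - 1] + 1, prev + (r[i - 1] != h[j - 1]))
            prev, d[j] = d[j], cur
    return d[len(h)] / len(r)

def per(ref: str, hyp: str) -> float:
    """Position-independent error rate: bag-of-words mismatch over the reference length."""
    r, h = ref.split(), hyp.split()
    matches = sum((Counter(r) & Counter(h)).values())
    return (max(len(r), len(h)) - matches) / len(r)

# A pure reordering is penalized by WER but not by PER.
print(wer("the dog barked", "barked the dog"))  # ~0.67
print(per("the dog barked", "barked the dog"))  # 0.0
```

The reordering example shows the shortcoming the sentence pair describes: the bag-of-words view of PER cannot penalize a wrong word order, so it lower-bounds WER.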
train_6667
The contribution of nouns is WER(N) = 1/12 = 8.3%, of verbs is WER(V) = 2/12 = 16.7%, and of adverbs is WER(ADV) = 2/12 = 16.7%.
to WER, the standard efficient algorithms for the calculation of PER do not give precise information about contributing words.
contrasting
train_6668
The same happens with the verb reordering error be.
the noun reordering error Bush is not resolved by the POS-based reorderings of verbs, whereas it is at the correct position in the hierarchical system output.
contrasting
train_6669
For example, Petroni and Serva (2008) use the Levenshtein distance to classify 50 Austronesian languages, and claim that their obtained language phylogeny is "similar" to the results of the comparative method.
close inspection of their classification reveals some puzzling incongruities.
contrasting
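A length-normalized Levenshtein distance of the kind used in such lexicostatistical classifications can be sketched as follows; the word pairs in the example are illustrative stand-ins, not data from the cited study:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    d = list(range(len(b) + 1))
    for i in range(1, len(a) + 1):
        prev, d[0] = d[0], i
        for j in range(1, len(b) + 1):
            cur = min(d[j] + 1, d[j - 1] + 1, prev + (a[i - 1] != b[j - 1]))
            prev, d[j] = d[j], cur
    return d[len(b)]

def normalized_distance(a: str, b: str) -> float:
    """Distance scaled to [0, 1] by the longer word, so word pairs are comparable."""
    longest = max(len(a), len(b))
    return levenshtein(a, b) / longest if longest else 0.0

def language_distance(words1, words2):
    """Average normalized distance over a Swadesh-style aligned word list."""
    return sum(normalized_distance(a, b) for a, b in zip(words1, words2)) / len(words1)

print(levenshtein("kitten", "sitting"))  # 3
# Hypothetical aligned word lists for two languages.
print(language_distance(["lima", "mata", "ikan"], ["rima", "mata", "ika"]))
```

Averaging surface distances like this is precisely what makes the method cheap, and also what invites the incongruities the row goes on to mention, since it ignores regular sound correspondences.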
train_6670
Interannotator agreement is too low on fine-grained judgments.
for the coarsegrained judgments of more than or less than a day, and of approximate agreement on temporal unit, human agreement is acceptably high.
contrasting
train_6671
The Penn Treebank (Marcus, Santorini, and Marcinkiewicz 1993) is the standard training and evaluation corpus for many syntactic analysis tasks, ranging from POS tagging and chunking, to full parsing.
it does not annotate internal NP structure.
contrasting
train_6672
Collins (2003, §3.1.1, §3.2, and §3.3) also describes the addition of distance measures, subcategorization frames, and traces to the parsing model.
these are not relevant to parsing NPs, which have their own submodel, described in the following section.
contrasting
train_6673
Collins (2003, page 602) gives the example Yesterday the dog barked, where conditioning on the head of the NP, dog, results in incorrectly generating Yesterday as part of the NP.
if the model is conditioning on the previous modifier, the, then the correct STOP category is much more likely to be generated, as words do not often come before the in an NP.
contrasting
train_6674
(1994) describe the *PPA* trace used in the Penn Treebank, which is applied to these permanent predictable ambiguities, or as we have called them, indeterminates.
*PPA* is also applied to cases of general ambiguity (those described in the following paragraphs), whereas we would separate the two.
contrasting
train_6675
It does not need further bracketing.
(NP (NNP Bill) (CC and) (NNP Ted)) The following example does need the NML bracket shown: (NP (DT the) (NML (NNPS Securities) (CC and) (NNP Exchange)) (NNP Commission)) Otherwise, its implicit structure would be as follows: The erroneous meaning here is the Securities and the Exchange Commission, rather than the correct the Securities Commission and the Exchange Commission.
contrasting
train_6676
Lists do not need any bracketing.
(NP (NNS cars) (, ,) (NNS trucks) (CC and) (NNS buses)) This is true even when the conjunction is missing: the entire list may still need to be bracketed before being joined to words outside the list, as shown: Conjunctions between a neither/nor pair do not need any bracketing.
contrasting
train_6677
When there are postmodifiers such as Corp. or Ltd., the rest of the company needs to be separated if it is longer than one word.
the tokens preceding a final adverb should be separated: (NP (NML (NN college) (NNS radicals)) (RB everywhere)) Names are to be left unbracketed: (NP (NNP Brooke) (NNP T.) (NNP Mossman)) Numbers, as well as Jr., Sr., and so forth, should be separated: (NP (NML (NNP William) (NNP H.) (NNP Hudnut)) (NNP III)) Titles that are longer than one word also need to be bracketed separately.
contrasting
train_6678
The idea behind the system was that the rich profiles collected for people could be used in summaries of later news in order to generate informative descriptions.
the collection of information about entities from different contexts and different points in time leads to complications in description generation; for example, past news can refer to Bill Clinton as Clinton, an Arkansas native, the democratic presidential candidate Bill Clinton, U.S. President Clinton, or former president Clinton and it is not clear which of these descriptions are appropriate to use in a summary of a novel news item.
contrasting
train_6679
This body of research assumes a limited domain where the semantics of attributes and their allowed values can be formalized, though semantic representations and inference mechanisms are getting increasingly sophisticated (e.g., the use of description logic: Areces, Koller, and Striegnitz 2008; Ren, van Deemter, and Pan 2010).
Siddharthan and Copestake (2004) consider open-domain generation of referring expressions in a regeneration task (text simplification); they take a different approach, approximating the hand-coded domain knowledge of earlier systems with a measure of relatedness for attribute-values that is derived from WordNet synonym and antonym links.
contrasting
train_6680
Further, we aim to generate new references to people by identifying semantic attributes that are appropriate given the context of the summary.
the GREC challenges only require the selection of an existing referring expression from a list.
contrasting
train_6681
As for the hearer-old/hearer-new classification, the syntactic forms of references were a significant indicator, suggesting that the centrality of the character was signaled by journalists using specific syntactic constructs in the references.
unlike the case of familiarity classification, the frequency of mention within and across documents were also significant features.
contrasting
train_6682
These information status distinctions are important when generating summaries of news, as they help determine both what to say and how to say it.
using these distinctions for summarization requires inferring information from unrestricted news.
contrasting
train_6683
This asymmetry shows up in word-based models to a limited extent: In most models the unit of prediction is a word; predictors include n-grams of any size in principle, not just words.
in a class-based model the asymmetry between predictor and predicted is more important: There is no justification for the premise (made, for example, in the Brown model) that the classes that are optimal for predictors are also the classes that are optimal for predictees.
contrasting
train_6684
The α parameter on lines 8-11 and 15-16 indicates that half- and whole-context models are as valuable, or nearly so, as the KN models: The interpolation weight of half/whole-context models is either 0.4 or 0.5.
the Brown class model (line 7) receives a lower weight of 0.2, indicating that it is less valuable in the interpolation with KN.
contrasting
train_6685
As a result, the model's estimates are close to maximum likelihood estimates for frequent events because the maximum likelihood estimator is appropriate in these cases.
the model's estimate of the probability of a word occurring in an unattested context is closer to the estimate of the class-based model.
contrasting
train_6686
(2002) minimize perplexity on the entire training set.
the objective function of our clustering is similarity of halfcontexts to cluster centroids or, more precisely, minimizing the residual sum of squares of differences between half-context vectors and cluster centroids.
contrasting
train_6687
The focus of this article is the comparison of HC and WC classes and our investigations into the context-specific characteristics of history-length interpolation and class-based generalization.
we also want to point out that the clustering algorithm we are using is very efficient, thus removing a potential obstacle to the widespread use of class-based language models.
contrasting
train_6688
Again, all of these studies differ significantly from our own, in their task definition, in their methodology, and in the domain they examine.
we expect this brief summary to serve as a general frame of reference for our own classification results.
contrasting
train_6689
Now if we take the shorthand notation of writing the basis vector in L∞(A* × A*) corresponding to a pair of strings as the pair of strings itself, then b̂ = (1/3)(a, cd) + (1/3)(a, fd) (30). It would thus seem sensible to define multiplication of contexts so that (1/3)(a, cd) · (1/3)(ab, d) = (1/3)(a, d).
we then find that this definition of multiplication doesn't provide us with what we are looking for.
contrasting
train_6690
Then L is a probability distribution over A*, because L is positive and ‖L‖₁ = 1.
‖â‖₁ is infinite, because each string x for which L(x) > 0 contributes 1/2 to the value of the norm, and there are an infinite number of such strings.
contrasting
train_6691
LSA performs a singular value decomposition on the matrix of words and documents which brings out hidden "latent" similarities in meaning between words, even though they may not occur together.
pLSA and LDA provide probabilistic models of corpora using Bayesian methods.
contrasting
train_6692
To conclude, the algorithm we suggest next is applied in our experiments on focused entailment graphs.
we believe that it is suitable for any entailment graph whose properties are similar to those of focused entailment graphs.
contrasting
train_6693
Thus, in A3 we observe that in future iterations the strongly connected component expands further and many more wrong edges are inserted into the graph.
in B we see that ILP-Global takes into consideration the global interaction between the four nodes and other nodes of the graph, and decides to split this strongly connected component in two, which improves the precision of ILP-Global.
contrasting
train_6694
We could use this term to normalize the similarity score.
this would only account for unaligned or badly aligned nodes and edges in the labeled sentence while ignoring the unlabeled partner.
contrasting
train_6695
In order to ensure that the latter describe a partial injective function, we enforce the following constraints: We can now write Equation (1) in terms of the variables x_ij (which capture exactly the same information as the function σ): Note that Equations (1) and (5) are summations of the same terms.
Equation (5) is not linear in the variables x_ij, as it contains products of the form x_ij x_kl.
contrasting
train_6696
The FEE of the labeled sentence and the target verb of the unlabeled sentence are presumed identical.
we waive this restriction in Experiment 2, where we acquire annotations for unknown FEEs, that is, predicates for which no manual annotations are available.
contrasting
train_6697
Accordingly these paradigmatic cases will also be the main focus of this survey, although we shall often have occasion to discuss other types of expressions.
to do full justice to indefinite or attributive descriptions, proper names, and personal pronouns would, in our view, require a separate, additional survey.
contrasting
train_6698
Expression s_j s_m s_0 includes a reference to Mia, that is, it presents Joe's epistemic stance according to Mia, based on what the author says.
expression s_j s_0 refers to Joe's perspective only according to the author.
contrasting
train_6699
More support is needed, however, in order to confirm this claim.
such a highly linguistically based approach has its drawbacks as well, because it suffers from limitations regarding its linguistic coverage (mainly syntactic constructions), and its incapability to deal with ambiguity in natural language.
contrasting