id — stringlengths (7–12)
sentence1 — stringlengths (6–1.27k)
sentence2 — stringlengths (6–926)
label — stringclasses (4 values)
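As an illustration only (not part of the dataset), here is a minimal sketch of how records with this schema could be held and grouped in plain Python; the two rows are abridged from entries train_13100 and train_13102 below.

```python
# Illustrative sketch: toy records mimicking the id/sentence1/sentence2/label
# schema above. Sentences are abridged from two entries in this excerpt.
records = [
    {
        "id": "train_13100",
        "sentence1": "However, inferring the best set of labels for an "
                     "unlabeled document at test time is more complex...",
        "sentence2": "this is not straightforward, since there are 2^K "
                     "possible label assignments.",
        "label": "contrasting",
    },
    {
        "id": "train_13102",
        "sentence1": "For the supervised keyphrase extraction approach, a "
                     "document set with human-assigned keyphrases is required "
                     "as training set.",
        "sentence2": "human labelling is time-consuming.",
        "label": "contrasting",
    },
]

# Group record ids by label; every record in this excerpt is "contrasting",
# although the schema allows 4 label classes in the full dataset.
by_label = {}
for rec in records:
    by_label.setdefault(rec["label"], []).append(rec["id"])

print(by_label["contrasting"])
```

Every record in the excerpt that follows carries the "contrasting" label, so grouping by label here yields a single bucket.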
train_13100
However, inferring the best set of labels for an unlabeled document at test time is more complex: it involves assessing all label assignments and returning the assignment that has the highest posterior probability.
this is not straightforward, since there are 2^K possible label assignments.
contrasting
train_13101
Because only a processed form of the documents was released, the Yahoo dataset does not lend itself well to error analysis.
only 33% of the documents in each top-level category were applied to more than one sub-category, so the credit assignment machinery of L-LDA was unused for the majority of documents.
contrasting
train_13102
For the supervised keyphrase extraction approach, a document set with human-assigned keyphrases is required as training set.
human labelling is time-consuming.
contrasting
train_13103
In recent years, a number of systems have been developed for extracting keyphrases from web documents (Kelleher and Luz, 2005; Chen et al., 2005), email (Dredze et al., 2008), and other specific sources, which indicates the importance of keyphrase extraction in the web era.
none of these previous works gives overall consideration to the essential properties of appropriate keyphrases mentioned in Section 1.
contrasting
train_13104
Our study is related to geographical information retrieval (GIR) systems.
our problem is very far from classic GIR problem settings.
contrasting
train_13105
Most of the tourists going through this path spend weeks in prior information mining and preparations.
even when using the most recent maps and guides, they discover that available geographical knowledge is far from being complete and precise.
contrasting
train_13106
For instance, despite the fact that the sentences "he is affected by AIDS" and "HIV is a virus" express concepts closely related, their similarity is zero in the VSM because they have no words in common (they are represented by orthogonal vectors).
due to the ambiguity of the word "virus" , the similarity between the sentences "the laptop has been infected by a virus" and "HIV is a virus" is greater than zero, even though they convey very different messages.
contrasting
train_13107
This repository of sentences is already divided by sense and can significantly speed-up manual annotation.
the extracted sentences could enrich the training set of machine learning systems for frame annotation to improve the frame identification step.
contrasting
train_13108
Exploiting redirections and anchoring strategies, our induction method can account for orthographical variations, for example it acquires both memorize and memorise.
misspelled words may also be collected, for instance gynaecologial instead of gynaecological.
contrasting
train_13109
On the one hand, the retrieved data could speed up human annotation, requiring only a manual validation.
the extracted sentences could provide enough training data to machine learning systems for frame assignment, since insufficient frame attestations in the FrameNet database are a major problem for such systems.
contrasting
train_13110
If multiple references were used, then Bleu would likely have stronger correlation.
it is clear that the cost of hiring professional translators to create multiple references for the 2000 sentence test set would be much greater than the $10 cost of collecting manual judgments on Mechanical Turk.
contrasting
train_13111
The expert actually gains very little from ds for both rand and unc: adding suggestions gave OPERs of just 1.83 and .39, respectively.
the non-expert obtains an improvement of 5.93 OPER when suggestions are used with rand, but performs worse when used with unc (-9.91 OPER).
contrasting
train_13112
Our results suggest some possible prescriptions for tuning techniques according to annotator expertise.
even if we can estimate a relative level of expertise, following such broad prescriptions is unlikely to be more robust than an approach which adapts selection and suggestion to the individual annotator, perhaps working within an annotation group.
contrasting
train_13113
Our results provide validation of several features that can be optimized in the development of new summarization systems when the objective is to improve content selection on average, over a collection of test inputs.
none of the features is consistently predictive of good summary content for individual inputs.
contrasting
train_13114
The predictiveness of features like ours will be limited for such inputs.
model summaries written for the specific input would give better indication of what information in the input was important and interesting.
contrasting
train_13115
When reference summaries are available, ROUGE provides scores that agree best with human judgements.
when model summaries are not available, our features can provide reliable estimates of system quality when averaged over a set of test inputs.
contrasting
train_13116
For instance, in (3) there are several strong cues which suggest that play with fire is used literally.
because the unsupervised classifier only looks at lexical cohesion, it misses many other clues which could help distinguish literal and nonliteral usages.
contrasting
train_13117
Conversely, words with a low sal_lit should be more indicative of the non-literal class.
we found that, in practice, the measure is better at picking out indicative words for the literal class; non-literal usages tend to co-occur with a wide range of words.
contrasting
train_13118
As a consequence its confidence function may also not be very accurate.
we know from Sporleder and Li (2009) that the unsupervised classifier has a reasonably good performance.
contrasting
train_13119
We could counteract this by selecting a higher proportion of examples labelled as 'literal'.
given that the number of literal examples in our data set is relatively small, we would soon deplete our literal instance pool and moreover, because we would be forced to add less confidently labelled examples for the literal class, we are likely to introduce more noise in the training set.
contrasting
train_13120
Note that this data set is potentially noisy as not all non-canonical form examples are used literally.
when checking a small sample manually, we found that only a very small percentage (<< 1%) was mis-labelled.
contrasting
train_13121
Intuitively, it is plausible that the saliency feature performs quite well as it can also pick up on linguistic indicators of idiom usage that do not have anything to do with lexical cohesion.
a combination of the first three features leads to an even better performance, suggesting that the features do indeed model somewhat different aspects of the data.
contrasting
train_13122
Their experiments show that discourse connectives and the distance between the two text spans have the most impact, and event-based features also contribute to the performance.
their system may not work well for implicit relations alone, as the two most prominent features only apply to explicit relations: implicit relations do not have discourse connectives and the two text spans of an implicit relation are usually adjacent to each other.
contrasting
train_13123
These problems can be ameliorated by imposing limits on rule size or early stopping of EM training; however, neither of these techniques addresses the underlying problems.
our model is trained in a single step, i.e., the alignment model is the translation model.
contrasting
train_13124
Generally, two edges can be re-combined if they satisfy the following two constraints: 1) the LHS (left-hand side) nonterminals are identical and the sub-alignments are the same (Zhang et al., 2006); and 2) the boundary words on both sides of the partial translations are equal between the two edges (Chiang, 2007).
as shown in Figure 2, the decoder still generates 801 edges after the hypothesis re-combination.
contrasting
train_13125
Therefore, if we can find a method to greedily reduce the size of each bucket ( , ′), we can reduce the overall expected edge competitions when parsing with ′.
it can be easily proved that the numbers of binary rules in any ′ ∈ ℬ are the same, which implies that we cannot reduce the sizes of all buckets at the same time: removing a rule from one bucket means adding it to another.
contrasting
train_13126
In our experiments, the linear binarization method is just 2 times faster than the CKY-based binarization.
(•) cannot be easily predetermined in a static way as is assumed in Section 3.3 because it depends on ′ and should be updated whenever a rule in is binarized differently.
contrasting
train_13127
A possible reason that the random synchronous binarization method can outperform the baseline method is that, compared with binarizing the SCFG in a fixed way, random synchronous binarization tends to give a more even distribution of rules among buckets, which alleviates the problem of edge competition.
since the high-frequency source sub-sequences still have high probabilities to be generated in the binarization and lead to the excess competing edges, it just achieves a very small improvement.
contrasting
train_13128
We would expect that when comparing larger fragments, on average there would be more transformations needed to change one into the other than when comparing small fragments.
in the previous scheme, small fragments would have higher scores than large fragments, since fewer differences would be observed.
contrasting
train_13129
We note that in this example, the score of translating "dos" to "make" was higher than the score of translating "dos" to "both".
the higher level target fragment that composed the translation of "dos" together with the translation of "cuestiones" yielded a higher score when composing "both questions" rather than "to make".
contrasting
train_13130
This indeed alleviates the vocabulary coverage problem, especially for the so-called "low density" languages.
these approaches still require bitexts where one side contains the original source language.
contrasting
train_13131
Moreover, our proposed paradigm can, in principle, achieve large-scale acquisition of paraphrases with high semantic similarity.
using parallel training texts in pivoting techniques offers the potential advantage of implicit translational knowledge, in the form of sentence alignments, while our approach is unguided in this respect.
contrasting
train_13132
One advantage of the bitext-dependent pivoting approach is the use of the additional human knowledge that is encapsulated in the parallel sentence alignment.
we argue that the ability to use much larger resources for paraphrasing should trump the human knowledge advantage.
contrasting
train_13133
Pivoting techniques (translating back and forth) rely on limited resources (bitexts), and are subject to shifts in meaning due to their inherent double translation step.
large monolingual resources are relatively easy to collect, and our system involves only a single translation/paraphrasing step per target phrase.
contrasting
train_13134
Our system has the advantage of always producing an NL sentence given any input MR, even if there exist unseen MR productions in the input MR. We can achieve this by simply skipping those unseen MR productions during the generation process.
in order to make a fair comparison against WASP −1 ++, which can only generate NL sentences for 97% of the input MRs, we also do not generate any NL sentence in the case of observing an unseen MR production.
contrasting
train_13135
The GRE models used here do rely on dependency parsing.
they still generalise across formal domains as the relation identification and characterisation systems, developed on news data, achieve comparable performance when applied directly to a relation extraction task in the biomedical domain (see Hachey (2009) for details).
contrasting
train_13136
The simplification reduces the number of free parameters.
low values of n impose an artificially local horizon to the language model, and compromise its ability to capture long-range dependencies, such as syntactic relationships, semantic or thematic constraints.
contrasting
train_13137
The semantic space discussed thus far is based on word co-occurrence statistics.
the statistics of how words are distributed across the documents also carry useful semantic information.
contrasting
train_13138
Due to its generality, LSA has proven a valuable analysis tool with a wide range of applications.
the SVD procedure is somewhat ad hoc, lacking a sound statistical foundation.
contrasting
train_13139
The task of automatically characterizing word meaning in text is typically modeled as word sense disambiguation (WSD): given a list of senses for target lemma w, the task is to pick the best-fitting sense for a given occurrence of w. The list of senses is usually taken from an online dictionary or thesaurus.
clear cut sense boundaries are sometimes hard to define, and the meaning of words depends strongly on the context in which they are used (Cruse, 2000;Hanks, 2000).
contrasting
train_13140
The second way in which word sense and vector space models have been related is to assign disambiguated feature vectors to Word-Net concepts (Pantel, 2005;Patwardhan and Pedersen, 2006).
those works do not use sense-tagged data and are not aimed at WSD, rather the applications are to insert new concepts into an ontology and to measure the relatedness of concepts.
contrasting
train_13141
Previous research has shown the benefit of jointly learning semantic roles of multiple constituents (Toutanova et al., 2008;Koomen et al., 2005).
our joint model makes predictions for a single constituent, but for multiple tasks (WSD and SRL).
contrasting
train_13142
In principle, this problem can be mitigated by training the pipeline model on automatically predicted labels using cross-validation, but in our case we found that automatically predicted WSD labels decreased the performance of the pipeline model even more.
the joint model computes the full probability distribution over the semantic roles and preposition senses.
contrasting
train_13143
Since the 1990s, machine-learning-based approaches to WSD using sense-marked corpora have gained ground (Eneko Agirre & Philip Edmonds, 2007).
the creation of sense marked corpora has always remained a costly proposition.
contrasting
train_13144
One might argue that any word within the synset could serve the purpose of translation.
the exact lexical substitution has to respect native speaker acceptability.
contrasting
train_13145
Despite its importance in basic NLP tasks, the problem has been largely overlooked in NLP research, probably due to its presumed simplicity.
as we have shown, simple methods for MWE identification, such as our baselines, do not perform consistently well across MWEs.
contrasting
train_13146
The advantage of such methods is that association relations are established at the phrase level instead of the lexical level, so they have the potential to resolve the above-mentioned translation problem.
when applying association-based methods, we have to consider the following complications.
contrasting
train_13147
(1996), this two-step approach drastically reduces the search space.
translations of collocated context words in the source word sequence create noisy candidate words, which might cause incorrect extraction of target translations by naive statistical correlation measures, such as the Dice coefficient used by Smadja et al.
contrasting
train_13148
Hence, w is a subsequence of f. When counting the word frequency, each word in the target corpus normally contributes a frequency count of one.
since we are interested in the word counts that correlate to w, we adopt the concept of the translation model proposed by Brown et al. (1993).
contrasting
train_13149
When the training corpus contains more than 9 months of corpora, the precision of collocations extracted by the baseline method did not increase anymore.
the precision of collocations extracted by our method kept on increasing.
contrasting
train_13150
The recently introduced online confidence-weighted (CW) learning algorithm for binary classification performs well on many binary NLP tasks.
for multi-class problems CW learning updates and inference cannot be computed analytically or solved as convex optimization problems as they are in the binary case.
contrasting
train_13151
We refer to this update as Exact.
Exact is expensive to compute, and tends to over-fit in practice (Sec.
contrasting
train_13152
Similarly, our exact implementation converges after an average of 1.25 iterations, much faster than either of the approximations.
this rapid convergence appears to come at the expense of accuracy.
contrasting
train_13153
LambdaSMART is arguably more powerful than LambdaBoost in that it introduces new complex features and thus adjusts not only the parameters but also the structure of the background model.
note that in a sense our proposed LambdaBoost algorithm is the same as LambdaSMART, but using a single feature at each iteration, rather than a tree.
contrasting
train_13154
When a pair occurs more than is expected by chance, the MI score is positive.
if a pair occurs together less than is expected by chance, the mutual information score is negative.
contrasting
train_13155
Fourth, Table 6 shows the association between two query n-grams, "form" and "video", that at first glance may not actually look very informative for URL path selection.
notice that the unigram "form" has a strong preference for pdf documents over more standard web pages with an html extension.
contrasting
train_13156
Without any additionally ranking information, general URLs (root) tend to be ranked more highly than more specific URLs (path), as the root pages tend to be more popular.
our new features express a preference between "va" and "virginia", and this correctly flips the ranking order.
contrasting
train_13157
In addition, NEs tend to be redundant regarding BoW.
if we are able to combine optimally the contributions of the different features, the BoW approach could be improved.
contrasting
train_13158
The results show that according to both the Decision Tree results and the upperbound (MaxPWA), adding new features to tokens improves the classification.
taking nonlinguistic features obtains similar results to taking all features.
contrasting
train_13159
The UIUC dataset has laid a platform for the follow-up research including (Hacioglu and Ward, 2003;Zhang and Lee, 2003;Li and Roth, 2006; Krishnan et al., 2005;Moschitti et al., 2007).
unlike Li and Roth (2006)'s approach, which makes use of a very rich feature set, we propose to use a compact yet effective feature set.
contrasting
train_13160
For example, the best question classifier QC3 outperforms the worst one (QC1) by 1.5%, 2.0%, and 2.0% MRR scores for NE, NE-4 and REG respectively.
it is surprising that the MRR and top5 contributions of NE and NE-4 decrease if QC1 is replaced by QC2, although the top1 score shows a slight performance gain.
contrasting
train_13161
For the second layer, the only assumption we make is that there is at most one link between any two words.
we believe that for any interesting linguistic structure, the second layer will be highly dependent on the structure of the first layer.
contrasting
train_13162
(2006) presented a self-training approach for phrase structure parsing and the approach was shown to be effective in practice.
their approach depends on a high-quality reranker, while we simply augment the features of an existing parser.
contrasting
train_13163
The majority of existing work on text clustering has focused on topic-based clustering, where high accuracies can be achieved even for datasets with a large number of classes (e.g., 20 Newsgroups).
there has been relatively little work on sentiment-based clustering and the related task of unsupervised polarity classification, where the goal is to cluster (or classify) a set of documents (e.g., reviews) according to the polarity (e.g., "thumbs up" or "thumbs down") expressed by the author in an unsupervised manner.
contrasting
train_13164
Turney's (2002) work is perhaps one of the most notable examples of unsupervised polarity classification.
while his system learns the semantic orientation of the phrases in a review in an unsupervised manner, this information is used to predict the polarity of a review heuristically.
contrasting
train_13165
As expected, CRFs can perform reasonably well (accuracy = 63.9%) even without consulting the dictionary, by learning directly from the data.
having the polarity lexicon boosts the performance significantly (accuracy = 70.4%), demonstrating that lexical resources are very helpful for fine-grained sentiment analysis.
contrasting
train_13166
Most of these methods use WordNet.
we propose a simple approach to generate a high-coverage semantic orientation lexicon, which includes both individual words and multi-word expressions, using only a Roget-like thesaurus and a handful of affixes.
contrasting
train_13167
For example irreverent is negative in most contexts, but positive in the sentence below: Millions of fans follow Moulder's irreverent quest for truth.
as we will show through experiments, the exceptions are far outnumbered by those that abide by the predictions of marking theory.
contrasting
train_13168
Thus, the method assigns a semantic orientation to a word-sense combination similar to the SentiWordNet approach and differing from the General Inquirer and Turney-Littman lexicons.
in most natural language tasks, the intended sense of the target word is not explicitly marked.
contrasting
train_13169
Theoretically, a much larger Turney-Littman lexicon can be created even though it may be computationally intensive when working with 100 billion words.
mSOL and TLL are created from different sources of information-mSOL from overtly marked words and a thesaurus, and TLL from co-occurrence information.
contrasting
train_13170
Both have the form [equation omitted], where for RLM, [equation omitted]. In both cases, f(w) is monotonically decreasing in the frequency of w in the corpus.
there are several differences between the two cases.
contrasting
train_13171
Another difference is that in RLM, P(w) is estimated on reviews with object mentions removed, since the model indicates that P(w) accounts for object-independent review language.
TFIDF+ computes Q(w) on full reviews.
contrasting
train_13172
We use evaluations similar to those used before (Rapp, 2002;Pado and Lapata, 2007;Baroni et al., 2008, among others).
whereas most existing studies use only one dataset, or handselected parts thereof, we aim to evaluate measures across four different human datasets.
contrasting
train_13173
Some researchers have discovered that supplementing basic syntactic features with information about adjuncts, co-occurrences, tense, and/or voice of the verb have resulted in better performance.
additional information about semantic SPs of verbs has not yielded considerable improvement on verb classification, although SPs can be strong indicators of diathesis alternations (McCarthy, 2001) and although fairly precise semantic descriptions, including information about verb se- [footnote 1: See section 6 for discussion on previous work.]
contrasting
train_13174
Spectral clustering has been shown to be effective for high dimensional and non-convex data in NLP (Chen et al., 2006) and it has been applied to German verb clustering by Brew and Schulte im Walde (2002).
previous work has used Ng et al.
contrasting
train_13175
Inspired by the success of English grapheme-to-phoneme research in speech synthesis, many researchers have proposed phoneme-based English-to-Chinese transliteration models.
such approaches have severely suffered from the errors in Chinese phoneme-to-grapheme conversion.
contrasting
train_13176
Previous approaches using Chinese phonemes have relied only on Chinese phonemes in Chinese phoneme-to-grapheme conversion.
the simple use of Chinese phonemes doesn't always provide a good clue to reduce the ambiguity in Chinese phoneme-to-grapheme conversion.
contrasting
train_13177
Given the parameters {π_0, π, φ, K} of the HMM, the joint distribution over hidden states s and observations y can be written (with s_0 = 0). As Johnson (2007) clearly explained, training the HMM with EM leads to poor results in PoS tagging.
we can easily treat the HMM in a fully Bayesian way (MacKay, 1997) by introducing priors on the parameters of the HMM.
contrasting
train_13178
For example, under our analysis, the tag 'VBG' has the features [+V, +N, -tense, -en], tag 'VBD' [+V, +tense(past), -en], and 'VB' [+V, -tense(finite), -en].
since we do not consider the tense feature to be a structural feature, we do not distinguish 'VBD' from 'VB'; since N(ominal) is a structural feature, 'VBG' remains distinct from both 'VBD' and 'VB'.
contrasting
train_13179
For all words in the Nominal class, except for those with the ending -ly, the only possible tag for each is 'NN', since no finer categories of 'NN' exist in our reduced tagset.
for a word with ending -ly falling into the N class, we simply assume that its tag must be 'RB', although this assumption may have a few exceptions.
contrasting
train_13180
As shown in Table 3, reducing the dictionary by filtering rare words (with count<= d) has not been a promising track to follow for accomplishing the task with as little information as possible.
by introducing a lexicon acquisition step, we achieve a tagging accuracy of 90.6% for the 24K test data with no prior open-class lexicon, provided with only a minimal lexicon of closed-class items (about 0.6% of the full lexicon), as high as the best previous performance of 90.4 given a full lexicon (CRF/CE with d = 1).
contrasting
train_13181
Building a lexicon based on induced clusters requires our morphological knowledge of three special endings in English: -ing, -ed and -s; on the other hand, to reduce the feature space used for category induction, we utilize vectors of functional features only, exploiting our knowledge of the role of determiners and modal verbs.
the above information is restricted to the lexicon acquisition model.
contrasting
train_13182
The reordering model and the language model are the same in the two experiments.
in forced decoding, we train two translation models, one using the training data only and the other using the training, dev, and test data.
contrasting
train_13183
Ideally, we would like to estimate the parameters of the mapping function so as to directly optimize an automatic MT performance evaluation metric, such as TER or BLEU on the full translation search space.
this is extremely computationally intensive for two reasons: (a) optimizing in the full translation search space requires a new decoding pass for each iteration of optimization; and (b) a direct optimization of TER or BLEU requires the use of a derivative-free, slowly converging optimization method such as MERT (Och, 2003), because these objective functions are not differentiable.
contrasting
train_13184
We varied tokenization of development set and test set to match the training data for each experiment.
as we have implied in the previous paragraph, in the one experiment where P (f | e) was used to segment training data, directly incorporating information from target corpus, tokenization for test and development set is not exactly consistent with tokenization of training corpus.
contrasting
train_13185
For both language pairs, this accurately reflects the empirical distribution of token length, as can be seen in Figure 2.
experiments where P (s) was directly optimized performed better, indicating that this parameter should be optimized within the context of a complete system.
contrasting
train_13186
By counting and normalizing appropriately over the entire corpus, we can straightforwardly learn the P sub and P adj distributions.
recall that in our model P ifadj is a rule-specific probability, which makes it more difficult to estimate accurately.
contrasting
train_13187
The number of bucket cells also affects the overall error rate significantly, since smaller ranges reduce the probability of a collision.
too few cells per bucket will result in many full buckets when the bucket hash function is not highly IID.
contrasting
train_13188
On the other hand, corpus-based distributional measures of semantic distance, such as cosine and α-skew divergence (Dagan et al., 1999), rely on raw text alone (Weeds et al., 2004;Mohammad, 2008).
when used to rank word pairs in order of semantic distance or correct real-word spelling errors, they have been shown to perform poorly (Weeds et al., 2004;Mohammad and Hirst, 2006).
contrasting
train_13189
Mohammad and Hirst (2006) show that their approach performs better than other strictly corpusbased approaches that they experimented with.
all those experiments were on word pairs that were listed in the thesaurus.
contrasting
train_13190
contradiction, instead of entailment (de Marneffe et al., 2008).
according to our best knowledge, the detailed comparison between these strategies has not been fully explored, let alone the impact of the linguistic motivation behind the strategy selection.
contrasting
train_13191
the results show that, due to the nature of these approaches, which are based on overlapping information or similarity between T and H, this way of splitting is more reasonable.
RTE systems using semantic role labelers have not shown very promising results, although SRL has been successfully used in many other NLP tasks, e.g.
contrasting
train_13192
For example, the functions that construct the traditional TFIDF cosine similarity can be written as [equation omitted], where N is the size of the document collection for deriving document frequencies, and tf and df are the functions computing the term frequency and document frequency, respectively.
TWEAK also takes a specified vector function f_sim but assumes a parametric term-weighting function tw_w.
contrasting
train_13193
Notice that we choose these functional forms for their simplicity and good empirical performance shown in preliminary experiments.
other smooth functions can certainly be used.
contrasting
train_13194
As shown in Table 1, all three learned term-weighting functions lead to better similarity measures compared to the TFIDF scheme in terms of the AUC and MAP scores, where the preference order learning setting performs the best.
for the precision at 3 metric, only the preference learning setting has a higher score than the TFIDF scheme, but the difference is not statistically significant.
contrasting
train_13195
In this scenario, textual ads with bid keywords that match the query can enter the auction and have a chance to be shown on the search result page.
as the advertisers may bid on keywords that are not related to their advertisements, it is important for the system to filter irrelevant ads to ensure that users only receive useful information.
contrasting
train_13196
By pruning the search space, we can speed up the pattern generation process.
none of these modifications affect the accuracy of the proposed semantic similarity measure because the modified version of the prefixspan algorithm still generates the exact set of patterns that we would obtain if we used the original prefixspan algorithm (i.e.
contrasting
train_13197
The RASP parser is based on a manually constructed POS tag-sequence grammar, with a statistical parse selection component and a robust … [footnote 1: One obvious omission is any form of dependency parser (McDonald et al., 2005; Nivre and Scholz, 2004).]
the dependencies returned by these parsers are local, and it would be non-trivial to infer from a series of links whether a long-range dependency had been correctly represented.
contrasting
train_13198
RASP has not been designed to capture many of the dependencies in our corpus; for example, the tag-sequence grammar has no explicit representation of verb subcategorisation, and so may not know that there is a missing object in the case of extraction from a relative clause (though it does recover some of these dependencies).
RASP is a popular parser used in a number of applications, and it returns dependencies in a suitable format for evaluation, and so we considered it to be an appropriate and useful member of our parser set.
contrasting
train_13199
The work which deals with the PTB representation directly, such as Johnson (2002), is difficult for us to evaluate because it does not produce explicit dependencies.
the DCU post-processor is ideal because it does produce dependencies in a GR format.
contrasting