Columns:
  id         string (lengths 7–12), e.g. "train_10000"
  sentence1  string (lengths 6–1.27k)
  sentence2  string (lengths 6–926)
  label      string (4 classes; only "contrasting" appears in the records below)
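A minimal sketch of reading records with this four-field schema, assuming they have been exported to a local JSONL file (the path "contrast_pairs.jsonl" is hypothetical, not part of the dataset):

```python
import json
from collections import Counter

# Hypothetical local export of the records listed below; each line is one
# JSON object with the four fields from the schema above.
PATH = "contrast_pairs.jsonl"

def load_pairs(path):
    """Yield records as dicts with keys: id, sentence1, sentence2, label."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.strip():
                yield json.loads(line)

if __name__ == "__main__":
    records = list(load_pairs(PATH))
    # Count how often each label occurs (this excerpt shows only "contrasting").
    labels = Counter(r["label"] for r in records)
    print(f"{len(records)} records, label distribution: {dict(labels)}")
    # Example field access, mirroring the first record of the excerpt.
    first = records[0]
    print(first["id"], "->", first["label"])
```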
train_10000
Since all the methods optimize for the relevance, the proposed approach has comparable relevance to the baselines.
our proposed approach yields a higher information diversity and coverage as compared to the other approaches.
contrasting
train_10001
Standing under the old oak tree, she felt leaves tumbling down her shoulders.
while short, definite expressions signal identifiability and are thus either anaphoric expressions or familiar items, it is much harder to decide which indefinite expressions are bridging anaphors, since indefinite expressions are prototypically used to introduce new discourse referents and principally do not need an antecedent/argument in order to be interpretable.
contrasting
train_10002
The final performance of the adapted system (F-score of 19.5) is also given in Table 4.
to ARRAU, SciCorp is an out-of-domain corpus annotated with referential bridging.
contrasting
train_10003
These syntax-based models are capable of capturing long-range dependencies by modeling the repair phrases on a syntax tree.
their high requirements for training data reduce their practicality, since it is expensive to obtain training data containing both syntax trees and disfluency annotations.
contrasting
train_10004
Traditional output layer of Transformer is only one softmax layer.
either the word sequence or the label sequence is incapable of handling the problem of disfluency detection independently.
contrasting
train_10005
For the word sequence, it guides the model to treat each fluent word and disfluency word equally.
in disfluency detection task, the model just needs to distinguish the non-fluent words from the fluent words in sentence.
contrasting
train_10006
Specifically, the encoder is responsible for composing the high-level latent representation of the input and the decoder endeavors to decompose this representation into its corresponding input sequence.
the AE usually fails to capture any internal structure of the involved sentences under unconstrained condition.
contrasting
train_10007
For the weight sharing constraint based on the shared-latent space assumption, we would expect that the corresponding sentences in the labelled or unlabelled corpus will have the similar latent representations.
the weight sharing constraint alone does not necessarily guarantee the same latent representations.
contrasting
train_10008
The ISO standard (Bunt et al., 2010) can be seen as a generalization of all these annotation schemes.
there is no available training data for the ISO standard.
contrasting
train_10009
This means that once we know that an arrow falls into the direction, motion or indication category we have the interpretation of the arrow.
in order to come up with the exact relation that explains the action arrows, we need to take a further step.
contrasting
train_10010
(Kaufman et al., 2017) transferred the semantics of the selected reference clips to test clips, which keeps consistent and maintains temporal coherence.
the existing methods fail to take advantage of the local constraints which can extract compressed features for generating complementary syntactic elements.
contrasting
train_10011
In the human retina, visual features which represent natural signals formed by the peripheral receptors are very high-dimensional.
we argue that the receptors to discover where and what objects only occupy a small fraction of the space of all possible receptor activation due to the statistical regularity and redundancy (Burton and Moorhead, 1987).
contrasting
train_10012
The knowledge transfer is achieved by learning common semantic representations for different languages.
usually, word-aligned or sentence-aligned parallel data sets are employed for joint learning (Guo et al., 2016;Duong et al., 2016) most existing joint learning approaches focus only on cross-tasks or cross-lingual knowledge transfer.
contrasting
train_10013
(2012) trains an SMT model using conversational in-domain parallel data, and then translates the entire L1 SLU training corpus to L2.
bilingual in-domain data is scarce and costly, making it difficult to deliver both quality and low cost.
contrasting
train_10014
The authors report that the indirect alignment gives the best performance.
they also point out that distant language pairs suffer severely in word alignment.
contrasting
train_10015
Generative dialog models usually adopt beam search as the inference method to generate responses.
small-width beam search only focuses on the limited current optima.
contrasting
train_10016
Chinese word segmentation (CWS) trained from open source corpus faces dramatic performance drop when dealing with domain text, especially for a domain with lots of special terms and diverse writing styles, such as the biomedical domain.
building domain-specific CWS requires extremely high annotation cost.
contrasting
train_10017
These models rank the antecedents (mentions that appear earlier in discourse) and recover the full coreference clusters from local decisions.
unlike coreference relations, sequencing relations are directed.
contrasting
train_10018
Recent advances in event coreference have been promoted by the availability of annotated corpora.
due to the complex nature of events, approaches to event coreference adopt quite different assumptions and definitions.
contrasting
train_10019
For all the featuresf of the gold standard graphz and featuresf of a decoded graphẑ, the feature delta is simply: ∆ =f −f .
a decoded graph may contain links that are not directly presented but inferable from the gold standard graph.
contrasting
train_10020
Building these scripts by hand is labor intensive, so automatically learning script knowledge has attracted attention for decades (Balasubramanian et al., 2013;Chambers & Jurafsky, 2009;Mooney & DeJong, 1985;Pichotta & Mooney, 2014.
we are also interested in what such techniques reveal about broad, quantitative properties of discourse in general.
contrasting
train_10021
Thus, we draw a line after chainsim and other techniques used on the cloze task, referring to them as the candidate score.
a candidate score alone is not enough to generate schemas.
contrasting
train_10022
F1 for these skills can be improved by adding more examples to the k-NN classifier.
this approach does not apply to a question-answering skill.
contrasting
train_10023
When two triples are identical -such as twice labeling "Hillary Clinton as ":instance-of person", or adding multiple ":wiki" links to "Hillary Clinton" -we merge those redundant bits of information.
if a variable has different concepts or relations (e.g.
contrasting
train_10024
This further contributes to the success in numerous natural language processing (NLP) tasks such as language modeling (Cheng et al., 2016;Inan et al., 2017), sentiment analysis (Cheng et al., 2016;Kumar et al., 2016), and reading comprehension (Cheng et al., 2016;Chen et al., 2016;Shen et al., 2017).
effective and efficient grounding of distributional embeddings remains challenging.
contrasting
train_10025
On the other hand, for real-world datasets, taking the max in loss function tends to be very sensitive to label noise, resulting in fake negative samples.
given an image-caption pair (i, c), we employ human heuristics and WordNet knowledge base to generate contrastive negative samples C (c).
contrasting
train_10026
As the constitution of the dataset ensures the frequency in the entire dataset of the words used for replacement, the visual grounding of these frequent nouns is easy to obtain.
the semantics of relations (including prepositions and entity relations) or numbers are not diverse enough in the dataset, leading to the poor performance of VSE against these attacks.
contrasting
train_10027
As shown in Table 3, numeraltyped contrastive adversarial samples improve the counting ability of models.
it is still not clear about where the gain comes from, as the creation of numeral-typed samples may change the form (i.e., MS-COCO Test MS-COCO Test (w/.
contrasting
train_10028
In our experiments, we only used the stance label as attributes of text.
it may also be possible to inherit attributes from the author of the text.
contrasting
train_10029
To illustrate the means of persuasion, an emphasis on pathos could, for example, lead to a preference for gradually increasing the strength of emotional appeal throughout the text.
for a logos-oriented strategy, it may be important that the sequence of units coheres locally and globally (which for a pathos-oriented argument may be less relevant, or even detrimental).
contrasting
train_10030
c4 Proponents of the death penalty count on its deterring effect as well as and the ultimate elimination of any potential threat.
p5 despite the death penalty there are significantly more homicides in the US than in Germany.
contrasting
train_10031
This measure does not take into account the number of language switch points in a sentence (denoted by P (x)) and so the authors modify it further: The code-mixing in the entire corpus can then be quantified by taking an average of the above measure across all sentences in the corpus: where U is the number of sentences in the corpus.
their main assumption that the language which has the maximum number of tokens in a sentence is the Matrix language, may not always hold.
contrasting
train_10032
The community has already addressed this phenomenon by introducing challenges on POS-Tagging, Language Identification, Language Modeling, etc on the code-mixed corpora.
the approaches to development of dialog systems still rely on monolingual conversation datasets.
contrasting
train_10033
They extended Long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) to accommodate graph structures to capture hierarchical and temporal conversations.
their model includes designs for efficiently learning the popularity of Reddit.
contrasting
train_10034
The broader view is implemented, inter alia, in the Latvian LVTB treebank and in the English EWT treebank (the latter henceforth abbreviated to UD EN EWT ), where in both cases about 1% of utterances contain a head with an obj and a ccomp dependent; one of the 160 such cases in UD EN EWT is: (1) Apparently the City informed [them] obj [that no permit is required] ccomp .
some treebanks -including the Finnish TDT treebank and the Polish SZ treebank (UD PL SZ , for short) -take the stronger position: if two dependents compete for the status of the direct object, only one of them may be labelled as obj or ccomp, and the other one is given the status of indirect object, iobj.
contrasting
train_10035
As is made clear in the description of the xcomp label, 7 secondary predicates analysed as arguments are xcomp dependents of the verb, with an enhanced nsubj relation to the nominal they predicate of, as in (9).
those analysed as adjuncts are acl dependents of the nominal elements they predicate of, as in (10).
contrasting
train_10036
In sentences such as "We asked the president about the hunt for Osama bin Laden", the prepositional phrase about the hunt for Osama bin Laden will receive the obl -i.e., noncore -dependency label (this is the annotation of such dependents of ASK in UD EN EWT ).
in the sentence "We asked the president whether Americans are safe at home", the subordinate interrogative clause, being an argument, is a core dependent, so it will be annotated as ccomp (again, this is the annotation of such subordinate clause dependents of ASK in UD EN EWT ).
contrasting
train_10037
), otherwise somewhat awkward for the basic proposal presented in the previous subsection: The labels indicated in these examples reflect the constrained understanding of cores: in both cases both conjuncts are oblique, so the whole coordinate dependent is oblique.
on the standard UD view, (34) involves a coordination of a direct object an idea and a finite clause, which -for this reason -must also be analysed as core.
contrasting
train_10038
The symmetry between the three types of dependents -nominal, 'closed' clausal and 'open' clausal -can be made more conspicuous by renaming the labels as in (52).
20 these new names still mix two different types of information: grammatical function (subj, obj and obl) and grammatical category and 'openness' status (n, c, x).
contrasting
train_10039
This task is similar to ours in the sense that a simple binary categorization of items is sought.
there are key differences.
contrasting
train_10040
As do we, those authors focused on noun-noun compounds and they did not address polarity classification.
their approach targets higher frequency words as it relies on the availability of sufficient corpus data to enable the use of distributional similarity.
contrasting
train_10041
Table 4 shows that for most items this seems to be borne out, although the difference is not statistically significant, with the exception of -hai.
for -könig the reverse situation holds in a statistically significant way: the frequencies of affixoid formations are higher than those of the non-affixoid formations.
contrasting
train_10042
In regular compounds, the meaning of the complex form is more or less compositionally derived from the meaning of the components.
since affixoid uses of the morphemes in question go along with bleached meanings, the semantics of a complex word containing an affixoid use should be harder to model compositionally.
contrasting
train_10043
For könig the results drop down as low as the majority baseline in the present setting.
the low result is very likely also heavily driven by the fact that the instances of könig make up close to 40% of our whole dataset.
contrasting
train_10044
In other words, when testing on könig, we use only 60% of the dataset to train on.
we also see lower results on affixoids such as bolzen that are much less frequent than könig and for which we use more training instances in the present generalization setting than we did in the cross-validation setting in §5.1.
contrasting
train_10045
OLLIE (Mausam et al., 2012) follows the idea of bootstrap learning of patterns based on dependency parse paths.
while WOE relies on Wikipedia-based bootstrapping, OLLIE applies a set of high precision seed tuples from its predecessor system REVERB (see section 2.2) to bootstrap a large training set over which it learns a set of extraction pattern templates using dependency parses (see Figure 2).
contrasting
train_10046
In order to preserve such inter-proposition relationships, tuples may contain references to other propositions.
as opposed to OLLIE, where additional contextual modifiers are directly assigned to the corresponding relational tuples, Bast and Haussmann (2013) represent contextual information in the form of separate, linked propositions.
contrasting
train_10047
Finally, the problem of canonicalizing relational phrases and arguments has been hardly addressed so far.
normalizing extractions would be highly beneficial for downstream semantic tasks, such as textual entailment or knowledge base population.
contrasting
train_10048
The results are shown in Figure 5 5 .
to Reimers and Gurevych (2017b), who reported that SGD is the worst optimizer, our results show that SGD outperforms all other optimizers significantly (p < 0.01), with a slower convergence process during training.
contrasting
train_10049
All of these aim to increase dependence between topics and metadata.
our goal is to make topics independent of specified metadata.
contrasting
train_10050
Binary relevance (BR) (Boutell et al., 2004) is one of the earliest attempts to solve the MLC task by transforming the MLC task into multiple single-label classification problems.
it neglects the correlations between labels.
contrasting
train_10051
(2017) use CNN and recurrent neural network (RNN) to capture the semantic information of texts.
they either neglect the correlations between labels or do not consider differences in the contributions of textual content when predicting labels.
contrasting
train_10052
Our proposed methods are developed based on traditional Seq2Seq models.
the mask module is added to the proposed methods, which is used to prevent the models from predicting repeated labels.
contrasting
train_10053
In general, even when a simple linear function generates the training data, these methods are not guaranteed to be consistent with it (Rot99a).
if the feature space is chosen so that they are, the robustness properties shown above become significant.
contrasting
train_10054
For the sake of sireplicity we opted for a linear combination.
simply maximizing correct parses and minimizing incorrect ones would most likely lead to overfitting.
contrasting
train_10055
Instead, a better (but disappointing) strategy would be simply using the tuning corpus.
this is not the situation of LB --plot (d)--for which a moderate, but consistent, improvement of accuracy is observed when retaining the original training set.
contrasting
train_10056
On the one hand, the type of features used in the rules was significantly different between corpora and, additionally, there were very few rules that applied to both sets.
the sign of the prediction of many of these common rules was somewhat contradictory between corpora.
contrasting
train_10057
One obvious strategy is to use a linear combination of separate language and translation components, of the form: where p(w[hi) is a language model, p(wli , s) is a translation model, and A E [0, 1] is a combining weight.
this appears to be a weak technique (Langlais and Foster, 2000), even when A is allowed to depend on various features of the context (hi, s).
contrasting
train_10058
Because different corpora are used to obtain the various results reported in the literature and the problem is often defined differently, detailed comparison is difficult.
the accuracy achieved appears to approach the accuracy results achieved with handwritten rules.
contrasting
train_10059
When a ~ and e are both high, the STL is consuming an unreasonable number of input sentences.
the problem is not intrinsic to the STL model of acquisition.
contrasting
train_10060
For example, concern's conflation set contains itself as well as "concerned," "concerns," and "concerning" (or, in shorthand notation, the set is {a,b,c,d}).
to evaluate an algorithm, we sum the number of correct (C), inserted (Z), and deleted (D) words it predicts for each hypothesized confla- in making these computations, we disregard any CELEX words that are not in the algorithm's data set and vice versa.
contrasting
train_10061
Furthermore, the WSME model is easier to train than the conditional one, because in the WSME model we don't need to estimate the normalization constant Z during the training time.
for each event (x, y) in the training corpus, we have to calculate Z(x) in each iteration of the MEC model.
contrasting
train_10062
The first sum in 3is the expected value of fie ~::#, and it is obviously not possible to sum over all the sentences.
we can estimate the mean by using the empirical expected value: means of the sample mean with respect to P0 (Chen and Rosenfeld, 1999).
contrasting
train_10063
The transformation between T t and T3 should be one of the simplest (accordingly to the operations cost; see section 3.3), and then the search would be limited to the retrieval of a pair (T1,T2) sharing an equivalent transformation.
this is still time-consuming, and we are trying to define a general way to limit the search in such a tree structure space, for example based on tree indexing for efficiency (Daelemans et al., 1997).
contrasting
train_10064
We find that GA feature selection always significantly outperforms the MBLP variant without selection and that feature ordering and weighting with CA significantly outperforms a situation where no weighting is used.
gA selection does not significantly do better than simple iterative feature selection methods, and gA weighting and ordering reach only similar performance as current information-theoretic feature weighting methods.
contrasting
train_10065
crossover: 0.75, mutation: 0.009).
when we used these in the ten-fold cv setup of our main experiments, this gave a mean score of 97.4 (± 0.9) for IBi-IG with CA-selection and a mean score of 97.1 (± 1.1) for IGTREE with GA-selection.
contrasting
train_10066
tion in the number of features used.
we have found no results (on these particular data) that indicate an advantage of evolutionary feature selection approach over the more classical iterative methods.
contrasting
train_10067
Such property seems important for IR purposes, where we might prefer noise rather than silence in the recall process.
it must remain optional, as some other tasks (such as the NP extraction, or the phrase chunking (Abney, 1991)) may need a full disambiguation.
contrasting
train_10068
In recent years, a variety of Machine Learning (ML) techniques has been used to improve the portability of IE systems to new domains, as in SRV (Freitag, 1998), RAPIER (Califf and Mooney, 1997), LIEP (Huffman, 1996), CRYSTAL (Soderland et al., 1995) and WHISK (Soderland, 1999) .
some drawbacks remain in the portability of these systems: a) existing systems generally depend on the supported text style and learn IE-rules either for structured texts, semi-structured texts or free text , b) IE systems are mostly single-concept learning systems, c) consequently, an extractor (e.g., a rule set) is learned for each concept within the scenario in an independent manner, d) the order of execution of the learners is set manually, and so are the scheduling and way of combination of the resulting extractors, and e) focusing on the training data, the size of available training corpora can be inadequate to accurately learn extractors for all the concepts within the scenario 2.
contrasting
train_10069
As argumented in an earlier paper (van Halteren, 2000a), a theory-based feature weight determination would have to take into account each feature's decisiveness and reliability.
clear definitions of these qualities, and hence also means to measure them, are as yet sorely lacking.
contrasting
train_10070
In practice, due to the finite size of the training corpus, the number of rules is always moderate.
as higher values of k lead to an enormous number of possible rules, huge data sets would be necessary in order to have a reliable estimate of the probabilities for values above k = 3.
contrasting
train_10071
Our NP chunks are very similar to the ones of Ramshaw and Marcus (1995 (NP-SBJ *-1) (VP to (VP be (ADJP-PRD excellent)))))) , but ... [CONJP Not only ] does [NP your product ] [vP have to be ] [ADJP excellent ] , but ... ADVP chunks mostly correspond to ADVP constituents in the Treebank.
aDVPs inside aDJPs or inside VPs if in front of the main verb are assimilated into the aDJP respectively VP chunk.
contrasting
train_10072
The initial performance of the tagger was improved by a post-process correction method based on error driven learning and by 4In the literature about related tasks sometimes the tagging accuracy is mentioned as well.
since the relation between tag accuracy and chunk precision and recall is not very strict, tagging accuracy is not a good evaluation measure for this task.
contrasting
train_10073
The evaluation program shows that this simple procedure reaches its best result for 5-contexts (table 1) with 92.46% label accuracy and phrase correctness measured by FZ=i = 87.23.
the improvement from 3-contexts to 5contexts is insignificant, as 3-contexts reached 92.41% accuracy and F~=1=87.09.
contrasting
train_10074
It is 1Precisely, the number of combination becomes 23.
we do not consider I-LST tag since it dose not appear in training data.
contrasting
train_10075
This extra information can be encoded in a state.
one must balance this approach with the fact that as the amount of information in a state increases, with limited training material, the chance of seeing such a state again in the future diminishes.
contrasting
train_10076
The training sets are very small, however, so that the need for feature restrictions disappears and the full model can be used.
the limited size of the training sets has as a disadvantage that hill-climbing becomes practically useless.
contrasting
train_10077
Using such tools has simplified ontology construction.
the wide-spread usage of ontologies is still hindered by the timeconsuming and expensive manual construction task.
contrasting
train_10078
In contrast to some other approaches we include into the groups also some verbs which are in fact infinitive participants of verb valencies.
we are able to detect such cases and recognize the "pure" verb groups afterwards.
contrasting
train_10079
Memory-based learning (MBL) has enjoyed considerable success in corpus-based natural language processing (NLP) tasks and is thus a reliable method of getting a high-level of performance when building corpus-based NLP systems.
there is a bottleneck in MBL whereby any novel testing item has to be compared against all the training items in memory base.
contrasting
train_10080
This is provided to give a more direct comparison with the results in (Daelemans et al., 1999a).
for many NL tasks it is not a good measure and the fscore is more accurate.
contrasting
train_10081
Sure enough both smoothing and cut-off values will allow to improve the precision.
literature has shown that the most frequent sense baseline needs less training data.
contrasting
train_10082
We h a ve presented some preliminary results demonstrating the bene t of using multilingual data.
we conducted our experiments only on a small test set of 32 verbs in one language pair.
contrasting
train_10083
The PP attachment problems we tackle are those involved in analysing such NPs for languages relying on composition of Romance type (as French or Italian), as well as those present in verbal configurations for languages relying on composition of Germanic type (as German or English).
our model can easily be extended to deal with other cases as well.
contrasting
train_10084
The results obtained in previous works relying on semantic classes are above ours (around 0.82 for (Brill and Resnik, 1994) and 0.77 for (Lauer and Dras, 1994)), but a direct comparison is difficult inasmuch as only three-word sequences (V N P, for (Brill and Resnik, 1994) and N N N for (Lauer and Dras, 1994)) were used for evaluation in those works, and the language studied is English.
it may well be the case that the semantic resource we use does not compare well, in terms of coverage and homogeneity, with WordNet, the semantic resource usually used.
contrasting
train_10085
Our results show that the complete model, making use of all the available information yields the best results.
these results are still low, and we still need to precisely identify how to improve them.
contrasting
train_10086
results of Section 2.1) and may learn a same concept in different ways.
9 the combination of classifiers using different oversampling and undersampling rates may be useful since we may not be able to predict, in advance, which rate is optimal (cf.
contrasting
train_10087
For instance, a simple sentence like "Show me the meal" has the gold standard parse: (S (VP (VB Show) (NP (PRP me)) (NP (DT the) (NN meal)))) and is parsed by this algorithm as (ROOT (VVB Show) (PNP me) (NP (AT0 the) (NN1 meal))) According to this evaluation scheme its recall is only 33%, because of the presence of the nonbranching rules, though intuitively it has correctly identified the bracketing.
the crossing brackets measures overvalues these algorithms, since they produces only partial parses -for some sentences my algorithm produces a completely flat parse tree which of course has no crossing brackets.
contrasting
train_10088
It can follow a verb, begin a sentence, end a sentence, and so on.
iN DT is generally followed by some kind of a noun or adjective.
contrasting
train_10089
This quantity was not used in practice because, although it is an excellent indicator of NP, PP, and intransitive S constituents, it gives too strong a bias against other constituents.
neither system is driven exclusively by the entropy measure used, and duplicating the above rankings more accurately did not always lead to better end results.
contrasting
train_10090
The "compact grammar" aspect of MDL is perhaps closer to some traditional linguistic argumentation which at times has argued for minimal grammars on grounds of analytical (Harris, 1951) or cognitive (Chomsky and Halle, 1968) economy.
some CFGs which might possibly be seen as the acquisition goal are anything but compact; take the Penn treebank covering grammar for an extreme example.
contrasting
train_10091
For example, DT NN is an extremely common NP, and when it occurs, it is a constituent around 82% of the time in the data.
when it occurs as a subsequence of DT NN NN it is usually not a constituent.
contrasting
train_10092
As we are considering each pair independently from the rest of the parse, this model does not correspond to a generative model of the kind standardly associated with PCFGs, but can be seen as a random field over the possible parses, with the features being the sequences and contexts (see (Abney, 1997)).
note that we were primarily interested in the clustering behavior, not the parsing behavior, and that the random field parameters have not been fit to any distribution over trees.
contrasting
train_10093
Applied to a system for automatic pronominal anaphora resolution, it led to a substantial improvement in the ratio of suitable and unsuitable candidates in the sets considered by the anaphora resolver (Evans and Orȃsan, 2000).
the previous system has two main weaknesses.
contrasting
train_10094
The classification described in the previous section is useful for determining the animacy of a sense, even for those which were not previously found in the annotated corpus, but which are hyponyms of a node that has been classified.
nouns whose sense is unknown cannot be classified directly and therefore an additional level of processing is necessary.
contrasting
train_10095
This was expected given that our method is based on the idea that in most of the cases the number of animate and inanimate senses determines the animacy of a noun.
this would mean that the same noun will be classified in the same way regardless of the text.
contrasting
train_10096
Several studies rely on the linguistic intuition that the head of the noun phrase makes a greater contribution to the semantics of the nominal group than the modifiers.
for some specific tasks in NLP , the head is not necessarily the most semantically important part of the noun phrase.
contrasting
train_10097
Rules can be of various normal forms and can be ordered.
the appropriate ordering can be hard to find and the key point of many rule induction algorithms is to minimize the search strategy through the space of possible rule sets and orderings.
contrasting
train_10098
Hence, we have defined a learning setting that is both positive examples only and unsupervised.
there has been some suggestion that negative evidence may be available in the form of parental correction.
contrasting
train_10099
The most effective learning model appears to have been a combination of symbolic and stochastic techniques, like the approach presented here.
a full lexicon is supplied to the learner, so that the problem is reduced to one of disambiguating between the possible supertags.
contrasting