source_text (string, lengths 27–368) | label (int64, values 0–1) | target_text (string, lengths 1–5.38k) |
---|---|---|
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | The same numbers were used for each data structure. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | Besides the lack of a clear definition of what constitutes a correct segmentation for a given Chinese sentence, there is the more general issue that the test corpora used in these evaluations differ from system to system, so meaningful comparison between systems is rendered even more difficult. |
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM. | 0 | For example, syntactic decoders (Koehn et al., 2007; Dyer et al., 2010; Li et al., 2009) perform dynamic programming parametrized by both backward- and forward-looking state. |
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse. | 0 | This result suggests that all of the contextual role KSs can provide useful information for resolving anaphora. |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | 31 75. |
They showed that it was useful to abstract away from the details of the formalism and examine the nature of their derivation process as reflected by properties of their trees, and find that several of the formalisms considered can be seen as closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars. | 0 | There has been recent interest in the application of Indexed Grammars (IG's) to natural languages. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | genitiveMark indicates recursive NPs with an indefinite nominal left daughter and an NP right daughter. |
Finally, several coreference systems have successfully incorporated anaphoricity determination modules. | 0 | An important aspect of the DempsterShafer model is that it operates on sets of hypotheses. |
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but had translations into a resource-rich language. | 0 | The feature-HMM model works better for all languages, generalizing the results achieved for English by Berg-Kirkpatrick et al. (2010). |
Replacing this with a ranked evaluation seems to be more suitable. | 0 | (b) does the translation have the same meaning, including connotations? |
They have made use of local and global features to deal with the instances of the same token in a document. | 0 | A considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC). |
The texts were annotated with the RSTtool. | 0 | We will briefly discuss this point in Section 3.1. |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | Therefore, we only score guess/gold pairs with identical character yields, a condition that allows us to measure parsing, tagging, and segmentation accuracy by ignoring whitespace. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | There are clearly eight orthographic words in the example given, but if one were doing syntactic analysis one would probably want to consider I'm to consist of two syntactic words, namely I and am. |
In this paper, the authors are of the opinion that the sequence models-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | Note that while the standard HMM has O(Kn) emission parameters, our model has O(n) effective parameters. Token Component: Once HMM parameters (π, θ) have been drawn, the HMM generates a token-level corpus w in the standard way: P(T, W, θ, ψ, π, t, w|α, β) = P(T, W, ψ|β) [Lexicon] × P(π, θ|T, α, β) [Parameter] × P(w, t|π, θ) [Token], where P(w, t|π, θ) = ∏j P(tj|πtj−1)P(wj|tj, θtj). We refer to the components on the right-hand side as the lexicon, parameter, and token components respectively. |
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold. | 0 | More examples are shown in Figure 5. |
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | Our initial experimentation with the evaluation tool showed that this is often too overwhelming. |
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages. | 0 | For each trigram type x2 x3 x4 in a sequence x1 x2 x3 x4 x5, we count how many times that trigram type co-occurs with the different instantiations of each concept, and compute the point-wise mutual information (PMI) between the two.5 The similarity between two trigram types is given by summing over the PMI values over feature instantiations that they have in common. |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | By establishing significantly higher parsing baselines, we have shown that Arabic parsing performance is not as poor as previously thought, but remains much lower than English. |
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models. | 0 | Note that it is in precision that our overall performance would appear to be poorer than the reported performance of Chang et al., yet based on their published examples, our system appears to be doing better precision-wise. |
All the texts were annotated by two people. | 0 | • Anaphoric links: the annotator is asked to specify whether the anaphor is a repetition, partial repetition, pronoun, epithet (e.g., Andy Warhol – the PopArt artist), or is-a (e.g., Andy Warhol was often hunted by photographers. |
This paper conducted research in the area of automatic paraphrase discovery. | 0 | For each domain, phrases which contain the same keyword are gathered to build a set of phrases (Step 3). |
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | Figure 6: Plot of log frequency of base noun against log frequency of plural nouns (log(F)_base: R²=0.20, p < 0.005). |
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance. | 0 | (7) φ s,t This is a somewhat less direct objective than used by Matsoukas et al, who make an iterative approximation to expected TER. |
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models. | 0 | 7). |
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. | 0 | 4 70.4 46. |
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results. | 0 | The Hebrew token ‘bcl’, for example, stands for the complete prepositional phrase. (We adopt here the transliteration of Sima’an et al. (2001).) |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | each word in the lexicon whether or not each string is actually an instance of the word in question. |
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | Different sentence structure and rich target language morphology are two reasons for this. |
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used simpler training procedure. | 0 | Matsoukas et al (2009) generalize it by learning weights on sentence pairs that are used when estimating relative-frequency phrase-pair probabilities. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | The present proposal falls into the last group. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | Entries landing in the same bucket are said to collide. |
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back. | 0 | The predominate focus of building systems that translate into English has ignored so far the difficult issues of generating rich morphology which may not be determined solely by local context. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | Their results are then compared with the results of an automatic segmenter. |
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages. | 0 | We use graph-based label propagation for cross-lingual knowledge transfer and use the projected labels as features in an unsupervised model (Berg- Kirkpatrick et al., 2010). |
There is no global pruning. | 0 | E.g. when 'Zahnarzttermin' is aligned to dentist's, the extended lexicon model might learn that 'Zahnarzttermin' actually has to be aligned to both dentist's and appointment. |
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity. | 0 | A Hebrew surface token may have several readings, each of which corresponding to a sequence of segments and their corresponding PoS tags. |
This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | SRILM’s compact variant has an incredibly expensive destructor, dwarfing the time it takes to perform translation, and so we also modified Moses to avoid the destructor by calling exit instead of returning normally. |
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity. | 0 | 0 57.3 51. |
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language. | 0 | We use graph-based label propagation for cross-lingual knowledge transfer and use the projected labels as features in an unsupervised model (Berg- Kirkpatrick et al., 2010). |
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | This larger corpus was kindly provided to us by United Informatics Inc., R.O.C. a set of initial estimates of the word frequencies.9 In this re-estimation procedure only the entries in the base dictionary were used: in other words, derived words not in the base dictionary and personal and foreign names were not used. |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | In-domain Systran scores on this metric are lower than all statistical systems, even the ones that have much worse human scores. |
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance. | 0 | Experiments are presented in section 4. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | We follow the guidelines developed in the TIGER project (Brants et al. 2002) for syntactic annotation of German newspaper text, using the Annotate3 tool for interactive construction of tree structures. |
This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | Alon Lavie advised on this work. |
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM. | 0 | While we have minimized forward-looking state in Section 4.1, machine translation systems could also benefit by minimizing backward-looking state. |
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. | 0 | In the remainder of the paper, we outline how a class of Linear Context-Free Rewriting Systems (LCFRS's) may be defined and sketch how semilinearity and polynomial recognition of these systems follows. |
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | Our implementation permits jumping to any n-gram of any length with a single lookup; this appears to be unique among language model implementations. |
This topic has been getting more attention, driven by the needs of various NLP applications. | 0 | Evaluation of links A link between two sets is considered correct if the majority of phrases in both sets have the same meaning, i.e. if the link indicates paraphrase. |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | [Table fragment: Gold POS scores 0.791/0.825 (358), 0.773/0.818 (358), 0.802/0.836 (452).] |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | Space- or punctuation-delimited * 700 Mountain Avenue, 2d451, Murray Hill, NJ 07974, USA. |
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far. | 0 | Segments with the same surface form but different PoS tags are treated as different lexemes, and are represented as separate arcs (e.g. the two arcs labeled neim from node 6 to 7). |
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity. | 0 | (2009) on Portuguese (Graça et al. |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | 0.831 0.859 496 76. |
Most IE researchers have been creating paraphrase knowledge by hand and for specific tasks. | 0 | The larger sets are more accurate than the small sets. |
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | In fact, it is very difficult to maintain consistent standards, on what (say) an adequacy judgement of 3 means even for a specific language pair. |
They showed better grammars to improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones. | 0 | Data We use the Hebrew Treebank, (Sima’an et al., 2001), provided by the knowledge center for processing Hebrew, in which sentences from the daily newspaper “Ha’aretz” are morphologically segmented and syntactically annotated. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | For example, in Northern Mandarin dialects there is a morpheme -r that attaches mostly to nouns, and which is phonologically incorporated into the syllable to which it attaches: thus men2+r (door+R) 'door' is realized as mer2. |
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems. | 0 | The results of the manual and automatic evaluation of the participating system translations are detailed in the figures at the end of this paper. |
They have made use of local and global features to deal with the instances of the same token in a document. | 0 | He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution. |
They showed that it was useful to abstract away from the details of the formalism and examine the nature of their derivation process as reflected by properties of their trees, and find that several of the formalisms considered can be seen as closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars. | 0 | This aspect of the formalism is both linguistically and computationally important. |
This paper presents a maximum entropy-based named entity recognizer (NER). | 0 | MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999). |
They plan on extending instance-weighting to other standard SMT components and capture the degree of generality of phrase pairs. | 0 | The dev corpus was taken from the NIST05 evaluation set, augmented with some randomly-selected material reserved from the training set. |
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform. | 0 | This is demonstrated by average scores over all systems, in terms of BLEU, fluency and adequacy, as displayed in Figure 5. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | Table 1 briefly describes the seven syntactic heuristics used by BABAR to resolve noun phrases. |
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. | 0 | Specifically, the lexicon is generated as: P(T, W|ψ) = P(T)P(W|T). Word Type Features (FEATS): Past unsupervised POS work has derived benefits from features on word types, such as suffix and capitalization features (Hasan and Ng, 2009; Berg-Kirkpatrick et al., 2010). |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. |
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints. | 0 | The relevant variables are the set of token-level tags that appear before and after each instance of the ith word type; we denote these context pairs with the set {(tb, ta)} and they are contained in t(−i). |
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back. | 0 | The results of the manual and automatic evaluation of the participating system translations are detailed in the figures at the end of this paper. |
Finally, several coreference systems have successfully incorporated anaphoricity determination modules. | 0 | We generate these caseframes automatically by running AutoSlog over the training corpus exhaustively so that it literally generates a pattern to extract every noun phrase in the corpus. |
The approach has been successfully tested on the 8 000-word Verbmobil task. | 0 | Translation errors are reported in terms of multireference word error rate (mWER) and subjective sentence error rate (SSER). |
They showed that it was useful to abstract away from the details of the formalism and examine the nature of their derivation process as reflected by properties of their trees, and find that several of the formalisms considered can be seen as closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars. | 0 | To do this, the x's and y's are stored in the next 2n1 + 2n2 tapes, and M goes to a universal state. |
They showed that it was useful to abstract away from the details of the formalism and examine the nature of their derivation process as reflected by properties of their trees, and find that several of the formalisms considered can be seen as closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars. | 0 | These operations, as we see below, are restricted to be size preserving (as in the case of concatenation in CFG) which implies that they will be linear and non-erasing. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | Papers that use this method or minor variants thereof include Liang (1986), Li et al. |
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank. | 0 | The parser switching oracle is the upper bound on the accuracy that can be achieved on this set in the parser switching framework. |
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs. | 0 | We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously. |
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm. | 0 | The search starts in the hypothesis (I, {}, 0). |
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems. | 0 | Ex: Mr. Bush disclosed the policy by reading it... |
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). | 0 | N, portion of examples on which both classifiers give a label rather than abstaining), and the proportion of these examples on which the two classifiers agree. |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | There may be occasionally a system clearly at the top or at the bottom, but most systems are so close that it is hard to distinguish them. |
They showed better grammars to improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones. | 0 | Current state-of-the-art broad-coverage parsers assume a direct correspondence between the lexical items ingrained in the proposed syntactic analyses (the yields of syntactic parse-trees) and the spacedelimited tokens (henceforth, ‘tokens’) that constitute the unanalyzed surface forms (utterances). |
Two general approaches are presented and two combination techniques are described for each approach. | 0 | For our experiments we also report the mean of precision and recall, which we denote by (P + R)I2 and F-measure. |
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model. | 0 | Because we don’t have a separate development set, we used the training set to select among them and found 0.2 to work slightly better than 0.1 and 0.3. |
In this paper, the authors are of the opinion that the sequence models-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | We train and test on the CoNLL-X training set. |
It is annotated with several data: morphology, syntax, rhetorical structure, connectors, correference and informative structure. | 0 | The annotator can then “click away” those words that are here not used as connectives (such as the conjunction und (‘and’) used in lists, or many adverbials that are ambiguous between connective and discourse particle). |
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages. | 0 | However, we needed to restrict ourselves to these languages in order to be able to evaluate the performance of our approach. |
Finally, several coreference systems have successfully incorporated anaphoricity determination modules. | 0 | For example, the passive voice pattern “<subject> were kidnapped” and the active voice pattern “kidnapped <direct object>” are merged into a single normalized pattern “kidnapped <patient>”. For the sake of simplicity, we will refer to these normalized extraction patterns as caseframes. These caseframes can capture two types of contextual role information: (1) thematic roles corresponding to events (e.g., “<agent> kidnapped” or “kidnapped <patient>”), and (2) predicate-argument relations associated with both verbs and nouns (e.g., “kidnapped for <np>” or “vehicle with <np>”). |
This paper talks about Pseudo-Projective Dependency Parsing. | 0 | In the second part of the experiment, we applied the inverse transformation based on breadth-first search under the three different encoding schemes. |
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not. | 0 | The first setting uses the European Medicines Agency (EMEA) corpus (Tiedemann, 2009) as IN, and the Europarl (EP) corpus (www.statmt.org/europarl) as OUT, for English/French translation in both directions. |
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | However, lazy mapping is generally slow because queries against uncached pages must wait for the disk. |
In this paper, the authors are of the opinion that the sequence models-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | [Table fragment: model and hyperparameter results (1TW best/median) for English, Danish, Dutch, German, Portuguese, Spanish, and Swedish.] |
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | While it is essential to be fluent in the target language, it is not strictly necessary to know the source language, if a reference translation was given. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | RandLM is the clear winner in RAM utilization, but is also slower and lower quality. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | First, a non-anaphoric NP classifier identifies definite noun phrases that are existential, using both syntactic rules and our learned existential NP recognizer (Bean and Riloff, 1999), and removes them from the resolution process. |
All the texts were annotated by two people. | 0 | The state in future has not enough work for its many teachers. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | More complex approaches such as the relaxation technique have been applied to this problem (Fan and Tsai, 1988). |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | Crucially, the conventional orthographic form of MSA text is unvocalized, a property that results in a deficient graphical representation. |
The resulting model is compact, efficiently learnable and linguistically expressive. | 0 | 0 57.3 51. |
They focused on phrases which connect two Named Entities, and proceeded in two stages. | 0 | The sentences in the corpus were tagged by a transformation-based chunker and an NE tagger. |
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | a classifier. |
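Rows like the ones above follow the three-column schema given in the header: a source_text string, a binary int64 label, and a target_text string. As a minimal sketch of consuming that schema, the snippet below builds two illustrative rows in memory and tallies the labels; the helper name label_distribution and the example rows are ours, not part of the dataset.

```python
# Illustrative rows mirroring the (source_text, label, target_text) schema.
# The texts are truncated excerpts; only the structure matters here.
rows = [
    {"source_text": "There is no global pruning.",
     "label": 0,
     "target_text": "E.g. when 'Zahnarzttermin' is aligned to dentist's ..."},
    {"source_text": "Evaluation was done automatically using the BLEU score "
                    "and manually on fluency and adequacy.",
     "label": 0,
     "target_text": "There may be occasionally a system clearly at the top ..."},
]

def label_distribution(rows):
    """Count how many rows carry each binary label (0 or 1)."""
    counts = {0: 0, 1: 0}
    for row in rows:
        counts[row["label"]] += 1
    return counts

print(label_distribution(rows))
```

The same tally works unchanged on the full dataset once its rows are loaded as dicts, which is a quick way to check class balance before training on the pairs.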