source_text
stringlengths
27
368
label
int64
0
1
target_text
stringlengths
1
5.38k
The approach has been successfully tested on the 8 000-word Verbmobil task.
0
The sentence length probability p(JjI) is omitted without any loss in performance.
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
0
Making the ten judgements (2 types for 5 systems) takes on average 2 minutes.
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
This overview is illustrated in Figure 1.
BABAR's performance in both domains of terrorism and natural disaster, and the contextual-role knowledge in pronouns have shown successful results.
0
We will refer to the semantic classes that co-occur with a caseframe as the semantic expectations of the caseframe.
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back.
0
Hence, the different averages of manual scores for the different language pairs reflect the behaviour of the judges, not the quality of the systems on different language pairs.
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems.
0
The predominate focus of building systems that translate into English has ignored so far the difficult issues of generating rich morphology which may not be determined solely by local context.
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
One striking example is the error reduction for Spanish, which reduces error by 36.5% and 24.7% for the best and median settings respectively.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
The binary language model from Section 5.2 and text phrase table were forced into disk cache before each run.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
For instance, for out-ofdomain English-French, Systran has the best BLEU and manual scores.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
The hash variant is a reverse trie with hash tables, a more memory-efficient version of SRILM’s default.
The departure from the traditional token-based tagging approach allow them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
Following the setup of Johnson (2007), we use the whole of the Penn Treebank corpus for training and evaluation on English.
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used simpler training procedure.
0
The linear LM (lin lm), TM (lin tm) and MAP TM (map tm) used with non-adapted counterparts perform in all cases slightly worse than the log-linear combination, which adapts both LM and TM components.
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
Backoff-smoothed models estimate this probability based on the observed entry with longest matching history wnf , returning where the probability p(wn|wn−1 f ) and backoff penalties b(wn−1 i ) are given by an already-estimated model.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
In practice, this sparsity constraint is difficult to incorporate in a traditional POS induction system (Me´rialdo, 1994; Johnson, 2007; Gao and Johnson, 2008; Grac¸a et al., 2009; Berg-Kirkpatrick et al., 2010).
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
If the expression is longer or complicated (like “A buys B” and “A’s purchase of B”), it is called “paraphrase”, i.e. a set of phrases which express the same thing or event.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
1 2 3.
Foster et all describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not.
0
Intuitively, as suggested by the example in the introduction, this is the right granularity to capture domain effects.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
Adam Pauls provided a pre-release comparison to BerkeleyLM and an initial Java interface.
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
0
In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
Past work however, has typically associ n = n P (Ti)P (Wi|Ti) = i=1 1 n K n ated these features with token occurrences, typically in an HMM.
This paper conducted research in the area of automatic paraphrase discovery.
0
The accuracy of the sets in representing paraphrase ranged from 73% to 99%, depending on the NE categories and set sizes; the accuracy of the links for two evaluated domains was 73% and 86%.
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an eÆcient search algorithm.
0
The monotone search performs worst in terms of both error rates mWER and SSER.
This topic has been getting more attention, driven by the needs of various NLP applications.
0
If a phrase does not contain any keywords, the phrase is discarded.
It is probably the first analysis of Arabic parsing of this kind.
0
Preprocessing the raw trees improves parsing performance considerably.9 We first discard all trees dominated by X, which indicates errors and non-linguistic text.
It also incorporates the Good-Turing methodin estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
Table 4 Differences in performance between our system and Wang, Li, and Chang (1992).
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
Interpolation search is therefore a form of binary search with better estimates informed by the uniform key distribution.
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back.
0
The manual scores are averages over the raw unnormalized scores.
The departure from the traditional token-based tagging approach allow them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
5.2 Setup.
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
0
Thus at each iteration the method induces at most n x k rules, where k is the number of possible labels (k = 3 in the experiments in this paper). step 3.
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
0
Daum´e (2007) applies a related idea in a simpler way, by splitting features into general and domain-specific versions.
Explanations for this phenomenon are relative informativeness of lexicalization, insensitivity to morphology and the effect of variable word order and these factors lead to syntactic disambiguation.
0
But it conflates the coordinating and discourse separator functions of wa (<..4.b � �) into one analysis: conjunction(Table 3).
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
5 We choose these two metrics over the Variation Information measure due to the deficiencies discussed in Gao and Johnson (2008).
Two general approaches are presented and two combination techniques are described for each approach.
0
This is equivalent to the assumption used in probability estimation for naïve Bayes classifiers, namely that the attribute values are conditionally independent when the target value is given.
In this paper, the authors are of the opinion that the sequence models-based approaches usually treat token-level tag assignment as the primary latent variable.
0
4 7 . 3 8 . 9 2 8 . 8 2 0 . 7 3 2 . 3 3 5 . 2 2 9 . 6 2 7 . 6 1 4 . 2 4 2 . 8 4 5 . 9 4 4 . 3 6 0 . 6 6 1 . 5 4 9 . 9 3 3 . 9 Table 6: Type-level Results: Each cell report the type- level accuracy computed against the most frequent tag of each word type.
This paper discusses the Potsdam Commentary Corpus, a corpus of german assembeled by potsdam university.
0
Besides information structure, the second main goal is to enhance current models of rhetorical structure.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
This is less than the 694 judgements 2004 DARPA/NIST evaluation, or the 532 judgements in the 2005 DARPA/NIST evaluation.
In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
To resolve the anaphor, we survey the final belief values assigned to each candidate’s singleton set.
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an eÆcient search algorithm.
0
Future extensions of the system might include: 1) An extended translation model, where we use more context to predict a source word.
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
So, this was a surprise element due to practical reasons, not malice.
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
0
Cohen and Smith approach this by introducing the α hyperparameter, which performs best when optimized independently for each sentence (cf.
They have made use of local and global features to deal with the instances of same token in a document.
0
For example, in predicting if a word belongs to a word class, is either true or false, and refers to the surrounding context: if = true, previous word = the otherwise The parameters are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).
There are clustering approaches that assign a single POS tag to each word type.
0
For all languages we do not make use of a tagging dictionary.
It is annotated with several data: morphology, syntax, rhetorical structure, connectors, correference and informative structure.
0
In order to evaluate and advance this approach, it helps to feed into the knowledge base data that is already enriched with some of the desired information — as in PCC.
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
We did have a problem classifying some modified noun phrases where the modified phrase does not represent a qualified or restricted form of the head, like “chairman” and “vice chairman”, as these are both represented by the keyword “chairman”.
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers.
0
This may seem surprising, given the experiments reported in section 4, but the explanation is probably that the non-projective dependencies that can be recovered at all are of the simple kind that only requires a single lift, where the encoding of path information is often redundant.
The AdaBoost algorithm was developed for supervised learning.
0
Then, it can be verified that We can now derive the CoBoost algorithm as a means of minimizing Zco.
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
selected; and that recall is defined to be the number of correct hits divided by the number of items that should have been selected.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
In addition to the automatic methods, AG, GR, and ST, just discussed, we also added to the plot the values for the current algorithm using only dictionary entries (i.e., no productively derived words or names).
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement.
0
Examples are given in Table 4.
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
For example, BABAR learned that agents that “assassinate” or “investigate a cause” are usually humans or groups (i.e., organizations).
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.
0
76 16.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
(a) Of the high frequency phrasal categories, ADJP and SBAR are the hardest to parse.
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
In this section, we describe how contextual role knowledge is represented and learned.
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
i..f,..
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.
0
The result of this is shown in Figure 7.
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
0
(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.
It also incorporates the Good-Turing methodin estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
For novel texts, no lexicon that consists simply of a list of word entries will ever be entirely satisfactory, since the list will inevitably omit many constructions that should be considered words.
A beam search concept is applied as in speech recognition.
0
For Æ = 1, a new target language word is generated using the trigram language model p(eje0; e00).
Vijay-Shankar et all considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
Roughly speaking, a language, L, has the property of semilinearity if the number of occurrences of each symbol in any string is a linear combination of the occurrences of these symbols in some fixed finite set of strings.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
3.
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.
0
The average fluency judgement per judge ranged from 2.33 to 3.67, the average adequacy judgement ranged from 2.56 to 4.13.
They focused on phrases which two Named Entities, and proceed in two stages.
0
We use a simple TF/IDF method to measure the topicality of words.
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
For more on the participating systems, please refer to the respective system description in the proceedings of the workshop.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
Others depend upon various lexical heuris­ tics: for example Chen and Liu (1992) attempt to balance the length of words in a three-word window, favoring segmentations that give approximately equal length for each word.
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
0
Each decision determines the inclusion or exclusion of a candidate constituent.
It also incorporates the Good-Turing methodin estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
Our system fails in (a) because of$ shenl, a rare family name; the system identifies it as a family name, whereas it should be analyzed as part of the given name.
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.
0
All notions of word, with the exception of the orthographic word, are as relevant in Chinese as they are in English, and just as is the case in other languages, a word in Chinese may correspond to one or more symbols in the orthog 1 For a related approach to the problem of word-segrnention in Japanese, see Nagata (1994), inter alia..
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
In our model, we associate these features at the type-level in the lexicon.
The departure from the traditional token-based tagging approach allow them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
5 We choose these two metrics over the Variation Information measure due to the deficiencies discussed in Gao and Johnson (2008).
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
The index in this array is the vocabulary identifier.
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
0
In Table 1 we see with very few exceptions that the isolated constituent precision is less than 0.5 when we use the constituent label as a feature.
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an eÆcient search algorithm.
0
We can do that . Input: Das ist zu knapp , weil ich ab dem dritten in Kaiserslautern bin . Genaugenommen nur am dritten . Wie ware es denn am ahm Samstag , dem zehnten Februar ? MonS: That is too tight , because I from the third in Kaiserslautern . In fact only on the third . How about ahm Saturday , the tenth of February ? QmS: That is too tight , because I am from the third in Kaiserslautern . In fact only on the third . Ahm how about Saturday , February the tenth ? IbmS: That is too tight , from the third because I will be in Kaiserslautern . In fact only on the third . Ahm how about Saturday , February the tenth ? Input: Wenn Sie dann noch den siebzehnten konnten , ware das toll , ja . MonS: If you then also the seventeenth could , would be the great , yes . QmS: If you could then also the seventeenth , that would be great , yes . IbmS: Then if you could even take seventeenth , that would be great , yes . Input: Ja , das kommt mir sehr gelegen . Machen wir es dann am besten so . MonS: Yes , that suits me perfectly . Do we should best like that . QmS: Yes , that suits me fine . We do it like that then best . IbmS: Yes , that suits me fine . We should best do it like that .
These clusters are computed using an SVD variant without relying on transitional structure.
0
4 7 . 3 8 . 9 2 8 . 8 2 0 . 7 3 2 . 3 3 5 . 2 2 9 . 6 2 7 . 6 1 4 . 2 4 2 . 8 4 5 . 9 4 4 . 3 6 0 . 6 6 1 . 5 4 9 . 9 3 3 . 9 Table 6: Type-level Results: Each cell report the type- level accuracy computed against the most frequent tag of each word type.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
Model Overview The model starts by generating a tag assignment T for each word type in a vocabulary, assuming one tag per word.
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language.
0
Given the bilingual graph described in the previous section, we can use label propagation to project the English POS labels to the foreign language.
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results.
0
3An English sentence with ambiguous PoS assignment can be trivially represented as a lattice similar to our own, where every pair of consecutive nodes correspond to a word, and every possible PoS assignment for this word is a connecting arc.
Instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
0
We focus here instead on adapting the two most important features: the language model (LM), which estimates the probability p(wIh) of a target word w following an ngram h; and the translation models (TM) p(slt) and p(t1s), which give the probability of source phrase s translating to target phrase t, and vice versa.
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language.
0
This locally normalized log-linear model can look at various aspects of the observation x, incorporating overlapping features of the observation.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
In a model we built with default settings, 1.2% of n + 1-grams were missing their ngram suffix.
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
Both (Tsarfaty, 2006; Cohen and Smith, 2007) have shown that a single integrated framework outperforms a completely streamlined implementation, yet neither has shown a single generative model which handles both tasks.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
3 68.9 50.
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language.
0
Note that since all English vertices were extracted from the parallel text, we will have an initial label distribution for all vertices in Ve.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
We thank members of the MIT NLP group for their suggestions and comments.
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
0
The quasi-monotone search performs best in terms of both error rates mWER and SSER.
These clusters are computed using an SVD variant without relying on transitional structure.
0
Simple Type-Level Unsupervised POS Tagging
It is annotated with several data: morphology, syntax, rhetorical structure, connectors, correference and informative structure.
0
The commentaries in PCC are all of roughly the same length, ranging from 8 to 10 sentences.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
The fraction of buckets that are empty is m−1 m , so average lookup time is O( m 1) and, crucially, constant in the number of entries.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
It is generally considered to be fast (Pauls 29 − 1 probabilities and 2' − 2 non-zero backoffs. and Klein, 2011), with a default implementation based on hash tables within each trie node.
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
0
Every token is independent of the others, and the sentence lattice is in fact a concatenation of smaller lattices, one for each token.
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an eÆcient search algorithm.
0
In German, the verbgroup usually consists of a left and a right verbal brace, whereas in English the words of the verbgroup usually form a sequence of consecutive words.
It is annotated with several data: morphology, syntax, rhetorical structure, connectors, correference and informative structure.
0
Like in the co-reference annotation, G¨otze’s proposal has been applied by two annotators to the core corpus but it has not been systematically evaluated yet.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
For a sequence of hanzi that is a possible name, we wish to assign a probability to that sequence qua name.
Here both parametric and non-parametric models are explored.
0
For our experiments we also report the mean of precision and recall, which we denote by (P + R)I2 and F-measure.
Replacing this with a ranked evaluation seems to be more suitable.
0
0.2 0.1 0.0 -0.1 25 26 27 28 29 30 31 32 -0.2 -0.3 •systran • ntt 0.5 0.4 0.3 0.2 0.1 0.0 -0.1 -0.2 -0.3 20 21 22 23 24 25 26 Fluency Fluency •systran •nrc rali 25 26 27 28 29 30 31 32 0.2 0.1 0.0 -0.1 -0.2 -0.3 cme p � 20 21 22 23 24 25 26 0.5 0.4 0.3 0.2 0.1 0.0 -0.1 -0.2 -0.3 Figure 14: Correlation between manual and automatic scores for English-French 119 In Domain Out of Domain •upv Adequacy -0.9 0.3 0.2 0.1 -0.0 -0.1 -0.2 -0.3 -0.4 -0.5 -0.6 -0.7 -0.8 •upv 23 24 25 26 27 28 29 30 31 32 •upc-mr •utd •upc-jmc •uedin-birch •ntt •rali •uedin-birch 16 17 18 19 20 21 22 23 24 25 26 27 Adequacy •upc-mr 0.4 0.3 0.2 0.1 -0.0 -0.1 -0.2 -0.3 -0.4 -0.5 -0.6 -0.7 -0.8 -0.9 -1.0 -1.1 English-Spanish Fluency •ntt •nrc •rali •uedin-birch -0.2 -0.3 -0.5 •upv 16 17 18 19 20 21 22 23 24 25 26 27 -0.4 nr • rali Fluency -0.4 •upc-mr utd •upc-jmc -0.5 -0.6 •upv 23 24 25 26 27 28 29 30 31 32 0.2 0.1 -0.0 -0.1 -0.2 -0.3 0.3 0.2 0.1 -0.0 -0.1 -0.6 -0.7 Figure 15: Correlation between manual and automatic scores for English-Spanish 120 English-German In Domain Out of Domain Adequacy Adequacy 0.3 0.2 0.1 -0.0 -0.1 -0.2 -0.3 -0.4 -0.5 -0.6 -0.7 -0.8 •upv -0.2 -0.3 -0.4 -0.5 -0.6 -0.7 -0.8 -0.9 •upv 0.5 0.4 •systran •upc-mr • •rali 0.3 •ntt 0.2 0.1 -0.0 -0.1 •systran •upc-mr -0.9 9 10 11 12 13 14 15 16 17 18 19 6 7 8 9 10 11 Fluency 0.2 0.1 -0.0 -0.1 -0.2 -0.3 -0.4 •upv -0.5 •upv •systran •upc-mr • Fluency 0.4 0.3 0.2 0.1 -0.0 -0.1 -0.2 -0.3 -0.4 -0.5 -0.6 •systran •ntt
The corpus was annoted with different linguitic information.
0
Then, the remaining texts were annotated and cross-validated, always with discussions among the annotators.
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
Only 2 link in the CC- domain (buy-purchase, acquire-acquisition) and 2 links (trader-dealer and head-chief) in the PC- domain are found in the same synset of Word- Net 2.1 (http://wordnet.princeton.edu/).
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
0
First, from a large corpus, we extract all the NE instance pairs.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
splitPUNC restores the convention of the WSJ.
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.
0
On the first of these-the B set-our system had 64% recall and 86% precision; on the second-the C set-it had 33% recall and 19% precision.
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language.
0
Of course, we are primarily interested in applying our techniques to languages for which no labeled resources are available.