source_text: string (lengths 27 to 368)
label: int64 (values 0 or 1)
target_text: string (lengths 1 to 5.38k)
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
Once we figure out the important word (e.g. keyword), we believe we can capture the meaning of the phrase by the keyword.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
The best analysis of the corpus is taken to be the true analysis, the frequencies are re-estimated, and the algorithm is repeated until it converges.
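The loop described here is essentially hard EM (Viterbi re-estimation). Below is a minimal sketch under that reading; `analyses_of` and `score` are hypothetical stand-ins for the paper's analyzer and cost model, not its actual code:

```python
# Hard-EM ("Viterbi re-estimation") loop: treat the best analysis under the
# current frequencies as the truth, re-count, and repeat until convergence.
def viterbi_reestimate(corpus, analyses_of, score, max_iters=50):
    freqs = {}            # start from empty counts
    prev_best = None
    for _ in range(max_iters):
        # 1) pick the highest-scoring analysis of each item under current counts
        best = [max(analyses_of(item), key=lambda a: score(a, freqs))
                for item in corpus]
        if best == prev_best:   # converged: the best analyses stopped changing
            break
        prev_best = best
        # 2) re-estimate frequencies from the chosen analyses
        freqs = {}
        for analysis in best:
            for unit in analysis:
                freqs[unit] = freqs.get(unit, 0) + 1
    return freqs, prev_best
```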
The second algorithm builds on a boosting algorithm called AdaBoost.
0
(Blum and Mitchell 98) offer a promising formulation of redundancy, prove some results about how the use of unlabeled examples can help classification, and suggest an objective function when training with unlabeled examples.
Here we show how non-projective dependency parsing can be achieved by combining a data driven projective parser with special graph transformation techniques.
0
If (wi, r, wj) ∈ A, we say that wi is the head of wj and wj a dependent of wi.
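As a concrete reading of this definition, here is a minimal sketch; the triple encoding and the helper function are illustrative, not the paper's code:

```python
# A labeled dependency graph as a set of (head, relation, dependent) triples,
# mirroring (wi, r, wj) ∈ A. Tokens are identified by position; 0 is the root.
arcs = {(2, "nsubj", 1), (0, "root", 2), (2, "obj", 3)}

def head_of(j, arcs):
    """Return (head, relation) of token j, or None if j has no head."""
    for (i, r, k) in arcs:
        if k == j:
            return (i, r)
    return None

assert head_of(1, arcs) == (2, "nsubj")   # token 2 is the head of token 1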
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
Similarly, hanzi sharing the GHOST radical 鬼 tend to denote spirits and demons, such as 鬼 gui3 'ghost' itself, 魔 mo2 'demon,' and 魇 yan3 'nightmare.'
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
When two partial hypotheses have equal state (including that of other features), they can be recombined and thereafter efficiently handled as a single packed hypothesis.
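A sketch of how such recombination might be implemented, assuming hypothesis objects that expose a hashable `state`, a `score`, and a `packed` list; these names are assumptions for illustration, not any decoder's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Hyp:
    state: tuple          # must include LM state and all other feature state
    score: float
    packed: list = field(default_factory=list)

def recombine(hypotheses):
    # Hypotheses with identical state can never be distinguished later,
    # so keep only the best and fold the rest into it as a packed hypothesis.
    best_by_state = {}
    for hyp in hypotheses:
        best = best_by_state.get(hyp.state)
        if best is None:
            best_by_state[hyp.state] = hyp
        elif hyp.score > best.score:
            hyp.packed.append(best)        # keep loser for k-best extraction
            best_by_state[hyp.state] = hyp
        else:
            best.packed.append(hyp)
    return list(best_by_state.values())
```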
Because many systems performed similarly, they are not able to draw strong conclusions on the question of correlation between manual and automatic evaluation metrics.
0
In addition to the Europarl test set, we also collected 29 editorials from the Project Syndicate website, which are published in all four languages of the shared task.
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
Gather phrases using keywords. Now, the keyword with the top TF/ITF score is selected for each phrase.
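A hedged sketch of this selection step. The exact TF/ITF formula is not given here, so the scoring below (phrase-set term frequency times an inverse corpus-frequency term) is an illustrative stand-in:

```python
import math
from collections import Counter

def top_keyword(phrases, corpus_tf, n_corpus_tokens):
    """Pick the top-scoring keyword for a set of phrases.
    corpus_tf: dict word -> corpus frequency (assumed precomputed)."""
    tf = Counter(w for p in phrases for w in p.split())
    def score(w):
        # tf within the phrase set, damped by how common w is corpus-wide
        return tf[w] * math.log(n_corpus_tokens / max(corpus_tf.get(w, 1), 1))
    return max(tf, key=score)
```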
This assumption, however, is not inherent to type-based tagging models.
0
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers.
0
First, in section 4, we evaluate the graph transformation techniques in themselves, with data from the Prague Dependency Treebank and the Danish Dependency Treebank.
This paper talks about Unsupervised Models for Named Entity Classification.
0
But we will show that the use of unlabeled data can drastically reduce the need for supervision.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
For example, BABAR learned that agents that “assassinate” or “investigate a cause” are usually humans or groups (i.e., organizations).
Replacing this with a ranked evaluation seems to be more suitable.
0
So, who won the competition?
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
0
Automatic Paraphrase Discovery based on Context and Keywords between NE Pairs
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
(2009) also report results on English, but on the reduced 17 tag set, which is not comparable to ours.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
However, for our purposes it is not sufficient to represent the morphological decomposition of, say, plural nouns: we also need an estimate of the cost of the resulting word.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement.
0
It can also be seen clearly in this plot that two of the Taiwan speakers cluster very closely together, and the third Taiwan speaker is also close in the most significant dimension (the x axis).
While many systems had similar performance, the results offered interesting insights, especially about the relative performance of statistical and rule-based systems.
0
The average fluency judgement per judge ranged from 2.33 to 3.67, the average adequacy judgement ranged from 2.56 to 4.13.
The texts were annotated with the RSTtool.
0
Conversely, we can use the full rhetorical tree from the annotations and tune the co-reference module.
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure.
0
Finally, we incorporate the instance-weighting model into a general linear combination, and learn weights and mixing parameters simultaneously, via the MAP-style estimate p(s|t) = (c_λ(s,t) + y · u(s|t)) / (Σ_{s'} c_λ(s',t) + y), where c_λ(s,t) is a modified count for pair (s, t) in OUT, u(s|t) is a prior distribution, and y is a prior weight.
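A small sketch of this estimate; the equation form is assumed from the stated definitions (modified count, prior distribution, prior weight), and the dictionary layout is an illustration:

```python
def map_phrase_prob(s, t, c_lambda, u_prior, y):
    """MAP-smoothed phrase probability p(s|t).
    c_lambda: dict (s, t) -> modified OUT count
    u_prior:  dict (s, t) -> prior probability u(s|t)
    y:        prior weight."""
    num = c_lambda.get((s, t), 0.0) + y * u_prior.get((s, t), 0.0)
    # normalize over all source phrases s' observed with target phrase t
    denom = sum(c for (s2, t2), c in c_lambda.items() if t2 == t) + y
    return num / denom
```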
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.
0
Table 2 shows our complete set of results.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
[Figure 7: Evaluation of translation to English on in-domain test data. Figure 8: Evaluation of translation from English on in-domain test data. Figure 9: Evaluation of translation to English on out-of-domain test data. Figure 10: Evaluation of translation from English on out-of-domain test data. Each figure tabulates per-system rank ranges for adequacy, fluency, and BLEU over the French-English, Spanish-English, German-English, English-French, English-Spanish, and English-German language pairs.]
[Figure 11: Correlation between manual and automatic scores for French-English. Figure 12: Correlation between manual and automatic scores for Spanish-English; further panels cover German-English and English-French. Scatter plots of adequacy and fluency against BLEU, in domain and out of domain.]
BABAR achieved good performance in both the terrorism and natural disaster domains, and the contextual-role knowledge proved particularly helpful for pronouns.
0
If one of these checks fails then this knowledge source reports that the candidate is not a viable antecedent for the anaphor.
This topic has been getting more attention, driven by the needs of various NLP applications.
0
Cluster phrases based on links. We now have a set of phrases which share a keyword.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
Clearly, explicitly modeling such a powerful constraint on tagging assignment has a potential to significantly improve the accuracy of an unsupervised part-of-speech tagger learned without a tagging dictionary.
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
0
For example, in Figure 3, we can see that the phrases in the “buy”, “acquire” and “purchase” sets are mostly paraphrases.
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
0
English was again paired with German, French, and Spanish.
The corpus was annotated with different linguistic information.
0
(Carlson, Marcu 2001) responded to this situation with relatively precise (and therefore long!)
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure.
0
), which precludes a single universal approach to adaptation.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
Among the IS-units, the referring expressions are marked as such and will in the second phase receive a label for cognitive status (active, accessible-text, accessible-situation, inferrable, inactive).
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
3.1 Corpora.
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
0
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
For the ‘core’ portion of PCC, we found that on average, 35% of the coherence relations in our RST annotations are explicitly signalled by a lexical connective. When adding the fact that connectives are often ambiguous, one has to conclude that prospects for an automatic analysis of rhetorical structure using shallow methods (i.e., relying largely on connectives) are not bright, but see Sections 3.2 and 3.3 below.
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not.
0
We train linear mixture models for conditional phrase pair probabilities over IN and OUT so as to maximize the likelihood of an empirical joint phrase-pair distribution extracted from a development set.
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRSs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
In this section for the purposes of showing that polynomial time recognition is possible, we make the additional restriction that the contribution of a derived structure to the input string can be specified by a bounded sequence of substrings of the input.
These clusters are computed using an SVD variant without relying on transitional structure.
0
This model admits a simple Gibbs sampling algorithm where the number of latent variables is proportional to the number of word types, rather than the size of a corpus as for a standard HMM sampler (Johnson, 2007).
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
Space- or punctuation-delimited
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
Hence, the terminal symbols appearing in the structures that are composed are not lost (though a constant number of new symbols may be introduced).
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRSs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
We are very grateful to Tony Kroch, Michael Palis, Sunil Shende, and Mark Steedman for valuable discussions.
This paper presents a maximum entropy-based named entity recognizer (NER).
0
For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.
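An illustrative implementation of this lookup; `doc_initcaps_sequences` is a hypothetical precomputed set of all initCaps token sequences seen in the document, not a structure from the paper:

```python
def longest_initcaps_substring(seq, doc_initcaps_sequences):
    """For a capitalized-word sequence `seq` (list of tokens), return its
    longest contiguous sub-sequence that also occurs elsewhere in the same
    document as an all-initCaps sequence, or None."""
    n = len(seq)
    for length in range(n, 0, -1):              # try longest first
        for start in range(n - length + 1):
            cand = tuple(seq[start:start + length])
            if cand in doc_initcaps_sequences:
                return cand
    return None
```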
This paper talks about Unsupervised Models for Named Entity Classification.
0
In principle a feature could be an arbitrary predicate of the (spelling, context) pair; for reasons that will become clear, features are limited to querying either the spelling or context alone.
While many systems had similar performance, the results offered interesting insights, especially about the relative performance of statistical and rule-based systems.
0
Most of these groups follow a phrase-based statistical approach to machine translation.
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers.
0
However, since we want to preserve as much of the original structure as possible, we are interested in finding a transformation that involves a minimal number of lifts.
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations for entities already mentioned previously.
0
Mikheev et al.
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
0
Sima’an et al. (2001) presented parsing results for a DOP tree-gram model using a small data set (500 sentences) and semiautomatic morphological disambiguation.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
Still, for a good number of sentences, we do have this direct comparison, which allows us to apply the sign test, as described in Section 2.2.
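For reference, a self-contained two-sided sign test of the kind such pairwise comparisons use; this is a generic textbook formulation, not the paper's exact procedure:

```python
from math import comb

def sign_test_pvalue(k, n):
    """Two-sided sign test: of n sentence-level comparisons (ties dropped),
    one system wins k times. Returns the probability of a result at least
    this lopsided under the null hypothesis of equal systems."""
    k = max(k, n - k)
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

print(sign_test_pvalue(70, 100))  # tiny p-value: 70/100 wins is a clear difference
```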
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.
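A minimal sketch of this binary feature; the word lists below are hypothetical stand-ins for the actual resources:

```python
common_words = {"the", "a", "on"}                # illustrative frequent-word list
person_first_names = {"barry", "alice", "john"}  # illustrative name list

def person_first_name_feature(token):
    # Fire only for tokens that are not common words but are known first names.
    t = token.lower()
    return 1 if t not in common_words and t in person_first_names else 0

assert person_first_name_feature("Barry") == 1
```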
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
0
Finally, note that while most feature concepts are lexicalized, others, such as the suffix concept, are not.
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.
0
Because we don’t have a separate development set, we used the training set to select among them and found 0.2 to work slightly better than 0.1 and 0.3.
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
Figure 4 shows the seven general knowledge sources (KSs) that represent features commonly used for coreference resolution.
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique, their model outperformed previous pipelined, integrated, or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
0
Table 2 compares the performance of our system on the setup of Cohen and Smith (2007) to the best results reported by them for the same tasks.
It is probably the first analysis of Arabic parsing of this kind.
0
Historically, Arabic grammar has identified two sentence types: those that begin with a nominal...
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
Assume that the two classifiers are "rote learners": that is, f1 and f2 are defined through look-up tables that list a label for each member of X1 or X2.
This paper presents a maximum entropy-based named entity recognizer (NER).
0
However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations for entities already mentioned previously.
0
, Sunday, then the feature DayOfTheWeek is set to 1.
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
Instead of the names of elementary trees of a TAG, the nodes are labeled by a sequence of names of trees in an elementary tree set.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
They focused on phrases which connect two Named Entities, and proceed in two stages.
0
However, there are phrases which express the same meanings even though they do not share the same keyword.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
To differentiate between the coordinating and discourse separator functions of conjunctions (Table 3), we mark each CC with the label of its right sister (splitCC).
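A sketch of what such a splitCC relabeling could look like on trees encoded as nested lists; the encoding and function are illustrative, not the authors' implementation:

```python
def split_cc(tree):
    """Relabel each CC node with the label of its right sister,
    e.g. CC -> CC-NP before a coordinated NP. Trees are nested lists
    [label, child1, child2, ...]; leaves are strings."""
    if isinstance(tree, str):
        return tree
    label, children = tree[0], [split_cc(c) for c in tree[1:]]
    for i, c in enumerate(children[:-1]):
        if isinstance(c, list) and c[0] == "CC":
            right = children[i + 1]
            sister = right[0] if isinstance(right, list) else right
            c[0] = "CC-" + sister
    return [label] + children

print(split_cc(["NP", ["NP", "w1"], ["CC", "wa"], ["NP", "w2"]]))
# -> ['NP', ['NP', 'w1'], ['CC-NP', 'wa'], ['NP', 'w2']]
```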
This paper presents a maximum entropy-based named entity recognizer (NER).
0
Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
The Berkeley parser gives state-of-the-art performance for all metrics.
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRSs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
Trees are composed using an operation called adjoining, which is defined as follows.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
The role that each noun phrase plays in the kidnapping event is key to distinguishing these cases.
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
This result suggests the benefit of using the automatic discovery method.
The instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
0
Within this framework, we use features intended to capture degree of generality, including the output from an SVM classifier that uses the intersection between IN and OUT as positive examples.
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
0
Also, “agree” in the CC-domain is not a desirable keyword.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
The relevant variables are the set of token-level tags that appear before and after each instance of the ith word type; we denote these context pairs with the set {(tb, ta)} and they are contained in t(−i).
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
As the reviewer also points out, this is a problem that is shared by, e.g., probabilistic context-free parsers, which tend to pick trees with fewer nodes.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
In this way we restrict the parameterization of a ...

Table 1: Upper bound on tagging accuracy assuming each word type is assigned to its majority POS tag.

Language:      English  Danish  Dutch  German  Spanish  Swedish  Portuguese
Original case: 94.6     96.3    96.6   95.5    95.4     93.3     95.6
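The upper bound itself is straightforward to compute from a tagged corpus; a minimal sketch, assuming the input is an iterable of (word, gold_tag) pairs:

```python
from collections import Counter, defaultdict

def majority_tag_upper_bound(tagged_tokens):
    """Token-level accuracy when every word type is assigned its most
    frequent gold tag; the ceiling for any one-tag-per-type tagger."""
    counts = defaultdict(Counter)
    for word, tag in tagged_tokens:
        counts[word][tag] += 1
    correct = sum(c.most_common(1)[0][1] for c in counts.values())
    total = sum(sum(c.values()) for c in counts.values())
    return correct / total
```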
The AdaBoost algorithm was developed for supervised learning.
0
Each learner is free to pick the labels for these instances.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
[Figure 3: Semantic Caseframe Expectations.
  Terrorism: <agent> assassinated -> group, human; investigation into <NP> -> event; exploded outside <NP> -> building.
  Natural Disasters: <agent> investigating cause -> group, human; survivor of <NP> -> event, natphenom; hit with <NP> -> attribute, natphenom.
Figure 2: Lexical Caseframe Expectations.]
To illustrate how lexical expectations are used, suppose we want to determine whether noun phrase X is the antecedent for noun phrase Y. If they are coreferent, then X and Y should be substitutable for one another in the story. Consider these sentences: (S1) Fred was killed by a masked man with a revolver.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
They have made use of local and global features to deal with the instances of the same token in a document.
0
This paper presents a maximum entropy-based named entity recognizer (NER).
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
There are two weaknesses in Chang et al.'s model, which we improve upon.
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective tree banks is often neglected because of the relative scarcity of problematic constructions.
0
Although the best published result for the Collins parser is 80% UAS (Collins, 1999), this parser reaches 82% when trained on the entire training data set, and an adapted version of Charniak's parser (Charniak, 2000) performs at 84% (Jan Hajič, pers. comm.).
They plan to extend instance-weighting to other standard SMT components and to capture the degree of generality of phrase pairs.
0
Finally, we note that Jiang’s instance-weighting framework is broader than we have presented above, encompassing among other possibilities the use of unlabelled IN data, which is applicable to SMT settings where source-only IN corpora are available.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
We asked six native speakers, three from Taiwan (T1-T3) and three from the Mainland (M1-M3), to segment the corpus.
They found replacing it with a ranked evaluation to be more suitable.
0
Sentences and systems were randomly selected and randomly shuffled for presentation.
Because many systems performed similarly, the author was not able to draw strong conclusions on the question of correlation between manual and automatic evaluation metrics.
0
The predominant focus on building systems that translate into English has so far ignored the difficult issues of generating rich morphology, which may not be determined solely by local context.
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
0
At the very least, we are creating a data resource (the manual annotations) that may be the basis of future research in evaluation metrics.
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
0
There are still some open issues to be resolved with the format, but it represents a first step.
BABAR achieved good performance in both the terrorism and natural disaster domains, and the contextual-role knowledge proved particularly helpful for pronouns.
0
The belief value that would have been assigned to the intersection of these sets is .60 * .70 = .42, but this belief has nowhere to go because the null set is not permissible in the model. So this probability mass (.42) has to be redistributed.
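A toy version of Dempster's rule showing exactly this redistribution; the two mass assignments mirror the .60/.70 example, and the sets and values are illustrative only:

```python
def combine(m1, m2):
    """Dempster's rule for two belief assignments: mass that would land on
    the empty set is removed and the remaining mass is renormalized.
    m1, m2: dict frozenset -> mass."""
    out, conflict = {}, 0.0
    for s1, v1 in m1.items():
        for s2, v2 in m2.items():
            inter = s1 & s2
            if inter:
                out[inter] = out.get(inter, 0.0) + v1 * v2
            else:
                conflict += v1 * v2          # mass on the null set
    return {s: v / (1.0 - conflict) for s, v in out.items()}

m1 = {frozenset({"A"}): 0.6, frozenset({"A", "B"}): 0.4}
m2 = {frozenset({"B"}): 0.7, frozenset({"A", "B"}): 0.3}
print(combine(m1, m2))   # the .60 * .70 = .42 conflict mass is redistributed
```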
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
The PROBING data structure is a rather straightforward application of these hash tables to store Ngram language models.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
In this work we extended the AdaBoost.MH (Schapire and Singer 98) algorithm to the cotraining case.
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
The type-level tag assignments T generate features associated with word types W . The tag assignments constrain the HMM emission parameters θ.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
The probabilities are incorporated into the DempsterShafer model using Equation 1.
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
0
Surprisingly, the non-parametric switching technique also exhibited robust behaviour in this situation.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
Nonetheless, parse quality is much lower in the joint model because a lattice is effectively a long sentence.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement.
0
gao1bu4-gao1xing4 (hap-not-happy) 'happy?'
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
Evaluation of Morphological Analysis.
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data but had translations into a resource-rich language.
0
As expected, the vanilla HMM trained with EM performs the worst.
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
Most languages that use Roman, Greek, Cyrillic, Armenian, or Semitic scripts, and many that use Indian-derived scripts, mark orthographic word boundaries; however, languages written in a Chinese-derived writing system, including Chinese and Japanese, as well as Indian-derived writing systems of languages like Thai, do not delimit orthographic words. Put another way, written Chinese simply lacks orthographic words.
The AdaBoost algorithm was developed for supervised learning.
0
(Blum and Mitchell 98) give an example that illustrates just how powerful the second constraint can be.
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.
0
There has also been some work on adapting the word alignment model prior to phrase extraction (Civera and Juan, 2007; Wu et al., 2005), and on dynamically choosing a dev set (Xu et al., 2007).
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
These are written to the state s(wn1) and returned so that they can be used for the following query.
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
In practice, this sparsity constraint is difficult to incorporate in a traditional POS induction system (Me´rialdo, 1994; Johnson, 2007; Gao and Johnson, 2008; Grac¸a et al., 2009; Berg-Kirkpatrick et al., 2010).
The second algorithm builds on a boosting algorithm called AdaBoost.
0
(e.g., N.Y. would contribute this feature, IBM would not). nonalpha=x appears if the spelling contains any characters other than upper or lower case letters.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
Linear probing hash tables must have more buckets than entries, or else an empty bucket will never be found.
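A toy linear-probing table that makes the constraint concrete; this is illustrative only, not KenLM's implementation:

```python
class LinearProbing:
    def __init__(self, capacity):
        # Capacity must exceed the number of entries, or an unsuccessful
        # lookup would never hit an empty bucket and would loop forever.
        self.buckets = [None] * capacity

    def insert(self, key, value):
        i = hash(key) % len(self.buckets)
        while self.buckets[i] is not None and self.buckets[i][0] != key:
            i = (i + 1) % len(self.buckets)   # probe the next bucket
        self.buckets[i] = (key, value)

    def get(self, key):
        i = hash(key) % len(self.buckets)
        while self.buckets[i] is not None:
            if self.buckets[i][0] == key:
                return self.buckets[i][1]
            i = (i + 1) % len(self.buckets)
        return None                            # empty bucket: key is absent
```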