source_text
stringlengths
27
368
label
int64
0
1
target_text
stringlengths
1
5.38k
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
07 80.
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
0
In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995) Machine learning systems in MUC6 and MUC 7 achieved accuracy comparable to rule-based systems on the named entity task.
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
It was also proposed to allow annotators to skip sentences that they are unable to judge.
This paper talks about Unsupervised Models for Named Entity Classification.
0
To measure the contribution of each modification, a third, intermediate algorithm, Yarowsky-cautious was also tested.
This paper talks about Pseudo-Projective Dependency Parsing.
0
In section 2 we introduce the graph transformation techniques used to projectivize and deprojectivize dependency graphs, and in section 3 we describe the data-driven dependency parser that is the core of our system.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
We implement two data structures: PROBING, designed for speed, and TRIE, optimized for memory.
This topic has been getting more attention, driven by the needs of various NLP applications.
0
Step 4.
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
This assumption, however, is not inherent to type-based tagging models.
A beam search concept is applied as in speech recognition.
0
Can we do . QmS: Yes, wonderful.
Explanations for this phenomenon are relative informativeness of lexicalization, insensitivity to morphology and the effect of variable word order and these factors lead to syntactic disambiguation.
0
32 81.
Here we present two algorithms.
0
Alternatively, h can be thought of as defining a decision list of rules x y ranked by their "strength" h(x, y).
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
There any many techniques for improving language model speed and reducing memory consumption.
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
The belief value that would have been assigned to the intersection of these sets is .60*.70=.42, but this belief has nowhere to go because the null set is not permissible in the model.7 So this probability mass (.42) has to be redistributed.
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used simpler training procedure.
0
The natural baseline approach is to concatenate data from IN and OUT.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
The improvement is due to the cost of bit-level reads and avoiding reads that may fall in different virtual memory pages.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
For example, in Northern dialects (such as Beijing), a full tone (1, 2, 3, or 4) is changed to a neutral tone (0) in the final syllable of many words: Jll donglgual 'winter melon' is often pronounced donglguaO.
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
0
First, we use a novel graph-based framework for projecting syntactic information across language boundaries.
In this paper, the authors are of the opinion that the sequence models-based approaches usually treat token-level tag assignment as the primary latent variable.
0
Comparison with state-of-the-art taggers For comparison we consider two unsupervised tag- gers: the HMM with log-linear features of Berg- Kirkpatrick et al.
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement.
0
selected; and that recall is defined to be the number of correct hits divided by the number of items that should have been selected.
Most IE researchers have been creating paraphrase knowledge by hand and specific tasks.
0
Eight out of the thirteen errors in the high frequency phrases in the CC-domain are the phrases in “agree”.
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
0
971,746 sentences of New York Times text were parsed using the parser of (Collins 96).1 Word sequences that met the following criteria were then extracted as named entity examples: whose head is a singular noun (tagged NN).
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
We can obtain a letter equivalent CFL defined by a CFG in which the for each rule as above, we have the production A —* A1 Anup where tk (up) = cp.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
For even larger models, storing counts (Talbot and Osborne, 2007; Pauls and Klein, 2011; Guthrie and Hepple, 2010) is a possibility.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
92 77.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
Adequacy (rank) Fluency (rank) BLEU (rank) upc-jmc (1-7) (1-8) (1-6) lcc (1-6) (1-7) (1-4) utd (1-7) (1-6) (2-7) upc-mr (1-8) (1-6) (1-7) nrc (1-7) (2-6) (8) ntt (1-8) (2-8) (1-7) cmu (3-7) (4-8) (2-7) rali (5-8) (3-9) (3-7) systran (9) (8-9) (10) upv (10) (10) (9) Spanish-English (In Domain) Adequacy (rank) Fluency (rank) BLEU (rank) upc-jmc (1-7) (1-6) (1-5) ntt (1-7) (1-8) (1-5) lcc (1-8) (2-8) (1-4) utd (1-8) (2-7) (1-5) nrc (2-8) (1-9) (6) upc-mr (1-8) (1-6) (7) uedin-birch (1-8) (2-10) (8) rali (3-9) (3-9) (2-5) upc-jg (7-9) (6-9) (9) upv (10) (9-10) (10) German-English (In Domain) Adequacy (rank) Fluency (rank) BLEU (rank) uedin-phi (1-2) (1) (1) lcc (2-7) (2-7) (2) nrc (2-7) (2-6) (5-7) utd (3-7) (2-8) (3-4) ntt (2-9) (2-8) (3-4) upc-mr (3-9) (6-9) (8) rali (4-9) (3-9) (5-7) upc-jmc (2-9) (3-9) (5-7) systran (3-9) (3-9) (10) upv (10) (10) (9) Figure 7: Evaluation of translation to English on in-domain test data 112 English-French (In Domain) Adequacy (rank) Fluency (rank) BLEU (rank) nrc (1-5) (1-5) (1-6) upc-mr (1-4) (1-5) (1-6) upc-jmc (1-6) (1-6) (1-5) systran (2-7) (1-6) (7) utd (3-7) (3-7) (3-6) rali (1-7) (2-7) (1-6) ntt (4-7) (4-7) (1-5) English-Spanish (In Domain) Adequacy (rank) Fluency (rank) BLEU (rank) ms (1-5) (1-7) (7-8) upc-mr (1-4) (1-5) (1-4) utd (1-5) (1-6) (1-4) nrc (2-7) (1-6) (5-6) ntt (3-7) (1-6) (1-4) upc-jmc (2-7) (2-7) (1-4) rali (5-8) (6-8) (5-6) uedin-birch (6-9) (6-10) (7-8) upc-jg (9) (8-10) (9) upv (9-10) (8-10) (10) English-German (In Domain) Adequacy (rank) Fluency (rank) BLEU (rank) upc-mr (1-3) (1-5) (3-5) ntt (1-5) (2-6) (1-3) upc-jmc (1-5) (1-4) (1-3) nrc (2-4) (1-5) (4-5) rali (3-6) (2-6) (1-4) systran (5-6) (3-6) (7) upv (7) (7) (6) Figure 8: Evaluation of translation from English on in-domain test data 113 French-English (Out of Domain) Adequacy (rank) Fluency (rank) BLEU (rank) upc-jmc (1-5) (1-8) (1-4) cmu (1-8) (1-9) (4-7) systran (1-8) (1-7) (9) lcc (1-9) (1-9) (1-5) upc-mr (2-8) (1-7) (1-3) utd (1-9) (1-8) (3-7) ntt (3-9) (1-9) (3-7) nrc (3-8) (3-9) (3-7) rali (4-9) (5-9) (8) upv (10) (10) (10) Spanish-English (Out of Domain) Adequacy (rank) Fluency (rank) BLEU (rank) upc-jmc (1-2) (1-6) (1-3) uedin-birch (1-7) (1-6) (5-8) nrc (2-8) (1-8) (5-7) ntt (2-7) (2-6) (3-4) upc-mr (2-8) (1-7) (5-8) lcc (4-9) (3-7) (1-4) utd (2-9) (2-8) (1-3) upc-jg (4-9) (7-9) (9) rali (4-9) (6-9) (6-8) upv (10) (10) (10) German-English (Out of Domain) Adequacy (rank) Fluency (rank) BLEU (rank) systran (1-4) (1-4) (7-9) uedin-phi (1-6) (1-7) (1) lcc (1-6) (1-7) (2-3) utd (2-7) (2-6) (4-6) ntt (1-9) (1-7) (3-5) nrc (3-8) (2-8) (7-8) upc-mr (4-8) (6-8) (4-6) upc-jmc (4-8) (3-9) (2-5) rali (8-9) (8-9) (8-9) upv (10) (10) (10) Figure 9: Evaluation of translation to English on out-of-domain test data 114 English-French (Out of Domain) Adequacy (rank) Fluency (rank) BLEU (rank) systran (1) (1) (1) upc-jmc (2-5) (2-4) (2-6) upc-mr (2-4) (2-4) (2-6) utd (2-6) (2-6) (7) rali (4-7) (5-7) (2-6) nrc (4-7) (4-7) (2-5) ntt (4-7) (4-7) (3-6) English-Spanish (Out of Domain) Adequacy (rank) Fluency (rank) BLEU (rank) upc-mr (1-3) (1-6) (1-2) ms (1-7) (1-8) (6-7) utd (2-6) (1-7) (3-5) nrc (1-6) (2-7) (3-5) upc-jmc (2-7) (1-6) (3-5) ntt (2-7) (1-7) (1-2) rali (6-8) (4-8) (6-8) uedin-birch (6-10) (5-9) (7-8) upc-jg (8-9) (9-10) (9) upv (9) (8-9) (10) English-German (Out of Domain) Adequacy (rank) Fluency (rank) BLEU (rank) systran (1) (1-2) (1-6) upc-mr (2-3) (1-3) (1-5) upc-jmc (2-3) (3-6) (1-6) rali (4-6) (4-6) (1-6) nrc (4-6) (2-6) (2-6) ntt (4-6) (3-5) (1-6) upv (7) (7) (7) Figure 10: Evaluation of translation from English on out-of-domain test data 115 French-English In domain Out of Domain Adequacy Adequacy 0.3 0.3 • 0.2 0.2 0.1 0.1 -0.0 -0.0 -0.1 -0.1 -0.2 -0.2 -0.3 -0.3 -0.4 -0.4 -0.5 -0.5 -0.6 -0.6 -0.7 -0.7 •upv -0.8 -0.8 21 22 23 24 25 26 27 28 29 30 31 15 16 17 18 19 20 21 22 •upv •systran upcntt • rali upc-jmc • cc Fluency 0.2 0.1 -0.0 -0.1 -0.2 -0.3 -0.4 •upv -0.5 •systran •upv upc -jmc • Fluency 0.2 0.1 -0.0 -0.1 -0.2 -0.3 -0.4 -0.5 -0.6 • • • td t cc upc- • rali 21 22 23 24 25 26 27 28 29 30 31 15 16 17 18 19 20 21 22 Figure 11: Correlation between manual and automatic scores for French-English 116 Spanish-English Figure 12: Correlation between manual and automatic scores for Spanish-English -0.3 -0.4 0.2 0.1 -0.0 -0.1 -0.2 -0.3 -0.4 -0.5 •upv -0.4 •upv -0.3 In Domain •upc-jg Adequacy 0.3 0.2 0.1 -0.0 -0.1 -0.2 Out of Domain •upc-jmc •nrc •ntt Adequacy upc-jmc • • •lcc • rali • •rali -0.7 -0.5 -0.6 •upv 23 24 25 26 27 28 29 30 31 32 19 20 21 22 23 24 25 26 27 28 • •rali Fluency 0.2 0.1 -0.0 -0.1 -0.2 ntt • upc-mr •lcc •utd •upc-jg •rali Fluency 0.2 0.1 -0.0 -0.1 -0.2 -0.3 -0.4 •upc-jmc • uedin-birch -0.5 -0.5 •upv 23 24 25 26 27 28 29 30 31 32 19 20 21 22 23 24 25 26 27 28 117 In Domain Out of Domain Adequacy Adequacy German-English 15 16 17 18 19 20 21 22 23 24 25 26 27 0.4 0.3 0.2 0.1 -0.0 -0.1 -0.2 -0.3 -0.4 -0.5 -0.6 lcc • upc-jmc •systran •upv Fluency •ula •upc-mr •lcc 15 16 17 18 19 20 21 22 23 24 25 26 27 0.4 0.3 0.2 0.1 -0.0 -0.1 -0.2 -0.3 -0.4 •systran •upv •uedin-phi -jmc •rali •systran -0.3 -0.4 -0.5 -0.6 •upv 12 13 14 15 16 17 18 19 20 0.4 0.3 0.2 0.1 -0.0 -0.1 -0.2 Fluency uedin-phi • • •utd •upc-jmc •upc-mr 0.4 •rali -0.3 -0.4 -0.5 •upv 12 13 14 15 16 17 18 19 20 0.3 0.2 0.1 -0.0 -0.1 -0.2 English-French In Domain Out of Domain Adequacy Adequacy .
This paper talks about Unsupervised Models for Named Entity Classification.
0
(Specifically, the limit n starts at 5 and increases by 5 at each iteration.)
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
Therefore, we only score guess/gold pairs with identical character yields, a condition that allows us to measure parsing, tagging, and segmentation accuracy by ignoring whitespace.
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
To prevent this we "smooth" the confidence by adding a small value, e, to both W+ and W_, giving at = Plugging the value of at from Equ.
This paper conducted research in the area of automatic paraphrase discovery.
0
Keyword detection error Even if a keyword consists of a single word, there are words which are not desirable as keywords for a domain.
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
0
We used these three parsers to explore parser combination techniques.
There are clustering approaches that assign a single POS tag to each word type.
0
During training, we treat as observed the language word types W as well as the token-level corpus w. We utilize Gibbs sampling to approximate our collapsed model posterior: P (T ,t|W , w, α, β) ∝ P (T , t, W , w|α, β) 0.7 0.6 0.5 0.4 0.3 English Danish Dutch Germany Portuguese Spanish Swedish = P (T , t, W , w, ψ, θ, φ, w|α, β)dψdθdφ Note that given tag assignments T , there is only one setting of token-level tags t which has mass in the above posterior.
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective tree banks is often neglected because of the relative scarcity of problematic constructions.
0
The increase is generally higher for PDT than for DDT, which indicates a greater diversity in non-projective constructions.
These clusters are computed using an SVD variant without relying on transitional structure.
0
We report token- and type-level accuracy in Table 3 and 6 for all languages and system settings.
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.
0
The less favored reading may be selected in certain contexts, however; in the case of , for example, the nominal reading jiang4 will be selected if there is morphological information, such as a following plural affix ir, menD that renders the nominal reading likely, as we shall see in Section 4.3.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
Confidence Interval: To estimate confidence intervals for the average mean scores for the systems, we use standard significance testing.
This assumption, however, is not inherent to type-based tagging models.
0
3 54.4 33.
They have made use of local and global features to deal with the instances of same token in a document.
0
The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix- List.
It is annotated with several data: morphology, syntax, rhetorical structure, connectors, correference and informative structure.
0
Not all the layers have been produced for all the texts yet.
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement.
0
It is. based on the traditional character set rather than the simplified character set used in Singapore and Mainland China.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
Confidence Interval: Since BLEU scores are not computed on the sentence level, traditional methods to compute statistical significance and confidence intervals do not apply.
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
In Figure 4, we displayed the number of system comparisons, for which we concluded statistical significance.
Combining multiple highly-accurate independent parsers yields promising results.
0
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank, leaving only sections 22 and 23 completely untouched during the development of any of the parsers.
In this paper, the authors are of the opinion that the sequence models-based approaches usually treat token-level tag assignment as the primary latent variable.
0
The table shows that the lexicon tag frequency predicated by our full model are the closest to the gold standard.
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement.
0
0 Figure 5 An example of affixation: the plural affix.
This paper discusses the Potsdam Commentary Corpus, a corpus of german assembeled by potsdam university.
0
Here, annotation proceeds in two phases: first, the domains and the units of IS are marked as such.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
Variants of alif are inconsistently used in Arabic texts.
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
0
(Thus the domain of the dev and test corpora matches IN.)
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Ma¨rkische Allgemeine Zeitung, a German regional daily.
0
This was also inspired by the work on the Penn Discourse Tree Bank7 , which follows similar goals for English.
Most IE researchers have been creating paraphrase knowledge by hand and specific tasks.
0
In this section, we will explain the algorithm step by step with examples.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
The intuition here is that the role of a discourse marker can usually be de 9 Both the corpus split and pre-processing code are avail-.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
For displaying and querying the annoated text, we make use of the Annis Linguistic Database developed in our group for a large research effort (‘Sonderforschungsbereich’) revolving around 9 2.7 Information structure.
It is probably the first analysis of Arabic parsing of this kind.
0
67 95.
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
1
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold, 30.
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.
0
Examining the word fidanzato for the “No LP” and “With LP” models is particularly instructive.
This paper discusses the Potsdam Commentary Corpus, a corpus of german assembeled by potsdam university.
0
Having explained the various layers of annotation in PCC, we now turn to the question what all this might be good for.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
Each knowledge source then assigns a probability estimate to each candidate, which represents its belief that the candidate is the antecedent for the anaphor.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
This is not ideal for some applications, however.
NER is useful in many NLP applications such as information extraction, question answering, etc .Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
(1) CEO of McCann . . .
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
A generative model was applied (similar to naive Bayes) with the three labels as hidden vanables on unlabeled examples, and observed variables on (seed) labeled examples.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
First, it directly encodes linguistic intuitions about POS tag assignments: the model structure reflects the one-tag-per-word property, and a type- level tag prior captures the skew on tag assignments (e.g., there are fewer unique determiners than unique nouns).
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
Part-of-speech (POS) tag distributions are known to exhibit sparsity — a word is likely to take a single predominant tag in a corpus.
BABAR's performance in both domains of terrorism and natural disaster, and the contextual-role knowledge in pronouns have shown successful results.
0
Its correct antecedent is “a revolver”, which is extracted by the caseframe “killed with <NP>”.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
In our grammar, features are realized as annotations to basic category labels.
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
In addition, as the CRF and PCFG look at similar sorts of information from within two inherently different models, they are far from independent and optimizing their product is meaningless.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
The less favored reading may be selected in certain contexts, however; in the case of , for example, the nominal reading jiang4 will be selected if there is morphological information, such as a following plural affix ir, menD that renders the nominal reading likely, as we shall see in Section 4.3.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
(student+plural) 'students,' which is derived by the affixation of the plural affix f, menD to the nounxue2shengl.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
(Blum and Mitchell 98) give an example that illustrates just how powerful the second constraint can be.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
BABAR uses a named entity recognizer to identify proper names that refer to people and companies.
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
In this case, Maury Cooper is extracted.
It is annotated with several data: morphology, syntax, rhetorical structure, connectors, correference and informative structure.
0
11 www.ling.unipotsdam.de/sfb/projekt a3.php 12 This step was carried out in the course of the diploma thesis work of David Reitter (2003), which de serves special mention here.
This paper discusses the Potsdam Commentary Corpus, a corpus of german assembeled by potsdam university.
0
The idea is to have a pipeline of shallow-analysis modules (tagging, chunk- ing, discourse parsing based on connectives) and map the resulting underspecified rhetorical tree (see Section 2.4) into a knowledge base that may contain domain and world knowledge for enriching the representation, e.g., to resolve references that cannot be handled by shallow methods, or to hypothesize coherence relations.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
(8) can now be rewritten5 as which is of the same form as the function Zt used in AdaBoost.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
All commentaries have been annotated with rhetorical structure, using RSTTool4 and the definitions of discourse relations provided by Rhetorical Structure Theory (Mann, Thompson 1988).
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems.
0
This is can not be the only explanation, since the discrepancy still holds, for instance, for out-of-domain French-English, where Systran receives among the best adequacy and fluency scores, but a worse BLEU score than all but one statistical system.
It is probably the first analysis of Arabic parsing of this kind.
0
The Penn Arabic Treebank (ATB) syntactic guidelines (Maamouri et al., 2004) were purposefully borrowed without major modification from English (Marcus et al., 1993).
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform.
0
Almost all annotators reported difficulties in maintaining a consistent standard for fluency and adequacy judgements, but nevertheless most did not explicitly move towards a ranking-based evaluation.
They focused on phrases which two Named Entities, and proceed in two stages.
0
The second stage links sets which involve the same pairs of individual NEs.
It also incorporates the Good-Turing methodin estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
(For some recent corpus-based work on Chinese abbreviations, see Huang, Ahrens, and Chen [1993].)
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
0
(Bar-Haim et al., 2007; Habash and Rambow, 2005)) and probabilities are assigned to different analyses in accordance with the likelihood of their tags (e.g., “fmnh is 30% likely to be tagged NN and 70% likely to be tagged REL+VB”).
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
This paper, however, provides a comprehensive overview of the data collection effort and its current state.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
mark- ContainsVerb is especially effective for distinguishing root S nodes of equational sentences.
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
0
The differences in difficulty are better reflected in the BLEU scores than in the raw un-normalized manual judgements.
BABAR's performance in both domains of terrorism and natural disaster, and the contextual-role knowledge in pronouns have shown successful results.
0
The learned information was recycled back into the resolver to improve its performance.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
As we have noted in Section 2, the general semantic class to which a hanzi belongs is often predictable from its semantic radical.
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
0
In the graphs, system scores are indicated by a point, the confidence intervals by shaded areas around the point.
BABAR's performance in both domains of terrorism and natural disaster, and the contextual-role knowledge in pronouns have shown successful results.
0
Ideally we’d like to know the thematic role of each extracted noun phrase, but AutoSlog does not generate thematic roles.
This topic has been getting more attention, driven by the needs of various NLP applications.
0
We evaluated the results based on two metrics.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
Table 1 shows results of the benchmark.
It is annotated with several data: morphology, syntax, rhetorical structure, connectors, correference and informative structure.
0
Then, the remaining texts were annotated and cross-validated, always with discussions among the annotators.
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
0
Adding the isolated constituents to our hypothesis parse could increase our expected recall, but in the cases we investigated it would invariably hurt our precision more than we would gain on recall.
Here both parametric and non-parametric models are explored.
0
Let s = a + b.
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
The correct ambiguity resolution of the syntactic level therefore helps to resolve the morphological one, and vice versa.
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
0
We describe a novel approach for inducing unsupervised part-of-speech taggers for languages that have no labeled training data, but have translated text in a resource-rich language.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
As is standard, we use a fixed constant K for the number of tagging states.
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
0
SSER: subjective sentence error rate: For a more detailed analysis, the translations are judged by a human test person.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
If there are too many distinct states, the decoder prunes low-scoring partial hypotheses, possibly leading to a search error.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
During training, we treat as observed the language word types W as well as the token-level corpus w. We utilize Gibbs sampling to approximate our collapsed model posterior: P (T ,t|W , w, α, β) ∝ P (T , t, W , w|α, β) 0.7 0.6 0.5 0.4 0.3 English Danish Dutch Germany Portuguese Spanish Swedish = P (T , t, W , w, ψ, θ, φ, w|α, β)dψdθdφ Note that given tag assignments T , there is only one setting of token-level tags t which has mass in the above posterior.
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
Out-of-domain test data is from the Project Syndicate web site, a compendium of political commentary.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
'Malaysia.'
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an eÆcient search algorithm.
0
A typical situation is shown in Figure 1.