source_text: string (length 27 to 368)
label: int64 (0 or 1)
target_text: string (length 1 to 5.38k)
The approach has been successfully tested on the 8 000-word Verbmobil task.
0
In German, the verb group usually consists of a left and a right verbal brace, whereas in English the words of the verb group usually form a sequence of consecutive words.
This paper discusses Exploiting Diversity in Natural Language Processing: Combining Parsers.
0
This is equivalent to the assumption used in probability estimation for naïve Bayes classifiers, namely that the attribute values are conditionally independent when the target value is given.
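For reference, the assumption just described is the textbook naive Bayes factorization; the following snippet states it generically and is not a formula taken from the cited work.

```latex
% Conditional independence assumption of naive Bayes:
% attribute values a_1, ..., a_n are independent given the target value v.
P(a_1, \ldots, a_n \mid v) \;=\; \prod_{i=1}^{n} P(a_i \mid v)
```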
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
0
Corpus statistics (German / English). Training: Sentences 58,073; Words 519,523 / 549,921; Words* 418,979 / 453,632; Vocabulary size 7,939 / 4,648; Singletons 3,454 / 1,699. Test-147: Sentences 147; Words 1,968 / 2,173; Perplexity – / 26.5. Table 4: Multi-reference word error rate (mWER) and subjective sentence error rate (SSER) for three different search procedures.
While many systems had similar performance, the results offered interesting insights, especially about the relative performance of statistical and rule-based systems.
0
Figure 1 provides some statistics about this corpus.
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.
0
Note that because we extracted only high-confidence alignments, many foreign vertices will not be connected to any English vertices.
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
Section 3 describes the complete coreference resolution model, which uses the contextual role knowledge as well as more traditional coreference features.
All the texts were annotated by two people.
0
Finally, the focus/background partition is annotated, together with the focus question that elicits the corresponding answer.
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
0
Hence, the different averages of manual scores for the different language pairs reflect the behaviour of the judges, not the quality of the systems on different language pairs.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
These enable much larger models in memory, compensating for lost accuracy.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
Word frequencies are estimated by a re-estimation procedure that involves applying the segmentation algorithm presented here to a corpus of 20 million words. (Footnote 8: Our training corpus was drawn from a larger corpus of mixed-genre text consisting mostly of ...)
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
They also describe an application of co-training to classifying web pages (the two feature sets are the words on the page, and other pages pointing to the page).
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined both by how similar to it they appear to be and by whether they belong to general language or not.
0
The 2nd block contains the IR system, which was tuned by selecting text in multiples of the size of the EMEA training corpus, according to dev set performance.
In this paper, the authors argue that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
0
Statistics for all data sets are shown in Table 2.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing the use of a caseframe network for anaphora resolution and of information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
This work was supported in part by the National Science Foundation under grant IRI9704240.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
This style of naming is never required and seems to be losing currency.
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined both by how similar to it they appear to be and by whether they belong to general language or not.
0
— I would also like to point out to commissioner Liikanen that it is not easy to take a matter to a national court.
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
This limitation is the obstacle to making the technology “open domain”.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
One conclusion drawn from this annotation effort was that for humans and machines alike, ... (Footnote 2: www.sfs.nphil.unituebingen.de/Elwis/stts/stts.html; Footnote 3: www.coli.unisb.de/sfb378/negra-corpus/annotate.)
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
Note that hanzi that are not grouped into dictionary words (and are not identified as single-hanzi words), or into one of the other categories of words discussed in this paper, are left unattached and tagged as unknown words.
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
Many human evaluation metrics have been proposed.
A beam search concept is applied as in speech recognition.
0
The search starts in the hypothesis (I, {}, 0).
Human judges also pointed out difficulties with the evaluation of long sentences.
0
To check for this, we do pairwise bootstrap resampling: Again, we repeatedly sample sets of sentences, this time from both systems, and compare their BLEU scores on these sets.
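A minimal sketch of the pairwise bootstrap resampling just described, assuming a placeholder corpus-level metric score_fn (e.g. BLEU); the function name and sample count are illustrative, not the evaluation code used in the shared task.

```python
import random

def paired_bootstrap(hyps_a, hyps_b, refs, score_fn, n_samples=1000, seed=0):
    """Repeatedly sample sentence sets (with replacement) from both systems'
    outputs and compare their corpus scores on each sample; returns the
    fraction of samples on which system A outscores system B."""
    assert len(hyps_a) == len(hyps_b) == len(refs)
    rng = random.Random(seed)
    n = len(refs)
    wins_a = 0
    for _ in range(n_samples):
        idx = [rng.randrange(n) for _ in range(n)]
        sample_a = [hyps_a[i] for i in idx]
        sample_b = [hyps_b[i] for i in idx]
        sample_r = [refs[i] for i in idx]
        if score_fn(sample_a, sample_r) > score_fn(sample_b, sample_r):
            wins_a += 1
    return wins_a / n_samples
```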
They showed that it was useful to abstract away from the details of the formalism and examine the nature of their derivation process as reflected by properties of their trees, finding that several of the formalisms considered can be seen as closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
The importance of this property becomes clear in contrasting theories underlying GPSG (Gazdar, Klein, Pullum, and Sag, 1985), and GB (as described by Berwick, 1984) with those underlying LFG and FUG.
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
In our coreference resolver, we define θ to be the set of all candidate antecedents for an anaphor.
Starting from a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reorderings between source and target language in order to achieve an efficient search algorithm.
0
The sentence length probability p(J|I) is omitted without any loss in performance.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
A related point is that mutual information is helpful in augmenting existing electronic dictionaries, (cf.
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
Giving a recognition algorithm for LCFRL's involves describing the substrings of the input that are spanned by the structures derived by the LCFRS's and how the composition operation combines these substrings.
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.
0
It is difficult to directly compare the Matsoukas et al results with ours, since our out-of-domain corpus is homogeneous; given heterogeneous training data, however, it would be trivial to include Matsoukas-style identity features in our instance-weighting model.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
Manual and Automatic Evaluation of Machine Translation between European Languages
The model draws on various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
Note that Wang, Li, and Chang's.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
In particular, it may not be possible to learn functions f1(x1,i), f2(x2,i) for i = m + 1 ... n: either because there is some noise in the data, or because it is just not realistic to expect to learn perfect classifiers given the features used for representation.
This paper presents a maximum entropy-based named entity recognizer (NER).
0
, December, then the feature MonthName is set to 1.
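As a toy illustration of such a binary feature, a feature function might look like the sketch below; the month list and function name are assumptions for illustration, not taken from the paper.

```python
# Hypothetical binary feature: fires when the token is a month name.
MONTH_NAMES = {
    "January", "February", "March", "April", "May", "June",
    "July", "August", "September", "October", "November", "December",
}

def month_name_feature(token):
    """Return 1 if the token is a month name, else 0."""
    return 1 if token in MONTH_NAMES else 0
```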
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be.
0
Within this framework, we use features intended to capture degree of generality, including the output from an SVM classifier that uses the intersection between IN and OUT as positive examples.
Two general approaches are presented and two combination techniques are described for each approach.
0
Recently, combination techniques have been investigated for part of speech tagging with positive results (van Halteren et al., 1998; Brill and Wu, 1998).
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
It's not clear how to apply these methods in the unsupervised case, as they required cross-validation techniques: for this reason we use the simpler smoothing method shown here. The input to the unsupervised algorithm is an initial, "seed" set of rules.
The model draws on various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
Finally, this effort is part of a much larger program that we are undertaking to develop stochastic finite-state methods for text analysis with applications to TTS and other areas; in the final section of this paper we will briefly discuss this larger program so as to situate the work discussed here in a broader context.
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
0
Presenting the output of several systems allows the human judge to make more informed judgements, contrasting the quality of the different systems.
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
We also performed experiments to evaluate the impact of each type of contextual role knowledge separately.
We chose one of four labels for each example: location, person, organization, or noise, where the noise category was used for items that were outside the three categories.
0
One implementation issue deserves some elaboration.
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
Both BBN and NYU have tagged their own data to supplement the official training data.
We chose one of four labels for each example: location, person, organization, or noise, where the noise category was used for items that were outside the three categories.
0
At each iteration the algorithm increases the number of rules, while maintaining a high level of agreement between the spelling and contextual decision lists.
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
For terrorism, BABAR generated 5,078 resolutions: 2,386 from lexical seeding and 2,692 from syntactic seeding.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing the use of a caseframe network for anaphora resolution and of information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
In future work, we plan to follow up on this approach and investigate other ways that contextual role knowledge can be used.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
(Footnote 11) taTweel (-) is an elongation character used in Arabic script to justify text.
Explanations offered for this phenomenon are the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order, and how these factors affect syntactic disambiguation.
0
Annotation consistency is important in any supervised learning task.
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined both by how similar to it they appear to be and by whether they belong to general language or not.
0
Second, rather than relying on a division of the corpus into manually-assigned portions, we use features intended to capture the usefulness of each phrase pair.
The corpus was annotated with different linguistic information.
0
The government has to make a decision, and do it quickly.
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined both by how similar to it they appear to be and by whether they belong to general language or not.
0
We introduce several new ideas.
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.
0
We hope that this will allow practitioners to apply our approach directly to languages for which no resources are available.
Explanations offered for this phenomenon are the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order, and how these factors affect syntactic disambiguation.
0
Table 8: Per-category performance of the Berkeley parser on sentence lengths ≤ 70 (dev set, gold segmentation); panels are (a) major phrasal categories, (b) major POS categories, (c) the ten lowest-scoring (Collins, 2003)-style dependencies occurring more than 700 times. (a) Major phrasal categories (label, # gold, F1): ADJP 1216, 59.45; SBAR 2918, 69.81; FRAG 254, 72.87; VP 5507, 78.83; S 6579, 78.91; PP 7516, 80.93; NP 34025, 84.95; ADVP 1093, 90.64; WHNP 787, 96.00. (c) Lowest-scoring dependencies (parent, head, modifier, direction, # gold, F1): NP NP TAG R, 946, 0.54; S S S R, 708, 0.57; NP NP ADJP R, 803, 0.64; NP NP NP R, 2907, 0.66; NP NP SBAR R, 1035, 0.67; NP NP PP R, 2713, 0.67; VP TAG PP R, 3230, 0.80; NP NP TAG L, 805, 0.85; VP TAG SBAR R, 772, 0.86; S VP NP L, 961, 0.87.
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
9 61.0 44.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
5.2 Setup.
Combining multiple highly-accurate independent parsers yields promising results.
0
Our original hope in combining these parsers is that their errors are independently distributed.
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
In the graphs, system scores are indicated by a point, the confidence intervals by shaded areas around the point.
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
Figure 1 (Caseframe Network Examples) shows examples of caseframes that co-occur in resolutions, both in the terrorism and natural disaster domains; the examples include caseframes such as "murder of <NP>", "killed <patient>", "<agent> damaged", "was injured in <NP>", "<agent> reported", "<agent> added", "<agent> occurred", "cause of <NP>", "<agent> stated", "<agent> wreaked", "<agent> crossed", "perpetrated <patient>", "condemned <patient>", "driver of <NP>", and "<agent> carrying".
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
We evaluate the time and memory consumption of each data structure by computing perplexity on 4 billion tokens from the English Gigaword corpus (Parker et al., 2009).
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
0
Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
0
The BLEU metric, as all currently proposed automatic metrics, is occasionally suspected to be biased towards statistical systems, especially the phrase-based systems currently in use.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
This method, one instance of which we term the "greedy algorithm" in our evaluation of our own system in Section 5, involves starting at the beginning (or end) of the sentence, finding the longest word starting (ending) at that point, and then repeating the process starting at the next (previous) hanzi until the end (beginning) of the sentence is reached.
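A minimal sketch of this greedy (maximum-match) segmentation, assuming a plain set-based dictionary and a cap on candidate word length; both are illustrative simplifications, not the authors' implementation.

```python
def greedy_segment(sentence, dictionary, max_word_len=6):
    """Left-to-right maximum match: at each position take the longest
    dictionary word starting there, falling back to a single character."""
    words = []
    i = 0
    while i < len(sentence):
        for length in range(min(max_word_len, len(sentence) - i), 0, -1):
            candidate = sentence[i:i + length]
            if length == 1 or candidate in dictionary:
                words.append(candidate)
                i += length
                break
    return words
```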
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform.
0
This is the first time that we organized a large-scale manual evaluation.
The model draws on various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
Obviously, the presence of a title after a potential name N increases the probability that N is in fact a name.
The use of global features has shown excellent results on MUC-6 and MUC-7 test data.
0
The sources of our dictionaries are listed in Table 2.
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
This fully generative model caters for real interaction between the syntactic and morphological levels as a part of a single coherent process.
Here we present two algorithms.
0
The pseudo-code describing the algorithm is given in Fig.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
Participants and other volunteers contributed about 180 hours of labor in the manual evaluation.
The texts were annotated with the RSTtool.
0
This fact annoyed especially his dog...).
They showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the inter-human agreement.
0
Yet, some hanzi are far more probable in women's names than they are in men's names, and there is a similar list of male-oriented hanzi: mixing hanzi from these two lists is generally less likely than would be predicted by the independence model.
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform.
0
Since the inclusion of out-of-domain test data was a very late decision, the participants were not informed of this.
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
The contextual role knowledge had the greatest impact on pronouns: +13% recall for terrorism and +15% recall for disasters, with a +1% precision gain in terrorism and a small precision drop of -3% in disasters.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
It has been shown for English (Wang and Hirschberg 1992; Hirschberg 1993; Sproat 1994, inter alia) that grammatical part of speech provides useful information for these tasks.
In this paper, the authors argue that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
0
The corresponding token words w are drawn conditioned on t and θ. Our full generative model is given by: P(φ, θ | T, α, β) = ∏_{t=1..K} P(φ_t | α) P(θ_t | T, α). The transition distribution φ_t for each tag t is drawn according to DIRICHLET(α, K), where α is the shared transition and emission distribution hyperparameter.
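A toy sketch of the first step of this generative story, drawing one transition distribution per tag from a symmetric Dirichlet; the function name and the use of NumPy are assumptions for illustration, not the authors' inference code.

```python
import numpy as np

def sample_transition_distributions(K, alpha, seed=0):
    """Draw phi_t ~ Dirichlet(alpha, K) for each of K tags."""
    rng = np.random.default_rng(seed)
    return [rng.dirichlet(np.full(K, alpha)) for _ in range(K)]
```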
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
Proper-Name Identification.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing the use of a caseframe network for anaphora resolution and of information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
Proper names that match are resolved with each other.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
For example, hanzi containing the INSECT radical tend to denote insects and other crawling animals; examples include wa1 'frog,' feng1 'wasp,' and she2 'snake.'
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
Time for Moses itself to load, including loading the language model and phrase table, is included.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
58 95.
BABAR's performance in both the terrorism and natural disaster domains, and the contextual-role knowledge applied to pronouns, have shown successful results.
0
In recent years, coreference resolvers have been evaluated as part of MUC6 and MUC7 (MUC7 Proceedings, 1998).
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
Since the segmentation corresponds to the sequence of words that has the lowest summed unigram cost, the segmenter under discussion here is a zeroth-order model.
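A minimal sketch of such a zeroth-order (summed unigram cost) segmenter, using dynamic programming over the sentence; the unigram_cost function and length cap are placeholders, and single characters are assumed to always have a finite cost so a path exists.

```python
def min_cost_segmentation(sentence, unigram_cost, max_word_len=6):
    """Return the word sequence with the lowest summed unigram cost.
    unigram_cost(word) returns a cost (e.g. negative log probability)
    or None if the candidate word is not allowed."""
    n = len(sentence)
    best = [float("inf")] * (n + 1)
    back = [0] * (n + 1)
    best[0] = 0.0
    for end in range(1, n + 1):
        for start in range(max(0, end - max_word_len), end):
            cost = unigram_cost(sentence[start:end])
            if cost is not None and best[start] + cost < best[end]:
                best[end] = best[start] + cost
                back[end] = start
    # Recover the best segmentation by following back-pointers.
    words, i = [], n
    while i > 0:
        words.append(sentence[back[i]:i])
        i = back[i]
    return list(reversed(words))
```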
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
0
We introduce several new ideas.
This assumption, however, is not inherent to type-based tagging models.
0
Other approaches encode sparsity as a soft constraint.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
27 80.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
This revealed interesting clues about the properties of automatic and manual scoring.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
92 76.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
It is well known that language pairs such as English–German pose more challenges to machine translation systems than language pairs such as French–English.
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
0
The experimental tests are carried out on the Verbmobil task (German–English, 8000-word vocabulary), which is a limited-domain spoken-language task.
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages.
0
This can be seen as a rough approximation of Yarowsky and Ngai (2001).
In this paper the authors evaluate machine translation performance for six European language pairs that participated in a shared task: translating French, German, and Spanish texts to English and back.
0
In the graphs, system scores are indicated by a point, the confidence intervals by shaded areas around the point.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
3.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
Since we could not bias the subjects towards a particular segmentation and did not presume linguistic sophistication on their part, the instructions were simple: subjects were to mark all places they might plausibly pause if they were reading the text aloud.
Other kinds of productive word classes, such as company names, abbreviations, and place names, can easily be handled given appropriate models.
0
In this section we present a partial evaluation of the current system, in three parts.
These clusters are computed using an SVD variant without relying on transitional structure.
0
Another thread of relevant research has explored the use of features in unsupervised POS induction (Smith and Eisner, 2005; Berg-Kirkpatrick et al., 2010; Hasan and Ng, 2009).
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
Syntactic decoders, such as cdec (Dyer et al., 2010), build state from null context then store it in the hypergraph node for later extension.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing the use of a caseframe network for anaphora resolution and of information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
In this paper, Section 2 begins by explaining how contextual role knowledge is represented and learned.
The features were weighted within a logistic model to give an overall weight that was applied to the phrase pair, yielding MAP-smoothed relative-frequency estimates, which were combined linearly with relative-frequency estimates from an in-domain phrase table.
0
To approximate these baselines, we implemented a very simple sentence selection algorithm in which parallel sentence pairs from OUT are ranked by the perplexity of their target half according to the IN language model.
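A rough sketch of this selection baseline, ranking out-of-domain sentence pairs by the perplexity of their target half under an in-domain language model; in_lm_logprob is a placeholder for any LM scoring function and tokenization is simplified.

```python
import math

def rank_by_target_perplexity(sentence_pairs, in_lm_logprob):
    """Rank (source, target) pairs by target-side perplexity under the
    in-domain LM; lower perplexity means more in-domain-like."""
    scored = []
    for src, tgt in sentence_pairs:
        tokens = tgt.split()
        ppl = math.exp(-in_lm_logprob(tokens) / max(len(tokens), 1))
        scored.append((ppl, src, tgt))
    scored.sort(key=lambda item: item[0])
    return scored
```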
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
Each knowledge source then assigns a probability estimate to each candidate, which represents its belief that the candidate is the antecedent for the anaphor.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
2.
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
0
This measure has the advantage of being completely automatic.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
Each trie node contains a sorted array of entries and they use binary search.
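A minimal sketch of a trie node that keeps its extensions in a sorted array and finds them by binary search; the field names and Python representation are illustrative, not the toolkit's actual memory layout.

```python
import bisect

class TrieNode:
    """Trie node with a sorted array of child word IDs, searched with
    binary search rather than hashing."""
    def __init__(self, prob=0.0, backoff=0.0):
        self.keys = []       # sorted word IDs extending this context
        self.children = []   # child nodes, parallel to self.keys
        self.prob = prob
        self.backoff = backoff

    def find(self, word_id):
        i = bisect.bisect_left(self.keys, word_id)
        if i < len(self.keys) and self.keys[i] == word_id:
            return self.children[i]
        return None

    def insert(self, word_id, prob=0.0, backoff=0.0):
        i = bisect.bisect_left(self.keys, word_id)
        if i < len(self.keys) and self.keys[i] == word_id:
            return self.children[i]
        node = TrieNode(prob, backoff)
        self.keys.insert(i, word_id)
        self.children.insert(i, node)
        return node
```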
The approach has been successfully tested on the 8 000-word Verbmobil task.
0
For the error counts, a range from 0.0 to 1.0 is used.
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
0
The set of candidate constituents comes from the union of all the constituents suggested by the member parsers.