Dataset columns: source_text (string, length 27–368) · label (int64, values 0–1) · target_text (string, length 1–5.38k)
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Ma¨rkische Allgemeine Zeitung, a German regional daily.
0
The domains are the linguistic spans that are to receive an IS-partitioning, and the units are the (smaller) spans that can play a role as a constituent of such a partitioning.
The use of global features has yielded excellent performance on the MUC-6 and MUC-7 test data.
0
(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.
This paper describes KenLM: Faster and Smaller Language Model Queries.
0
BerkeleyLM revision 152 (Pauls and Klein, 2011) implements tries based on hash tables and sorted arrays in Java with lossy quantization.
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
Dempster-Shafer handles this by re-normalizing all the belief values with respect to only the non-null sets (this is the purpose of the denominator in Equation 1).
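For concreteness, here is a minimal Python sketch of Dempster's rule with that renormalization, assuming belief masses are stored in a dict keyed by frozensets of candidate antecedents (the names and data layout are hypothetical, not BABAR's actual implementation):

```python
def combine(m1, m2):
    """Combine two Dempster-Shafer mass functions over frozensets of
    candidates, renormalizing by the mass kept off the empty (null) set."""
    combined, conflict = {}, 0.0
    for a, x in m1.items():
        for b, y in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + x * y
            else:
                conflict += x * y  # mass that would land on the null set
    z = 1.0 - conflict  # the denominator: total mass on non-null sets
    return {s: v / z for s, v in combined.items()}

m1 = {frozenset({"c1", "c2"}): 0.6, frozenset({"c1", "c2", "c3"}): 0.4}
m2 = {frozenset({"c2"}): 0.7, frozenset({"c3"}): 0.3}
print(combine(m1, m2))  # {'c2'}: ~0.85, {'c3'}: ~0.15
```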
Bean and Riloff also used bootstrapping to extend their semantic compatibility model: they proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
We present a coreference resolver called BABAR that uses contextual role knowledge to evaluate possible antecedents for an anaphor.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
The PCC is not the result of a funded project.
These clusters are computed using an SVD variant without relying on transitional structure.
0
The hyperparameters α and β represent the concentration parameters of the token- and type-level components of the model respectively.
Because many systems performed similarly, the author was not able to draw strong conclusions about the correlation between manual and automatic evaluation metrics.
0
Pairwise comparison: We can use the same method to assess the statistical significance of one system outperforming another.
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique, their model outperformed previous pipelined, integrated, or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best previously published results.
0
REL+VB) (cf.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
We use two common techniques, hash tables and sorted arrays, describing each before the model that uses the technique.
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
Our experimental setup therefore is designed to serve two goals.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
For the experiments in this paper we made a couple of additional modifications to the CoBoost algorithm.
They have made use of local and global features to deal with instances of the same token in a document.
0
Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999).
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
0
(2009) study related but different multilingual grammar and tagger induction tasks, where it is assumed that no labeled data at all is available.
They have made use of local and global features to deal with instances of the same token in a document.
0
Table 4: Training data (no. of articles / no. of tokens).
System        MUC-6            MUC-7
MENERGI       318 / 160,000    200 / 180,000
IdentiFinder  –   / 650,000    –   / 790,000
MENE          –   / –          350 / 321,000
For MUC6, the reduction in error on test accuracy due to global features is 27%, and for MUC7, 14%.
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
BABAR uses a named entity recognizer to identify proper names that refer to people and companies.
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data but had translations into a resource-rich language.
0
In this particular case, all English vertices are labeled as nouns by the supervised tagger.
These clusters are computed using an SVD variant without relying on transitional structure.
0
Evaluation Metrics We report three metrics to evaluate tagging performance.
Most IE researchers have been creating paraphrase knowledge by hand for specific tasks.
0
Among these 32 sets, we found the following pairs of sets which have two or more links.
These clusters are computed using an SVD variant without relying on transitional structure.
0
1 1 0.
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
Annotation consistency is important in any supervised learning task.
This paper discusses the Potsdam Commentary Corpus, a German-language corpus assembled by Potsdam University.
0
All annotations are done with specific tools and in XML; each layer has its own DTD.
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
2.3 Assigning Evidence Values.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
Lossy compressed models RandLM (Talbot and Osborne, 2007) and Sheffield (Guthrie and Hepple, 2010) offer better memory consumption at the expense of CPU and accuracy.
It is probably the first analysis of Arabic parsing of this kind.
0
We weight edges using a unigram language model estimated with Good- Turing smoothing.
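As a rough illustration of a Good-Turing-smoothed unigram estimate, here is a simplified textbook sketch (not necessarily the authors' exact estimator; production implementations also smooth the count-of-counts):

```python
from collections import Counter

def good_turing_unigram(tokens):
    counts = Counter(tokens)
    nr = Counter(counts.values())  # N_r: number of types seen exactly r times
    n = len(tokens)

    def prob(word):
        r = counts.get(word, 0)
        if r == 0:
            # Good-Turing reserves N_1 / N total mass for unseen events.
            return nr.get(1, 0) / n
        # Adjusted count r* = (r + 1) * N_{r+1} / N_r; fall back to the raw
        # count when N_{r+1} is zero (a gap in the count-of-counts).
        r_star = (r + 1) * nr[r + 1] / nr[r] if nr.get(r + 1) else r
        return r_star / n

    return prob

lm = good_turing_unigram("the cat sat on the mat".split())
print(lm("the"), lm("dog"))
```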
The approach has been successfully tested on the 8 000-word Verbmobil task.
0
3.1 Word ReOrdering with Verbgroup.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
There are two possible reasons: (1) the knowledge sources are resolving different cases of anaphora, and (2) the knowledge sources provide multiple pieces of evidence in support of (or against) a candidate, thereby acting synergistically to push the Dempster-Shafer model over the belief threshold in favor of a single candidate.
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined both by how similar to it they appear to be and by whether they belong to general language or not.
0
Finally, we note that Jiang’s instance-weighting framework is broader than we have presented above, encompassing among other possibilities the use of unlabelled IN data, which is applicable to SMT settings where source-only IN corpora are available.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed jointly with parsing in a pipeline.
0
83 77.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
pronunciations of individual words; they also need to compute intonational phrase boundaries in long utterances and assign relative prominence to words in those utterances.
Two general approaches are presented and two combination techniques are described for each approach.
0
Our original hope in combining these parsers is that their errors are independently distributed.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
There are two key benefits of this model architecture.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
Clearly, explicitly modeling such a powerful constraint on tagging assignment has a potential to significantly improve the accuracy of an unsupervised part-of-speech tagger learned without a tagging dictionary.
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
In future work, we plan to follow-up on this approach and investigate other ways that contextual role knowledge can be used.
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.
0
So far the graph has been completely unlabeled.
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
0
(Specifically, the limit n starts at 5 and increases by 5 at each iteration.)
They showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the inter-human agreement.
0
Figure 3: Partial Chinese Lexicon (NC = noun; NP = proper noun).
In this paper, Das and Petrov addressed the task of inducing unsupervised part-of-speech taggers for languages that had no labeled training data but had translated text in a resource-rich language.
0
For each language under consideration, Petrov et al. (2011) provide a mapping A from the fine-grained language specific POS tags in the foreign treebank to the universal POS tags.
They showed that it was useful to abstract away from the details of the formalism and to examine the nature of the derivation process as reflected by properties of the derivation trees, finding that several of the formalisms considered can be seen as closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
On the one hand, the definition of composition in Steedman (1985), which technically permits composition of functions with an unbounded number of arguments, generates tree sets with dependent paths such as those shown in Figure 6.
A beam search concept is applied as in speech recognition.
0
diesem 3.
Here we present two algorithms.
0
(7), such as the likelihood function used in maximum-entropy problems and other generalized additive models (Lafferty 99).
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.
0
These tag distributions are used to initialize the label distributions over the English vertices in the graph.
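In outline, label propagation then spreads those seed distributions along weighted edges until the unlabeled vertices acquire tag distributions; a minimal sketch (hypothetical data layout, and without the uniform-distribution regularizer of the actual method):

```python
def propagate(neighbors, seeds, tags, iters=10):
    """neighbors: {vertex: {neighbor: edge_weight}} over all vertices;
    seeds: {vertex: {tag: prob}} for the labeled (English-side) vertices."""
    uniform = {t: 1.0 / len(tags) for t in tags}
    q = {v: dict(seeds.get(v, uniform)) for v in neighbors}
    for _ in range(iters):
        new_q = {}
        for v, nbrs in neighbors.items():
            if v in seeds:           # seed distributions stay fixed
                new_q[v] = q[v]
                continue
            scores = {t: sum(w * q[u].get(t, 0.0) for u, w in nbrs.items())
                      for t in tags}
            z = sum(scores.values()) or 1.0
            new_q[v] = {t: s / z for t, s in scores.items()}
        q = new_q
    return q
```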
In this paper, the authors observe that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
0
Since the parameter and token components will remain fixed throughout experiments, we briefly describe each.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the inter-human agreement.
0
Note that the backoff model assumes that there is a positive correlation between the frequency of a singular noun and its plural.
This paper presents a maximum entropy-based named entity recognizer (NER).
0
By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.
All the texts were annotated by two people.
0
As an indication, in our core corpus, we found an average sentence length of 15.8 words and 1.8 verbs per sentence, whereas a randomly taken sample of ten commentaries from the national papers Süddeutsche Zeitung and Frankfurter Allgemeine has 19.6 words and 2.1 verbs per sentence.
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
Segments with the same surface form but different PoS tags are treated as different lexemes, and are represented as separate arcs (e.g. the two arcs labeled neim from node 6 to 7).
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique, their model outperformed previous pipelined, integrated, or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best previously published results.
0
This fully generative model caters for real interaction between the syntactic and morphological levels as a part of a single coherent process.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
The type-level tag assignments T generate features associated with word types W. The tag assignments constrain the HMM emission parameters θ.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
Simply assigning to each word its most frequent associated tag in a corpus achieves 94.6% accuracy on the WSJ portion of the Penn Treebank.
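That baseline takes only a few lines to reproduce; a minimal sketch over a corpus of (word, tag) pairs (names are illustrative):

```python
from collections import Counter, defaultdict

def most_frequent_tag_baseline(tagged_corpus):
    """Map each word to the tag it carries most often in the corpus."""
    counts = defaultdict(Counter)
    for word, tag in tagged_corpus:
        counts[word][tag] += 1
    return {w: c.most_common(1)[0][0] for w, c in counts.items()}

tagger = most_frequent_tag_baseline(
    [("the", "DT"), ("dog", "NN"), ("the", "DT"), ("barks", "VBZ")])
print(tagger["the"])  # DT
```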
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
In previous work (Bean and Riloff, 1999), we developed an unsupervised learning algorithm that automatically recognizes definite NPs that are existential without syntactic modification because their meaning is universally understood.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
Hence, we use the bootstrap resampling method described by Koehn (2004).
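Schematically, the paired bootstrap draws test sets with replacement and counts how often one system outscores the other; a simplified sketch over per-segment scores (Koehn's method resamples the segments underlying corpus-level BLEU, so this is an approximation):

```python
import random

def paired_bootstrap(scores_a, scores_b, samples=1000):
    """Return the fraction of bootstrap resamples in which system A
    achieves a higher total score than system B on the same segments."""
    n = len(scores_a)
    wins = 0
    for _ in range(samples):
        idx = [random.randrange(n) for _ in range(n)]
        if sum(scores_a[i] for i in idx) > sum(scores_b[i] for i in idx):
            wins += 1
    return wins / samples

# e.g. paired_bootstrap(bleu_a, bleu_b) > 0.95 would indicate that A
# beats B at the 95% significance level.
```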
Here we present two algorithms.
0
We present two algorithms.
All the texts were annotated by two people.
0
information structure.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
Chang of Tsinghua University, Taiwan, R.O.C., for kindly providing us with the name corpora.
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined both by how similar to it they appear to be and by whether they belong to general language or not.
0
The dev corpus was taken from the NIST05 evaluation set, augmented with some randomly-selected material reserved from the training set.
This paper presents a maximum entropy-based named entity recognizer (NER).
0
The features we used can be divided into 2 classes: local and global.
This paper presents a maximum entropy-based named entity recognizer (NER).
0
We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first “President George Bush” then “Bush”).
In this paper, the authors observe that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
0
(2010) consistently outperforms ours on English, we obtain substantial gains across other languages.
Other kinds of productive word classes, such as company names, abbreviations, and place names, can easily be handled given appropriate models.
0
The last affix in the list is the nominal plural 们 men0.20 In the table are the (typical) classes of words to which the affix attaches, the number found in the test corpus by the method, the number correct (with a precision measure), and the number missed (with a recall measure).
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
0
We plan to explore more powerful techniques for exploiting the diversity of parsing methods.
The experimental tests are carried out on the Verbmobil task, which is a limited-domain spoken-language task.
0
Using the concept of inverted alignments, we explicitly take care of the coverage constraint by introducing a coverage set C of source sentence positions that have been already processed.
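The coverage set plays the same role as the visited-city set in the Held-Karp DP for the TSP; a toy bitmask sketch of that recursion (illustrative only: it omits the language model, lexicon scores, and reordering constraints of the real decoder):

```python
def coverage_dp(cost, start):
    """States are (coverage set C, last covered position l), with C a
    bitmask. cost[l][j]: cost of covering source position j right after
    position l; start[l]: cost of covering position l first."""
    m = len(start)
    INF = float("inf")
    dp = [[INF] * m for _ in range(1 << m)]
    for l in range(m):
        dp[1 << l][l] = start[l]
    for c in range(1 << m):
        for l in range(m):
            if dp[c][l] == INF:
                continue
            for j in range(m):
                if (c >> j) & 1:      # position j already covered
                    continue
                c2 = c | (1 << j)
                if dp[c][l] + cost[l][j] < dp[c2][j]:
                    dp[c2][j] = dp[c][l] + cost[l][j]
    return min(dp[(1 << m) - 1])      # best cost covering all positions

print(coverage_dp([[0, 1, 4], [1, 0, 2], [4, 2, 0]], [1, 2, 3]))  # 4
```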
The approach has been successfully tested on the 8 000-word Verbmobil task.
0
2.1 Inverted Alignments.
Because many systems performed similarly, they were not able to draw strong conclusions about the correlation between manual and automatic evaluation metrics.
0
The evaluation framework for the shared task is similar to the one used in last year’s shared task.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
Second, the reduced number of hidden variables and parameters dramatically speeds up learning and inference.
This topic has been getting more attention, driven by the needs of various NLP applications.
0
Also, in Information Extraction (IE), in which the system tries to extract elements of some events (e.g. date and company names of a corporate merger event), several event instances from different news articles have to be aligned even if these are expressed differently.
In this paper, Das and Petrov addressed the task of inducing unsupervised part-of-speech taggers for languages that had no labeled training data but had translated text in a resource-rich language.
0
To initialize the graph we tag the English side of the parallel text using a supervised model.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
Others depend upon various lexical heuristics: for example Chen and Liu (1992) attempt to balance the length of words in a three-word window, favoring segmentations that give approximately equal length for each word.
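As a toy rendering of that length-balancing idea (our own illustration of the heuristic as paraphrased here, not Chen and Liu's actual algorithm):

```python
def balance_penalty(words):
    """Sum of word-length variances over each three-word window;
    segmentations with more evenly sized words score lower."""
    penalty = 0.0
    for i in range(max(len(words) - 2, 0)):
        window = [len(w) for w in words[i:i + 3]]
        mean = sum(window) / 3.0
        penalty += sum((l - mean) ** 2 for l in window)
    return penalty

# Among candidate segmentations of the same string, prefer the most balanced.
candidates = [["ab", "cd", "ef"], ["a", "bcd", "ef"]]
print(min(candidates, key=balance_penalty))  # ['ab', 'cd', 'ef']
```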
Manual evaluation, scoring translations on a graded scale from 1 to 5, seems to be very hard to perform.
0
Participants and other volunteers contributed about 180 hours of labor in the manual evaluation.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
This would result in better rest cost estimation and better pruning.10 In general, tighter, but well factored, integration between the decoder and language model should produce a significant speed improvement.
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures.
0
For robustness reasons, the parser may output a set of dependency trees instead of a single tree. Apart from the leftmost dependent of the next input token, dependency type features are limited to tokens on the stack.
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
Using this heuristic, BABAR identifies existential definite NPs in the training corpus using our previous learning algorithm (Bean and Riloff, 1999) and resolves all occurrences of the same existential NP with each other.1
2.1.2 Syntactic Seeding
BABAR also uses syntactic heuristics to identify anaphors and antecedents that can be easily resolved.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
And if one is interested in TTS, one would probably consider the single orthographic word ACL to consist of three phonological words, /ey s'i el/, corresponding to the pronunciation of each of the letters in the acronym.
This paper discusses the Potsdam Commentary Corpus, a German-language corpus assembled by Potsdam University.
0
What ought to be developed now is an annotation tool that can make use of the format, allow for underspecified annotations and visualize them accordingly.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
This process produces a large set of caseframes coupled with a list of the noun phrases that they extracted.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
If they knew that the first four words in a hypergraph node would never extend to the left and form a 5-gram, then three or even fewer words could be kept in the backward state.
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
So, it is too costly to make IE technology “open-domain” or “on-demand” like IR or QA.
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
0
We are currently exploring such algorithms.
They showed that it was useful to abstract away from the details of the formalism and to examine the nature of the derivation process as reflected by properties of the derivation trees, finding that several of the formalisms considered can be seen as closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
As suggested in Section 4.3.2, a derivation with independent paths can be divided into subcomputations with limited sharing of information.
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
0
Then, the German infinitive 'besuchen' and the negation particle 'nicht' are translated.
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
$(\{1, \ldots, m\} \setminus \{l, l_1\},\; l') \rightarrow (\{1, \ldots, m\},\; l)$
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
Non-empty buckets contain an entry belonging to them or to a preceding bucket where a conflict occurred.
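A compact sketch of a table with that invariant (a generic Python illustration; KenLM's actual PROBING structure is C++ with fixed-size entries, a multiplicative hash, and a configurable fraction of empty buckets):

```python
class LinearProbingTable:
    """Open addressing with linear probing. Capacity is assumed to stay
    comfortably above the number of insertions (no resizing here)."""

    def __init__(self, capacity):
        self.keys = [None] * capacity
        self.values = [None] * capacity

    def insert(self, key, value):
        i = hash(key) % len(self.keys)
        # Walk forward from the ideal bucket; entries displaced by earlier
        # conflicts live in some later bucket, never an earlier one.
        while self.keys[i] is not None and self.keys[i] != key:
            i = (i + 1) % len(self.keys)
        self.keys[i], self.values[i] = key, value

    def lookup(self, key):
        i = hash(key) % len(self.keys)
        while self.keys[i] is not None:
            if self.keys[i] == key:
                return self.values[i]
            i = (i + 1) % len(self.keys)
        return None  # reached an empty bucket: the key is absent

t = LinearProbingTable(8)
t.insert(("trigram", "a b c"), 0.25)
print(t.lookup(("trigram", "a b c")))  # 0.25
```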
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
The word joining is done on the basis of a likelihood criterion.
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
0
It was then tested on section 22 of the Treebank in conjunction with the other parsers.
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed the upper bounds proposed by previous joint disambiguation systems, achieving segmentation and parsing results on a par with state-of-the-art standalone applications.
0
We use double-circles to indicate the space-delimited token boundaries.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
In the case of 将, the most common usage is as an adverb with the pronunciation jiang1, so that variant is assigned the estimated cost of 5.98, and a high cost is assigned to nominal usage with the pronunciation jiang4.
This paper describes Pseudo-Projective Dependency Parsing.
0
However, this argument is only plausible if the formal framework allows non-projective dependency structures, i.e. structures where a head and its dependents may correspond to a discontinuous constituent.
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations for entities already mentioned previously.
0
Named Entity Recognition: A Maximum Entropy Approach Using Global Information
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
Several extensions of AdaBoost for multiclass problems have been suggested (Freund and Schapire 97; Schapire and Singer 98).
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
Hieu Hoang named the code “KenLM” and assisted with Moses along with Barry Haddow.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
Similarly, hanzi sharing the GHOST radical 鬼 tend to denote spirits and demons, such as 鬼 gui3 'ghost' itself, 魔 mo2 'demon,' and 魇 yan3 'nightmare.'
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
Therefore in cases where the segmentation is identical between the two systems we assume that tagging is also identical.
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
Table 2.
Domain  Link accuracy  WN coverage
CC      73.3%          2/11
PC      88.9%          2/8
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be.
0
When OUT is large and distinct, its contribution can be controlled by training separate IN and OUT models, and weighting their combination.
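The simplest such weighted combination is a linear mixture of the two conditional models; schematically (one common instantiation, not necessarily the exact form used here):

$$p(s \mid t) = \lambda \, p_{\text{IN}}(s \mid t) + (1 - \lambda) \, p_{\text{OUT}}(s \mid t), \qquad 0 \le \lambda \le 1,$$

with $\lambda$ tuned on the IN development set.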
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
0
This suggests a direct parallel to (1): where $\tilde{p}(s, t)$ is a joint empirical distribution extracted from the IN dev set using the standard procedure.2 An alternative form of linear combination is a maximum a posteriori (MAP) combination (Bacchiani et al., 2004).
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
Better Arabic Parsing: Baselines, Evaluations, and Analysis
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
0
We call this approach parse hybridization.
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
0
Our approach uses word-to-word dependencies between source and target words.
This paper discusses the Potsdam Commentary Corpus, a German-language corpus assembled by Potsdam University.
0
• Anaphoric links: the annotator is asked to specify whether the anaphor is a repetition, partial repetition, pronoun, epithet (e.g., Andy Warhol – the Pop Art artist), or is-a (e.g., Andy Warhol was often hunted by photographers).
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed jointly with parsing in a pipeline.
0
MADA uses an ensemble of SVMs to first re-rank the output of a deterministic morphological analyzer.