Columns: source_text (string, length 27-368), label (int64, 0 or 1), target_text (string, length 1-5.38k).
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
Finally, our U (unparsed) measure is used to report the number of sentences to which our system could not propose a joint analysis.
This paper presents unsupervised models for named entity classification.
0
The key to the methods we describe is redundancy in the unlabeled data.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
Backoff-smoothed models estimate this probability based on the observed entry with longest matching history $w_f^n$, returning $p(w_n \mid w_1^{n-1}) = p(w_n \mid w_f^{n-1}) \prod_{i=1}^{f-1} b(w_i^{n-1})$, where the probability $p(w_n \mid w_f^{n-1})$ and backoff penalties $b(w_i^{n-1})$ are given by an already-estimated model.
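To make the recursion concrete, here is a minimal Python sketch of a backoff query, assuming a hypothetical `model` dict that maps n-gram tuples to (log probability, log backoff) pairs; this illustrates the formula above, not KenLM's actual API.

```python
def backoff_query(model, context, word):
    """Return the log probability p(word | context) by backoff:
    p(w_n | w_1^{n-1}) = p(w_n | w_f^{n-1}) * prod_{i=1}^{f-1} b(w_i^{n-1}),
    accumulated here in log space."""
    penalty = 0.0
    for f in range(len(context) + 1):            # try the longest history first
        ngram = tuple(context[f:]) + (word,)
        if ngram in model:
            return model[ngram][0] + penalty     # longest match w_f^n found
        hist = tuple(context[f:])                # history we failed to extend:
        if hist in model:                        # pay its backoff penalty b(.)
            penalty += model[hist][1]
    return float("-inf")                         # word not in the model at all

model = {("the",): (-1.0, -0.5), ("cat",): (-2.0, 0.0), ("the", "cat"): (-0.7, 0.0)}
print(backoff_query(model, ("the",), "cat"))  # -0.7 (full bigram found)
print(backoff_query(model, ("big",), "cat"))  # -2.0 (backs off to the unigram)
```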
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
Traditional Arabic linguistic theory treats both of these types as subcategories of noun. Figure 1: The Stanford parser (Klein and Manning, 2002) is unable to recover the verbal reading of the unvocalized surface form an (Table 1).
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
The first method uses a similar algorithm to that of (Yarowsky 95), with modifications motivated by (Blum and Mitchell 98).
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRSs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
If $\phi(A)$ gives the number of occurrences of each terminal in the structure named by $A$, then, given the constraints imposed on the formalism, for each rule $A \to f_p(A_1, \ldots, A_n)$ we have the equality $\phi(A) = \phi(A_1) + \cdots + \phi(A_n) + c_p$, where $c_p$ is some constant.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
Applications such as machine translation use language model probability as a feature to assist in choosing between hypotheses.
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective tree banks is often neglected because of the relative scarcity of problematic constructions.
0
It is also true of the adaptation of the Collins parser for Czech (Collins et al., 1999) and the finite-state dependency parser for Turkish by Oflazer (2003).
Combining multiple highly-accurate independent parsers yields promising results.
0
If enough parsers suggest that a particular constituent belongs in the parse, we include it.
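As a concrete illustration, a hedged sketch of this voting scheme, assuming each parse is represented as a set of (label, start, end) spans (a hypothetical encoding, not the authors' exact one):

```python
from collections import Counter

def combine_parses(parses, threshold=None):
    """Keep a constituent if at least `threshold` of the input parsers
    propose it; by default, a strict majority (e.g. 2 of 3)."""
    if threshold is None:
        threshold = len(parses) // 2 + 1
    votes = Counter(c for parse in parses for c in parse)
    return {c for c, n in votes.items() if n >= threshold}

p1 = {("NP", 0, 2), ("VP", 2, 5), ("S", 0, 5)}
p2 = {("NP", 0, 2), ("VP", 3, 5), ("S", 0, 5)}
p3 = {("NP", 0, 2), ("VP", 2, 5), ("S", 0, 5)}
print(combine_parses([p1, p2, p3]))  # NP(0,2), VP(2,5), S(0,5) get 2+ votes
```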
Combining multiple highly-accurate independent parsers yields promising results.
0
Table 4 shows how much the Bayes switching technique uses each of the parsers on the test set.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
Finally, Section 5 explains how BABAR relates to previous work, and Section 6 summarizes our conclusions.
They found replacing it with a ranked evaluation to be more suitable.
0
Since different judges judged different systems (recall that judges were excluded from judging system output from their own institution), we normalized the scores.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed both in a pipeline and jointly with parsing.
0
The basic word order is VSO, but SVO, VOS, and VO configurations are also possible. Nouns and verbs are created by selecting a consonantal root (usually triliteral or quadriliteral), which bears the semantic core, and adding affixes and diacritics.
Combining multiple highly-accurate independent parsers yields promising results.
0
Precision is the portion of hypothesized constituents that are correct and recall is the portion of the Treebank constituents that are hypothesized.
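These two measures are simple set computations; a worked sketch over (label, start, end) spans, with hypothetical toy trees:

```python
def precision_recall(hypothesis, gold):
    """PARSEVAL-style scores: precision = |hyp & gold| / |hyp|,
    recall = |hyp & gold| / |gold|."""
    correct = len(hypothesis & gold)
    return correct / len(hypothesis), correct / len(gold)

gold = {("NP", 0, 2), ("VP", 2, 5), ("S", 0, 5)}
hyp = {("NP", 0, 2), ("VP", 3, 5), ("S", 0, 5)}
print(precision_recall(hyp, gold))  # (0.667, 0.667): 2 of 3 correct each way
```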
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
0
From this we see that a finer-grained model for parser combination, at least for the features we have examined, will not give us any additional power.
The AdaBoost algorithm was developed for supervised learning.
0
Unlabeled examples in the named-entity classification problem can reduce the need for supervision to a handful of seed rules.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
Then, token- level HMM emission parameters are drawn conditioned on these assignments such that each word is only allowed probability mass on a single assigned tag.
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
0
This number must be less than or equal to n − 1.
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique, their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
0
A Hebrew surface token may have several readings, each of which corresponds to a sequence of segments and their corresponding PoS tags.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the inter-human agreement.
0
In various dialects of Mandarin certain phonetic rules apply at the word level.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
The first four affixes are so-called resultative affixes: they denote some property of the resultant state of a verb, as in 忘不了 wang4bu4-liao3 (forget-not-attain) 'cannot forget.'
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
We present two algorithms.
In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
-1 means that an NP should be ruled out as a possible antecedent, and 0 means that the knowledge source remains neutral (i.e., it has no reason to believe that they cannot be coreferent).
The use of global features has shown excellent results on MUC-6 and MUC-7 test data.
0
As we will see from Table 3, not much improvement is derived from this feature.
A beam search concept is applied as in speech recognition.
0
The final score is obtained from: $\max_{e,e'} \max_{j \in \{J-L, \ldots, J\}} p(\$ \mid e, e') \cdot Q_{e'}(e, I, \{1, \ldots, J\}, j)$, where $p(\$ \mid e, e')$ denotes the trigram language model, which predicts the sentence boundary $ at the end of the target sentence.
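A hedged sketch of this final maximization, assuming a hypothetical table `Q` of partial-hypothesis scores keyed by (previous word, last word, covered positions, last source position) and a function `p_end` for the boundary trigram probability; the real decoder fills Q by dynamic programming:

```python
def final_score(Q, p_end, J, L):
    """Pick the best complete hypothesis: all J source positions covered,
    final source position j in {J-L, ..., J}, scored times p($ | e, e')."""
    full_coverage = frozenset(range(1, J + 1))
    best = float("-inf")
    for (e_prev, e, covered, j), q in Q.items():
        if covered == full_coverage and j >= J - L:
            best = max(best, p_end(e, e_prev) * q)
    return best
```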
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform.
0
On the other hand, when all systems produce muddled output, but one is better, and one is worse, but not completely wrong, a judge is inclined to hand out judgements of 4, 3, and 2.
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
Next, we describe four contextual role knowledge sources that are created from the training examples and the caseframes.
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language.
0
We utilized two kinds of datasets in our experiments: (i) monolingual treebanks and (ii) large amounts of parallel text with English on one side.
The "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from the Märkische Allgemeine Zeitung, a German regional daily.
0
For the English RST-annotated corpus that is made available via LDC, his corresponding result is 62%.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
A moment's reflection will reveal that things are not quite that simple.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
(2003), which gives 96.8% accuracy on the test set.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
Comparison with state-of-the-art taggers For comparison we consider two unsupervised taggers: the HMM with log-linear features of Berg-Kirkpatrick et al.
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
0
We train linear mixture models for conditional phrase pair probabilities over IN and OUT so as to maximize the likelihood of an empirical joint phrase-pair distribution extracted from a development set.
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech.
0
Instead, we want to apply an inverse transformation to recover the underlying (nonprojective) dependency graph.
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be.
0
We incorporate instance weighting into a mixture-model framework, and find that it yields consistent improvements over a wide range of baselines.
They showed that it was useful to abstract away from the details of the formalism and to examine the nature of the derivation process as reflected by properties of the derivation trees, finding that several of the formalisms considered can be seen as closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
We would like to relax somewhat the constraint on the path complexity of formalisms in LCFRS.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
Whether a language even has orthographic words is largely dependent on the writing system used to represent the language (rather than the language itself); the notion "orthographic word" is not universal.
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
0
Morphological segmentation decisions in our model are delegated to a lexeme-based PCFG, and we show that using a simple treebank grammar, a data-driven lexicon, and a linguistically motivated unknown-tokens handling technique, our model outperforms (Tsarfaty, 2006) and (Cohen and Smith, 2007) on the joint task and achieves state-of-the-art results on a par with current respective standalone models.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
(2010), we adopt a simpler naïve Bayes strategy, where all features are emitted independently.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
The model starts by generating a tag assignment for each word type in a vocabulary, assuming one tag per word.
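A minimal sketch of this type-level step, with hypothetical vocabulary and tag set; the point is that a tag is drawn once per word type, so every token of that type shares it:

```python
import random

def assign_type_tags(vocabulary, tags, seed=0):
    """One tag per word type: token-level emissions later place
    probability mass only on each word's single assigned tag."""
    rng = random.Random(seed)
    return {word: rng.choice(tags) for word in vocabulary}

type_tags = assign_type_tags(["the", "dog", "runs"], ["DT", "NN", "VB"])
sentence = ["the", "dog", "runs", "the", "dog"]
print([type_tags[w] for w in sentence])  # repeated types repeat their tag
```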
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
Similarly, the tokens and are tested against each list, and if found, a corresponding feature will be set to 1.
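A sketch of such list-membership features, with made-up lists standing in for the paper's gazetteers; each (list, token) test yields one binary feature:

```python
LISTS = {
    "person_first_names": {"john", "mary"},
    "corporate_suffixes": {"ltd.", "inc.", "corp."},
}

def list_features(token):
    """One 0/1 feature per list: 1 iff the lowercased token is in the list."""
    t = token.lower()
    return {f"in_{name}": int(t in words) for name, words in LISTS.items()}

print(list_features("Ltd."))
# {'in_person_first_names': 0, 'in_corporate_suffixes': 1}
```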
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
83 77.
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not.
0
This has led previous workers to adopt ad hoc linear weighting schemes (Finch and Sumita, 2008; Foster and Kuhn, 2007; Lü et al., 2007).
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
0
In particular, it may not be possible to learn functions $f_1(x_{1,i}) = f_2(x_{2,i})$ for $i = m+1 \ldots n$: either because there is some noise in the data, or because it is just not realistic to expect to learn perfect classifiers given the features used for representation.
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table.
0
We model $p_o(s|t)$ using a MAP criterion over weighted phrase-pair counts: $p_o(s|t) = \frac{c_\lambda(s,t) + \gamma\, p_f(s|t)}{\sum_{s'} c_\lambda(s',t) + \gamma}$, and from the similarity to (5), assuming $\gamma = 0$, we see that $w_\lambda(s,t)$ can be interpreted as approximating $p_f(s,t)/p_o(s,t)$.
This corpus has several advantages: it is annotated at different levels.
0
Not all the layers have been produced for all the texts yet.
The approach has been successfully tested on the 8 000-word Verbmobil task.
0
For our demonstration system, we typically use the pruning threshold t0 = 5.0 to speed up the search by a factor 5 while allowing for a small degradation in translation accuracy.
They showed that it was useful to abstract away from the details of the formalism and to examine the nature of the derivation process as reflected by properties of the derivation trees, finding that several of the formalisms considered can be seen as closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
In order to observe the similarity between these constrained systems, it is crucial to abstract away from the details of the structures and operations used by the system.
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
0
Features and context were initially introduced into the models, but they failed to yield any gains in performance.
They found replacing it with a ranked evaluation to be more suitable.
0
Judges were excluded from assessing the quality of MT systems that were submitted by their institution.
Combining multiple highly-accurate independent parsers yields promising results.
0
We plan to explore more powerful techniques for exploiting the diversity of parsing methods.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
Their default variant implements a forward trie, in which words are looked up in their natural left-to-right order.
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
Formally, the Dempster-Shafer theory defines a probability density function m(S), where S is a set of hypotheses.
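To make the machinery concrete, here is a small sketch of Dempster's rule of combination over such mass functions, with hypothetical antecedent sets; masses over intersecting hypothesis sets are multiplied and renormalized to discard conflicting (empty-intersection) mass:

```python
def combine(m1, m2):
    """Dempster's rule: m(S) is proportional to the sum of m1(A)*m2(B)
    over all A, B with A & B == S, renormalized by 1 minus the conflict."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            s = a & b
            if s:
                combined[s] = combined.get(s, 0.0) + ma * mb
            else:
                conflict += ma * mb
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

theta = frozenset({"NP1", "NP2", "NP3"})           # all candidate antecedents
m1 = {frozenset({"NP1", "NP2"}): 0.7, theta: 0.3}  # one knowledge source
m2 = {frozenset({"NP2"}): 0.6, theta: 0.4}         # another knowledge source
print(combine(m1, m2))  # most mass lands on {NP2}
```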
Here we show how non-projective dependency parsing can be achieved by combining a data driven projective parser with special graph transformation techniques.
0
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
A more rigid mechanism for modeling sparsity is proposed by Ravi and Knight (2009), who minimize the size of tagging grammar as measured by the number of transition types.
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.
0
Our monolingual similarity function (for connecting pairs of foreign trigram types) is the same as the one used by Subramanya et al. (2010).
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
In this section, we extend state to optimize left-to-right queries.
BABAR performed successfully in both the terrorism and natural disaster domains, and its contextual-role knowledge proved especially helpful for pronouns.
0
We generate these caseframes automatically by running AutoSlog over the training corpus exhaustively so that it literally generates a pattern to extract every noun phrase in the corpus.
This paper presents a maximum entropy-based named entity recognizer (NER).
0
Ltd., then organization will be more probable.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
As with personal names, we also derive an estimate from text of the probability of finding a transliterated name of any kind (PTN).
They showed that it was useful to abstract away from the details of the formalism and to examine the nature of the derivation process as reflected by properties of the derivation trees, finding that several of the formalisms considered can be seen as closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
In addition to the tapes required to store the indices, M requires one work tape for splitting the substrings.
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech.
0
In addition, there are several approaches to non-projective dependency parsing that are still to be evaluated in the large (Covington, 1990; Kahane et al., 1998; Duchier and Debusmann, 2001; Holan et al., 2001; Hellwig, 2003).
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
We will evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
Thus, rather than give a single evaluative score, we prefer to compare the performance of our method with the judgments of several human subjects.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
Overall, the difference between our most basic model (1TW) and our full model (+FEATS) is 21.2% and 13.1% for the best and median settings respectively.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
Maamouri et al.
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
0
The advantage is that we can recombine search hypotheses by dynamic programming.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using a caseframe network for anaphora resolution, and used information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
In our situation, the competing hypotheses are the possible antecedents for an anaphor.
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
0
In focus is in particular the correlation with rhetorical structure, i.e., the question whether specific rhetorical relations — or groups of relations in particular configurations — are signalled by speakers with prosodic means.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using a caseframe network for anaphora resolution, and used information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
But in most cases they can be used interchangeably.
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be.
0
The reference medicine for Silapo is EPREX/ERYPO, which contains epoetin alfa.
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
0
The tool we use is MMAX, which has been specifically designed for marking co-reference.
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
The computing time is given in terms of CPU time per sentence (on a 450-MHz Pentium III PC).
In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
3.1 General Knowledge Sources.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
On the other hand, in a translation system one probably wants to treat this string as a single dictionary word since it has a conventional and somewhat unpredictable translation into English.
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language.
0
This is similar to stacking the different feature instantiations into long (sparse) vectors and computing the cosine similarity between them.
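A sketch of that computation, treating each word type's feature instantiations as a sparse count vector (feature names here are hypothetical):

```python
import math
from collections import Counter

def cosine(u, v):
    """Cosine similarity of two sparse count vectors (Counters)."""
    dot = sum(c * v[f] for f, c in u.items())
    norm = math.sqrt(sum(c * c for c in u.values())) * \
           math.sqrt(sum(c * c for c in v.values()))
    return dot / norm if norm else 0.0

x = Counter({"left=der": 3, "right=Hund": 1})   # features of trigram type 1
y = Counter({"left=der": 2, "right=Katze": 2})  # features of trigram type 2
print(cosine(x, y))  # overlap only on the shared left-context feature
```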
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRSs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
We can show that languages generated by LCFRS's are semilinear as long as the composition operation does not remove any terminal symbols from its arguments.
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
0
Replacing this with a ranked evaluation seems to be more suitable.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
This method, one instance of which we term the "greedy algorithm" in our evaluation of our own system in Section 5, involves starting at the beginning (or end) of the sentence, finding the longest word starting (ending) at that point, and then repeating the process starting at the next (previous) hanzi until the end (beginning) of the sentence is reached.
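A runnable sketch of this greedy maximum-matching procedure in its left-to-right form, over a toy dictionary (the real system operates on hanzi with a large lexicon):

```python
def greedy_segment(text, dictionary):
    """Repeatedly take the longest dictionary word starting at the current
    position; fall back to a single character when nothing matches."""
    words, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):          # longest candidate first
            if text[i:j] in dictionary or j == i + 1:
                words.append(text[i:j])
                i = j
                break
    return words

print(greedy_segment("abcd", {"ab", "abc", "cd", "d"}))  # ['abc', 'd']
```

Note the classic failure mode this illustrates: the greedy choice 'abc' blocks the alternative segmentation ['ab', 'cd'], which is why the method serves as a comparison baseline rather than the proposed approach.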
The use of global features has shown excellent results on MUC-6 and MUC-7 test data.
0
(1998) did make use of information from the whole document.
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.
0
It falls short of the “Projection” baseline for German, but is statistically indistinguishable in terms of accuracy.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
The family name set is restricted: there are a few hundred single-hanzi family names, and about ten double-hanzi ones.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
Most languages that use Roman, Greek, Cyrillic, Armenian, or Semitic scripts, and many that use Indian-derived scripts, mark orthographic word boundaries; however, languages written in a Chinese-derived writing system, including Chinese and Japanese, as well as Indian-derived writing systems of languages like Thai, do not delimit orthographic words. Put another way, written Chinese simply lacks orthographic words.
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique, their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
0
These words are in turn highly ambiguous, breaking the assumption underlying most parsers that the yield of a tree for a given sentence is known in advance.
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
0
• Similarly, when the naïve Bayes classifier is configured such that the constituents require estimated probabilities strictly larger than 0.5 to be accepted, there is not enough probability mass remaining on crossing brackets for them to be included in the hypothesis.
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
0
We represent all morphological analyses of a given utterance using a lattice structure.
This topic has been getting more attention, driven by the needs of various NLP applications.
0
Among these 32 sets, we found the following pairs of sets which have two or more links.
They have made use of local and global features to deal with the instances of the same token in a document.
0
Such a classification can be seen as a not-always-correct summary of global features.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
1 55.8 38.
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
We group the features used into feature groups.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using a caseframe network for anaphora resolution, and used information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
For the disasters domain, 8245 texts were used for training and the 40 test documents contained 447 anaphoric links.
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table.
0
Apart from MERT difficulties, a conceptual problem with log-linear combination is that it multiplies feature probabilities, essentially forcing different features to agree on high-scoring candidates.
Two general approaches are presented and two combination techniques are described for each approach.
0
Surprisingly, the non-parametric switching technique also exhibited robust behaviour in this situation.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
The method being described (henceforth ST)...
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
RandLM’s stupid backoff variant stores counts instead of probabilities and backoffs.
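For contrast with backoff smoothing, a sketch of stupid-backoff scoring over raw counts, assuming history counts are stored alongside n-gram counts; the fixed factor 0.4 follows Brants et al. (2007):

```python
ALPHA = 0.4  # fixed backoff factor; scores are not normalized probabilities

def stupid_backoff(counts, context, word, total):
    """Score(word | context) = count(ngram)/count(history) if seen,
    else ALPHA * Score(word | shorter context)."""
    ngram = tuple(context) + (word,)
    if counts.get(ngram, 0) > 0:
        denom = counts[tuple(context)] if context else total
        return counts[ngram] / denom
    if not context:
        return 0.0                       # word never observed at all
    return ALPHA * stupid_backoff(counts, context[1:], word, total)

counts = {("the",): 5, ("cat",): 3, ("the", "cat"): 2}
print(stupid_backoff(counts, ("the",), "cat", total=8))  # 2/5, bigram seen
print(stupid_backoff(counts, ("big",), "cat", total=8))  # 0.4 * 3/8, backed off
```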
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
This group consists of (3 × total number of possible zones) features.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
The model described here thus demonstrates great potential for use in widespread applications.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
Linear probing hash tables must have more buckets than entries, or else an empty bucket will never be found.
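A toy linear-probing table that makes the invariant visible: lookups walk forward from the hashed bucket and stop at the first empty one, so an empty bucket must always exist for misses to terminate (hypothetical code, not the PROBING implementation itself):

```python
class ProbingTable:
    def __init__(self, buckets):
        # Caller must keep the number of entries strictly below `buckets`.
        self.slots = [None] * buckets

    def insert(self, key, value):
        i = hash(key) % len(self.slots)
        while self.slots[i] is not None and self.slots[i][0] != key:
            i = (i + 1) % len(self.slots)    # probe linearly for a free slot
        self.slots[i] = (key, value)

    def lookup(self, key):
        i = hash(key) % len(self.slots)
        while self.slots[i] is not None:     # an empty bucket ends a miss
            if self.slots[i][0] == key:
                return self.slots[i][1]
            i = (i + 1) % len(self.slots)
        return None

t = ProbingTable(4)       # 4 buckets, so at most 3 entries
t.insert("the cat", 0.5)
print(t.lookup("the cat"), t.lookup("unseen"))  # 0.5 None
```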
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
For example Chen and Liu (1992) report precision and recall rates of over 99%, but this counts only the words that occur in the test corpus that also occur in their dictionary.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
One class comprises words derived by productive morphological processes, such as plural noun formation using the suffix 们 men0.