Columns: source_text (string, 27–368 characters), label (int64, 0–1), target_text (string, 1–5.38k characters)
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique, their model outperformed previous pipelined, integrated, or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
0
Morphological disambiguators that consider a token in context (an utterance) and propose the most likely morphological analysis of an utterance (including segmentation) were presented by Bar-Haim et al. (2005), Adler and Elhadad (2006), and Shacham and Wintner (2007), and achieved good results (the best segmentation result so far is around 98%).
This paper presents a maximum entropy-based named entity recognizer (NER).
0
In addition, each feature function is a binary function.
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
0
Automatic scores are computed on a larger test set than manual scores (3064 sentences vs. 300–400 sentences). For the collected manual judgements, we do not necessarily have the same sentence judged for both systems (judges evaluate 5 systems out of the 8–10 participating systems).
They showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the interhuman agreement.
0
The method being described (henceforth ST)...
Explanations for this phenomenon are the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order, and how these factors complicate syntactic disambiguation.
0
The content does not necessarily reflect the views of the U.S. Government, and no official endorsement should be inferred.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
The size of TRIE is particularly sensitive to vocabulary size, so vocabulary filtering is quite effective at reducing model size.
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be.
0
(Thus the domain of the dev and test corpora matches IN.)
Other kinds of productive word classes, such as company names, abbreviations, and place names, can easily be handled given appropriate models.
0
For the seen word ir, 'generals,' there is an ε:NC transduction from to the node preceding ir,; this arc has cost cost(f,) - cost(unseen(f,)), so that the cost of the whole path is the desired cost(f,).
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers.
0
One side of the decision making process is when we choose to believe a constituent should be in the parse, even though only one parser suggests it.
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
0
In the first stage, we run a single step of label propagation, which transfers the label distributions from the English vertices to the connected foreign language vertices (say, V_f^l) at the periphery of the graph.
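A minimal sketch of the idea in the sentence above (not the authors' implementation): one propagation step that sets each foreign vertex's label distribution to the similarity-weighted average of its English neighbours' distributions. The data layout and all names are assumptions for illustration.

def propagate_once(english_labels, edges):
    """english_labels: {en_vertex: {tag: prob}}
    edges: {foreign_vertex: [(en_vertex, weight), ...]}  (hypothetical format)
    Returns {foreign_vertex: {tag: prob}} as the weighted average of the
    label distributions of its English neighbours."""
    foreign_labels = {}
    for fv, neighbours in edges.items():
        totals, z = {}, 0.0
        for ev, w in neighbours:
            for tag, p in english_labels.get(ev, {}).items():
                totals[tag] = totals.get(tag, 0.0) + w * p
            z += w
        if z > 0:
            foreign_labels[fv] = {tag: v / z for tag, v in totals.items()}
    return foreign_labels

# Toy usage: a single English vertex tagged NOUN propagates to "haus".
print(propagate_once({"house": {"NOUN": 1.0}}, {"haus": [("house", 0.8)]}))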
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
In this paper, Section 2 begins by explaining how contextual role knowledge is represented and learned.
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.
0
Given this similarity function, we define a nearest neighbor graph, where the edge weight for the n most similar vertices is set to the value of the similarity function and to 0 for all other vertices.
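A small illustrative sketch of the nearest-neighbour graph construction described above, assuming an arbitrary similarity function; edges are kept only to the n most similar vertices, and missing entries are implicitly weight 0. Vertex names and the similarity function are made up.

def knn_graph(vertices, similarity, n):
    graph = {}
    for v in vertices:
        scored = [(similarity(v, u), u) for u in vertices if u != v]
        scored.sort(reverse=True)
        graph[v] = {u: s for s, u in scored[:n]}  # absent entries mean weight 0
    return graph

# Toy usage with a made-up similarity over strings (character overlap).
sim = lambda a, b: len(set(a) & set(b)) / len(set(a) | set(b))
print(knn_graph(["cat", "car", "dog"], sim, n=1))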
This corpus has several advantages: it is annotated at different levels.
0
In Reitter and Stede (2003) we went a different way and suggested URML, an XML format for underspecifying rhetorical structure: a number of relations can be assigned instead of a single one, and competing analyses can be represented with shared forests.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
The second row represents the performance of the median hyperparameter setting.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
(We would like to note though that unlike previous boosting algorithms, the CoBoost algorithm presented here is not a boosting algorithm under Valiant's (Valiant 84) Probably Approximately Correct (PAC) model.)
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
The first modification — cautiousness — is a relatively minor change.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
We compare three hash tables: our probing implementation, GCC's hash set, and Boost's unordered.
The authors show that PATB is similar to other treebanks but that annotation consistency remains low.
0
Maamouri et al.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the interhuman agreement.
0
This is an issue that we have not addressed at the current stage of our research.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
Then each arc of D maps either from an element of H to an element of P, or from ε (i.e., the empty string) to an element of P. More specifically, each word is represented in the dictionary as a sequence of arcs, starting from the initial state of D and labeled with an element of H×P, which is terminated with a weighted arc labeled with an element of ε×P. The weight represents the estimated cost (negative log probability) of the word.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
This paper discusses the use of unlabeled examples for the problem of named entity classification.
There are clustering approaches that assign a single POS tag to each word type.
0
We report token- and type-level accuracy in Tables 3 and 6 for all languages and system settings.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the interhuman agreement.
0
However, as we have noted, nothing inherent in the approach precludes incorporating higher-order constraints, provided they can be effectively modeled within a finite-state framework.
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
Also, the method of using keywords rules out phrases which don’t contain popular words in the domain.
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
0
Our original hope in combining these parsers is that their errors are independently distributed.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
The same numbers were used for each data structure.
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.
0
Our work is closest to that of Yarowsky and Ngai (2001), but differs in two important ways.
Due to the many similarly performing systems, they were not able to draw strong conclusions on the question of correlation between manual and automatic evaluation metrics.
0
Following this method, we repeatedly — say, 1000 times — sample sets of sentences from the output of each system, measure their BLEU score, and use these 1000 BLEU scores as basis for estimating a confidence interval.
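The resampling procedure described above, sketched in Python under the assumption that a bleu(hypotheses, references) scoring function is available; the function name and parameters are placeholders, not a real metric implementation.

import random

def bootstrap_bleu(hyps, refs, bleu, samples=1000, alpha=0.05):
    """Resample sentences with replacement, score each sample, and read a
    (1 - alpha) confidence interval off the sorted sample scores."""
    n = len(hyps)
    scores = []
    for _ in range(samples):
        idx = [random.randrange(n) for _ in range(n)]   # sample with replacement
        scores.append(bleu([hyps[i] for i in idx], [refs[i] for i in idx]))
    scores.sort()
    lo = scores[int(samples * alpha / 2)]
    hi = scores[int(samples * (1 - alpha / 2)) - 1]
    return lo, hi   # e.g. a 95% confidence interval for the system's BLEU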
The resulting model is compact, efficiently learnable and linguistically expressive.
0
In most cases, however, these expansions come with a steep increase in model complexity, with respect to training procedure and inference time.
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
3 54.4 33.
It is probably the first analysis of Arabic parsing of this kind.
0
5.2 Discussion.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
Modifying the Berkeley parser for Arabic is straightforward.
The AdaBoost algorithm was developed for supervised learning.
0
We can now add a new weak hypothesis h_t based on a feature in X_1 with a confidence value α_t; h_t and α_t are chosen to minimize the function. We now define, for 1 ≤ i ≤ n, the following virtual distribution; as before, Z_t is a normalization constant.
The use of global features has shown excellent results on the MUC-6 and MUC-7 test data.
0
For each token, zero, one, or more of the features in each feature group are set to 1.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
(See also Wu and Fung [1994].)
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
For a given "word" in the automatic segmentation, if at least k of the human judges agree that this is a word, then that word is considered to be correct.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
Each visited entry w_i^n stores backoff b(w_i^n).
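An illustrative toy version of a linear-probing lookup in the spirit of the PROBING description above (not the actual KenLM code): each slot stores a key together with its backoff value, and lookup hashes the key and scans forward to the next free slot. It assumes the table is never completely full.

class ProbingTable:
    def __init__(self, size):
        self.slots = [None] * size

    def insert(self, key, backoff):
        i = hash(key) % len(self.slots)
        while self.slots[i] is not None and self.slots[i][0] != key:
            i = (i + 1) % len(self.slots)   # linear probing: try the next slot
        self.slots[i] = (key, backoff)

    def lookup(self, key):
        i = hash(key) % len(self.slots)
        while self.slots[i] is not None:
            if self.slots[i][0] == key:
                return self.slots[i][1]     # the stored backoff for this entry
            i = (i + 1) % len(self.slots)
        return None                         # key not present

t = ProbingTable(8)
t.insert(("is", "one"), -0.25)
print(t.lookup(("is", "one")))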
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
While it is possible to utilize the feature-based log-linear approach described in Berg-Kirkpatrick et al.
This paper discusses the Potsdam Commentary Corpus, a corpus of German newspaper commentaries assembled by Potsdam University.
0
A corpus of German newspaper commentaries has been assembled and annotated with different information (and currently, to different degrees): part-of-speech, syntax, rhetorical structure, connectives, co-reference, and information structure.
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
0
This technique has the advantage of requiring no training, but it has the disadvantage of treating all parsers equally even though they may have differing accuracies or may specialize in modeling different phenomena.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
For all variants, we found that BerkeleyLM always rounds the floating-point mantissa to 12 bits then stores indices to unique rounded floats.
Explanations for this phenomenon are the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order, and how these factors complicate syntactic disambiguation.
0
In these experiments, the input lacks segmentation markers, hence the slightly different dev set baseline than in Table 6.
There is no global pruning.
0
Can we do . QmS: Yes, wonderful.
Two general approaches are presented and two combination techniques are described for each approach.
0
In the interest of testing the robustness of these combining techniques, we added a fourth, simple nonlexicalized PCFG parser.
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
Analysis of the data revealed that the contextual role knowledge is especially helpful for resolving pronouns because, in general, they are semantically weaker than definite NPs.
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
Specifically, for both settings we report results on the median run for each setting.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using a caseframe network for anaphora resolution, and used information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
During coreference resolution, BABAR checks (1) whether the anaphor is among the lexical expectations for the caseframe that extracts the candidate antecedent, and (2) whether the candidate is among the lexical expectations for the caseframe that extracts the anaphor.
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
The latter arcs correspond to OOV words in English.
These clusters are computed using an SVD variant without relying on transitional structure.
0
This departure from the traditional token-based tagging approach allows us to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using a caseframe network for anaphora resolution, and used information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
Using this heuristic, BABAR identifies existential definite NPs in the training corpus using our previous learning algorithm (Bean and Riloff, 1999) and resolves all occurrences of the same existential NP with each other. BABAR also uses syntactic heuristics to identify anaphors and antecedents that can be easily resolved.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
f, nan2gua1+men0 'pumpkins' is by no means impossible.
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
However, the point of RandLM is to scale to even larger data, compensating for this loss in quality.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
So the success of the algorithm may well be due to its success in maximizing the number of unlabeled examples on which the two decision lists agree.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
Mutual information was shown to be useful in the segmentation task given that one does not have a dictionary.
Two general approaches are presented and two combination techniques are described for each approach.
0
Similar advances have been made in machine translation (Frederking and Nirenburg, 1994), speech recognition (Fiscus, 1997) and named entity recognition (Borthwick et al., 1998).
Explanations for this phenomenon are the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order, and how these factors complicate syntactic disambiguation.
0
Particles are uninflected.
The use of global features has shown excellent results on the MUC-6 and MUC-7 test data.
0
Our results show that MENERGI performs as well as IdentiFinder when trained on a comparable amount of training data.
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
1 1 0.
This paper presents a maximum entropy-based named entity recognizer (NER).
0
In this section, we try to compare our results with those obtained by IdentiFinder ' 97 (Bikel et al., 1997), IdentiFinder ' 99 (Bikel et al., 1999), and MENE (Borthwick, 1999).
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
As noted, this sentence consists of four words, namely 日文 ri4wen2 'Japanese,' 章魚 zhang1yu2 'octopus,' 怎麼 zen3me0 'how,' and 說 shuo1 'say.'
They showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the interhuman agreement.
0
Finally, this effort is part of a much larger program that we are undertaking to develop stochastic finite-state methods for text analysis with applications to TIS and other areas; in the final section of this paper we will briefly discuss this larger program so as to situate the work discussed here in a broader context.
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
First, we describe how the caseframes are represented and learned.
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
We would like to thank Prof. Ralph Grishman, Mr. Takaaki Hasegawa and Mr. Yusuke Shinyama for useful comments, discussion and evaluation.
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language.
0
To optimize this function, we used L-BFGS, a quasi-Newton method (Liu and Nocedal, 1989).
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
Section 4 presents experimental results on two corpora: the MUC-4 terrorism corpus and Reuters texts about natural disasters.
BABAR performed well in both the terrorism and natural disaster domains, and the contextual-role knowledge proved especially helpful for resolving pronouns.
0
These systems rely on a training corpus that has been manually annotated with coreference links.
There are clustering approaches that assign a single POS tag to each word type.
0
Feature-based HMM Model (Berg-Kirkpatrick et al., 2010): The KM model uses a variety of orthographic features and employs the EM or LBFGS optimization algorithm; Posterior regularization model (Graça et al., 2009): The G10 model uses the posterior regularization approach to ensure the tag sparsity constraint.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
The transition from f, to a final state transduces ε to the grammatical tag PL with cost cost(unseen(f,)): cost(i¥JJ1l.ir,) = cost(i¥JJ1l.)
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.
0
Our oracles took advantage of the labeled treebanks. While we tried to minimize the number of free parameters in our model, there are a few hyperparameters that need to be set.
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure.
0
A final alternate approach would be to combine weighted joint frequencies rather than conditional estimates, i.e., c_I(s, t) + w_λ(s, t) c_O(s, t), suitably normalized. Such an approach could be simulated by a MAP-style combination in which separate β(t) values were maintained for each t. This would make the model more powerful, but at the cost of having to learn to downweight OUT separately for each t, which we suspect would require more training data for reliable performance.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
TPT has theoretically better locality because it stores ngrams near their suffixes, thereby placing reads for a single query in the same or adjacent pages.
This paper talks about Unsupervised Models for Named Entity Classification.
0
(Riloff and Shepherd 97) describe a bootstrapping approach for acquiring nouns in particular categories (such as "vehicle" or "weapon" categories).
Replacing this with a ranked evaluation seems to be more suitable.
0
We received submissions from 14 groups from 11 institutions, as listed in Figure 2.
The texts were annotated with the RSTtool.
0
Since Daneš’ proposals of ‘thematic development patterns’, a few suggestions have been made as to the existence of a level of discourse structure that would predict the information structure of sentences within texts.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
In Figure 4 we show an example of variation between the parsing models.
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
0
They also describe an application of cotraining to classifying web pages (the two feature sets are the words on the page, and other pages pointing to the page).
Instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
0
Our second contribution is to apply instance weighting at the level of phrase pairs.
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but had translations into a resource-rich language.
0
Since our graph is built from a parallel corpus, we can use standard word alignment techniques to align the English sentences De and their foreign language translations Df. Label propagation in the graph will provide coverage and high recall, and we therefore extract only intersected high-confidence (> 0.9) alignments between De and Df. (Note that many combinations are impossible, giving a PMI value of 0; e.g., when the trigram type and the feature instantiation don’t have words in common.)
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique, their model outperformed previous pipelined, integrated, or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
0
In such cases we use the non-pruned lattice including all (possibly ungrammatical) segmentations, and let the statistics (including OOV) decide.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
i..f,..
Due to the many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation between manual and automatic evaluation metrics.
0
To lower the barrier of entrance to the competition, we provided a complete baseline MT system, along with data resources.
Here we present two algorithms.
0
.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
(Blum and Mitchell 98) go on to give PAC results for learning in the cotraining case.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
This section describes an algorithm based on boosting algorithms, which were previously developed for supervised machine learning problems.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using a caseframe network for anaphora resolution, and used information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
If evidence indicates that hypotheses C and D are less likely than hypotheses A and B, then probabilities are redistributed to reflect the fact that {A, B} is more likely to contain the answer than {C, D}.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
However, as we have noted, nothing inherent in the approach precludes incorporating higher-order constraints, provided they can be effectively modeled within a finite-state framework.
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
0
We call this approach parse hybridization.
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective treebanks is often neglected because of the relative scarcity of problematic constructions.
0
For Experiment 1 it is meaningless as a baseline, since it would result in 0% accuracy. ...information on path labels but drop the information about the syntactic head of the lifted arc, using the label d↑ instead of d↑h (AuxP↑ instead of AuxP↑Sb).
The resulting model is compact, efficiently learnable and linguistically expressive.
0
However, in existing systems, this expansion comes with a steep increase in model complexity.
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
0
However, supervised methods rely on labeled training data, which is time-consuming and expensive to generate.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
This suggests a strategy: run interpolation search until the range narrows to 4096 or fewer entries, then switch to binary search.
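A sketch of the hybrid strategy described above, assuming a sorted array of integer keys; the 4096 threshold comes from the sentence itself, and everything else (names, array contents) is illustrative rather than the library's actual code.

import bisect

def hybrid_search(arr, key, threshold=4096):
    """Interpolation search until the candidate range has at most `threshold`
    entries, then binary search within that range; returns index or -1."""
    lo, hi = 0, len(arr) - 1
    while hi - lo > threshold and arr[lo] != arr[hi]:
        # Interpolation step: guess the position from the key's relative magnitude.
        mid = lo + (key - arr[lo]) * (hi - lo) // (arr[hi] - arr[lo])
        mid = max(lo, min(hi, mid))
        if arr[mid] < key:
            lo = mid + 1
        elif arr[mid] > key:
            hi = mid - 1
        else:
            return mid
        if lo > hi:
            return -1
    i = bisect.bisect_left(arr, key, lo, hi + 1)   # binary search on the narrowed range
    return i if i <= hi and arr[i] == key else -1

print(hybrid_search(list(range(0, 100000, 3)), 4242))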
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
96 75.
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.
0
Since our graph is built from a parallel corpus, we can use standard word alignment techniques to align the English sentences De and their foreign language translations Df. Label propagation in the graph will provide coverage and high recall, and we therefore extract only intersected high-confidence (> 0.9) alignments between De and Df. (Note that many combinations are impossible, giving a PMI value of 0; e.g., when the trigram type and the feature instantiation don’t have words in common.)
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
The index in this array is the vocabulary identifier.
They have made use of local and global features to deal with the instances of the same token in a document.
0
If any of the tokens from to is in Person-Prefix- List, then another feature Person-Prefix is set to 1.
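A toy illustration of this kind of binary feature; the prefix list below is a made-up stand-in for illustration, not the authors' actual Person-Prefix-List resource.

PERSON_PREFIX_LIST = {"mr", "mr.", "mrs", "ms", "dr", "prof"}

def person_prefix_feature(window_tokens):
    """Return 1 if any token in the window is in the prefix list, else 0."""
    return int(any(tok.lower() in PERSON_PREFIX_LIST for tok in window_tokens))

print(person_prefix_feature(["Dr", "Smith"]))   # -> 1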
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair, and the MAP-smoothed relative-frequency estimates were combined linearly with relative-frequency estimates from an in-domain phrase table.
0
For developers of Statistical Machine Translation (SMT) systems, an additional complication is the heterogeneous nature of SMT components (word-alignment model, language model, translation model, etc.).
All the texts were annotated by two people.
0
For all these annotation tasks, Götze developed a series of questions (essentially a decision tree) designed to lead the annotator to the appropriate judgement.
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
0
Evaluation of links A link between two sets is considered correct if the majority of phrases in both sets have the same meaning, i.e. if the link indicates paraphrase.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
The result of this is shown in Figure 7.
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not.
0
However, for multinomial models like our LMs and TMs, there is a one-to-one correspondence between instances and features, e.g. the correspondence between a phrase pair (s, t) and its conditional multinomial probability p(s|t).