source_text: string (length 27–368)
label: int64 (0 or 1)
target_text: string (length 1–5.38k)
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
AdaBoost.MH can be applied to the problem using these pseudolabels in place of supervised examples.
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
Recent research has demonstrated that incorporating this sparsity constraint improves tagging accuracy.
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.
0
Compared to last year’s shared task, the participants represent more long-term research efforts.
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
0
Clearly, retaining the original frequencies is important for good performance, and globally smoothing the final weighted frequencies is crucial.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement.
0
Email: rlls@bell-labs.
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.
0
For all languages, the vocabulary sizes increase by several thousand words.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
Email: rlls@bell-labs.
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and using a simpler training procedure.
0
Each out-of-domain phrase pair is characterized by a set of simple features intended to reflect how useful it will be.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
To investigate the influence of these factors, we analyze Modern Standard Arabic (henceforth MSA, or simply “Arabic”) because of the unusual opportunity it presents for comparison to English parsing results.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement.
0
Again, famous place names will most likely be found in the dictionary, but less well-known names, such as bu4lang3-shi4wei2-ke4 'Brunswick' (as in the New Jersey town name 'New Brunswick'), will not generally be found.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
Finally, this effort is part of a much larger program that we are undertaking to develop stochastic finite-state methods for text analysis with applications to TTS and other areas; in the final section of this paper we will briefly discuss this larger program so as to situate the work discussed here in a broader context.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
Following the system devised under the Qing emperor Kang Xi, hanzi have traditionally been classified according to a set of approximately 200 semantic radicals; members of a radical class share a particular structural component, and often also share a common meaning (hence the term 'semantic').
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
0
The combining technique must act as a multi-position switch indicating which parser should be trusted for the particular sentence.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
We make the assumption that for each example, both x1,i and x2,i alone are sufficient to determine the label yi.
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
Participants were also provided with two sets of 2,000 sentences of parallel text to be used for system development and tuning.
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
0
In order to pass these constraints onto the parser, the lexical rules in the grammar are of the form pi → (si, pi). Parameter Estimation: The grammar probabilities are estimated from the corpus using simple relative frequency estimates.
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
27 80.
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.
This paper presents research in the area of automatic paraphrase discovery.
0
Next, for each pair of NE categories, we collect all the contexts and find the keywords which are topical for that NE category pair.
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
For example, BABAR learned that agents that “assassinate” or “investigate a cause” are usually humans or groups (i.e., organizations).
They have made use of local and global features to deal with the instances of the same token in a document.
0
In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995). Machine learning systems in MUC6 and MUC7 achieved accuracy comparable to rule-based systems on the named entity task.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
In addition to the optimizations specific to each data structure described in Section 2, we implement several general optimizations for language modeling.
The model draws on various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
Personal names such as zhou1en1-lai2 'Zhou Enlai'.
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
0
If the parse contains productions from outside our grammar the machine has no direct method for handling them (e.g. the resulting database query may be syntactically malformed).
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
IRSTLM 5.60.02 (Federico et al., 2008) is a sorted trie implementation designed for lower memory consumption.
This paper presents a maximum entropy-based named entity recognizer (NER).
0
This group contains a large number of features (one for each token string present in the training data).
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
Because of this, we retokenized and lowercased submitted output with our own tokenizer, which was also used to prepare the training and test data.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
In Section 6 we discuss other issues relating to how higher-order language models could be incorporated into the model.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
Input: {(x1,i, x2,i)}. Initialize: ∀i, j: g0j(xj,i) = 0.
Combining multiple highly-accurate independent parsers yields promising results.
0
The estimation of the probabilities in the model is carried out as shown in Equation 4.
Their results show that their high-performance NER uses less training data than other systems.
0
Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance.
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
We outlined the definition of a family of constrained grammatical formalisms, called Linear Context-Free Rewriting Systems.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement.
0
Morphologically derived words such as xue2sheng1+men0.
This paper presents research in the area of automatic paraphrase discovery.
0
We then gather all phrases with the same keyword.
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform.
0
The predominant focus of building systems that translate into English has so far ignored the difficult issues of generating rich morphology which may not be determined solely by local context.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
Future work along these lines will incorporate other layers of annotation, in particular the syntax information.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
Benchmarks use the package’s binary format; our code is also the fastest at building a binary file.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
Queries detect the invalid probability, using the node only if it leads to a longer match.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using a caseframe network for anaphora resolution, and used information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
Their work used subject-verb, verb-object, and adjective-noun relations to compare the contexts surrounding an anaphor and candidate.
Explanations for this phenomenon are the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation.
0
32 81.
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
Here, we process only full-form words within the translation procedure.
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
0
More recently, Riloff and Jones (1999) describe a method they term "mutual bootstrapping" for simultaneously constructing a lexicon and contextual extraction patterns.
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
Unbounded dependencies between branches are not possible in such a system.
Here we present two algorithms.
0
More recently, Riloff and Jones (1999) describe a method they term "mutual bootstrapping" for simultaneously constructing a lexicon and contextual extraction patterns.
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
The following context-free production captures the derivation step of the grammar shown in Figure 7, in which the trees in the auxiliary tree set are adjoined into themselves at the root node (address c).
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
Figure 2 shows examples of lexical expectations that were learned for both domains.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
Words and punctuation that appear in brackets are considered optional.
These clusters are computed using an SVD variant without relying on transitional structure.
0
We experiment with four values for each hyperparameter, resulting in 16 (α, β) combinations: α ∈ {0.001, 0.01, 0.1, 1.0} and β ∈ {0.01, 0.1, 1.0, 10}. Iterations: In each run, we performed 30 iterations of Gibbs sampling for the type assignment variables W. We use the final sample for evaluation.
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
Specifically, for both settings we report results on the median run for each setting.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
Table 3 shows BABAR’s performance when the four contextual role knowledge sources are added.
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications.
0
For these models we limit the options provided for OOV words by not considering the entire token as a valid segmentation in case at least some prefix segmentation exists.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
This latter evaluation compares the performance of the system with that of several human judges since, as we shall show, even people do not agree on a single correct way to segment a text.
The instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
0
We model po(s|t) using a MAP criterion over weighted phrase-pair counts; from the similarity to (5), assuming γ = 0, we see that wλ(s, t) can be interpreted as approximating pf(s, t)/po(s, t).
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language.
0
The number of latent HMM states for each language in our experiments was set to the number of fine tags in the language’s treebank.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
The out-of-domain test set differs from the Europarl data in various ways.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
Training under this model involves estimation of parameter values for P(y), P(m) and P(x | y).
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers.
0
If enough parsers suggest that a particular constituent belongs in the parse, we include it.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
AdaBoost finds a weighted combination of simple (weak) classifiers, where the weights are chosen to minimize a function that bounds the classification error on a set of training examples.
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages.
0
To optimize this function, we used L-BFGS, a quasi-Newton method (Liu and Nocedal, 1989).
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
hanzi in the various name positions, derived from a million names.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
Since all long sentence translations are somewhat muddled, even a contrastive evaluation between systems was difficult.
The AdaBoost algorithm was developed for supervised learning.
0
In a fully supervised setting, the task is to learn a function f such that for all i = 1...m, f(x1,i, x2,i) = yi.
They have made use of local and global features to deal with the instances of the same token in a document.
0
MENE has only been tested on MUC7.
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
(2010), we adopt a simpler naïve Bayes strategy, where all features are emitted independently.
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
0
This is a unique object for which we are able to define a proper probability model.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
Contextual role knowledge provides evidence as to whether a candidate is a plausible antecedent for an anaphor.
These clusters are computed using an SVD variant without relying on transitional structure.
0
We also report word type level accuracy, the fraction of word types assigned their majority tag (where the mapping between model state and tag is determined by the greedy one-to-one mapping discussed above). For each language, we aggregate results in the following way: first, for each hyperparameter setting, evaluate three variants. (Typically, the performance stabilizes after only 10 iterations.)
All the texts were annotated by two people.
0
Section 4 draws some conclusions from the present state of the effort.
There is no global pruning.
0
In the approach described in (Berger et al., 1996), a morphological analysis is carried out and word morphemes rather than full-form words are used during the search.
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
0
We include a constituent in our hypothesized parse if it appears in the output of a majority of the parsers.
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, and Spanish texts to English and back.
0
The human judges were presented with the following definition of adequacy and fluency, but no additional instructions:
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.
0
Making the ten judgements (2 types for 5 systems) takes on average 2 minutes.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
A morpheme, on the other hand, usually corresponds to a unique hanzi, though there are a few cases where variant forms are found.
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
Gan's solution depends upon a fairly sophisticated language model that attempts to find valid syntactic, semantic, and lexical relations between objects of various linguistic types (hanzi, words, phrases).
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
30 16.
This paper talks about Unsupervised Models for Named Entity Classification.
0
More recently, Riloff and Jones (1999) describe a method they term "mutual bootstrapping" for simultaneously constructing a lexicon and contextual extraction patterns.
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform.
0
The normalization on a per-judge basis gave very similar ranking, only slightly less consistent with the ranking from the pairwise comparisons.
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
We therefore also normalized judgements on a per-sentence basis.
This paper presents research in the area of automatic paraphrase discovery.
0
In Figure 4, reverse relations are indicated by '*' next to the frequency.
They focused on phrases which connect two Named Entities, and proceeded in two stages.
0
3.3 Evaluation Results.
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table.
0
This extends previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and using a simpler training procedure.
There is no global pruning.
0
The Verbmobil task is an appointment scheduling task.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
We developed a first version of annotation guidelines for co-reference in PCC (Gross 2003), which served as basis for annotating the core corpus but have not been empirically evaluated for inter-annotator agreement yet.
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers.
0
The difference in precision between similarity and Bayes switching techniques is significant, but the difference in recall is not.
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems.
0
The easiest language pair according to BLEU (English-French: 28.33) received worse manual scores than the hardest (English-German: 14.01).
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
For example, if X and Y are coreferent, then both X and Y are considered to co-occur with the caseframe that extracts X as well as the caseframe that extracts Y. We will refer to the set of nouns that co-occur with a caseframe as the lexical expectations of the caseframe.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
These models generally use less memory than ours but are much slower, even when cached.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
Annotators argued for the importance of having correct and even multiple references.
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.
0
We introduce several new ideas.
This paper discusses the Potsdam Commentary Corpus, a corpus of German commentaries assembled by Potsdam University.
0
We respond to this on the one hand with a format for its underspecification (see 2.4) and on the other hand with an additional level of annotation that attends only to connectives and their scopes (see 2.5), which is intended as an intermediate step on the long road towards a systematic and objective treatment of rhetorical structure.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using a caseframe network for anaphora resolution, and used information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
Their work used subject-verb, verb-object, and adjective-noun relations to compare the contexts surrounding an anaphor and candidate.
There are clustering approaches that assign a single POS tag to each word type.
0
0 70.9 42.
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
First, we parsed the training corpus, collected all the noun phrases, and looked up each head noun in WordNet (Miller, 1990).
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
0
Precision is the portion of hypothesized constituents that are correct and recall is the portion of the Treebank constituents that are hypothesized.
The authors show that PATB is similar to other treebanks but that annotation consistency remains low.
0
To combat the proliferation of parsing edges, we prune the lattices according to a hand-constructed lexicon of 31 clitics listed in the ATB annotation guidelines (Maamouri et al., 2009a).
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
.
BABAR's performance in both the terrorism and natural disaster domains, and its use of contextual-role knowledge for pronouns, showed successful results.
0
We adopted the MUC6 guidelines for evaluating coreference relationships based on transitivity in anaphoric chains.
Combining multiple highly-accurate independent parsers yields promising results.
0
For example, one parser could be more accurate at predicting noun phrases than the other parsers.
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
For brevity we omit the segments from the analysis, and so analysis of the form “fmnh” as f/REL mnh/VB is represented simply as REL VB.
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
In contrast to results reported in Johnson (2007), we found that the performance of our Gibbs sampler on the basic 1TW model stabilized very quickly after about 10 full iterations. All of the probabilities on the right-hand side of P(Ti | T−i, β) ∏(f,v)∈Wi P(v | Ti, f, W−i, T−i, β) are Dirichlet distributions, which can be computed analytically given counts.