source_text (string, lengths 27–368) | label (int64, values 0–1) | target_text (string, lengths 1–5.38k) |
---|---|---|
This paper talks about Pseudo-Projective Dependency Parsing.
| 0 |
The increase is generally higher for PDT than for DDT, which indicates a greater diversity in non-projective constructions.
|
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.
| 0 |
Sentence pairs are the natural instances for SMT, but sentences often contain a mix of domain-specific and general language.
|
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined both by how similar to it they appear to be and by whether they belong to general language or not.
| 0 |
It is difficult to directly compare the Matsoukas et al. results with ours, since our out-of-domain corpus is homogeneous; given heterogeneous training data, however, it would be trivial to include Matsoukas-style identity features in our instance-weighting model.
|
The AdaBoost algorithm was developed for supervised learning.
| 0 |
Unfortunately, modifying the model to account for these kinds of dependencies is not at all straightforward.
|
BABAR showed successful results in both the terrorism and natural disaster domains, and contextual-role knowledge proved especially helpful for resolving pronouns.
| 0 |
1.
|
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
| 0 |
Mi(c) is a binary function returning t (true) when parser i (from among the k parsers) suggests constituent c should be in the parse.
|
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
| 0 |
In our particular case the majority requires the agreement of only two parsers because we have only three.
|
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
| 0 |
Table 4 shows translation results for the three approaches.
|
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
| 0 |
We have not yet tried this.
|
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
| 0 |
The TRIE model continues to use the least memory of ing (-P) with MAP POPULATE, the default.
|
There are clustering approaches that assign a single POS tag to each word type.
| 0 |
The system of Berg-Kirkpatrick et al.
|
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
| 0 |
Similar advances have been made in machine translation (Frederking and Nirenburg, 1994), speech recognition (Fiscus, 1997) and named entity recognition (Borthwick et al., 1998).
|
BABAR showed successful results in both the terrorism and natural disaster domains, and contextual-role knowledge proved especially helpful for resolving pronouns.
| 0 |
For example, both a chair and a truck would be labeled as artifacts, but this does not at all suggest that they are coreferent.
|
It is probably the first analysis of Arabic parsing of this kind.
| 0 |
The errors shown are from the Berkeley parser output, but they are representative of the other two parsing models.
|
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
| 0 |
The breakdown of the different types of words found by ST in the test corpus is given in Table 3.
|
They showed that it was useful to abstract away from the details of the formalism and examine the nature of the derivation process as reflected by properties of the trees, finding that several of the formalisms considered can be seen as closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
| 0 |
These systems are similar to those described by Pollard (1984) as Generalized Context-Free Grammars (GCFG's).
|
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.
| 0 |
Note that because we extracted only high-confidence alignments, many foreign vertices will not be connected to any English vertices.
|
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
| 0 |
This paper does not necessarily reflect the position of the U.S. Government.
|
Their results show that their high-performance NER uses less training data than other systems.
| 0 |
This group of features attempts to capture such information.
|
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language.
| 0 |
Because the information flow in our graph is asymmetric (from English to the foreign language), we use different types of vertices for each language.
|
They showed that it was useful to abstract away from the details of the formalism and examine the nature of the derivation process as reflected by properties of the trees, finding that several of the formalisms considered can be seen as closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
| 0 |
Although embedding this version of LCFRS's in the framework of ILFP developed by Rounds (1985) is straightforward, our motivation was to capture properties shared by a family of grammatical systems and generalize them, defining a class of related formalisms.
|
In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
| 0 |
We applied the AutoSlog system (Riloff, 1996) to our unannotated training texts to generate a set of extraction patterns for each domain.
|
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
| 0 |
However, they list two sets, one consisting of 28 fragments and the other of 22 fragments, in which they had 0% recall and precision.
|
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
| 0 |
Instead, the designs of the various annotation layers and the actual annotation work are the results of a series of diploma theses, of students' work in course projects, and to some extent of paid assistantships.
|
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
| 0 |
The breakdown of the different types of words found by ST in the test corpus is given in Table 3.
|
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures.
| 0 |
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech.
|
They have made use of local and global features to deal with the instances of the same token in a document.
| 0 |
For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.
|
This paper talks about Unsupervised Models for Named Entity Classification.
| 0 |
(3)), with one term for each classifier.
|
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
| 0 |
In the case of, the most common usage is as an adverb with the pronunciation jiang1, so that variant is assigned the estimated cost of 5.98, and a high cost is assigned to nominal usage with the pronunciation jiang4.
|
The use of global features has shown excellent results in the performance on MUC-6 and MUC-7 test data.
| 0 |
â).
|
The authors show that the PATB is similar to other treebanks but that annotation consistency remains low.
| 0 |
[F1 scores: Berkeley 85, Stanford 80.]
|
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
| 0 |
Big 5 is the most popular Chinese character coding standard in use in Taiwan and Hong Kong.
|
BABAR showed successful results in both the terrorism and natural disaster domains, and contextual-role knowledge proved especially helpful for resolving pronouns.
| 0 |
Using this heuristic, BABAR identifies existential definite NPs in the training corpus using our previous learning algorithm (Bean and Riloff, 1999) and resolves all occurrences of the same existential NP with each other. (2.1.2 Syntactic Seeding) BABAR also uses syntactic heuristics to identify anaphors and antecedents that can be easily resolved.
|
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRS's), and showed they were recognizable in polynomial time and generated only semilinear languages.
| 0 |
The similarities become apparent when they are studied at the level of derivation structures: derivation tree sets of CFG's, HG's, TAG's, and MCTAG's are all local sets.
|
They showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the interhuman agreement.
| 0 |
The simplest version of the maximum matching algorithm effectively deals with ambiguity by ignoring it, since the method is guaranteed to produce only one segmentation.
|
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
| 0 |
This feature ft incorporates information from the smoothed graph and prunes hidden states that are inconsistent with the thresholded vector tx.
|
This corpus has several advantages: it is annotated at different levels.
| 0 |
Indeed there are several open issues.
|
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
| 0 |
This design does not guarantee "structural zeros," but biases towards sparsity.
|
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
| 0 |
encodes the one tag per word constraint and is uniform over type-level tag assignments.
|
While many systems had similar performance, the results offered interesting insights, especially about the relative performance of statistical and rule-based systems.
| 0 |
We can check what the consequences of less manual annotation would have been: with half the number of manual judgements, we can distinguish about 40% of the systems, 10% fewer.
|
This paper talks about Unsupervised Models for Named Entity Classification.
| 0 |
A generative model was applied (similar to naive Bayes) with the three labels as hidden variables on unlabeled examples, and observed variables on (seed) labeled examples.
|
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
| 0 |
The key point is that the second constraint can be remarkably powerful in reducing the complexity of the learning problem.
|
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
| 0 |
Previous reports on Chinese segmentation have invariably cited performance either in terms of a single percent-correct score, or else a single precision-recall pair.
|
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
| 0 |
The fraction of buckets that are empty is (m−1)/m, so average lookup time is O(m/(m−1)) and, crucially, constant in the number of entries.
|
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
| 0 |
Hyperparameter settings are sorted according to the median one-to-one metric over runs.
|
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
| 0 |
There has been additional recent work on inducing lexicons or other knowledge sources from large corpora.
|
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
| 0 |
Without using the same test corpus, direct comparison is obviously difficult; fortunately, Chang et al. include a list of about 60 sentence fragments that exemplify various categories of performance for their system.
|
We chose one of four labels for each example: location, person, organization, or noise, where the noise category was used for items that were outside the three categories.
| 0 |
To measure the contribution of each modification, a third, intermediate algorithm, Yarowsky-cautious was also tested.
|
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
| 0 |
If two systems’ scores are close, this may simply be a random effect in the test data.
|
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
| 0 |
We are especially grateful to Taylor Berg-Kirkpatrick for running additional experiments.
|
Most IE researchers have been creating paraphrase knowledge by hand and for specific tasks.
| 0 |
Figure 3 Figure 1.
|
The resulting model is compact, efficiently learnable and linguistically expressive.
| 0 |
Evaluation Metrics We report three metrics to evaluate tagging performance.
|
This paper talks about Unsupervised Models for Named Entity Classification.
| 0 |
Put another way, the minimum of Equ.
|
The corpus was annotated with different linguistic information.
| 0 |
Besides information structure, the second main goal is to enhance current models of rhetorical structure.
|
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
| 0 |
4.5 Transliterations of Foreign Words.
|
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
| 0 |
Here we propose a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
|
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
| 0 |
[Figure 6: Plot of log frequency of base noun against log frequency of plural nouns; log(F)_base: R² = 0.20 (p < 0.005).]
|
Two general approaches are presented and two combination techniques are described for each approach.
| 0 |
The corpus-based statistical parsing community has many fast and accurate automated parsing systems, including systems produced by Collins (1997), Charniak (1997) and Ratnaparkhi (1997).
|
They have made use of local and global features to deal with the instances of the same token in a document.
| 0 |
[Table 4: Training data, given as no. of articles / no. of tokens: MENERGI 318 / 160,000 (MUC-6) and 200 / 180,000 (MUC-7); IdentiFinder 650,000 and 790,000 tokens; MENE 350 / 321,000 (MUC-7).] For MUC-6, the reduction in error due to global features is 27%, and for MUC-7, 14%.
|
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined both by how similar to it they appear to be and by whether they belong to general language or not.
| 0 |
We used a standard one-pass phrase-based system (Koehn et al., 2003), with the following features: relative-frequency TM probabilities in both directions; a 4-gram LM with Kneser-Ney smoothing; word-displacement distortion model; and word count.
|
There is no global pruning.
| 0 |
In German, the verbgroup usually consists of a left and a right verbal brace, whereas in English the words of the verbgroup usually form a sequence of consecutive words.
|
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
| 0 |
If one system is better in 95% of the sample sets, we conclude that its higher BLEU score is statistically significantly better.
|
They have made use of local and global features to deal with the instances of the same token in a document.
| 0 |
If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.
|
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
| 0 |
Depending on the threshold t0, the search algorithm may miss the globally optimal path which typically results in additional translation errors.
|
The second algorithm builds on a boosting algorithm called AdaBoost.
| 0 |
Given a sufficient number of randomly drawn unlabeled examples (i.e., edges), we will induce two completely connected components that together span the entire graph.
|
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
| 0 |
We tagged each noun with the top-level semantic classes assigned to it in WordNet.
|
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
| 0 |
The natural language processing community is in the strong position of having many available approaches to solving some of its most fundamental problems.
|
Combining multiple highly-accurate independent parsers yields promising results.
| 0 |
The computation of P(π(c) = 1 | M_1(c), ..., M_k(c)) has been sketched before in Equations 1 through 4.
|
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
| 0 |
Almost all annotators reported difficulties in maintaining a consistent standard for fluency and adequacy judgements, but nevertheless most did not explicitly move towards a ranking-based evaluation.
|
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language.
| 0 |
Naseem et al. (2009) and Snyder et al.
|
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
| 0 |
1 is given in Fig.
|
This paper talks about KenLM: Faster and Smaller Language Model Queries.
| 0 |
Storing state therefore becomes a time-space tradeoff; for example, we store state with partial hypotheses in Moses but not with each phrase.
|
They plan on extending instance-weighting to other standard SMT components and capture the degree of generality of phrase pairs.
| 0 |
We used it to score all phrase pairs in the OUT table, in order to provide a feature for the instance-weighting model.
|
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
| 0 |
We adopted the MUC6 guidelines for evaluating coreference relationships based on transitivity in anaphoric chains.
|
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
| 0 |
Such analyzers propose multiple segmentation possibilities and their corresponding analyses for a token in isolation but have no means to determine the most likely ones.
|
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure.
| 0 |
Previous approaches have tried to find examples that are similar to the target domain.
|
Here we present two algorithms.
| 0 |
The distribution specifies the relative weight, or importance, of each example — typically, the weak learner will attempt to minimize the weighted error on the training set, where the distribution specifies the weights.
|
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
| 0 |
In our second model GTvpi we also distinguished finite and non-finite verbs and VPs. [Footnote 10: Lattice parsing can be performed by special initialization of the chart in a CKY parser (Chappelier et al., 1999).]
|
This corpus has several advantages: it is annotated at different levels.
| 0 |
Text generation, or at least the two phases of text planning and sentence planning, is a process driven partly by well-motivated choices (e.g., use this lexeme X rather than that more colloquial near-synonym Y) and partly by conventions. An annotation like that of PCC can be exploited to look for correlations, in particular between syntactic structure, choice of referring expressions, and sentence-internal information structure.
|
This paper presents research in the area of automatic paraphrase discovery.
| 0 |
We propose an unsupervised method to discover paraphrases from a large untagged corpus, without requiring any seed phrase or other cue.
|
The AdaBoost algorithm was developed for supervised learning.
| 0 |
Denote by g^j(x) = Σ_t α_t h_t(x), j ∈ {1, 2}, the unthresholded strong-hypothesis (i.e., f^j(x) = sign(g^j(x))).
|
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
| 0 |
In Eq.
|
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
| 0 |
Because of this threshold, very few NE instance pairs could be used and hence the variety of phrases was also limited.
|
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
| 0 |
47 78.
|
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
| 0 |
Available at http://nlp.stanford.edu/projects/arabic.shtml.
|
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
| 0 |
The Expectation Maximization (EM) algorithm (Dempster, Laird and Rubin 77) is a common approach for unsupervised training; in this section we describe its application to the named entity problem.
|
This paper talks about Unsupervised Models for Named Entity Classification.
| 0 |
In addition to the named-entity string (Maury Cooper or Georgia), a contextual predictor was also extracted.
|
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers.
| 0 |
It is also true of the adaptation of the Collins parser for Czech (Collins et al., 1999) and the finite-state dependency parser for Turkish by Oflazer (2003).
|
A beam search concept is applied as in speech recognition.
| 0 |
What is important and is not expressed by the notation is the so-called coverage constraint: each source position j should be 'hit' exactly once by the path of the inverted alignment b_1^I = b_1 ... b_i ... b_I. Using the inverted alignments in the maximum approximation, we obtain as search criterion:

max_I ( p(J | I) · max_{e_1^I} ( ∏_{i=1}^I p(e_i | e_{i-1}, e_{i-2}) · max_{b_1^I} ∏_{i=1}^I [ p(b_i | b_{i-1}, I, J) · p(f_{b_i} | e_i) ] ) )
= max_I ( p(J | I) · max_{e_1^I, b_1^I} ∏_{i=1}^I p(e_i | e_{i-1}, e_{i-2}) · p(b_i | b_{i-1}, I, J) · p(f_{b_i} | e_i) ),

where the two products over i have been merged into a single product over i; p(e_i | e_{i-1}, e_{i-2}) is the trigram language model probability.
|
They showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the interhuman agreement.
| 0 |
The breakdown of the different types of words found by ST in the test corpus is given in Table 3.
|
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
| 0 |
Wu and Fung introduce an evaluation method they call nk-blind.
|
The manual evaluation of scoring translations on a graded scale from 1 to 5 seemed to be very hard to perform.
| 0 |
The evaluation framework for the shared task is similar to the one used in last year’s shared task.
|
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
| 0 |
1 2 3.
|
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
| 0 |
Step 4.
|
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table.
| 0 |
This is appropriate in cases where it is sanctioned by Bayes’ law, such as multiplying LM and TM probabilities, but for adaptation a more suitable framework is often a mixture model in which each event may be generated from some domain.
|
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
| 0 |
The use of the Good-Turing equation presumes suitable estimates of the unknown expectations it requires.
|
Human judges also pointed out difficulties with the evaluation of long sentences.
| 0 |
Annotators argued for the importance of having correct and even multiple references.
|
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table.
| 0 |
Mixing, smoothing, and instance-feature weights are learned at the same time using an efficient maximum-likelihood procedure that relies on only a small in-domain development corpus.
|
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
| 0 |
Taking only the highest frequency rules is much "safer", as they tend to be very accurate.
|
They have made use of local and global features to deal with the instances of the same token in a document.
| 0 |
Month Names, Days of the Week, and Numbers: If is initCaps and is one of January, February, . . .
|
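The rows above pair a summary-style claim (source_text) with a candidate sentence from the underlying paper (target_text), plus a binary label. Below is a minimal sketch of loading and inspecting a dataset with this schema using the Hugging Face `datasets` library; the dataset ID is a placeholder rather than the actual hub path, and the semantics of label 1 vs. 0 are an assumption, not documented in this preview.

```python
# Minimal sketch: load and inspect a dataset with the schema shown above
# (source_text: string, label: int64 in {0, 1}, target_text: string).
# "user/dataset-name" is a hypothetical ID; substitute the real hub path.
from datasets import load_dataset

ds = load_dataset("user/dataset-name", split="train")

# Each row is a dict keyed by the column names; print a few rows.
for row in ds.select(range(3)):
    print(row["label"], "|", row["source_text"])
    print("   ->", row["target_text"][:100])

# Filter to one class, e.g., the label-0 rows shown in this preview.
negatives = ds.filter(lambda r: r["label"] == 0)
print(len(negatives), "rows with label 0")
```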