source_text (stringlengths 27–368) | label (int64, 0–1) | target_text (stringlengths 1–5.38k) |
---|---|---|
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints. | 0 | (2010) consistently outperforms ours on English, we obtain substantial gains across other languages. |
In this paper, the authors argue that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | 0 57.2 43. |
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints. | 0 | The extent to which this constraint is enforced varies greatly across existing methods. |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | 18 77. |
Their results show that their high-performance NER uses less training data than other systems. | 0 | For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams. |
It is probably the first analysis of Arabic parsing of this kind. | 0 | For alif with hamza, normalization can be seen as another level of devocalization. |
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed both in a pipeline and jointly with parsing. | 0 | The intuition here is that the role of a discourse marker can usually be de 9 Both the corpus split and pre-processing code are avail-. |
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform. | 0 | The differences in difficulty are better reflected in the BLEU scores than in the raw un-normalized manual judgements. |
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models. | 0 | The segmentation chosen is the best path through the WFST, shown in (d). |
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the inter-human agreement. | 0 | The model described here thus demonstrates great potential for use in widespread applications. |
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform. | 0 | The scores and confidence intervals are detailed first in the Figures 7–10 in table form (including ranks), and then in graphical form in Figures 11–16. |
This paper discusses the Potsdam Commentary Corpus, a German corpus assembled by Potsdam University. | 0 | That is, we can use the discourse parser on PCC texts, emulating for instance a “co-reference oracle” that adds the information from our co-reference annotations. |
The use of global features has shown excellent results on MUC-6 and MUC-7 test data. | 0 | Mikheev et al. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | And time is short. |
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | (Other classes handled by the current system are discussed in Section 5.) |
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs. | 0 | The judgements tend to be done more in form of a ranking of the different systems. |
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, and Spanish texts to English and back. | 0 | A few pointed out that adequacy should be broken up into two criteria: (a) are all source words covered? |
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse. | 0 | BABAR merely identifies caseframes that frequently co-occur in coreference resolutions. |
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | An example of a fairly low-level relation is the affix relation, which holds between a stem morpheme and an affix morpheme, such as f1 -menD (PL). |
It is probably the first analysis of Arabic parsing of this kind. | 0 | 58 95. |
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform. | 0 | While we used the standard metrics of the community, the way we presented translations and prompted for assessment differed from other evaluation campaigns. |
The approach assumes that the word reordering is restricted to a few positions in the source sentence. | 0 | The effect of the pruning threshold t0 is shown in Table 5. |
Vijay-Shankar et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. | 0 | For example, CFG's cannot produce trees of the form shown in Figure 1 in which there are nested dependencies between S and NP nodes appearing on the spine of the tree. |
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines. | 0 | The 8 similarity-to-IN features are based on word frequencies and scores from various models trained on the IN corpus: To avoid numerical problems, each feature was normalized by subtracting its mean and dividing by its standard deviation. |
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming. | 0 | 10. |
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech. | 0 | For Experiment 1 it is meaningless as a baseline, since it would result in 0% accuracy. mation on path labels but drop the information about the syntactic head of the lifted arc, using the label d↑ instead of d↑h (AuxP↑ instead of AuxP↑Sb). |
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | Say we find one system doing better on 20 of the blocks and worse on 80 of the blocks: is it significantly worse? |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | However, since we extracted the test corpus automatically from web sources, the reference translation was not always accurate — due to sentence alignment errors, or because translators did not adhere to a strict sentence-by-sentence translation (say, using pronouns when referring to entities mentioned in the previous sentence). |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, and proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | The correct resolution in sentence (c) depends on knowledge that kidnappers frequently blindfold their victims. |
They have made use of local and global features to deal with instances of the same token in a document. | 0 | A secondary reference resolution classifier has information on the class assigned by the primary classifier. |
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language. | 0 | However, supervised methods rely on labeled training data, which is time-consuming and expensive to generate. |
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed both in a pipeline and jointly with parsing. | 0 | Figure 4 shows a constituent headed by a process nominal with an embedded adjective phrase. |
Finally, several coreference systems have successfully incorporated anaphoricity determination modules. | 0 | BABAR uses information extraction patterns to identify contextual roles and creates four contextual role knowledge sources using unsupervised learning. |
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure. | 0 | This combination generalizes (2) and (3): we use either αt = α to obtain a fixed-weight linear combination, or αt = cI(t)/(cI(t) + β) to obtain a MAP combination. |
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | Generalizing state minimization, the model could also provide explicit bounds on probability for both backward and forward extension. |
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Ma¨rkische Allgemeine Zeitung, a German regional daily. | 0 | In an experiment on automatic rhetorical parsing, the RST-annotations and PoS tags were used by (Reitter 2003) as a training corpus for statistical classification with Support Vector Machines. |
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data. | 0 | All systems (except for Systran, which was not tuned to Europarl) did considerably worse on outof-domain training data. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | [Figure 7: Classical metric multidimensional scaling of the distance matrix, showing the two most significant dimensions (legend: antigreedy, greedy, current method, dict. only; Taiwan vs. Mainland; Dimension 1, 62%).] |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | This PP modifies another NP, whose head is a singular noun. |
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones. | 0 | This supports our main thesis that decisions taken by a single, improved, grammar are beneficial for both tasks. |
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier. | 0 | Equ. |
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language. | 0 | We extend Subramanya et al.’s intuitions to our bilingual setup. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | 0 D/ nc 5.0 The minimal dictionary encoding this information is represented by the WFST in Figure 2(a). |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | Almost all annotators expressed their preference to move to a ranking-based evaluation in the future. |
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | The simplest approach involves scoring the various analyses by costs based on word frequency, and picking the lowest cost path; variants of this approach have been described in Chang, Chen, and Chen (1991) and Chang and Chen (1993). |
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far. | 0 | Given that weights on all outgoing arcs sum up to one, weights induce a probability distribution on the lattice paths. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | An example of a fairly low-level relation is the affix relation, which holds between a stem morpheme and an affix morpheme, such as f1 -menD (PL). |
Explanations for this phenomenon are the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors lead to syntactic disambiguation. | 0 | Modifying the Berkeley parser for Arabic is straightforward. |
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages. | 0 | It falls short of the “Projection” baseline for German, but is statistically indistinguishable in terms of accuracy. |
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed both in a pipeline and jointly with parsing. | 0 | For parsing, this is a mistake, especially in the case of interrogatives. |
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse. | 0 | The semantic caseframe expectations are used in two ways. |
The instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline. | 0 | However, for multinomial models like our LMs and TMs, there is a one-to-one correspondence between instances and features, e.g. the correspondence between a phrase pair (s, t) and its conditional multinomial probability p(s|t). |
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure. | 0 | Discriminative Instance Weighting for Domain Adaptation in Statistical Machine Translation |
This paper conducted research in the area of automatic paraphrase discovery. | 0 | 3.3 Evaluation Results. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, and proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | These systems rely on a training corpus that has been manually annotated with coreference links. |
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously. | 0 | Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task. |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | We settled on contrastive evaluations of 5 system outputs for a single test sentence. |
These clusters are computed using an SVD variant without relying on transitional structure. | 0 | 7 Conclusion and Future Work. |
Explanations for this phenomenon are the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors lead to syntactic disambiguation. | 0 | Unlike the WSJ corpus, which has a high frequency of rules like VP → VB PP, Arabic verb phrases usually have lexicalized intervening nodes (e.g., NP subjects and direct objects). |
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM. | 0 | Vocabulary lookup is a hash table mapping from word to vocabulary index. |
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models. | 0 | These are shown, with their associated costs, as follows: ABj nc 4.0 AB C/jj 6.0 CD /vb 5. |
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure. | 0 | Call the crossing constituents A and B. |
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | Annotators argued for the importance of having correct and even multiple references. |
These clusters are computed using an SVD variant without relying on transitional structure. | 0 | We report results for the best and median hyperparameter settings obtained in this way. |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | With each iteration more examples are assigned labels by both classifiers, while a high level of agreement (> 94%) is maintained between them. |
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results. | 0 | The treebank has two versions, v1.0 and v2.0, containing 5001 and 6501 sentences respectively. |
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | While building a machine translation system is a serious undertaking, in future we hope to attract more newcomers to the field by keeping the barrier of entry as low as possible. |
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier. | 0 | In addition to a heuristic based on decision list learning, we also presented a boosting-like framework that builds on ideas from (Blum and Mitchell 98). |
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance. | 0 | For simplicity, we assume that OUT is homogeneous. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | In this paper we present a stochastic finite-state model for segmenting Chinese text into words, both words found in a (static) lexicon as well as words derived via the above-mentioned productive processes. |
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models. | 0 | A moment's reflection will reveal that things are not quite that simple. |
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution. | 0 | This work was supported in part by the National Science Foundation under grant IRI9704240. |
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be. | 0 | To approximate these baselines, we implemented a very simple sentence selection algorithm in which parallel sentence pairs from OUT are ranked by the perplexity of their target half according to the IN language model. |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | Richer tag sets have been suggested for modeling morphologically complex distinctions (Diab, 2007), but we find that linguistically rich tag sets do not help parsing. |
Their results show that their high-performance NER uses less training data than other systems. | 0 | Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0. |
Replacing this with a ranked evaluation seems to be more suitable. | 0 | Compared to last year’s shared task, the participants represent more long-term research efforts. |
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers. | 0 | We pick the parse that is most similar to the other parses by choosing the one with the highest sum of pairwise similarities. |
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity. | 0 | The second model (+PRIOR) utilizes the independent prior over type-level tag assignments P(T | ψ). |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | This intuition is born out by the experimental results. |
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results. | 0 | We further thank Khalil Simaan (ILLCUvA) for his careful advise concerning the formal details of the proposal. |
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the inter-human agreement. | 0 | This is to allow for fair comparison between the statistical method and GR, which is also purely dictionary-based. |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | 37 79. |
This topic has been getting more attention, driven by the needs of various NLP applications. | 0 | Figure 3 Figure 1. |
They proposed an unsupervised method to discover paraphrases from a large untagged corpus. | 0 | The larger sets are more accurate than the small sets. |
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process. | 0 | (2009), who also incorporate a sparsity constraint, but do so via altering the model objective using posterior regularization. |
The PROBING data structure uses linear probing hash tables and is designed for speed. | 0 | The size of TRIE is particularly sensitive to F1092 c11, so vocabulary filtering is quite effective at reducing model size. |
The second algorithm builds on a boosting algorithm called AdaBoost. | 0 | A generative model was applied (similar to naive Bayes) with the three labels as hidden variables on unlabeled examples, and observed variables on (seed) labeled examples. |
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming. | 0 | The inverted alignment probability p(b_i | b_{i-1}, I, J) and the lexicon probability p(f_{b_i} | e_i) are obtained by relative frequency estimates from the Viterbi alignment path after the final training iteration. |
The instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline. | 0 | Finally, we note that Jiang’s instance-weighting framework is broader than we have presented above, encompassing among other possibilities the use of unlabelled IN data, which is applicable to SMT settings where source-only IN corpora are available. |
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | Church and Hanks [1989]), and we have used lists of character pairs ranked by mutual information to expand our own dictionary. |
The second algorithm builds on a boosting algorithm called AdaBoost. | 0 | For the purposes of EM, the "observed" data is {(x1, y1), ..., (xm, ym), xm+1, ..., xn}, and the hidden data is {ym+1, ..., yn}. |
Replacing this with a ranked evaluation seems to be more suitable. | 0 | Still, for a good number of sentences, we do have this direct comparison, which allows us to apply the sign test, as described in Section 2.2. |
The approach has been successfully tested on the 8 000-word Verbmobil task. | 0 | The computing time, the number of search errors, and the multi-reference WER (mWER) are shown as a function of t0. |
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models. | 0 | This locally normalized log-linear model can look at various aspects of the observation x, incorporating overlapping features of the observation. |
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously. | 0 | As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder3. |
This paper presents a maximum entropy-based named entity recognizer (NER). | 0 | For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles. |
Their results show that their high performance NER use less training data than other systems. | 0 | For example, if is found in the list of person first names, the feature PersonFirstName is set to 1. |
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | word => name 2. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | Personal names such as 周恩来 zhou1 en1-lai2 'Zhou Enlai.' |
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the inter-human agreement. | 0 | can expect famous names like Zhou Enlai's to be in many dictionaries, but names such as :fi lf;f; shi2jil-lin2, the name of the second author of this paper, will not be found in any dictionary. |
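For readers who want to work with rows in this format, below is a minimal Python sketch using the Hugging Face `datasets` library. The repository id `"user/this-dataset"` and the split name are placeholders (assumptions, not taken from this card); the column names and types follow the schema shown above.

```python
# Minimal sketch, assuming the data is published as a Hugging Face dataset.
# "user/this-dataset" is a placeholder repository id, not the real one.
from datasets import load_dataset

dataset = load_dataset("user/this-dataset", split="train")

# Each row follows the schema above:
#   source_text (string), label (int64, 0 or 1), target_text (string)
row = dataset[0]
print(row["source_text"])
print(row["label"])
print(row["target_text"])

# Example: keep only rows labeled 1 (e.g., matching source/target pairs).
positives = dataset.filter(lambda r: r["label"] == 1)
print(f"{len(positives)} of {len(dataset)} rows have label == 1")
```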