source_text (string, 27–368 chars) | label (int64, 0–1) | target_text (string, 1–5.38k chars) |
---|---|---|
Vijay-Shankar et all considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. | 0 | Hence, their relationship to formalisms such as HG's and TAG's is of interest. |
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | An input ABCD can be represented as an FSA as shown in Figure 2(b). |
This paper discusses the Potsdam Commentary Corpus, a corpus of german assembeled by potsdam university. | 0 | The motivation for our more informal approach was the intuition that there are so many open problems in rhetorical analysis (and more so for German than for English; see below) that the main task is qualitative investigation, whereas rigorous quantitative analyses should be performed at a later stage. |
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models. | 0 | However, we do not explore this possibility in the current work. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | so that 'door' would be and in this case the hanzi 7C, does not represent a syllable. |
It also incorporates the Good-Turing methodin estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | The segmentation chosen is the best path through the WFST, shown in (d). |
They focused on phrases which two Named Entities, and proceed in two stages. | 0 | Step 4. |
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | If one system is perfect, another has slight flaws and the third more flaws, a judge is inclined to hand out judgements of 5, 4, and 3. |
Foster et all describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not. | 0 | A final alternate approach would be to combine weighted joint frequencies rather than conditional estimates, ie: cI(s, t) + w,\(s, t)co(, s, t), suitably normalized.5 Such an approach could be simulated by a MAP-style combination in which separate 0(t) values were maintained for each t. This would make the model more powerful, but at the cost of having to learn to downweight OUT separately for each t, which we suspect would require more training data for reliable performance. |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | It was developed in response to the non-terminal/terminal bias of Evalb, but Clegg and Shepherd (2005) showed that it is also a valuable diagnostic tool for trees with complex deep structures such as those found in the ATB. |
The corpus was annoted with different linguitic information. | 0 | Different annotations of the same text are mapped into the same data structure, so that search queries can be formulated across annotation levels. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | Turning now to (1), we have the similar problem that splitting.into.ma3 'horse' andlu4 'way' is more costly than retaining this as one word .ma3lu4 'road.' |
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models. | 0 | As indicated by bolding, for seven out of eight languages the improvements of the “With LP” setting are statistically significant with respect to the other models, including the “No LP” setting.11 Overall, it performs 10.4% better than the hitherto state-of-the-art feature-HMM baseline, and 4.6% better than direct projection, when we macro-average the accuracy over all languages. |
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | For example, as Gan (1994) has noted, one can construct examples where the segmen tation is locally ambiguous but can be determined on the basis of sentential or even discourse context. |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | Almost all annotators reported difficulties in maintaining a consistent standard for fluency and adequacy judgements, but nevertheless most did not explicitly move towards a ranking-based evaluation. |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | The key to the methods we describe is redundancy in the unlabeled data. |
The approach assumes that the word reordering is restricted to a few positions in the source sentence. | 0 | The following two error criteria are used in our experiments: mWER: multi-reference WER: We use the Levenshtein distance between the automatic translation and several reference translations as a measure of the translation errors. |
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). | 0 | Fig. |
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models. | 0 | (b) F.i'JJI! |
It is probably the first analysis of Arabic parsing of this kind. | 0 | VBD she added VP PUNC â SBAR IN NP 0 NN. |
The second algorithm builds on a boosting algorithm called AdaBoost. | 0 | Each vertex within a connected component must have the same label — in the binary classification case, we need a single labeled example to identify which component should get which label. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | For humans, this characteristic can impede the acquisition of literacy. |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | The evaluation framework for the shared task is similar to the one used in last year’s shared task. |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | The orthographic normalization strategy we use is simple.10 In addition to removing all diacritics, we strip instances of taTweel J=J4.i, collapse variants of alif to bare alif,11 and map Ara bic punctuation characters to their Latin equivalents. |
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank. | 0 | The standard measures for evaluating Penn Treebank parsing performance are precision and recall of the predicted constituents. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | 64 76. |
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure. | 0 | Two general approaches are presented and two combination techniques are described for each approach. |
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM. | 0 | However, their inverted variant implements a reverse trie using less CPU and the same amount of memory7. |
Finally, several coreference systems have successfully incorporated anaphoricity determination modules. | 0 | In our coreference resolver, we define θ to be the set of all candidate antecedents for an anaphor. |
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back. | 0 | For the automatic evaluation, we used BLEU, since it is the most established metric in the field. |
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity. | 0 | (2009). |
The use of global features has shown excellent result in the performance on MUC-6 and MUC-7 test data. | 0 | The effect of a second reference resolution classifier is not entirely the same as that of global features. |
The PROBING data structure uses linear probing hash tables and is designed for speed. | 0 | If they knew that the first four words in a hypergraph node would never extend to the left and form a 5-gram, then three or even fewer words could be kept in the backward state. |
Two general approaches are presented and two combination techniques are described for each approach. | 0 | The theory has also been validated empirically. |
Finally, several coreference systems have successfully incorporated anaphoricity determination modules. | 0 | We used the MUC4 terrorism corpus (MUC4 Proceedings, 1992) and news articles from the Reuter’s text collection8 that had a subject code corresponding to natural disasters. |
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models. | 0 | In general, the neighborhoods can be more diverse and we allow a soft label distribution over the vertices. |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | (2) was extended to have an additional, innermost loop over the (3) possible labels. |
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | To quantize, we use the binning method (Federico and Bertoldi, 2006) that sorts values, divides into equally sized bins, and averages within each bin. |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | These models generally outperform our memory consumption but are much slower, even when cached. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | Jud ges A G G R ST M 1 M 2 M 3 T1 T2 T3 AG 0.7 0 0.7 0 0 . 4 3 0.4 2 0.6 0 0.6 0 0.6 2 0.5 9 GR 0.9 9 0 . 6 2 0.6 4 0.7 9 0.8 2 0.8 1 0.7 2 ST 0 . 6 4 0.6 7 0.8 0 0.8 4 0.8 2 0.7 4 M1 0.7 7 0.6 9 0.7 1 0.6 9 0.7 0 M2 0.7 2 0.7 3 0.7 1 0.7 0 M3 0.8 9 0.8 7 0.8 0 T1 0.8 8 0.8 2 T2 0.7 8 respectively, the recall and precision. |
BABAR's performance in both domains of terrorism and natural disaster, and the contextual-role knowledge in pronouns have shown successful results. | 0 | The learned patterns are then normalized and applied to the corpus. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | In this section we present a partial evaluation of the current system, in three parts. |
It also incorporates the Good-Turing methodin estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | (1992). |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | What both of these approaches presume is that there is a sin gle correct segmentation for a sentence, against which an automatic algorithm can be compared. |
BABAR's performance in both domains of terrorism and natural disaster, and the contextual-role knowledge in pronouns have shown successful results. | 0 | 4.1 Corpora. |
In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution. | 0 | Semantic expectations are analogous to lexical expectations except that they represent semantic classes rather than nouns. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | Given names are most commonly two hanzi long, occasionally one hanzi long: there are thus four possible name types, which can be described by a simple set of context-free rewrite rules such as the following: 1. |
They have made use of local and global features to deal with the instances of same token in a document. | 0 | Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages. |
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | Timing is based on plentiful memory. |
Finally, several coreference systems have successfully incorporated anaphoricity determination modules. | 0 | Each set is assigned two values: belief and plausibility. |
It is annotated with several data: morphology, syntax, rhetorical structure, connectors, correference and informative structure. | 0 | While RST (Mann, Thompson 1988) proposed that a single relation hold between adjacent text segments, SDRT (Asher, Lascarides 2003) maintains that multiple relations may hold simultaneously. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | then define the best segmentation to be the cheapest or best path in Id(I) ◦ D* (i.e., Id(I) composed with the transitive closure of D).6 Consider the abstract example illustrated in Figure 2. |
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | Note that it is in precision that our over all performance would appear to be poorer than the reported performance of Chang et al., yet based on their published examples, our system appears to be doing better precisionwise. |
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective tree banks is often neglected because of the relative scarcity of problematic constructions. | 0 | Applying the function PROJECTIVIZE to the graph in Figure 1 yields the graph in Figure 2, where the problematic arc pointing to Z has been lifted from the original head jedna to the ancestor je. |
This paper conducted research in the area of automatic paraphrase discovery. | 0 | The similar explanation applies to the link to the “stake” set. |
Instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline. | 0 | This leads to a linear combination of domain-specific probabilities, with weights in [0, 1], normalized to sum to 1. |
The corpus was annoted with different linguitic information. | 0 | The idea is to have a pipeline of shallow-analysis modules (tagging, chunk- ing, discourse parsing based on connectives) and map the resulting underspecified rhetorical tree (see Section 2.4) into a knowledge base that may contain domain and world knowledge for enriching the representation, e.g., to resolve references that cannot be handled by shallow methods, or to hypothesize coherence relations. |
The use of global features has shown excellent result in the performance on MUC-6 and MUC-7 test data. | 0 | , Sun day, then the feature DayOfTheWeek is set to 1. |
In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution. | 0 | KS Function Gender filters candidate if gender doesn’t agree. |
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound. | 0 | We further thank Khalil Simaan (ILLCUvA) for his careful advise concerning the formal details of the proposal. |
These clusters are computed using an SVD variant without relying on transitional structure. | 0 | to explore how well we can induce POS tags using only the one-tag-per-word constraint. |
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints. | 0 | 0 55.3 34. |
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold. | 0 | In total, across all domains, we kept 13,976 phrases with keywords. |
Two general approaches are presented and two combination techniques are described for each approach. | 0 | Each parse is converted into a set of constituents represented as a tuples: (label, start, end). |
Explanations for this phenomenon are relative informativeness of lexicalization, insensitivity to morphology and the effect of variable word order and these factors lead to syntactic disambiguation. | 0 | genitiveMark indicates recursive NPs with a indefinite nominal left daughter and an NP right daughter. |
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process. | 0 | Mo del Hy per par am . E n g li s h1 1 m-1 D a n i s h1 1 m-1 D u t c h1 1 m-1 G er m a n1 1 m-1 Por tug ues e1 1 m-1 S p a ni s h1 1 m-1 S w e di s h1 1 m-1 1T W be st me dia n 45. |
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | 4.5 Transliterations of Foreign Words. |
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance. | 0 | Feature weights were set using Och’s MERT algorithm (Och, 2003). |
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result. | 0 | F-measure is the harmonic mean of precision and recall, 2PR/(P + R). |
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | Many packages perform language model queries. |
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models. | 0 | Similarly, hanzi sharing the GHOST radical _m tend to denote spirits and demons, such as _m gui3 'ghost' itself, II: mo2 'demon,' and yan3 'nightmare.' |
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure. | 0 | We plan to explore more powerful techniques for exploiting the diversity of parsing methods. |
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming. | 0 | Covering the first uncovered position in the source sentence, we use the language model probability p(ej$; $). |
Most IE researchers have been creating paraphrase knowledge by hand and specific tasks. | 0 | Obviously “Lotus” is part of the following clause rather than being the object of “estimates” and the extracted instance makes no sense. |
Most IE researchers have been creating paraphrase knowledge by hand and specific tasks. | 0 | So, it is too costly to make IE technology “open-domain” or “on-demand” like IR or QA. |
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform. | 0 | At the very least, we are creating a data resource (the manual annotations) that may the basis of future research in evaluation metrics. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | However, lazy mapping is generally slow because queries against uncached pages must wait for the disk. |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | In the appositive case, the contextual predictor was the head of the modifying appositive (president in the Maury Cooper example); in the second case, the contextual predictor was the preposition together with the noun it modifies (plant_in in the Georgia example). |
Their results show that their high performance NER use less training data than other systems. | 0 | By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data. |
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | For some language pairs (such as GermanEnglish) system performance is more divergent than for others (such as English-French), at least as measured by BLEU. |
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | The segmenter will give both analyses :1 cai2 neng2 'just be able,' and ?]cai2neng2 'talent,' but the latter analysis is preferred since splitting these two morphemes is generally more costly than grouping them. |
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages. | 0 | Before presenting our results, we describe the datasets that we used, as well as two baselines. |
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task. | 0 | The proof is given in (Tillmann, 2000). |
Most IE researchers have been creating paraphrase knowledge by hand and specific tasks. | 0 | This limits the number of NE category pairs to 2,000 and the number of NE pair instances to 0.63 million. |
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs. | 0 | It rewards matches of n-gram sequences, but measures only at most indirectly overall grammatical coherence. |
This paper presents a maximum entropy-based named entity recognizer (NER). | 0 | Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones). |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | na me =>2 ha nzi fa mi ly 2 ha nzi gi ve n 5. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | IRSTLM 5.60.02 (Federico et al., 2008) is a sorted trie implementation designed for lower memory consumption. |
This assumption, however, is not inherent to type-based tagging models. | 0 | 3.1 Lexicon Component. |
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages. | 0 | Since every CFL is known to be semilinear (Parikh, 1966), in order to show semilinearity of some language, we need only show the existence of a letter equivalent CFL Our definition of LCFRS's insists that the composition operations are linear and nonerasing. |
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | level. |
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | Unlike Germann et al. (2009), we chose a model size so that all benchmarks fit comfortably in main memory. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | The cost estimate, cost(i¥JJ1l.fn is computed in the obvious way by summing the negative log probabilities of i¥JJ1l. |
This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | However, TRIE partitions storage by n-gram length, so walking the trie reads N disjoint pages. |
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems. | 0 | There are two possible reasons: (1) the knowledge sources are resolving different cases of anaphora, and (2) the knowledge sources provide multiple pieces of evidence in support of (or against) a candidate, thereby acting synergistically to push the Dempster-Shafer model over the belief threshold in favor of a single candidate. |
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language. | 0 | The “No LP” model does not outperform direct projection for German and Greek, but performs better for six out of eight languages. |
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language. | 0 | Finally, we thank Kuzman Ganchev and the three anonymous reviewers for helpful suggestions and comments on earlier drafts of this paper. |
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming. | 0 | On the other hand, only very restricted reorderings are necessary, e.g. for the translation direction from Table 2: Coverage set hypothesis extensions for the IBM reordering. |
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | The compact variant uses sorted arrays instead of hash tables within each node, saving some memory, but still stores full 64-bit pointers. |
Here we show how non-projective dependency parsing can be achieved by combining a data driven projective parser with special graph transformation techniques. | 0 | It is sometimes claimed that one of the advantages of dependency grammar over approaches based on constituency is that it allows a more adequate treatment of languages with variable word order, where discontinuous syntactic constructions are more common than in languages like English (Mel’čuk, 1988; Covington, 1990). |
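The table above is a preview of a dataset with three columns: a source_text summary sentence, an integer label (0 or 1), and a target_text sentence from the cited paper. Below is a minimal sketch of how a table with this schema could be loaded and inspected with the Hugging Face `datasets` library; the repository id `your-namespace/your-dataset` and the `train` split name are placeholders, not the actual identifiers of this dataset.

```python
# Minimal sketch: load a dataset with source_text / label / target_text columns
# and print a few rows for inspection. The repository id below is a placeholder.
from datasets import load_dataset

dataset = load_dataset("your-namespace/your-dataset", split="train")  # placeholder id

for example in dataset.select(range(3)):  # look at the first three rows only
    print(f"label  = {example['label']}")
    print(f"source = {example['source_text'][:80]}")
    print(f"target = {example['target_text'][:80]}")
    print("-" * 60)
```

Selecting a handful of rows before printing keeps the inspection cheap even when the split is large.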