source_text (string, lengths 27–368) | label (int64, values 0–1) | target_text (string, lengths 1–5.38k) |
---|---|---|
It is probably the first analysis of Arabic parsing of this kind. | 0 | of Arabic. |
They have made use of local and global features to deal with instances of the same token in a document. | 0 | This group consists of (3 × total number of possible zones) features. |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | 72 77. |
Combining multiple highly-accurate independent parsers yields promising results. | 0 | The first row represents the average accuracy of the three parsers we combine. |
BABAR achieved successful results in both the terrorism and natural disaster domains, and its contextual-role knowledge proved especially beneficial for pronouns. | 0 | Proper names that match are resolved with each other. |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | We again adopt an approach where we alternate between two classifiers: one classifier is modified while the other remains fixed. |
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | newspaper material, but also including kungfu fiction, Buddhist tracts, and scientific material. |
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity. | 0 | Many morphological decisions are based on long distance dependencies, and when the global syntactic evidence disagrees with evidence based on local linear context, the two models compete with one another, despite the fact that the PCFG also takes local context into account. |
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound. | 0 | Yet we note that the better grammars without pruning outperform the poorer grammars using this technique, indicating that the syntactic context aids, to some extent, the disambiguation of unknown tokens. |
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table. | 0 | The natural baseline approach is to concatenate data from IN and OUT. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | JADE GOLD GRASS SICKNESS DEATH RAT 14. |
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data. | 0 | Almost all annotators reported difficulties in maintaining a consistent standard for fluency and adequacy judgements, but nevertheless most did not explicitly move towards a ranking-based evaluation. |
The corpus was annotated with different linguistic information. | 0 | Nonetheless, the prospect of a network of annotated discourse resources seems particularly promising if not only a single annotation layer is used but a whole variety of them, so that a systematic search for correlations between them becomes possible, which in turn can lead to more explanatory models of discourse structure. |
Instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline. | 0 | We used a standard one-pass phrase-based system (Koehn et al., 2003), with the following features: relative-frequency TM probabilities in both directions; a 4-gram LM with Kneser-Ney smoothing; word-displacement distortion model; and word count. |
Replacing this with a ranked evaluation seems to be more suitable. | 0 | We therefore also normalized judgements on a per-sentence basis. |
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | Given that part-of-speech labels are properties of words rather than morphemes, it follows that one cannot do part-of-speech assignment without having access to word-boundary information. |
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. | 0 | We characterize a class of formalisms that have this property in Section 4. |
It is probably the first analysis of Arabic parsing of this kind. | 0 | But gold segmentation is not available in application settings, so a segmenter and parser are arranged in a pipeline. |
All the texts were annotated by two people. | 0 | For illustration, an English translation of one of the commentaries is given in Figure 1. |
All the texts were annotated by two people. | 0 | Figure 2 shows a screenshot (which is of somewhat limited value, though, as color plays a major role in signalling the different statuses of the information). |
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. | 0 | The token-level term is similar to the standard HMM sampling equations found in Johnson (2007). |
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | For example, as Gan (1994) has noted, one can construct examples where the segmentation is locally ambiguous but can be determined on the basis of sentential or even discourse context. |
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming. | 0 | Subsets C of increasing cardinality c are processed. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | Arguably this consists of about three phonological words. |
They have made use of local and global features to deal with instances of the same token in a document. | 0 | Information from a sentence is sometimes insufficient to classify a name correctly. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | During development, we sensed that the Recency and Syn-Role KSs did not deserve to be on equal footing with the other KSs because their knowledge was so general. |
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | TIS systems in general need to do more than simply compute the. |
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm. | 0 | The computing time is given in terms of CPU time per sentence (on a 450MHz PentiumIII PC). |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | The confidence intervals are computed by bootstrap resampling for BLEU, and by standard significance testing for the manual scores, as described earlier in the paper. |
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, and Spanish texts to English and back. | 0 | Lack of correct reference translations was pointed out as a shortcoming of our evaluation. |
BABAR achieved successful results in both the terrorism and natural disaster domains, and its contextual-role knowledge proved especially beneficial for pronouns. | 0 | We found that contextual role knowledge was more beneficial for pronouns than for definite noun phrases. |
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier. | 0 | We excluded these from the evaluation as they can be easily identified with a list of days/months. |
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account. | 0 | In this section, we briefly review our translation approach. |
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | This is an issue that we have not addressed at the current stage of our research. |
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Ma¨rkische Allgemeine Zeitung, a German regional daily. | 0 | Reicheâs colleagues will make sure that the concept is waterproof. |
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones. | 0 | The same form fmnh can be segmented as f-mnh, f (“that”) functioning as a relativizer with the form mnh. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | (a) Of the high frequency phrasal categories, ADJP and SBAR are the hardest to parse. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | Not surprisingly some semantic classes are better for names than others: in our corpora, many names are picked from the GRASS class but very few from the SICKNESS class. |
They plan on extending instance-weighting to other standard SMT components and capture the degree of generality of phrase pairs. | 0 | Feature weights were set using Och’s MERT algorithm (Och, 2003). |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | However, the characterization given in the main body of the text is correct sufficiently often to be useful. |
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | A Stochastic Finite-State Word-Segmentation Algorithm for Chinese |
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | Beyond optimizing the memory size of TRIE, there are alternative data structures such as those in Guthrie and Hepple (2010). |
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language. | 0 | Our work is closest to that of Yarowsky and Ngai (2001), but differs in two important ways. |
Finally, several coreference systems have successfully incorporated anaphoricity determination modules. | 0 | Therefore we opted to use the much… (footnote 7: The Dempster-Shafer theory assumes that one of the hypotheses in θ is correct, so eliminating all of the hypotheses violates this assumption.) |
The use of global features has shown excellent results in performance on MUC-6 and MUC-7 test data. | 0 | In MUC-6 and MUC-7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995). Machine learning systems in MUC-6 and MUC-7 achieved accuracy comparable to rule-based systems on the named entity task. |
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance. | 0 | The 14 general-language features embody straightforward cues: frequency, “centrality” as reflected in model scores, and lack of burstiness. |
They focused on phrases which connect two Named Entities, and proceed in two stages. | 0 | For example, we can easily imagine that the number of paraphrases for “A buys B” is enormous and it is not possible to create a comprehensive inventory by hand. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | Indeed there are several open issues. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | 35 76. |
Here we present two algorithms. | 0 | Limitations of (Blum and Mitchell 98): While the assumptions of (Blum and Mitchell 98) are useful in developing both theoretical results and an intuition for the problem, the assumptions are quite limited. |
Replacing this with a ranked evaluation seems to be more suitable. | 0 | This cannot be the only explanation, since the discrepancy still holds, for instance, for out-of-domain French-English, where Systran receives among the best adequacy and fluency scores, but a worse BLEU score than all but one statistical system. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | Table 1 The cost as a novel given name (second position) for hanzi from various radical classes. |
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | Figure 5: An example of affixation: the plural affix. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | Presence of the determiner Al. |
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process. | 0 | Similar behavior is observed when adding features. |
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | Fortunately, there are only a few hundred hanzi that are particularly common in transliterations; indeed, the commonest ones, such as ba1, er3, and a1, are often clear indicators that a sequence of hanzi containing them is foreign: even a name like xia4mi3-er3 'Shamir,' which is a legal Chinese personal name, retains a foreign flavor because of mi3-er3. |
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order, all of which bear on syntactic disambiguation. | 0 | Despite their simplicity, unigram weights have been shown as an effective feature in segmentation models (Dyer, 2009). The joint parser/segmenter is compared to a pipeline that uses MADA (v3.0), a state-of-the-art Arabic segmenter, configured to replicate ATB segmentation (Habash and Rambow, 2005). |
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system. | 0 | Finally, we find links between sets of phrases, based on the NE instance pair data (for example, different phrases which link “IBM” and “Lotus”) (Step 4). |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | We will briefly discuss this point in Section 3.1. |
This assumption, however, is not inherent to type-based tagging models. | 0 | There are two key benefits of this model architecture. |
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. | 0 | The difference between the featureless model (+PRIOR) and our full model (+FEATS) is 13.6% and 7.7% average error reduction on best and median settings respectively. |
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP). | 0 | Kollege. |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | In addition to a heuristic based on decision list learning, we also presented a boosting-like framework that builds on ideas from (Blum and Mitchell 98). |
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task. | 0 | (S, C, j): Not only the coverage set C and the positions j, j′, but also the verbgroup states S, S′ are taken into account. |
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity. | 0 | (2010) and the posterior regularization HMM of Graça et al. |
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, and Spanish texts to English and back. | 0 | Unfortunately, we have much less data to work with than with the automatic scores. |
They focused on phrases which connect two Named Entities, and proceed in two stages. | 0 | One is the accuracy within a set of phrases which share the same keyword; the other is the accuracy of links. |
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models. | 0 | As the reviewer also points out, this is a problem that is shared by, e.g., probabilistic context-free parsers, which tend to pick trees with fewer nodes. |
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | Note that it is in precision that our overall performance would appear to be poorer than the reported performance of Chang et al., yet based on their published examples, our system appears to be doing better precision-wise. |
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models. | 0 | While there might be some controversy about the exact definition of such a tagset, these 12 categories cover the most frequent part-of-speech and exist in one form or another in all of the languages that we studied. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | (1991), Gu and Mao (1994), and Nie, Jin, and Hannan (1994). |
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models. | 0 | 10) and trained both EM and L-BFGS for 1000 iterations. |
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm. | 0 | In many cases, there is an even stronger restriction: over large portions of the source string, the alignment is monotone. |
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | It has been shown for English (Wang and Hirschberg 1992; Hirschberg 1993; Sproat 1994, inter alia) that grammatical part of speech provides useful information for these tasks. |
The second algorithm builds on a boosting algorithm called AdaBoost. | 0 | For t = 1, ..., T: |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | Let us notate the set of previously unseen, or novel, members of a category X as unseen(X); thus, novel members of the set of words derived in men0 will be denoted unseen(men0). |
This paper presents a maximum entropy-based named entity recognizer (NER). | 0 | However, their system is a hybrid of hand-coded rules and machine learning methods. |
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints. | 0 | For all languages we do not make use of a tagging dictionary. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | In a more recent study than Chang et al., Wang, Li, and Chang (1992) propose a surname-driven, non-stochastic, rule-based system for identifying personal names. Wang, Li, and Chang also compare their performance with Chang et al.'s system. |
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs. | 0 | As we will see from Table 3, not much improvement is derived from this feature. |
This topic has been getting more attention, driven by the needs of various NLP applications. | 0 | Up to now, most IE researchers have been creating paraphrase knowledge (or IE patterns) by hand and for specific tasks. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | Figure 2: An ATB sample from the human evaluation. |
BABAR achieved successful results in both the terrorism and natural disaster domains, and its contextual-role knowledge proved especially beneficial for pronouns. | 0 | Figure 3 shows examples of semantic expectations that were learned. |
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task. | 0 | The effect of the pruning threshold t0 is shown in Table 5. |
The resulting model is compact, efficiently learnable and linguistically expressive. | 0 | Our model outperforms theirs on four out of five languages on the best hyperparameter setting and three out of five on the median setting, yielding an average absolute difference across languages of 12.9% and 3.9% for best and median settings respectively compared to their best EM or LBFGS performance. |
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm. | 0 | For our demonstration system, we typically use the pruning threshold t0 = 5.0 to speed up the search by a factor 5 while allowing for a small degradation in translation accuracy. |
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | Quantization can be improved by jointly encoding probability and backoff. |
There is no global pruning. | 0 | {1, …, J} denotes a coverage set including all positions from the starting position 1 to position J, and j ∈ {J − L, …, J}. |
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm. | 0 | There are 13 types of extensions needed to describe the verbgroup reordering. |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | Time starts when Moses is launched and therefore includes model loading time. |
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | If the context w_f^n will never extend to the right (i.e. w_f^n v is not present in the model for all words v) then no subsequent query will match the full context. |
They proposed an unsupervised method to discover paraphrases from a large untagged corpus. | 0 | Step 3. |
They have made use of local and global features to deal with instances of the same token in a document. | 0 | In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999). |
Most IE researchers have been creating paraphrase knowledge by hand and for specific tasks. | 0 | The accuracy is calculated as the ratio of the number of paraphrases to the total number of phrases in the set. |
All the texts were annotated by two people. | 0 | When the signal is a coordinating conjunction, the second span is usually the clause following the conjunction; the first span is often the clause preceding it, but sometimes stretches further back. |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | Although the segmentation requirements for Arabic are not as extreme as those for Chinese, Arabic is written with certain cliticized prepositions, pronouns, and connectives connected to adjacent words. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | 32 81. |
Their results show that their high performance NER use less training data than other systems. | 0 | Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | RandLM and SRILM also remove context that will not extend, but SRILM performs a second lookup in its trie whereas our approach has minimal additional cost. |
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones. | 0 | The form fmnh, for example, can be understood as the verb “lubricated”, the possessed noun “her oil”, the adjective “fat” or the verb “got fat”. |
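
The rows above are plain pipe-separated records, so they are straightforward to consume programmatically. Below is a minimal sketch of a loader for a dump in this layout; the file name `preview.md`, the `Example` record type, and the three-column row format are assumptions inferred from this preview, not an official loader for the dataset.

```python
from dataclasses import dataclass


@dataclass
class Example:
    source_text: str  # summary-style claim about a paper
    label: int        # per the header above: int64, value 0 or 1
    target_text: str  # excerpt paired with the claim


def load_examples(path: str) -> list[Example]:
    """Parse pipe-separated rows (source_text | label | target_text |)
    into Example records. Layout inferred from this preview (hypothetical)."""
    examples = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = [p.strip() for p in line.rstrip().rstrip("|").split("|")]
            # A valid row has exactly three fields with an integer label;
            # this also skips the header line and the ---|---|--- separator.
            # (A field containing a literal '|' would need a real parser.)
            if len(parts) != 3 or not parts[1].isdigit():
                continue
            examples.append(Example(parts[0], int(parts[1]), parts[2]))
    return examples


if __name__ == "__main__":
    rows = load_examples("preview.md")  # hypothetical file name
    print(f"{len(rows)} examples, {sum(r.label for r in rows)} labeled 1")
```

Since every row shown in this preview carries label 0, a quick class-balance check like the one in the `__main__` block is a sensible first step before using the data for training.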