source_text (string, lengths 27–368) | label (int64, values 0–1) | target_text (string, lengths 1–5.38k) |
---|---|---|
Their results show that their high-performance NER uses less training data than other systems. | 0 | In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | Ideally we'd like to know the thematic role of each extracted noun phrase, but AutoSlog does not generate thematic roles. |
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | Because the Bikel parser has been parameterized for Arabic by the LDC, we do not change the default model settings. |
The second algorithm builds on a boosting algorithm called AdaBoost. | 0 | To prevent this we "smooth" the confidence by adding a small value, ε, to both W+ and W−, giving αt = (1/2) ln((W+ + ε)/(W− + ε)). Plugging the value of αt from Equ. |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | Judges varied in the average score they handed out. |
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs. | 0 | It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier. |
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity. | 0 | token-level HMM to reflect lexicon sparsity. |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | [Garbled results-table fragment comparing (Petrov, 2009) and the Berkeley (Sep. 09) baseline under the "all" and "Gold POS" settings.] |
They found replacing it with a ranked evaluation to be more suitable. | 0 | For instance: if 10 systems participate, and one system does better than 3 others, worse than 2, and is not significantly different from the remaining 4, its rank is in the interval 3–7. |
They plan on extending instance-weighting to other standard SMT components and capture the degree of generality of phrase pairs. | 0 | Realizing gains in practice can be challenging, however, particularly when the target domain is distant from the background data. |
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | As we have noted in Section 2, the general semantic class to which a hanzi belongs is often predictable from its semantic radical. |
While many systems had similar performance, the results offered interesting insights, especially about the relative performance of statistical and rule-based systems. | 0 | It rewards matches of n-gram sequences, but measures only at most indirectly overall grammatical coherence. |
The approach assumes that the word reordering is restricted to a few positions in the source sentence. | 0 | Our new approach, which is called quasi-monotone search, processes the source sentence monotonically, while explicitly taking into account the positions of the German verbgroup. |
The use of global features has shown excellent results on the MUC-6 and MUC-7 test data. | 0 | The baseline system in Table 3 refers to the maximum entropy system that uses only local features. |
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity. | 0 | Morphological analyzers for Hebrew that analyze a surface form in isolation have been proposed by Segal (2000), Yona and Wintner (2005), and recently by the knowledge center for processing Hebrew (Itai et al., 2006). |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | Besides the applications just sketched, the overarching goal of developing the PCC is to build up an empirical basis for investigating phenomena of discourse structure. |
Here we present two algorithms. | 0 | We also removed the context-type feature type when using the CoBoost approach. |
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system. | 0 | We proposed an unsupervised method to discover paraphrases from a large untagged corpus. |
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not. | 0 | This is a simple and effective alternative to setting weights discriminatively to maximize a metric such as BLEU. |
Most IE researchers have been creating paraphrase knowledge by hand and for specific tasks. | 0 | It is not easy to make a clear definition of "paraphrase". |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | If evidence indicates that hypotheses C and D are less likely than hypotheses A and B, then probabilities are redistributed to reflect the fact that {A, B} is more likely to contain the answer than {C, D}. |
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order, and these factors complicate syntactic disambiguation. | 0 | 36 79. |
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models. | 0 | In our experiments, we used the same set of features as Berg-Kirkpatrick et al. (2010): an indicator feature based ... In a traditional Markov model, the emission distribution PΘ(Xi = xi | Zi = zi) is a set of multinomials. |
Two general approaches are presented and two combination techniques are described for each approach. | 0 | We call this approach parser switching. |
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back. | 0 | Different sentence structure and rich target language morphology are two reasons for this. |
The approach assumes that the word reordering is restricted to a few positions in the source sentence. | 0 | vierten 12. |
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | To summarize, we provided: The performance of the baseline system is similar to the best submissions in last year’s shared task. |
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems. | 0 | Proper names are assumed to be coreferent if they match exactly, or if they closely match based on a few heuristics. |
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform. | 0 | Judges varied in the average score they handed out. |
Two general approaches are presented and two combination techniques are described for each approach. | 0 | The average individual parser accuracy was reduced by more than 5% when we added this new parser, but the precision of the constituent voting technique was the only result that decreased significantly. |
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | Word frequencies are estimated by a re-estimation procedure that involves applying the segmentation algorithm presented here to a corpus of 20 million words, using ... (footnote 8: our training corpus was drawn from a larger corpus of mixed-genre text consisting mostly of ...). |
They proposed an unsupervised method to discover paraphrases from a large untagged corpus. | 0 | The sentences in the corpus were tagged by a transformation-based chunker and an NE tagger. |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | (2006) developed a technique for splitting and chunking long sentences. |
This corpus has several advantages: it is annotated at different levels. | 0 | 3.4 Salience-based text generation. |
This paper conducted research in the area of automatic paraphrase discovery. | 0 | One of such approaches uses comparable documents, which are sets of documents whose content are found/known to be almost the same, such as different newspaper stories about the same event [Shinyama and Sekine 03] or different translations of the same story [Barzilay 01]. |
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | Syntactic decoders, such as cdec (Dyer et al., 2010), build state from null context then store it in the hypergraph node for later extension. |
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task. | 0 | However there is no global pruning. |
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | 'Malaysia.' |
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system. | 0 | So, there is a limitation that IE can only be performed for a predefined task, like "corporate mergers" or "management succession". |
This corpus has several advantages: it is annotated at different levels. | 0 | We follow the guidelines developed in the TIGER project (Brants et al. 2002) for syntactic annotation of German newspaper text, using the Annotate3 tool for interactive construction of tree structures. |
This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | If e < b then the key is not found. |
The texts were annotated with the RSTtool. | 0 | Thus we opted not to take the step of creating more precise written annotation guidelines (as (Carlson, Marcu 2001) did for English), which would then allow for measuring inter-annotator agreement. |
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be. | 0 | The dev corpus was taken from the NIST05 evaluation set, augmented with some randomly-selected material reserved from the training set. |
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. | 0 | Comparison with state-of-the-art taggers For comparison we consider two unsupervised taggers: the HMM with log-linear features of Berg-Kirkpatrick et al. |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | This actually happens quite frequently (more below), so that the rankings are broad estimates. |
BABAR has shown successful results in both the terrorism and natural disaster domains, and contextual-role knowledge has been particularly successful for resolving pronouns. | 0 | Figure 4 shows the seven general knowledge sources (KSs) that represent features commonly used for coreference resolution. |
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model. | 0 | To bridge this gap, we consider a practically motivated scenario, in which we want to leverage existing resources from a resource-rich language (like English) when building tools for resource-poor foreign languages.1 We assume that absolutely no labeled training data is available for the foreign language of interest, but that we have access to parallel data with a resource-rich language. |
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers. | 0 | The PCFG was trained from the same sections of the Penn Treebank as the other three parsers. |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | 86 78. |
In this paper, the authors are of the opinion that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | During training, we treat as observed the language word types W as well as the token-level corpus w. We utilize Gibbs sampling to approximate our collapsed model posterior: P(T, t | W, w, α, β) ∝ P(T, t, W, w | α, β) = ∫ P(T, t, W, w, ψ, θ, φ, w | α, β) dψ dθ dφ. Note that given tag assignments T, there is only one setting of token-level tags t which has mass in the above posterior. |
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse. | 0 | The goal of our research was to explore the use of contextual role knowledge for coreference resolution. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | word => name 2. |
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse. | 0 | (c) After they blindfolded the men... |
The approach assumes that the word reordering is restricted to a few positions in the source sentence. | 0 | The above auxiliary quantity satisfies the following recursive DP equation: Qe′(e; C; j) = ... [followed by a table fragment listing the reordering states Initial, Skip, Verb, Final]. |
They showed that it was useful to abstract away from the details of the formalism and examine the nature of their derivation process as reflected by properties of their trees, and find that several of the formalisms considered can be seen as being closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars. | 0 | A second restriction on the formalisms is that choices during the derivation are independent of the context in the derivation. |
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Ma¨rkische Allgemeine Zeitung, a German regional daily. | 0 | 7 www.cis.upenn.edu/â¼pdtb/ 8 www.eml-research.de/english/Research/NLP/ Downloads had to buy a new car. |
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources. | 0 | Same for . In the case where the next token is a hyphen, then is also used as a feature: (initCaps, ) is set to 1. |
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back. | 0 | When dropping the top and bottom 2.5% the remaining BLEU scores define the range of the confidence interval. |
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods. | 0 | One is the accuracy within a set of phrases which share the same keyword; the other is the accuracy of links. |
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | Several papers report the use of part-of-speech information to rank segmentations (Lin, Chiang, and Su 1993; Peng and Chang 1993; Chang and Chen 1993); typically, the probability of a segmentation is multiplied by the probability of the tagging(s) for that segmentation to yield an estimate of the total probability for the analysis. |
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | But we also need an estimate of the probability for a non-occurring though possible plural form like 南瓜们, nan2gua1-men0 'pumpkins.' |
They proposed an unsupervised method to discover paraphrases from a large untagged corpus. | 0 | Recently, this topic has been getting more attention, as is evident from the Paraphrase Workshops in 2003 and 2004, driven by the needs of various NLP applications. |
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs. | 0 | The bootstrap method has been criticized by Riezler and Maxwell (2005) and Collins et al. (2005) as being too optimistic in deciding for statistically significant differences between systems. |
Finally, several coreference systems have successfully incorporated anaphoricity determination modules. | 0 | We have developed a coreference resolver called BABAR that uses contextual role knowledge to make coreference decisions. |
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | The BLEU score has been shown to correlate well with human judgement, when statistical machine translation systems are compared (Doddington, 2002; Przybocki, 2004; Li, 2005). |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | For the examples given in (1) and (2) this certainly seems possible. |
They plan on extending instance-weighting to other standard SMT components and capture the degree of generality of phrase pairs. | 0 | The 8 similarity-to-IN features are based on word frequencies and scores from various models trained on the IN corpus: To avoid numerical problems, each feature was normalized by subtracting its mean and dividing by its standard deviation. |
They proposed an unsupervised method to discover paraphrases from a large untagged corpus. | 0 | We are focusing on phrases which have two Named Entities (NEs), as those types of phrases are very important for IE applications. |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | Table 6: Incremental dev set results for the manually annotated grammar (sentences of length ≤ 70). |
The PROBING data structure uses linear probing hash tables and is designed for speed. | 0 | Compared with SRILM, IRSTLM adds several features: lower memory consumption, a binary file format with memory mapping, caching to increase speed, and quantization. |
In this paper, the authors are of the opinion that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | The system of Berg-Kirkpatrick et al. |
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech. | 0 | To explore this tradeoff, we have performed experiments with three different encoding schemes (plus a baseline), which are described schematically in Table 1. |
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used simpler training procedure. | 0 | Finally, we note that Jiang’s instance-weighting framework is broader than we have presented above, encompassing among other possibilities the use of unlabelled IN data, which is applicable to SMT settings where source-only IN corpora are available. |
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | Then each arc of D maps either from an element of H to an element of P, or from ε (i.e., the empty string) to an element of P. More specifically, each word is represented in the dictionary as a sequence of arcs, starting from the initial state of D and labeled with an element of H × P, which is terminated with a weighted arc labeled with an element of ε × P. The weight represents the estimated cost (negative log probability) of the word. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | The dictionary sizes reported in the literature range from 17,000 to 125,000 entries, and it seems reasonable to assume that the coverage of the base dictionary constitutes a major factor in the performance of the various approaches, possibly more important than the particular set of methods used in the segmentation. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | Base NPs are the other significant category of nominal phrases. |
Finally, several coreference systems have successfully incorporated anaphoricity determination modules. | 0 | 1. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | For example, in Northern dialects (such as Beijing), a full tone (1, 2, 3, or 4) is changed to a neutral tone (0) in the final syllable of many words: 冬瓜 dong1gua1 'winter melon' is often pronounced dong1gua0. |
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | The points enumerated above are particularly related to ITS, but analogous arguments can easily be given for other applications; see for example Wu and Tseng's (1993) discussion of the role of segmentation in information retrieval. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | While Gan's system incorporates fairly sophisticated models of various linguistic information, it has the drawback that it has only been tested with a very small lexicon (a few hundred words) and on a very small test set (thirty sentences); there is therefore serious concern as to whether the methods that he discusses are scalable. |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | The basic word order is VSO, but SVO, VOS, and VO configurations are also possible.2 Nouns and verbs are created by selecting a consonantal root (usually triliteral or quadriliteral), which bears the semantic core, and adding affixes and diacritics. |
The resulting model is compact, efficiently learnable and linguistically expressive. | 0 | Ablation Analysis We evaluate the impact of incorporating various linguistic features into our model in Table 3. |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | The out-of-domain test set differs from the Europarl data in various ways. |
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table. | 0 | This leads to a linear combination of domain-specific probabilities, with weights in [0, 1], normalized to sum to 1. |
This corpus has several advantages: it is annotated at different levels. | 0 | The tool we use is MMAX8, which has been specifically designed for marking co-reference. |
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | If there are too many distinct states, the decoder prunes low-scoring partial hypotheses, possibly leading to a search error. |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | We have described grammar state splits that significantly improve parsing performance, catalogued parsing errors, and quantified the effect of segmentation errors. |
Here we present two algorithms. | 0 | In the appositive case, the contextual predictor was the head of the modifying appositive (president in the Maury Cooper example); in the second case, the contextual predictor was the preposition together with the noun it modifies (plant_in in the Georgia example). |
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm. | 0 | We can do that. IbmS: Yes, wonderful. |
This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | Further, the special hash 0 suffices to flag empty buckets. |
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | This makes memory usage comparable to our PROBING model. |
Here we present two algorithms. | 0 | Output of the learning algorithm: a function h : X × Y → [0, 1] where h(x, y) is an estimate of the conditional probability p(y|x) of seeing label y given that feature x is present. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | These are shown, with their associated costs, as follows: ABj nc 4.0 AB C/jj 6.0 CD /vb 5. |
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier. | 0 | (6), with W+ > W−. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | We measured recall (Rec), precision (Pr), and the F-measure (F) with recall and precision equally weighted. |
All the texts were annotated by two people. | 0 | Thus we are interested not in extraction, but actual generation from representations that may be developed to different degrees of granularity. |
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier. | 0 | For a description of the application of AdaBoost to various NLP problems see the paper by Abney, Schapire, and Singer in this volume. |
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | IRSTLM (Federico et al., 2008) is an open-source toolkit for building and querying language models. |
These clusters are computed using an SVD variant without relying on transitional structure. | 0 | While our method also enforces a singe tag per word constraint, it leverages the transition distribution encoded in an HMM, thereby benefiting from a richer representation of context. |
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold. | 0 | For example, in the CC-domain, 96 keywords are found which have TF/ITF scores above a threshold; some of them are shown in Figure 3. |
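
As a quick orientation to the schema in the header above, here is a minimal sketch of how a dataset with these three columns could be loaded and filtered with the Hugging Face `datasets` library. The repository ID `user/paper-summary-pairs` is a placeholder (the actual Hub ID is not shown on this page); the snippet assumes nothing beyond the columns listed in the header: `source_text`, `label`, and `target_text`.

```python
# Minimal sketch (not from the original page): loading a dataset with the
# columns shown in the table above using the Hugging Face `datasets` library.
# "user/paper-summary-pairs" is a hypothetical repository ID.
from datasets import load_dataset

ds = load_dataset("user/paper-summary-pairs", split="train")

# Schema as in the header: source_text (str), label (int64, 0/1), target_text (str).
print(ds.column_names)              # ['source_text', 'label', 'target_text']

example = ds[0]
print(example["source_text"][:80])  # a short summary-style sentence
print(example["label"])             # 0 or 1

# Every row displayed above carries label 0; select just those rows.
label_zero = ds.filter(lambda row: row["label"] == 0)
print(f"{len(label_zero)} rows with label == 0")
```

From here, the usual `datasets` operations (train/test splitting, mapping a tokenizer over `source_text` and `target_text`) apply in the standard way.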