source_text (string, lengths 27–368) | label (int64, 0–1) | target_text (string, lengths 1–5.38k) |
---|---|---|
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | In Figure 4, we displayed the number of system comparisons for which we concluded statistical significance. |
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs. | 0 | Global features are extracted from other occurrences of the same token in the whole document. |
The "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Märkische Allgemeine Zeitung, a German regional daily. | 0 | Annotation of syntactic structure for the core corpus has just begun. |
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language. | 0 | Based on these high-confidence alignments we can extract tuples of the form [u H v], where u is a foreign trigram type, whose middle word aligns to an English word type v. Our bilingual similarity function then sets the edge weights in proportion to these tuple counts. |
There are clustering approaches that assign a single POS tag to each word type. | 0 | The state-to-tag mapping is obtained from the best hyperparameter setting for 11 mapping shown in Table 3. |
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM. | 0 | The ratio of buckets to entries is controlled by space multiplier m > 1. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | Nicola Bertoldi and Marcello Federico assisted with IRSTLM. |
Most IE researchers have been creating paraphrase knowledge by hand and for specific tasks. | 0 | In order to solve this problem, a parse tree is needed to understand that "Lotus" is not the object of "estimates". |
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models. | 0 | Of course, we are primarily interested in applying our techniques to languages for which no labeled resources are available. |
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution. | 0 | 2. |
The approach has been successfully tested on the 8 000-word Verbmobil task. | 0 | Two subjects are each given a calendar and they are asked to schedule a meeting. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | (b) ta1 de cai2neng2 hen3 gao1 (he DE talent very high) 'He has great talent' While the current algorithm correctly handles the (b) sentences, it fails to handle the (a) sentences, since it does not have enough information to know not to group the sequences ma3lu4 and cai2neng2 respectively. |
We chose one of four labels for each example: location, person, organization, or noise, where the noise category was used for items that were outside the three categories. | 0 | A generative model was applied (similar to naive Bayes) with the three labels as hidden variables on unlabeled examples, and observed variables on (seed) labeled examples. |
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints. | 0 | Specifically, the lexicon is generated as: P(T, W | ψ) = P(T) P(W | T). Word Type Features (FEATS): Past unsupervised POS work has derived benefits from features on word types, such as suffix and capitalization features (Hasan and Ng, 2009; Berg-Kirkpatrick et al., 2010). |
This topic has been getting more attention, driven by the needs of various NLP applications. | 1 | Recently, this topic has been getting more attention, as is evident from the Paraphrase Workshops in 2003 and 2004, driven by the needs of various NLP applications. |
BABAR has shown successful results in both the terrorism and natural disaster domains, and contextual-role knowledge was especially helpful for pronouns. | 0 | The rationale for treating these semantic labels differently is that they are specific and reliable (as opposed to the WordNet classes, which are more coarse and more noisy due to polysemy). |
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints. | 0 | In contrast, NNP (proper nouns) form a large portion of vocabulary. |
They showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the inter-human agreement. | 0 | Some approaches depend upon some form of constraint satisfaction based on syntactic or semantic features (e.g., Yeh and Lee [1991], which uses a unification-based approach). |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | The human evaluators were a non-native, fluent Arabic speaker (the first author) for the ATB and a native English speaker for the WSJ.7 Table 5 shows type- and token-level error rates for each corpus. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | For example, suppose one is building a TTS system for Mandarin Chinese. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | As we have seen, the lexicon of basic words and stems is represented as a WFST; most arcs in this WFST represent mappings between hanzi and pronunciations, and are costless. |
They proposed an unsupervised method to discover paraphrases from a large untagged corpus. | 0 | One is that smaller sets sometimes have meaningless keywords, like "strength" or "add" in the CC-domain, or "compare" in the PC-domain. |
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems. | 0 | Second, BABAR performs reliable case resolution to identify anaphora that can be easily resolved using the lexical and syntactic heuristics described in Section 2.1. |
A beam search concept is applied as in speech recognition. | 0 | A search restriction especially useful for the translation direction from German to English is presented. |
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account. | 0 | The experimental tests are carried out on the Verbmobil task (German-English, 8000-word vocabulary), which is a limited-domain spoken-language task. |
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. | 0 | i=1 (f,v)∈Wi |
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRSs), and showed that they are recognizable in polynomial time and generate only semilinear languages. | 0 | In considering this aspect of a formalism, we hope to better understand the relationship between the structural descriptions generated by the grammars of a formalism, and the properties of semilinearity and polynomial recognizability. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | Less frequently studied is the interplay among language, annotation choices, and parsing model design (Levy and Manning, 2003; Kübler, 2005). |
There is no global pruning. | 0 | The model is often further restricted so that each source word is assigned to exactly one target word (Brown et al., 1993; Ney et al., 2000). |
This topic has been getting more attention, driven by the needs of various NLP applications. | 0 | They cluster NE instance pairs based on the words in the contexts using a bag- of-words method. |
This topic has been getting more attention, driven by the needs of various NLP applications. | 0 | If the same pair of NE instances is used with different phrases, these phrases are likely to be paraphrases. |
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints. | 0 | Their best model yields 44.5% one-to-one accuracy, compared to our best median 56.5% result. |
The authors show that PATB is similar to other treebanks but that annotation consistency remains low. | 0 | 12 For English, our Evalb implementation is identical to the most recent reference (EVALB20080701). |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | Without using the same test corpus, direct comparison is obviously difficult; fortunately, Chang et al. include a list of about 60 sentence fragments that exemplify various categories of performance for their system. |
There is no global pruning. | 0 | Otherwise for the predecessor search hypothesis, we would have chosen a position that would not have been among the first n uncovered positions. |
The use of global features has shown excellent results on the MUC-6 and MUC-7 test data. | 0 | These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999). |
They proposed an unsupervised method to discover paraphrases from a large untagged corpus. | 0 | 3.2 Results. |
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data. | 0 | There may occasionally be a system clearly at the top or at the bottom, but most systems are so close that it is hard to distinguish them. |
The second algorithm builds on a boosting algorithm called AdaBoost. | 0 | The core of Yarowsky's algorithm is as follows: where h is defined by the formula in equation 2, with counts restricted to training data examples that have been labeled in step 2. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | On a set of 11 sentence fragments-the A set-where they reported 100% recall and precision for name identification, we had 73% recall and 80% precision. |
They showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the inter-human agreement. | 0 | The model we use provides a simple framework in which to incorporate a wide variety of lexical information in a uniform way. |
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs. | 0 | Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages. |
They plan on extending instance-weighting to other standard SMT components and capturing the degree of generality of phrase pairs. | 0 | For the TM, this is: where cI(s, t) is the count in the IN phrase table of pair (s, t), po(s|t) is its probability under the OUT TM, and cI(t) = Σs' cI(s', t). |
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. | 0 | Second, the reduced number of hidden variables and parameters dramatically speeds up learning and inference. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | The first issue relates to the completeness of the base lexicon. |
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance. | 0 | This is consistent with the nature of these two settings: log-linear combination, which effectively takes the intersection of IN and OUT, does relatively better on NIST, where the domains are broader and closer together. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | 4. |
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University. | 0 | Thus we are interested not in extraction, but actual generation from representations that may be developed to different degrees of granularity. |
Here both parametric and non-parametric models are explored. | 0 | We have developed a general approach for combining parsers when preserving the entire structure of a parse tree is important. |
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity. | 0 | For instance, on Spanish, the absolute gap on median performance is 10%. |
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language. | 0 | This feature ft incorporates information from the smoothed graph and prunes hidden states that are inconsistent with the thresholded vector tx. |
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems. | 0 | 2.1.1 Lexical Seeding It is generally not safe to assume that multiple occurrences of a noun phrase refer to the same entity. |
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process. | 0 | 3.1 Lexicon Component. |
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. | 0 | Another thread of relevant research has explored the use of features in unsupervised POS induction (Smith and Eisner, 2005; Berg-Kirkpatrick et al., 2010; Hasan and Ng, 2009). |
There are clustering approaches that assign a single POS tag to each word type. | 0 | The hyperparameters α and β represent the concentration parameters of the token- and type-level components of the model respectively. |
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources. | 0 | Note that we check the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc. are not part of person names, whereas corporate suffixes like Corp., Inc., etc. are part of corporate names. |
This paper talks about Pseudo-Projective Dependency Parsing. | 0 | In principle, it would be possible to encode the exact position of the syntactic head in the label of the arc from the linear head, but this would give a potentially infinite set of arc labels and would make the training of the parser very hard. |
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems. | 0 | HR0011-06-C-0022. |
The instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline. | 0 | Second, rather than relying on a division of the corpus into manually-assigned portions, we use features intended to capture the usefulness of each phrase pair. |
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | Nicola Bertoldi and Marcello Federico assisted with IRSTLM. |
The approach has been successfully tested on the 8 000-word Verbmobil task. | 0 | The following two error criteria are used in our experiments: mWER: multi-reference WER: We use the Levenshtein distance between the automatic translation and several reference translations as a measure of the translation errors. |
In this paper, the authors are of the opinion that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | For inference, we are interested in the posterior probability over the latent variables in our model. |
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University. | 0 | Having explained the various layers of annotation in PCC, we now turn to the question of what all this might be good for. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using a caseframe network for anaphora resolution, and used information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | 2.3 Assigning Evidence Values. |
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | Clearly, for judges J1 and J2, taking J1 as standard and computing the precision and recall for J2 yields the same results as taking J2 as the standard and computing for J1. All evaluation materials, with the exception of those used for evaluating personal names, were drawn. |
Two general approaches are presented and two combination techniques are described for each approach. | 0 | Parser 3, the most accurate parser, was chosen 71% of the time, and Parser 1, the least accurate parser, was chosen 16% of the time. |
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems. | 0 | The test set included 2000 sentences from the Europarl corpus, but also 1064 sentences of out-of-domain test data. |
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems. | 0 | First, a non-anaphoric NP classifier identifies definite noun phrases that are existential, using both syntactic rules and our learned existential NP recognizer (Bean and Riloff, 1999), and removes them from the resolution process. |
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). | 0 | Each ht is a function that predicts a label (+1 or -1) on examples containing a particular feature xt, while abstaining on other examples: The prediction of the strong hypothesis can then be written as We now briefly describe how to choose ht and αt at each iteration. |
The use of global features has shown excellent result in the performance on MUC-6 and MUC-7 test data. | 0 | Each feature group can be made up of many binary features. |
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. | 0 | 9 61.0 44. |
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods. | 0 | Our clue is the NE instance pairs. |
The AdaBoost algorithm was developed for supervised learning. | 0 | The pseudo-code describing the algorithm is given in Fig. |
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | May 1995). |
The resulting model is compact, efficiently learnable and linguistically expressive. | 0 | Its only purpose is 3 This follows since each θt has St − 1 parameters and. |
Here both parametric and non-parametric models are explored. | 0 | A sentence was withheld from section 22 because its extreme length was troublesome for a couple of the parsers. |
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. | 0 | From the definition of TAG's, it follows that the choice of adjunction is not dependent on the history of the derivation. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | All commentaries have been tagged with part-of-speech information using Brants' TnT tagger and the Stuttgart/Tübingen Tag Set (automatic analysis was responsible for this decision). |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | Our System vs. Wang, Li, and Chang: transliteration/translation examples include chen2zhong1-shen1 qu3 'music by Chen Zhongshen', huang2rong2 you1you1 de dao4 'Huang Rong said soberly', zhang1 qun2 'Zhang Qun', and si1fa3-yuan4zhang3 lin2yang2-gang3 'president of the Judicial Yuan, Lin Yanggang'. Table 5: Performance on morphological analysis. |
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | Two sets of examples from Gan are given in (1) and (2) (from Gan's Appendix B). |
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). | 0 | We can now compare this algorithm to that of (Yarowsky 95). |
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. | 0 | These two properties of the tree sets are not only linguistically relevant, but also have computational importance. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | We refer to this process as Reliable Case Resolution because it involves finding cases of anaphora that can be easily resolved with their antecedents. |
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems. | 0 | 4 Evaluation Results. |
The approach assumes that the word reordering is restricted to a few positions in the source sentence. | 0 | In. |
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). | 0 | (Specifically, the limit n starts at 5 and increases by 5 at each iteration.) |
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | gao1xing4 'happy' => gao1gao1xing4xing4 |
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language. | 0 | We hope that this will allow practitioners to apply our approach directly to languages for which no resources are available. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | How should the absence of vowels and syntactic markers influence annotation choices and grammar development? |
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages. | 0 | Graph construction does not require any labeled data, but makes use of two similarity functions. |
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | For fi, the Good-Turing estimate just discussed gives us an estimate of p(unseen(fi) | fi), the probability of observing a previously unseen instance of a construction in fi given that we know that we have a construction in fi. |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | To lower the barrier of entrance to the competition, we provided a complete baseline MT system, along with data resources. |
Most IE researchers have been creating paraphrase knowledge by hand and specific tasks. | 0 | Evaluation results within sets Table 1 shows the evaluation result based on the number of phrases in a set. |
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table. | 0 | The reference medicine for Silapo is EPREX/ERYPO, which contains epoetin alfa. |
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity. | 0 | i=1 (f,v)∈Wi |
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective tree banks is often neglected because of the relative scarcity of problematic constructions. | 0 | In addition, there are several approaches to non-projective dependency parsing that are still to be evaluated in the large (Covington, 1990; Kahane et al., 1998; Duchier and Debusmann, 2001; Holan et al., 2001; Hellwig, 2003). |
BABAR has shown successful results in both the terrorism and natural disaster domains, and contextual-role knowledge was especially helpful for pronouns. | 0 | Proper names are assumed to be coreferent if they match exactly, or if they closely match based on a few heuristics. |
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models. | 0 | In any event, to date, we have not compared different methods for deriving the set of initial frequency estimates. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | The interdependence between fb or 1/!i, and 5:2 is not captured by our model, but this could easily be remedied. |
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | Memory mapping also allows the same model to be shared across processes on the same machine. |
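Each row above follows the (source_text, label, target_text) schema from the header, with label 1 marking a matching pair. A minimal sketch of how such rows might be represented and filtered in Python (the two sample rows are copied from the table above; the field names follow the header, and everything else is illustrative, not part of any official loader):

```python
# Represent rows of the (source_text, label, target_text) schema shown above
# and filter them by label. The two sample rows are copied from the table;
# the actual dataset is much larger.
rows = [
    {
        "source_text": "This topic has been getting more attention, "
                       "driven by the needs of various NLP applications.",
        "label": 1,
        "target_text": "Recently, this topic has been getting more attention, "
                       "as is evident from the Paraphrase Workshops in 2003 and "
                       "2004, driven by the needs of various NLP applications.",
    },
    {
        "source_text": "There is no global pruning.",
        "label": 0,
        "target_text": "The model is often further restricted so that each "
                       "source word is assigned to exactly one target word.",
    },
]

# Keep only the positive (label == 1) pairs.
positives = [r for r in rows if r["label"] == 1]
print(len(positives))  # prints 1
```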