source_text stringlengths 27–368 | label int64 0–1 | target_text stringlengths 1–5.38k |
---|---|---|
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | The model described here thus demonstrates great potential for use in widespread applications. |
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources. | 0 | Hence, there is a total of 29 classes (7 name classes × 4 sub-classes + 1 not-a-name class). |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | This paper discusses the use of unlabeled examples for the problem of named entity classification. |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | This is demonstrated by average scores over all systems, in terms of BLEU, fluency and adequacy, as displayed in Figure 5. |
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results. | 0 | Data We use the Hebrew Treebank, (Sima’an et al., 2001), provided by the knowledge center for processing Hebrew, in which sentences from the daily newspaper “Ha’aretz” are morphologically segmented and syntactically annotated. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | 4.2 A Sample Segmentation Using Only Dictionary Words Figure 4 shows two possible paths from the lattice of possible analyses of the input sentence B X:Â¥ .:.S:P:l 'How do you say octopus in Japanese?' previously shown in Figure 1. |
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones. | 0 | Yet we note that the better grammars without pruning outperform the poorer grammars using this technique, indicating that the syntactic context aids, to some extent, the disambiguation of unknown tokens. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | All commentaries have been tagged with part-of-speech information using Brants' TnT1 tagger and the Stuttgart/Tübingen Tag Set (automatic analysis was responsible for this decision). |
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints. | 0 | Our experiments consistently demonstrate that this model architecture yields substantial performance gains over more complex tagging counterparts. |
Here we present two algorithms. | 0 | The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | The compact variant uses sorted arrays instead of hash tables within each node, saving some memory, but still stores full 64-bit pointers. |
This topic has been getting more attention, driven by the needs of various NLP applications. | 0 | Instead, we focused on phrases and set the frequency threshold to 2, and so were able to utilize a lot of phrases while minimizing noise. |
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously. | 0 | For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data. |
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP). | 0 | For a given partial hypothesis (C; j), the order in which the cities in C have been visited can be ignored (except j), only the score for the best path reaching j has to be stored. |
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | This representation gives ir, an appropriate morphological decomposition, preserving information that would be lost by simply listing ir, as an unanalyzed form. |
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity. | 0 | This is done using a simple PCFG which is lexemebased. |
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure. | 0 | Parser 3, the most accurate parser, was chosen 71% of the time, and Parser 1, the least accurate parser was chosen 16% of the time. |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | Again, this deserves further investigation. |
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM. | 0 | The hash variant is a reverse trie with hash tables, a more memory-efficient version of SRILM’s default. |
Finally, several coreference systems have successfully incorporated anaphoricity determination modules. | 0 | Given an anaphor, BABAR identifies the caseframe that would extract it from its sentence. |
They have made use of local and global features to deal with the instances of same token in a document. | 0 | For all lists except locations, the lists are processed into a list of tokens (unigrams). |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | Figure 4 Input lattice (top) and two segmentations (bottom) of the sentence 'How do you say octopus in Japanese?'. |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | The statistical systems seem to still lag behind the commercial rule-based competition when translating into morphologically rich languages, as demonstrated by the results for English-German and English-French. |
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model. | 0 | The number of latent HMM states for each language in our experiments was set to the number of fine tags in the language’s treebank. |
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods. | 0 | Extract NE pair instances with contexts From the four years of newspaper corpus, we extracted 1.9 million pairs of NE instances. |
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs. | 0 | Recently, statistical NERs have achieved results that are comparable to hand-coded systems. |
We chose one of four labels for each example: location, person, organization, or noise, where the noise category was used for items that were outside the three categories. | 0 | An important reason for separating the two types of features is that this opens up the possibility of theoretical analysis of the use of unlabeled examples. |
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University. | 0 | In the rhetorical tree, nuclearity information is then used to extract a "kernel tree" that supposedly represents the key information from which the summary can be generated (which in turn may involve co-reference information, as we want to avoid dangling pronouns in a summary). |
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors bear on syntactic disambiguation. | 0 | Acknowledgments We thank Steven Bethard, Evan Rosen, and Karen Shiells for material contributions to this work. |
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | It was our hope that this competition, which included the manual and automatic evaluation of statistical systems and one rulebased commercial system, will give further insight into the relation between automatic and manual evaluation. |
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). | 0 | (Brin 98) describes a system for extracting (author, book-title) pairs from the World Wide Web using an approach that bootstraps from an initial seed set of examples. |
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language. | 0 | The feature-based model replaces the emission distribution with a log-linear model, such that: on the word identity x, features checking whether x contains digits or hyphens, whether the first letter of x is upper case, and suffix features up to length 3. |
This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | Moses sets the cache size parameter to 50 so we did as well; the resulting cache size is 2.82 GB. |
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs. | 0 | Still, for a good number of sentences, we do have this direct comparison, which allows us to apply the sign test, as described in Section 2.2. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | A direct-mapped cache makes BerkeleyLM faster on repeated queries, but their fastest (scrolling) cached version is still slower than uncached PROBING, even on cache-friendly queries. |
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data. | 0 | It was also proposed to allow annotators to skip sentences that they are unable to judge. |
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages. | 0 | The number of latent HMM states for each language in our experiments was set to the number of fine tags in the language’s treebank. |
BABAR performed successfully in both the terrorism and natural disaster domains, and contextual-role knowledge was shown to benefit pronoun resolution. | 0 | Ex: Mr. Cristiani is the president ... |
This paper presents a maximum entropy-based named entity recognizer (NER). | 0 | Finally, the concatenated 5 * 20% output is used to train the reference resolution component. |
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models. | 0 | Instead, we resort to an iterative update based method. |
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective treebanks is often neglected because of the relative scarcity of problematic constructions. | 0 | All improvements over the baseline are statistically significant beyond the 0.01 level (McNemar’s test). |
A beam search concept is applied as in speech recognition. | 0 | These alignment models are similar to the concept of hidden Markov models (HMM) in speech recognition. |
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold. | 0 | Rather we believe several methods have to be developed using different heuristics to discover wider variety of paraphrases. |
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming. | 0 | The approach assumes that the word reordering is restricted to a few positions in the source sentence. |
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | The present proposal falls into the last group. |
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | The developers aimed to reduce memory consumption at the expense of time. |
In this paper, the authors are of the opinion that the sequence models-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | Our second point of comparison is with Grac¸a et al. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | Whether a language even has orthographic words is largely dependent on the writing system used to represent the language (rather than the language itself); the notion "orthographic word" is not universal. |
They have made use of local and global features to deal with the instances of same token in a document. | 0 | Such constraints are derived from training data, expressing some relationship between features and outcome. |
The second algorithm builds on a boosting algorithm called AdaBoost. | 0 | (Yarowsky 95) describes the use of more sophisticated smoothing methods. |
The texts were annotated with the RSTtool. | 0 | In focus is in particular the correlation with rhetorical structure, i.e., the question whether specific rhetorical relations, or groups of relations in particular configurations, are signalled by speakers with prosodic means. |
They focused on phrases which connect two Named Entities, and proceed in two stages. | 0 | Finally, we find links between sets of phrases, based on the NE instance pair data (for example, different phrases which link "IBM" and "Lotus") (Step 4). |
Here we present two algorithms. | 0 | AdaBoost is given access to a weak learning algorithm, which accepts as input the training examples, along with a distribution over the instances. |
This paper talks about Pseudo-Projective Dependency Parsing. | 0 | Prague Dependency Treebank (Hajiˇc et al., 2001b), Danish Dependency Treebank (Kromann, 2003), and the METU Treebank of Turkish (Oflazer et al., 2003), which generally allow annotations with nonprojective dependency structures. |
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods. | 0 | So, there is a limitation that IE can only be performed for a predefined task, like "corporate mergers" or "management succession". |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | Evalb, the standard parsing metric, is biased toward such corpora (Sampson and Babarczy, 2003). |
Their results show that their high performance NER use less training data than other systems. | 0 | In addition, each feature function is a binary function. |
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results. | 0 | It should be clear from the onset that the particle b (“in”) in ‘bcl’ may then attach higher than the bare noun cl (“shadow”). |
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be. | 0 | A similar maximumlikelihood approach was used by Foster and Kuhn (2007), but for language models only. |
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming. | 0 | In Eq. |
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process. | 0 | Our full model yields 39.3% average error reduction across languages when compared to the basic configuration (1TW). |
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier. | 0 | 971,746 sentences of New York Times text were parsed using the parser of (Collins 96). Word sequences that met the following criteria were then extracted as named entity examples: whose head is a singular noun (tagged NN). |
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure. | 0 | We call this technique constituent voting. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | We of course also fail to identify, by the methods just described, given names used without their associated family name. |
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | This actually happens quite frequently (more below), so that the rankings are broad estimates. |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | Judges were excluded from assessing the quality of MT systems that were submitted by their institution. |
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure. | 0 | This work was funded by NSF grant IRI-9502312. |
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank. | 0 | These three parsers have given the best reported parsing results on the Penn Treebank Wall Street Journal corpus (Marcus et al., 1993). |
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined both by how similar to it they appear to be and by whether they belong to general language or not. | 0 | The techniques we develop can be extended in a relatively straightforward manner to the more general case when OUT consists of multiple sub-domains. |
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. | 0 | Giving a recognition algorithm for LCFRL's involves describing the substrings of the input that are spanned by the structures derived by the LCFRS's and how the composition operation combines these substrings. |
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | Figure 14: Correlation between manual and automatic scores for English-French. Figure 15: Correlation between manual and automatic scores for English-Spanish. (Scatter-plot axis residue for the English-French, English-Spanish, and English-German adequacy/fluency plots omitted.) |
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model. | 0 | Therefore, the number of fine tags varied across languages for our experiments; however, one could as well have fixed the set of HMM states to be a constant across languages, and created one mapping to the universal POS tagset. |
The resulting model is compact, efficiently learnable and linguistically expressive. | 0 | Our analysis and comparison focuses primarily on the one-to-one accuracy since it is a stricter metric than many-to-one accuracy, but also report many-to-one for completeness. |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | We now describe the CoBoost algorithm for the named entity problem. |
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | Table 3 Classes of words found by ST for the test corpus. |
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data. | 0 | The way judgements are collected, human judges tend to use the scores to rank systems against each other. |
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank. | 0 | As seen by the drop in average individual parser performance baseline, the introduced parser does not perform very well. |
Here we show how non-projective dependency parsing can be achieved by combining a data driven projective parser with special graph transformation techniques. | 0 | Compared to related work on the recovery of long-distance dependencies in constituency-based parsing, our approach is similar to that of Dienes and Dubey (2003) in that the processing of non-local dependencies is partly integrated in the parsing process, via an extension of the set of syntactic categories, whereas most other approaches rely on postprocessing only. |
Replacing this with a ranked evaluation seems to be more suitable. | 0 | One may argue with these efforts on normalization, and ultimately their value should be assessed by assessing their impact on inter-annotator agreement. |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | The method uses a "soft" measure of the agreement between two classifiers as an objective function; we described an algorithm which directly optimizes this function. |
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm. | 0 | In the following, we assume that this word joining has been carried out. |
This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | SRILM’s compact variant, IRSTLM, MITLM, and BerkeleyLM’s sorted variant are all based on this technique. |
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). | 0 | Many statistical or machine-learning approaches for natural language problems require a relatively large amount of supervision, in the form of labeled training examples. |
The PROBING data structure uses linear probing hash tables and is designed for speed. | 0 | Lazy mapping reduces memory requirements by loading pages from disk only as necessary. |
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems. | 0 | This process produces a large set of caseframes coupled with a list of the noun phrases that they extracted. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | A corpus of German newspaper commentaries has been assembled and annotated with different information (and currently, to different degrees): part-of-speech, syntax, rhetorical structure, connectives, co-reference, and information structure. |
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages. | 0 | For each language, we took the same number of sentences from the bitext as there are in its treebank, and trained a supervised feature-HMM. |
This paper presents a maximum entropy-based named entity recognizer (NER). | 0 | Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix. |
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods. | 0 | Next, for each pair of NE categories, we collect all the contexts and find the keywords which are topical for that NE category pair. |
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance. | 0 | The original OUT counts co(s, t) are weighted by a logistic function wλ(s, t): To motivate weighting joint OUT counts as in (6), we begin with the “ideal” objective for setting multinomial phrase probabilities 0 = {p(s|t), dst}, which is the likelihood with respect to the true IN distribution pi(s, t). |
The approach has been successfully tested on the 8 000-word Verbmobil task. | 0 | However there is no global pruning. |
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University. | 0 | Figure 2 shows a screenshot (which is of somewhat limited value, though, as color plays a major role in signalling the different statuses of the information). |
This paper presents a maximum entropy-based named entity recognizer (NER). | 0 | Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique. |
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far. | 0 | For all grammars, we use fine-grained PoS tags indicating various morphological features annotated therein. |
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods. | 0 | In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold, 30. |
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back. | 0 | Figure 1 provides some statistics about this corpus. |
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming. | 0 | When aligning the words in parallel texts (for language pairs like SpanishEnglish, French-English, ItalianGerman,...), we typically observe a strong localization effect. |
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | On a set of 11 sentence fragments, the A set, where they reported 100% recall and precision for name identification, we had 73% recall and 80% precision. |
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM. | 0 | The first value reports resident size after loading; the second is the gap between post-loading resident memory and peak virtual memory. |
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task. | 0 | German city names are replaced by category markers. |
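The rows above follow a three-column schema: source_text (string), label (int64, values 0 or 1), and target_text (string). As a minimal sketch of consuming such rows, assuming the split is exported as CSV (the file contents and sample row below are hypothetical, not taken from this dataset):

```python
import csv
import io

# Hypothetical CSV export of the (source_text, label, target_text) schema.
sample = io.StringIO(
    "source_text,label,target_text\n"
    '"Here we present two algorithms.",0,"The first method builds on prior results."\n'
)

rows = list(csv.DictReader(sample))

# csv yields every field as a string; cast label back to int to match
# the declared int64 type with values 0 or 1.
for row in rows:
    row["label"] = int(row["label"])

print(len(rows), rows[0]["label"])
```

Quoted fields let commas appear inside source_text and target_text without breaking the row parse, which matters given that target_text entries here run up to 5.38k characters.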