| source_text (string, length 27–368) | label (int64: 0 or 1) | target_text (string, length 1–5.38k) |
---|---|---|
In this paper, the authors argue that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | This paper proposes a simple and effective tagging method that directly models tag sparsity and other distributional properties of valid POS tag assignments. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | Making the reasonable assumption that similar information is relevant for solving these problems in Chinese, it follows that a prerequisite for intonation-boundary assignment and prominence assignment is word segmentation. |
They plan to extend instance-weighting to other standard SMT components and to capture the degree of generality of phrase pairs. | 0 | The general-language features have a slight advantage over the similarity features, and both are better than the SVM feature. |
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages. | 0 | To do this, the x's and y's are stored in the next 2n1 + 2n2 tapes, and M goes to a universal state. |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | HR0011-06-C-0022. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | Preferences for constituent order (especially in languages with relatively free word order) often belong to this group. |
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity. | 0 | In contrast, our morphological probabilities are based on a unigram, lexeme-based model, and all other (local and non-local) contextual considerations are delegated to the PCFG. |
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories. | 0 | In the named entity task, X1 might be the instance space for the spelling features, X2 might be the instance space for the contextual features. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | a classifier. |
They have made use of local and global features to deal with instances of the same token in a document. | 0 | Bikel et al. |
Their results show that their high-performance NER uses less training data than other systems. | 0 | Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones). |
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. | 0 | We have presented a method for unsupervised part-of-speech tagging that considers a word type and its allowed POS tags as a primary element of the model. |
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse. | 0 | In recent years, coreference resolvers have been evaluated as part of MUC6 and MUC7 (MUC7 Proceedings, 1998). |
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | It is formally straightforward to extend the grammar to include these names, though it does increase the likelihood of overgeneration and we are unaware of any working systems that incorporate this type of name. |
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines. | 0 | Standard SMT systems have a hierarchical parameter structure: top-level log-linear weights are used to combine a small set of complex features, interpreted as log probabilities, many of which have their own internal parameters and objectives. |
The use of global features has shown excellent results on the MUC-6 and MUC-7 test data. | 0 | In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | One conclusion drawn from this annotation effort was that for humans and machines alike, 2 www.sfs.nphil.unituebingen.de/Elwis/stts/ stts.html 3 www.coli.unisb.de/sfb378/negra-corpus/annotate. |
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications. | 0 | The entries in such a lexicon may be thought of as meaningful surface segments paired up with their PoS tags li = (si, pi), but note that a surface segment s need not be a space-delimited token. |
Combining multiple highly-accurate independent parsers yields promising results. | 0 | Consider for a set of constituents the isolated constituent precision parser metric, the portion of isolated constituents that are correctly hypothesized. |
It is annotated with several layers of data: morphology, syntax, rhetorical structure, connectors, coreference and information structure. | 0 | Due to the dramatic fiscal situation in Brandenburg, she now surprisingly withdrew legislation drafted more than a year ago, and suggested deciding on it not before 2003. |
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming. | 0 | Otherwise for the predecessor search hypothesis, we would have chosen a position that would not have been among the first n uncovered positions. |
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications. | 0 | The same argument holds for resolving PP attachment of a prefixed preposition or marking conjunction of elements of any kind. |
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints. | 0 | 0 57.2 43. |
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously. | 0 | MENE is then trained on 80% of the training corpus, and tested on the remaining 20%. |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | (If fewer than n rules have Precision greater than pmin, we keep only those rules which exceed the precision threshold.) Note that taking the top n most frequent rules already makes the method robust to low-count events, hence we do not use smoothing, allowing low-count high-precision features to be chosen on later iterations. pmin was fixed at 0.95 in all experiments in this paper. |
The second algorithm builds on a boosting algorithm called AdaBoost. | 0 | The unlabeled data gives many such "hints" that two features should predict the same label, and these hints turn out to be surprisingly useful when building a classifier. |
It is probably the first analysis of Arabic parsing of this kind. | 0 | For example, one of the ATB samples was the determiner dhalika 'that.' The sample occurred in 1507 corpus positions, and we found that the annotations were consistent. |
Replacing this with a ranked evaluation seems to be more suitable. | 0 | The test data was again drawn from a segment of the Europarl corpus from the fourth quarter of 2000, which is excluded from the training data. |
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems. | 0 | This actually happens quite frequently (more below), so that the rankings are broad estimates. |
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources. | 0 | If a word is seen infrequently during training (less than a small count), then it will not be selected as a feature and all features in this group are set to 0. |
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | Of course, since the number of attested (phonemic) Mandarin syllables (roughly 1400, including tonal distinctions) is far smaller than the number of morphemes, it follows that a given syllable could in principle be written with any of several different hanzi, depending upon which morpheme is intended: the syllable zhong1 could be 'middle,' 'clock,' 'end,' or 'loyal.' |
The use of global features has shown excellent results on the MUC-6 and MUC-7 test data. | 0 | both MENE and IdentiFinder used more training data than we did (we used only the official MUC 6 and MUC7 training data). |
They focused on phrases which connect two Named Entities, and proceed in two stages. | 0 | One of such approaches uses comparable documents, which are sets of documents whose content are found/known to be almost the same, such as different newspaper stories about the same event [Shinyama and Sekine 03] or different translations of the same story [Barzilay 01]. |
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers. | 0 | We show the results of three of the experiments we conducted to measure isolated constituent precision under various partitioning schemes. |
Combining multiple highly-accurate independent parsers yields promising results. | 0 | Adding the isolated constituents to our hypothesis parse could increase our expected recall, but in the cases we investigated it would invariably hurt our precision more than we would gain on recall. |
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task. | 0 | This approach is compared to another reordering scheme presented in (Berger et al., 1996). |
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers. | 0 | For this reason, naïve Bayes classifiers are well-matched to this problem. |
BABAR showed successful results in both the terrorism and natural disaster domains, and contextual-role knowledge proved particularly beneficial for pronouns. | 0 | The F-measure score increased for both domains, reflecting a substantial increase in recall with a small decrease in precision. |
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier. | 0 | (Specifically, the limit n starts at 5 and increases by 5 at each iteration.) |
Here we present two algorithms. | 0 | Each vertex within a connected component must have the same label — in the binary classification case, we need a single labeled example to identify which component should get which label. |
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages. | 0 | Hence, their relationship to formalisms such as HG's and TAG's is of interest. |
Here we present two algorithms. | 0 | 1,000 of these were picked at random, and labeled by hand to produce a test set. |
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP). | 0 | During the search process, a partial hypothesis is extended by choosing a source sentence position, which has not been aligned with a target sentence position yet. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | As noted, this sentence consists of four words, namely ri4wen2 'Japanese,' zhang1yu2 'octopus,' zen3me0 'how,' and shuo1 'say.' |
There are clustering approaches that assign a single POS tag to each word type. | 0 | Figure 1: Graphical depiction of our model and summary of latent variables and parameters. |
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | Compared to last year’s shared task, the participants represent more long-term research efforts. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using a caseframe network for anaphora resolution, and used information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | This step ensures that the most frequent terms for each domain are labeled (in case some of them are not in WordNet) and labeled with the sense most appropriate for the domain. |
BABAR showed successful results in both the terrorism and natural disaster domains, and contextual-role knowledge proved particularly beneficial for pronouns. | 0 | For pronouns, however, all of the knowledge sources increased recall, often substantially, and with little if any decrease in precision. |
They have made use of local and global features to deal with instances of the same token in a document. | 0 | The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors). |
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications. | 0 | We use the HSPELL9 (Har’el and Kenigsberg, 2004) wordlist as a lexeme-based lexicon for pruning segmentations involving invalid segments. |
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages. | 0 | We briefly review it here for completeness. |
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language. | 0 | Unsupervised learning approaches appear to be a natural solution to this problem, as they require only unannotated text for training models. |
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse. | 0 | But in most cases they can be used interchangeably. |
Here we present two algorithms. | 0 | Of these cases, 38 were temporal expressions (either a day of the week or month of the year). |
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University. | 0 | 3.4 Salience-based text generation. |
They found replacing it with a ranked evaluation to be more suitable. | 0 | Participants and other volunteers contributed about 180 hours of labor in the manual evaluation. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | Their results are then compared with the results of an automatic segmenter. |
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | However, this optimistic search would not visit the entries necessary to store backoff information in the outgoing state. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | + cost(unseen(fm, as desired. |
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods. | 0 | Now we have sets of phrases which share a keyword and we have links between those sets. |
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | As we have noted in Section 2, the general semantic class to which a hanzi belongs is often predictable from its semantic radical. |
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | 18 We are grateful to ChaoHuang Chang for providing us with this set. |
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods. | 0 | Step 4. |
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | The morphological analysis itself can be handled using well-known techniques from finite-state morphology. 9 The initial estimates are derived from the frequencies in the corpus of the strings of hanzi making up. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | But it conflates the coordinating and discourse separator functions of wa into one analysis (Table 3). |
Two general approaches are presented and two combination techniques are described for each approach. | 0 | The computation of Pfr1(c)1Mi M k (C)) has been sketched before in Equations 1 through 4. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | So xue2sheng1+men0 (student+PL) 'students' occurs and we estimate its cost at 11.43; similarly we estimate the cost of jiang4+men0 (general+PL) 'generals' (as in xiao3jiang4+men0 'little generals') at 15.02. |
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages. | 0 | It can be easily shown from Thatcher's result that the path set of every local set is a regular set. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | On a set of 11 sentence fragments-the A set-where they reported 100% recall and precision for name identification, we had 73% recall and 80% precision. |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | segmentation (Table 2). |
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech. | 0 | The second experiment is limited to data from PDT.5 The training part of the treebank was projectivized under different encoding schemes and used to train memory-based dependency parsers, which were run on the test part of the treebank, consisting of 7,507 sentences and 125,713 tokens.6 The inverse transformation was applied to the output of the parsers and the result compared to the gold standard test set. |
The texts were annotated with the RSTtool. | 0 | This was also inspired by the work on the Penn Discourse Tree Bank7 , which follows similar goals for English. |
In this paper, the authors argue that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | 9 66.4 47. |
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages. | 0 | In our experiments, we used the same set of features as BergKirkpatrick et al. (2010): an indicator feature based In a traditional Markov model, the emission distribution PΘ(Xi = xi |Zi = zi) is a set of multinomials. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | Section 3 discusses the applications that have been completed with PCC, or are under way, or are planned for the future. |
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models. | 0 | For the examples given in (1) and (2) this certainly seems possible. |
The use of global features has shown excellent results on the MUC-6 and MUC-7 test data. | 0 | A Person-Prefix-List is compiled in an analogous way. |
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. | 0 | The token-level term is similar to the standard HMM sampling equations found in Johnson (2007). |
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources. | 0 | On the other hand, if it is seen as McCann Pte. |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | To investigate the influence of these factors, we analyze Modern Standard Arabic (henceforth MSA, or simply 'Arabic') because of the unusual opportunity it presents for comparison to English parsing results. |
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used simpler training procedure. | 0 | First, we learn weights on individual phrase pairs rather than sentences. |
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University. | 0 | 2.2 Syntactic structure. |
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM. | 0 | Therefore, for an n-gram w_1^n, all leftward extensions w_0^n are an adjacent block in the (n + 1)-gram array. |
It is probably the first analysis of Arabic parsing of this kind. | 0 | Segmentation errors cascade into the parsing phase, placing an artificial limit on parsing performance. |
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines. | 0 | The general-language features have a slight advantage over the similarity features, and both are better than the SVM feature. |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | The BLEU metric, as all currently proposed automatic metrics, is occasionally suspected to be biased towards statistical systems, especially the phrase-based systems currently in use. |
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system. | 0 | We focus on phrases which connect two Named Entities (NEs), and proceed in two stages. |
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance. | 0 | This is motivated by taking β po(s|t) to be the parameters of a Dirichlet prior on phrase probabilities, then maximizing posterior estimates p(s|t) given the IN corpus. |
They found replacing it with a ranked evaluation to be more suitable. | 0 | This work was supported in part under the GALE program of the Defense Advanced Research Projects Agency, Contract No. |
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources. | 0 | In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data. |
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously. | 0 | The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed. |
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm. | 0 | In the label propagation stage, we propagate the automatic English tags to the aligned Italian trigram types, followed by further propagation solely among the Italian vertices. The Italian vertices are connected to an automatically labeled English vertex. |
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages. | 0 | For all languages, the vocabulary sizes increase by several thousand words. |
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not. | 0 | Je voudrais préciser, à l'adresse du commissaire Liikanen, qu'il n'est pas aisé de recourir aux tribunaux nationaux. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | 1 The apparent difficulty of adapting constituency models to non-configurational languages has been one motivation for dependency representations (Hajič and Zemánek, 2004; Habash and Roth, 2009). |
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language. | 0 | In general, the neighborhoods can be more diverse and we allow a soft label distribution over the vertices. |
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier. | 0 | An edge indicates that the two features must have the same label. |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | Another option is the closedsource data structures from Sheffield (Guthrie and Hepple, 2010). |
This assumption, however, is not inherent to type-based tagging models. | 0 | One striking example is the error reduction for Spanish, which reduces error by 36.5% and 24.7% for the best and median settings respectively. |
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process. | 0 | Table 2: Statistics (number of tokens, word types, and tags) for the corpora used in experiments: English, Danish, Dutch, German, Portuguese, Spanish, and Swedish. |
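Every preview row above follows the same `source_text | label | target_text |` layout declared in the header. A minimal sketch of turning one such row into a typed record; the function name and the sample row are illustrative, and only the three column names and the int64 0/1 label come from the header (this also assumes the text fields contain no literal `|`, which holds for the rows shown here):

```python
def parse_row(line: str) -> dict:
    """Split a 'source | label | target |' preview row into a record."""
    # Drop trailing whitespace and the closing delimiter, then split on '|'.
    # Assumes neither text field contains a literal '|' character.
    parts = [p.strip() for p in line.rstrip().rstrip("|").split("|")]
    if len(parts) != 3:
        raise ValueError(f"expected 3 fields, got {len(parts)}: {parts!r}")
    source_text, label, target_text = parts
    return {
        "source_text": source_text,
        "label": int(label),  # int64 column whose observed values are 0 or 1
        "target_text": target_text,
    }

# One row copied from the preview above.
row = ("Here we present two algorithms. | 0 | "
       "1,000 of these were picked at random, and labeled by hand to "
       "produce a test set. |")
record = parse_row(row)
```

For a more robust pipeline one would load the published dataset files directly (e.g. with the `datasets` or `csv` libraries) rather than re-parsing the rendered preview, since the preview truncates long cells.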