source_text: string (lengths 27–368)
label: int64 (values 0–1)
target_text: string (lengths 1–5.38k)
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
A Brief Introduction to the Chinese Writing System Most readers will undoubtedly be at least somewhat familiar with the nature of the Chinese writing system, but there are enough common misunderstandings that it is as well to spend a few paragraphs on properties of the Chinese script that will be relevant to topics discussed in this paper.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
Word frequencies are estimated by a re-estimation procedure that involves applying the segmentation algorithm presented here to a corpus of 20 million words, using ... (Footnote 8: Our training corpus was drawn from a larger corpus of mixed-genre text consisting mostly of ...)
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
2.
In this paper, the authors take the view that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
0
Across all languages, high performance can be attained by selecting a single tag per word type.
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
37.
We chose one of four labels for each example: location, person, organization, or noise, where the noise category was used for items that were outside the three categories.
0
Input: {(x1,i, ...)}; Initialize: ∀i, j: e(xi) = 0.
There are clustering approaches that assign a single POS tag to each word type.
0
0 68.4 48.
We chose one of four labels for each example: location, person, organization, or noise, where the noise category was used for items that were outside the three categories.
0
(Riloff and Jones 99) was brought to our attention as we were preparing the final version of this paper.
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table.
0
Figure 1 shows sample sentences from these domains, which are widely divergent.
Because many systems performed similarly, they are not able to draw strong conclusions on the question of whether manual and automatic evaluation metrics correlate.
0
However, it is also mostly political content (even if not focused on the internal workings of the European Union) and opinion.
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
After adding a ROOT node to all trees, we train a grammar using six split-and-merge cycles and no Markovization.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
Caching for IRSTLM is smaller at 0.09 GB resident memory, though it supports only a single thread.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
This actually happens quite frequently (more below), so that the rankings are broad estimates.
Combining multiple highly-accurate independent parsers yields promising results.
0
Let s = a + b.
Here we present two algorithms.
0
{xi1, ..., xim} associated with the ith example.
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
In addition to the optimizations specific to each data structure described in Section 2, we implement several general optimizations for language modeling.
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
We have described two data structures for language modeling that achieve substantial reductions in time and memory cost.
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
7 68.3 56.
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
0
4.1 The Task and the Corpus.
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
0
Each connected path (l1 ... lk) ∈ L corresponds to one morphological segmentation possibility of W. [The Parser] Given a sequence of input tokens W = w1 ... wn and a morphological analyzer, we look for the most probable parse tree π s.t. ...
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
0
The domain is general politics, economics and science.
A beam search concept is applied as in speech recognition.
0
Search hypotheses are processed separately according to their coverage set C. The best scored hypothesis for each coverage set is computed: Q_Beam(C) = max_{e, e', S, j} Q_{e'}(e, S, C, j). The hypothesis (e', e, S, C, j) is pruned if Q_{e'}(e, S, C, j) < t_0 · Q_Beam(C), where t_0 is a threshold to control the number of surviving hypotheses.
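As an illustration of this pruning rule, here is a minimal Python sketch (not the paper's implementation; the data layout, function name, and probability-scale threshold are assumptions):

```python
from collections import defaultdict

def beam_prune(hypotheses, t0):
    """Per-coverage-set beam pruning: a hypothesis with coverage set C and
    score Q is dropped when Q < t0 * Q_Beam(C), where Q_Beam(C) is the best
    score among hypotheses sharing the same coverage set.

    `hypotheses` is a list of (coverage, score) pairs; `coverage` is a
    frozenset of covered source positions, `score` a probability-like value
    (so a meaningful t0 is < 1 here; the t0 = 10.0 quoted elsewhere in this
    excerpt presumably refers to a different score scale).
    """
    q_beam = defaultdict(lambda: float("-inf"))
    for cov, score in hypotheses:          # first pass: best score per C
        q_beam[cov] = max(q_beam[cov], score)
    # second pass: keep only hypotheses within a factor t0 of Q_Beam(C)
    return [(cov, s) for cov, s in hypotheses if s >= t0 * q_beam[cov]]

hyps = [(frozenset({1, 2}), 0.9), (frozenset({1, 2}), 0.05), (frozenset({3}), 0.4)]
print(beam_prune(hyps, t0=0.1))  # drops the 0.05 hypothesis (0.05 < 0.1 * 0.9)
```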
The use of global features has shown excellent results on the MUC-6 and MUC-7 test data.
0
For example, in the sentence that starts with “Bush put a freeze on . . .
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors all bear on syntactic disambiguation.
0
Second, we show that although the Penn Arabic Treebank is similar to other treebanks in gross statistical terms, annotation consistency remains problematic.
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform.
0
The easiest language pair according to BLEU (English-French: 28.33) received worse manual scores than the hardest (English-German: 14.01).
Here both parametric and non-parametric models are explored.
0
Recently, combination techniques have been investigated for part of speech tagging with positive results (van Halteren et al., 1998; Brill and Wu, 1998).
The resulting model is compact, efficiently learnable and linguistically expressive.
0
A crucial difference is that the number of parameters is greatly reduced as is the number of variables that are sampled during each iteration.
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.
0
The following three sections elaborate these different stages in more detail.
The corpus was annotated with different linguistic information.
0
When the signal is a coordinating conjunction, the second span is usually the clause following the conjunction; the first span is often the clause preceding it, but sometimes stretches further back.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed both in a pipeline and jointly with parsing.
0
Table 9 shows that MADA produces a high quality segmentation, and that the effect of cascading segmentation errors on parsing is only 1.92% F1.
This paper presents a maximum entropy-based named entity recognizer (NER).
0
Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data.
A beam search concept is applied as in speech recognition.
0
The sequence of states needed to carry out the word reordering example in Fig.
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language.
0
Our “Projection” baseline is able to benefit from the bilingual information and greatly improves upon the monolingual baselines, but falls short of the “No LP” model by 2.5% on average.
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
0
The correct ambiguity resolution of the syntactic level therefore helps to resolve the morphological one, and vice versa.
The approach has been successfully tested on the 8 000-word Verbmobil task.
0
In.
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
See Table 2 for the tag set size of other languages.
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
We smooth Prf(p → ⟨s, p⟩) for rare and OOV segments (s ∈ l, l ∈ L, s unseen) using a “per-tag” probability distribution over rare segments, which we estimate using relative frequency estimates for once-occurring segments.
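To illustrate the idea of estimating a per-tag distribution for rare segments from once-occurring (hapax) segments, here is a Python sketch; the data layout and names are illustrative assumptions, not the authors' code:

```python
from collections import Counter

def per_tag_oov_distribution(tagged_segments):
    """Estimate a per-tag probability of emitting a rare/unseen segment,
    using once-occurring (hapax) segments as a proxy for OOV mass.

    `tagged_segments`: iterable of (segment, tag) pairs from training data.
    Returns {tag: P(unseen segment | tag)} via relative frequencies.
    """
    seg_counts = Counter(seg for seg, _ in tagged_segments)
    tag_totals = Counter(tag for _, tag in tagged_segments)
    hapax_per_tag = Counter(
        tag for seg, tag in tagged_segments if seg_counts[seg] == 1
    )
    return {tag: hapax_per_tag[tag] / tag_totals[tag] for tag in tag_totals}
```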
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
We thank United Informatics for providing us with our corpus of Chinese text, and BDC for the 'Behavior Chinese-English Electronic Dictionary.'
They have made use of local and global features to deal with the instances of the same token in a document.
0
As we will see from Table 3, not much improvement is derived from this feature.
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
We have developed a coreference resolver called BABAR that uses contextual role knowledge to make coreference decisions.
They showed that it was useful to abstract away from the details of the formalism and examine the nature of their derivation process as reflected by properties of their trees, and found that several of the formalisms considered can be seen as being closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
Gazdar (1985) considers a restriction of IG's in which no more than one nonterminal on the right-hand-side of a production can inherit the stack from the left-hand-side.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
However, there is again local grammatical information that should favor the split in the case of (1a): both ma3 'horse' and ma3lu4 are nouns, but only ma3 is consistent with the classifier pi1, the classifier for horses. By a similar argument, the preference for not splitting ma3lu4 could be strengthened in (1b) by the observation that the classifier tiao2 is consistent with long or winding objects like ma3lu4 'road' but not with ma3 'horse.'
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
0
We call this approach parser switching.
They have made use of local and global features to deal with the instances of the same token in a document.
0
Our results show that MENERGI performs as well as IdentiFinder when trained on comparable amount of training data.
The corpus was annotated with different linguistic information.
0
When the connective is an adverbial, there is much less clarity as to the range of the spans.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
Hieu Hoang named the code “KenLM” and assisted with Moses along with Barry Haddow.
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
The most frequent NE category pairs are “Person - Person” (209,236), followed by “Country - Country” (95,123) and “Person - Country” (75,509).
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
While sorted arrays could be used to implement the same data structure as PROBING, effectively making m = 1, we abandoned this implementation because it is slower and larger than a trie implementation.
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table.
0
The general-language features have a slight advantage over the similarity features, and both are better than the SVM feature.
In this paper, the authors take the view that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
0
1 53.8 47.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
Section 4.1 explained that state s is stored by applications with partial hypotheses to determine when they can be recombined.
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
In practice, this sparsity constraint is difficult to incorporate in a traditional POS induction system (Mérialdo, 1994; Johnson, 2007; Gao and Johnson, 2008; Graça et al., 2009; Berg-Kirkpatrick et al., 2010).
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
0
Since all long sentence translations are somewhat muddled, even a contrastive evaluation between systems was difficult.
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
0
See Figure 3 for a screenshot of the evaluation tool.
We chose one of four labels for each example: location, person, organization, or noise, where the noise category was used for items that were outside the three categories.
0
We can now compare this algorithm to that of (Yarowsky 95).
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
As is standard, we use a fixed constant K for the number of tagging states.
They focused on phrases which connect two Named Entities, and proceeded in two stages.
0
Next, for each pair of NE categories, we collect all the contexts and find the keywords which are topical for that NE category pair.
The authors show that PATB is similar to other treebanks but that annotation consistency remains low.
0
We are also grateful to Markus Dickinson, Ali Farghaly, Nizar Habash, Seth Kulick, David McCloskey, Claude Reichard, Ryan Roth, and Reut Tsarfaty for constructive discussions.
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
No. / Predecessor coverage set / Successor coverage set: 1. ({1, ..., m} \ {l}, l′) → ...
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table.
0
..., which precludes a single universal approach to adaptation.
There are clustering approaches that assign a single POS tag to each word type.
0
We are especially grateful to Taylor Berg-Kirkpatrick for running additional experiments.
A beam search concept is applied as in speech recognition.
0
A position is represented by the word at that position.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
For each language and setting, we report one-to-one (11) and many- to-one (m-1) accuracies.
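A Python sketch of the many-to-one (m-1) metric mentioned here; the greedy tie-breaking is an assumption of this sketch:

```python
from collections import Counter

def many_to_one_accuracy(gold, pred):
    """m-1 accuracy: map each induced cluster to its most frequent gold tag
    (several clusters may share one gold tag), then score tagging accuracy."""
    pair_counts = Counter(zip(pred, gold))
    best = {}
    for (cluster, tag), count in pair_counts.items():
        if count > best.get(cluster, ("", -1))[1]:
            best[cluster] = (tag, count)       # keep the majority gold tag
    return sum(best[c][0] == g for c, g in zip(pred, gold)) / len(gold)

print(many_to_one_accuracy(["N", "V", "N", "N"], [0, 1, 0, 1]))  # 0.75
```

One-to-one (1-1) accuracy instead constrains each induced cluster to map to a distinct gold tag, which requires a greedy or optimal bipartite matching rather than the independent argmax used here.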
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
This class-based model gives reasonable results: for six radical classes, Table 1 gives the estimated cost for an unseen hanzi in the class occurring as the second hanzi in a double GIVEN name.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
As with lexical expectations, the semantic classes of co-referring expressions are ... (Footnote 4: They may not be perfectly substitutable; for example, one NP may be more specific, e.g., “he” vs. “John F. Kennedy”.)
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
We quantify error categories in both evaluation settings.
The authors show that PATB is similar to other treebanks but that annotation consistency remains low.
0
Because the Bikel parser has been parameterized for Arabic by the LDC, we do not change the default model settings.
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
Simple Type-Level Unsupervised POS Tagging
We chose one of four labels for each example: location, person, organization, or noise, where the noise category was used for items that were outside the three categories.
0
The weak hypothesis can abstain from predicting the label of an instance x by setting h(x) = 0.
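A minimal sketch of such an abstaining weak hypothesis, assuming examples are represented as sets of active features (the feature names are invented for illustration):

```python
def feature_rule(feature, label):
    """A weak hypothesis built from one feature: predict label (+1 or -1)
    when the feature fires, otherwise abstain by returning 0."""
    return lambda x: label if feature in x else 0

h = feature_rule("contains:Mr.", +1)
print(h({"contains:Mr.", "capitalized"}), h({"lowercase"}))  # prints: 1 0
```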
Their results show that their high performance NER uses less training data than other systems.
0
We have used the Java-based opennlp maximum entropy package.
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
Lexicon and OOV Handling Our data-driven morphological-analyzer proposes analyses for unknown tokens as described in Section 5.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
The PROBING data structure is a rather straightforward application of these hash tables to store Ngram language models.
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results.
0
This token may further embed into a larger utterance, e.g., ‘bcl hneim’ (literally “in-the-shadow the-pleasant”, meaning roughly “in the pleasant shadow”) in which the dominated Noun is modified by a following space-delimited adjective.
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure.
0
It is difficult when IN and OUT are dissimilar, as they are in the cases we study.
The approach has been successfully tested on the 8 000-word Verbmobil task.
0
Here, the pruning threshold t0 = 10.0 is used.
This topic has been getting more attention, driven by the needs of various NLP applications.
0
Extract NE instance pairs with contexts: First, we extract NE pair instances with their context from the corpus.
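A hypothetical Python sketch of this extraction step, assuming NEs are already tagged inline as [TYPE words] and that the context is simply the token span between two neighbouring NEs (the tagging format and window limit are assumptions, not the paper's setup):

```python
import re

# Matches inline-tagged NEs such as "[PERSON Bush]".
NE = re.compile(r"\[(\w+) ([^\]]+)\]")

def extract_ne_pairs(sentence, max_context=10):
    """Return ((type1, ne1), (type2, ne2), context_tokens) triples for each
    pair of neighbouring NEs with a short intervening context."""
    pairs = []
    matches = list(NE.finditer(sentence))
    for left, right in zip(matches, matches[1:]):
        context = sentence[left.end():right.start()].split()
        if 0 < len(context) <= max_context:
            pairs.append(((left.group(1), left.group(2)),
                          (right.group(1), right.group(2)),
                          context))
    return pairs

print(extract_ne_pairs("[PERSON Bush] met [PERSON Putin] in [LOCATION Moscow]"))
```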
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
0
At most one feature in this group will be set to 1.
The use of global features has shown excellent results on the MUC-6 and MUC-7 test data.
0
This group contains a large number of features (one for each token string present in the training data).
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
(Carlson, Marcu 2001) responded to this situation with relatively precise (and therefore long!)
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
word => name 2.
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems.
0
We asked participants to each judge 200–300 sentences in terms of fluency and adequacy, the most commonly used manual evaluation metrics.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
A different but supplementary perspective on discourse-based information structure is taken by one of our partner projects, which is interested in conventionalized patterns (e.g., order of information in news reports).
The second algorithm builds on a boosting algorithm called AdaBoost.
0
One implementation issue deserves some elaboration.
Here we show how non-projective dependency parsing can be achieved by combining a data driven projective parser with special graph transformation techniques.
0
When the parser is trained on the transformed data, it will ideally learn not only to construct projective dependency structures but also to assign arc labels that encode information about lifts.
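The core graph transformation is a "lift" that reattaches a non-projective dependent to its grandparent while encoding the move in the arc label, so the parser's output can later be de-transformed. A minimal sketch, with an illustrative label-encoding scheme (the actual schemes in the literature differ in detail):

```python
def lift(heads, labels, d):
    """One 'lift': reattach dependent d from its head h to h's own head,
    and record the move in d's arc label so it can be undone after parsing.
    Assumes heads[d] is not already the root (index 0)."""
    h = heads[d]
    heads[d] = heads[h]                       # grandparent becomes new head
    labels[d] = labels[d] + "|" + labels[h]   # mark the lifted arc

# Example: token 3 attached to 2 is lifted to 2's head, token 1.
heads = {1: 0, 2: 1, 3: 2}
labels = {1: "root", 2: "obj", 3: "adv"}
lift(heads, labels, 3)
print(heads[3], labels[3])  # 1 adv|obj
```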
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
A related point is that mutual information is helpful in augmenting existing electronic dictionaries, (cf.
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
[Figure 2: An abstract example illustrating the segmentation algorithm. Panels (a)-(d) show the dictionary WFST D, with arc costs such as D:d/0.000 and B:b/0.000, and the best path BestPath(Id(I) ∘ D*).]
This paper talks about Unsupervised Models for Named Entity Classification.
0
The DL-CoTrain algorithm can be motivated as being a greedy method of satisfying the above 2 constraints.
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
Mai.
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.
0
This stage of label propagation results in a tag distribution ri over labels y, which encodes the proportion of times the middle word of ui ∈ Vf aligns to English words vy tagged with label y. The second stage consists of running traditional label propagation to propagate labels from these peripheral vertices to all foreign language vertices in the graph, optimizing the following objective (equation not reproduced in this excerpt), where the qi (i = 1, ..., |Vf|) are the label distributions over the foreign language vertices and µ and ν are hyperparameters that we discuss in §6.4. [Section 5, POS Induction:] After running label propagation (LP), we compute tag probabilities for foreign word types x by marginalizing the POS tag distributions of foreign trigrams ui = x− x x+ over the left and right context words.
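The marginalization step admits a simple sketch: average the tag distributions qi of every trigram whose middle word is x. The input layout below is an assumption, and uniform averaging stands in for the paper's exact weighting:

```python
from collections import defaultdict

def word_type_tag_probs(trigram_tag_dists):
    """Average the tag distributions of all trigrams whose middle word is x,
    yielding a tag distribution per word type x.

    trigram_tag_dists: {(left, x, right): {tag: prob}}
    """
    sums = defaultdict(lambda: defaultdict(float))
    counts = defaultdict(int)
    for (_, x, _), dist in trigram_tag_dists.items():
        counts[x] += 1
        for tag, p in dist.items():
            sums[x][tag] += p
    return {x: {t: p / counts[x] for t, p in tags.items()}
            for x, tags in sums.items()}
```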
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
When a collision occurs, linear probing places the entry to be inserted in the next (higher index) empty bucket, wrapping around as necessary.
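A didactic Python sketch of linear probing in general (not KenLM's C++ implementation; it omits resizing and assumes the table never fills):

```python
class LinearProbingTable:
    """Open-addressing hash table with linear probing: on collision, walk to
    the next (higher index) empty bucket, wrapping around at the array end."""

    EMPTY = None

    def __init__(self, capacity):
        self.buckets = [self.EMPTY] * capacity

    def insert(self, key, value):
        i = hash(key) % len(self.buckets)
        while self.buckets[i] is not self.EMPTY and self.buckets[i][0] != key:
            i = (i + 1) % len(self.buckets)   # probe next bucket, wrapping
        self.buckets[i] = (key, value)

    def lookup(self, key):
        i = hash(key) % len(self.buckets)
        while self.buckets[i] is not self.EMPTY:
            if self.buckets[i][0] == key:
                return self.buckets[i][1]
            i = (i + 1) % len(self.buckets)
        return None

t = LinearProbingTable(8)
t.insert("the quick", 42)                          # e.g. an n-gram key
print(t.lookup("the quick"), t.lookup("missing"))  # 42 None
```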
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
(Footnote 10: Chinese speakers may object to this form, since the suffix men (PL) is usually restricted to ...)
Most IE researchers have been creating paraphrase knowledge by hand and for specific tasks.
0
Automatic Paraphrase Discovery based on Context and Keywords between NE Pairs
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
A consequence of the ability to generate tree sets with this property is that CC's under this definition can generate the following language, which cannot be generated by either TAG's or HG's.
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers.
0
Their theoretical finding is simply stated: classification error rate decreases toward the noise rate exponentially in the number of independent, accurate classifiers.
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
The model starts by generating a tag assignment for each word type in a vocabulary, assuming one tag per word.
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.
0
While it is possible to derive a closed form solution for this convex objective function, it would require the inversion of a matrix of order |Vf|.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the inter-human agreement.
0
For the examples given in (1) and (2) this certainly seems possible.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
(See also Wu and Fung [1994].)
It is probably the first analysis of Arabic parsing of this kind.
0
Finally, we provide a realistic evaluation in which segmentation is performed both in a pipeline and jointly with parsing (§6).