source_text: string (lengths 27 to 368)
label: int64 (0 or 1)
target_text: string (lengths 1 to 5.38k)
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
0
Unsupervised learning approaches appear to be a natural solution to this problem, as they require only unannotated text for training models.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
Since these are distinct syntactic units, they are typically segmented.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
Chinese 汉字 han4zi4 'Chinese character'; this is the same word as Japanese kanji.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed jointly with parsing in a pipeline.
0
Available at http://nlp.stanford.edu/projects/arabic.shtml.
The authors show that PATB is similar to other treebanks but that annotation consistency remains low.
0
We compare the manually annotated grammar, which we incorporate into the Stanford parser, to both the Berkeley (Petrov et al., 2006) and Bikel (Bikel, 2004) parsers.
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
In this subsection, we will report the results of the experiment, in terms of the number of words, phrases or clusters.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
Much could be done to further reduce memory consumption.
Starting from a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reorderings between source and target language in order to achieve an efficient search algorithm.
0
The word joining is done on the basis of a likelihood criterion.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
Note that Chang, Chen, and Chen (1991), in addition to word-frequency information, include a constraint-satisfaction model, so their method is really a hybrid approach.
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
This modification brings the method closer to the DL-CoTrain algorithm described earlier, and is motivated by the intuition that all three labels should be kept healthily populated among the unlabeled examples, preventing one label from dominating; this deserves more theoretical investigation.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
人 ren2 'person' is a fairly uncontroversial case of a monographemic word, and 中国 zhong1guo2 (middle country) 'China' a fairly uncontroversial case of a digraphemic word.
The AdaBoost algorithm was developed for supervised learning.
0
AdaBoost.MH can be applied to the problem using these pseudolabels in place of supervised examples.
In this paper the author evaluates machine translation performance for six European language pairs in a shared task: translating French, German, and Spanish texts to English and back.
0
However, since we extracted the test corpus automatically from web sources, the reference translation was not always accurate, due to sentence alignment errors or because translators did not adhere to a strict sentence-by-sentence translation (say, using pronouns when referring to entities mentioned in the previous sentence).
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
genitiveMark indicates recursive NPs with an indefinite nominal left daughter and an NP right daughter.
The authors show that PATB is similar to other treebanks but that annotation consistency remains low.
0
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
The state function is integrated into the query process so that, in lieu of the query p(w_n | w_1^{n-1}), the application issues the query p(w_n | s(w_1^{n-1})), which also returns s(w_1^n).
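To make this query pattern concrete, here is a minimal Python sketch using the kenlm bindings; the model path "lm.arpa" and the example sentence are illustrative assumptions.

import kenlm

model = kenlm.Model("lm.arpa")  # hypothetical model file
state, out_state = kenlm.State(), kenlm.State()
model.BeginSentenceWrite(state)  # state now encodes s(<s>)

log10_total = 0.0
for word in "this is a test".split():
    # Query p(word | s(history)); BaseScore also writes s(history + word)
    # into out_state, so the full history never has to be re-scanned.
    log10_total += model.BaseScore(state, word, out_state)
    state, out_state = out_state, state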
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
(2010) reports the best unsupervised results for English.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed jointly with parsing in a pipeline.
0
For humans, this characteristic can impede the acquisition of literacy.
The manual evaluation of translations on a graded scale from 1–5 seemed to be very hard to perform.
0
There may occasionally be a system clearly at the top or at the bottom, but most systems are so close that it is hard to distinguish them.
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
The transition from 们 to a final state transduces ε to the grammatical tag PL with cost cost(unseen(们)); for a stem X, cost(X们) = cost(X) + cost(unseen(们)).
NER is useful in many NLP applications such as information extraction and question answering. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.
This paper presents KenLM: Faster and Smaller Language Model Queries.
0
The hash variant is a reverse trie with hash tables, a more memory-efficient version of SRILM’s default.
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
We compared the ATB to treebanks for Chinese (CTB6), German (Negra), and English (WSJ) (Table 4).
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data but had translations into a resource-rich language.
0
Our results outperform strong unsupervised baselines as well as approaches that rely on direct projections, and bridge the gap between purely supervised and unsupervised POS tagging models.
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
To do this, the x's and y's are stored in the next 2n1 + 2n2 tapes, and M goes to a universal state.
We chose one of four labels for each example: location, person, organization, or noise, where the noise category was used for items that were outside the three categories.
0
Unsupervised Models for Named Entity Classification Collins
These clusters are computed using an SVD variant without relying on transitional structure.
0
The authors show that PATB is similar to other treebanks but that annotation consistency remains low.
0
The Bikel GoldPOS configuration only supplies the gold POS tags; it does not force the parser to use them.
Most IE researchers have been creating paraphrase knowledge by hand and for specific tasks.
0
If a phrase does not contain any keywords, the phrase is discarded.
The manual evaluation of translations on a graded scale from 1–5 seems to be very hard to perform.
0
In Figure 4, we display the number of system comparisons for which we concluded statistical significance.
While many systems had similar performance, the results offered interesting insights, especially about the relative performance of statistical and rule-based systems.
0
For the automatic scoring method BLEU, we can distinguish three quarters of the systems.
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order on syntactic disambiguation.
0
segmentation (Table 2).
Here we present two algorithms.
0
Again, this deserves further investigation.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
There is a sizable literature on Chinese word segmentation: recent reviews include Wang, Su, and Mo (1990) and Wu and Tseng (1993).
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
An initial step of any text-analysis task is the tokenization of the input into words.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
Ex: Mr. Cristiani is the president ...
In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
(b) supports the candidate if the selected semantic tags match those of the anaphor. Lexical computes the degree of lexical overlap between the candidate and the anaphor. Recency computes the relative distance between the candidate and the anaphor. SynRole computes the relative frequency with which the candidate's syntactic role occurs in resolutions. (Figure 4: General Knowledge Sources.) The Lexical KS returns 1 if the candidate and anaphor are identical, 0.5 if their head nouns match, and 0 otherwise.
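A minimal sketch of the Lexical KS scoring just described, assuming plain string inputs; head_noun is a hypothetical stand-in for BABAR's actual head-noun extraction.

def head_noun(np_string):
    # Crude stand-in: treat the last token of the noun phrase as its head.
    return np_string.split()[-1].lower()

def lexical_ks(candidate, anaphor):
    if candidate == anaphor:
        return 1.0  # identical strings
    if head_noun(candidate) == head_noun(anaphor):
        return 0.5  # head nouns match
    return 0.0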
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
The size of TRIE is particularly sensitive to vocabulary size, so vocabulary filtering is quite effective at reducing model size.
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
0
Such constraints are derived from training data, expressing some relationship between features and outcome.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
The human judges were presented with the following definition of adequacy and fluency, but no additional instructions:
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
The morphological analysis itself can be handled using well-known techniques from finite-state morphology. The initial estimates are derived from the frequencies in the corpus of the strings of hanzi making up each word.
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
0
This is the parse that is closest to the centroid of the observed parses under the similarity metric.
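Operationally, the parse closest to the centroid is the one maximizing its summed similarity to the other observed parses. A sketch, where similarity() stands in for whatever pairwise metric the combination scheme uses:

def centroid_parse(parses, similarity):
    # The centroid-closest parse maximizes total similarity to the rest.
    return max(parses,
               key=lambda p: sum(similarity(p, q) for q in parses if q is not p))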
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
There are two differences between this method and the DL-CoTrain algorithm: it separates the spelling and contextual features, alternating between labeling and learning with the two types of features.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
This is an issue that we have not addressed at the current stage of our research.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
We allow any number of bits from 2 to 25, unlike IRSTLM (8 bits) and BerkeleyLM (17−20 bits).
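For illustration only (an assumed scheme, not necessarily the toolkit's exact one), b-bit quantization can be sketched as equal-frequency binning into 2^b centers, so a stored value costs b bits instead of a full float:

import numpy as np

def build_codebook(values, bits):
    # 2**bits bins of roughly equal frequency; each is represented by its mean.
    bins = np.array_split(np.sort(values), 2 ** bits)
    return np.array([b.mean() for b in bins if len(b)])

def encode(values, codebook):
    # Nearest codebook center for each value (codebook is sorted).
    values = np.asarray(values)
    idx = np.clip(np.searchsorted(codebook, values), 1, len(codebook) - 1)
    left, right = codebook[idx - 1], codebook[idx]
    return np.where(values - left <= right - values, idx - 1, idx)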
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
We respond to this on the one hand with a format for its underspecification (see 2.4) and on the other hand with an additional level of annotation that attends only to connectives and their scopes (see 2.5), which is intended as an intermediate step on the long road towards a systematic and objective treatment of rhetorical structure.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
The likelihood of the observed data under the model is ∏_i P(y_i, x_i), where P(y_i, x_i) is defined as in (9).
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
Any opinions, findings, conclusions, or recommendations expressed in this paper are those of the authors, and do not necessarily reflect the views of the funding organizations.
In this paper, the authors are of the opinion that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
0
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order on syntactic disambiguation.
0
(c) Coordination ambiguity is shown in dependency scores by e.g., ∗SSS R) and ∗NP NP NP R).
The resulting model is compact, efficiently learnable and linguistically expressive.
0
Overall, the difference between our most basic model (1TW) and our full model (+FEATS) is 21.2% and 13.1% for the best and median settings respectively.
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Ma¨rkische Allgemeine Zeitung, a German regional daily.
0
This withdrawal by the treasury secretary is understandable, though.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
This paper presents KenLM: Faster and Smaller Language Model Queries.
0
The ratio of buckets to entries is controlled by space multiplier m > 1.
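A hedged sketch of sizing such a table: allocating m buckets per entry keeps the load factor at 1/m, trading memory for shorter probe chains. Python's built-in hash() stands in for a MurmurHash-style function.

class ProbingTable:
    def __init__(self, items, m=1.5):
        assert m > 1.0  # space multiplier: buckets per entry
        self.size = int(m * len(items)) + 1
        self.buckets = [None] * self.size
        for key, value in items:
            i = hash(key) % self.size
            while self.buckets[i] is not None:  # linear probing on collision
                i = (i + 1) % self.size
            self.buckets[i] = (key, value)

    def lookup(self, key):
        i = hash(key) % self.size
        while self.buckets[i] is not None:
            if self.buckets[i][0] == key:
                return self.buckets[i][1]
            i = (i + 1) % self.size
        return None  # an absent key eventually hits an empty bucket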
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
The features are used to represent each example for the learning algorithm.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
Next, we represent the input sentence as an unweighted finite-state acceptor (FSA) I over H. Let us assume the existence of a function Id, which takes as input an FSA A, and produces as output a transducer that maps all and only the strings of symbols accepted by A to themselves (Kaplan and Kay 1994).
In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
We refer to this process as Reliable Case Resolution because it involves finding cases of anaphora that can be easily resolved with their antecedents.
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
nonalpha=x appears if the spelling contains any characters other than upper- or lower-case letters (e.g., N.Y. would contribute this feature; IBM would not).
It is annotated with several kinds of data: morphology, syntax, rhetorical structure, connectives, coreference, and information structure.
0
This withdrawal by the treasury secretary is understandable, though.
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective tree banks is often neglected because of the relative scarcity of problematic constructions.
0
By applying an inverse transformation to the output of the parser, arcs with non-standard labels can be lowered to their proper place in the dependency graph, giving rise to non-projective structures. (The dependency graph has been modified to make the final period a dependent of the main verb instead of a dependent of a special root node for the sentence.)
We chose one of four labels for each example: location, person, organization, or noise, where the noise category was used for items that were outside the three categories.
0
This allows the learners to "bootstrap" each other by filling in the labels of the instances on which the other side has abstained so far.
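A minimal sketch of that bootstrapping step, assuming hypothetical learner objects that return None when they abstain:

def cotrain_fill(learner_a, learner_b, unlabeled):
    for x in unlabeled:
        ya, yb = learner_a.predict(x), learner_b.predict(x)
        if ya is None and yb is not None:
            learner_a.add_example(x, yb)  # b fills in a's abstention
        elif yb is None and ya is not None:
            learner_b.add_example(x, ya)  # a fills in b's abstention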
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
In these examples, the names identified by the two systems (if any) are underlined; the sentence with the correct segmentation is boxed. The differences in performance between the two systems relate directly to three issues, which can be seen as differences in the tuning of the models, rather than representing differences in the capabilities of the model per se.
The authors show that PATB is similar to other treebanks but that annotation consistency remains low.
0
genitiveMark indicates recursive NPs with an indefinite nominal left daughter and an NP right daughter.
This corpus has several advantages: it is annotated at different levels.
0
A corpus of German newspaper commentaries has been assembled at Potsdam University, and annotated with different linguistic information, to different degrees.
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
In many cases, inspection of either the spelling or context alone is sufficient to classify an example.
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
Co-occurrences among the particles themselves are subject to further syntactic and lexical constraints relative to the stem.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
Finally, assuming a simple bigram backoff model, we can derive the probability estimate for the particular unseen word.
In this paper the author evaluates machine translation performance for six European language pairs in a shared task: translating French, German, and Spanish texts to English and back.
0
Sentences and systems were randomly selected and randomly shuffled for presentation.
They have made use of local and global features to deal with the instances of the same token in a document.
0
Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.
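To illustrate the sub-class scheme, a small sketch mapping a recognized name span to tags (the exact tag spellings here are assumptions):

def subclass_tags(name_tokens, name_class):
    # One-token names are N_unique; longer names get begin/continue/end.
    n = len(name_tokens)
    if n == 1:
        return [name_class + "_unique"]
    return ([name_class + "_begin"]
            + [name_class + "_continue"] * (n - 2)
            + [name_class + "_end"])

# e.g. subclass_tags(["New", "York", "City"], "LOC")
# -> ["LOC_begin", "LOC_continue", "LOC_end"]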
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
0
They showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the inter-human agreement.
0
Purely statistical approaches have not been very popular, and so far as we are aware earlier work by Sproat and Shih (1990) is the only published instance of such an approach.
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
0
We adopted this state-of-the-art model because it makes it easy to experiment with various ways of incorporating our novel constraint feature into the log-linear emission model.
In this paper, the authors are of the opinion that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
0
In practice, this sparsity constraint is difficult to incorporate in a traditional POS induction system (Mérialdo, 1994; Johnson, 2007; Gao and Johnson, 2008; Graça et al., 2009; Berg-Kirkpatrick et al., 2010).
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
2 for the accuracy of the different methods.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
Figure 3 shows examples of semantic expectations that were learned.
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.
0
Section 2 describes our baseline techniques for SMT adaptation, and section 3 describes the instance-weighting approach.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
Keys to the table are hashed, using for example Austin Appleby’s MurmurHash2, to integers evenly distributed over a large range.
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
0
No. 1: predecessor coverage set ({1, …, m} \ {l}, l′) → successor coverage set …
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
The features are used to represent each example for the learning algorithm.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
Our first algorithm is similar to Yarowsky's, but with some important modifications motivated by (Blum and Mitchell 98).
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
The normalization on a per-judge basis gave very similar ranking, only slightly less consistent with the ranking from the pairwise comparisons.
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.
0
While it is possible to derive a closed form solution for this convex objective function, it would require the inversion of a matrix of order |Vf|.
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
The rationale for treating these semantic labels differently is that they are specific and reliable (as opposed to the WordNet classes, which are more coarse and more noisy due to polysemy).
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
0
A Single Generative Model for Joint Morphological Segmentation and Syntactic Parsing
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
0
A dynamic programming recursion similar to the one in Eq. 2 is evaluated.
In this paper, the authors are of the opinion that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
0
Previous work has attempted to incorporate such constraints into token-level models via heavy-handed modifications to the inference procedure and objective function (e.g., posterior regularization and ILP decoding) (Graça et al., 2009; Ravi and Knight, 2009).
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
The scores and confidence intervals are detailed first in Figures 7–10 in table form (including ranks), and then in graphical form in Figures 11–16.
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
Instead, we focused on phrases and set the frequency threshold to 2, and so were able to utilize a lot of phrases while minimizing noise.
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data but had translations into a resource-rich language.
0
To bridge this gap, we consider a practically motivated scenario, in which we want to leverage existing resources from a resource-rich language (like English) when building tools for resource-poor foreign languages. We assume that absolutely no labeled training data is available for the foreign language of interest, but that we have access to parallel data with a resource-rich language.
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
In this paper, Section 2 begins by explaining how contextual role knowledge is represented and learned.
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
The effect of the pruning threshold t0 is shown in Table 5.
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
0
The 14 general-language features embody straightforward cues: frequency, “centrality” as reflected in model scores, and lack of burstiness.
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
Words and punctuation that appear in brackets are considered optional.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
This causes a problem for reverse trie implementations, including SRILM itself, because it leaves n+1-grams without an n-gram node pointing to them.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
The minimal dictionary encoding this information is represented by the WFST in Figure 2(a).
They showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the inter-human agreement.
0
Lexical-knowledge-based approaches that include statistical information generally presume that one starts with all possible segmentations of a sentence, and picks the best segmentation from the set of possible segmentations using a probabilistic or cost-based scoring mechanism.
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.
0
We used a standard one-pass phrase-based system (Koehn et al., 2003), with the following features: relative-frequency TM probabilities in both directions; a 4-gram LM with Kneser-Ney smoothing; word-displacement distortion model; and word count.
It is probably the first analysis of Arabic parsing of this kind.
0
This PCFG is incorporated into the Stanford Parser, a factored model that chooses a 1-best parse from the product of constituency and dependency parses.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0