source_text: string (length 27–368)
label: int64 (values 0 or 1)
target_text: string (length 1–5.38k)
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
Topicalization of NP subjects in SVO configurations causes confusion with VO (pro-drop).
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
Statistics are printed before Moses exits and after parts of the decoder have been destroyed.
Two general approaches are presented and two combination techniques are described for each approach.
0
Features and context were initially introduced into the models, but they failed to offer any gains in performance.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
For IE, the system must be able to distinguish between semantically similar noun phrases that play different roles in an event.
These clusters are computed using an SVD variant without relying on transitional structure.
0
Overall, the difference between our most basic model (1TW) and our full model (+FEATS) is 21.2% and 13.1% for the best and median settings respectively.
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
0
Compared to the EMEA/EP setting, the two domains in the NIST setting are less homogeneous and more similar to each other; there is also considerably more IN text available.
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
However, lazy mapping is generally slow because queries against uncached pages must wait for the disk.
This paper presents unsupervised models for named entity classification.
0
Each learner is free to pick the labels for these instances.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
However, this result is consistent with the results of experiments discussed in Wu and Fung (1994).
They have made use of local and global features to deal with the instances of the same token in a document.
0
ICOC and CSPP contributed the greatest improvements.
This assumption, however, is not inherent to type-based tagging models.
0
3 60.7 50.
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
As such, global information from the whole context of a document is important to more accurately recognize named entities.
They have made use of local and global features to deal with the instances of the same token in a document.
0
A secondary reference resolution classifier has information on the class assigned by the primary classifier.
In this paper, Das and Petrov approached the induction of unsupervised part-of-speech taggers for languages that have no labeled training data but do have translated text in a resource-rich language.
0
10) and trained both EM and L-BFGS for 1000 iterations.
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
We first make use of our morphological analyzer to find all segmentation possibilities by chopping off all prefix sequence possibilities (including the empty prefix) and constructing a lattice from them.
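The prefix-chopping step can be sketched as a simple enumeration; the prefix inventory and the example token below are illustrative only, not the paper's lexicon, and a real system would validate each remainder against its morphological analyzer before adding lattice arcs:

```python
def prefix_splits(token, prefixes=("b", "h", "w", "k", "l", "m", "wh", "bh")):
    """Enumerate (prefix, remainder) splits of a token, including the empty prefix.
    The paper chops whole prefix sequences; this sketch shows a single chop."""
    splits = [("", token)]                      # the empty prefix is always an option
    for p in sorted(prefixes, key=len, reverse=True):
        if token.startswith(p) and len(token) > len(p):
            splits.append((p, token[len(p):]))
    return splits

print(prefix_splits("bclm"))   # [('', 'bclm'), ('b', 'clm')]
```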
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
0
Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).
Manual evaluation by scoring translations on a graded scale from 1–5 seemed to be very hard to perform.
0
We confirm the finding by Callison-Burch et al. (2006) that the rule-based system of Systran is not adequately appreciated by BLEU.
Explanations offered for this phenomenon are the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order, all of which complicate syntactic disambiguation.
0
(2003), which gives 96.8% accuracy on the test set.
Instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
0
Experiments are presented in section 4.
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
0
For our experiments we also report the mean of precision and recall, which we denote by (P + R)/2, and F-measure.
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
0
Each token may admit multiple analyses, each of which is a sequence of one or more lexemes (we use li to denote a lexeme) belonging to a presupposed Hebrew lexicon LEX.
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
0
The estimation of the probabilities in the model is carried out as shown in Equation 4.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
Our TRIE implementation is designed to improve upon IRSTLM using a reverse trie with improved search, bit level packing, and stateful queries.
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.
0
The normalized judgement per judge is the raw judgement plus (3 minus average raw judgement for this judge).
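A minimal sketch of this per-judge normalization (illustrative variable names, not the shared task's evaluation scripts):

```python
from statistics import mean

def normalize_judgements(raw_by_judge):
    """Shift each judge's raw 1-5 judgements so that the judge's average becomes 3."""
    normalized = {}
    for judge, scores in raw_by_judge.items():
        offset = 3 - mean(scores)   # "3 minus average raw judgement for this judge"
        normalized[judge] = [s + offset for s in scores]
    return normalized

# A harsh judge (average 2.0) is shifted up by 1.0; a lenient judge is shifted down.
print(normalize_judgements({"judge_a": [1, 2, 3], "judge_b": [4, 5, 5, 4]}))
```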
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
Two issues distinguish the various proposals.
This paper presents a maximum entropy-based named entity recognizer (NER).
0
In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
There are many techniques for improving language model speed and reducing memory consumption.
All the texts were annotated by two people.
0
Indeed there are several open issues.
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
0
When the same token is to be interpreted as a single lexeme fmnh, it may function as a single adjective “fat”.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
We also report word-type-level accuracy, the fraction of word types assigned their majority tag (where the mapping between model state and tag is determined by the greedy one-to-one mapping discussed above). For each language, we aggregate results in the following way: first, for each hyperparameter setting, we evaluate three variants, the first of which is the 1TW model. Typically, the performance stabilizes after only 10 iterations.
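The greedy one-to-one mapping mentioned here can be sketched from state-tag co-occurrence counts; this is an illustrative reconstruction, not the authors' code:

```python
from collections import Counter

def greedy_one_to_one(state_tag_pairs):
    """Greedily map each model state to a distinct gold tag, taking
    (state, tag) pairs in order of decreasing co-occurrence count."""
    counts = Counter(state_tag_pairs)
    mapping, used_tags = {}, set()
    for (state, tag), _ in counts.most_common():
        if state not in mapping and tag not in used_tags:
            mapping[state] = tag
            used_tags.add(tag)
    return mapping

pairs = [(0, "NN"), (0, "NN"), (0, "VB"), (1, "VB"), (1, "VB")]
print(greedy_one_to_one(pairs))   # {0: 'NN', 1: 'VB'}
```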
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with “Even”.
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications.
0
We first note that the accuracy results of our system are overall higher on their setup, on all measures, indicating that theirs may be an easier dataset.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
BerkeleyLM revision 152 (Pauls and Klein, 2011) implements tries based on hash tables and sorted arrays in Java with lossy quantization.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the inter-human agreement.
0
The use of weighted transducers in particular has the attractive property that the model, as it stands, can be straightforwardly interfaced to other modules of a larger speech or natural language system: presumably one does not want to segment Chinese text for its own sake but instead with a larger purpose in mind.
Their results show that their high-performance NER uses less training data than other systems.
0
Recently, statistical NERs have achieved results that are comparable to hand-coded systems.
Because many systems performed similarly, they were not able to draw strong conclusions on the question of correlation between manual and automatic evaluation metrics.
0
Pairwise comparison: We can use the same method to assess the statistical significance of one system outperforming another.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
To evaluate proper-name identification, we randomly selected 186 sentences containing 12,000 hanzi from our test corpus and segmented the text automatically, tagging personal names; note that for names, there is always a single unambiguous answer, unlike the more general question of which segmentation is correct.
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
Here we propose a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
These operations, as we see below, are restricted to be size preserving (as in the case of concatenation in CFG) which implies that they will be linear and non-erasing.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the inter-human agreement.
0
The best analysis of the corpus is taken to be the true analysis, the frequencies are re-estimated, and the algorithm is repeated until it converges.
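Read as a procedure, this describes a Viterbi-style re-estimation loop; the sketch below shows only that control flow, with best_analysis and reestimate as hypothetical stand-ins for the model-specific steps:

```python
def viterbi_reestimation(corpus, model, max_iters=50):
    """Decode the corpus, treat the 1-best analyses as truth, re-estimate
    frequencies, and repeat until the best analyses stop changing."""
    previous = None
    for _ in range(max_iters):
        analyses = [model.best_analysis(sentence) for sentence in corpus]
        if analyses == previous:        # converged
            break
        model.reestimate(analyses)      # update frequencies from the 1-best analyses
        previous = analyses
    return model
```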
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be and whether they belong to general language or not.
0
The 8 similarity-to-IN features are based on word frequencies and scores from various models trained on the IN corpus: To avoid numerical problems, each feature was normalized by subtracting its mean and dividing by its standard deviation.
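A minimal sketch of that per-feature standardization (plain Python, independent of the authors' tooling):

```python
from statistics import mean, pstdev

def standardize(columns):
    """Normalize each feature column to zero mean and unit standard deviation."""
    result = []
    for col in columns:
        mu, sigma = mean(col), pstdev(col)
        result.append([(x - mu) / sigma if sigma else 0.0 for x in col])
    return result

print(standardize([[1.0, 2.0, 3.0]]))   # [[-1.224..., 0.0, 1.224...]]
```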
Human judges also pointed out difficulties with the evaluation of long sentences.
0
Compared to last year’s shared task, the participants represent more long-term research efforts.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
For example, management succession systems must distinguish between a person who is fired and a person who is hired.
Most IE researchers have been creating paraphrase knowledge by hand and for specific tasks.
0
We realize the importance of paraphrase; however, the major obstacle is the construction of paraphrase knowledge.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
8 1 2.
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. On its own, a NER can also provide users who are looking for person or organization names with quick information.
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.
0
Second, we treat the projected labels as features in an unsupervised model (§5), rather than using them directly for supervised training.
These clusters are computed using an SVD variant without relying on transitional structure.
0
While it is possible to utilize the feature-based log-linear approach described in Berg-Kirkpatrick et al.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
name => 2hanzi-family 1hanzi-given
6. 1hanzi-family => hanzi_i
7. 2hanzi-family => hanzi_i hanzi_j
8. 1hanzi-given => hanzi_i
9. 2hanzi-given => hanzi_i hanzi_j
The difficulty is that given names can consist, in principle, of any hanzi or pair of hanzi, so the possible given names are limited only by the total number of hanzi, though some hanzi are certainly far more likely than others.
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, and Spanish texts to English and back.
0
Given a set of n sentences, we can compute the sample mean x̄ and sample variance s² of the individual sentence judgements x_i. The extent of the confidence interval [x̄ − d, x̄ + d] can be computed by d = 1.96 · s/√n. Pairwise Comparison: As for the automatic evaluation metric, we want to be able to rank different systems against each other, for which we need assessments of statistical significance on the differences between a pair of systems.
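A small sketch of that computation; the 1.96 factor is the usual 95% normal quantile, and the per-sentence judgements are assumed to be given as a plain list:

```python
from math import sqrt
from statistics import mean, stdev

def confidence_interval(judgements):
    """Return (sample mean, half-width d) of the 95% confidence interval."""
    n = len(judgements)
    x_bar, s = mean(judgements), stdev(judgements)
    d = 1.96 * s / sqrt(n)
    return x_bar, d

x_bar, d = confidence_interval([3, 4, 2, 5, 3, 4, 3, 2])
print(f"{x_bar:.2f} +/- {d:.2f}")
```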
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
Ex: The brigade, which attacked ...
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
Motivated by these questions, we significantly raise baselines for three existing parsing models through better grammar engineering.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
We consider two variants of Berg-Kirkpatrick et al.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
Nodes in the trie are based on arrays sorted by vocabulary identifier.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
A Brief Introduction to the Chinese Writing System Most readers will undoubtedly be at least somewhat familiar with the nature of the Chinese writing system, but there are enough common misunderstandings that it is as well to spend a few paragraphs on properties of the Chinese script that will be relevant to topics discussed in this paper.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
(3) shows learning curves for CoBoost.
Manual evaluation by scoring translations on a graded scale from 1–5 seems to be very hard to perform.
0
Unfortunately, we have much less data to work with than with the automatic scores.
They focused on phrases which connect two Named Entities, and proceed in two stages.
0
In our experiment, we set the threshold of the TF/ITF score empirically using a small development corpus; a finer adjustment of the threshold could reduce the number of such keywords.
There is no global pruning.
0
The experimental tests are carried out on the Verbmobil task (German-English, 8000-word vocabulary), which is a limited-domain spoken-language task.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
The cost estimate, cost(·), is computed in the obvious way by summing the negative log probabilities of the components.
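As a one-line illustration of a cost computed by summing negative log probabilities (a generic sketch, not the paper's implementation; the parts and probabilities are made up):

```python
from math import log

def cost(parts, prob):
    """Total cost of a multi-part construction: sum of -log p over its parts."""
    return sum(-log(prob[p]) for p in parts)

print(cost(["ma3", "lu4"], {"ma3": 0.01, "lu4": 0.02}))   # about 8.52
```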
Here we present two algorithms.
0
This section describes an algorithm based on boosting algorithms, which were previously developed for supervised machine learning problems.
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
0
(Bar-Haim et al., 2007; Habash and Rambow, 2005), and probabilities are assigned to different analyses in accordance with the likelihood of their tags (e.g., “fmnh is 30% likely to be tagged NN and 70% likely to be tagged REL+VB”).
Explanations offered for this phenomenon are the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order, all of which complicate syntactic disambiguation.
0
Table 4: Gross statistics for several different treebanks.
                ATB      CTB6     Negra    WSJ
Trees           23449    28278    20602    43948
Word Types      40972    45245    51272    46348
Tokens          738654   782541   355096   1046829
Tags            32       34       499      45
Phrasal Cats    22       26       325      27
Test OOV        16.8%    22.2%    30.5%    13.2%
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
0
In this work we extended the AdaBoost.MH (Schapire and Singer 98) algorithm to the cotraining case.
The approach has been successfully tested on the 8 000-word Verbmobil task.
0
The model is often further restricted so that each source word is assigned to exactly one target word (Brown et al., 1993; Ney et al., 2000).
There are clustering approaches that assign a single POS tag to each word type.
0
9 50.2 +PRIOR best median 47.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
Indeed, as we shall show in Section 5, even human judges differ when presented with the task of segmenting a text into words, so a definition of the criteria used to determine that a given segmentation is correct is crucial before one can interpret such measures.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
However, there will remain a large number of words that are not readily adduced to any productive pattern and that would simply have to be added to the dictionary.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
Lattice fragment for zhong1 hua2 min2 guo2 'Republic of China': each hanzi carries cost 0.0, with competing analyses _ADV (5.98) and _NC (4.41).
Human judges also pointed out difficulties with the evaluation of long sentences.
0
The differences in difficulty are better reflected in the BLEU scores than in the raw un-normalized manual judgements.
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
0
Pr(e_1^I) is the language model of the target language, whereas Pr(f_1^J | e_1^I) is the translation model.
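These are the two factors of the standard source-channel decision rule for translation; written out in the same notation (a textbook formulation, not quoted from the paper):

```latex
\hat{e}_1^{I} = \arg\max_{e_1^{I}} \; \Pr(e_1^{I}) \cdot \Pr(f_1^{J} \mid e_1^{I})
```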
A beam search concept is applied as in speech recognition.
0
A detailed description of the search procedure used is given in this patent.
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems.
0
Replacing this with a ranked evaluation seems to be more suitable.
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
0
Furthermore, we know one of the original parses will be the hypothesized parse, so the direct method of determining which one is best is to compute the probability of each of the candidate parses using the probabilistic model we developed in Section 2.1.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
Much could be done to further reduce memory consumption.
Because many systems performed similarly, the author was not able to draw strong conclusions on the question of correlation between manual and automatic evaluation metrics.
0
Presenting the output of several systems allows the human judge to make more informed judgements, contrasting the quality of the different systems.
The use of global features has yielded excellent performance on MUC-6 and MUC-7 test data.
0
If somewhere else in the document we see “restrictions put in place by President Bush”, then we can be surer that Bush is a name.
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be and whether they belong to general language or not.
0
This is consistent with the nature of these two settings: log-linear combination, which effectively takes the intersection of IN and OUT, does relatively better on NIST, where the domains are broader and closer together.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
However, there is again local grammatical information that should favor the split in the case of (1a): both ma3 'horse' and ma3lu4 are nouns, but only ma3 is consistent with the classifier pi3, the classifier for horses. By a similar argument, the preference for not splitting ma3lu4 could be strengthened in (1b) by the observation that the classifier tiao2 is consistent with long or winding objects like ma3lu4 'road' but not with ma3 'horse.'
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures.
0
The parser builds dependency graphs by traversing the input from left to right, using a stack to store tokens that are not yet complete with respect to their dependents.
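As a generic illustration of this left-to-right, stack-based style of parsing (an arc-standard-like sketch with a hypothetical decide function, not the specific transition system used in the paper):

```python
def parse(tokens, decide):
    """decide(stack, buffer) returns 'shift', 'left-arc', or 'right-arc'."""
    stack, arcs = [], []
    buffer = list(tokens)
    while buffer or len(stack) > 1:
        action = decide(stack, buffer)
        if action == "shift" and buffer:
            stack.append(buffer.pop(0))
        elif action == "left-arc" and len(stack) >= 2:
            dependent = stack.pop(-2)           # second-from-top attaches to top
            arcs.append((stack[-1], dependent))
        elif action == "right-arc" and len(stack) >= 2:
            dependent = stack.pop()             # top attaches to the new top
            arcs.append((stack[-1], dependent))
        else:
            break                               # no applicable action
    return arcs
```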
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective tree banks is often neglected because of the relative scarcity of problematic constructions.
0
The second main result is that the pseudo-projective approach to parsing (using special arc labels to guide an inverse transformation) gives a further improvement of about one percentage point on attachment score.
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
A token that is allCaps will also be initCaps.
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
0
Both authors are members of the Center for Language and Speech Processing at Johns Hopkins University.
Their results show that their high-performance NER uses less training data than other systems.
0
This paper presents a maximum entropy-based named entity recognizer (NER).
This assumption, however, is not inherent to type-based tagging models.
0
1 2 3.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
For eight judges, ranging k between 1 and 8 corresponded to a precision score range of 90% to 30%, meaning that there were relatively few words (30% of those found by the automatic segmenter) on which all judges agreed, whereas most of the words found by the segmenter were such that one human judge agreed.
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
We use a patched version of BitPar allowing for direct input of probabilities instead of counts.
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
0
We plan to explore more powerful techniques for exploiting the diversity of parsing methods.
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be and whether they belong to general language or not.
0
This combination generalizes (2) and (3): we use either α_t = α to obtain a fixed-weight linear combination, or α_t = c_I(t)/(c_I(t) + β) to obtain a MAP combination.
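A small sketch of the two weighting schemes; reading the combination as an interpolation of IN and OUT estimates is my interpretation, and beta stands in for the constant that was garbled in the extracted text:

```python
def combination_weight(c_in, alpha=None, beta=10.0):
    """Fixed weight if alpha is given, otherwise the MAP-style weight c/(c + beta)."""
    return alpha if alpha is not None else c_in / (c_in + beta)

def combine(p_in, p_out, c_in, alpha=None, beta=10.0):
    """Linear combination a*p_in + (1 - a)*p_out with the chosen weight."""
    a = combination_weight(c_in, alpha, beta)
    return a * p_in + (1 - a) * p_out

print(combine(p_in=0.4, p_out=0.1, c_in=90))   # weight 0.9 -> 0.37
```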
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
Each h_t is a function that predicts a label (+1 or −1) on examples containing a particular feature x_t, while abstaining (outputting 0) on other examples. The prediction of the strong hypothesis can then be written as f(x) = sign(Σ_t α_t h_t(x)). We now briefly describe how to choose h_t and α_t at each iteration.
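A toy rendering of the reconstructed prediction rule with abstaining weak learners; it illustrates the formula only, not the CoBoost training procedure:

```python
def weak_predict(features, feature, label):
    """Weak hypothesis h_t: predict +1/-1 if the example contains the feature, else abstain (0)."""
    return label if feature in features else 0

def strong_predict(features, weak_learners):
    """Strong hypothesis: sign of the alpha-weighted sum of weak predictions."""
    score = sum(alpha * weak_predict(features, feat, label)
                for alpha, feat, label in weak_learners)
    return 1 if score > 0 else -1

learners = [(0.8, "contains_Inc", +1), (0.3, "all_lowercase", -1)]
print(strong_predict({"contains_Inc"}, learners))   # +1
```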
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
Our TRIE implementation is designed to improve upon IRSTLM using a reverse trie with improved search, bit level packing, and stateful queries.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
3 60.7 50.
They showed that it was useful to abstract away from the details of the formalism and examine the nature of their derivation process as reflected by properties of their trees, finding that several of the formalisms considered can be seen as being closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
The composition operations in the case of CFG's are parameterized by the productions.
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
If evidence indicates that hypotheses C and D are less likely than hypotheses A and B, then probabilities are redistributed to reflect the fact that {A, B} is more likely to contain the answer than {C, D}.
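A toy illustration of redistributing probability mass toward the favored subset and renormalizing; the paper's actual evidence-combination model is more involved, so the fixed boost factor below is purely illustrative:

```python
def redistribute(probs, favored, boost=2.0):
    """Multiply favored hypotheses by 'boost' and renormalize."""
    weighted = {h: p * (boost if h in favored else 1.0) for h, p in probs.items()}
    z = sum(weighted.values())
    return {h: p / z for h, p in weighted.items()}

print(redistribute({"A": 0.25, "B": 0.25, "C": 0.25, "D": 0.25}, favored={"A", "B"}))
# {'A': 0.333..., 'B': 0.333..., 'C': 0.166..., 'D': 0.166...}
```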
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
level.
They plan on extending instance-weighting to other standard SMT components and capturing the degree of generality of phrase pairs.
0
The corpus was word-aligned using both HMM and IBM2 models, and the phrase table was the union of phrases extracted from these separate alignments, with a length limit of 7.
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
Clearly, for judges J1 and J2, taking J1 as the standard and computing the precision and recall for J2 yields the same results as taking J2 as the standard and computing for J1. All evaluation materials, with the exception of those used for evaluating personal names, were drawn.
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
Despite these limitations, a purely finite-state approach to Chinese word segmentation enjoys a number of strong advantages.
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
0
The monotone search performs worst in terms of both error rates mWER and SSER.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
Similar behavior is observed when adding features.