source_text
stringlengths
27
368
label
int64
0
1
target_text
stringlengths
1
5.38k
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language.
0
Our oracles took advantage of the labeled treebanks: While we tried to minimize the number of free parameters in our model, there are a few hyperparameters that need to be set.
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back.
0
The differences in difficulty are better reflected in the BLEU scores than in the raw un-normalized manual judgements.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
na me =>1 ha nzi fa mi ly 1 ha nzi gi ve n 4.
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
The label for a test example with features x is then defined as In this paper we define h(x, y) as the following function of counts seen in training data: Count(x,y) is the number of times feature x is seen with label y in training data, Count(x) = EyEy Count(x, y). a is a smoothing parameter, and k is the number of possible labels.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
Let us consider an example of ambiguity caused by devocalization.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
Less frequently studied is the interplay among language, annotation choices, and parsing model design (Levy and Manning, 2003; Ku¨ bler, 2005).
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
We suggest that in unlexicalized PCFGs the syntactic context may be explicitly modeled in the derivation probabilities.
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
Also there are cases where one of the two NEs belong to a phrase outside of the relation.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
Figure 4 shows a constituent headed by a process nominal with an embedded adjective phrase.
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.
0
These universal POS categories not only facilitate the transfer of POS information from one language to another, but also relieve us from using controversial evaluation metrics,2 by establishing a direct correspondence between the induced hidden states in the foreign language and the observed English labels.
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
The first method uses a similar algorithm to that of (Yarowsky 95), with modifications motivated by (Blum and Mitchell 98).
This paper presents a maximum entropy-based named entity recognizer (NER).
0
We group the features used into feature groups.
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
This has the effect of randomly permuting vocabulary identifiers, meeting the requirements of interpolation search when vocabulary identifiers are used as keys.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
49 99.
The departure from the traditional token-based tagging approach allow them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
1 53.8 47.
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
0
This extends previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and using a simpler training procedure.
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
5.2 Setup.
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
This number must be less than or equal to n 􀀀 1.
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results.
0
We experimented with increasingly rich grammars read off of the treebank.
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.
0
We would like to thank Ryan McDonald for numerous discussions on this topic.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
This procedure is repeated for T rounds while alternating between the two classifiers.
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
These 140 NE categories are designed by extending MUC’s 7 NE categories with finer sub-categories (such as Company, Institute, and Political Party for Organization; and Country, Province, and City for Location) and adding some new types of NE categories (Position Title, Product, Event, and Natural Object).
In this paper, the authors are of the opinion that the sequence models-based approaches usually treat token-level tag assignment as the primary latent variable.
0
5 70.1 58.
The AdaBoost algorithm was developed for supervised learning.
0
In this case nonalpha is the string formed by removing all upper/lower case letters from the spelling (e.g., for Thomas E. Petry nonalpha= .
This paper discusses the Potsdam Commentary Corpus, a corpus of german assembeled by potsdam university.
0
The knowledge base then can be tested for its relation-inference capabilities on the basis of full-blown co-reference information.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
11 taTweel (-) is an elongation character used in Arabic script to justify text.
NER is useful in many NLP applications such as information extraction, question answering, etc .Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
0
Translation errors are reported in terms of multireference word error rate (mWER) and subjective sentence error rate (SSER).
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
0
In sequential tagging models such as (Adler and Elhadad, 2006; Bar-Haim et al., 2007; Smith et al., 2005) weights are assigned according to a language model The input for the joint task is a sequence W = w1, ... , wn of space-delimited tokens.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
The corresponding token words w are drawn conditioned on t and θ.2 Our full generative model is given by: K P (φ, θ|T , α, β) = n (P (φt|α)P (θt|T , α)) t=1 The transition distribution φt for each tag t is drawn according to DIRICHLET(α, K ), where α is the shared transition and emission distribution hyperparameter.
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform.
0
In all figures, we present the per-sentence normalized judgements.
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own.
The approach has been successfully tested on the 8 000-word Verbmobil task.
0
The details are given in (Tillmann, 2000).
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
0
3.1 Maximum Entropy.
This paper conducted research in the area of automatic paraphrase discovery.
0
Step 1.
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
0
This approach leads to a search procedure with complexity O(E3 J4).
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
5.1 Parsing Models.
This corpus has several advantages: it is annotated at different levels.
0
Among the IS-units, the referring expressions are marked as such and will in the second phase receive a label for cognitive status (active, accessible- text, accessible-situation, inferrable, inactive).
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language.
0
Before presenting our results, we describe the datasets that we used, as well as two baselines.
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.
0
To bridge this gap, we consider a practically motivated scenario, in which we want to leverage existing resources from a resource-rich language (like English) when building tools for resource-poor foreign languages.1 We assume that absolutely no labeled training data is available for the foreign language of interest, but that we have access to parallel data with a resource-rich language.
There is no global pruning.
0
mein 5.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
orthographic words are thus only a starting point for further analysis and can only be regarded as a useful hint at the desired division of the sentence into words.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
The state function is integrated into the query process so that, in lieu of the query p(wnjwn−1 1 ), the application issues query p(wnjs(wn−1 1 )) which also returns s(wn1 ).
This paper offers a broad insight into of Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
92 76.
They showed better grammars to improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
The form mnh itself can be read as at least three different verbs (“counted”, “appointed”, “was appointed”), a noun (“a portion”), and a possessed noun (“her kind”).
A beam search concept is applied as in speech recognition.
0
While processing the source sentence monotonically, the initial state I is entered whenever there are no uncovered positions to the left of the rightmost covered position.
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
0
However there is no global pruning.
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement.
0
yu2 'fish.'
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
Note that in our formalism a weakhypothesis can abstain.
Two general approaches are presented and two combination techniques are described for each approach.
0
In general, the lemma of the previous section does not ensure that all the productions in the combined parse are found in the grammars of the member parsers.
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
The way judgements are collected, human judges tend to use the scores to rank systems against each other.
This paper presents a maximum entropy-based named entity recognizer (NER).
0
The zone to which a token belongs is used as a feature.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
Once HMM parameters (θ, φ) are drawn, a token-level tag and word sequence, (t, w), is generated in the standard HMM fashion: a tag sequence t is generated from φ.
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
0
Furthermore, we know one of the original parses will be the hypothesized parse, so the direct method of determining which one is best is to compute the probability of each of the candidate parses using the probabilistic model we developed in Section 2.1.
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
0
The set is then compared with the set generated from the Penn Treebank parse to determine the precision and recall.
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
The normalized judgement per judge is the raw judgement plus (3 minus average raw judgement for this judge).
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
We realize the importance of paraphrase; however, the major obstacle is the construction of paraphrase knowledge.
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
Restrictions: Quasi-monotone Search The above search space is still too large to allow the translation of a medium length input sentence.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
The only way to handle such phenomena within the framework described here is simply to expand out the reduplicated forms beforehand, and incorporate the expanded forms into the lexical transducer.
They have made use of local and global features to deal with the instances of same token in a document.
0
The baseline system in Table 3 refers to the maximum entropy system that uses only local features.
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
0
All the sentences have been analyzed by our chunker and NE tag- ger.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
Bikel 75 training trees 5000 10000 15000 Figure 3: Dev set learning curves for sentence lengths ≤ 70.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
Each trie node contains a sorted array of entries and they use binary search.
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
In our experiment, we set the threshold of the TF/ITF score empirically using a small development corpus; a finer adjustment of the threshold could reduce the number of such keywords.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
com §Cambridge, UK Email: [email protected] © 1996 Association for Computational Linguistics (a) B ) ( , : & ; ? ' H o w d o y o u s a y o c t o p u s i n J a p a n e s e ? ' (b) P l a u s i b l e S e g m e n t a t i o n I B X I I 1 : & I 0 0 r i 4 w e n 2 z h a n g l y u 2 z e n 3 m e 0 s h u o l ' J a p a n e s e ' ' o c t o p u s ' ' h o w ' ' s a y ' (c) Figure 1 I m p l a u s i b l e S e g m e n t a t i o n [§] lxI 1:&I ri4 wen2 zhangl yu2zen3 me0 shuol 'Japan' 'essay' 'fish' 'how' 'say' A Chinese sentence in (a) illustrating the lack of word boundaries.
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems.
0
The way judgements are collected, human judges tend to use the scores to rank systems against each other.
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
0
The translation search is carried out with the category markers and the city names are resubstituted into the target sentence as a postprocessing step.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
Finally, the statistical method fails to correctly group hanzi in cases where the individual hanzi comprising the name are listed in the dictionary as being relatively high-frequency single-hanzi words.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
However, reads in the TRIE data structure are more expensive due to bit-level packing, so we found that it is faster to use interpolation search the entire time.
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems.
0
Participants were also provided with two sets of 2,000 sentences of parallel text to be used for system development and tuning.
They have made use of local and global features to deal with the instances of same token in a document.
0
(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
0
Both of the switching techniques, as well as the parametric hybridization technique were also shown to be robust when a poor parser was introduced into the experiments.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
The hash variant is a reverse trie with hash tables, a more memory-efficient version of SRILM’s default.
This paper offers a broad insight into of Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
The resulting structural differences between tree- banks can account for relative differences in parsing performance.
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
Since trees in a tree set are adjoined together, the addressing scheme uses a sequence of pairings of the address and name of the elementary tree adjoined at that address.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
Other packages walk their respective data structures once to find wnf and again to find {b(wn−1 i )}f−1 i=1if necessary.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
Consider the case where IX].
It also incorporates the Good-Turing methodin estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
Despite these limitations, a purely finite-state approach to Chinese word segmentation enjoys a number of strong advantages.
Explanations for this phenomenon are relative informativeness of lexicalization, insensitivity to morphology and the effect of variable word order and these factors lead to syntactic disambiguation.
0
Human evaluation is one way to distinguish between the two cases.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
Our system fails in (a) because of$ shenl, a rare family name; the system identifies it as a family name, whereas it should be analyzed as part of the given name.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
The problem is a binary classification problem.
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
0
Evaluation results for links
It also incorporates the Good-Turing methodin estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
£ : _ADV: 5.88 If:!
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
Reading the following record’s offset indicates where the block ends.
In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
The rationale for treating these semantic labels differently is that they are specific and reliable (as opposed to the WordNet classes, which are more coarse and more noisy due to polysemy).
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
0
We first note that the accuracy results of our system are overall higher on their setup, on all measures, indicating that theirs may be an easier dataset.
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
0
2 We now introduce a new algorithm for learning from unlabeled examples, which we will call DLCoTrain (DL stands for decision list, the term Cotrain is taken from (Blum and Mitchell 98)).
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
As a result, Arabic sentences are usually long relative to English, especially after Length English (WSJ) Arabic (ATB) ≤ 20 41.9% 33.7% ≤ 40 92.4% 73.2% ≤ 63 99.7% 92.6% ≤ 70 99.9% 94.9% Table 2: Frequency distribution for sentence lengths in the WSJ (sections 2–23) and the ATB (p1–3).
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
For derived words that occur in our corpus we can estimate these costs as we would the costs for an underived dictionary entry.
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
A k-tape ATM, M, has a read-only input tape and k read-write work tapes.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
In the cotraining case, (Blum and Mitchell 98) argue that the task should be to induce functions Ii and f2 such that So Ii and 12 must (1) correctly classify the labeled examples, and (2) must agree with each other on the unlabeled examples.
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.
It is annotated with several data: morphology, syntax, rhetorical structure, connectors, correference and informative structure.
0
5 ‘Underspecified Rhetorical Markup Language’ 6 This confirms the figure given by (Schauer, Hahn.
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
Sentences and systems were randomly selected and randomly shuffled for presentation.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
2.2.4 Semantic Caseframe Expectations The third type of contextual role knowledge learned by BABAR is Semantic Caseframe Expectations.
They found replacing it with a ranked evaluation to be more suitable.
0
For the manual scoring, we can distinguish only half of the systems, both in terms of fluency and adequacy.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
Several systems propose statistical methods for handling unknown words (Chang et al. 1992; Lin, Chiang, and Su 1993; Peng and Chang 1993).
They have made use of local and global features to deal with the instances of same token in a document.
0
If is seen infrequently during training (less than a small count), then will not be selected as a feature and all features in this group are set to 0.
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
0
The same argument holds for resolving PP attachment of a prefixed preposition or marking conjunction of elements of any kind.
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
0
Following the convention presented in earlier sections, we assume that each example is an instance pair of the from (xi ,i, x2,) where xj,, E 2x3 , j E 2}.