Dataset schema:
  source_text  string (lengths 27–368)
  label        int64 (values 0–1)
  target_text  string (lengths 1–5.38k)
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
Of course, we.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
The average agreement among the human judges is .76, and the average agreement between ST and the humans is .75, or about 99% of the interhuman agreement. One can better visualize the precision-recall similarity matrix by producing from that matrix a distance matrix, computing a classical metric multidimensional scaling (Torgerson 1958; Becker, Chambers, and Wilks 1988) on that distance matrix, and plotting the first two most significant dimensions.
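To make the visualization step concrete, here is a minimal sketch of classical (Torgerson) metric MDS applied to a small agreement matrix, using only numpy; the agreement values are illustrative placeholders, not the paper's actual similarity matrix, and the coordinates are printed rather than plotted.

```python
# Sketch: classical (Torgerson) metric MDS on a toy judge-agreement matrix.
import numpy as np

agreement = np.array([            # pairwise agreement in [0, 1] (illustrative)
    [1.00, 0.76, 0.75],
    [0.76, 1.00, 0.77],
    [0.75, 0.77, 1.00],
])
D = 1.0 - agreement               # turn similarities into distances

n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
B = -0.5 * J @ (D ** 2) @ J           # double-centered squared distances

eigvals, eigvecs = np.linalg.eigh(B)
order = np.argsort(eigvals)[::-1]     # largest eigenvalues first
coords = eigvecs[:, order[:2]] * np.sqrt(np.maximum(eigvals[order[:2]], 0))
print(coords)                          # first two MDS dimensions, one row per judge
```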
The first method builds on results from Yarowsky (1995) and Blum and Mitchell (1998).
0
This paper discusses the use of unlabeled examples for the problem of named entity classification.
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
So, it is too costly to make IE technology “open-domain” or “on-demand” like IR or QA.
The manual evaluation of scoring translations on a graded scale from 1 to 5 seemed to be very hard to perform.
0
In all figures, we present the per-sentence normalized judgements.
They plan on extending instance-weighting to other standard SMT components and to capture the degree of generality of phrase pairs.
0
The features are weighted within a logistic model to give an overall weight that is applied to the phrase pair’s frequency prior to making MAP-smoothed relative-frequency estimates (different weights are learned for each conditioning direction).
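A minimal sketch of the weighting idea described above, assuming a simple logistic model sigma(w · f) over hypothetical per-pair features; the counts, feature values, and learned weights are invented for illustration, and MAP smoothing is omitted.

```python
# Sketch: logistic instance weight applied to a phrase pair's count
# before relative-frequency estimation. All numbers are illustrative.
import math

def logistic_weight(features, weights):
    z = sum(w * f for w, f in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# toy counts of (source phrase, target phrase) pairs and their features
pairs = {("chien", "dog"): (10.0, [1.2, 0.4]),
         ("chien", "cat"): (2.0, [-0.8, 0.1])}
weights = [1.5, 0.7]                    # hypothetical learned weights

weighted = {p: c * logistic_weight(f, weights) for p, (c, f) in pairs.items()}
total = sum(weighted.values())
for p, c in weighted.items():
    print(p, round(c / total, 3))       # weighted relative frequencies
```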
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRSs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
Semilinearity and the closely related constant growth property (a consequence of semilinearity) have been discussed in the context of grammars for natural languages by Joshi (1983/85) and Berwick and Weinberg (1984).
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
This FSA I can be segmented into words by composing Id(I) with D*, to form the WFST shown in Figure 2(c), then selecting the best path through this WFST to produce the WFST in Figure 2(d).
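The composed WFST itself is not shown here, but the "select the best path" step can be sketched as Viterbi-style dynamic programming over a small weighted segmentation lattice; the arcs, words, and costs below are toy values, not the paper's actual transducer.

```python
# Sketch: cheapest path through a toy segmentation lattice. States are
# topologically ordered integers, so one pass over arcs sorted by source
# state suffices. Lower cost is better.
import math

# arcs: (from_state, to_state, word, cost)
arcs = [
    (0, 1, "中", 3.0), (1, 2, "国", 3.0),    # two single-hanzi words
    (0, 2, "中国", 1.5),                      # one two-hanzi word
    (2, 3, "人", 2.0),
]
n_states = 4
best = [math.inf] * n_states
back = [None] * n_states
best[0] = 0.0
for frm, to, word, cost in sorted(arcs, key=lambda a: a[0]):
    if best[frm] + cost < best[to]:
        best[to] = best[frm] + cost
        back[to] = (frm, word)

# trace back the cheapest segmentation
path, state = [], n_states - 1
while back[state] is not None:
    state, word = back[state]
    path.append(word)
print(list(reversed(path)), best[-1])    # ['中国', '人'] 3.5
```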
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages.
0
As Figure 1 shows, this word has no high-confidence alignment in the Italian-English bitext.
This assumption, however, is not inherent to type-based tagging models.
0
The authors acknowledge the support of the NSF (CAREER grant IIS-0448168 and grant IIS-0904684).
In this paper, Das and Petrov approach the task of inducing unsupervised part-of-speech taggers for languages that have no labeled training data but do have translated text in a resource-rich language.
0
This scenario is applicable to a large set of languages and has been considered by a number of authors in the past (Alshawi et al., 2000; Xi and Hwa, 2005; Ganchev et al., 2009).
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.
0
This is less than the 694 judgements in the 2004 DARPA/NIST evaluation, or the 532 judgements in the 2005 DARPA/NIST evaluation.
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications.
0
We define the lattice L to be the concatenation of the lattices Li corresponding to the input words wi (s.t.
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
0
4.3 Translation Experiments.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
The fraction of buckets that are empty is $\frac{m-1}{m}$, so average lookup time is $O\left(\frac{m}{m-1}\right)$ and, crucially, constant in the number of entries.
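A minimal sketch of the lookup being analyzed, assuming a toy linear-probing table keyed by strings; the EMPTY slots, the probe loop, and the more-buckets-than-entries requirement mirror the description above, while the keys and values are invented.

```python
# Sketch of linear probing: probe successive buckets until the key or an
# empty slot is found. With space multiplier m (buckets = m * entries),
# a fraction (m-1)/m of buckets is empty, so probes average m/(m-1).
EMPTY = None

def insert(buckets, key, value, hash_fn):
    n = len(buckets)
    i = hash_fn(key) % n
    while buckets[i] is not EMPTY:
        i = (i + 1) % n
    buckets[i] = (key, value)

def lookup(buckets, key, hash_fn):
    n = len(buckets)
    i = hash_fn(key) % n
    while buckets[i] is not EMPTY:
        stored_key, value = buckets[i]
        if stored_key == key:
            return value
        i = (i + 1) % n          # linear probe: advance to the next bucket
    return None                   # hit an empty bucket: key is absent

buckets = [EMPTY] * 8             # more buckets than entries, as required
insert(buckets, "of the", 0.25, hash)
print(lookup(buckets, "of the", hash))   # -> 0.25
```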
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
TPT has theoretically better locality because it stores n-grams near their suffixes, thereby placing reads for a single query in the same or adjacent pages.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
The particular classifier used depends upon the noun.
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
In the following, we assume that this word joining has been carried out.
They focused on phrases which connect two Named Entities, and proceeded in two stages.
0
They contain about 200M words (25M, 110M, 40M and 19M words, respectively).
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
We have presented a method for unsupervised part- of-speech tagging that considers a word type and its allowed POS tags as a primary element of the model.
They have made use of local and global features to deal with the instances of the same token in a document.
0
In MUC6, the best result is achieved by SRA (Krupka, 1995).
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
Simple Type-Level Unsupervised POS Tagging
They have made use of local and global features to deal with the instances of the same token in a document.
0
The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed both in a pipeline and jointly with parsing.
0
All three models evaluated in this paper incorrectly analyze the constituent as iDafa; none of the models attach the attributive adjectives properly.
The corpus was annotated with different linguistic information.
0
11 www.ling.unipotsdam.de/sfb/projekt a3.php
12 This step was carried out in the course of the diploma thesis work of David Reitter (2003), which deserves special mention here.
They focused on phrases which connect two Named Entities, and proceeded in two stages.
0
3.2 Results.
The authors show that PATB is similar to other treebanks but that annotation consistency remains low.
0
Ends with the feminine affix ة (p).
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
We call these N − 1 words the state.
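As a sketch of how carrying this state works in practice, the toy scorer below threads a tuple of the last N − 1 words through successive queries; the trigram table and fallback probability are placeholders, not KenLM's actual API.

```python
# Sketch: an N-gram query that returns both a probability and the next
# state (the last N-1 words), so successive lookups can share context.
N = 3
probs = {("the", "cat", "sat"): 0.2}           # toy trigram table

def score(state, word):
    """state: tuple of the last N-1 words; returns (prob, next_state)."""
    p = probs.get(state + (word,), 1e-6)        # crude unknown fallback
    next_state = (state + (word,))[-(N - 1):]   # keep only N-1 words
    return p, next_state

state = ("the", "cat")
p, state = score(state, "sat")
print(p, state)    # 0.2 ('cat', 'sat')
```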
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures.
0
On the other hand, we can expect Head+Path to be the most useful representation for reconstructing the underlying non-projective dependency graph.
In this paper, Das and Petrov approach the task of inducing unsupervised part-of-speech taggers for languages that have no labeled training data but do have translated text in a resource-rich language.
0
In the first stage, we run a single step of label propagation, which transfers the label distributions from the English vertices to the connected foreign language vertices (say, Vf�) at the periphery of the graph.
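A minimal sketch of this first stage, assuming a toy bilingual graph: each foreign vertex receives the edge-weighted average of the tag distributions on its English neighbours. The vertices, edge weights, and tag distributions are invented for illustration.

```python
# Sketch: one label-propagation step from English vertices to connected
# foreign-language vertices. Graph and weights are toy values.
import numpy as np

tags = ["NOUN", "VERB"]
english_labels = {                      # fixed distributions on English vertices
    "dog":  np.array([0.9, 0.1]),
    "runs": np.array([0.2, 0.8]),
}
edges = {                               # foreign vertex -> [(english vertex, weight)]
    "Hund":  [("dog", 1.0)],
    "läuft": [("runs", 0.7), ("dog", 0.3)],
}

foreign_labels = {}
for v, nbrs in edges.items():
    total = sum(w for _, w in nbrs)
    foreign_labels[v] = sum(w * english_labels[u] for u, w in nbrs) / total

for v, dist in foreign_labels.items():
    print(v, dict(zip(tags, np.round(dist, 2))))
```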
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
It's not clear how to apply these methods in the unsupervised case, as they required cross-validation techniques: for this reason we use the simpler smoothing method shown here. The input to the unsupervised algorithm is an initial, "seed" set of rules.
A beam search concept is applied as in speech recognition.
0
In the following, we assume that this word joining has been carried out.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
Linear probing hash tables must have more buckets than entries, or else an empty bucket will never be found.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
One striking example is Spanish, for which error is reduced by 36.5% and 24.7% for the best and median settings, respectively.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed both in a pipeline and jointly with parsing.
0
This is especially true in the case of quotations—which are common in the ATB—where (1) will follow a verb like (2) (Figure 1).
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.
0
The human judges were presented with the following definition of adequacy and fluency, but no additional instructions:
This paper discusses Exploiting Diversity in Natural Language Processing: Combining Parsers.
0
From this we see that a finer-grained model for parser combination, at least for the features we have examined, will not give us any additional power.
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
0
Such ambiguities cause discrepancies between token boundaries (indexed as white spaces) and constituent boundaries (imposed by syntactic categories) with respect to a surface form.
They have made use of local and global features to deal with the instances of the same token in a document.
0
At most one feature in this group will be set to 1.
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation.
0
67 95.
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages.
0
To make the projection practical, we rely on the twelve universal part-of-speech tags of Petrov et al. (2011).
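For illustration, such a projection can be sketched as a simple dictionary lookup from fine-grained tags onto the twelve universal tags; the few Penn Treebank entries shown are examples only, whereas the published mappings of Petrov et al. (2011) cover each tagset fully.

```python
# Sketch: mapping fine-grained treebank tags onto the twelve universal
# part-of-speech tags. The sample PTB entries are illustrative.
UNIVERSAL = ["NOUN", "VERB", "ADJ", "ADV", "PRON", "DET",
             "ADP", "NUM", "CONJ", "PRT", ".", "X"]

ptb_to_universal = {          # a few Penn Treebank examples
    "NN": "NOUN", "NNS": "NOUN", "VBD": "VERB", "JJ": "ADJ",
    "RB": "ADV", "IN": "ADP", "DT": "DET", ",": ".",
}

def project(tagged_tokens):
    # unknown tags fall back to the catch-all universal tag X
    return [(w, ptb_to_universal.get(t, "X")) for w, t in tagged_tokens]

print(project([("dogs", "NNS"), ("ran", "VBD"), ("fast", "RB")]))
```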
The texts were annotated with the RSTtool.
0
The tool we use is MMAX, which has been specifically designed for marking co-reference.
This topic has been getting more attention, driven by the needs of various NLP applications.
0
Keyword detection error: Even if a keyword consists of a single word, there are words which are not desirable as keywords for a domain.
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, and Spanish texts to English and back.
0
Given the closeness of most systems and the wide over-lapping confidence intervals it is hard to make strong statements about the correlation between human judgements and automatic scoring methods such as BLEU.
This paper discusses KenLM: Faster and Smaller Language Model Queries.
0
The highest-order N-gram array omits backoff and the index, since these are not applicable.
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
Models: To assess the marginal utility of each component of the model (see Section 3), we incrementally increase its sophistication.
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
0
$= p(f_j \mid e) \cdot \max_{\delta,\, e'',\, j' \in C \setminus \{j\}} \big\{ p(j \mid j', J)\; p(\delta)\; p_\delta(e \mid e', e'')\; Q_{e''}(e', C \setminus \{j\}, j') \big\}$. The DP equation is evaluated recursively for each hypothesis $(e', e, C, j)$.
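Dropping the language-model terms for brevity, the recursion can be sketched as a Held-Karp-style dynamic program over coverage sets; the lexicon scores and distortion penalty below are toy stand-ins for $p(f_j \mid e)$ and $p(j \mid j', J)$.

```python
# Simplified sketch of the coverage-set DP: state = (covered positions,
# last covered position), extended one source position at a time, in the
# spirit of the Held-Karp TSP recursion. The paper's full recursion also
# tracks the language-model history (e', e).
import math
from itertools import combinations

J = 3
lex = [0.9, 0.8, 0.7]                  # toy p(f_j | e) per source position
def distortion(j_prev, j):             # toy jump penalty p(j | j', J)
    return math.exp(-abs(j - j_prev - 1))

# Q[(coverage, j)] = best score covering `coverage` and ending at j
Q = {(frozenset([j]), j): lex[j] * distortion(-1, j) for j in range(J)}
for size in range(2, J + 1):
    for cov in map(frozenset, combinations(range(J), size)):
        for j in cov:
            rest = cov - {j}
            Q[(cov, j)] = lex[j] * max(
                distortion(jp, j) * Q[(rest, jp)] for jp in rest
            )

full = frozenset(range(J))
print(max(Q[(full, j)] for j in range(J)))   # best complete-coverage score
```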
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
The negative logarithm of t0 is reported.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
Our work is motivated by the observation that contextual roles can be critically important in determining the referent of a noun phrase.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
name => 1 hanzi family + 1 hanzi given
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed both in a pipeline and jointly with parsing.
0
In the ATB, the token asta'adah is tagged 48 times as a noun and 9 times as a verbal noun.
This paper discusses Unsupervised Models for Named Entity Classification.
0
The final strong hypothesis, denoted $f(x)$, is then the sign of a weighted sum of the weak hypotheses, $f(x) = \mathrm{sign}\left(\sum_{t} \alpha_t h_t(x)\right)$, where the weights $\alpha_t$ are determined during the run of the algorithm, as we describe below.
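A minimal sketch of this weighted vote, with two invented weak hypotheses over strings standing in for the learned $h_t$ and hand-picked weights standing in for the learned $\alpha_t$.

```python
# Sketch: final strong hypothesis f(x) = sign(sum_t alpha_t * h_t(x)).
def strong_hypothesis(x, weak_hypotheses, alphas):
    total = sum(a * h(x) for h, a in zip(weak_hypotheses, alphas))
    return 1 if total >= 0 else -1

# toy weak hypotheses over strings: each returns +1 or -1
h1 = lambda x: 1 if x.istitle() else -1     # capitalized -> likely a name
h2 = lambda x: 1 if len(x) > 3 else -1
print(strong_hypothesis("Smith", [h1, h2], [0.8, 0.3]))   # -> 1
```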
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
Gan's solution depends upon a fairly sophisticated language model that attempts to find valid syntactic, semantic, and lexical relations between objects of various linguistic types (hanzi, words, phrases).
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
The second row represents the performance of the median hyperparameter setting.
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
0
Often, two systems cannot be distinguished with a confidence of over 95%, so they are ranked the same.
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
0
[Table of numbered predecessor and successor coverage sets; row 1: $(\{1, \ldots, m\} \setminus \{l\},\; l') \rightarrow \ldots$]
Two general approaches are presented and two combination techniques are described for each approach.
0
We then show that the combining techniques presented above give better parsing accuracy than any of the individual parsers.
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
We considered using the MUC6 and MUC7 data sets, but their training sets were far too small to learn reliable co-occurrence statistics for a large set of contextual role relationships.
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
Thus our proposed model is a proper model assigning probability mass to all $(\pi, L)$ pairs, where $\pi$ is a parse tree and $L$ is the one and only lattice that a sequence of characters (and spaces) $W$ over our alphabet gives rise to.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed both in a pipeline and jointly with parsing.
0
Certainly these linguistic factors increase the difficulty of syntactic disambiguation.
It is annotated with several kinds of data: morphology, syntax, rhetorical structure, connectors, coreference, and information structure.
0
Rhetorical analysis: We are experimenting with a hybrid statistical and knowledge-based system for discourse parsing and summarization (Stede 2003; Hanneforth et al. 2003), again targeting the genre of commentaries.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
splitPUNC restores the convention of the WSJ.
In this paper, the authors observe that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
0
The authors acknowledge the support of the NSF (CAREER grant IIS-0448168 and grant IIS-0904684).
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
This section measures performance on shared tasks in order of increasing complexity: sparse lookups, evaluating perplexity of a large file, and translation with Moses.
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation.
0
In application settings, this may be a profitable strategy.
Here we present two algorithms.
0
The algorithm builds two classifiers iteratively: each iteration involves minimization of a continuously differentiable function which bounds the number of examples on which the two classifiers disagree.
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
Moses sets the cache size parameter to 50 so we did as well; the resulting cache size is 2.82 GB.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
English was again paired with German, French, and Spanish.
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
In considering the relationship between formalisms, we show that it is useful to abstract away from the details of the formalism and examine the nature of their derivation process as reflected by properties of their trees. We find that several of the formalisms considered can be seen as closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars. On the basis of this observation, we describe a class of formalisms which we call Linear Context-Free Rewriting Systems, and show that they are recognizable in polynomial time and generate only semilinear languages.
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
They are set to fixed constants.
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation.
0
Preprocessing the raw trees improves parsing performance considerably. We first discard all trees dominated by X, which indicates errors and non-linguistic text.
In this paper, the authors observe that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
0
Learned Tag Prior (PRIOR) We next assume there exists a single prior distribution ψ over tag assignments drawn from DIRICHLET(β, K ).
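A sketch of this PRIOR component under stated assumptions: one shared distribution ψ over K tags is drawn from a symmetric Dirichlet, and type-level tag assignments are sampled from it; the values of K and β and the word types are illustrative.

```python
# Sketch: a single prior psi over K tags drawn from Dirichlet(beta),
# shared by all word types. Hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
K, beta = 5, 0.1                        # K tags, sparse symmetric prior
psi = rng.dirichlet(np.full(K, beta))   # one shared distribution over tags

word_types = ["the", "dog", "ran"]
assignments = {w: int(rng.choice(K, p=psi)) for w in word_types}
print(np.round(psi, 3), assignments)
```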
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
Step 3.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
Thus, we feel fairly confident that for the examples we have considered from Gan's study a solution can be incorporated, or at least approximated, within a finite-state framework.
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
There are two key benefits of this model architecture.
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
58 95.
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
To show that the derivation tree set of a TAG is a local set, nodes are labeled by pairs consisting of the name of an elementary tree and the address at which it was adjoined, instead of labelling edges with addresses.
These clusters are computed using an SVD variant without relying on transitional structure.
0
For each cell, the first row corresponds to the result using the best hyperparameter choice, where best is defined by the 1-1 metric.
The authors show that PATB is similar to other treebanks but that annotation consistency remains low.
0
Finally, we provide a realistic evaluation in which segmentation is performed both in a pipeline and jointly with parsing (§6).
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations for entities already mentioned.
0
In sentence (1), McCann can be a person or an organization.
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
0
Second, we treat the projected labels as features in an unsupervised model (§5), rather than using them directly for supervised training.
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective treebanks is often neglected because of the relative scarcity of problematic constructions.
0
In section 4 we evaluate these transformations with respect to projectivized dependency treebanks, and in section 5 they are applied to parser output.
There is no global pruning.
0
6.
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
0
For each extension a new position is added to the coverage set.
Here we present two algorithms.
0
Each $x_{ij}$ is a member of $X$, where $X$ is a set of possible features.
These clusters are computed using an SVD variant without relying on transitional structure.
0
5 67.3 55.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
3 68.4 49.
In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
6 Conclusions.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
The general idea for the knowledge- based part is to have the system use as much information as it can find at its disposal to produce a target representation as specific as possible and as underspecified as necessary.
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
”).
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
When SRILM estimates a model, it sometimes removes n-grams but not n + 1-grams that extend it to the left.
It also incorporates the Good-Turing methodin estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
11a/11b and 14a/14b respectively).
It is probably the first analysis of Arabic parsing of this kind.
0
72 78.
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.
0
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
Despite their simplicity, unigram weights have been shown as an effective feature in segmentation models (Dyer, 2009). The joint parser/segmenter is compared to a pipeline that uses MADA (v3.0), a state-of-the-art Arabic segmenter, configured to replicate ATB segmentation (Habash and Rambow, 2005).
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
6 Results and Analysis.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
Assuming unseen objects within each class are equiprobable, their probabilities are given by the Good-Turing theorem as $p_0^{cls} \propto \frac{E(n_1^{cls})}{N \cdot E(N_0^{cls})}$ (2), where $p_0^{cls}$ is the probability of one unseen hanzi in class $cls$, $E(n_1^{cls})$ is the expected number of hanzi in $cls$ seen once, $N$ is the total number of hanzi, and $E(N_0^{cls})$ is the expected number of unseen hanzi in class $cls$.
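Reading the expectations as observed counts, equation (2) can be sketched directly; the counts below are invented, not the paper's.

```python
# Sketch of equation (2): probability of one unseen hanzi in class cls,
# p0 ∝ E(n1_cls) / (N * E(N0_cls)), with expectations approximated by
# observed counts. All numbers are illustrative.
def unseen_prob(n1_cls, N, N0_cls):
    """n1_cls: hanzi in cls seen once; N: total hanzi count;
    N0_cls: number of unseen hanzi in cls."""
    return n1_cls / (N * N0_cls)

print(unseen_prob(n1_cls=120, N=1_000_000, N0_cls=500))
```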
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
0
For the experiments, we use a simple preprocessing step.
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers.
0
For robustness reasons, the parser may output a set of dependency trees instead of a single tree. [...] dependency type features are limited to tokens on the stack.