source_text: string (lengths 27 to 368)
label: int64 (values 0 or 1)
target_text: string (lengths 1 to 5.38k)
The model draws on various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
Note that hanzi that are not grouped into dictionary words (and are not identified as single-hanzi words), or into one of the other categories of words discussed in this paper, are left unattached and tagged as unknown words.
Here we present two algorithms.
0
(Specifically, the limit n starts at 5 and increases by 5 at each iteration.)
Manual evaluation, scoring translations on a graded scale from 1 to 5, seemed to be very hard to perform.
0
The test data was again drawn from a segment of the Europarl corpus from the fourth quarter of 2000, which is excluded from the training data.
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
0
Our smoothing procedure takes into account all the aforementioned aspects and works as follows.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
There are two weaknesses in Chang et al.'s model, which we improve upon.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
So the success of the algorithm may well be due to its success in maximizing the number of unlabeled examples on which the two decision lists agree.
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
If evidence indicates that hypotheses C and D are less likely than hypotheses A and B, then probabilities are redistributed to reflect the fact that {A, B} is more likely to contain the answer than {C, D}.
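This redistribution follows Dempster's rule of combination; below is a minimal Python sketch, with the hypothesis names and mass values chosen purely for illustration (they are not from the paper).

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule: combine two mass functions over frozensets of hypotheses."""
    combined, conflict = {}, 0.0
    for (s1, v1), (s2, v2) in product(m1.items(), m2.items()):
        inter = s1 & s2
        if inter:
            combined[inter] = combined.get(inter, 0.0) + v1 * v2
        else:
            conflict += v1 * v2  # mass assigned to contradictory evidence
    # Normalize by the non-conflicting mass.
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

theta = frozenset("ABCD")          # all candidate hypotheses start in theta
prior = {theta: 1.0}               # initially every hypothesis is equally plausible
evidence = {frozenset("AB"): 0.7,  # evidence favoring {A, B}
            theta: 0.3}            # remaining mass stays uncommitted
print(combine(prior, evidence))    # mass shifts toward {A, B}
```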
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
Past work, however, has typically associated these features with token occurrences, typically in an HMM: $P(T)\,P(W \mid T) = \prod_{i=1}^{n} P(T_i)\,P(W_i \mid T_i)$.
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
0
Also, the argument has been made that machine translation performance should be evaluated via task-based evaluation metrics, i.e. how much it assists performing a useful task, such as supporting human translators or aiding the analysis of texts.
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
Finally, a Dempster-Shafer probabilistic model evaluates the evidence provided by the knowledge sources for all candidate antecedents and makes the final resolution decision.
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
0
Fall.
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation.
0
The ATB segmentation scheme is one of many alternatives.
This corpus has several advantages: it is annotated at different levels.
0
While RST (Mann, Thompson 1988) proposed that a single relation hold between adjacent text segments, SDRT (Asher, Lascarides 2003) maintains that multiple relations may hold simultaneously.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
Precision.
BABAR performed successfully in both the terrorism and natural disaster domains, and contextual-role knowledge proved particularly helpful for resolving pronouns.
0
We combined evidence from four contextual role knowledge sources with evidence from seven general knowledge sources using a Dempster-Shafer probabilistic model.
This paper talks about Unsupervised Models for Named Entity Classification.
0
Given around 90,000 unlabeled examples, the methods described in this paper classify names with over 91% accuracy.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
The ATB segmentation scheme is one of many alternatives.
They plan on extending instance-weighting to other standard SMT components and capturing the degree of generality of phrase pairs.
0
It was filtered to retain the top 30 translations for each source phrase using the TM part of the current log-linear model.
They have made use of local and global features to deal with the instances of the same token in a document.
0
A Person-Prefix-List is compiled in an analogous way.
This paper talks about KenLM: Faster and Smaller Language Model Queries.
1
KenLM: Faster and Smaller Language Model Queries
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
It is interesting to note, however, that the ability to produce a bounded number of dependent paths (where two dependent paths can share an unbounded amount of information) does not require machinery as powerful as that used in LFG, FUG and IG's.
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective tree banks is often neglected because of the relative scarcity of problematic constructions.
0
The baseline simply retains the original labels for all arcs, regardless of whether they have been lifted or not, and the number of distinct labels is therefore simply the number n of distinct dependency types. In the first encoding scheme, called Head, we use a new label d↑h for each lifted arc, where d is the dependency relation between the syntactic head and the dependent in the non-projective representation, and h is the dependency relation that the syntactic head has to its own head in the underlying structure.
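A minimal sketch of the Head encoding described above; the array-based tree representation and the `|` separator (standing in for the paper's d↑h arrow) are assumptions for illustration, not Nivre and Nilsson's actual data structures.

```python
def head_encode(heads, deprels, lifted):
    """Relabel lifted arcs with d|h, where d is the dependent's relation in the
    original (non-projective) tree and h is the relation its syntactic head
    bears to its own head (the Head scheme). heads[i] is the index of token
    i's head in the non-projective tree (-1 for the root)."""
    labels = []
    for i, d in enumerate(deprels):
        if i in lifted:
            h = deprels[heads[i]]      # the syntactic head's own relation
            labels.append(f"{d}|{h}")  # stand-in for the paper's d-arrow-h label
        else:
            labels.append(d)
    return labels

# Toy tree in which token 0 was lifted during projectivization.
heads   = [2, 2, -1, 2]
deprels = ["det", "amod", "root", "punct"]
print(head_encode(heads, deprels, lifted={0}))  # ['det|root', 'amod', 'root', 'punct']
```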
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
Denote by $g_j(x) = \sum_t \alpha_t h_t(x)$, $j \in \{1, 2\}$, the unthresholded strong-hypothesis (i.e., $f_j(x) = \mathrm{sign}(g_j(x))$).
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.
0
Standard SMT systems have a hierarchical parameter structure: top-level log-linear weights are used to combine a small set of complex features, interpreted as log probabilities, many of which have their own internal parameters and objectives.
There is no global pruning.
0
The monotone search performs worst in terms of both error rates mWER and SSER.
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
The features we used can be divided into 2 classes: local and global.
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
0
A receives a votes, and B receives b votes.
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure.
0
The general-language features have a slight advantage over the similarity features, and both are better than the SVM feature.
These clusters are computed using an SVD variant without relying on transitional structure.
0
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
Initially, the Dempster-Shafer model assumes that all hypotheses are equally likely, so it creates a set called θ that includes all hypotheses.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
Interpolation search is therefore a form of binary search with better estimates informed by the uniform key distribution.
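A minimal Python sketch of interpolation search over a sorted array of integer keys; it illustrates the idea only, not KenLM's actual C++ implementation.

```python
def interpolation_search(keys, target):
    """Like binary search, but pick the pivot by interpolation: assuming keys
    are roughly uniformly distributed, target's position is proportional to
    where it falls between keys[lo] and keys[hi]."""
    lo, hi = 0, len(keys) - 1
    while lo <= hi and keys[lo] <= target <= keys[hi]:
        if keys[hi] == keys[lo]:
            break
        # Linear interpolation instead of the midpoint.
        mid = lo + (target - keys[lo]) * (hi - lo) // (keys[hi] - keys[lo])
        if keys[mid] == target:
            return mid
        if keys[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return lo if lo <= hi and keys[lo] == target else -1

print(interpolation_search([2, 11, 23, 42, 57, 80, 95], 42))  # 3
```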
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
We have provided methods for handling certain classes of unknown words, and models for other classes could be provided, as we have noted.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
This causes a problem for reverse trie implementations, including SRILM itself, because it leaves n+1-grams without an n-gram node pointing to them.
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
Table 4 shows translation results for the three approaches.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
In many cases these failures in recall would be fixed by having better estimates of the actual probabilities of single-hanzi words, since our estimates are often inflated.
A beam search concept is applied as in speech recognition.
0
(S, C, j): Not only the coverage set C and the positions j, j′, but also the verbgroup states S, S′ are taken into account.
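A schematic sketch of extending such DP states; it keeps only the coverage set C and the last position j (omitting the verbgroup states S, S′), and score_fn is an assumed placeholder for the translation and reordering scores, not the paper's exact recursion.

```python
from collections import defaultdict

def extend_states(states, n_source, score_fn):
    """One expansion step over DP states (C, j): C is the frozenset of covered
    source positions and j the last covered position. score_fn(j, j2) stands
    in for the score of jumping from position j to j2."""
    new_states = defaultdict(lambda: float("-inf"))
    for (C, j), score in states.items():
        for j2 in range(n_source):
            if j2 in C:
                continue                   # cover each source position exactly once
            cand = score + score_fn(j, j2)
            key = (C | {j2}, j2)
            if cand > new_states[key]:
                new_states[key] = cand     # recombination: keep the best per state
    return dict(new_states)

# Start state: nothing covered yet, with j = -1 as a sentinel.
start = {(frozenset(), -1): 0.0}
```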
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures.
0
Finally, since non-projective constructions often involve long-distance dependencies, the problem is closely related to the recovery of empty categories and non-local dependencies in constituency-based parsing (Johnson, 2002; Dienes and Dubey, 2003; Jijkoun and de Rijke, 2004; Cahill et al., 2004; Levy and Manning, 2004; Campbell, 2004).
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation.
0
Table 1 shows four words whose unvocalized surface forms are indistinguishable.
The texts were annotated with the RSTtool.
0
We respond to this on the one hand with a format for its underspecification (see 2.4) and on the other hand with an additional level of annotation that attends only to connectives and their scopes (see 2.5), which is intended as an intermediate step on the long road towards a systematic and objective treatment of rhetorical structure.
They focused on phrases which connect two Named Entities, and proceeded in two stages.
0
They cluster NE instance pairs based on the words in the contexts using a bag- of-words method.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
Confidence Interval: Since BLEU scores are not computed on the sentence level, traditional methods to compute statistical significance and confidence intervals do not apply.
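Bootstrap resampling over test sentences is the standard remedy here (Koehn, 2004). A minimal sketch, where sentence_stats and corpus_bleu are assumed helpers holding per-sentence n-gram statistics and aggregating them into a corpus-level score.

```python
import random

def bleu_confidence_interval(sentence_stats, corpus_bleu, n_samples=1000, alpha=0.05):
    """Estimate a (1 - alpha) confidence interval for corpus-level BLEU by
    resampling test sentences with replacement and re-scoring each sample."""
    scores = []
    for _ in range(n_samples):
        resample = random.choices(sentence_stats, k=len(sentence_stats))
        scores.append(corpus_bleu(resample))
    scores.sort()
    return (scores[int(n_samples * alpha / 2)],
            scores[int(n_samples * (1 - alpha / 2)) - 1])
```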
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
0
Using a treebank grammar, a data-driven lexicon, and a linguistically motivated unknown-tokens handling technique, our model outperforms previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
This way, we cannot draw a clear distinction between system performance.
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
The best analysis of the corpus is taken to be the true analysis, the frequencies are re-estimated, and the algorithm is repeated until it converges.
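The loop described is a hard (Viterbi) EM procedure; a schematic sketch follows, where decode_best, reestimate, and log_prob are assumed model interfaces rather than the paper's code.

```python
def viterbi_reestimate(corpus, model, max_iters=50, tol=1e-6):
    """Hard (Viterbi) EM: take the current best analysis of the corpus as the
    true analysis, re-estimate frequencies from it, and repeat to convergence."""
    prev_ll = float("-inf")
    for _ in range(max_iters):
        analyses = [model.decode_best(text) for text in corpus]
        model.reestimate(analyses)                     # re-fit frequencies
        ll = sum(model.log_prob(a) for a in analyses)  # corpus log-likelihood
        if ll - prev_ll < tol:                         # likelihood stopped rising
            break
        prev_ll = ll
    return model
```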
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.
0
This vector tx is constructed for every word in the foreign vocabulary and will be used to provide features for the unsupervised foreign language POS tagger.
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results.
0
Each connected path (l1 … lk) ∈ L corresponds to one morphological segmentation possibility of W. Given a sequence of input tokens W = w1 … wn and a morphological analyzer, we look for the most probable parse tree π.
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.
0
Realizing gains in practice can be challenging, however, particularly when the target domain is distant from the background data.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
Figure 4: Input lattice (top) and two segmentations (bottom) of the sentence 'How do you say octopus in Japanese?'.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
Our work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not.
0
A similar maximumlikelihood approach was used by Foster and Kuhn (2007), but for language models only.
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
(1998) did make use of information from the whole document.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
Note that in our formalism a weak hypothesis can abstain.
These clusters are computed using an SVD variant without relying on transitional structure.
0
For instance, by altering the emission distribution parameters, Johnson (2007) encourages the model to put most of the probability mass on few tags.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using a caseframe network for anaphora resolution, and used information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
However, lexically similar NPs usually refer to the same entity in two cases: proper names and existential noun phrases.
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
The test documents contained 322 anaphoric links.
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
0
The BLEU score has been shown to correlate well with human judgement, when statistical machine translation systems are compared (Doddington, 2002; Przybocki, 2004; Li, 2005).
The approach has been successfully tested on the 8 000-word Verbmobil task.
0
The word joining is done on the basis of a likelihood criterion.
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
Our coreference resolver also incorporates an existential noun phrase recognizer and a Dempster-Shafer probabilistic model to make resolution decisions.
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique, their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
0
The joint morphological and syntactic hypothesis was first discussed in (Tsarfaty, 2006; Tsarfaty and Sima’an, 2004) and empirically explored in (Tsarfaty, 2006).
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation.
0
Historically, Arabic grammar has identified two sentence types: those that begin with a nominal, and those that begin with a verb.
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
The accuracies for links were 73% and 86% on the two evaluated domains.
Replacing this with a ranked evaluation seems to be more suitable.
0
While we used the standard metrics of the community, the way we presented translations and prompted for assessment differed from other evaluation campaigns.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
It is annotated with several kinds of data: morphology, syntax, rhetorical structure, connectives, coreference and information structure.
0
On the other hand, we are interested in the application of rhetorical analysis or ‘discourse parsing’ (3.2 and 3.3), in text generation (3.4), and in exploiting the corpus for the development of improved models of discourse structure (3.5).
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
A defining characteristic of MSA is the prevalence of discourse markers to connect and subordinate words and phrases (Ryding, 2005).
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back.
0
Given the limited number of judgements we received, we did not try to evaluate this.
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
0
In Section 4, we present the performance measures used and give translation results on the Verbmobil task.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
A Brief Introduction to the Chinese Writing System Most readers will undoubtedly be at least somewhat familiar with the nature of the Chinese writing system, but there are enough common misunderstandings that it is as well to spend a few paragraphs on properties of the Chinese script that will be relevant to topics discussed in this paper.
They showed that it was useful to abstract away from the details of the formalism and examine the nature of the derivation process as reflected by properties of the derivation trees, finding that several of the formalisms considered can be seen as closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
Since every CFL is known to be semilinear (Parikh, 1966), in order to show semilinearity of some language, we need only show the existence of a letter equivalent CFL. Our definition of LCFRS's insists that the composition operations are linear and nonerasing.
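The argument leans on two standard definitions, restated here for clarity; the notation (the Parikh image ψ) is standard but not quoted from the paper.

```latex
% Parikh image: map a word to its vector of symbol counts over \Sigma = \{a_1, \ldots, a_n\}.
\psi(w) = \bigl(\#_{a_1}(w), \ldots, \#_{a_n}(w)\bigr) \in \mathbb{N}^n,
\qquad
\psi(L) = \{\psi(w) : w \in L\}.
% L is semilinear iff \psi(L) is a finite union of linear sets
%   \{v_0 + k_1 v_1 + \cdots + k_m v_m : k_i \in \mathbb{N}\}.
% Letter equivalence: L_1 and L_2 are letter equivalent iff \psi(L_1) = \psi(L_2).
% Parikh (1966): every CFL has a semilinear Parikh image. Hence, if L is letter
% equivalent to some CFL L', then \psi(L) = \psi(L') is semilinear, and so is L.
```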
They found replacing it with a ranked evaluation to be more suitable.
0
We asked participants to each judge 200–300 sentences in terms of fluency and adequacy, the most commonly used manual evaluation metrics.
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
0
Furthermore we expect the label distributions on the foreign side to be fairly noisy, because the graph constraints have not been taken into account yet.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
This PCFG is incorporated into the Stanford Parser, a factored model that chooses a 1-best parse from the product of constituency and dependency parses.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
For example, syntactic decoders (Koehn et al., 2007; Dyer et al., 2010; Li et al., 2009) perform dynamic programming parametrized by both backward- and forward-looking state.
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language.
0
To establish a soft correspondence between the two languages, we use a second similarity function, which leverages standard unsupervised word alignment statistics (§3.3).3 Since we have no labeled foreign data, our goal is to project syntactic information from the English side to the foreign side.
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
0
A less canonical representation of segmental morphology is triggered by a morpho-phonological process of omitting the definite article h when occurring after the particles b or l. This process triggers ambiguity as for the definiteness status of Nouns following these particles. We refer to such cases in which the concatenation of elements does not strictly correspond to the original surface form as super-segmental morphology.
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be.
0
Mixing, smoothing, and instance-feature weights are learned at the same time using an efficient maximum-likelihood procedure that relies on only a small in-domain development corpus.
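A schematic sketch of the general shape of such instance weighting: out-of-domain phrase-pair counts are scaled by a learned per-pair relevance weight before entering a relative-frequency estimate. The sigmoid weighting form, the feature map, and the beta mixing parameter are illustrative assumptions, not the paper's exact parameterization; in the paper all such parameters are fit by maximum likelihood on a small in-domain development corpus.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def instance_weighted_prob(src, tgt, targets, c_in, c_out, feats, w, beta=1.0):
    """Schematic instance-weighted relative-frequency estimate: out-of-domain
    counts (c_out) are scaled by a learned weight sigma(w . f) before being
    mixed with in-domain counts (c_in). feats maps a phrase pair to its
    feature vector; targets lists the candidate targets for src."""
    def weighted_count(s, t):
        lam = sigmoid(sum(wi * fi for wi, fi in zip(w, feats[(s, t)])))
        return c_in.get((s, t), 0.0) + beta * lam * c_out.get((s, t), 0.0)
    return weighted_count(src, tgt) / sum(weighted_count(src, t) for t in targets)
```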
In this paper, the authors are of the opinion that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
0
Specifically, the lexicon is generated as: $P(T, W \mid \psi) = P(T)\,P(W \mid T)$. Word Type Features (FEATS): Past unsupervised POS work has derived benefits from features on word types, such as suffix and capitalization features (Hasan and Ng, 2009; Berg-Kirkpatrick et al., 2010).
This paper conducted research in the area of automatic paraphrase discovery.
0
So, it is too costly to make IE technology “open-domain” or “on-demand” like IR or QA.
Replacing this with a ranked evaluation seems to be more suitable.
0
The judgements tend to be done more in the form of a ranking of the different systems.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
It may seem surprising to some readers that the interhuman agreement scores reported here are so low.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
After adding a ROOT node to all trees, we train a grammar using six split-and- merge cycles and no Markovization.
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
Morphological segmentation decisions in our model are delegated to a lexeme-based PCFG and we show that using a simple treebank grammar, a data-driven lexicon, and a linguistically motivated unknown-tokens handling, our model outperforms (Tsarfaty, 2006) and (Cohen and Smith, 2007) on the joint task and achieves state-of-the-art results on a par with current respective standalone models.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
Compared with SRILM, IRSTLM adds several features: lower memory consumption, a binary file format with memory mapping, caching to increase speed, and quantization.
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
Otherwise, label the training data with the combined spelling/contextual decision list, then induce a final decision list from the labeled examples where all rules (regardless of strength) are added to the decision list.
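A minimal sketch of this final induction step; apply_rules and induce are assumed helpers (not the paper's code), and the abstention check mirrors the formalism's allowance for a weak hypothesis to abstain.

```python
def induce_final_list(unlabeled, apply_rules, induce):
    """Final bootstrapping step (schematic): label the unlabeled examples with
    the combined spelling/contextual decision list, then induce one final
    decision list from them, adding all rules regardless of strength."""
    labeled = [(x, apply_rules(x)) for x in unlabeled]
    labeled = [(x, y) for x, y in labeled if y is not None]  # rules may abstain
    return induce(labeled, min_strength=0.0)  # no threshold on rule strength
```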
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
There may be occasionally a system clearly at the top or at the bottom, but most systems are so close that it is hard to distinguish them.
Their results show that their high-performance NER uses less training data than other systems.
0
MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data.
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
A Hebrew surface token may have several readings, each of which corresponds to a sequence of segments and their corresponding PoS tags.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
The differences in difficulty are better reflected in the BLEU scores than in the raw un-normalized manual judgements.
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
0
They demonstrated this with the comparison of statistical systems against (a) manually post-edited MT output, and (b) a rule-based commercial system.
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
On the other hand, only very restricted reorderings are necessary, e.g. for this translation direction. (Table 2: Coverage set hypothesis extensions for the IBM reordering.)
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.
0
We then extract a set of possible tags tx(y) by eliminating labels whose probability is below a threshold value τ; we describe how we choose τ in §6.4.
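A minimal sketch of this pruning step; the tag names and the example value of τ are illustrative only (the paper tunes τ as described in its §6.4).

```python
def extract_tag_set(tag_dist, tau=0.1):
    """Keep only tags whose projected probability clears the threshold tau."""
    return {tag for tag, p in tag_dist.items() if p >= tau}

print(extract_tag_set({"NOUN": 0.62, "VERB": 0.30, "ADJ": 0.08}))  # {'NOUN', 'VERB'}
```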
In this paper, the authors are of the opinion that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
0
(2009) also report results on English, but on the reduced 17 tag set, which is not comparable to ours.
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages.
0
While there might be some controversy about the exact definition of such a tagset, these 12 categories cover the most frequent part-of-speech and exist in one form or another in all of the languages that we studied.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
The only way to handle such phenomena within the framework described here is simply to expand out the reduplicated forms beforehand, and incorporate the expanded forms into the lexical transducer.
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
A more rigid mechanism for modeling sparsity is proposed by Ravi and Knight (2009), who minimize the size of tagging grammar as measured by the number of transition types.
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
At this step, we will try to link those sets, and put them into a single cluster.
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique, their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
0
In addition we report for each model its performance on gold-segmented input (GS) to indicate the upper bound for the grammars' performance on the parsing task. (Overt definiteness errors may be seen as a wrong feature rather than as a wrong constituent, and it is by now an accepted standard to report accuracy with and without such errors.)
These clusters are computed using an SVD variant without relying on transitional structure.
0
token-level HMM to reflect lexicon sparsity.
The AdaBoost algorithm was developed for supervised learning.
0
By this assumption, each element x ∈ X can also be represented as (x1, x2) ∈ X1 × X2.