Columns: source_text (string, 27 to 368 characters); label (int64, 0 or 1); target_text (string, 1 to 5.38k characters).
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names (a standard form of the estimate is sketched below).
0
As we have seen, the lexicon of basic words and stems is represented as a WFST; most arcs in this WFST represent mappings between hanzi and pronunciations, and are costless.
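A note on the Good-Turing method mentioned above: in its standard form (not necessarily the exact variant used in this system), an observed count r is discounted to

    r^* = (r + 1) \frac{N_{r+1}}{N_r}

where N_r is the number of distinct items observed exactly r times; the leftover probability mass N_1 / N is reserved for previously unseen items.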
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech.
0
This leads to the best reported performance for robust non-projective parsing of Czech.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
We include a list of per-category results for selected phrasal labels, POS tags, and dependencies in Table 8.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
Both of these analyses are shown in Figure 4; fortunately, the correct analysis is also the one with the lowest cost, so it is this analysis that is chosen.
This topic has been getting more attention, driven by the needs of various NLP applications.
0
We are focusing on phrases which have two Named Entities (NEs), as those types of phrases are very important for IE applications.
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
We refer to a segment and its assigned PoS tag as a lexeme, and so analyses are in fact sequences of lexemes.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
Both implementations employ a state object, opaque to the application, that carries information from one query to the next; we discuss both further in Section 4.2.
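Sketched below is what such a stateful query interface can look like; the names and the simplified backoff are hypothetical, and the real toolkits' APIs differ.

    # Minimal sketch of a stateful n-gram query interface: the opaque state
    # carries the (n-1)-word history from one query to the next, so the
    # application never rebuilds or re-hashes the context itself.
    class LMState:
        def __init__(self, context=()):
            self.context = context  # tuple of up to (order - 1) preceding words

    class NGramModel:
        def __init__(self, order, logprobs):
            self.order = order          # n in n-gram
            self.logprobs = logprobs    # dict: tuple of words -> log10 probability

        def begin_state(self):
            return LMState(("<s>",))

        def score(self, state, word):
            # Back off to shorter histories until an entry is found.
            context = state.context
            while context and (context + (word,)) not in self.logprobs:
                context = context[1:]
            lp = self.logprobs.get(context + (word,), -99.0)  # OOV floor
            new_context = (state.context + (word,))[-(self.order - 1):]
            return lp, LMState(new_context)

The caller threads the returned state through successive calls, e.g. lp, state = model.score(state, "hello") followed by lp, state = model.score(state, "world").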
The resulting model is compact, efficiently learnable and linguistically expressive.
0
Recent research has demonstrated that incorporating this sparsity constraint improves tagging accuracy.
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective treebanks is often neglected because of the relative scarcity of problematic constructions.
0
The graphs satisfy all the well-formedness conditions given in section 2 except (possibly) connectedness.
The approach has been successfully tested on the 8 000-word Verbmobil task.
0
For each extension a new position is added to the coverage set.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
Hyperparameter settings are sorted according to the median one-to-one metric over runs.
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
In the last example, the less restrictive IbmS word reordering leads to a better translation, although the QmS translation is still acceptable.
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
For each caseframe, BABAR collects the head nouns of noun phrases that were extracted by the caseframe in the training corpus.
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
Values in the trie are minimally sized at the bit level, improving memory consumption over trie implementations in SRILM, IRSTLM, and BerkeleyLM.
They have made use of local and global features to deal with the instances of the same token in a document.
0
Similarly, the tokens and are tested against each list, and if found, a corresponding feature will be set to 1.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
Much confusion has been sown about Chinese writing by the use of the term ideograph, suggesting that hanzi somehow directly represent ideas.
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
0
The average fluency judgement per judge ranged from 2.33 to 3.67, and the average adequacy judgement ranged from 2.56 to 4.13.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
Chang of Tsinghua University, Taiwan, R.O.C., for kindly providing us with the name corpora.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
An example of a fairly low-level relation is the affix relation, which holds between a stem morpheme and an affix morpheme, such as the plural suffix -men (PL).
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
This method, one instance of which we term the "greedy algorithm" in our evaluation of our own system in Section 5, involves starting at the beginning (or end) of the sentence, finding the longest word starting (ending) at that point, and then repeating the process starting at the next (previous) hanzi until the end (beginning) of the sentence is reached.
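A minimal sketch of the left-to-right variant, with a toy dictionary and a single-character fallback; this is illustrative, not the paper's implementation.

    # Greedy ("maximum matching") segmentation: at each position take the
    # longest dictionary word that starts there, then repeat from the next
    # position; fall back to a single character when nothing matches.
    def greedy_segment(text, dictionary):
        max_len = max(len(w) for w in dictionary)
        i, words = 0, []
        while i < len(text):
            for j in range(min(len(text), i + max_len), i, -1):
                if text[i:j] in dictionary or j == i + 1:
                    words.append(text[i:j])
                    i = j
                    break
        return words

For example, greedy_segment("abcd", {"ab", "abc", "d"}) returns ["abc", "d"], because the longest dictionary match at position 0 is "abc".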
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
0
It is well known that language pairs such as English-German pose more challenges to machine translation systems than language pairs such as French-English.
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
0
Table 5: Effect of the beam threshold on the number of search errors (147 sentences).
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results.
0
Such ambiguities cause discrepancies between token boundaries (indexed as white spaces) and constituent boundaries (imposed by syntactic categories) with respect to a surface form.
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
0
The accuracy results for segmentation, tagging and parsing using our different models and our standard data split are summarized in Table 1.
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
0
While there are other obstacles to completing this idea, we believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
Manual annotation results in human-interpretable grammars that can inform future treebank annotation decisions.
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform.
0
Since all long sentence translations are somewhat muddled, even a contrastive evaluation between systems was difficult.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
The cost estimate, cost(·), is computed in the obvious way by summing the negative log probabilities of its component parts.
These clusters are computed using an SVD variant without relying on transitional structure.
0
In this paper, we make a simplifying assumption of one-tag-per-word.
Instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
0
The original OUT counts c_o(s, t) are weighted by a logistic function w_λ(s, t). To motivate weighting joint OUT counts as in (6), we begin with the “ideal” objective for setting multinomial phrase probabilities θ = {p(s|t)}, which is the likelihood with respect to the true IN distribution p_I(s, t).
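As a sketch of the count weighting in (6): a logistic function of phrase-pair features scales each OUT count into (0, 1) before relative-frequency estimation. The feature vector below is illustrative, not the paper's exact feature set.

    import math

    def w_lambda(features, lam):
        # Logistic weight in (0, 1) for one phrase pair (s, t).
        z = sum(l * f for l, f in zip(lam, features))
        return 1.0 / (1.0 + math.exp(-z))

    def weighted_count(c_out, features, lam):
        # c~(s, t) = w_lambda(s, t) * c_o(s, t), as in (6)
        return w_lambda(features, lam) * c_out

The weighted counts then replace the raw counts in the relative-frequency estimate p(s|t) = c~(s, t) / sum over s' of c~(s', t).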
BABAR performed successfully in both the terrorism and natural disaster domains, and its contextual-role knowledge proved especially helpful for resolving pronouns.
0
If so, the CF Network reports that the anaphor and candidate may be coreferent.
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures.
0
In practice, we can therefore expect a trade-off such that increasing the amount of information encoded in arc labels will cause an increase in the accuracy of the inverse transformation but a decrease in the accuracy with which the parser can construct the labeled representations.
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
The first probability is estimated from a name count in a text database, and the rest of the probabilities are estimated from a large list of personal names. Note that in Chang et al.'s model the probability of rule 9 is estimated as the product of the probability of finding G1 in the first position of a two-hanzi given name and the probability of finding G2 in the second position of a two-hanzi given name, and we use essentially the same estimate here, with some modifications as described later on.
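Written out, the positional estimate for a two-hanzi given name G1 G2 is a product of relative frequencies (our notation, not the paper's):

    p(G_1 G_2) \approx \hat{p}(G_1 \mid \text{pos} = 1) \cdot \hat{p}(G_2 \mid \text{pos} = 2)

with each factor estimated as a count from the personal-name list divided by the number of listed names.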
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
The restriction can be expressed in terms of the number of uncovered source sentence positions to the left of the rightmost position m in the coverage set.
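A minimal sketch of this bookkeeping, assuming a set-of-positions representation: extending a hypothesis adds one uncovered source position, and the restriction bounds how many uncovered positions may remain to the left of the rightmost covered position m. The window size is an illustrative parameter, not the paper's setting.

    # Enumerate legal extensions of a partial hypothesis's coverage set.
    def extensions(coverage, J, max_skipped=3):
        for j in range(1, J + 1):
            if j in coverage:
                continue
            new_cov = coverage | {j}
            m = max(new_cov)
            # Count uncovered source positions to the left of the rightmost
            # covered position m; prune if the reordering window is exceeded.
            uncovered_left = sum(1 for k in range(1, m) if k not in new_cov)
            if uncovered_left <= max_skipped:
                yield j, frozenset(new_cov)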
BABAR performed successfully in both the terrorism and natural disaster domains, and its contextual-role knowledge proved especially helpful for resolving pronouns.
0
One knowledge source, called WordSemCFSem, is analogous to CFLex: it checks whether the anaphor and candidate antecedent are substitutable for one another, but based on their semantic classes instead of the words themselves.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
Wu and Fung introduce an evaluation method they call nk-blind.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
One implementation issue deserves some elaboration.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
To lower the barrier of entrance to the competition, we provided a complete baseline MT system, along with data resources.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
It was our hope that this competition, which included the manual and automatic evaluation of statistical systems and one rule-based commercial system, would give further insight into the relation between automatic and manual evaluation.
They have made use of local and global features to deal with the instances of the same token in a document.
0
Same for . In the case where the next token is a hyphen, then is also used as a feature: (initCaps, ) is set to 1.
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
We thank Felix Hageloh (Hageloh, 2006) for providing us with this version.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
gao1xing4 'happy' => gao1gao1xing4xing4 (AABB reduplication)
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
0
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Ma¨rkische Allgemeine Zeitung, a German regional daily.
0
• Bridging links: the annotator is asked to specify the type as part-whole, cause-effect (e.g., She had an accident.
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
We also mark all nodes that dominate an SVO configuration (containsSVO).
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
0
Label propagation can then be used to transfer the labels to the peripheral foreign vertices (i.e. the ones adjacent to the English vertices) first, and then among all of the foreign vertices (§4).
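A toy sketch of the propagation step under these assumptions: seed (English-aligned) vertices keep their tag distributions fixed, and every other vertex repeatedly averages its neighbours. Uniform edge weights are assumed here; the actual graph weights edges by similarity.

    # labels: dict vertex -> dict tag -> prob; seeds are the vertices that
    # already carry a distribution when propagation starts.
    def propagate(neighbors, labels, iterations=10):
        seeds = set(labels)
        for _ in range(iterations):
            updates = {}
            for v, nbrs in neighbors.items():
                if v in seeds:
                    continue  # seed distributions stay fixed
                mix = {}
                for u in nbrs:
                    for tag, p in labels.get(u, {}).items():
                        mix[tag] = mix.get(tag, 0.0) + p
                total = sum(mix.values())
                if total > 0:
                    updates[v] = {t: p / total for t, p in mix.items()}
            labels.update(updates)
        return labels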
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
Therefore, a populated probing hash table consists of an array of buckets that contain either one entry or are empty.
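To make that bucket layout concrete, here is a toy linear-probing table with fixed capacity and no deletion; nothing here is KenLM's actual code.

    # One flat array of buckets, each either empty (None) or holding a single
    # (key, value) entry; collisions walk forward one bucket at a time.
    class ProbingTable:
        def __init__(self, capacity):
            self.buckets = [None] * capacity

        def insert(self, key, value):
            i = hash(key) % len(self.buckets)
            while self.buckets[i] is not None:
                i = (i + 1) % len(self.buckets)
            self.buckets[i] = (key, value)

        def lookup(self, key):
            i = hash(key) % len(self.buckets)
            while self.buckets[i] is not None:
                if self.buckets[i][0] == key:
                    return self.buckets[i][1]
                i = (i + 1) % len(self.buckets)
            return None  # reached an empty bucket: key is absent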
In this paper, the authors are of the opinion that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
0
A graphical depiction of our model as well as a summary of random variables and parameters can be found in Figure 1.
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
The judgements tend to be done more in the form of a ranking of the different systems.
This assumption, however, is not inherent to type-based tagging models.
0
Past work, however, has typically associated these features with token occurrences, typically in an HMM.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
To see this, note that the first two terms in the above equation correspond to the function that AdaBoost attempts to minimize in the standard supervised setting (Equ.
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order, all of which complicate syntactic disambiguation.
0
Here we show how non-projective dependency parsing can be achieved by combining a data driven projective parser with special graph transformation techniques.
0
The details of the transformation procedure are slightly different depending on the encoding scheme: in one case the target arc must have the form wl → wm and, if no target arc is found, Head is used as backoff; in the other the target arc must have the form wl → wm with no outgoing arcs of the form wm → wo, and there is no backoff.
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
Since the early days of statistical NLP, researchers have observed that a part-of-speech tag distribution exhibits “one tag per discourse” sparsity — words are likely to select a single predominant tag in a corpus, even when several tags are possible.
They showed that it was useful to abstract away from the details of the formalism and examine the nature of the derivation process as reflected by properties of the trees, finding that several of the formalisms considered can be seen as closely related, since they have derivation tree sets with the same structure as those produced by context-free grammars.
0
Some of the operations will be constant functions, corresponding to elementary structures, and will be written as f() = z, where each z is a constant: a string of terminal symbols a1 ... an.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
A more forceful approach for encoding sparsity is posterior regularization, which constrains the posterior to have a small number of expected tag assignments (Graça et al., 2009).
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.
0
The easiest language pair according to BLEU (English-French: 28.33) received worse manual scores than the hardest (English-German: 14.01).
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
Clearly the percentage of productively formed words is quite small (for this particular corpus), meaning that dictionary entries are covering most of the cases. GR is .73, or 96%.
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
(Step 3: sets of phrases based on keywords; Step 4: links between sets of phrases.) All the contexts collected for a given domain are gathered in a bag and the TF/ITF scores are calculated for all the words except stopwords in the bag.
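A minimal version of that scoring over per-domain bags of context words; the standard tf-idf formula is used here, and the paper's exact TF/ITF variant may differ.

    import math
    from collections import Counter

    def tf_itf(bags, stopwords=frozenset()):
        # bags: dict domain -> list of context words gathered for that domain
        df = Counter()  # number of bags each word occurs in
        for words in bags.values():
            for w in set(words):
                df[w] += 1
        scores = {}
        for domain, words in bags.items():
            tf = Counter(w for w in words if w not in stopwords)
            scores[domain] = {w: c * math.log(len(bags) / df[w])
                              for w, c in tf.items()}
        return scores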
They showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the inter-human agreement.
0
Turning now to (1), we have the similar problem that splitting ma3lu4 into ma3 'horse' and lu4 'way' is more costly than retaining it as one word, ma3lu4 'road.'
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
Note that it is in precision that our overall performance would appear to be poorer than the reported performance of Chang et al., yet based on their published examples, our system appears to be doing better precision-wise.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
See Figure 3 for a screenshot of the evaluation tool.
The authors show that PATB is similar to other treebanks but that annotation consistency remains low.
0
This assumption, however, is not inherent to type-based tagging models.
0
The terms on the right-hand-side denote the type-level and token-level probability terms respectively.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model; they proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
2.1 Reliable Case Resolutions.
This topic has been getting more attention, driven by the needs of various NLP applications.
0
For example, the two NEs “Eastern Group Plc” and “Hanson Plc” have the following contexts.
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
Evalb is a Java re-implementation of the standard labeled precision/recall metric. The ATB gives all punctuation a single tag.
The corpus was annotated with different kinds of linguistic information.
0
Not all the layers have been produced for all the texts yet.
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
Table 4 shows the results.
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
0
Every parse π selects a specific morphological segmentation (l1...lk) (a path through the lattice).
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
In particular, it may not be possible to learn functions f1, f2 with f1(x1,i) = f2(x2,i) for i = m + 1...n: either because there is some noise in the data, or because it is just not realistic to expect to learn perfect classifiers given the features used for representation.
A beam search concept is applied as in speech recognition.
0
In the last example, the less restrictive IbmS word reordering leads to a better translation, although the QmS translation is still acceptable.
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
0
There is a ‘core corpus’ of ten commentaries, for which the range of information (except for syntax) has been completed; the remaining data has been annotated to different degrees, as explained below.
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
0
We are therefore applying a different method, which has been used at the 2005 DARPA/NIST evaluation.
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
0
For the translation model Pr(f_1^J | e_1^I), we go on the assumption that each source word is aligned to exactly one target word.
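Under this one-word alignment assumption the model factorizes over source positions; a standard HMM-style form, consistent with the assumption though not necessarily this paper's exact parameterization, is

    Pr(f_1^J \mid e_1^I) = \sum_{a_1^J} \prod_{j=1}^{J} p(f_j \mid e_{a_j}) \, p(a_j \mid a_{j-1}, I)

where a_j denotes the target position to which source position j is aligned.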
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
0
There are clustering approaches that assign a single POS tag to each word type.
0
Model components cascade, so the row corresponding to +FEATS also includes the PRIOR component (see Section 3).
This paper presents a maximum entropy-based named entity recognizer (NER).
0
During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).
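One common repair, sketched below under illustrative class names and scores, is to search for the highest-scoring class sequence subject to a table of legal transitions, so that, e.g., person_begin followed by location_unique is ruled out.

    # Pick the best class sequence that only uses admissible transitions.
    def best_admissible_sequence(scores, legal):
        # scores: one dict {class: log score} per token
        # legal: set of admissible (previous class, next class) pairs
        best = {c: (s, [c]) for c, s in scores[0].items()}
        for token_scores in scores[1:]:
            new_best = {}
            for c, s in token_scores.items():
                cands = [(ps + s, path + [c])
                         for p, (ps, path) in best.items() if (p, c) in legal]
                if cands:
                    new_best[c] = max(cands)
            best = new_best
        return max(best.values())[1]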
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers.
0
The first row represents the average accuracy of the three parsers we combine.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
This paper describes the several performance techniques used and presents benchmarks against alternative implementations.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
More judgements would have enabled us to make better distinctions, but it is not clear what the upper limit is.
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Ma¨rkische Allgemeine Zeitung, a German regional daily.
1
At present, the ‘Potsdam Commentary Corpus’ (henceforth ‘PCC’ for short) consists of 170 commentaries from Märkische Allgemeine Zeitung, a German regional daily.
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
0
The type of alignment we have considered so far requires the same length for source and target sentence, i.e. I = J. Evidently, this is an unrealistic assumption, therefore we extend the concept of inverted alignments as follows: When adding a new position to the coverage set C, we might generate either δ = 0 or δ = 1 new target words.
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
0
Table 4 shows translation results for the three approaches.
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
0
We currently simulate this by crafting a WCFG and feeding it to BitPar.
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
0
Links can be of two different kinds: anaphoric or bridging (definite noun phrases picking up an antecedent via world-knowledge).
The approach has been successfully tested on the 8 000-word Verbmobil task.
0
Figure 1: Reordering for the German verb group (example: 'In this case my colleague can not visit you on the fourth of May' and its German translation).
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
Among these are words derived by various productive processes, including: 1.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the inter-human agreement.
0
Many hanzi have more than one pronunciation, where the correct pronunciation depends upon word affiliation.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model; they proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
(c) After they blindfolded the men...
The authors show that PATB is similar to other treebanks but that annotation consistency remains low.
0
(c) Coordination ambiguity is shown in dependency scores by e.g., ∗(S S S R) and ∗(NP NP NP R).
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
The implication of this ambiguity for a parser is that the yield of syntactic trees no longer consists of space-delimited tokens, and the expected number of leaves in the syntactic analysis is not known in advance.
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
0
The method halves the error rate in comparison to a method using the labeled examples alone.
The AdaBoost algorithm was developed for supervised learning.
0
At each iteration the algorithm increases the number of rules, while maintaining a high level of agreement between the spelling and contextual decision lists.
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
It also uses less memory, with 8 bytes of overhead per entry (we store 16-byte entries with m = 1.5); linked-list implementations (hash_set and unordered_set) require at least 8 bytes per entry for pointers.
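The 8-byte figure follows from the sizing arithmetic: with 16-byte entries and multiplier m = 1.5, the table allocates 1.5 × 16 = 24 bytes per entry, i.e. 16 bytes of payload plus 8 bytes of overhead.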
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
A procedural definition restricts the possible reorderings. In the approach described in (Berger et al., 1996), a morphological analysis is carried out and word morphemes rather than full-form words are used during the search.
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
Applications such as machine translation use language model probability as a feature to assist in choosing between hypotheses.
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
We realize the importance of paraphrase; however, the major obstacle is the construction of paraphrase knowledge.
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
A step of an ATM consists of reading a symbol from each tape and optionally moving each head to the left or right one tape cell.