Dataset columns: source_text (string, length 27–368) · label (int64, values 0 or 1) · target_text (string, length 1–5.38k)
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
Clearly this is not the only way to estimate word-frequencies, however, and one could consider applying other methods: in particular, since the problem is similar to the problem of assigning part-of-speech tags to an untagged corpus given a lexicon and some initial estimate of the a priori probabilities for the tags, one might consider a more sophisticated approach such as that described in Kupiec (1992); one could also use methods that depend on a small hand-tagged seed corpus, as suggested by one reviewer.
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation.
0
64 94.
Manually evaluating translations on a graded scale from 1 to 5 seems to be very hard to perform.
0
Pairwise comparison: We can use the same method to assess the statistical significance of one system outperforming another.
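As an illustration of how such a pairwise test could be run, here is a minimal paired-bootstrap sketch; the resampling scheme and the per-sentence score layout are assumptions for illustration, not details from the paper.

```python
import random

def paired_bootstrap(scores_a, scores_b, samples=1000, seed=0):
    """Estimate how often system A beats system B when the paired
    per-sentence scores are resampled with replacement."""
    assert len(scores_a) == len(scores_b)
    rng = random.Random(seed)
    n = len(scores_a)
    wins = 0
    for _ in range(samples):
        idx = [rng.randrange(n) for _ in range(n)]
        if sum(scores_a[i] for i in idx) > sum(scores_b[i] for i in idx):
            wins += 1
    return wins / samples

# e.g. declare A significantly better if paired_bootstrap(a, b) > 0.95
```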
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
[Table: Affix, Pron, Base category, N found, N missed (recall), N correct (precision).] The second issue is that rare family names can be responsible for overgeneration, especially if these names are otherwise common as single-hanzi words.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
Note that in our model, conditioned on T , there is precisely one t which has nonzero probability for the token component, since for each word, exactly one θt has support.
There are clustering approaches that assign a single POS tag to each word type.
0
Another thread of relevant research has explored the use of features in unsupervised POS induction (Smith and Eisner, 2005; Berg-Kirkpatrick et al., 2010; Hasan and Ng, 2009).
The second algorithm builds on a boosting algorithm called AdaBoost.
0
971,746 sentences of New York Times text were parsed using the parser of (Collins 96). Word sequences that met the following criteria were then extracted as named entity examples: the head of the sequence is a singular noun (tagged NN).
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
0
For example, we may have semantic information (e.g. database query operations) associated with the productions in a grammar.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
64 76.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
(2009).
In this paper, the authors are of the opinion that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
0
Parameter Component: As in the standard Bayesian HMM (Goldwater and Griffiths, 2007), all distributions are independently drawn from symmetric Dirichlet distributions. (Note that t and w denote tag and word sequences respectively, rather than individual tokens or tags.)
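A rough sketch of what such independent symmetric Dirichlet draws look like in code; the dimensions and hyperparameter values are placeholders, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
K, V = 5, 1000          # number of tags, vocabulary size (illustrative)
alpha, beta = 1.0, 0.1  # symmetric Dirichlet hyperparameters (placeholders)

# Each tag's transition and emission distribution is an independent
# draw from a symmetric Dirichlet prior.
transitions = rng.dirichlet([alpha] * K, size=K)  # shape (K, K)
emissions   = rng.dirichlet([beta] * V, size=K)   # shape (K, V)
```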
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
Within the RST “user community” there has also been discussion whether two levels of discourse structure should not be systematically distinguished (intentional versus informational).
BABAR performed well in both the terrorism and natural disaster domains, and contextual-role knowledge proved especially helpful for resolving pronouns.
0
One of the strengths of the Dempster-Shafer model is its natural ability to recognize when several credible hypotheses are still in play.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
Given an anaphor and candidate, BABAR checks (1) whether the semantic classes of the anaphor intersect with the semantic expectations of the caseframe that extracts the candidate, and (2) whether the semantic classes of the candidate intersect with the semantic ex pectations of the caseframe that extracts the anaphor.
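A minimal sketch of this two-way check, assuming semantic classes and caseframe expectations are available as Python sets; all names here are hypothetical.

```python
def semantically_compatible(anaphor_classes, anaphor_cf_expectations,
                            candidate_classes, candidate_cf_expectations):
    """Both directions must overlap: (1) the anaphor's classes with the
    expectations of the caseframe that extracts the candidate, and
    (2) the candidate's classes with the expectations of the caseframe
    that extracts the anaphor."""
    return (bool(anaphor_classes & candidate_cf_expectations) and
            bool(candidate_classes & anaphor_cf_expectations))

# Example with hypothetical class labels:
# semantically_compatible({"HUMAN"}, {"HUMAN", "GROUP"},
#                         {"HUMAN"}, {"HUMAN"})  -> True
```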
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
The rewrite rules and the definition of the composition operations may be stored in the finite state control since G uses a finite number of them.
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure.
0
This is less effective in our setting, where IN and OUT are disparate.
Most IE researchers have been creating paraphrase knowledge by hand, and only for specific tasks.
0
Sometimes, multiple words are needed, like “vice chairman”, “prime minister” or “pay for” (“pay” and “pay for” are different senses in the CC-domain).
The resulting model is compact, efficiently learnable and linguistically expressive.
0
In contrast to the Bayesian HMM, θt is not drawn from a distribution which has support for each of the n word types.
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
ICOC and CSPP contributed the greatest improvements.
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.
0
Some approaches depend upon some form of constraint satisfaction based on syntactic or semantic features (e.g., Yeh and Lee [1991], which uses a unification-based approach).
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
In Table 5 we present results from small test corpora for the productive affixes handled by the current version of the system; as with names, the segmentation of morphologically derived words is generally either right or wrong.
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
Each lattice arc corresponds to a segment and its corresponding PoS tag, and a path through the lattice corresponds to a specific morphological segmentation of the utterance.
They plan on extending instance-weighting to other standard SMT components and capture the degree of generality of phrase pairs.
0
6One of our experimental settings lacks document boundaries, and we used this approximation in both settings for consistency.
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
0
The implication of this ambiguity for a parser is that the yield of syntactic trees no longer consists of space-delimited tokens, and the expected number of leaves in the syntactic analysis is not known in advance.
They showed that it was useful to abstract away from the details of the formalism and examine the nature of the derivation process as reflected by properties of the derivation trees, finding that several of the formalisms considered can be seen as closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
As with HG's, derivation structures are annotated; in the case of TAG's, by the trees used for adjunction and the addresses of the nodes of the elementary tree where adjunctions occurred.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
A non-optimal analysis is shown with dotted lines in the bottom frame.
There are clustering approaches that assign a single POS tag to each word type.
0
The terms on the right-hand-side denote the type-level and token-level probability terms respectively.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
We can only compare with Graça et al.
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation.
0
In our grammar, features are realized as annotations to basic category labels.
Here we present two algorithms.
0
The maximum likelihood estimates (i.e., parameter values which maximize 10) can not be found analytically, but the EM algorithm can be used to hill-climb to a local maximum of the likelihood function from some initial parameter settings.
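As a toy illustration of EM hill-climbing from an initial parameter setting to a local maximum, here is the classic two-coin mixture; this is not the paper's segmentation model, and equal mixing weights are assumed for brevity.

```python
def em_two_coins(trials, iters=50):
    """EM for a mixture of two biased coins: each trial of n flips comes
    from coin A or coin B (equal prior assumed). Starting from arbitrary
    biases, each iteration cannot decrease the likelihood."""
    pA, pB = 0.6, 0.4  # initial guesses
    for _ in range(iters):
        SA_h = SA_n = SB_h = SB_n = 0.0
        for heads, n in trials:
            # E-step: responsibility of coin A for this trial
            la = (pA ** heads) * ((1 - pA) ** (n - heads))
            lb = (pB ** heads) * ((1 - pB) ** (n - heads))
            wa = la / (la + lb)
            SA_h += wa * heads;       SA_n += wa * n
            SB_h += (1 - wa) * heads; SB_n += (1 - wa) * n
        # M-step: re-estimate biases from expected counts
        pA, pB = SA_h / SA_n, SB_h / SB_n
    return pA, pB

# trials = [(heads, flips), ...], e.g. [(9, 10), (8, 10), (4, 10), (5, 10)]
```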
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
As we have seen, the lexicon of basic words and stems is represented as a WFST; most arcs in this WFST represent mappings between hanzi and pronunciations, and are costless.
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation.
0
input token, the segmentation is then performed deterministically given the 1-best analysis.
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Ma¨rkische Allgemeine Zeitung, a German regional daily.
0
Hartmann (1984), for example, used the term Reliefgebung to characterize the distribution of main and minor information in texts (similar to the notion of nuclearity in RST).
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
This work has been supported as part of the Verbmobil project (contract number 01 IV 601 A) by the German Federal Ministry of Education, Science, Research and Technology and as part of the Eutrans project (ESPRIT project number 30268) by the European Community.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
Finally, we wish to reiterate an important point.
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
Manually evaluating translations on a graded scale from 1 to 5 seems to be very hard to perform.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
Unfortunately, we were unable to correctly run the IRSTLM quantized variant.
It is probably the first analysis of Arabic parsing of this kind.
0
95 76.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
For effectively annotating connectives/scopes, we found that existing annotation tools were not well-suited, for two reasons: • Some tools are dedicated to modes of annotation (e.g., tiers), which could only quite un-intuitively be used for connectives and scopes.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
This is especially effective at reducing load time, since raw bytes are read directly to memory—or, as happens with repeatedly used models, are already in the disk cache.
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Ma¨rkische Allgemeine Zeitung, a German regional daily.
0
Besides the applications just sketched, the overarching goal of developing the PCC is to build up an empirical basis for investigating phenomena of discourse structure.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
About half of the participants of last year’s shared task participated again.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
Since guess and gold trees may now have different yields, the question of evaluation is complex.
This paper presents KenLM: Faster and Smaller Language Model Queries.
0
Sparse lookup is a key subproblem of language model queries.
This paper presents Exploiting Diversity in Natural Language Processing: Combining Parsers.
0
The precision and recall measures (described in more detail in Section 3) used in evaluating Treebank parsing treat each constituent as a separate entity, a minimal unit of correctness.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
Email: gale@research.
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech.
0
Applying the function PROJECTIVIZE to the graph in Figure 1 yields the graph in Figure 2, where the problematic arc pointing to Z has been lifted from the original head jedna to the ancestor je.
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
Unsupervised Models for Named Entity Classification Collins
The second algorithm builds on a boosting algorithm called AdaBoost.
0
This modification brings the method closer to the DL-CoTrain algorithm described earlier, and is motivated by the intuition that all three labels should be kept healthily populated in the unlabeled examples, preventing one label from dominating — this deserves more theoretical investigation.
Their results show that their high-performance NER uses less training data than other systems.
0
Statistical NERs usually find the sequence of tags that maximizes the probability P(t1, ..., tn | w1, ..., wn), where w1, ..., wn is the sequence of words in a sentence, and t1, ..., tn is the sequence of named-entity tags assigned to those words. Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).
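A compact sketch of the standard Viterbi search for such a tag sequence under a first-order HMM; this is a generic illustration, not the system described, and the log-probability dictionaries are assumed inputs.

```python
def viterbi(words, tags, log_start, log_trans, log_emit):
    """Return the tag sequence maximizing the joint log-probability.
    log_start[t]: log P(t at position 0); log_trans[s][t]: log P(t | s);
    log_emit[t][w]: log P(w | t)."""
    V = [{t: log_start[t] + log_emit[t][words[0]] for t in tags}]
    back = []
    for w in words[1:]:
        scores, ptr = {}, {}
        for t in tags:
            prev = max(tags, key=lambda s: V[-1][s] + log_trans[s][t])
            scores[t] = V[-1][prev] + log_trans[prev][t] + log_emit[t][w]
            ptr[t] = prev
        V.append(scores)
        back.append(ptr)
    last = max(tags, key=lambda t: V[-1][t])
    path = [last]
    for ptr in reversed(back):      # follow backpointers to recover the path
        path.append(ptr[path[-1]])
    return list(reversed(path))
```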
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
The performance was 80.99% recall and 61.83% precision.
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Ma¨rkische Allgemeine Zeitung, a German regional daily.
0
The annotator can then “click away” those words that are here not used as connectives (such as the conjunction und (‘and’) used in lists, or many adverbials that are ambiguous between connective and discourse particle).
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
Preferences for constituent order (especially in languages with relatively free word order) often belong to this group.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
The family name set is restricted: there are a few hundred single-hanzi family names, and about ten double-hanzi ones.
Their results show that their high-performance NER uses less training data than other systems.
0
The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
[Figure residue: a segmentation lattice for an example sentence with the glosses JAPAN/JAPANESE, OCTOPUS, HOW, SAY and candidate path costs 6.51, 9.51, and 10.28; the diagram itself is not recoverable.]
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
The baseline system in Table 3 refers to the maximum entropy system that uses only local features.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
Consider the following sentences: (a) Jose Maria Martinez, Roberto Lisandy, and Dino Rossy, who were staying at a Tecun Uman hotel, were kidnapped by armed men who took them to an unknown place.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
95 76.
They have made use of local and global features to deal with the instances of the same token in a document.
0
Only tokens with initCaps not found in commonWords are tested against each list in Table 2.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
We now describe the CoBoost algorithm for the named entity problem.
BABAR performed well in both the terrorism and natural disaster domains, and contextual-role knowledge proved especially helpful for resolving pronouns.
0
The CFLex and CFNet knowledge sources provide positive evidence that a candidate NP and anaphor might be coreferent.
This paper presents KenLM: Faster and Smaller Language Model Queries.
0
These performance gains transfer to improved system runtime performance; though we focused on Moses, our code is the best lossless option with cdec and Joshua.
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
The algorithm, called CoBoost, has the advantage of being more general than the decision-list learning algorithm. Input: (x1, y1), ..., (xm, ym); xi ∈ 2^X, yi = ±1. Initialize D1(i) = 1/m.
The corpus was annotated with different linguistic information.
0
Two aspects of the corpus have been presented in previous papers ((Reitter, Stede 2003) on underspecified rhetorical structure; (Stede 2003) on the perspective of knowledge-based summarization).
This paper conducted research in the area of automatic paraphrase discovery.
0
Also, expanding on the techniques for the automatic generation of extraction patterns (Riloff 96; Sudo 03) using our method, the extraction patterns which have the same meaning can be automatically linked, enabling us to produce the final table fully automatically.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
Historically, Arabic grammar has identified two sentence types: those that begin with a nominal (…).
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
While the linear precedence of segmental morphemes within a token is subject to constraints, the dominance relations among their mother and sister constituents are rather free.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
2.
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
0
Precision is the portion of hypothesized constituents that are correct and recall is the portion of the Treebank constituents that are hypothesized.
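In code, treating each constituent as a (label, start, end) tuple, these measures could be computed as in this sketch, assuming exact-match constituent comparison.

```python
def constituent_prf(guess, gold):
    """guess, gold: sets of (label, start, end) constituent tuples.
    Precision: fraction of hypothesized constituents that are correct.
    Recall: fraction of gold constituents that are hypothesized."""
    correct = len(guess & gold)
    precision = correct / len(guess) if guess else 0.0
    recall = correct / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```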
It also incorporates the Good-Turing methodin estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
For example, in Northern Mandarin dialects there is a morpheme -r that attaches mostly to nouns, and which is phonologically incorporated into the syllable to which it attaches: thus men2+r (door+R) 'door' is realized as mer2.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
Alon Lavie advised on this work.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
We define the following function: if Z_CO is small, then it follows that the two classifiers must have a low error rate on the labeled examples, and that they also must give the same label on a large number of unlabeled instances.
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
0
This work was supported in part under the GALE program of the Defense Advanced Research Projects Agency, Contract No.
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.
0
Je voudrais préciser, à l'adresse du commissaire Liikanen, qu'il n'est pas aisé de recourir aux tribunaux nationaux. ("I would like to point out, for Commissioner Liikanen, that it is not easy to resort to the national courts.")
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation.
0
As a result, Arabic sentences are usually long relative to English, especially after [...]. Table 2 (frequency distribution for sentence lengths in the WSJ, sections 2–23, and the ATB, p1–3): length ≤ 20: English (WSJ) 41.9%, Arabic (ATB) 33.7%; ≤ 40: 92.4%, 73.2%; ≤ 63: 99.7%, 92.6%; ≤ 70: 99.9%, 94.9%.
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
In general, m, l, l′ ∉ {l1, l2, l3}, and in lines 3 and 4, l′ must be chosen so as not to violate the above reordering restriction.
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages.
0
We use label propagation in two stages to generate soft labels on all the vertices in the graph.
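A generic sketch of the propagation step itself; the weighted adjacency map and the clamped seed vertices are assumptions for illustration, not the paper's exact update rule.

```python
def propagate(graph, labels, seeds, iters=10):
    """graph: {v: {u: weight}}; labels: {v: {tag: prob}} initial soft labels;
    seeds: vertices whose distributions stay clamped across iterations."""
    for _ in range(iters):
        new = {}
        for v, nbrs in graph.items():
            if v in seeds:
                new[v] = labels[v]
                continue
            # aggregate the neighbors' soft labels, weighted by edge weight
            agg = {}
            for u, w in nbrs.items():
                for tag, p in labels.get(u, {}).items():
                    agg[tag] = agg.get(tag, 0.0) + w * p
            z = sum(agg.values())
            new[v] = ({t: p / z for t, p in agg.items()}
                      if z else labels.get(v, {}))
        labels = new
    return labels
```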
In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
For example, even if the contexts surrounding an anaphor and candidate match exactly, they are not coreferent if they have substantially different meanings. (We would be happy to make our manually annotated test data available to others who also want to evaluate their coreference resolver on the MUC4 or Reuters collections.)
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
0
For example, in Information Retrieval (IR), we have to match a user’s query to the expressions in the desired documents, while in Question Answering (QA), we have to find the answer to the user’s question even if the formulation of the answer in the document is different from the question.
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.
1
Our results outperform strong unsupervised baselines as well as approaches that rely on direct projections, and bridge the gap between purely supervised and unsupervised POS tagging models.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
(2010) reports the best unsupervised results for English.
They focused on phrases which connect two Named Entities, and proceed in two stages.
0
Keywords with more than one word In the evaluation, we explained that “chairman” and “vice chairman” are considered paraphrases.
In this paper, the authors are of the opinion that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
0
For instance, on Spanish, the absolute gap on median performance is 10%.
This paper presents Exploiting Diversity in Natural Language Processing: Combining Parsers.
0
Call the crossing constituents A and B.
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.
0
It would have therefore also been possible to use the integer programming (IP) based approach of Ravi and Knight (2009) instead of the feature-HMM for POS induction on the foreign side.
It is annotated with several kinds of data: morphology, syntax, rhetorical structure, connectives, coreference and information structure.
0
Indeed there are several open issues.
We chose one of four labels for each example: location, person, organization, or noise, where the noise category was used for items that were outside the three categories.
0
The pseudo-code describing the algorithm is given in Fig.
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.
0
More formally, we start by representing the dictionary D as a Weighted Finite State Transducer (WFST) (Pereira, Riley, and Sproat 1994).
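The WFST machinery itself is more involved, but the effect of composing an input with a weighted dictionary can be approximated by this dynamic-programming sketch, where each word carries a cost (a negative log probability) and the best segmentation minimizes total cost; this is purely illustrative.

```python
import math

def best_segmentation(text, cost):
    """cost: {word: -log prob}. Dynamic program over the lattice of all
    dictionary segmentations; returns (total cost, word list)."""
    n = len(text)
    best = [(math.inf, None)] * (n + 1)   # (cost to reach i, last-word start)
    best[0] = (0.0, None)
    for i in range(n):
        if best[i][0] == math.inf:
            continue
        for j in range(i + 1, n + 1):
            w = text[i:j]
            if w in cost and best[i][0] + cost[w] < best[j][0]:
                best[j] = (best[i][0] + cost[w], i)
    words, j = [], n                       # backtrace the cheapest path
    while j > 0 and best[j][1] is not None:
        i = best[j][1]
        words.append(text[i:j])
        j = i
    return best[n][0], list(reversed(words))

# best_segmentation("abcd", {"ab": 1.2, "cd": 0.7, "abcd": 2.5})
# -> (1.9, ["ab", "cd"])
```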
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
1
Next we show that the ATB is similar to other treebanks in gross statistical terms, but that annotation consistency remains low relative to English (§3).
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
Statistical NERs usually find the sequence of tags that maximizes the probability P(t1, ..., tn | w1, ..., wn), where w1, ..., wn is the sequence of words in a sentence, and t1, ..., tn is the sequence of named-entity tags assigned to those words. Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
Furthermore, even the size of the dictionary per se is less important than the appropriateness of the lexicon to a particular test corpus: as Fung and Wu (1994) have shown, one can obtain substantially better segmentation by tailoring the lexicon to the corpus to be segmented.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
attaching to terms denoting human beings.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using a caseframe network for anaphora resolution, and used information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
Finally, a Dempster-Shafer probabilistic model evaluates the evidence provided by the knowledge sources for all candidate antecedents and makes the final resolution decision.
This assumption, however, is not inherent to type-based tagging models.
0
Then, token-level HMM emission parameters are drawn conditioned on these assignments such that each word is only allowed probability mass on a single assigned tag.
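A small sketch of that restriction: given one tag per word type, the emission matrix has support for each word only under its assigned tag. The shapes and names are illustrative, not the authors' sampler.

```python
import numpy as np

def constrained_emissions(type_tags, theta, n_tags, vocab_size):
    """type_tags: {word_id: tag_id}, the one tag assigned to each word type.
    theta[t, w]: weight of word type w under tag t.
    Builds an emission matrix in which each word has probability mass
    under exactly one tag."""
    emit = np.zeros((n_tags, vocab_size))
    for w, t in type_tags.items():
        emit[t, w] = theta[t, w]
    totals = emit.sum(axis=1, keepdims=True)
    totals[totals == 0] = 1.0   # avoid dividing rows of tags with no words
    return emit / totals
```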
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
Nodes in the trie are based on arrays sorted by vocabulary identifier.
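A simplified sketch of such a node: children are kept in an array sorted by vocabulary id, so finding a child is a binary search rather than a hash lookup. The real KenLM structure is more compact (e.g., bit-packed); this only illustrates the sorted-array idea.

```python
import bisect

class TrieNode:
    def __init__(self):
        self.ids = []        # child word ids, kept sorted
        self.children = []   # child nodes, parallel to self.ids
        self.logprob = 0.0

    def find(self, word_id):
        """Binary search the sorted id array for a child node."""
        i = bisect.bisect_left(self.ids, word_id)
        if i < len(self.ids) and self.ids[i] == word_id:
            return self.children[i]
        return None
```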
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
0
Three state-of-the-art statistical parsers are combined to produce more accurate parses, as well as new bounds on achievable Treebank parsing accuracy.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
For example, in Northern Mandarin dialects there is a morpheme -r that attaches mostly to nouns, and which is phonologically incorporated into the syllable to which it attaches: thus men2+r (door+R) 'door' is realized as mer2.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
3.2 The Dempster-Shafer Decision Model.
The AdaBoost algorithm was developed for supervised learning.
0
Thus an explicit assumption about the redundancy of the features — that either the spelling or context alone should be sufficient to build a classifier — has been built into the algorithm.