Dataset schema: source_text (string, lengths 27–368), label (int64, values 0–1), target_text (string, lengths 1–5.38k).
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
0
If an unlabeled vertex does not have a path to any labeled vertex, this term ensures that the converged marginal for this vertex will be uniform over all tags, allowing the middle word of such an unlabeled vertex to take on any of the possible tags.
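To make the effect of that uniform term concrete, here is a small illustrative sketch of label propagation with a uniform regularizer (our own toy construction, not the authors' code; the smoothing weight `nu` and the graph are hypothetical):

```python
import numpy as np

def propagate(W, seed, num_tags, nu=1e-2, iters=100):
    """Iterative label propagation with a uniform regularizer.

    W: (V, V) symmetric non-negative edge-weight matrix.
    seed: dict {vertex: tag} of labeled vertices (kept clamped).
    nu: weight of the uniform-distribution term; it ensures that a vertex
        with no path to any labeled vertex converges to a uniform marginal.
    """
    V = W.shape[0]
    U = np.full(num_tags, 1.0 / num_tags)  # uniform distribution over tags
    q = np.tile(U, (V, 1))                 # start every marginal at uniform
    for _ in range(iters):
        for v in range(V):
            if v in seed:
                q[v] = np.eye(num_tags)[seed[v]]  # clamp labeled vertices
                continue
            num = W[v] @ q + nu * U        # neighbor average + uniform term
            q[v] = num / num.sum()
    return q

# Toy graph: vertices 0 and 1 are connected; vertex 2 is isolated.
W = np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
marginals = propagate(W, seed={0: 1}, num_tags=3)
print(marginals[2])  # isolated vertex: uniform over all 3 tags
```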
The corpus was annotated with different kinds of linguistic information.
0
In order to ground such approaches in linguistic observation and description, a multi-level annotation … For an exposition of the idea as applied to the task of text planning, see (Chiarcos, Stede 2004).
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures.
0
Formally, we define dependency graphs as follows:
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRSs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
Analogous to the sharing of stacks in ICs, Lexical-Functional Grammars (LFGs) use the unification of unbounded hierarchical structures.
This paper presents a maximum entropy-based named entity recognizer (NER).
0
Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions. (MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu. Training data for IdentiFinder is actually given in words, i.e., 650K & 790K words, rather than tokens. Table 6: Comparison of results for MUC7.)
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
We do not experiment with models larger than physical memory in this paper because TPT is unreleased, factors such as disk speed are hard to replicate, and in such situations we recommend switching to a more compact representation, such as RandLM.
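As a rough illustration of the linear-probing scheme named above, a toy hash table in Python (our own simplification; the actual PROBING structure is a packed C++ table keyed on 64-bit n-gram hashes):

```python
class ProbingTable:
    """Toy linear-probing hash table: on collision, scan forward
    (wrapping around) until an empty bucket or the key is found.
    Capacity must exceed the number of inserted entries."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.keys = [None] * capacity
        self.values = [None] * capacity

    def insert(self, key, value):
        i = hash(key) % self.capacity
        while self.keys[i] is not None and self.keys[i] != key:
            i = (i + 1) % self.capacity    # probe the next bucket
        self.keys[i], self.values[i] = key, value

    def lookup(self, key):
        i = hash(key) % self.capacity
        while self.keys[i] is not None:
            if self.keys[i] == key:
                return self.values[i]
            i = (i + 1) % self.capacity
        return None                        # empty bucket reached: absent

table = ProbingTable(capacity=8)
table.insert(("is", "one"), -1.2)          # e.g. bigram -> log probability
print(table.lookup(("is", "one")))         # -1.2
print(table.lookup(("is", "two")))         # None
```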
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
But gold segmentation is not available in application settings, so a segmenter and parser are arranged in a pipeline.
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
This number must be less than or equal to n − 1.
Because many systems performed similarly, they were not able to draw strong conclusions on the question of correlation between manual and automatic evaluation metrics.
0
The normalized judgement per sentence is the raw judgement plus (0 minus average raw judgement for this judge on this sentence).
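A small sketch of that normalization (our own illustration; the data layout and names are hypothetical):

```python
from collections import defaultdict

def normalize(judgements):
    """judgements: list of (judge, sentence_id, raw_score).
    Normalized score = raw + (0 - average raw score this judge gave
    this sentence), i.e. each judge/sentence cell is centered at 0."""
    cells = defaultdict(list)
    for judge, sent, raw in judgements:
        cells[(judge, sent)].append(raw)
    avg = {k: sum(v) / len(v) for k, v in cells.items()}
    return [(judge, sent, raw - avg[(judge, sent)])
            for judge, sent, raw in judgements]

data = [("A", 1, 4), ("A", 1, 2), ("B", 1, 5)]
print(normalize(data))  # judge A's two scores become +1.0 and -1.0
```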
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
Only tokens with initCaps not found in commonWords are tested against each list in Table 2.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
During training, we treat as observed the language word types W as well as the token-level corpus w. We utilize Gibbs sampling to approximate our collapsed model posterior: P(T, t|W, w, α, β) ∝ P(T, t, W, w|α, β) = ∫ P(T, t, W, w, ψ, θ, φ|α, β) dψ dθ dφ. [Figure: curves for English, Danish, Dutch, German, Portuguese, Spanish, and Swedish.] Note that given tag assignments T, there is only one setting of token-level tags t which has mass in the above posterior.
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
0
As discussed in more detail in §3, we use two types of vertices in our graph: on the foreign language side vertices correspond to trigram types, while the vertices on the English side are individual word types.
In this paper, the authors are of the opinion that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
0
Evaluation Metrics We report three metrics to evaluate tagging performance.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
This is demonstrated by average scores over all systems, in terms of BLEU, fluency and adequacy, as displayed in Figure 5.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
Further, we report current resident memory and peak virtual memory because these are the most applicable statistics provided by the kernel.
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
Because many systems performed similarly, the author was not able to draw strong conclusions on the question of correlation between manual and automatic evaluation metrics.
0
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform.
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
For each pair we also record the context, i.e., the phrase between the two NEs (Step 1).
They plan on extending instance-weighting to other standard SMT components and capture the degree of generality of phrase pairs.
0
We focus here instead on adapting the two most important features: the language model (LM), which estimates the probability p(w|h) of a target word w following an n-gram h; and the translation models (TM) p(s|t) and p(t|s), which give the probability of source phrase s translating to target phrase t, and vice versa.
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers.
0
This is true of the widely used link grammar parser for English (Sleator and Temperley, 1993), which uses a dependency grammar of sorts, the probabilistic dependency parser of Eisner (1996), and more recently proposed deterministic dependency parsers (Yamada and Matsumoto, 2003; Nivre et al., 2004).
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
0
The development set contained 44088 constituents in 2416 sentences and the test set contained 30691 constituents in 1699 sentences.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
Analysis of the data revealed that the contextual role knowledge is especially helpful for resolving pronouns because, in general, they are semantically weaker than definite NPs.
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
They contain about 200M words (25M, 110M, 40M and 19M words, respectively).
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
Figure 1 shows an example.
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
We use two common techniques, hash tables and sorted arrays, describing each before the model that uses the technique.
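For the sorted-array side, lookup is a binary search over the key array; a generic sketch (not the paper's code; keys here stand in for hashed n-gram identifiers):

```python
import bisect

def sorted_lookup(keys, values, key):
    """Binary search in a sorted key array: O(log n) probes and no
    empty buckets, trading speed for memory versus a hash table."""
    i = bisect.bisect_left(keys, key)
    if i < len(keys) and keys[i] == key:
        return values[i]
    return None

keys = [3, 17, 42, 99]              # sorted n-gram identifiers (made up)
values = [-0.5, -1.2, -0.3, -2.0]   # corresponding log probabilities
print(sorted_lookup(keys, values, 42))  # -0.3
print(sorted_lookup(keys, values, 7))   # None
```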
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
This line of work has been motivated by empirical findings that the standard EM-learned unsupervised HMM does not exhibit sufficient word tag sparsity.
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
Kollege ("colleague").
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
When using the segmentation pruning (using HSPELL) for unseen tokens, performance improves for all tasks as well.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
[Figure: segmentation lattice for a Chinese sentence glossed "JAPANESE OCTOPUS HOW SAY," showing alternative segmentations with path costs 6.51 and 9.51.]
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
This larger corpus was kindly provided to us by United Informatics Inc., R.O.C. From it we derived a set of initial estimates of the word frequencies. In this re-estimation procedure only the entries in the base dictionary were used: in other words, derived words not in the base dictionary and personal and foreign names were not used.
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
The confidence level is then used as the belief value for the knowledge source.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
They focused on phrases which connect two Named Entities, and proceeded in two stages.
0
For example, the phrase “'s New York-based trust unit,” is not a paraphrase of the other phrases in the “unit” set.
They showed that it was useful to abstract away from the details of the formalism and examine the nature of the derivation process as reflected by properties of the trees, finding that several of the formalisms considered can be seen as closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
If γ is an initial tree we do not include an address on the left-hand side.
It is probably the first analysis of Arabic parsing of this kind.
0
[Figure: bracketing variants (a) "summit Sharm Al-Sheikh" and (b) "DTNNP Al-Sheikh."] If an n-gram occurs in a corpus position without a bracketing label, then we also add (∗n, NIL) to M. We call the set of unique n-grams with multiple labels in M the variation nuclei of C. Bracketing variation can result from either annotation errors or linguistic ambiguity.
This paper talks about Unsupervised Models for Named Entity Classification.
0
The key point is that the second constraint can be remarkably powerful in reducing the complexity of the learning problem.
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform.
0
For instance, for out-of-domain English-French, Systran has the best BLEU and manual scores.
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
So the success of the algorithm may well be due to its success in maximizing the number of unlabeled examples on which the two decision lists agree.
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
The cost of storing these averages, in bits, is …. Because there are comparatively few unigrams, we elected to store them byte-aligned and unquantized, making every query faster.
These clusters are computed using an SVD variant without relying on transitional structure.
0
As is standard, we use a fixed constant K for the number of tagging states.
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Ma¨rkische Allgemeine Zeitung, a German regional daily.
0
In (Reitter, Stede 2003) we went a different way and suggested URML, an XML format for underspecifying rhetorical structure: a number of relations can be assigned instead of a single one, and competing analyses can be represented with shared forests.
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
Segmental morphology: Hebrew has seven particles, m ("from"), f ("when"/"who"/"that"), h ("the"), w ("and"), k ("like"), l ("to"), and b ("in"), which may never appear in isolation and must always attach as prefixes to the following open-class item, which we refer to as the stem.
There are clustering approaches that assign a single POS tag to each word type.
0
Note that while the standard HMM has O(Kn) emission parameters, our model has O(n) effective parameters. Token Component: Once HMM parameters (φ, θ) have been drawn, the HMM generates a token-level corpus w in the standard way: P(w, t|φ, θ) = Π_{(w,t)} Π_j P(t_j|φ_{t_{j−1}}) P(w_j|t_j, θ_{t_j}). The full joint factors as P(T, W, θ, ψ, φ, t, w|α, β) = P(T, W, ψ|β) [Lexicon] × P(φ, θ|T, α, β) [Parameter] × P(w, t|φ, θ) [Token]. We refer to the components on the right-hand side as the lexicon, parameter, and token component respectively.
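To make the token component concrete, a toy generation sketch: given drawn transition parameters φ and emission parameters θ, tags and words unroll as in an ordinary HMM (dimensions and random seed are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
K, V, n = 2, 3, 10                          # tags, vocabulary, corpus length
phi = rng.dirichlet(np.ones(K), size=K)     # phi[k]: transition dist. from tag k
theta = rng.dirichlet(np.ones(V), size=K)   # theta[k]: emission dist. for tag k

tags, words = [], []
t = rng.integers(K)                          # initial tag (uniform here)
for _ in range(n):
    tags.append(t)
    words.append(rng.choice(V, p=theta[t]))  # emit w_j ~ P(w | t_j, theta)
    t = rng.choice(K, p=phi[t])              # transition t_{j+1} ~ P(t | t_j, phi)
print(tags, words)
```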
Here both parametric and non-parametric models are explored.
0
In Equations 1 through 3 we develop the model for constructing our parse using naïve Bayes classification.
Most IE researchers have been creating paraphrase knowledge by hand and for specific tasks.
0
The frequency of the Company – Company domain ranks 11th with 35,567 examples.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
The record for w_1^n stores the offset at which its extensions begin.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
Given the limited number of judgements we received, we did not try to evaluate this.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
All language model queries issued by machine translation decoders follow a left-to-right pattern, starting with either the begin of sentence token or null context for mid-sentence fragments.
The approach has been successfully tested on the 8 000-word Verbmobil task.
0
The traveling salesman problem is an optimization problem which is defined as follows: given are a set of cities S = {s1, …, sn} and for each pair of cities si, sj the cost dij > 0 for traveling from city si to city sj. We are looking for the shortest tour visiting all cities exactly once while starting and ending in city s1.
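For reference, the textbook Held-Karp dynamic program for exactly this formulation (shown only to ground the definition; it is not the decoder described in the paper):

```python
from itertools import combinations

def tsp_dp(d):
    """Held-Karp DP: d[i][j] is the cost of traveling from city i to j.
    Returns the cost of the shortest tour starting and ending at city 0."""
    n = len(d)
    # best[(S, j)]: cheapest path from city 0 through set S, ending at j
    best = {(frozenset([j]), j): d[0][j] for j in range(1, n)}
    for size in range(2, n):
        for S in combinations(range(1, n), size):
            S = frozenset(S)
            for j in S:
                best[(S, j)] = min(best[(S - {j}, k)] + d[k][j]
                                   for k in S - {j})
    full = frozenset(range(1, n))
    return min(best[(full, j)] + d[j][0] for j in range(1, n))

d = [[0, 2, 9, 10], [1, 0, 6, 4], [15, 7, 0, 8], [6, 3, 12, 0]]
print(tsp_dp(d))  # 21, via the tour 0 -> 2 -> 3 -> 1 -> 0
```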
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures.
0
This is in contrast to dependency treebanks, e.g.
Most IE researchers have been creating paraphrase knowledge by hand and for specific tasks.
0
Here, the term frequency (TF) is the frequency of a word in the bag and the inverse term frequency (ITF) is the inverse of the log of the frequency in the entire corpus.
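A minimal sketch of this TF·ITF weighting (our own illustration; the corpus counts are made up):

```python
import math
from collections import Counter

def tf_itf(bag, corpus_freq):
    """TF = frequency of the word in the bag; ITF = 1 / log(frequency
    of the word in the entire corpus), per the definition above."""
    tf = Counter(bag)
    return {w: tf[w] / math.log(corpus_freq[w]) for w in tf}

corpus_freq = {"acquire": 120, "unit": 4500, "buy": 800}  # hypothetical
bag = ["acquire", "acquire", "unit"]
print(tf_itf(bag, corpus_freq))
```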
They have made use of local and global features to deal with instances of the same token in a document.
0
This might be because our features are more comprehensive than those used by Borthwick.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
How do additional ambiguities caused by devocalization affect statistical learning?
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
Queries take the form p(w_n | w_1^{n−1}), where w_1^n is an n-gram.
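A toy backoff query illustrating the p(w_n | w_1^{n−1}) form (a generic stub with made-up entries, not KenLM's API):

```python
def query(model, context, word):
    """Return log p(word | context) with standard backoff: if the full
    n-gram is unknown, back off to a shorter context, accumulating the
    backoff penalties stored with the shorter entries."""
    backoff_sum = 0.0
    ctx = tuple(context)
    while True:
        entry = model.get(ctx + (word,))
        if entry is not None:
            return entry[0] + backoff_sum        # log prob + penalties
        if ctx in model:
            backoff_sum += model[ctx][1]         # add this context's backoff
        if not ctx:
            return float("-inf")                 # OOV under this toy model
        ctx = ctx[1:]                            # drop the leftmost word

# model maps n-gram tuple -> (log probability, log backoff weight)
model = {("the",): (-1.0, -0.5), ("cat",): (-2.0, 0.0),
         ("the", "cat"): (-0.7, 0.0)}
print(query(model, ("the",), "cat"))   # -0.7 (bigram found directly)
print(query(model, ("the",), "dog"))   # backs off; "dog" unseen -> -inf
```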
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
0
We construct a mapping from all the space-delimited tokens seen in the training sentences to their corresponding analyses.
We chose one of four labels for each example: location, person, organization, or noise, where the noise category was used for items that were outside the three categories.
0
This "default" feature type has 100% coverage (it is seen on every example) but a low, baseline precision.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
Several extensions of AdaBoost for multiclass problems have been suggested (Freund and Schapire 97; Schapire and Singer 98).
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
Some approaches depend upon some form of constraint satisfaction based on syntactic or semantic features (e.g., Yeh and Lee [1991], which uses a unification-based approach).
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
0
This is a standard adaptation problem for SMT.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
gao1xing4 'happy' => gao1gao1xing4xing4
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
Chinese word segmentation can be viewed as a stochastic transduction problem.
Here both parametric and non-parametric models are explored.
0
For each experiment we gave a nonparametric and a parametric technique for combining parsers.
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages.
0
It is worth noting that the middle words of the Italian trigrams are nouns too, which exhibits the fact that the similarity metric connects types having the same syntactic category.
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix- List.
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures.
0
The last four columns in Table 3 show the distribution of nonprojective arcs with respect to the number of lifts required.
This assumption, however, is not inherent to type-based tagging models.
0
Once HMM parameters (θ, φ) are drawn, a token-level tag and word sequence, (t, w), is generated in the standard HMM fashion: a tag sequence t is generated from φ.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
State will ultimately be used as context in a subsequent query.
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
For queries, we uniformly sampled 10 million hits and 10 million misses.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the interhuman agreement.
0
[Table: affix, pronunciation, base category, N found, N missed (recall), N correct (precision).] The second issue is that rare family names can be responsible for overgeneration, especially if these names are otherwise common as single-hanzi words.
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
BABAR uses two methods to identify anaphors that can be easily and reliably resolved with their antecedent: lexical seeding and syntactic seeding.
They focused on phrases which connect two Named Entities, and proceeded in two stages.
0
All the NE pair instances which co-occur separated by at most 4 chunks are collected along with information about their NE types and the phrase between the NEs (the ‘context’).
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
In total there are O(K^2) parameters associated with the transition parameters.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
Unigram lookup is dense so we use an array of probability and backoff values.
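A tiny sketch of that dense layout: because unigram word IDs run 0 … |vocab|−1, probability and backoff live in parallel arrays indexed directly by ID (values are made up):

```python
from array import array

vocab = {"<unk>": 0, "the": 1, "cat": 2}    # word -> dense ID
prob = array("f", [-5.0, -1.0, -2.0])       # log probabilities by ID
backoff = array("f", [0.0, -0.5, 0.0])      # log backoff weights by ID

wid = vocab.get("cat", 0)                   # unknown words map to <unk>
print(prob[wid], backoff[wid])              # O(1) lookup, no hashing
```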
We chose one of four labels for each example: location, person, organization, or noise, where the noise category was used for items that were outside the three categories.
0
|X2| = N, and N is a "medium"-sized number so that it is feasible to collect O(N) unlabeled examples.
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
0
The development of the very first Hebrew Treebank (Sima’an et al., 2001) called for the exploration of general statistical parsing methods, but the application was at first limited.
This assumption, however, is not inherent to type-based tagging models.
0
It is probably the first analysis of Arabic parsing of this kind.
0
But foreign learners are often surprised by the verbless predications that are frequently used in Arabic.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
The simplest version of the maximum matching algorithm effectively deals with ambiguity by ignoring it, since the method is guaranteed to produce only one segmentation.
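A sketch of greedy left-to-right maximum matching (toy dictionary; it simply commits to the longest dictionary word at each position, which is how the method "ignores" ambiguity):

```python
def max_match(text, dictionary, max_len=4):
    """Greedy maximum matching: take the longest dictionary match at each
    position, falling back to a single character; exactly one
    segmentation is produced, so ambiguity never surfaces."""
    words, i = [], 0
    while i < len(text):
        for j in range(min(len(text), i + max_len), i, -1):
            if text[i:j] in dictionary or j == i + 1:
                words.append(text[i:j])
                i = j
                break
    return words

dictionary = {"日文", "章魚", "怎麼", "說"}       # toy entries
print(max_match("日文章魚怎麼說", dictionary))   # ['日文', '章魚', '怎麼', '說']
```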
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
In this case, Maury Cooper is extracted.
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
For each caseframe, BABAR collects the semantic classes associated with the head nouns of NPs that were extracted by the caseframe.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
But Rehbein and van Genabith (2007) showed that Evalb should not be used as an indication of real difference—or similarity—between treebanks.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
For novel texts, no lexicon that consists simply of a list of word entries will ever be entirely satisfactory, since the list will inevitably omit many constructions that should be considered words.
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.
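A small sketch of this dictionary feature (list contents and helper names are hypothetical):

```python
common_words = {"the", "in", "said"}           # hypothetical commonWords
person_first_names = {"barry", "alice"}        # hypothetical name list

def person_first_name_feature(token):
    """Set PersonFirstName to 1 when the token is initCaps, absent from
    commonWords, and present in the person-first-name list."""
    init_caps = token[:1].isupper() and token[1:].islower()
    if (init_caps and token.lower() not in common_words
            and token.lower() in person_first_names):
        return 1
    return 0

print(person_first_name_feature("Barry"))  # 1
print(person_first_name_feature("The"))    # 0 (in commonWords)
```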
This corpus has several advantages: it is annotated at different levels.
0
When the signal is a coordinating conjunction, the second span is usually the clause following the conjunction; the first span is often the clause preceding it, but sometimes stretches further back.
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
0
Combining multiple highly-accurate independent parsers yields promising results.
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
0
Because many systems performed similarly, the author was not able to draw strong conclusions on the question of correlation between manual and automatic evaluation metrics.
0
So, who won the competition?
It is probably the first analysis of Arabic parsing of this kind.
0
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
Lazy mapping reduces memory requirements by loading pages from disk only as necessary.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
Space- or punctuation-delimited …
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
We then define the best segmentation to be the cheapest or best path in Id(I) ∘ D* (i.e., Id(I) composed with the transitive closure of D). Consider the abstract example illustrated in Figure 2.
It is probably the first analysis of Arabic parsing of this kind.
0
phrase (markContainsVerb).
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
An analysis of nouns that occur in both the singular and the plural in our database reveals that there is indeed a slight but significant positive correlation (R2 = 0.20, p < 0.005; see Figure 6).
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
0
The statistical systems seem to still lag behind the commercial rule-based competition when translating into morphologically rich languages, as demonstrated by the results for English-German and English-French.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
The other half was replaced by other participants, so we ended up with roughly the same number.
BABAR's performance in both domains of terrorism and natural disaster, and the contextual-role knowledge in pronouns have shown successful results.
0
Ex: Mr. Bush disclosed the policy by reading it...
They have made use of local and global features to deal with instances of the same token in a document.
0
To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.
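Concretely, such a 0/1 table acts as a hard mask over class sequences; a toy sketch with invented class names:

```python
# 1 if one word class may follow another, 0 otherwise; inadmissible
# sequences therefore receive probability 0 and drop out of the search.
admissible = {
    ("DIGIT", "CLASSIFIER"): 1,
    ("CLASSIFIER", "NOUN"): 1,
    ("DIGIT", "DIGIT"): 1,
}

def sequence_admissible(classes):
    return all(admissible.get(pair, 0) == 1
               for pair in zip(classes, classes[1:]))

print(sequence_admissible(["DIGIT", "CLASSIFIER", "NOUN"]))  # True
print(sequence_admissible(["CLASSIFIER", "DIGIT"]))          # False
```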
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
The hyperparameters α and β represent the concentration parameters of the token- and type-level components of the model respectively.
This assumption, however, is not inherent to type-based tagging models.
0
We refer to (T, W) as the lexicon of a language and to ψ as the parameters for their generation; ψ depends on a single hyperparameter β.