source_text (string, length 27–368) | label (int64, 0–1) | target_text (string, length 1–5.38k)
---|---|---|
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
| 0 |
Nonetheless there is no alternative to Reiche's plan.
|
The model incorporates various recent techniques for representing and manipulating linguistic knowledge using finite-state transducers.
| 0 |
For novel texts, no lexicon that consists simply of a list of word entries will ever be entirely satisfactory, since the list will inevitably omit many constructions that should be considered words.
|
This paper presents methods to query N-gram language models, minimizing time and space costs.
| 0 |
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
|
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
| 0 |
Others depend upon various lexical heuristics: for example Chen and Liu (1992) attempt to balance the length of words in a three-word window, favoring segmentations that give approximately equal length for each word.
|
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
| 0 |
However, there is a crucial difference: the morphological probabilities in their model come from discriminative models based on linear context.
|
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
| 0 |
Each trie node is individually allocated and full 64-bit pointers are used to find them, wasting memory.
|
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
| 0 |
Again, we can compute average scores for all systems for the different language pairs (Figure 6).
|
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
| 0 |
The domains are the linguistic spans that are to receive an IS-partitioning, and the units are the (smaller) spans that can play a role as a constituent of such a partitioning.
|
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
| 0 |
The focus is in particular on the correlation with rhetorical structure, i.e., the question whether specific rhetorical relations – or groups of relations in particular configurations – are signalled by speakers with prosodic means.
|
This paper presents methods to query N-gram language models, minimizing time and space costs.
| 0 |
RandLM and SRILM also remove context that will not extend, but SRILM performs a second lookup in its trie whereas our approach has minimal additional cost.
|
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
| 0 |
The evidence may not say anything about whether A is more likely than B, only that C and D are not likely.
|
In this paper, the authors are of the opinion that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
| 0 |
Once the lexicon has been drawn, the model proceeds similarly to the standard token-level HMM: emission parameters θ are generated conditioned on tag assignments T. We also draw transition parameters φ.
|
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
| 0 |
4 53.7 43.
|
Explanations offered for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order on syntactic disambiguation.
| 0 |
After adding a ROOT node to all trees, we train a grammar using six split-and- merge cycles and no Markovization.
|
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and using information extraction patterns to identify contextual clues for determining compatibility between NPs.
| 0 |
Second, we identified the 100 most frequent nouns in the training corpus and manually labeled them with semantic tags.
|
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and using information extraction patterns to identify contextual clues for determining compatibility between NPs.
| 0 |
The problem of coreference resolution has received considerable attention, including theoretical discourse models (e.g., (Grosz et al., 1995; Grosz and Sidner, 1998)), syntactic algorithms (e.g., (Hobbs, 1978; Lappin and Le- ass, 1994)), and supervised machine learning systems (Aone and Bennett, 1995; McCarthy and Lehnert, 1995; Ng and Cardie, 2002; Soon et al., 2001).
|
The resulting model is compact, efficiently learnable and linguistically expressive.
| 0 |
The observed performance gains, coupled with the simplicity of model implementation, makes it a compelling alternative to existing more complex counterparts.
|
Combining multiple highly-accurate independent parsers yields promising results.
| 0 |
The Bayes models were able to achieve significantly higher precision than their non-parametric counterparts.
|
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
| 0 |
The current work treats both segmental and super-segmental phenomena, yet we note that there may be more adequate ways to treat supersegmental phenomena assuming Word-Based morphology as we explore in (Tsarfaty and Goldberg, 2008).
|
Combining multiple highly-accurate independent parsers yields promising results.
| 0 |
From this we see that a finer-grained model for parser combination, at least for the features we have examined, will not give us any additional power.
|
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
| 0 |
This is not unreasonable given the application to phrase pairs from OUT, but it suggests that an interesting alternative might be to use a plain log-linear weighting function exp(Σi λi fi(s, t)), with outputs in [0, ∞).
|
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
| 0 |
In many cases, inspection of either the spelling or context alone is sufficient to classify an example.
|
They showed that it was useful to abstract away from the details of the formalism and examine the nature of the derivation process as reflected by properties of the trees, finding that several of the formalisms considered can be seen as closely related, since they have derivation tree sets with the same structure as those produced by context-free grammars.
| 0 |
For rules p : A → fp such that fp is a constant function, giving an elementary structure, fp is defined such that fp() = (s1 ... si) where each si is a constant string.
|
These clusters are computed using an SVD variant without relying on transitional structure.
| 0 |
See Table 2 for the tag set size of other languages.
|
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and using information extraction patterns to identify contextual clues for determining compatibility between NPs.
| 0 |
Tables 4 and 5 show BABAR's performance when just one contextual role knowledge source is used at a time.
|
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
| 0 |
Thus it is possible, for illustration, to look for a noun phrase (syntax tier) marked as topic (information structure tier) that is in a bridging relation (co-reference tier) to some other noun phrase.
|
The PROBING data structure uses linear probing hash tables and is designed for speed.
| 0 |
This technique was introduced by Clarkson and Rosenfeld (1997) and is also implemented by IRSTLM and BerkeleyLM’s compressed option.
|
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
| 0 |
Fortunately, there are only a few hundred hanzi that are particularly common in transliterations; indeed, the commonest ones, such as E. bal, m er3, and iij al are often clear indicators that a sequence of hanzi containing them is foreign: even a name like !:i*m xia4mi3-er3 'Shamir,' which is a legal Chi nese personal name, retains a foreign flavor because of liM.
|
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
| 0 |
and f,.
|
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
| 0 |
Every token is independent of the others, and the sentence lattice is in fact a concatenation of smaller lattices, one for each token.
|
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
| 0 |
The NP is a complement to a preposition, which is the head of a PP.
|
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
| 0 |
We include a list of per-category results for selected phrasal labels, POS tags, and dependencies in Table 8.
|
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
| 0 |
Statistical methods seem particularly applicable to the problem of unknown-word identification, especially for constructions like names, where the linguistic constraints are minimal, and where one therefore wants to know not only that a particular sequence of hanzi might be a name, but that it is likely to be a name with some probability.
|
They plan on extending instance-weighting to other standard SMT components and capturing the degree of generality of phrase pairs.
| 0 |
Previous approaches have tried to find examples that are similar to the target domain.
|
This paper discusses unsupervised models for named entity classification.
| 0 |
Assume that the two classifiers are "rote learners": that is, f1 and f2 are defined through look-up tables that list a label for each member of X1 or X2.
|
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
| 0 |
Moses keeps language models and many other resources in static variables, so these are still resident in memory.
|
In this paper, Das and Petrov approached the task of inducing unsupervised part-of-speech taggers for languages that have no labeled training data but do have translated text in a resource-rich language.
| 0 |
Finally, we thank Kuzman Ganchev and the three anonymous reviewers for helpful suggestions and comments on earlier drafts of this paper.
|
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
| 0 |
Every parse π selects a specific morphological segmentation (l1...lk) (a path through the lattice).
|
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
| 0 |
The types of patterns produced by AutoSlog are outlined in (Riloff, 1996).
|
This corpus has several advantages: it is annotated at different levels.
| 0 |
For one thing, it is not clear who is to receive settlements or what should happen in case not enough teachers accept the offer of early retirement.
|
This paper presents methods to query N-gram language models, minimizing time and space costs.
| 0 |
This would result in better rest cost estimation and better pruning. In general, tighter, but well factored, integration between the decoder and language model should produce a significant speed improvement.
|
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
| 0 |
Like HG's, TAG's, and MCTAG's, members of LCFRS can manipulate structures more complex than terminal strings and use composition operations that are more complex than concatenation.
|
The authors show that PATB is similar to other treebanks but that annotation consistency remains low.
| 0 |
But foreign learners are often surprised by the verbless predications that are frequently used in Arabic.
|
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
| 0 |
The out-of-domain test set differs from the Europarl data in various ways.
|
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
| 0 |
We have checked if there are similar verbs in other major domains, but this was the only one.
|
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
| 0 |
PoS tags impose a unique morphological segmentation on surface tokens and present a unique valid yield for syntactic trees.
|
The corpus was annotated with different linguistic information.
| 0 |
The significant drop in number of pupils will begin in the fall of 2003.
|
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
| 0 |
The BerkeleyLM direct-mapped cache is in principle faster than caches implemented by RandLM and by IRSTLM, so we may write a C++ equivalent implementation as future work.
|
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
| 0 |
HR0011-06-C-0022.
|
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
| 0 |
Applications such as machine translation use language model probability as a feature to assist in choosing between hypotheses.
|
It is annotated with several kinds of information: morphology, syntax, rhetorical structure, connectives, coreference and information structure.
| 0 |
The wounds are still healing.), entity-attribute (e.g., She 2001), who determined that in their corpus of German computer texts, 38% of relations were lexically signalled.
|
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
| 0 |
This style of naming is never required and seems to be losing currency.
|
The model incorporates various recent techniques for representing and manipulating linguistic knowledge using finite-state transducers.
| 0 |
In any event, to date, we have not compared different methods for deriving the set of initial frequency estimates.
|
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.
| 0 |
6) are noisy, the results confirm that label propagation within the foreign language part of the graph adds significant quality for every language.
|
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
| 0 |
3 54.4 33.
|
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
| 0 |
Judges were excluded from assessing the quality of MT systems that were submitted by their institution.
|
Their results show that their high-performance NER uses less training data than other systems.
| 0 |
If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.
|
Human judges also pointed out difficulties with the evaluation of long sentences.
| 0 |
While we used the standard metrics of the community, the way we presented translations and prompted for assessment differed from other evaluation campaigns.
|
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
| 0 |
Given n-gram counts {c_n} for n = 1…N, we use ⌈log2 c_1⌉ bits per vocabulary identifier and ⌈log2 c_n⌉ per index into the table of n-grams.
|
The PROBING data structure uses linear probing hash tables and is designed for speed.
| 0 |
The PROBING model was designed to improve upon SRILM by using linear probing hash tables (though not arranged in a trie), allocating memory all at once (eliminating the need for full pointers), and being easy to compile.
|
Combining multiple highly-accurate independent parsers yields promising results.
| 0 |
In both cases the investigators were able to achieve significant improvements over the previous best tagging results.
|
There are clustering approaches that assign a single POS tag to each word type.
| 0 |
Our work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
|
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not.
| 0 |
It is difficult when IN and OUT are dissimilar, as they are in the cases we study.
|
Two general approaches are presented and two combination techniques are described for each approach.
| 0 |
In this case we are interested in finding the maximum probability parse, π, and Mi is the set of relevant (binary) parsing decisions made by parser i. π is a parse selected from among the outputs of the individual parsers.
|
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
| 0 |
Since the parameter and token components will remain fixed throughout experiments, we briefly describe each.
|
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
| 0 |
There are two weaknesses in Chang et al.'s model, which we improve upon.
|
This paper presents methods to query N-gram language models, minimizing time and space costs.
| 0 |
Therefore, a populated probing hash table consists of an array of buckets that contain either one entry or are empty.
|
The second algorithm builds on a boosting algorithm called AdaBoost.
| 0 |
The weak learner for two-class problems computes a weak hypothesis h from the input space into the reals (h : X → R), where the sign of h(x) is interpreted as the predicted label and the magnitude |h(x)| is the confidence in the prediction: large values of |h(x)| indicate high confidence in the prediction, and values close to zero indicate low confidence.
|
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and using information extraction patterns to identify contextual clues for determining compatibility between NPs.
| 0 |
For example, if X and Y are coreferent, then both X and Y are considered to co-occur with the caseframe that extracts X as well as the caseframe that extracts Y. We will refer to the set of nouns that co-occur with a caseframe as the lexical expectations of the case- frame.
|
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
| 0 |
Unlabeled examples in the named-entity classification problem can reduce the need for supervision to a handful of seed rules.
|
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
| 0 |
However, some caveats are in order in comparing this method (or any method) with other approaches to segmentation reported in the literature.
|
There is no global pruning.
| 0 |
4.
|
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
| 0 |
2.
|
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
| 0 |
Tokens were converted to vocabulary identifiers in advance and state was carried from each query to the next.
|
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
| 0 |
Segments with the same surface form but different PoS tags are treated as different lexemes, and are represented as separate arcs (e.g. the two arcs labeled neim from node 6 to 7).
|
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
| 0 |
During the search process, a partial hypothesis is extended by choosing a source sentence position, which has not been aligned with a target sentence position yet.
|
It is probably the first analysis of Arabic parsing of this kind.
| 0 |
At the same time, the n-gram error rate is sensitive to samples with extreme n-gram counts.
|
The PROBING data structure uses linear probing hash tables and is designed for speed.
| 0 |
We have described two data structures for language modeling that achieve substantial reductions in time and memory cost.
|
In this paper, the authors are of the opinion that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
| 0 |
In contrast to results reported in Johnson (2007), we found that the performance of our Gibbs sampler on the basic 1TW model stabilized very quickly, after about 10 full iterations: all of the probabilities on the right-hand side of P(Ti | T−i, β) ∝ ∏_{(f,v)∈Wi} P(v | Ti, f, W−i, T−i, β) are Dirichlet distributions, which can be computed analytically given counts.
|
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech.
| 0 |
When the parser is trained on the transformed data, it will ideally learn not only to construct projective dependency structures but also to assign arc labels that encode information about lifts.
|
This assumption, however, is not inherent to type-based tagging models.
| 0 |
Another thread of relevant research has explored the use of features in unsupervised POS induction (Smith and Eisner, 2005; Berg-Kirkpatrick et al., 2010; Hasan and Ng, 2009).
|
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
| 0 |
where the husband's family name is optionally prepended to the woman's full name; thus ;f:*lf#i xu3lin2-yan2hai3 would represent the name that Ms. Lin Yanhai would take if she married someone named Xu.
|
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
| 0 |
As with personal names, we also derive an estimate from text of the probability of finding a transliterated name of any kind (PTN).
|
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
| 0 |
vierten 12.
|
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not.
| 0 |
This is not unreasonable given the application to phrase pairs from OUT, but it suggests that an interesting alternative might be to use a plain log-linear weighting function exp(Σi λi fi(s, t)), with outputs in [0, ∞).
|
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Ma¨rkische Allgemeine Zeitung, a German regional daily.
| 0 |
Finally, the focus/background partition is annotated, together with the focus question that elicits the corresponding answer.
|
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
| 0 |
The precision and recall measures (described in more detail in Section 3) used in evaluating Treebank parsing treat each constituent as a separate entity, a minimal unit of correctness.
|
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
| 0 |
The Dempster-Shafer rule for combining pdfs is: to {C}, meaning that it is 70% sure the correct hypothesis is C. The intersection of these sets is the null set because these beliefs are contradictory.
|
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
| 0 |
The returned state s(w_1^n) may then be used in a follow-on query p(w_{n+1} | s(w_1^n)) that extends the previous query by one word.
|
The manual evaluation of scoring translations on a graded scale from 1 to 5 seems to be very hard to perform.
| 0 |
In addition to the Europarl test set, we also collected 29 editorials from the Project Syndicate website, which are published in all four languages of the shared task.
|
Explanations offered for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order on syntactic disambiguation.
| 0 |
A simple lexicalized PCFG with second-order Markovization gives relatively poor performance: 75.95% F1 on the test set. But this figure is surprisingly competitive with a recent state-of-the-art baseline (Table 7).
|
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
| 0 |
For example Chen and Liu (1992) report precision and recall rates of over 99%, but this counts only the words that occur in the test corpus that also occur in their dictionary.
|
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
| 0 |
On each language we investigate the contribution of each component of our model.
|
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table.
| 0 |
The number of top-ranked pairs to retain is chosen to optimize dev-set BLEU score.
|
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages.
| 0 |
For graph propagation, the hyperparameter v was set to 2 x 10−6 and was not tuned.
|
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
| 0 |
At first glance, we quickly recognize that many systems are scored very similarly, both in terms of manual judgement and BLEU.
|
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
| 0 |
This leads to word- and constituent-boundaries discrepancy, which breaks the assumptions underlying current state-of-the-art statistical parsers.
|
Most IE researchers have been creating paraphrase knowledge by hand and for specific tasks.
| 0 |
One such approach uses comparable documents, which are sets of documents whose content is found/known to be almost the same, such as different newspaper stories about the same event [Shinyama and Sekine 03] or different translations of the same story [Barzilay 01].
|
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
| 0 |
The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).
|
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective tree banks is often neglected because of the relative scarcity of problematic constructions.
| 0 |
The entire treebank is used in the experiment, but only primary dependencies are considered. In all experiments, punctuation tokens are included in the data but omitted in evaluation scores.
|