source_text (string, lengths 27–368) | label (int64, 0–1) | target_text (string, lengths 1–5.38k) |
---|---|---|
This paper talks about KenLM: Faster and Smaller Language Model Queries.
| 0 |
Given counts c_1^n, where e.g. c1 is the vocabulary size, total memory consumption, in bits, is [equation omitted]. Our PROBING data structure places all n-grams of the same order into a single giant hash table.
|
It is probably the first analysis of Arabic parsing of this kind.
| 0 |
6 Joint Segmentation and Parsing.
|
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
| 0 |
The derivation trees of a MCTAG are similar to those of a TAG.
|
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
| 0 |
It also incorporates the Good-Turing method (Baayen 1989; Church and Gale 1991) in estimating the likelihoods of previously unseen con structions, including morphological derivatives and personal names.
|
The use of global features has shown excellent results in the performance on MUC-6 and MUC-7 test data.
| 0 |
For example, in MUC6, there are four zones (TXT, HL, DATELINE, DD).
|
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
| 0 |
(Riloff and Jones 99) was brought to our attention as we were preparing the final version of this paper.
|
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
| 0 |
This means that the rules in our grammar are of two kinds: (a) syntactic rules relating nonterminals to a sequence of non-terminals and/or PoS tags, and (b) lexical rules relating PoS tags to lattice arcs (lexemes).
|
The resulting model is compact, efficiently learnable and linguistically expressive.
| 0 |
Our experiments consistently demonstrate that this model architecture yields substantial performance gains over more complex tagging counterparts.
|
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
| 0 |
more frequently than is done in English.
|
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures.
| 0 |
However, the overall percentage of non-projective arcs is less than 2% in PDT and less than 1% in DDT.
|
The corpus was annotated with different linguistic information.
| 0 |
2.2 Syntactic structure.
|
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
| 0 |
Comparison with state-of-the-art taggers. For comparison we consider two unsupervised taggers: the HMM with log-linear features of Berg-Kirkpatrick et al.
|
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
| 0 |
In words, the judgements are normalized, so that the average normalized judgement per judge is 3.
|
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
| 0 |
However, as we have noted, nothing inherent in the approach precludes incorporating higher-order constraints, provided they can be effectively modeled within a finite-state framework.
|
Here we show how non-projective dependency parsing can be achieved by combining a data driven projective parser with special graph transformation techniques.
| 0 |
Table 2 shows the features used in the current version of the parser.
|
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
| 0 |
This revealed interesting clues about the properties of automatic and manual scoring.
|
The PROBING data structure uses linear probing hash tables and is designed for speed.
| 0 |
Unigrams also have 64-bit overhead for vocabulary lookup.
|
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
| 0 |
It did 402 queries/ms using 1.80 GB. Memory use increased during scoring due to batch processing (MIT) or caching (Rand).
|
They have made use of local and global features to deal with the instances of the same token in a document.
| 0 |
This group contains a large number of features (one for each token string present in the training data).
|
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
| 0 |
The natural baseline approach is to concatenate data from IN and OUT.
|
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
| 0 |
• We evaluated translation from English, in addition to into English.
|
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
| 0 |
This means that the PCC cannot grow particularly quickly.
|
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
| 0 |
We use a solution to this problem similar to the one presented in (Och et al., 1999), where target words are joined during training.
|
The corpus was annotated with different linguistic information.
| 0 |
It reads a file with a list of German connectives, and when a text is opened for annotation, it highlights all the words that show up in this list; these will be all the potential connectives.
|
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
| 0 |
(c) After they blindfolded the men...
|
This paper presents methods to query N-gram language models, minimizing time and space costs.
| 0 |
IRSTLM (Federico et al., 2008) is an open-source toolkit for building and querying language models.
|
This paper talks about Unsupervised Models for Named Entity Classification.
| 0 |
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
|
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
| 0 |
The ATB annotation guidelines specify that proper nouns should be specified with a flat NP (a).
|
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
| 0 |
Finally, we show that in application settings, the absence of gold segmentation lowers parsing performance by 2–5% F1.
|
The approach has been successfully tested on the 8 000-word Verbmobil task.
| 0 |
In this paper, we have presented a new, efficient DP-based search procedure for statistical machine translation.
|
The resulting model is compact, efficiently learnable and linguistically expressive.
| 0 |
t(i).
|
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
| 0 |
Restrictions: Quasi-monotone Search The above search space is still too large to allow the translation of a medium length input sentence.
|
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
| 0 |
The first four affixes are so-called resultative affixes: they denote some property of the resultant state of a verb, as in E7 wang4bu4-liao3 (forget-not-attain) 'cannot forget.'
|
The approach has been successfully tested on the 8 000-word Verbmobil task.
| 0 |
Training (German / English): Sentences 58 073; Words 519 523 / 549 921; Words* 418 979 / 453 632; Vocabulary Size 7939 / 4648; Singletons 3454 / 1699. Test-147: Sentences 147; Words 1 968 / 2 173; Perplexity – / 26.5. Table 4: Multi-reference word error rate (mWER) and subjective sentence error rate (SSER) for three different search procedures.
|
The PROBING data structure uses linear probing hash tables and is designed for speed.
| 0 |
This paper presents methods to query N-gram language models, minimizing time and space costs.
|
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
| 0 |
Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.
|
Here both parametric and non-parametric models are explored.
| 0 |
The natural language processing community is in the strong position of having many available approaches to solving some of its most fundamental problems.
|
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
| 0 |
2.1 Inverted Alignments.
|
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
| 0 |
Section 4.1 explained that state s is stored by applications with partial hypotheses to determine when they can be recombined.
|
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
| 0 |
We use the default inference parameters.
|
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results.
| 0 |
When a token fmnh is to be interpreted as the lexeme sequence f/REL mnh/VB, the analysis introduces two distinct entities, the relativizer f (“that”) and the verb mnh (“counted”), and not as the complex entity “that counted”.
|
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
| 0 |
Hyperparameter settings are sorted according to the median one-to-one metric over runs.
|
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
| 0 |
The second stage links sets which involve the same pairs of individual NEs.
|
Replacing this with a ranked evaluation seems to be more suitable.
| 0 |
The number of judgements is additionally fragmented by our breakup of sentences into in-domain and out-of-domain.
|
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not.
| 0 |
The original OUT counts c_o(s, t) are weighted by a logistic function w_λ(s, t): To motivate weighting joint OUT counts as in (6), we begin with the “ideal” objective for setting multinomial phrase probabilities θ = {p(s|t), ∀s,t}, which is the likelihood with respect to the true IN distribution p_I(s, t).
|
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
| 0 |
But Rehbein and van Genabith (2007) showed that Evalb should not be used as an indication of real difference, or similarity, between treebanks.
|
The PROBING data structure uses linear probing hash tables and is designed for speed.
| 0 |
This has the effect of randomly permuting vocabulary identifiers, meeting the requirements of interpolation search when vocabulary identifiers are used as keys.
|
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective tree banks is often neglected because of the relative scarcity of problematic constructions.
| 0 |
In order to facilitate this task, we extend the set of arc labels to encode information about lifting operations.
|
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
| 0 |
Similarly, for all the LCFRS's, discussed in Section 2, we can define the relationship between a structure and the sequence of substrings it spans, and the effect of the composition operations on sequences of substrings.
|
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
| 0 |
yu2 'fish.'
|
This corpus has several advantages: it is annotated at different levels.
| 0 |
All annotations are done with specific tools and in XML; each layer has its own DTD.
|
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement.
| 0 |
(a) ;IE shi4 'be' => ;IE;IE shi4bu2-shi4 (be-not-be) 'is it?'
|
The approach has been successfully tested on the 8 000-word Verbmobil task.
| 0 |
The computing time is given in terms of CPU time per sentence (on a 450-MHz Pentium III PC).
|
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
| 0 |
In fact, during the first rounds many of the predictions of Th., g2 are zero.
|
This paper presents a maximum entropy-based named entity recognizer (NER).
| 0 |
The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).
|
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
| 0 |
The proof is given in (Tillmann, 2000).
|
This paper presents a maximum entropy-based named entity recognizer (NER).
| 0 |
We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
|
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language.
| 0 |
As one can see, only the trigrams [suo incarceramento ,], [suo iter ,] and [suo carattere ,] are connected to English words.
|
There are clustering approaches that assign a single POS tag to each word type.
| 0 |
In most cases, however, these expansions come with a steep increase in model complexity, with respect to training procedure and inference time.
|
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
| 0 |
We compare the manually annotated grammar, which we incorporate into the Stanford parser, to both the Berkeley (Petrov et al., 2006) and Bikel (Bikel, 2004) parsers.
|
A beam search concept is applied as in speech recognition.
| 0 |
The search starts in hypothesis ({}, 0) and ends in the hypotheses ({1, ..., J}, j), with j ∈ {1, ..., J}.
|
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.
| 0 |
There is a fairly large body of work on SMT adaptation.
|
The second algorithm builds on a boosting algorithm called AdaBoost.
| 0 |
This paper discusses the use of unlabeled examples for the problem of named entity classification.
|
This paper talks about KenLM: Faster and Smaller Language Model Queries.
| 0 |
The set of n-grams appearing in a model is sparse, and we want to efficiently find their associated probabilities and backoff penalties.
|
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
| 0 |
Vocabulary lookup is a sorted array of 64-bit word hashes.
|
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
| 0 |
Test set OOV rate is computed using the following splits: ATB (Chiang et al., 2006); CTB6 (Huang and Harper, 2009); Negra (Dubey and Keller, 2003); English, sections 2–21 (train) and section 23 (test).
|
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
| 0 |
Then, it can be verified that [equation omitted]. We can now derive the CoBoost algorithm as a means of minimizing Zco.
|
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
| 0 |
After tagging a large corpus with an automatic NE tagger, the method tries to find sets of paraphrases automatically without being given a seed phrase or any kinds of cue.
|
They showed better grammars to improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
| 0 |
Such analyzers propose multiple segmentation possibilities and their corresponding analyses for a token in isolation but have no means to determine the most likely ones.
|
It also incorporates the Good-Turing methodin estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
| 0 |
Two issues distinguish the various proposals.
|
Most IE researchers have been creating paraphrase knowledge by hand and for specific tasks.
| 0 |
The phrases have to be expressions of length less than 5 chunks and appear between two NEs.
|
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform.
| 0 |
Out-of-domain test data is from the Project Syndicate web site, a compendium of political commentary.
|
In this paper, the authors are of the opinion that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
| 0 |
(2009).
|
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
| 0 |
Our assumption is that caseframes that co-occur in resolutions often have a 2 This normalization is performed syntactically without semantics, so the agent and patient roles are not guaranteed to hold, but they usually do in practice.
|
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages.
| 0 |
The focus of this work is on building POS taggers for foreign languages, assuming that we have an English POS tagger and some parallel text between the two languages.
|
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
| 0 |
For t = 1, ..., T and for j = 1, 2: where 4 = exp(-jg'(xj,i)). In practice, this greedy approach almost always results in an overall decrease in the value of Zco.
|
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
| 0 |
BerkeleyLM revision 152 (Pauls and Klein, 2011) implements tries based on hash tables and sorted arrays in Java with lossy quantization.
|
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
| 0 |
Hence, the terminal symbols appearing in the structures that are composed are not lost (though a constant number of new symbols may be introduced).
|
The AdaBoost algorithm was developed for supervised learning.
| 0 |
(Riloff and Shepherd 97) describe a bootstrapping approach for acquiring nouns in particular categories (such as "vehicle" or "weapon" categories).
|
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.
| 0 |
ogy (Koskenniemi 1983; Antworth 1990; Tzoukermann and Liberman 1990; Karttunen, Kaplan, and Zaenen 1992; Sproat 1992); we represent the fact that ir, attaches to nouns by allowing ε-transitions from the final states of all noun entries, to the initial state of the sub-WFST representing f,.
|
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
| 0 |
As with lexical expectations, the semantic classes of co-referring expressions are 4 They may not be perfectly substitutable, for example one NP may be more specific (e.g., 'he' vs. 'John F. Kennedy').
|
BABAR's performance in both domains of terrorism and natural disaster, and the contextual-role knowledge in pronouns have shown successful results.
| 0 |
If these sets do not overlap, then the words cannot be coreferent.
|
This topic has been getting more attention, driven by the needs of various NLP applications.
| 0 |
Domain / # of phrases / total phrases / accuracy: CC, 7 or more, 105, 87.6%; CC, 6 or less, 106, 67.0%; PC, 7 or more, 359, 99.2%; PC, 6 or less, 255, 65.1%. Table 1.
|
This paper presents methods to query N-gram language models, minimizing time and space costs.
| 0 |
Compared to decoding, this task is cache-unfriendly in that repeated queries happen only as they naturally occur in text.
|
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
| 0 |
We can check what the consequences of less manual annotation of results would have been: with half the number of manual judgements, we can distinguish about 40% of the systems, 10% less.
|
This assumption, however, is not inherent to type-based tagging models.
| 0 |
0 57.3 51.
|
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
| 0 |
As in boosting, the algorithm works in rounds.
|
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
| 0 |
To optimize this function, we used L-BFGS, a quasi-Newton method (Liu and Nocedal, 1989).
|
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
| 0 |
A compatible view is presented by Charniak et al. (1996) who consider the kind of probabilities a generative parser should get from a PoS tagger, and concludes that these should be P(w|t) “and nothing fancier”.3 In our setting, therefore, the Lattice is not used to induce a probability distribution on a linear context, but rather, it is used as a common-denominator of state-indexation of all segmentations possibilities of a surface form.
|
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results.
| 0 |
An interesting observation is that while vertical markovization benefits all our models, its effect is less evident in Cohen and Smith.
|
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
| 0 |
The simplest version of the maximum matching algorithm effectively deals with ambiguity by ignoring it, since the method is guaranteed to produce only one segmentation.
|
It is probably the first analysis of Arabic parsing of this kind.
| 0 |
We include a list of per-category results for selected phrasal labels, POS tags, and dependencies in Table 8.
|
This paper talks about Unsupervised Models for Named Entity Classification.
| 0 |
88,962 (spelling,context) pairs were extracted as training data.
|
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
| 0 |
6.
|
BABAR's performance in both domains of terrorism and natural disaster, and the contextual-role knowledge in pronouns have shown successful results.
| 0 |
This step ensures that the most frequent terms for each domain are labeled (in case some of them are not in WordNet) and labeled with the sense most appropriate for the domain.
|
This paper presents a maximum entropy-based named entity recognizer (NER).
| 0 |
of Tokens MENERGI 318 160,000 200 180,000 IdentiFinder – 650,000 – 790,000 MENE – – 350 321,000 Table 4: Training Data. MUC7 test accuracy. For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.
|
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
| 1 |
In considering the relationship between formalisms, we show that it is useful to abstract away from the details of the formalism, and examine the nature of their derivation process as reflected by properties of their trees. We find that several of the formalisms considered can be seen as being closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars. On the basis of this observation, we describe a class of formalisms which we call Linear Context-Free Rewriting Systems, and show they are recognizable in polynomial time and generate only semilinear languages.
|
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
| 0 |
MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).
|
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
| 0 |
Following the convention presented in earlier sections, we assume that each example is an instance pair of the form (x1,i, x2,i), where xj,i ∈ 2^Xj, j ∈ {1, 2}.
|
The approach has been successfully tested on the 8 000-word Verbmobil task.
| 0 |
Figure 2: Order in which source positions are visited for the example given in Fig.1.
|
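
The three columns above (source_text, label, target_text) form a simple record schema. Below is a minimal sketch, not part of the dataset itself, of how one might read a record of this schema if each row were flattened to a single pipe-delimited line; the Record class and parse_row helper are hypothetical names introduced only for illustration, and the example row is taken from the preview above.

```python
from dataclasses import dataclass

@dataclass
class Record:
    source_text: str   # summary-style sentence (header says lengths 27-368)
    label: int         # 0 or 1
    target_text: str   # sentence from the cited paper (header says lengths 1-5.38k)

def parse_row(row: str) -> Record:
    """Parse one flattened row of the form 'source | label | target |'.

    Assumes the text cells themselves contain no '|' characters,
    which holds for the preview rows shown above.
    """
    cells = [c.strip() for c in row.strip().strip("|").split("|")]
    return Record(source_text=cells[0], label=int(cells[1]), target_text=cells[2])

# Example, using the second row of the preview:
row = ("It is probably the first analysis of Arabic parsing of this kind. "
       "| 0 | 6 Joint Segmentation and Parsing. |")
print(parse_row(row).label)  # -> 0
```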