source_text
stringlengths
27
368
label
int64
0
1
target_text
stringlengths
1
5.38k
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
Table 1 shows results of the benchmark.
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.
0
However, for our purposes it is not sufficient to repre­ sent the morphological decomposition of, say, plural nouns: we also need an estimate of the cost of the resulting word.
These clusters are computed using an SVD variant without relying on transitional structure.
0
2 62.6 45.
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement.
0
The present proposal falls into the last group.
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
(Levinger et al., 1995; Goldberg et al., ; Adler et al., 2008)) will make the parser more robust and suitable for use in more realistic scenarios.
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.
0
For instance, the sentence Similar improvements in haemoglobin levels were reported in the scientific literature for other epoetins would likely be considered domain-specific despite the presence of general phrases like were reported in.
This paper discusses the Potsdam Commentary Corpus, a corpus of german assembeled by potsdam university.
0
Two aspects of the corpus have been presented in previous papers ((Re- itter, Stede 2003) on underspecified rhetorical structure; (Stede 2003) on the perspective of knowledge-based summarization).
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
The corresponding token words w are drawn conditioned on t and θ.2 Our full generative model is given by: K P (φ, θ|T , α, β) = n (P (φt|α)P (θt|T , α)) t=1 The transition distribution φt for each tag t is drawn according to DIRICHLET(α, K ), where α is the shared transition and emission distribution hyperparameter.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
Various verbal (e.g., �, .::) and adjectival.
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.
0
We use the universal POS tagset of Petrov et al. (2011) in our experiments.10 This set C consists of the following 12 coarse-grained tags: NOUN (nouns), VERB (verbs), ADJ (adjectives), ADV (adverbs), PRON (pronouns), DET (determiners), ADP (prepositions or postpositions), NUM (numerals), CONJ (conjunctions), PRT (particles), PUNC (punctuation marks) and X (a catch-all for other categories such as abbreviations or foreign words).
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.
0
For some language pairs (such as GermanEnglish) system performance is more divergent than for others (such as English-French), at least as measured by BLEU.
In this paper, the authors are of the opinion that the sequence models-based approaches usually treat token-level tag assignment as the primary latent variable.
0
2 62.2 39.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
The maximum likelihood estimates (i.e., parameter values which maximize 10) can not be found analytically, but the EM algorithm can be used to hill-climb to a local maximum of the likelihood function from some initial parameter settings.
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
CHARACTERIZING STRUCTURAL DESCRIPTIONS PRODUCED BY VARIOUS GRAMMATICAL FORMALISMS*
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
The NP is a complement to a preposition, which is the head of a PP.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
Manual annotation results in human in- terpretable grammars that can inform future tree- bank annotation decisions.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
gaolgaolxing4xing4 'happily' In the particular form of A-not-A reduplication illustrated in (3a), the first syllable of the verb is copied, and the negative markerbu4 'not' is inserted between the copy and the full verb.
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
0
(1998) did make use of information from the whole document.
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used simpler training procedure.
0
In both cases, the instanceweighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline, and gains of between 0.6 and 1.8 over an equivalent mixture model (with an identical training procedure but without instance weighting).
This paper offers a broad insight into of Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
87 Table 7: Test set results.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
(We would like to note though that unlike previous boosting algorithms, the CoBoost algorithm presented here is not a boosting algorithm under Valiant's (Valiant 84) Probably Approximately Correct (PAC) model.)
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
0
The possible analyses of a surface token pose constraints on the analyses of specific segments.
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
Out of those 15 links, 4 are errors, namely “buy - pay”, “acquire - pay”, “purchase - stake” “acquisition - stake”.
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
0
A position is presented by the word at that position.
The corpus was annoted with different linguitic information.
0
Some relations are signalled by subordinating conjunctions, which clearly demarcate the range of the text spans related (matrix clause, embedded clause).
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
State is implemented in their scrolling variant, which is a trie annotated with forward and backward pointers.
This topic has been getting more attention, driven by the needs of various NLP applications.
0
This approach needs a phrase as an initial seed and thus the possible relationships to be extracted are naturally limited.
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.
0
(Thus the domain of the dev and test corpora matches IN.)
The second algorithm builds on a boosting algorithm called AdaBoost.
0
(Blum and Mitchell 98) give an example that illustrates just how powerful the second constraint can be.
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement.
0
Other kinds of productive word classes, such as company names, abbreviations (termed fijsuolxie3 in Mandarin), and place names can easily be 20 Note that 7 in E 7 is normally pronounced as leO, but as part of a resultative it is liao3..
Vijay-Shankar et all considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
The string pumping lemma for CFG's (uvwxy-theorem) can be seen as a corollary of this lemma. from this pumping lemma: a single path can be pumped independently.
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
Borth 2MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu 3Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens Table 6: Comparison of results for MUC7 wick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement.
0
In these examples, the names identified by the two systems (if any) are underlined; the sentence with the correct segmentation is boxed.19 The differences in performance between the two systems relate directly to three issues, which can be seen as differences in the tuning of the models, rather than repre­ senting differences in the capabilities of the model per se.
The approach has been successfully tested on the 8 000-word Verbmobil task.
0
The Verbmobil task is an appointment scheduling task.
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech.
0
This work was supported in part by the Swedish Research Council (621-2002-4207).
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
0
Inflectional features marking pronominal elements may be attached to different kinds of categories marking their pronominal complements.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
Supervised methods have been applied quite successfully to the full MUC named-entity task (Bikel et al. 97).
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.
0
Supervised learning approaches have advanced the state-of-the-art on a variety of tasks in natural language processing, resulting in highly accurate systems.
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
0
This is the first set that gives us a fair evaluation of the Bayes models, and the Bayes switching model performs significantly better than its non-parametric counterpart.
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement.
0
More complex approaches such as the relaxation technique have been applied to this problem Fan and Tsai (1988}.
The use of global features has shown excellent result in the performance on MUC-6 and MUC-7 test data.
0
Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
Many packages perform language model queries.
This topic has been getting more attention, driven by the needs of various NLP applications.
0
As was explained in the results section, “strength” or “add” are not desirable keywords in the CC-domain.
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back.
0
The test set included 2000 sentences from the Europarl corpus, but also 1064 sentences out-ofdomain test data.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
Unlike Germann et al. (2009), we chose a model size so that all benchmarks fit comfortably in main memory.
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
The results of the manual and automatic evaluation of the participating system translations is detailed in the figures at the end of this paper.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
3 These are not full case frames in the traditional sense, but they approximate a simple case frame with a single slot.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
Therefore, we want state to encode the minimum amount of information necessary to properly compute language model scores, so that the decoder will be faster and make fewer search errors.
Most IE researchers have been creating paraphrase knowledge by hand and specific tasks.
0
3.1 Corpora.
NER is useful in many NLP applications such as information extraction, question answering, etc .Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.
Here both parametric and non-parametric models are explored.
0
The combining technique must act as a multi-position switch indicating which parser should be trusted for the particular sentence.
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
Most similar is scrolling queries, wherein left-to-right queries that add one word at a time are optimized.
In this paper, the authors are of the opinion that the sequence models-based approaches usually treat token-level tag assignment as the primary latent variable.
0
i=1 (f,v)∈Wi
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
If it is made up of all capital letters, then (allCaps, zone) is set to 1.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
The data structure was populated with 64-bit integers sampled uniformly without replacement.
The texts were annotated with the RSTtool.
0
One key issue here is to seek a discourse-based model of information structure.
This paper talks about Unsupervised Models for Named Entity Classification.
0
We again adopt an approach where we alternate between two classifiers: one classifier is modified while the other remains fixed.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
), and thosethat begin with a verb (� ub..i �u _..
This topic has been getting more attention, driven by the needs of various NLP applications.
0
If the expression is a word or a short phrase (like “corporation” and “company”), it is called a “synonym”.
These clusters are computed using an SVD variant without relying on transitional structure.
0
The final model tions.
The departure from the traditional token-based tagging approach allow them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
Our experiments consistently demonstrate that this model architecture yields substantial performance gains over more complex tagging counterparts.
They found replacing it with a ranked evaluation to be more suitable.
0
While building a machine translation system is a serious undertaking, in future we hope to attract more newcomers to the field by keeping the barrier of entry as low as possible.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
Section 4.1 explained that state s is stored by applications with partial hypotheses to determine when they can be recombined.
This paper offers a broad insight into of Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
07 80.
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an eÆcient search algorithm.
0
Subsets of partial hypotheses with coverage sets C of increasing cardinality c are processed.
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
Unfortunately, we were unable to correctly run the IRSTLM quantized variant.
This paper discusses the Potsdam Commentary Corpus, a corpus of german assembeled by potsdam university.
0
The web-based Annis imports data in a variety of XML formats and tagsets and displays it in a tier-orientedway (optionally, trees can be drawn more ele gantly in a separate window).
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
For E(ni1s), then, we substitute a smooth S against the number of class elements.
NER is useful in many NLP applications such as information extraction, question answering, etc .Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
of Articles No.
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
As suggested in Section 4.3.2, a derivation with independent paths can be divided into subcomputations with limited sharing of information.
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used simpler training procedure.
0
Moving beyond directly related work, major themes in SMT adaptation include the IR (Hildebrand et al., 2005; L¨u et al., 2007; Zhao et al., 2004) and mixture (Finch and Sumita, 2008; Foster and Kuhn, 2007; Koehn and Schroeder, 2007; L¨u et al., 2007) approaches for LMs and TMs described above, as well as methods for exploiting monolingual in-domain text, typically by translating it automatically and then performing self training (Bertoldi and Federico, 2009; Ueffing et al., 2007; Schwenk and Senellart, 2009).
It also incorporates the Good-Turing methodin estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
+ cost(unseen(fm, as desired.
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
Figure 4 shows some such phrase sets based on keywords in the CC-domain.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
As the reviewer also points out, this is a problem that is shared by, e.g., probabilistic context-free parsers, which tend to pick trees with fewer nodes.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
The performance of our system on those sentences ap­ peared rather better than theirs.
This assumption, however, is not inherent to type-based tagging models.
0
Mo del Hy per par am . E n g li s h1 1 m-1 D a n i s h1 1 m-1 D u t c h1 1 m-1 G er m a n1 1 m-1 Por tug ues e1 1 m-1 S p a ni s h1 1 m-1 S w e di s h1 1 m-1 1T W be st me dia n 45.
They showed better grammars to improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
In speech recognition the arcs of the lattice are typically weighted in order to indicate the probability of specific transitions.
This assumption, however, is not inherent to type-based tagging models.
0
In contrast to results reported in Johnson (2007), we found that the per P (Ti|T −i, β) n (f,v)∈Wi P (v|Ti, f, W −i, T −i, β) formance of our Gibbs sampler on the basic 1TW model stabilized very quickly after about 10 full it All of the probabilities on the right-hand-side are Dirichlet, distributions which can be computed analytically given counts.
Most IE researchers have been creating paraphrase knowledge by hand and specific tasks.
0
Out of those 15 links, 4 are errors, namely “buy - pay”, “acquire - pay”, “purchase - stake” “acquisition - stake”.
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
0
We use the HSPELL9 (Har’el and Kenigsberg, 2004) wordlist as a lexeme-based lexicon for pruning segmentations involving invalid segments.
BABAR's performance in both domains of terrorism and natural disaster, and the contextual-role knowledge in pronouns have shown successful results.
0
Finally, Section 5 explains how BABAR relates to previous work, and Section 6 summarizes our conclusions.
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
Floating point values may be stored in the trie exactly, using 31 bits for non-positive log probability and 32 bits for backoff5.
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used simpler training procedure.
0
Linear weights are difficult to incorporate into the standard MERT procedure because they are “hidden” within a top-level probability that represents the linear combination.1 Following previous work (Foster and Kuhn, 2007), we circumvent this problem by choosing weights to optimize corpus loglikelihood, which is roughly speaking the training criterion used by the LM and TM themselves.
Explanations for this phenomenon are relative informativeness of lexicalization, insensitivity to morphology and the effect of variable word order and these factors lead to syntactic disambiguation.
0
07 95.
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.
0
Table 2 shows results for both settings and all methods described in sections 2 and 3.
There is no global pruning.
0
3.1 Word ReOrdering with Verbgroup.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
07 80.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
Participants were also provided with two sets of 2,000 sentences of parallel text to be used for system development and tuning.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
As noted in Section 1, our code finds the longest matching entry wnf for query p(wn|s(wn−1 f ) The probability p(wn|wn−1 f ) is stored with wnf and the backoffs are immediately accessible in the provided state s(wn−1 When our code walks the data structure to find wnf , it visits wnn, wnn−1, ... , wnf .
A beam search concept is applied as in speech recognition.
0
vierten 12.
They focused on phrases which two Named Entities, and proceed in two stages.
0
However, the next step is clearly different.
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an eÆcient search algorithm.
0
Search CPU time mWER SSER Method [sec] [%] [%] MonS 0:9 42:0 30:5 QmS 10:6 34:4 23:8 IbmS 28:6 38:2 26:2 4.2 Performance Measures.
This paper offers a broad insight into of Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
The difference is due to more careful S-NOM NP NP NP VP VBG :: b NP restoring NP ADJP NN :: b NP NN NP NP ADJP DTJJ ADJP DTJJ NN :: b NP NP NP ADJP ADJP DTJJ J ..i NN :: b NP NP NP ADJP ADJP DTJJ NN _;� NP PRP DTJJ DTJJ J ..i _;� PRP J ..i NN _;� NP PRP DTJJ NN _;� NP PRP DTJJ J ..i role its constructive effective (b) Stanford (c) Berkeley (d) Bik el (a) Reference Figure 4: The constituent Restoring of its constructive and effective role parsed by the three different models (gold segmentation).
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement.
0
While size of the resulting transducers may seem daunting-the segmenter described here, as it is used in the Bell Labs Mandarin TTS system has about 32,000 states and 209,000 arcs-recent work on minimization of weighted machines and transducers (cf.
Here both parametric and non-parametric models are explored.
0
Two general approaches are presented and two combination techniques are described for each approach.
It also incorporates the Good-Turing methodin estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
16 As one reviewer points out, one problem with the unigram model chosen here is that there is still a. tendency to pick a segmentation containing fewer words.
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
A large number of rules is needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classi- However, we show that the use of data can reduce the requirements for supervision to just 7 simple "seed" rules.
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
set was based on an earlier version of the Chang et a!.
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
Trees derived by IG's exhibit a property that is not exhibited by the trees sets derived by TAG's or CFG's.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
more frequently than is done in English.