Dataset columns: source_text (string, length 27–368), label (int64, values 0–1), target_text (string, length 1–5.38k).
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
The problem can be represented as a graph with 2N vertices corresponding to the members of X1 and X2.
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
BerkeleyLM uses states to optimistically search for longer n-gram matches first and must perform twice as many random accesses to retrieve backoff information.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
The relevance of the distinction between, say, phonological words and, say, dictionary words is shown by an example like zhong1hua2 ren2min2 gong4he2-guo2 (China people republic) 'People's Republic of China.'
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
0
Table 4 shows how much the Bayes switching technique uses each of the parsers on the test set.
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
There has also been work using a bootstrapping approach [Brin 98; Agichtein and Gravano 00; Ravichandran and Hovy 02].
The resulting model is compact, efficiently learnable and linguistically expressive.
0
(2009), who also incorporate a sparsity constraint, but do so via altering the model objective using posterior regularization.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
We used this data to build an unpruned ARPA file with IRSTLM’s improved-kneser-ney option and the default three pieces.
This corpus has several advantages: it is annotated at different levels.
0
(Webber et al., 2003)).
Most IE researchers have been creating paraphrase knowledge by hand and for specific tasks.
0
We are not claiming that this method is almighty.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
Many human evaluation metrics have been proposed.
BABAR's performance in both the terrorism and natural disaster domains, and its contextual-role knowledge for pronouns, have shown successful results.
0
[Figure 1: Caseframe Network Examples. Pairs of co-occurring caseframes such as "murder of <NP>" / "killed <patient>" in terrorism and "<agent> damaged" / "was injured in <NP>" in natural disasters.] Figure 1 shows examples of caseframes that co-occur in resolutions, both in the terrorism and natural disaster domains.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
For each terminal, the Leaf Ancestor metric extracts the shortest path to the root.
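To make this concrete, here is a minimal Python sketch of per-terminal lineage extraction in the spirit of the Leaf Ancestor metric; the (label, children) tuple encoding of trees and the function name are assumptions made for illustration, not the metric's reference implementation.

```python
# Sketch: for every terminal, collect the chain of node labels from
# that leaf up to the root of the tree.

def leaf_ancestor_paths(tree, prefix=()):
    """Yield (terminal, labels-from-leaf-to-root) pairs."""
    label, children = tree
    path = prefix + (label,)
    for child in children:
        if isinstance(child, str):        # a terminal (leaf) word
            yield child, path[::-1]       # reversed: leaf-to-root order
        else:
            yield from leaf_ancestor_paths(child, path)

tree = ("S", [("NP", ["John"]), ("VP", [("V", ["sleeps"])])])
for word, path in leaf_ancestor_paths(tree):
    print(word, list(path))
# John ['NP', 'S']
# sleeps ['V', 'VP', 'S']
```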
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
We run the baseline Moses system for the French-English track of the 2011 Workshop on Machine Translation, translating the 3003-sentence test set.
Replacing this with a ranked evaluation seems to be more suitable.
0
In the graphs, system scores are indicated by a point, the confidence intervals by shaded areas around the point.
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers.
0
The maximum precision row is the upper bound on accuracy if we could pick exactly the correct constituents from among the constituents suggested by the three parsers.
The texts were annotated with the RSTtool.
0
Among the IS-units, the referring expressions are marked as such and will in the second phase receive a label for cognitive status (active, accessible-text, accessible-situation, inferrable, inactive).
They found replacing it with a ranked evaluation to be more suitable.
0
It rewards matches of n-gram sequences, but measures overall grammatical coherence at most indirectly.
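To illustrate the n-gram matching that BLEU rewards, here is a small sketch of modified n-gram precision with reference clipping; it omits the brevity penalty and multiple references, so it shows the idea rather than the full metric.

```python
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def modified_precision(candidate, reference, n):
    """Candidate n-gram counts, clipped by the reference counts."""
    cand, ref = ngrams(candidate, n), ngrams(reference, n)
    clipped = sum(min(count, ref[gram]) for gram, count in cand.items())
    total = sum(cand.values())
    return clipped / total if total else 0.0

print(modified_precision("the cat sat on the mat".split(),
                         "the cat is on the mat".split(), 2))  # 0.6
```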
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
Therefore, we want state to encode the minimum amount of information necessary to properly compute language model scores, so that the decoder will be faster and make fewer search errors.
This assumption, however, is not inherent to type-based tagging models.
0
(2010), we adopt a simpler naïve Bayes strategy, where all features are emitted independently.
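A small sketch of what emitting all features independently means in practice: the log-probability of a word's feature bundle given a tag decomposes into a sum of per-feature log-probabilities. The feature names and probability tables below are invented for illustration.

```python
import math

def log_emission(features, tag, feat_prob):
    """log P(features | tag) = sum of log P(f | tag) under independence."""
    return sum(math.log(feat_prob[tag][f]) for f in features)

feat_prob = {"NOUN": {"suffix=ing": 0.05, "capitalized": 0.30},
             "VERB": {"suffix=ing": 0.40, "capitalized": 0.02}}
feats = ["suffix=ing", "capitalized"]
print(log_emission(feats, "NOUN", feat_prob))   # log(0.05) + log(0.30)
print(log_emission(feats, "VERB", feat_prob))   # log(0.40) + log(0.02)
```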
Their results show that their high performance NER use less training data than other systems.
0
(2) Table 2: Sources of Dictionaries. Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net; Corporate Names: http://www.fmlx.com; Person First and Last Names: http://www.census.gov/genealogy/names. The McCann family . . .
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
Several papers report the use of part-of-speech information to rank segmentations (Lin, Chiang, and Su 1993; Peng and Chang 1993; Chang and Chen 1993); typically, the probability of a segmentation is multiplied by the probability of the tagging(s) for that segmentation to yield an estimate of the total probability for the analysis.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
However, the data sparsity induced by vocalization makes it difficult to train statistical models on corpora of the size of the ATB, so vocalizing and then parsing may well not help performance.
They focused on phrases connecting two Named Entities, and proceeded in two stages.
0
Although this is not a precise criterion, most cases we evaluated were relatively clear-cut.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement.
0
Finally, we model the probability of a new transliterated name as the product of PTN and PTN(hanzi_i) for each hanzi_i in the putative name. The foreign name model is implemented as a WFST, which is then summed with the WFST implementing the dictionary and morphological derivatives. (Footnote 13: The current model is too simplistic in several respects.)
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement.
0
pronunciations of individual words; they also need to compute intonational phrase boundaries in long utterances and assign relative prominence to words in those utterances.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
Equ.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
First of all, most previous articles report performance in terms of a single percent-correct score, or else in terms of the paired measures of precision and recall.
The AdaBoost algorithm was developed for supervised learning.
0
With each iteration more examples are assigned labels by both classifiers, while a high level of agreement (> 94%) is maintained between them.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
For effectively annotating connectives/scopes, we found that existing annotation tools were not well-suited, for two reasons: • Some tools are dedicated to modes of annotation (e.g., tiers), which could only quite un-intuitively be used for connectives and scopes.
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
There have been other kinds of efforts to discover paraphrases automatically from corpora.
This paper talks about Unsupervised Models for Named Entity Classification.
0
This left 962 examples, of which 85 were noise.
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.
0
It would have therefore also been possible to use the integer programming (IP) based approach of Ravi and Knight (2009) instead of the feature-HMM for POS induction on the foreign side.
In this paper, the authors argue that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
0
We consider two variants of Berg-Kirkpatrick et al.
The AdaBoost algorithm was developed for supervised learning.
0
the proportion of examples on which both classifiers give a label rather than abstaining, and the proportion of these examples on which the two classifiers agree.
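Both quantities are straightforward to compute; in the sketch below, abstention is represented by a None sentinel, an implementation choice assumed for illustration.

```python
def coverage_and_agreement(labels_a, labels_b):
    """Coverage: fraction where both classifiers commit to a label.
    Agreement: fraction of those where the two labels coincide."""
    both = [(a, b) for a, b in zip(labels_a, labels_b)
            if a is not None and b is not None]
    coverage = len(both) / len(labels_a)
    agreement = sum(a == b for a, b in both) / len(both) if both else 0.0
    return coverage, agreement

a = ["LOC", "PER", None, "ORG", "PER"]
b = ["LOC", "ORG", "PER", "ORG", None]
print(coverage_and_agreement(a, b))  # (0.6, 0.666...)
```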
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform.
0
This is not completely surprising, since all systems use very similar technology.
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
0
Exploiting Diversity in Natural Language Processing: Combining Parsers
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
In (2a), we want to split the two morphemes since the correct analysis is that we have the adverb cai2 'just,' the modal verb neng2 'be able,' and the main verb ke4fu2 'overcome'; the competing analysis is, of course, that we have the noun cai2neng2 'talent,' followed by ke4fu2 'overcome.'
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
The limited availability of comparable corpora is a significant constraint on the approach.
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
If γ is an elementary tree, the derivation tree consists of a single node labeled γ.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
The pseudo-code describing the algorithm is given in Fig.
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform.
0
Often, two systems cannot be distinguished with a confidence of over 95%, so they are ranked the same.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
To be sure, it is not always true that a hanzi represents a syllable or that it represents a morpheme.
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
0
This is the first set that gives us a fair evaluation of the Bayes models, and the Bayes switching model performs significantly better than its non-parametric counterpart.
These clusters are computed using an SVD variant without relying on transitional structure.
0
While Berg-Kirkpatrick et al.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
Backoff-smoothed models estimate this probability based on the observed entry with the longest matching history $w_f^n$, returning $p(w_n \mid w_1^{n-1}) = p(w_n \mid w_f^{n-1}) \prod_{i=1}^{f-1} b(w_i^{n-1})$, where the probability $p(w_n \mid w_f^{n-1})$ and the backoff penalties $b(w_i^{n-1})$ are given by an already-estimated model.
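A hedged sketch of that query strategy: try the longest history first, and each time a history fails to match, back off one word and multiply in its penalty. Plain dicts stand in for the real model's data structures.

```python
def backoff_score(probs, backoffs, context, word):
    """p(word | context) under a backoff-smoothed model.

    probs maps n-gram tuples to conditional probabilities;
    backoffs maps history tuples to backoff penalties (default 1.0).
    """
    for f in range(len(context) + 1):      # drop context words from the left
        ngram = tuple(context[f:]) + (word,)
        if ngram in probs:
            p = probs[ngram]
            for i in range(f):             # penalties for the unmatched histories
                p *= backoffs.get(tuple(context[i:]), 1.0)
            return p
    raise KeyError("no unigram entry for %r" % (word,))

probs = {("the", "cat"): 0.2, ("cat",): 0.01}
backoffs = {("sat", "the"): 0.5}
print(backoff_score(probs, backoffs, ["sat", "the"], "cat"))  # 0.2 * 0.5 = 0.1
print(backoff_score(probs, backoffs, ["big", "red"], "cat"))  # 0.01
```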
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
0
Feature weights were set using Och’s MERT algorithm (Och, 2003).
The second algorithm builds on a boosting algorithm called AdaBoost.
0
The only supervision is in the form of 7 seed rules (namely, that New York, California and U.S. are locations; that any name containing Mr is a person; that any name containing Incorporated is an organization; and that I.B.M. and Microsoft are organizations).
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement.
0
A Brief Introduction to the Chinese Writing System Most readers will undoubtedly be at least somewhat familiar with the nature of the Chinese writing system, but there are enough common misunderstandings that it is as well to spend a few paragraphs on properties of the Chinese script that will be relevant to topics discussed in this paper.
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not.
0
Somewhat surprisingly, there do not appear to be large systematic differences between linear and MAP combinations.
The authors show that PATB is similar to other treebanks but that annotation consistency remains low.
0
For verbs we add two features.
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
Sentences (2) and (3) help to disambiguate one way or the other.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
As the name implies, space is O(m) and linear in the number of entries.
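The structure being described is a linear-probing hash table: one flat array of m buckets, so memory is proportional to m. The toy class below illustrates only the probing idea (no resizing or deletion) and is not the library's implementation.

```python
class ProbingTable:
    def __init__(self, buckets):
        self.slots = [None] * buckets        # one flat O(m) array

    def insert(self, key, value):
        i = hash(key) % len(self.slots)
        while self.slots[i] is not None:     # walk to the next free slot
            i = (i + 1) % len(self.slots)
        self.slots[i] = (key, value)

    def lookup(self, key):
        i = hash(key) % len(self.slots)
        while self.slots[i] is not None:
            if self.slots[i][0] == key:
                return self.slots[i][1]
            i = (i + 1) % len(self.slots)
        return None

t = ProbingTable(8)
t.insert(("the", "cat"), -1.7)
print(t.lookup(("the", "cat")))  # -1.7
```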
The approach has been successfully tested on the 8 000-word Verbmobil task.
0
We have tested the translation system on the Verbmobil task (Wahlster 1993).
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
JI!
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using a caseframe network for anaphora resolution, and used information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
We do not attempt to identify the types of relationships that are found.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
The best analysis of the corpus is taken to be the true analysis, the frequencies are re-estimated, and the algorithm is repeated until it converges.
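This is a hard (Viterbi) re-estimation loop. The sketch below shows its shape: decode the corpus with the current frequencies, treat the best analysis as truth, recount, and stop once the counts no longer change. The greedy best_segmentation stand-in replaces the paper's WFST decoder and is purely illustrative.

```python
from collections import Counter

def best_segmentation(text, freqs, lexicon):
    """Greedy stand-in decoder: take the most frequent known prefix."""
    out, i = [], 0
    while i < len(text):
        candidates = [text[i:j] for j in range(i + 1, len(text) + 1)
                      if text[i:j] in lexicon]
        word = max(candidates, key=lambda w: freqs.get(w, 0)) if candidates else text[i]
        out.append(word)
        i += len(word)
    return out

def viterbi_reestimate(corpus, lexicon, max_iter=10):
    freqs = Counter()
    for _ in range(max_iter):
        new = Counter()
        for text in corpus:
            new.update(best_segmentation(text, freqs, lexicon))
        if new == freqs:                 # converged: analysis unchanged
            break
        freqs = new
    return freqs

print(viterbi_reestimate(["abc", "ab", "abab"], {"a", "b", "ab", "abc"}))
```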
It is probably the first analysis of Arabic parsing of this kind.
0
Acknowledgments We thank Steven Bethard, Evan Rosen, and Karen Shiells for material contributions to this work.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
Various verbal and adjectival.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
immediately by a Romanization into the pinyin transliteration scheme; numerals following each pinyin syllable represent tones.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
Benchmarks use the package’s binary format; our code is also the fastest at building a binary file.
They have made use of local and global features to deal with the instances of the same token in a document.
0
Since MUC-6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance.
In this paper, the authors argue that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
0
(2009) on Portuguese (Graça et al.
This topic has been getting more attention, driven by the needs of various NLP applications.
0
All the sentences have been analyzed by our chunker and NE tagger.
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages.
0
Supervised learning approaches have advanced the state-of-the-art on a variety of tasks in natural language processing, resulting in highly accurate systems.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
The cost is computed as follows, where $N$ is the corpus size and $f$ is the frequency: $\text{cost} = -\log_2(f/N)$ (1). Besides actual words from the base dictionary, the lexicon contains all hanzi in the Big 5 Chinese code with their pronunciation(s), plus entries for other characters that can be found in Chinese text, such as Roman letters, numerals, and special symbols.
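Assuming the reconstruction of (1) above is right, the cost is simply the negative base-2 logarithm of a word's relative frequency; a quick worked example:

```python
import math

def word_cost(f, N):
    """Cost in bits: -log2(f / N)."""
    return -math.log2(f / N)

# A word seen 1,000 times in a 20-million-word corpus costs ~14.3 bits.
print(round(word_cost(1000, 20_000_000), 1))  # 14.3
```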
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
While Berg-Kirkpatrick et al.
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
Fourth, we show how to build better models for three different parsers.
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
0
Ignoring the identity of the target language words e and e0, the possible partial hypothesis extensions due to the IBM restrictions are shown in Table 2.
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
0 55.3 34.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
This larger corpus was kindly provided to us by United Informatics Inc., R.O.C., and served as the source of a set of initial estimates of the word frequencies. In this re-estimation procedure only the entries in the base dictionary were used: in other words, derived words not in the base dictionary and personal and foreign names were not used.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
The transitive closure of the dictionary in (a) is composed with Id(input) (b) to form the WFST (c).
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
Note that the backoff model assumes that there is a positive correlation between the frequency of a singular noun and its plural.
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors bear on syntactic disambiguation.
0
Nonetheless, parse quality is much lower in the joint model because a lattice is effectively a long sentence.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
Fortunately, there are only a few hundred hanzi that are particularly common in transliterations; indeed, the commonest ones, such as ba1, er3, and a1, are often clear indicators that a sequence of hanzi containing them is foreign: even a name like xia4mi3-er3 'Shamir,' which is a legal Chinese personal name, retains a foreign flavor because of mi3-er3.
This topic has been getting more attention, driven by the needs of various NLP applications.
0
In figure 4, reverse relations are indicated by '*' next to the frequency.
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.
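A tiny sketch of that feature test; the dictionary-of-features encoding and the zone argument are assumptions made for illustration.

```python
def mixed_caps_feature(token, zone):
    """Fire (mixedCaps, zone) when the token starts lower-case and
    contains both upper- and lower-case letters."""
    if (token and token[0].islower()
            and any(c.isupper() for c in token)
            and any(c.islower() for c in token)):
        return {("mixedCaps", zone): 1}
    return {}

print(mixed_caps_feature("iPhone", "TXT"))  # {('mixedCaps', 'TXT'): 1}
print(mixed_caps_feature("Phone", "TXT"))   # {}
```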
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
0
To formalize the approach, we introduce four verbgroup states S: Initial (I): A contiguous, initial block of source positions is covered.
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
By design, they readily capture regularities at the token-level.
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
We have shown that, at least given independent human judgments, this is not the case, and that therefore such simplistic measures should be mistrusted.
This topic has been getting more attention, driven by the needs of various NLP applications.
0
2.1 Overview.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
Not surprisingly some semantic classes are better for names than others: in our corpora, many names are picked from the GRASS class but very few from the SICKNESS class.
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
Less frequently studied is the interplay among language, annotation choices, and parsing model design (Levy and Manning, 2003; Kübler, 2005).
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
While we had up to 11 submissions for a translation direction, we decided against presenting all 11 system outputs to the human judge.
This paper presents a maximum entropy-based named entity recognizer (NER).
0
Multiple features can be used for the same token.
It is probably the first analysis of Arabic parsing of this kind.
0
30 75.
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
We then gather all phrases with the same keyword.
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
0
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
This assumption, however, is not inherent to type-based tagging models.
0
37.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
Specifically, (+FEATS) utilizes the tag prior as well as features (e.g., suffixes and orthographic features), discussed in Section 3, for the P(W | T, ψ) component.
This topic has been getting more attention, driven by the needs of various NLP applications.
0
So we set a threshold: at least two examples are required to build a link.
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
Our suspicion is that BLEU is very sensitive to jargon, to selecting exactly the right words, and not synonyms that human judges may appreciate as equally good.
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
0
In order to handle the necessary word reordering as an optimization problem within our dynamic programming approach, we describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming (Held, Karp, 1962).
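For reference, here is a compact Held-Karp dynamic program for the TSP, the algorithm cited above; it runs in O(2^n * n^2) time and is shown on a toy distance matrix rather than the paper's reordering application.

```python
from itertools import combinations

def held_karp(dist):
    """Cost of the cheapest tour starting and ending at city 0."""
    n = len(dist)
    # best[(S, j)]: cheapest path from 0 through all cities in S, ending at j
    best = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for S in combinations(range(1, n), size):
            fs = frozenset(S)
            for j in S:
                best[(fs, j)] = min(best[(fs - {j}, k)] + dist[k][j]
                                    for k in S if k != j)
    full = frozenset(range(1, n))
    return min(best[(full, j)] + dist[j][0] for j in range(1, n))

dist = [[0, 2, 9, 10],
        [1, 0, 6, 4],
        [15, 7, 0, 8],
        [6, 3, 12, 0]]
print(held_karp(dist))  # 21
```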
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
(2009).
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
Various verbal and adjectival.
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
For each cell, the first row corresponds to the result using the best hyperparameter choice, where best is defined by the 1-1 metric.
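Assuming the 1-1 metric is one-to-one mapping accuracy (each induced cluster mapped to at most one gold tag and vice versa), a simple greedy version looks like this; exact implementations may use an optimal assignment instead.

```python
from collections import Counter

def one_to_one_accuracy(induced, gold):
    """Greedy 1-1 mapping: take cluster/tag overlaps largest-first."""
    counts = Counter(zip(induced, gold))
    used_c, used_g, correct = set(), set(), 0
    for (c, g), n in counts.most_common():
        if c not in used_c and g not in used_g:
            used_c.add(c); used_g.add(g)
            correct += n
    return correct / len(gold)

print(one_to_one_accuracy([0, 0, 1, 1, 2], ["N", "N", "V", "N", "D"]))  # 0.8
```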
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
In any event, to date, we have not compared different methods for deriving the set of initial frequency estimates.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
In the pinyin transliterations a dash(-) separates syllables that may be considered part of the same phonological word; spaces are used to separate plausible phonological words; and a plus sign (+) is used, where relevant, to indicate morpheme boundaries of interest.
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications.
0
Here, all token-internal collocations of tags unseen in our training data are pruned away.