Dataset schema: source_text (string, 27–368 characters), label (int64, 0 or 1), target_text (string, 1–5.38k characters). Each record below lists source_text, label, and target_text in turn.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
There is a (costless) transition between the NC node and f,.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
For inference, we are interested in the posterior probability over the latent variables in our model.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
The method reported in this paper makes use solely of unigram probabilities, and is therefore a zeroth-order model: the cost of a particular segmentation is estimated as the sum of the costs of the individual words in the segmentation.
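To make the zeroth-order cost concrete, the following is a minimal sketch (not the paper's implementation; the toy dictionary and probabilities are invented) that scores each candidate segmentation as the sum of its word costs and selects the cheapest by dynamic programming:

```python
import math

# Hypothetical unigram probabilities for dictionary words (toy values).
unigram_prob = {"xue2sheng1": 1e-4, "men0": 1e-3, "xue2sheng1men0": 5e-6}

def word_cost(word):
    """Cost of a word = negative log unigram probability (infinite if unlisted)."""
    p = unigram_prob.get(word)
    return -math.log(p) if p else float("inf")

def best_segmentation(syllables):
    """Zeroth-order model: the cost of a segmentation is the sum of its
    word costs; dynamic programming over prefix positions finds the cheapest."""
    n = len(syllables)
    best = [(0.0, [])] + [(float("inf"), None)] * n
    for j in range(1, n + 1):
        for i in range(j):
            word = "".join(syllables[i:j])
            cost = best[i][0] + word_cost(word)
            if cost < best[j][0]:
                best[j] = (cost, best[i][1] + [word])
    return best[n]

print(best_segmentation(["xue2sheng1", "men0"]))
```

Because the model is zeroth-order, no context enters the cost: each word contributes its own negative log probability independently.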
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
(student+plural) 'students,' which is derived by the affixation of the plural affix men0 to the noun xue2sheng1.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the interhuman agreement.
0
A morpheme, on the other hand, usually corresponds to a unique hanzi, though there are a few cases where variant forms are found.
Because many systems performed similarly, they are not able to draw strong conclusions about the correlation between manual and automatic evaluation metrics.
0
Making the ten judgements (2 types for 5 systems) takes on average 2 minutes.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
For developing these mechanisms, the possibility to feed in hand-annotated information is very useful.
Their results show that their high-performance NER uses less training data than other systems.
0
This is an iterative method that improves the estimation of the parameters at each iteration.
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
(See Sproat and Shih 1995.)
Most IE researchers have been creating paraphrase knowledge by hand and for specific tasks.
1
Up to now, most IE researchers have been creating paraphrase knowledge (or IE patterns) by hand and for specific tasks.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
Contains digits.
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
These derivation trees show how the composition operations were used to derive the final structures from elementary structures.
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
As can be seen in Table 4, our training data is much smaller than that used by MENE and IdentiFinder.
The model draws on various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
A high-level relation is agent, which relates an animate nominal to a predicate.
This paper presents Exploiting Diversity in Natural Language Processing: Combining Parsers.
0
The first shows that constituent features and context do not help in deciding which parser to trust.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
3 61.7 38.
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back.
0
See Figure 3 for a screenshot of the evaluation tool.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
Exposing this information to the decoder will lead to better hypothesis recombination.
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
For all lists except locations, the lists are processed into a list of tokens (unigrams).
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
To facilitate comparison with previous work, we exhaustively evaluate this grammar and two other parsing models when gold segmentation is assumed (§5).
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
We have described grammar state splits that significantly improve parsing performance, catalogued parsing errors, and quantified the effect of segmentation errors.
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed the upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with those of state-of-the-art standalone applications.
0
Given a PCFG grammar $G$ and a lattice $L$ with nodes $n_1 \ldots n_k$, we construct the weighted grammar $G_L$ as follows: for every arc (lexeme) $l \in L$ from node $n_i$ to node $n_j$, we add to $G_L$ the rule $[l \to t_{n_i}, t_{n_i+1}, \ldots, t_{n_j-1}]$ with a probability of 1 (this indicates that the lexeme $l$ spans from node $n_i$ to node $n_j$).
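As a small illustration of that construction (the data representation is hypothetical, not the authors' code), each arc from node $n_i$ to node $n_j$ becomes one rule that rewrites the lexeme to the terminal slots $t_{n_i} \ldots t_{n_j-1}$ with probability 1:

```python
# Sketch of the lattice-to-grammar construction described above: a lattice
# arc is (start_node, end_node, lexeme), and each arc l from n_i to n_j
# yields the rule l -> t_{n_i} t_{n_i+1} ... t_{n_j-1} with probability 1,
# so the lexeme spans exactly the positions it covers.
def lattice_to_rules(arcs):
    rules = []
    for start, end, lexeme in arcs:
        rhs = [f"t{k}" for k in range(start, end)]
        rules.append((lexeme, rhs, 1.0))
    return rules

# Toy lattice for the Hebrew-style form "fmnh": one arc for the whole form,
# plus a two-arc path segmenting it as f-mnh.
arcs = [(0, 4, "fmnh"), (0, 1, "f"), (1, 4, "mnh")]
for lhs, rhs, p in lattice_to_rules(arcs):
    print(f"{lhs} -> {' '.join(rhs)}  [p={p}]")
```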
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
0
Two annotators received training with the RST definitions and started the process with a first set of 10 texts, the results of which were intensively discussed and revised.
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
0
(Charniak et al., 1996).
Bean and Riloff also used bootstrapping to extend their semantic compatibility model; they proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
BABAR merely identifies caseframes that frequently co-occur in coreference resolutions.
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
0
Of course, we are primarily interested in applying our techniques to languages for which no labeled resources are available.
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
The most accurate characterization of Chinese writing is that it is morphosyllabic (DeFrancis 1984): each hanzi represents one morpheme lexically and semantically, and one syllable phonologically.
Their results show that their high-performance NER uses less training data than other systems.
0
Only tokens with initCaps not found in commonWords are tested against each list in Table 2.
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
0
Examining the word fidanzato for the “No LP” and “With LP” models is particularly instructive.
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure.
0
We used a standard one-pass phrase-based system (Koehn et al., 2003), with the following features: relative-frequency TM probabilities in both directions; a 4-gram LM with Kneser-Ney smoothing; word-displacement distortion model; and word count.
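Such features are combined log-linearly in a standard phrase-based decoder; the sketch below (feature names, weights, and values are all illustrative, not the paper's settings) shows how a single hypothesis would be scored:

```python
import math

# Illustrative log-linear model: a hypothesis score is a weighted sum of
# log-domain feature values, as in standard phrase-based SMT.
weights = {"tm_fwd": 0.2, "tm_bwd": 0.2, "lm": 0.4,
           "distortion": 0.1, "word_count": 0.1}

def hypothesis_score(features):
    """features maps each feature name to its (log-domain) value."""
    return sum(weights[name] * value for name, value in features.items())

example = {
    "tm_fwd": math.log(0.3),   # relative-frequency TM probability, source->target
    "tm_bwd": math.log(0.25),  # relative-frequency TM probability, target->source
    "lm": math.log(0.01),      # 4-gram LM probability of the output
    "distortion": -2.0,        # word-displacement distortion penalty
    "word_count": 5.0,         # output length feature
}
print(hypothesis_score(example))
```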
The model draws on various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
For example, in Northern dialects (such as Beijing), a full tone (1, 2, 3, or 4) is changed to a neutral tone (0) in the final syllable of many words: dong1gua1 'winter melon' is often pronounced dong1gua0.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
The 1-bit sign is almost always negative and the 8-bit exponent is not fully used on the range of values, so in practice this corresponds to quantization ranging from 17 to 20 total bits.
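To see where the 17- to 20-bit figure comes from, one can unpack the IEEE-754 fields of typical log10 probabilities; this inspection sketch is not from the paper:

```python
import struct

def float_bits(x):
    """Unpack a 32-bit IEEE-754 float into (sign, exponent, mantissa) fields."""
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    return bits >> 31, (bits >> 23) & 0xFF, bits & 0x7FFFFF

# Log10 probabilities are negative and small in magnitude, so the sign bit is
# almost always 1 and the exponent covers only a narrow band of its 8-bit range.
for logprob in (-0.01, -1.5, -6.0):
    sign, exp, mant = float_bits(logprob)
    print(f"{logprob:>6}: sign={sign} exponent={exp} mantissa={mant:#08x}")
```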
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
The method shares some characteristics of the decision list algorithm presented in this paper.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
See Figure 3 for a screenshot of the evaluation tool.
The AdaBoost algorithm was developed for supervised learning.
0
.
There is no global pruning.
0
The following recursive equation is evaluated:

$$Q_{e'}(e; S, C, j) = p(f_j \mid e) \cdot \max_{\delta, e''} \Big\{ p(j \mid j', J) \cdot p(\delta) \cdot p_{\delta}(e \mid e', e'') \cdot \max_{\substack{(S', j'):\ (S', C \setminus \{j\}, j') \to (S, C, j) \\ j' \in C \setminus \{j\}}} Q_{e''}(e'; S', C \setminus \{j\}, j') \Big\} \qquad (2)$$

The search ends in the hypotheses $(I; \{1, \ldots, J\}, j)$.
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
2 70.7 52.
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
Specifically, we assume each word type W consists of feature-value pairs (f, v).
A beam search concept is applied as in speech recognition.
0
The experimental tests are carried out on the Verbmobil task (German-English, 8000-word vocabulary), which is a limited-domain spoken-language task.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
A non-optimal analysis is shown with dotted lines in the bottom frame.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
The compact variant uses sorted arrays instead of hash tables within each node, saving some memory, but still stores full 64-bit pointers.
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined both by how similar to it they appear to be and by whether they belong to general language or not.
0
Within this framework, we use features intended to capture degree of generality, including the output from an SVM classifier that uses the intersection between IN and OUT as positive examples.
Manual evaluation that scores translations on a graded scale from 1 to 5 seems to be very hard to perform.
0
It rewards matches of n-gram sequences, but measures overall grammatical coherence at most indirectly.
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers.
0
We call this pseudo-projective dependency parsing, since it is based on a notion of pseudo-projectivity (Kahane et al., 1998).
The use of global features has shown excellent results on MUC-6 and MUC-7 test data.
0
We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
0
Finally, the focus/background partition is annotated, together with the focus question that elicits the corresponding answer.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
1 53.8 47.
This paper presents KenLM: Faster and Smaller Language Model Queries.
0
The returned state $s(w_1^n)$ may then be used in a follow-on query $p(w_{n+1} \mid s(w_1^n))$ that extends the previous query by one word.
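KenLM's Python wrapper exposes exactly this stateful pattern; below is a minimal sketch assuming a model file example.arpa (a placeholder path):

```python
import kenlm  # Python wrapper for KenLM, https://github.com/kpu/kenlm

model = kenlm.Model("example.arpa")  # placeholder path to an ARPA/binary model

# Query left to right, carrying state: the state s(w_1^n) returned by one
# query feeds the next query p(w_{n+1} | s(w_1^n)), so the context is never
# re-scored from scratch.
state, out_state = kenlm.State(), kenlm.State()
model.BeginSentenceWrite(state)  # start from the begin-of-sentence context
for word in "this is a test".split():
    logprob = model.BaseScore(state, word, out_state)  # log10 p(word | state)
    state, out_state = out_state, state  # returned state extends the context
    print(word, logprob)
```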
A beam search concept is applied as in speech recognition.
0
The model is often further restricted so that each source word is assigned to exactly one target word (Brown et al., 1993; Ney et al., 2000).
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
Note that Chang, Chen, and Chen (1991), in addition to word-frequency information, include a constraint-satisfaction model, so their method is really a hybrid approach.
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
There has also been work using a bootstrapping approach [Brin 98; Agichtein and Gravano 00; Ravichandran and Hovy 02].
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
First, we parsed the training corpus, collected all the noun phrases, and looked up each head noun in WordNet (Miller, 1990).
Bean and Riloff also used bootstrapping to extend their semantic compatibility model; they proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
Our representation of contextual roles is based on information extraction patterns that are converted into simple caseframes.
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back.
0
To summarize, we provided: The performance of the baseline system is similar to the best submissions in last year’s shared task.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
The binary language model from Section 5.2 and text phrase table were forced into disk cache before each run.
This paper presents Exploiting Diversity in Natural Language Processing: Combining Parsers.
0
We see from these results that the behavior of the parametric techniques is robust in the presence of a poor parser.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
For t = 1, ..., T:
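That line is the outer loop of AdaBoost. For context, here is a compact sketch of the standard supervised version with binary labels in {-1, +1} (illustrative only; the paper adapts boosting to the co-training setting):

```python
import math

def adaboost(examples, labels, weak_learners, T):
    """Standard AdaBoost: for t = 1, ..., T, pick the weak learner with the
    lowest weighted error, then reweight examples toward the mistakes.
    Labels and weak-learner outputs are in {-1, +1}."""
    n = len(examples)
    D = [1.0 / n] * n          # initial uniform distribution over examples
    ensemble = []              # list of (alpha, hypothesis) pairs
    for _ in range(T):
        errs = [sum(D[i] for i in range(n) if h(examples[i]) != labels[i])
                for h in weak_learners]
        eps, h = min(zip(errs, weak_learners), key=lambda pair: pair[0])
        if eps >= 0.5:
            break  # no weak learner beats chance on this distribution
        alpha = 0.5 * math.log((1 - eps) / max(eps, 1e-12))
        ensemble.append((alpha, h))
        # Reweight: increase weight on misclassified examples, renormalize.
        D = [d * math.exp(-alpha * labels[i] * h(examples[i]))
             for i, d in enumerate(D)]
        Z = sum(D)
        D = [d / Z for d in D]
    return lambda x: 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1

# Toy usage: threshold stumps on one-dimensional data.
xs, ys = [0.1, 0.4, 0.6, 0.9], [-1, -1, 1, 1]
stumps = [lambda x, c=c: 1 if x > c else -1 for c in (0.25, 0.5, 0.75)]
clf = adaboost(xs, ys, stumps, T=5)
print([clf(x) for x in xs])
```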
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
For example, a good classifier would identify Mrs. Frank as a person, Steptoe & Johnson as a company, and Honduras as a location.
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
Our results show that MENERGI performs as well as IdentiFinder when trained on comparable amount of training data.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
Unlike Germann et al. (2009), we chose a model size so that all benchmarks fit comfortably in main memory.
Here we present two algorithms.
0
(Berland and Charniak 99) describe a method for extracting parts of objects from wholes (e.g., "speedometer" from "car") from a large corpus using hand-crafted patterns.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
Much confusion has been sown about Chinese writing by the use of the term ideograph, suggesting that hanzi somehow directly represent ideas.
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
For all lists except locations, the lists are processed into a list of tokens (unigrams).
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined both by how similar to it they appear to be and by whether they belong to general language or not.
0
We incorporate instance weighting into a mixture-model framework, and find that it yields consistent improvements over a wide range of baselines.
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
0
• Some tools would allow for the desired annotation mode, but are so complicated (they can be used for many other purposes as well) that annotators take a long time getting used to them.
Most IE researchers have been creating paraphrase knowledge by hand and for specific tasks.
0
As can be seen in Figure 3, the phrases in the “agree” set include completely different relationships, which are not paraphrases.
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Ma¨rkische Allgemeine Zeitung, a German regional daily.
0
All commentaries have been tagged with part-of-speech information using Brants' TnT tagger and the Stuttgart/Tübingen Tag Set (automatic analysis was responsible for this decision).
Manual evaluation that scores translations on a graded scale from 1 to 5 seems to be very hard to perform.
0
Given the way judgements are collected, human judges tend to use the scores to rank systems against each other.
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
0
The same form fmnh can be segmented as f-mnh, with f ("that") functioning as a relativizer attached to the form mnh.
It is probably the first analysis of Arabic parsing of this kind.
0
In our grammar, features are realized as annotations to basic category labels.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
JI!
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
Table 6: Incremental dev set results for the manually annotated grammar (sentences of length ≤ 70).
There are clustering approaches that assign a single POS tag to each word type.
0
Model Overview: The model starts by generating a tag assignment T for each word type in a vocabulary, assuming one tag per word.
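A toy sketch of that type-level generative step (the tag set, vocabulary, and uniform tag choice are invented for illustration):

```python
import random

random.seed(0)
tags = ["NOUN", "VERB", "DET"]
vocabulary = ["the", "dog", "barks", "sees"]

# Type-based step: draw one tag per word TYPE, so every token of a word
# shares that tag -- a hard sparsity constraint token-level models lack.
tag_of_type = {w: random.choice(tags) for w in vocabulary}

# Token-level generation then just looks the tag up; ambiguity is gone
# by construction (one tag per word type).
sentence = ["the", "dog", "barks"]
print([(w, tag_of_type[w]) for w in sentence])
```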
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
As we have said, parse quality decreases with sentence length.
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems.
0
See Figure 3 for a screenshot of the evaluation tool.
The AdaBoost algorithm was developed for supervised learning.
0
We are currently exploring such algorithms.
They have made use of local and global features to deal with instances of the same token in a document.
0
If it is made up of all capital letters, then (allCaps, zone) is set to 1.
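A sketch of that feature test (the function and zone names are illustrative, not the system's actual code):

```python
def all_caps_feature(token, zone):
    """Emit (allCaps, zone) = 1 when the token is made up of all capital
    letters, mirroring the feature description above."""
    if token.isalpha() and token.isupper():
        return {("allCaps", zone): 1}
    return {}

print(all_caps_feature("IBM", "TXT"))    # {('allCaps', 'TXT'): 1}
print(all_caps_feature("Apple", "TXT"))  # {}
```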
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
(b) POS tagging accuracy is lowest for maSdar verbal nouns (VBG,VN) and adjectives (e.g., JJ).
It is probably the first analysis of Arabic parsing of this kind.
0
64 94.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
All language model queries issued by machine translation decoders follow a left-to-right pattern, starting with either the begin-of-sentence token or null context for mid-sentence fragments.
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
0
For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.
The model draws on various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
The average agreement among the human judges is .76, and the average agreement between ST and the humans is .75, or about 99% of the interhuman agreement. One can better visualize the precision-recall similarity matrix by producing from that matrix a distance matrix, computing a classical metric multidimensional scaling (Torgerson 1958; Becker, Chambers, Wilks 1988) on that distance matrix, and plotting the first two most significant dimensions.
They plan to extend instance-weighting to other standard SMT components and to capture the degree of generality of phrase pairs.
0
This is a straightforward technique that is arguably better suited to the adaptation task than the standard method of treating representative IN sentences as queries, then pooling the match results.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
The first value reports memory use immediately after loading, while the second reports the increase during scoring. BerkeleyLM is written in Java, which requires memory to be specified in advance.
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
There is no relation between these two interpretations other than the fact that their surface forms coincide, and we argue that the only reason to prefer one analysis over the other is compositional.
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
1
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
0
The approach uses both spelling and contextual rules.
Manual evaluation that scores translations on a graded scale from 1 to 5 seemed to be very hard to perform.
0
Unfortunately, we have much less data to work with than with the automatic scores.
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
Given a sorted array A, these other packages use binary search to find keys in O(log |A|) time.
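For reference, this is the binary search those packages run over a sorted key array (a generic sketch, not any particular package's code):

```python
def binary_search(A, key):
    """Find key in sorted array A in O(log |A|) comparisons; index or -1."""
    lo, hi = 0, len(A) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if A[mid] == key:
            return mid
        if A[mid] < key:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(binary_search([3, 9, 17, 42, 99], 42))  # 3
```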
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
Each word is simply tagged with the semantic classes corresponding to all of its senses.
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure.
0
We describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
Nodes in the trie are based on arrays sorted by vocabulary identifier.
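The PROBING structure mentioned above relies on linear probing; here is a minimal toy table in that style (far simpler than the paper's packed, memory-mapped implementation, and without resizing):

```python
class LinearProbingTable:
    """Open-addressing hash table with linear probing: on a collision, walk
    forward one bucket at a time until an empty slot or the key is found.
    Toy sketch: fixed capacity, no resizing (assumes the table never fills)."""
    EMPTY = object()

    def __init__(self, capacity=8):
        self.slots = [self.EMPTY] * capacity

    def _probe(self, key):
        i = hash(key) % len(self.slots)
        while self.slots[i] is not self.EMPTY and self.slots[i][0] != key:
            i = (i + 1) % len(self.slots)  # linear probe to the next bucket
        return i

    def put(self, key, value):
        self.slots[self._probe(key)] = (key, value)

    def get(self, key, default=None):
        slot = self.slots[self._probe(key)]
        return default if slot is self.EMPTY else slot[1]

t = LinearProbingTable()
t.put("iran", -2.5)
print(t.get("iran"))
```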
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
0
Our results suggest that it is possible to learn accurate POS taggers for languages which do not have any annotated data, but have translations into a resource-rich language.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
The
Explanations for this phenomenon are the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation.
0
For alif with hamza, normalization can be seen as another level of devocalization.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model; they proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
The confidence level is then used as the belief value for the knowledge source.
This paper reports research in the area of automatic paraphrase discovery.
0
Step 4.
This assumption, however, is not inherent to type-based tagging models.
0
The model starts by generating a tag assignment for each word type in a vocabulary, assuming one tag per word.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
This PP modifies another NP, whose head is a singular noun.
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
However, it is desirable if we can separate them.