Columns: source_text (string, 27–368 chars) · label (int64, 0 or 1) · target_text (string, 1–5.38k chars)
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation.
0
Ends with the feminine affix :: p. 4.
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
0
In this domain the major scenarios involve the things they agreed on, rather than the mere fact that they agreed.
Because many systems performed similarly, the author was not able to draw strong conclusions about the correlation between manual and automatic evaluation metrics.
0
English was again paired with German, French, and Spanish.
This paper presents a maximum entropy-based named entity recognizer (NER).
0
Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved excellent results.
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
0
The choice of the genre commentary resulted from the fact that an investigation of rhetorical structure, its interaction with other aspects of discourse structure, and the prospects for its automatic derivation are the key motivations for building up the corpus.
This assumption, however, is not inherent to type-based tagging models.
0
The P(W|T, ψ) term in the lexicon component now decomposes as: P(W|T, ψ) = ∏_{i=1}^{n} P(W_i|T_i, ψ) = ∏_{i=1}^{n} ∏_v P(v|ψ_{T_i,f}). Such distributions are not modeled by the standard HMM, which instead can model token-level frequency.
Their results suggested that it was possible to learn accurate POS taggers for languages which had no annotated data but had translations into a resource-rich language.
0
Finally, note that while most feature concepts are lexicalized, others, such as the suffix concept, are not.
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.
0
We carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
Wu and Fung introduce an evaluation method they call nk-blind.
The AdaBoost algorithm was developed for supervised learning.
0
The first method uses a similar algorithm to that of (Yarowsky 95), with modifications motivated by (Blum and Mitchell 98).
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
encodes the one-tag-per-word constraint and is uniform over type-level tag assignments.
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
Multiple features can be used for the same token.
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.
0
As expected, the vanilla HMM trained with EM performs the worst.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
This intuition is borne out by the experimental results.
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure.
0
Section 2 describes our baseline techniques for SMT adaptation, and section 3 describes the instance-weighting approach.
Here we present two algorithms.
0
Unfortunately, Yarowsky's method is not well understood from a theoretical viewpoint: we would like to formalize the notion of redundancy in unlabeled data, and set up the learning task as optimization of some appropriate objective function.
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.
0
However, a recent study (Callison-Burch et al., 2006), pointed out that this correlation may not always be strong.
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
Uniform Tag Prior (1TW) Our initial lexicon component will be uniform over possible tag assignments as well as word types.
This paper presents a maximum entropy-based named entity recognizer (NER).
0
The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.
The texts were annotated with the RSTtool.
0
… had to buy a new car. (Footnotes: 7 www.cis.upenn.edu/∼pdtb/; 8 www.eml-research.de/english/Research/NLP/Downloads)
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers.
0
Prague Dependency Treebank (Hajič et al., 2001b), Danish Dependency Treebank (Kromann, 2003), and the METU Treebank of Turkish (Oflazer et al., 2003), which generally allow annotations with non-projective dependency structures.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
The observed performance gains, coupled with the simplicity of model implementation, make it a compelling alternative to existing, more complex counterparts.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
-1 means that an NP should be ruled out as a possible antecedent, and 0 means that the knowledge source remains neutral (i.e., it has no reason to believe that they cannot be coreferent).
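The veto semantics described here can be made concrete with a small sketch. Everything below (function names, the gender knowledge source, the candidate format) is hypothetical illustration, not BABAR's actual code: a -1 vote from any knowledge source rules a candidate out, while 0 abstains.

```python
def filter_antecedents(candidates, knowledge_sources):
    """Keep only candidate NPs that no knowledge source rules out."""
    survivors = []
    for np in candidates:
        votes = [ks(np) for ks in knowledge_sources]
        if any(v == -1 for v in votes):   # vetoed by at least one source
            continue                      # 0 votes are neutral: no effect
        survivors.append(np)
    return survivors

# Hypothetical knowledge source: veto candidates with a gender mismatch.
def gender_ks(np):
    return -1 if np.get("gender") == "mismatch" else 0

cands = [{"text": "the president", "gender": "match"},
         {"text": "the company", "gender": "mismatch"}]
print(filter_antecedents(cands, [gender_ks]))  # only "the president" survives
```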
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
Each unlabeled pair (x1,i, x2,i) is represented as an edge between the nodes corresponding to x1,i and x2,i in the graph.
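As a minimal sketch of this graph construction (data structures assumed, not taken from the paper), each unlabeled pair simply contributes an edge between its two views:

```python
from collections import defaultdict

def build_view_graph(pairs):
    """pairs: iterable of (x1, x2), the two views of each unlabeled example."""
    adjacency = defaultdict(set)
    for x1, x2 in pairs:
        adjacency[("view1", x1)].add(("view2", x2))  # edge x1 -- x2
        adjacency[("view2", x2)].add(("view1", x1))
    return adjacency

graph = build_view_graph([("Mr.", "president"), ("Mr.", "said"),
                          ("Inc.", "company")])
print(len(graph[("view1", "Mr.")]))  # "Mr." is connected to two contexts
```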
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
A moment's reflection will reveal that things are not quite that simple.
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
For Æ = 1, a new target language word is generated using the trigram language model p(e|e′, e″).
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages.
0
The foreign language vertices (denoted by Vf) correspond to foreign trigram types, exactly as in Subramanya et al. (2010).
BABAR showed successful results in both the terrorism and natural disaster domains, and contextual-role knowledge proved particularly helpful for resolving pronouns.
0
(b) After they were released...
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
It also does not prune, so comparing to our pruned model would be unfair.
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
In turn we use two sorts of heuristics, orthogonal to one another, to prune segmentation possibilities based on lexical and grammatical constraints.
This paper presents KenLM: Faster and Smaller Language Model Queries.
0
As the name implies, space is O(m) and linear in the number of entries.
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
The work of the first author was supported by the Lynn and William Frankel Center for Computer Sciences.
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
For example, the MCTAG shown in Figure 7 generates trees of the form shown in Figure 4b.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
Modifying the Berkeley parser for Arabic is straightforward.
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
Semantic expectations are analogous to lexical expectations except that they represent semantic classes rather than nouns.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
This revealed interesting clues about the properties of automatic and manual scoring.
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
For novel texts, no lexicon that consists simply of a list of word entries will ever be entirely satisfactory, since the list will inevitably omit many constructions that should be considered words.
It is probably the first analysis of Arabic parsing of this kind.
0
              ATB      CTB6     Negra    WSJ
Trees         23449    28278    20602    43948
Word Types    40972    45245    51272    46348
Tokens        738654   782541   355096   1046829
Tags          32       34       499      45
Phrasal Cats  22       26       325      27
Test OOV      16.8%    22.2%    30.5%    13.2%
Table 4: Gross statistics for several different treebanks.
Their results show that their high-performance NER uses less training data than other systems.
0
The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.
It is probably the first analysis of Arabic parsing of this kind.
0
We also add an annotation for one-level iDafa (oneLevelIdafa) constructs since they make up more than 75% of the iDafa NPs in the ATB (Gabbard and Kulick, 2008).
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
0
The following auxiliary quantity is defined: Q_{e′}(e, C, j) := probability of the best partial hypothesis (e_1^i, b_1^i), where C = {b_k | k = 1, …, i}, b_i = j, e_i = e, and e_{i−1} = e′.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
Examples are given in Table 4.
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reorderings between source and target language in order to achieve an efficient search algorithm.
0
The sequence of states needed to carry out the word reordering example in Fig.
It is probably the first analysis of Arabic parsing of this kind.
0
Like verbs, maSdar takes arguments and assigns case to its objects, whereas it also demonstrates nominal characteristics by, e.g., taking determiners and heading iDafa (Fassi Fehri, 1993).
They showed that it was useful to abstract away from the details of the formalism and examine the nature of their derivation process as reflected by properties of their trees, and found that several of the formalisms considered can be seen as being closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
The subtree under η is excised from γ, the tree γ′ is inserted in its place, and the excised subtree is inserted below the foot of γ′.
Their results suggested that it was possible to learn accurate POS taggers for languages which had no annotated data but had translations into a resource-rich language.
0
Examining the word fidanzato for the “No LP” and “With LP” models is particularly instructive.
Because many systems performed similarly, the author was not able to draw strong conclusions about the correlation between manual and automatic evaluation metrics.
0
Most of these groups follow a phrase-based statistical approach to machine translation.
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
Smith estimates Lotus will make profit this quarter…”.
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
Using a treebank grammar, a data-driven lexicon, and a linguistically motivated unknown-tokens handling technique our model outperforms previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
(If the TF/IDF score of that word is below a threshold, the phrase is discarded.)
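A toy version of this filter, with an assumed threshold and the standard TF/IDF formula (the paper's exact scoring details are not reproduced here):

```python
import math

def tfidf(tf, df, num_docs):
    """Term frequency times inverse document frequency."""
    return tf * math.log(num_docs / (1 + df))

def keep_phrase(word, tf, df, num_docs, threshold=2.0):
    """Discard the phrase when the word's TF/IDF falls below the threshold."""
    return tfidf(tf, df, num_docs) >= threshold

print(keep_phrase("lotus", tf=3, df=5, num_docs=1000))  # True (score ≈ 15.4)
```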
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
0
In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.
This paper presents Unsupervised Models for Named Entity Classification.
0
It's not clear how to apply these methods in the unsupervised case, as they required cross-validation techniques: for this reason we use the simpler smoothing method shown here. The input to the unsupervised algorithm is an initial, "seed" set of rules.
Manually scoring translations on a graded scale from 1 to 5 seemed to be very hard to perform.
0
We are therefore applying a different method, which has been used at the 2005 DARPA/NIST evaluation.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
We therefore also normalized judgements on a per-sentence basis.
This paper presents KenLM: Faster and Smaller Language Model Queries.
0
Therefore, a populated probing hash table consists of an array of buckets that contain either one entry or are empty.
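A toy linear-probing lookup over such a bucket array, as a Python sketch of the idea rather than KenLM's C++ implementation (it assumes the table is never completely full):

```python
EMPTY = None

def probe_find(buckets, key):
    """Scan from the hashed position until the key or an empty bucket."""
    n = len(buckets)
    i = hash(key) % n
    while buckets[i] is not EMPTY:
        stored_key, value = buckets[i]
        if stored_key == key:
            return value
        i = (i + 1) % n          # linear probing: try the next bucket
    return None                  # empty bucket reached: key is absent

buckets = [EMPTY] * 8
buckets[hash("trigram") % 8] = ("trigram", 0.25)
print(probe_find(buckets, "trigram"))  # 0.25
```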
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
Otherwise, label the training data with the combined spelling/contextual decision list, then induce a final decision list from the labeled examples where all rules (regardless of strength) are added to the decision list.
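The final induction step might look like the following sketch; the rule representation and helper names are assumptions for illustration, not Collins and Singer's code:

```python
def apply_decision_list(rules, example_features):
    """Label an example with the first (highest-ranked) matching rule."""
    for feature, label, _strength in rules:
        if feature in example_features:
            return label
    return None

def induce_final_list(combined_rules, unlabeled):
    labeled = [(x, apply_decision_list(combined_rules, x)) for x in unlabeled]
    # Add every rule seen in the labeled data, regardless of strength.
    final = {(f, y) for x, y in labeled if y is not None for f in x}
    return sorted(final)

rules = [("contains=Mr.", "person", 0.99)]
print(induce_final_list(rules, [{"contains=Mr.", "cap=True"}]))
```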
The corpus was annotated with different kinds of linguistic information.
0
The knowledge base then can be tested for its relation-inference capabilities on the basis of full-blown co-reference information.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
(2010), we adopt a simpler naïve Bayes strategy, where all features are emitted independently.
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC-7 participants.
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be.
0
The general-language features have a slight advantage over the similarity features, and both are better than the SVM feature.
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech.
0
However, if we consider precision, recall and F-measure on non-projective dependencies only, as shown in Table 6, some differences begin to emerge.
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be.
0
However, for multinomial models like our LMs and TMs, there is a one-to-one correspondence between instances and features, e.g. the correspondence between a phrase pair (s, t) and its conditional multinomial probability p(s|t).
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
0
Due to the dramatic fiscal situation in Brandenburg she now surprisingly withdrew legislation drafted more than a year ago, and suggested to decide on it not before 2003.
They showed that it was useful to abstract away from the details of the formalism and examine the nature of their derivation process as reflected by properties of their trees, and found that several of the formalisms considered can be seen as being closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
A derived structure will be mapped onto a sequence z_i of substrings (not necessarily contiguous in the input), and the composition operations will be mapped onto functions that can be defined as follows: f((x_1, …, x_{n_1}), (y_1, …, y_{n_2})) = (z_1, …, z_{n_3}), where each z_i is the concatenation of strings from the x_j's and y_k's.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
more frequently than is done in English.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
Let H be the set of hanzi, p be the set of pinyin syllables with tone marks, and P be the set of grammatical part-of-speech labels.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
But we will show that the use of unlabeled data can drastically reduce the need for supervision.
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
0
Step 2.
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
0
The first row represents the average accuracy of the three parsers we combine.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
(2006).
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
In contrast, NNP (proper nouns) form a large portion of vocabulary.
Here both parametric and non-parametric models are explored.
0
The first two rows of the table are baselines.
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
0
2.4 Underspecified rhetorical structure.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
For each language and setting, we report one-to-one (1-1) and many-to-one (m-1) accuracies.
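Many-to-one accuracy is conventionally computed by mapping each induced cluster to its most frequent gold tag and scoring; one-to-one instead constrains the mapping to be bijective. A sketch of the m-1 variant:

```python
from collections import Counter

def many_to_one_accuracy(induced, gold):
    """Map each induced cluster to its most frequent gold tag, then score."""
    pair_counts = Counter(zip(induced, gold))
    best = {}
    for (cluster, tag), count in pair_counts.items():
        if count > best.get(cluster, (None, -1))[1]:
            best[cluster] = (tag, count)
    correct = sum(g == best[c][0] for c, g in zip(induced, gold))
    return correct / len(gold)

print(many_to_one_accuracy([0, 0, 1, 1], ["N", "N", "V", "N"]))  # 0.75
```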
This paper presents Unsupervised Models for Named Entity Classification.
0
AdaBoost.MH can be applied to the problem using these pseudolabels in place of supervised examples.
Because many systems performed similarly, the author was not able to draw strong conclusions about the correlation between manual and automatic evaluation metrics.
0
There is twice as much language modelling data, since training data for the machine translation system is filtered against sentences of length larger than 40 words.
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective tree banks is often neglected because of the relative scarcity of problematic constructions.
0
As long as the main evaluation metric is dependency accuracy per word, with state-of-the-art accuracy mostly below 90%, the penalty for not handling non-projective constructions is almost negligible.
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
Lexicon Feature of Previous and Next Token: The strings of the previous token and the next token are used together with the initCaps information of the current token. If the current token has initCaps, then a feature (initCaps, token string) is set to 1.
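A hedged sketch of this feature group; the exact feature naming in the paper differs, and the helpers below are illustrative:

```python
def init_caps(token):
    """True when the token starts with a capital letter."""
    return token[:1].isupper()

def lexicon_features(prev_tok, cur_tok, next_tok):
    """Pair neighboring token strings with the current token's initCaps."""
    feats = {}
    if init_caps(cur_tok):
        feats[("initCaps", "prev=" + prev_tok.lower())] = 1
        feats[("initCaps", "next=" + next_tok.lower())] = 1
    return feats

print(lexicon_features("the", "White", "House"))
```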
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.
0
As Figure 1 shows, this word has no high-confidence alignment in the Italian-English bitext.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
We can use the preceding linguistic and annotation insights to build a manually annotated Arabic grammar in the manner of Klein and Manning (2003).
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
Word frequencies are estimated by a re-estimation procedure that involves applying the segmentation algorithm presented here to a corpus of 20 million words, using … (Footnote 8: Our training corpus was drawn from a larger corpus of mixed-genre text consisting mostly of …)
Combining multiple highly-accurate independent parsers yields promising results.
0
We used section 23 as the development set for our combining techniques, and section 22 only for final testing.
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
However, using the top-level semantic classes of WordNet proved to be problematic because the class distinctions are too coarse.
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
0
For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A_begin set to 1, Communications has A_continue set to 1, Commission has A_end set to 1, and FCC has A_unique set to 1.
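The assignment of these acronym features can be sketched as follows; the matching logic is a simplification, and the function is hypothetical rather than the system's own:

```python
def acronym_features(tokens, acronym, expansion):
    """Mark expansion words with A_begin/A_continue/A_end, acronym with A_unique."""
    feats = {t: {} for t in set(tokens)}
    if acronym in tokens and all(w in tokens for w in expansion):
        feats[expansion[0]]["A_begin"] = 1
        for w in expansion[1:-1]:
            feats[w]["A_continue"] = 1
        feats[expansion[-1]]["A_end"] = 1
        feats[acronym]["A_unique"] = 1
    return feats

doc = ["FCC", "ruled", "Federal", "Communications", "Commission"]
f = acronym_features(doc, "FCC", ["Federal", "Communications", "Commission"])
print(f["FCC"], f["Communications"])  # {'A_unique': 1} {'A_continue': 1}
```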
Here both parametric and non-parametric models are explored.
0
The development of a naïve Bayes classifier involves learning how much each parser should be trusted for the decisions it makes.
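One way to read this: estimate from held-out data how often each parser's constituent decisions are correct, and use those estimates as trust parameters. The sketch below is a simplified stand-in for the full naïve Bayes model, which also conditions on context:

```python
from collections import defaultdict

def train_trust(decisions):
    """decisions: list of (parser_id, was_correct) pairs from held-out data."""
    hits, total = defaultdict(int), defaultdict(int)
    for parser, correct in decisions:
        total[parser] += 1
        hits[parser] += int(correct)
    return {p: hits[p] / total[p] for p in total}

trust = train_trust([("A", True), ("A", True), ("B", False), ("B", True)])
print(trust)  # {'A': 1.0, 'B': 0.5}
```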
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
On the one hand, the type-level error rate is not calibrated for the number of n-grams in the sample.
In this paper, Das and Petrov approached the induction of unsupervised part-of-speech taggers for languages that had no labeled training data but had translated text in a resource-rich language.
0
Although the tag distributions of the foreign words (Eq.
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
For brevity, we omit the target words e, e′ in the formulation of the search hypotheses.
There are clustering approaches that assign a single POS tag to each word type.
0
We tokenize MWUs and their POS tags; this reduces the tag set size to 12.
This paper presents KenLM: Faster and Smaller Language Model Queries.
0
Part of the gap between resident and virtual memory is due to the time at which data was collected.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
The use of weighted transducers in particular has the attractive property that the model, as it stands, can be straightforwardly interfaced to other modules of a larger speech or natural language system: presumably one does not want to segment Chinese text for its own sake but instead with a larger purpose in mind.
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
When a comparison against previous results requires additional pre-processing, we state it explicitly to allow for the reader to replicate the reported results.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
Mutual information was shown to be useful in the segmentation task given that one does not have a dictionary.
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
The use of global features has shown excellent results in the performance on MUC-6 and MUC-7 test data.
0
To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.
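The constraint amounts to a hard 0/1 transition model; any inadmissible class pair zeroes out the whole sequence. A tiny sketch with a made-up admissibility table:

```python
# Hypothetical table of admissible word-class transitions.
ADMISSIBLE = {("ADJ", "NOUN"), ("NOUN", "VERB"), ("VERB", "NOUN")}

def transition_prob(prev_class, next_class):
    """1 if the class sequence is admissible, 0 otherwise."""
    return 1.0 if (prev_class, next_class) in ADMISSIBLE else 0.0

def sequence_prob(classes):
    p = 1.0
    for a, b in zip(classes, classes[1:]):
        p *= transition_prob(a, b)   # one inadmissible pair zeroes it out
    return p

print(sequence_prob(["ADJ", "NOUN", "VERB"]))  # 1.0
print(sequence_prob(["VERB", "ADJ"]))          # 0.0: ruled out
```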
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems.
0
To lower the barrier of entrance to the competition, we provided a complete baseline MT system, along with data resources.
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
0
This feature ft incorporates information from the smoothed graph and prunes hidden states that are inconsistent with the thresholded vector tx.
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
Models: To assess the marginal utility of each component of the model (see Section 3), we incrementally increase its sophistication.
The features were weighted within a logistic model that gave an overall weight for each phrase pair; this weight was applied when computing MAP-smoothed relative-frequency estimates, which were then combined linearly with relative-frequency estimates from an in-domain phrase table.
0
The weight on each sentence is a value in [0, 1] computed by a perceptron with Boolean features that indicate collection and genre membership.
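To keep the output in [0, 1], some squashing of the perceptron score is required; the logistic function below is an assumption for the sketch, as are the feature names and weights:

```python
import math

def sentence_weight(firing_features, weights, bias=0.0):
    """Boolean features: sum the weights of those that fire, then squash."""
    score = bias + sum(weights.get(f, 0.0) for f in firing_features)
    return 1.0 / (1.0 + math.exp(-score))   # map the score into [0, 1]

w = {"collection=newswire": 1.2, "genre=speech": -0.7}
print(sentence_weight({"collection=newswire"}, w))  # ≈ 0.77
```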