Schema: source_text (string, lengths 27–368); label (int64, values 0 or 1); target_text (string, lengths 1–5.38k)
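For orientation, here is a minimal sketch of loading and inspecting a dataset with this schema. The file name "train.csv" and the use of the HuggingFace `datasets` CSV loader are assumptions; only the three columns and their types come from the listing above.

```python
# Minimal sketch: inspect a dataset with columns source_text / label / target_text.
# "train.csv" is a hypothetical file name for the rows shown below.
from datasets import load_dataset

ds = load_dataset("csv", data_files={"train": "train.csv"})["train"]
example = ds[0]
print(example["source_text"])  # string, 27-368 characters
print(example["label"])        # int64, 0 or 1
print(example["target_text"])  # string, 1 to ~5,380 characters
```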
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique, their model outperformed previous pipelined, integrated, or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
0
This analyzer setting is similar to that of Cohen and Smith (2007), and models using it are denoted nohsp. Parser and Grammar: We used BitPar (Schmid, 2004), an efficient general-purpose parser, together with various treebank grammars to parse the input sentences and propose compatible morphological segmentation and syntactic analysis.
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
The complexity of the algorithm is O(E^3 · J^2 · 2^J), where E is the size of the target language vocabulary.
Here both parametric and non-parametric models are explored.
0
Another technique for parse hybridization is to use a naïve Bayes classifier to determine which constituents to include in the parse.
There are clustering approaches that assign a single POS tag to each word type.
0
[table fragment: 3 61.7 38]
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
0
({1, …, m} \ {l1, l2, l3}, m). German to English: the monotonicity constraint is violated mainly with respect to the German verbgroup.
BABAR performed well in both the terrorism and natural disaster domains, and contextual-role knowledge proved particularly useful for resolving pronouns.
0
m(S) represents the belief that the correct hypothesis is included in S. The model assumes that evidence also arrives as a probability density function (pdf) over sets of hypotheses. Integrating new evidence into the existing model is therefore simply a matter of defining a function to merge pdfs, one representing the current belief system and one representing the beliefs of the new evidence.
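The merge step described here can be sketched with Dempster's rule of combination, consistent with the Dempster-Shafer framework this model uses. A minimal sketch, assuming belief functions are stored as dicts mapping frozensets of hypothesis names to masses; the hypothesis names are illustrative:

```python
from itertools import product

def combine(m1, m2):
    # Dempster's rule: merge two belief functions, each a dict mapping a
    # frozenset of hypotheses S to its mass m(S). Mass falling on
    # contradictory (disjoint) pairs is discarded and the rest renormalized.
    merged, conflict = {}, 0.0
    for (s1, v1), (s2, v2) in product(m1.items(), m2.items()):
        inter = s1 & s2
        if inter:
            merged[inter] = merged.get(inter, 0.0) + v1 * v2
        else:
            conflict += v1 * v2
    return {s: v / (1.0 - conflict) for s, v in merged.items()}

# Current beliefs vs. new evidence over two candidate antecedents A and B.
current  = {frozenset({"A", "B"}): 0.6, frozenset({"A"}): 0.4}
evidence = {frozenset({"A"}): 0.7, frozenset({"B"}): 0.3}
print(combine(current, evidence))  # mass shifts strongly toward {A}
```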
Most IE researchers have been creating paraphrase knowledge by hand and for specific tasks.
0
“Smith estimates Lotus will make a profit this quarter…”.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the inter-human agreement.
0
For a sequence of hanzi that is a possible name, we wish to assign a probability to that sequence qua name.
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
0
The maximum precision row is the upper bound on accuracy if we could pick exactly the correct constituents from among the constituents suggested by the three parsers.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
Finally, this effort is part of a much larger program that we are undertaking to develop stochastic finite-state methods for text analysis with applications to TTS and other areas; in the final section of this paper we will briefly discuss this larger program so as to situate the work discussed here in a broader context.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
Since foreign names can be of any length, and since their original pronunciation is effectively unlimited, the identification of such names is tricky.
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
Consequently, all three parsers prefer the nominal reading.
The AdaBoost algorithm was developed for supervised learning.
0
The input to AdaBoost is a set of training examples ((x1, y1), …, (xm, ym)).
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
Cohen and Smith (2007) followed up on these results and proposed a system for joint inference of morphological and syntactic structures using factored models, each designed and trained on its own.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
Typically, judges initially spent about 3 minutes per sentence, but then accelerated with experience.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
Our TRIE implements the popular reverse trie, in which the last word of an n-gram is looked up first, as do SRILM, IRSTLM’s inverted variant, and BerkeleyLM except for the scrolling variant.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, and proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
The two knowledge sources that use semantic expectations, WordSemCFSem and CFSemCFSem, always return values of -1 or 0.
This paper presents research in the area of automatic paraphrase discovery.
0
It is natural that the larger the data in the domain, the more keywords are found.
This paper talks about Unsupervised Models for Named Entity Classification.
0
Schapire and Singer show that the training error is bounded above by ∏_t Z_t. Thus, in order to greedily minimize an upper bound on training error, on each iteration we should search for the weak hypothesis h_t and the weight α_t that minimize Z_t.
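The bound elided above is the standard Schapire-Singer result; restated in conventional AdaBoost notation (D_t is the distribution over the m training examples at round t, H the final hypothesis):

```latex
\frac{1}{m}\left|\{\, i : H(x_i) \neq y_i \,\}\right| \;\le\; \prod_{t=1}^{T} Z_t,
\qquad
Z_t = \sum_{i=1}^{m} D_t(i)\, e^{-\alpha_t\, y_i\, h_t(x_i)}.
```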
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
It is a sequence of proper nouns within an NP; its last word Cooper is the head of the NP; and the NP has an appositive modifier (a vice president at S.&P.) whose head is a singular noun (president).
These clusters are computed using an SVD variant without relying on transitional structure.
0
The use of ILP in learning the desired grammar significantly increases the computational complexity of this method.
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
Memory mapping also allows the same model to be shared across processes on the same machine.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
[table fragment: 76 16]
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
To formalize the approach, we introduce four verbgroup states S: Initial (I): A contiguous, initial block of source positions is covered.
The corpus was annotated with different linguistic information.
0
That is, we can use the discourse parser on PCC texts, emulating for instance a “co-reference oracle” that adds the information from our co-reference annotations.
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
We generate these caseframes automatically by running AutoSlog over the training corpus exhaustively so that it literally generates a pattern to extract every noun phrase in the corpus.
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
0
The number of judgements is additionally fragmented by our breakup of sentences into in-domain and out-of-domain.
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform.
0
This is less than the 694 judgements in the 2004 DARPA/NIST evaluation, or the 532 judgements in the 2005 DARPA/NIST evaluation.
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRSs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
A configuration of M consists of a state of the finite control, the nonblank contents of the input tape and k work tapes, and the position of each head.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
Unfortunately, there is no standard corpus of Chinese texts, tagged with either single or multiple human judgments, with which one can compare performance of various methods.
There are clustering approaches that assign a single POS tag to each word type.
0
Evaluation Metrics: We report three metrics to evaluate tagging performance.
The texts were annotated with the RSTtool.
0
Hartmann (1984), for example, used the term Reliefgebung to characterize the distribution of main and minor information in texts (similar to the notion of nuclearity in RST).
This paper presents research in the area of automatic paraphrase discovery.
0
Evaluation of links A link between two sets is considered correct if the majority of phrases in both sets have the same meaning, i.e. if the link indicates paraphrase.
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
Another thread of relevant research has explored the use of features in unsupervised POS induction (Smith and Eisner, 2005; Berg-Kirkpatrick et al., 2010; Hasan and Ng, 2009).
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
AdaBoost was first introduced in (Freund and Schapire 97); (Schapire and Singer 98) gave a generalization of AdaBoost which we will use in this paper.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, and proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
Consequently, we cut their evidence values in half to lessen their influence.
Here we show how non-projective dependency parsing can be achieved by combining a data driven projective parser with special graph transformation techniques.
0
In addition, there are several approaches to non-projective dependency parsing that are still to be evaluated in the large (Covington, 1990; Kahane et al., 1998; Duchier and Debusmann, 2001; Holan et al., 2001; Hellwig, 2003).
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
[table fragment: 30 75]
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
0
For all these annotation tasks, Götze developed a series of questions (essentially a decision tree) designed to lead the annotator to the appropriate judgement.
BABAR performed well in both the terrorism and natural disaster domains, and contextual-role knowledge proved particularly useful for resolving pronouns.
0
In previous work (Bean and Riloff, 1999), we developed an unsupervised learning algorithm that automatically recognizes definite NPs that are existential without syntactic modification because their meaning is universally understood.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
The first point we need to address is what type of linguistic object a hanzi represents.
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
The TRIE model continues to use the least memory, even when loading (-P) with MAP_POPULATE, the default.
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique, their model outperformed previous pipelined, integrated, or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
0
Since the lattice L for a given sentence W is determined by the morphological analyzer M, we have [...], which is precisely the formula corresponding to the so-called lattice parsing familiar from speech recognition.
They plan on extending instance-weighting to other standard SMT components and capturing the degree of generality of phrase pairs.
0
We do not adapt the alignment procedure for generating the phrase table from which the TM distributions are derived.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
Using cn to denote the number of n-grams, total memory consumption of TRIE, in bits, is [...], plus quantization tables, if used.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, and proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
Formally, the Dempster-Shafer theory defines a probability density function m(S), where S is a set of hypotheses.
The corpus was annotated with different linguistic information.
0
Two annotators received training with the RST definitions and started the process with a first set of 10 texts, the results of which were intensively discussed and revised.
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
We have used the Java-based opennlp maximum entropy package.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
[Figure residue: a segmentation lattice showing hanzi with pinyin transcriptions (yu2, zen3, mo0, shuo1), part-of-speech labels (nc, adv, vb), and path costs (e.g., 10.03).]
They plan on extending instance-weighting to other standard SMT components and capturing the degree of generality of phrase pairs.
0
This variant is tested in the experiments below.
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.
0
Compared to the EMEA/EP setting, the two domains in the NIST setting are less homogeneous and more similar to each other; there is also considerably more IN text available.
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language.
0
To this end, we construct a bilingual graph over word types to establish a connection between the two languages (§3), and then use graph label propagation to project syntactic information from English to the foreign language (§4).
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
0
Figure 2 shows examples of extracted NE pair instances and their contexts.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
This Good-Turing estimate of p(unseen(f_n) | f_n) can then be used in the normal way to define the probability of finding a novel instance of a construction in f_n in a text: p(unseen(f_n)) = p(unseen(f_n) | f_n) · p(f_n). Here p(f_n) is just the probability of any construction in f_n as estimated from the frequency of such constructions in the corpus.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
First of all, most previous articles report performance in terms of a single percent-correct score, or else in terms of the paired measures of precision and recall.
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
For 2 < n < N, we use a hash table mapping from the n-gram to the probability and backoff.
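A minimal sketch of the query pattern this implies: each stored n-gram maps to a (log probability, backoff weight) pair, and a query for an unseen n-gram retries successively shorter contexts while accumulating backoff weights. The plain dict below is an illustrative stand-in, not KenLM's actual hash table:

```python
def score(table, words):
    # Backoff-smoothed log10 probability of words[-1] given the preceding
    # words. `table` maps tuples of words to (logprob, backoff) pairs.
    backoff_sum = 0.0
    for start in range(len(words)):
        hit = table.get(tuple(words[start:]))
        if hit is not None:
            return hit[0] + backoff_sum
        # Miss: charge the backoff weight of the context being truncated.
        context = table.get(tuple(words[start:-1]))
        if context is not None:
            backoff_sum += context[1]
    return float("-inf")  # the word itself is out of vocabulary

table = {
    ("the",): (-1.0, -0.3),
    ("dog",): (-2.5, 0.0),
    ("the", "cat"): (-0.5, 0.0),
}
print(score(table, ["the", "cat"]))  # -0.5: bigram stored directly
print(score(table, ["the", "dog"]))  # -2.8: backoff("the") + p("dog")
```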
It is probably the first analysis of Arabic parsing of this kind.
0
markContainsVerb is especially effective for distinguishing root S nodes of equational sentences.
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
Ignoring the identity of the target language words e and e0, the possible partial hypothesis extensions due to the IBM restrictions are shown in Table 2.
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures.
0
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective treebanks is often neglected because of the relative scarcity of problematic constructions.
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
Their work used subject-verb, verb-object, and adjective-noun relations to compare the contexts surrounding an anaphor and candidate.
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
2.
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
Particles are uninflected.
Here both parametric and non-parametric models are explored.
0
We pick the parse that is most similar to the other parses by choosing the one with the highest sum of pairwise similarities.
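A minimal sketch of this selection rule (parser switching), assuming each parse has already been reduced to a set of (label, start, end) constituents and using shared-constituent count as the pairwise similarity, which is an illustrative choice:

```python
def pick_most_similar(parses):
    # Return the parse whose summed similarity to all other parses is
    # highest; similarity = number of constituents two parses share.
    def sim(a, b):
        return len(a & b)
    totals = [sum(sim(p, q) for q in parses if q is not p) for p in parses]
    return parses[totals.index(max(totals))]

p1 = {("NP", 0, 2), ("VP", 2, 5), ("S", 0, 5)}
p2 = {("NP", 0, 2), ("VP", 2, 5), ("S", 0, 5)}
p3 = {("NP", 0, 1), ("VP", 1, 5), ("S", 0, 5)}
print(pick_most_similar([p1, p2, p3]))  # p1: it agrees with the majority
```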
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
[Affiliation footer residue: 600 Mountain Avenue, 2c278, Murray Hill, NJ 07974, USA.]
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers.
0
In each figure the upper graph shows the isolated constituent precision and the bottom graph shows the corresponding number of hypothesized constituents.
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
0
Each parse is converted into a set of constituents represented as tuples: (label, start, end).
Combining multiple highly-accurate independent parsers yields promising results.
0
We call such a constituent an isolated constituent.
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
0
7).
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
If these sets do not overlap, then the words cannot be coreferent.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
The algorithm in Fig.
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation.
0
more frequently than is done in English.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
The test data was again drawn from a segment of the Europarl corpus from the fourth quarter of 2000, which is excluded from the training data.
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRSs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
Gazdar (1985) considers a restriction of IG's in which no more than one nonterminal on the right-hand-side of a production can inherit the stack from the left-hand-side.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
The numbers falling into the location, person, organization categories were 186, 289 and 402 respectively.
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
However, lexically similar NPs usually refer to the same entity in two cases: proper names and existential noun phrases.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
Acknowledgments We thank Steven Bethard, Evan Rosen, and Karen Shiells for material contributions to this work.
It is probably the first analysis of Arabic parsing of this kind.
0
Particles are uninflected.
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
(2010)’s richest model: optimized via either EM or LBFGS, as their relative performance depends on the language.
The texts were annotated with the RSTtool.
0
As already pointed out in Section 2.4, current theories diverge not only on the number and definition of relations but also on aspects of structure, i.e., whether a tree is sufficient as a representational device or general graphs are required (and if so, whether any restrictions can be placed on these graphs' structures — cf.
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
Ex: Mr. Bush disclosed the policy by reading it...
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
[Table residue: parsing scores for Baseline (Self tag), Bikel (v1.2), Baseline (Pretag), and Gold POS settings, at sentence length ≤ 70 and over all lengths; scores roughly in the 0.75–0.81 range.]
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
A similar structure is used in speech recognition.
This paper presents a maximum entropy-based named entity recognizer (NER).
0
Each name class is subdivided into 4 sub-classes, i.e., N_begin, N_continue, N_end, and N_unique.
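For illustration, a three-token person name and a single-token person name would be tagged as follows under this scheme (the example sentence, and the O tag for tokens outside any name class, are assumptions):

```python
# Hypothetical sentence: "John Philip Smith met Cher ."
tagged = [
    ("John",   "PER_begin"),     # first token of a multi-token name
    ("Philip", "PER_continue"),  # interior token
    ("Smith",  "PER_end"),       # final token
    ("met",    "O"),             # outside any name class (assumed tag)
    ("Cher",   "PER_unique"),    # one-token name
    (".",      "O"),
]
```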
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
Previous reports on Chinese segmentation have invariably cited performance either in terms of a single percent-correct score, or else a single precision-recall pair.
This corpus has several advantages: it is annotated at different levels.
0
There is a ‘core corpus’ of ten commentaries, for which the range of information (except for syntax) has been completed; the remaining data has been annotated to different degrees, as explained below.
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
1
Our work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
It is probably the first analysis of Arabic parsing of this kind.
0
This feature includes named entities, which the ATB marks with a flat NP node dominating an arbitrary number of NNP pre-terminal daughters (Figure 2).
Their results show that their high-performance NER system uses less training data than other systems.
0
For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own.
They have made use of local and global features to deal with instances of the same token in a document.
0
In addition, each feature function is a binary function.
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
0
A different notion of information structure is used in work such as that of (?), who tried to characterize felicitous constituent ordering (theme choice, in particular) that leads to texts presenting information in a natural, “flowing” way rather than with abrupt shifts of attention.
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
0
This scenario is applicable to a large set of languages and has been considered by a number of authors in the past (Alshawi et al., 2000; Xi and Hwa, 2005; Ganchev et al., 2009).
They focused on phrases which connect two Named Entities, and proceed in two stages.
0
Another approach to finding paraphrases is to find phrases which take similar subjects and objects in large corpora by using mutual information of word distribution [Lin and Pantel 01].
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
2 for the accuracy of the different methods.
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
BABAR employs information extraction techniques to represent and learn role relationships.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
[table fragment: 68 96]
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages.
0
In general, the neighborhoods can be more diverse and we allow a soft label distribution over the vertices.
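A minimal sketch of a generic soft label-propagation update of the kind described: seed vertices keep fixed label distributions while every other vertex repeatedly absorbs the renormalized sum of its neighbors' distributions. This is a textbook update, not the paper's exact graph objective:

```python
import numpy as np

def propagate(W, seed_labels, iters=20):
    # W: symmetric (n x n) adjacency/similarity matrix.
    # seed_labels: (n x k); nonzero rows are fixed soft distributions,
    # zero rows are unlabeled vertices whose labels we infer.
    q = seed_labels.astype(float).copy()
    seeds = q.sum(axis=1) > 0
    for _ in range(iters):
        q = W @ q                      # absorb neighbors' distributions
        q[seeds] = seed_labels[seeds]  # clamp the seed vertices
        norms = q.sum(axis=1, keepdims=True)
        norms[norms == 0] = 1.0
        q = q / norms                  # renormalize rows to distributions
    return q

W = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
seed_labels = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0]])  # e.g. NOUN vs. VERB
print(propagate(W, seed_labels))  # the middle vertex converges to [0.5, 0.5]
```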
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.
0
Figure 1 shows sample sentences from these domains, which are widely divergent.
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation.
0
The number of ATB n-grams also falls below the WSJ sample size as the largest WSJ sample appeared in only 162 corpus positions.
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform.
0
For more on the participating systems, please refer to the respective system description in the proceedings of the workshop.