source_text: string (lengths 27–368) · label: int64 (0 or 1) · target_text: string (lengths 1–5.38k)
Using a treebank grammar, a data-driven lexicon, and a linguistically motivated unknown-token handling technique, their model outperformed previous pipelined, integrated, or factorized systems for Hebrew morphological and syntactic processing, yielding a 12% error reduction over the best previously published results.
0
Given a PCFG grammar G and a lattice L with nodes n1 ... nk, we construct the weighted grammar GL as follows: for every arc (lexeme) l ∈ L from node ni to node nj, we add to GL the rule [l → tni, tni+1, ..., tnj−1] with a probability of 1 (this indicates that the lexeme l spans from node ni to node nj).
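A minimal sketch of this construction, assuming a simple arc representation (the Arc type and the t-symbol naming below are illustrative, not from the paper):

```python
# Sketch of the lattice-to-grammar construction described above: an arc
# (lexeme) spanning nodes n_i .. n_j becomes a rule rewriting the lexeme
# to the terminal symbols t_{n_i} .. t_{n_j-1}, with probability 1.
from typing import List, NamedTuple, Tuple

class Arc(NamedTuple):          # illustrative representation of a lattice arc
    lexeme: str
    start: int                  # node n_i
    end: int                    # node n_j (end > start)

def lattice_rules(arcs: List[Arc]) -> List[Tuple[str, List[str], float]]:
    rules = []
    for arc in arcs:
        terminals = [f"t{n}" for n in range(arc.start, arc.end)]
        rules.append((arc.lexeme, terminals, 1.0))   # probability 1
    return rules

# An arc spanning nodes 1..3 yields the rule lexeme -> t1 t2.
print(lattice_rules([Arc("bcl", 1, 3)]))
```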
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
When finished, the whole material is written into an XML-structured annotation file.
The AdaBoost algorithm was developed for supervised learning.
0
For example, a good classifier would identify Mrs. Frank as a person, Steptoe & Johnson as a company, and Honduras as a location.
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
Our coreference resolver also incorporates an existential noun phrase recognizer and a Dempster-Shafer probabilistic model to make resolution decisions.
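The excerpt does not spell out the combination rule, so the sketch below shows the standard Dempster rule of combination that a Dempster-Shafer model rests on, with illustrative mass functions; it is not BABAR's exact formulation.

```python
# Generic Dempster's rule of combination over mass functions keyed by
# frozenset hypotheses. Frames and masses here are illustrative only.
from itertools import product

def combine(m1: dict, m2: dict) -> dict:
    combined, conflict = {}, 0.0
    for (a, p), (b, q) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + p * q
        else:
            conflict += p * q  # mass assigned to contradictory evidence
    # Dempster normalization by the non-conflicting mass.
    return {h: m / (1.0 - conflict) for h, m in combined.items()}

# Two pieces of evidence over candidate antecedents {A, B}.
m1 = {frozenset({"A"}): 0.6, frozenset({"A", "B"}): 0.4}
m2 = {frozenset({"B"}): 0.3, frozenset({"A", "B"}): 0.7}
print(combine(m1, m2))
```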
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
...determined by the category of the word that follows it.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
The human judges were presented with the following definition of adequacy and fluency, but no additional instructions:
This paper talks about Pseudo-Projective Dependency Parsing.
0
The entire treebank is used in the experiment, but only primary dependencies are considered.[4] In all experiments, punctuation tokens are included in the data but omitted in evaluation scores.
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
Finally, we would like to note that it is possible to devise similar algorithms based on objective functions other than the one given in Equ.
They focused on phrases which connect two Named Entities, and proceeded in two stages.
0
In this paper, we will propose an unsupervised method to discover paraphrases from a large untagged corpus.
Here both parametric and non-parametric models are explored.
0
These three parsers have given the best reported parsing results on the Penn Treebank Wall Street Journal corpus (Marcus et al., 1993).
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
0
For instance, for out-of-domain English-French, Systran has the best BLEU and manual scores.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
All notions of word, with the exception of the orthographic word, are as relevant in Chinese as they are in English, and just as is the case in other languages, a word in Chinese may correspond to one or more symbols in the orthography. (For a related approach to the problem of word segmentation in Japanese, see Nagata (1994), inter alia.)
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
0
We used a standard one-pass phrase-based system (Koehn et al., 2003), with the following features: relative-frequency TM probabilities in both directions; a 4-gram LM with Kneser-Ney smoothing; word-displacement distortion model; and word count.
The authors show that PATB is similar to other treebanks but that annotation consistency remains low.
0
Like verbs, maSdar takes arguments and assigns case to its objects, whereas it also demonstrates nominal characteristics by, e.g., taking determiners and heading iDafa (Fassi Fehri, 1993).
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the inter-human agreement.
0
Mohri (1995) shows promise for improving this situation.
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
Its correct antecedent is “a revolver”, which is extracted by the caseframe “killed with <NP>”.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
Specifically, we assume each word type W consists of feature-value pairs (f, v).
It is annotated with several kinds of data: morphology, syntax, rhetorical structure, connectors, coreference and information structure.
0
When finished, the whole material is written into an XML-structured annotation file.
They have made use of local and global features to deal with instances of the same token in a document.
0
Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence-based classifier.
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
0
4.1 The Task and the Corpus.
They have made use of local and global features to deal with instances of the same token in a document.
0
Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).
The instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
0
However, we note that the final conditional estimates p(s|t) from a given phrase table maximize the likelihood of joint empirical phrase pair counts over a word-aligned corpus.
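For concreteness, here is a small sketch of that relative-frequency estimate, p(s|t) = c(s, t) / Σ_s' c(s', t), over illustrative joint counts:

```python
# Conditional phrase probabilities p(s|t) from joint phrase-pair counts
# c(s, t); these maximize the likelihood of the joint empirical counts.
from collections import defaultdict

def conditional_estimates(joint_counts: dict) -> dict:
    """joint_counts maps (s, t) -> c(s, t); returns (s, t) -> p(s|t)."""
    totals = defaultdict(float)
    for (s, t), c in joint_counts.items():
        totals[t] += c
    return {(s, t): c / totals[t] for (s, t), c in joint_counts.items()}

counts = {("maison", "house"): 8.0, ("domicile", "house"): 2.0}
print(conditional_estimates(counts))  # p(maison|house)=0.8, p(domicile|house)=0.2
```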
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
For example, a person’s full name will match with just their last name (e.g., “George Bush” and “Bush”), and a company name will match with and without a corporate suffix (e.g., “IBM Corp.” and “IBM”).
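A minimal sketch of these two matching heuristics; the suffix list and helper names are illustrative, not the resolver's actual implementation:

```python
# Alias matching: a full personal name matches its last name, and a company
# name matches with or without a corporate suffix. Suffix list is illustrative.
CORPORATE_SUFFIXES = {"corp.", "corp", "inc.", "inc", "co.", "co", "ltd.", "ltd"}

def strip_suffix(name: str) -> str:
    tokens = name.lower().split()
    if tokens and tokens[-1] in CORPORATE_SUFFIXES:
        tokens = tokens[:-1]
    return " ".join(tokens)

def is_alias(short: str, full: str) -> bool:
    # Last-name match: "Bush" vs "George Bush".
    if full.lower().split()[-1] == short.lower():
        return True
    # Corporate-suffix match: "IBM" vs "IBM Corp."
    return strip_suffix(short) == strip_suffix(full)

print(is_alias("Bush", "George Bush"))  # True
print(is_alias("IBM", "IBM Corp."))     # True
```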
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
We thank Felix Hageloh (Hageloh, 2006) for providing us with this version, proposed in (Tsarfaty, 2006).
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
It can be shown that the path set of the tree set generated by a TAG G is a context-free language.
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
0
This work was funded by NSF grant IRI-9502312.
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform.
0
Not every annotator was fluent in both the source and the target language.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
Ablation Analysis We evaluate the impact of incorporating various linguistic features into our model in Table 3.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
In (Reitter & Stede, 2003) we went a different way and suggested URML, an XML format for underspecifying rhetorical structure: a number of relations can be assigned instead of a single one, and competing analyses can be represented with shared forests.
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
0
The latter arcs correspond to OOV words in English.
This topic has been getting more attention, driven by the needs of various NLP applications.
0
[Hasegawa et al. 04] reported only on relation discovery, but one could easily acquire paraphrases from the results.
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
Given the limited number of judgements we received, we did not try to evaluate this.
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
In addition we report for each model its performance on gold-segmented input (GS) to indicate the upper bound for the grammars' performance on the parsing task. (Overt definiteness errors may be seen as a wrong feature rather than as a wrong constituent, and it is by now an accepted standard to report accuracy with and without such errors.)
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
For example, in Northern dialects (such as Beijing), a full tone (1, 2, 3, or 4) is changed to a neutral tone (0) in the final syllable of many words: dong1gua1 'winter melon' is often pronounced dong1gua0.
This paper talks about Unsupervised Models for Named Entity Classification.
0
2.
It is annotated with several kinds of data: morphology, syntax, rhetorical structure, connectors, coreference and information structure.
0
For developing these mechanisms, the possibility to feed in hand-annotated information is very useful.
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.
0
This actually happens quite frequently (more below), so that the rankings are broad estimates.
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures.
0
Pseudo-Projective Dependency Parsing
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
0
The judgement of 4 in the first case will go to a vastly better system output than in the second case.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model; they proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
Ex: The brigade, which attacked ...
There are clustering approaches that assign a single POS tag to each word type.
0
Our full model yields 39.3% average error reduction across languages when compared to the basic configuration (1TW).
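As a reminder of how such a figure is computed, error reduction is relative to the baseline's error rate; the accuracy numbers below are made up for illustration.

```python
# Relative error reduction against a baseline: (e_base - e_model) / e_base.
def error_reduction(baseline_acc: float, model_acc: float) -> float:
    base_err, model_err = 1.0 - baseline_acc, 1.0 - model_acc
    return (base_err - model_err) / base_err

# e.g. baseline 60% accuracy, full model 75.72% accuracy -> ~0.393 (39.3%)
print(error_reduction(0.60, 0.7572))
```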
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
(a) zhe4 pi1 ma3 lu4shang4 bing4 le0 (this CL(assifier) horse way-on sick ASP(ect)) 'This horse got sick on the way'. (b) zhe4 tiao2 ma3lu4 hen3 shao3 che1 jing1guo4 (this CL road very few car pass-by) 'Very few cars pass by this road'.
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech.
0
If (wi, r, wj) ∈ A, we say that wi is the head of wj and wj a dependent of wi.
This paper conducted research in the area of automatic paraphrase discovery.
0
This limits the number of NE category pairs to 2,000 and the number of NE pair instances to 0.63 million.
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be.
0
For comparison to information-retrieval inspired baselines, e.g. (Lü et al., 2007), we select sentences from OUT using language model perplexities from IN.
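A hedged sketch of that selection step, assuming a hypothetical `in_domain_lm` object with a `perplexity()` method standing in for a real LM toolkit binding:

```python
# Select OUT-domain sentences that look most like the IN domain, ranked by
# in-domain language model perplexity (lower = closer to IN).
def select_by_perplexity(out_sentences, in_domain_lm, n_keep):
    ranked = sorted(out_sentences, key=in_domain_lm.perplexity)
    return ranked[:n_keep]

class ToyLM:
    # Placeholder scorer; a real LM would return actual perplexities.
    def perplexity(self, sentence: str) -> float:
        return float(len(sentence.split()))

print(select_by_perplexity(["a b c", "a"], ToyLM(), 1))
```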
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation.
0
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
Judges varied in the average score they handed out.
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
Examples will usually be accompanied by a translation, plus a morpheme-by-morpheme gloss given in parentheses whenever the translation does not adequately serve this purpose.
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
The human judges were presented with the following definition of adequacy and fluency, but no additional instructions:
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
0
For statistics on this test set, refer to Figure 1.
A beam search concept is applied as in speech recognition.
0
We use a solution to this problem similar to the one presented in (Och et al., 1999), where target words are joined during training.
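A hedged sketch of that word-joining preprocessing: when one source word aligns to several consecutive target words, merge them into a single token during training. The alignment format and the "_" joiner are illustrative; (Och et al., 1999) describe the original technique.

```python
# Join consecutive target words that align to a single source word.
def join_target_words(target_tokens, alignment):
    """alignment: list of sets; alignment[i] = target positions for source i."""
    joined, consumed = [], set()
    for positions in alignment:
        span = sorted(positions)
        if len(span) > 1 and span == list(range(span[0], span[-1] + 1)):
            joined.append("_".join(target_tokens[j] for j in span))
            consumed.update(span)
        else:
            for j in span:
                if j not in consumed:
                    joined.append(target_tokens[j])
                    consumed.add(j)
    return joined

# One source word aligned to "do not" yields the training unit "do_not".
print(join_target_words(["I", "do", "not", "know"], [{0}, {1, 2}, {3}]))
```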
They found replacing it with a ranked evaluation to be more suitable.
0
Confidence Interval: To estimate confidence intervals for the average mean scores for the systems, we use standard significance testing.
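The excerpt only says "standard significance testing" without naming the method; one common instantiation for confidence intervals on mean scores is bootstrap resampling, sketched here with illustrative judgement scores.

```python
# Bootstrap confidence interval for a system's mean judgement score.
import random

def bootstrap_ci(scores, n_resamples=1000, alpha=0.05, seed=0):
    rng = random.Random(seed)
    means = sorted(
        sum(rng.choices(scores, k=len(scores))) / len(scores)
        for _ in range(n_resamples)
    )
    lo = means[int((alpha / 2) * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

print(bootstrap_ci([3, 4, 4, 2, 5, 3, 4]))
```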
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
Out of those 15 links, 4 are errors, namely “buy - pay”, “acquire - pay”, “purchase - stake” “acquisition - stake”.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
Queries detect the invalid probability, using the node only if it leads to a longer match.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
The token-level term is similar to the standard HMM sampling equations found in Johnson (2007).
It is probably the first analysis of Arabic parsing of this kind.
0
Diacritics can also be used to specify grammatical relations such as case and gender.
Two general approaches are presented and two combination techniques are described for each approach.
1
Two general approaches are presented and two combination techniques are described for each approach.
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
0
More judgements would have enabled us to make better distinctions, but it is not clear what the upper limit is.
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
We present two algorithms.
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be.
0
Even when there is training data available in the domain of interest, there is often additional data from other domains that could in principle be used to improve performance.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
Finally, we would like to note that it is possible to devise similar algorithms based on objective functions other than the one given in Equ.
This paper presents a maximum entropy-based named entity recognizer (NER).
0
For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp., and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. For a token that is in a consecutive sequence of initCaps tokens followed by a word in the Corporate-Suffix-List, the feature Corporate-Suffix is set to 1.
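A sketch of that feature, using the suffix list from the excerpt; the initCaps test and token handling are simplified for illustration:

```python
# Set Corporate-Suffix to 1 if a token sits in a consecutive initCaps
# sequence that contains a word from the corporate-suffix list.
CORP_SUFFIX = {"ltd.", "associates", "inc.", "co", "corp", "ltd", "inc",
               "committee", "institute", "commission", "university", "plc",
               "airlines", "co.", "corp."}

def is_init_caps(tok: str) -> bool:
    return tok[:1].isupper()

def corporate_suffix_feature(tokens, i) -> int:
    j = i
    while j < len(tokens) and is_init_caps(tokens[j]):
        if tokens[j].lower() in CORP_SUFFIX:
            return 1
        j += 1
    return 0

toks = "shares of Steptoe Corp. fell".split()
print(corporate_suffix_feature(toks, 2))  # 1: "Steptoe" is followed by "Corp."
```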
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
The first value reports resident size after loading; the second is the gap between post-loading resident memory and peak virtual memory.
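A hedged sketch of taking such measurements on Linux via /proc/self/status (VmRSS for resident size, VmPeak for peak virtual memory); this is not the paper's actual benchmark harness.

```python
# Read resident and peak-virtual memory for the current process (Linux only).
def vm_fields():
    fields = {}
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith(("VmRSS:", "VmPeak:")):
                key, value = line.split(":", 1)
                fields[key] = int(value.strip().split()[0])  # value in kB
    return fields

stats = vm_fields()
print("resident kB:", stats["VmRSS"],
      "peak-minus-resident kB:", stats["VmPeak"] - stats["VmRSS"])
```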
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
0
Search method | CPU time [sec] | mWER [%] | SSER [%]
MonS | 0.9 | 42.0 | 30.5
QmS | 10.6 | 34.4 | 23.8
IbmS | 28.6 | 38.2 | 26.2
4.2 Performance Measures.
This topic has been getting more attention, driven by the needs of various NLP applications.
0
In IE, creating the patterns which express the requested scenario, e.g. “management succession” or “corporate merger and acquisition” is regarded as the hardest task.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
.
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation.
0
But foreign learners are often surprised by the verbless predications that are frequently used in Arabic.
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
Kollege.
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
0
About half of the participants of last year’s shared task participated again.
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
More recently, (Riloff and Jones 99) describe a method they term "mutual bootstrapping" for simultaneously constructing a lexicon and contextual extraction patterns.
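A compact sketch of the mutual bootstrapping loop: patterns are scored by their overlap with the current lexicon, and the best pattern's extractions feed the lexicon back. Scoring and data structures are simplified relative to Riloff and Jones (1999).

```python
# Alternately grow a lexicon and a set of extraction patterns.
def mutual_bootstrap(seed_lexicon, patterns, extractions, iterations=5):
    """extractions: pattern -> set of phrases that pattern extracts."""
    lexicon, chosen = set(seed_lexicon), []
    for _ in range(iterations):
        # Score each unused pattern by overlap with the current lexicon.
        best = max((p for p in patterns if p not in chosen),
                   key=lambda p: len(extractions[p] & lexicon),
                   default=None)
        if best is None:
            break
        chosen.append(best)
        lexicon |= extractions[best]  # add its extractions to the lexicon
    return lexicon, chosen

ex = {"offices in <X>": {"london", "tokyo"}, "traveled to <X>": {"tokyo", "peru"}}
print(mutual_bootstrap({"london"}, list(ex), ex))
```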
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
In (b) is a plausible segmentation for this sentence; in (c) is an implausible segmentation.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
We report results for the best and median hyperparameter settings obtained in this way.
There are clustering approaches that assign a single POS tag to each word type.
0
However, in existing systems, this expansion comes with a steep increase in model complexity.
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
However, it is desirable if we can separate them.
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
Tsarfaty (2006) used a morphological analyzer (Segal, 2000), a PoS tagger (Bar-Haim et al., 2005), and a general purpose parser (Schmid, 2000) in an integrated framework in which morphological and syntactic components interact to share information, leading to improved performance on the joint task.
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back.
0
Microsoft’s approach uses dependency trees; others use hierarchical phrase models.
Here we present two algorithms.
0
To measure the contribution of each modification, a third, intermediate algorithm, Yarowsky-cautious was also tested.
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be.
0
Maximizing (7), where co(s, t) are the counts from OUT as in (6), is thus much faster than a typical MERT run.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model; they proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
For natural disasters, BABAR generated 20,479 resolutions: 11,652 from lexical seeding and 8,827 from syntactic seeding.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
To this end, we picked 100 sentences at random containing 4,372 total hanzi from a test corpus. (There were 487 marks of punctuation in the test sentences, including the sentence-final periods, meaning that the average inter-punctuation distance was about 9 hanzi.)
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages.
0
The “No LP” model does not outperform direct projection for German and Greek, but performs better for six out of eight languages.
In this paper, the authors are of the opinion that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
0
[Results table excerpt: +FEATS, best and median settings]
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
13.
The authors show that PATB is similar to other treebanks but that annotation consistency remains low.
0
The distinctions in the ATB are linguistically justified, but complicate parsing.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
A non-optimal analysis is shown with dotted lines in the bottom frame.
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be.
0
We used a standard one-pass phrase-based system (Koehn et al., 2003), with the following features: relative-frequency TM probabilities in both directions; a 4-gram LM with Kneser-Ney smoothing; word-displacement distortion model; and word count.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
The first probability is estimated from a name count in a text database, and the rest of the probabilities are estimated from a large list of personal names. Note that in Chang et al.'s model the p(rule 9) is estimated as the product of the probability of finding G1 in the first position of a two-hanzi given name and the probability of finding G2 in the second position of a two-hanzi given name, and we use essentially the same estimate here, with some modifications as described later on.
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
So, we set a threshold that at least two examples are required to build a link.
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
This is akin to PoS tag sequences induced by different parses in the setup familiar from English and explored in e.g.
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
We have modified Moses (Koehn et al., 2007) to keep our state with hypotheses; to conserve memory, phrases do not keep state.
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
Memory usage in PROBING is high, though SRILM is even larger, so where memory is of concern we recommend using TRIE, if it fits in memory.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
Our code has been publicly available and integrated into Moses since October 2010.
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
In addition to the tapes required to store the indices, M requires one work tape for splitting the substrings.
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform.
0
One may argue with these efforts on normalization, and ultimately their value should be assessed by assessing their impact on inter-annotator agreement.
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).
It is probably the first analysis of Arabic parsing of this kind.
0
Aside from adding a simple rule to correct alif deletion caused by the preposition J, no other language-specific processing is performed.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
The resulting model is compact, efficiently learnable and linguistically expressive.
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers.
0
F-measure is the harmonic mean of precision and recall, 2PR/(P + R).
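A direct transcription of that definition, with a small worked example:

```python
# F-measure: the harmonic mean of precision and recall, 2PR/(P + R).
def f_measure(p: float, r: float) -> float:
    return 2 * p * r / (p + r)

print(f_measure(0.90, 0.80))  # ~0.847
```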
A beam search concept is applied as in speech recognition.
0
For our demonstration system, we typically use the pruning threshold t0 = 5.0 to speed up the search by a factor of 5 while allowing for a small degradation in translation accuracy.
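A sketch of threshold pruning in that style: hypotheses scoring more than t0 worse than the current best are dropped. The hypothesis representation is illustrative.

```python
# Threshold-based beam pruning: keep hypotheses within t0 of the best score.
def prune_beam(hypotheses, t0=5.0):
    """hypotheses: list of (score, hyp) pairs, lower scores better."""
    if not hypotheses:
        return []
    best = min(score for score, _ in hypotheses)
    return [(s, h) for s, h in hypotheses if s <= best + t0]

print(prune_beam([(10.0, "a"), (14.9, "b"), (15.1, "c")]))  # drops "c"
```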