source_text (string, lengths 27–368)
label (int64, values 0 and 1)
target_text (string, lengths 1–5.38k)
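The rows that follow flatten each record onto three consecutive lines: a source_text sentence, a bare 0/1 label, then a target_text sentence. A minimal sketch for regrouping such a dump into records (the function name and the strict three-line layout are assumptions based on the listing below, not part of the source):

```python
def parse_rows(lines):
    """Regroup a flattened dump into (source_text, label, target_text) triples.

    Assumes each record occupies exactly three lines: the source sentence,
    a bare integer label (0 or 1), then the target sentence.
    """
    if len(lines) % 3 != 0:
        raise ValueError("expected complete three-line records")
    return [
        (lines[i], int(lines[i + 1]), lines[i + 2])
        for i in range(0, len(lines), 3)
    ]
```

For example, `parse_rows(["A.", "0", "B."])` yields the single triple `("A.", 0, "B.")`.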
The instance-weighting approach improved over a wide range of baselines, yielding gains of over 2 BLEU points over the best non-adapted baseline.
0
We used it to score all phrase pairs in the OUT table, in order to provide a feature for the instance-weighting model.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
(2009).
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
TAG's can be used to give the structural descriptions discussed by Gazdar (1985) for the unbounded nested dependencies in Norwedish, for cross serial dependencies in Dutch subordinate clauses, and for the nestings of paired English complementizers.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
For instance, for TTS it is necessary to know that a particular sequence of hanzi is of a particular category because that knowledge could affect the pronunciation; consider, for example, the issues surrounding the pronunciation of ganl I qian2 discussed in Section 1.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
AdaBoost.MH maintains a distribution over instances and labels; in addition, each weak-hypothesis outputs a confidence vector with one confidence value for each possible label.
This paper presents research in the area of automatic paraphrase discovery.
0
All the contexts collected for a given domain are gathered in a bag and the TF/ITF scores are calculated for all the words except stopwords in the bag.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement.
1
The average agreement among the human judges is .76, and the average agreement between ST and the humans is .75, or about 99% of the interhuman agreement. One can better visualize the precision-recall similarity matrix by producing from that matrix a distance matrix, computing a classical metric multidimensional scaling (Torgerson 1958; Becker, Chambers, Wilks 1988) on that distance matrix, and plotting the first two most significant dimensions.
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
0
The hypothesized parse is then the set of constituents that are likely (P > 0.5) to be in the parse according to this model.
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
0
To initialize the graph we tag the English side of the parallel text using a supervised model.
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures.
0
The baseline simply retains the original labels for all arcs, regardless of whether they have been lifted or not, and the number of distinct labels is therefore simply the number n of distinct dependency types. In the first encoding scheme, called Head, we use a new label d↑h for each lifted arc, where d is the dependency relation between the syntactic head and the dependent in the non-projective representation, and h is the dependency relation that the syntactic head has to its own head in the underlying structure.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement.
0
Table 4 Differences in performance between our system and Wang, Li, and Chang (1992).
This paper talks about Unsupervised Models for Named Entity Classification.
0
For the moment we will assume that there are only two possible labels: each y_i is in {−1, +1}.
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
0
It is possible one could produce better models by introducing features describing constituents and their contexts because one parser could be much better than the majority of the others in particular situations.
There are clustering approaches that assign a single POS tag to each word type.
0
Table 6: Type-level Results: Each cell reports the type-level accuracy computed against the most frequent tag of each word type.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
If one is interested in translation, one would probably want to consider "show up" as a single dictionary word since its semantic interpretation is not trivially derivable from the meanings of "show" and "up".
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
The second row represents the performance of the median hyperparameter setting.
The "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Märkische Allgemeine Zeitung, a German regional daily.
0
There are still some open issues to be resolved with the format, but it represents a first step.
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
0
Figure 1 depicts the lattice for a two-word sentence bclm hneim.
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
Benchmarks use the package’s binary format; our code is also the fastest at building a binary file.
They have made use of local and global features to deal with the instances of the same token in a document.
0
This group of features attempts to capture such information.
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
We train and test on the CoNLL-X training set.
There are clustering approaches that assign a single POS tag to each word type.
0
2 62.6 45.
The use of global features has shown excellent results in the performance on MUC-6 and MUC-7 test data.
0
Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999).
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
In various dialects of Mandarin certain phonetic rules apply at the word level.
They showed that it was useful to abstract away from the details of the formalism and examine the nature of the derivation process as reflected by properties of the derivation trees, finding that several of the formalisms considered can be seen as closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
Thus, the language generated by a grammar of a LCFRS is semilinear.
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
3.1 General Knowledge Sources.
There is no global pruning.
0
To formalize the approach, we introduce four verbgroup states S: Initial (I): A contiguous, initial block of source positions is covered.
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
0
The final score is obtained from $\max_{e,e';\, j \in \{J-L,\ldots,J\}} p(\$ \mid e, e')\, Q_{e'}(e, I, \{1,\ldots,J\}, j)$, where $p(\$ \mid e, e')$ denotes the trigram language model, which predicts the sentence boundary symbol $ at the end of the target sentence.
They focused on phrases which connect two Named Entities, and proceed in two stages.
0
For example, from the sentence “Mr.
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation.
0
We retain segmentation markers—which are consistent only in the vocalized section of the treebank—to differentiate between e.g. � “they” and � + “their.” Because we use the vocalized section, we must remove null pronoun markers.
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
This paper presents a maximum entropy-based named entity recognizer (NER).
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
A novel element of our model is the ability to capture type-level tag frequencies.
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
0
The dev and test sets were randomly chosen from the EMEA corpus.
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with those of state-of-the-art standalone applications.
0
Segments with the same surface form but different PoS tags are treated as different lexemes, and are represented as separate arcs (e.g. the two arcs labeled neim from node 6 to 7).
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
0
Here, $ is the sentence boundary symbol, which is thought to be at position 0 in the target sentence.
Most IE researchers have been creating paraphrase knowledge by hand and for specific tasks.
0
Our clue is the NE instance pairs.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
It then computes a normalized Levenshtein edit distance between the extracted chain and the reference.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
With a good hash function, collisions of the full 64bit hash are exceedingly rare: one in 266 billion queries for our baseline model will falsely find a key not present.
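The stated collision rate follows from simple arithmetic: with full 64-bit hashes, the chance that a query for an absent key matches some stored hash is roughly (number of keys) / 2^64. A minimal sketch of that calculation (the key count below is hypothetical, chosen only to illustrate the scale; it is not taken from the source):

```python
def false_positive_rate(num_keys: int, hash_bits: int = 64) -> float:
    """Probability that a random absent key collides with some stored hash."""
    return num_keys / 2.0 ** hash_bits

# Hypothetical model with ~69 million n-grams (illustrative only).
rate = false_positive_rate(num_keys=69_000_000)
print(f"about 1 in {1 / rate:,.0f} queries")
```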
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.
0
There are clearly eight orthographic words in the example given, but if one were doing syntactic analysis one would probably want to consider I'm to consist of two syntactic words, namely I and am.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
Table columns: Affix, Pron, Base category, N found, N missed (recall), N correct (precision). The second issue is that rare family names can be responsible for overgeneration, especially if these names are otherwise common as single-hanzi words.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
computing the precision of the other's judgments relative to this standard.
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
Next, we describe four contextual role knowledge sources that are created from the training examples and the caseframes.
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table.
0
A similar maximum-likelihood approach was used by Foster and Kuhn (2007), but for language models only.
The corpus was annotated with different linguistic information.
0
Then, moving from connective to connective, ConAno sometimes offers suggestions for its scope (using heuristics like ‘for subjunctor, mark all words up to the next comma as the first segment’), which the annotator can accept with a mouse click or overwrite, marking instead the correct scope with the mouse.
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not.
0
First, we aim to explicitly characterize examples from OUT as belonging to general language or not.
Here we present two algorithms.
0
(We would like to note though that unlike previous boosting algorithms, the CoBoost algorithm presented here is not a boosting algorithm under Valiant's (Valiant 84) Probably Approximately Correct (PAC) model.)
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
In this section, we describe how contextual role knowledge is represented and learned.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
segmentation (Table 2).
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
However, we have reason to doubt Chang et al.'s performance claims.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
Unsupervised Learning of Contextual Role Knowledge for Coreference Resolution
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
In particular, the decision to represent arguments in verb-initial clauses as VP internal makes VSO and VOS configurations difficult to distinguish.
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
This family represents an attempt to generalize the properties shared by CFG's, HG's, TAG's, and MCTAG's.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
An input ABCD can be represented as an FSA as shown in Figure 2(b).
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
0
Table 6: Example Translations for the Verbmobil task.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
All experiments use ATB parts 1–3 divided according to the canonical split suggested by Chiang et al.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
Since foreign names can be of any length, and since their original pronunciation is effectively unlimited, the identification of such names is tricky.
This assumption, however, is not inherent to type-based tagging models.
0
The second model (+PRIOR) utilizes the independent prior over type-level tag assignments P (T |ψ).
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
Each trie node is individually allocated and full 64-bit pointers are used to find them, wasting memory.
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be.
0
Thus, provided at least this amount of IN data is available—as it is in our setting—adapting these weights is straightforward.
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages.
0
Since our graph is built from a parallel corpus, we can use standard word alignment techniques to align the English sentences De and their foreign language translations Df. (Note that many combinations are impossible, giving a PMI value of 0; e.g., when the trigram type and the feature instantiation don’t have words in common.) Label propagation in the graph will provide coverage and high recall, and we therefore extract only intersected high-confidence (> 0.9) alignments between De and Df.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
Further, we report current resident memory and peak virtual memory because these are the most applicable statistics provided by the kernel.
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.
0
The first is an evaluation of the system's ability to mimic humans at the task of segmenting text into word-sized units; the second evaluates the proper-name identification; the third measures the performance on morphological analysis.
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
evaluated to account for the same fraction of the data.
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.
0
Somewhat surprisingly, there do not appear to be large systematic differences between linear and MAP combinations.
This assumption, however, is not inherent to type-based tagging models.
0
The observed performance gains, coupled with the simplicity of model implementation, makes it a compelling alternative to existing more complex counterparts.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
A simple lexicalized PCFG with second order Markovization gives relatively poor performance: 75.95% F1 on the test set. But this figure is surprisingly competitive with a recent state-of-the-art baseline (Table 7).
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
0
This locally normalized log-linear model can look at various aspects of the observation x, incorporating overlapping features of the observation.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
As an indication, in our core corpus, we found an average sentence length of 15.8 words and 1.8 verbs per sentence, whereas a randomly taken sample of ten commentaries from the national papers Süddeutsche Zeitung and Frankfurter Allgemeine has 19.6 words and 2.1 verbs per sentence.
They have made use of local and global features to deal with the instances of the same token in a document.
0
On the other hand, if it is seen as McCann Pte.
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
0
This PP modifies another NP, whose head is a singular noun.
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
0
Systran submitted their commercial rule-based system that was not tuned to the Europarl corpus.
Due to the number of similarly performing systems, the authors were not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
The test data was again drawn from a segment of the Europarl corpus from the fourth quarter of 2000, which is excluded from the training data.
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.
0
For statistics on this test set, refer to Figure 1.
The instance-weighting approach improved over a wide range of baselines, yielding gains of over 2 BLEU points over the best non-adapted baseline.
0
For developers of Statistical Machine Translation (SMT) systems, an additional complication is the heterogeneous nature of SMT components (word-alignment model, language model, translation model, etc.).
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
Virginia) and -sia are normally transliterated as fbSi!
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
(b) supports candidate if selected semantic tags match those of the anaphor. Lexical computes degree of lexical overlap between the candidate and the anaphor. Recency computes the relative distance between the candidate and the anaphor. SynRole computes relative frequency with which the candidate's syntactic role occurs in resolutions. Figure 4: General Knowledge Sources. The Lexical KS returns 1 if the candidate and anaphor are identical, 0.5 if their head nouns match, and 0 otherwise.
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with those of state-of-the-art standalone applications.
0
A Single Generative Model for Joint Morphological Segmentation and Syntactic Parsing
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
0
When this metric is less than 0.5, we expect to incur more errors than we will remove by adding those constituents to the parse.
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back.
0
This is the first time that we organized a large-scale manual evaluation.
The approach has been successfully tested on the 8 000-word Verbmobil task.
0
10.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
We make the assumption that for each example, both x1,i and x2,i alone are sufficient to determine the label yi.
There is no global pruning.
0
Fall.
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
Statistics are printed before Moses exits and after parts of the decoder have been destroyed.
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not.
0
Clearly, retaining the original frequencies is important for good performance, and globally smoothing the final weighted frequencies is crucial.
In this paper, the authors are of the opinion that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
0
Our empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
Our assumption is that caseframes that co-occur in resolutions often have a [footnote 2: This normalization is performed syntactically without semantics, so the agent and patient roles are not guaranteed to hold, but they usually do in practice.]
They focused on phrases which connect two Named Entities, and proceed in two stages.
0
So, it is too costly to make IE technology “open-domain” or “on-demand” like IR or QA.
Their results show that their high performance NER uses less training data than other systems.
0
The zone to which a token belongs is used as a feature.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
Various segmentation approaches were then compared with human performance: 1.
This paper presents a maximum entropy-based named entity recognizer (NER).
0
These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
The advantage is that we can recombine search hypotheses by dynamic programming.
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with those of state-of-the-art standalone applications.
0
From now on all lattice arcs are tagged segments and the assignment of probability P(p → (s, p)) to lattice arcs proceeds as usual. A rather pathological case is when our lexical heuristics prune away all segmentation possibilities and we remain with an empty lattice.
The instance-weighting approach improved over a wide range of baselines, yielding gains of over 2 BLEU points over the best non-adapted baseline.
0
At first glance, this seems only peripherally related to our work, since the specific/general distinction is made for features rather than instances.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
The model described here thus demonstrates great potential for use in widespread applications.
Most IE researchers have been creating paraphrase knowledge by hand and for specific tasks.
0
However, those methods need initial seeds, so the relation between entities has to be known in advance.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
Note that on some examples (around 2% of the test set) CoBoost abstained altogether; in these cases we labeled the test example with the baseline, organization, label.
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
A Person-Prefix-List is compiled in an analogous way.
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
0
Each lattice arc corresponds to a segment and its corresponding PoS tag, and a path through the lattice corresponds to a specific morphological segmentation of the utterance.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
Other packages walk their respective data structures once to find $w_f^n$ and again to find $\{b(w_i^{n-1})\}_{i=1}^{f-1}$ if necessary.
This paper talks about Pseudo-Projective Dependency Parsing.
0
More details on the memory-based prediction can be found in Nivre et al. (2004) and Nivre and Scholz (2004).