source_text (string, lengths 27–368)
label (int64, values 0 / 1)
target_text (string, lengths 1–5.38k)
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
0
We are therefore applying a different method, which has been used at the 2005 DARPA/NIST evaluation.
Combining multiple highly-accurate independent parsers yields promising results.
0
Again we notice that the isolated constituent precision is larger than 0.5 only in those partitions that contain very few samples.
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
0
A less canonical representation of segmental morphology is triggered by a morpho-phonological process of omitting the definite article h when occurring after the particles b or l. This process triggers ambiguity as to the definiteness status of nouns following these particles. We refer to such cases, in which the concatenation of elements does not strictly correspond to the original surface form, as super-segmental morphology.
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
0
While many systems had similar performance, the results offer interesting insights, especially about the relative performance of statistical and rule-based systems.
These clusters are computed using an SVD variant without relying on transitional structure.
0
Conditioned on T , features of word types W are drawn.
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.
0
The 3rd block contains the mixture baselines.
Replacing this with a ranked evaluation seems to be more suitable.
0
For instance, in the recent IWSLT evaluation, first fluency annotations were solicited (while withholding the source sentence), and then adequacy annotations.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
Floating point values may be stored in the trie exactly, using 31 bits for non-positive log probability and 32 bits for backoff.
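The 31-bit trick can be sketched as follows: since a log probability is never positive, the IEEE-754 sign bit is always set for negative values and can be dropped on storage and restored on lookup (a minimal illustration of the idea, not KenLM's actual trie layout; the function names are hypothetical):

```python
import struct

def pack_logprob(p):
    """Drop the sign bit of a non-positive float, leaving 31 bits."""
    bits = struct.unpack("<I", struct.pack("<f", p))[0]
    return bits & 0x7FFFFFFF  # sign bit (always 1 for negative p) removed

def unpack_logprob(bits31):
    """Restore the sign bit and reinterpret the bits as a float."""
    return struct.unpack("<f", struct.pack("<I", bits31 | 0x80000000))[0]

print(unpack_logprob(pack_logprob(-1.5)))  # -1.5
```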
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
A greedy algorithm (or maximum-matching algorithm), GR: proceed through the sentence, taking the longest match with a dictionary entry at each point.
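The maximum-matching procedure described above can be sketched as follows (a minimal sketch using a hypothetical toy dictionary; unmatched characters fall back to single-character words):

```python
def greedy_segment(sentence, dictionary):
    """Greedy (maximum-matching) segmentation: at each position take
    the longest dictionary match; fall back to a single character."""
    words, i = [], 0
    while i < len(sentence):
        for j in range(len(sentence), i, -1):  # longest candidate first
            if sentence[i:j] in dictionary or j == i + 1:
                words.append(sentence[i:j])
                i = j
                break
    return words

print(greedy_segment("abcde", {"ab", "abc", "de"}))  # ['abc', 'de']
```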
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
Comparison with state-of-the-art taggers For comparison we consider two unsupervised taggers: the HMM with log-linear features of Berg-Kirkpatrick et al.
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures.
0
To explore this tradeoff, we have performed experiments with three different encoding schemes (plus a baseline), which are described schematically in Table 1.
They have made use of local and global features to deal with instances of the same token in a document.
0
In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995). Machine learning systems in MUC6 and MUC7 achieved accuracy comparable to rule-based systems on the named entity task.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
Clearly the percentage of productively formed words is quite small (for this particular corpus), meaning that dictionary entries are covering most of the cases.
This paper conducted research in the area of automatic paraphrase discovery.
0
For example, the phrase “'s New York-based trust unit,” is not a paraphrase of the other phrases in the “unit” set.
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
0
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.
0
paper, and is missing 6 examples from the A set.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
Uses lossy compression. The 8-bit quantized variant returned incorrect probabilities as explained in Section 3.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
This feature has a linguistic justification.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
(Riloff and Jones 99) was brought to our attention as we were preparing the final version of this paper.
Replacing this with a ranked evaluation seems to be more suitable.
0
We settled on contrastive evaluations of 5 system outputs for a single test sentence.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
The Recency KS computes the distance between the candidate and the anaphor relative to its scope.
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
0
Systems that generally do better than others will receive a positive average normalized judgement per sentence.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
To quantize, we use the binning method (Federico and Bertoldi, 2006) that sorts values, divides into equally sized bins, and averages within each bin.
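The binning method can be sketched as follows (a minimal illustration with hypothetical values; the implementation of Federico and Bertoldi may differ in detail):

```python
def bin_quantize(values, n_bins):
    """Sort values, split into (near-)equally sized bins, and replace
    each value by the average of its bin."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    bin_size = -(-len(values) // n_bins)  # ceiling division
    quantized = [0.0] * len(values)
    for start in range(0, len(values), bin_size):
        chunk = order[start:start + bin_size]
        avg = sum(values[i] for i in chunk) / len(chunk)
        for i in chunk:
            quantized[i] = avg
    return quantized

print(bin_quantize([4.0, 1.0, 3.0, 2.0], 2))  # [3.5, 1.5, 3.5, 1.5]
```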
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
For 2 < n < N, we use a hash table mapping from the n-gram to the probability and backoff.
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRSs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
By sharing stacks (in IG's) or by using nonlinear equations over f-structures (in FUG's and LFG's), structures with unbounded dependencies between paths can be generated.
This assumption, however, is not inherent to type-based tagging models.
0
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
Alternatively, h can be thought of as defining a decision list of rules x → y ranked by their "strength" h(x, y).
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
The record for wn1 stores the offset at which its extensions begin.
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order on syntactic disambiguation.
0
Presence of the determiner Al.
This topic has been getting more attention, driven by the needs of various NLP applications.
0
The frequency of the Company – Company domain ranks 11th with 35,567 examples.
The AdaBoost algorithm was developed for supervised learning.
0
We first define "pseudo-labels" ỹ_i as follows: ỹ_i = y_i for 1 ≤ i ≤ m, and ỹ_i = sign(g2(x2,i)) for m < i ≤ n. Thus the first m labels are simply copied from the labeled examples, while the remaining (n − m) examples are taken as the current output of the second classifier.
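The pseudo-label construction can be sketched in code (a minimal sketch; the second classifier g2 and the feature values below are hypothetical):

```python
def pseudo_labels(y_labeled, g2, x2):
    """Copy the m gold labels, then label the remaining examples
    with the sign of the second classifier's current output g2."""
    m = len(y_labeled)
    sign = lambda v: 1 if v >= 0 else -1
    return list(y_labeled) + [sign(g2(x)) for x in x2[m:]]

# m = 2 labeled examples; the rest get g2's current predictions:
print(pseudo_labels([1, -1], lambda x: x - 0.5, [0.0, 0.0, 2.0, 0.2]))
# [1, -1, 1, -1]
```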
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
In the named entity domain these rules were: ... Each of these rules was given a strength of 0.9999.
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
A list of words occurring more than 10 times in the training data is also collected (commonWords).
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
0
Lack of correct reference translations was pointed out as a short-coming of our evaluation.
Here we present two algorithms.
0
88,962 (spelling,context) pairs were extracted as training data.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
However, they list two sets, one consisting of 28 fragments and the other of 22 fragments, in which they had 0% recall and precision.
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
0
We are interested in combining the substructures of the input parses to produce a better parse.
All the texts were annotated by two people.
0
For one thing, it is not clear who is to receive settlements or what should happen in case not enough teachers accept the offer of early retirement.
A beam search concept is applied as in speech recognition.
0
We can do that. IbmS: Yes, wonderful.
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
Any opinions, findings, conclusions, or recommendations expressed in this paper are those of the authors, and do not necessarily reflect the views of the funding organizations.
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement.
0
A high-level relation is agent, which relates an animate nominal to a predicate.
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table.
0
For instance, the sentence Similar improvements in haemoglobin levels were reported in the scientific literature for other epoetins would likely be considered domain-specific despite the presence of general phrases like were reported in.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
BABAR uses two methods to identify anaphors that can be easily and reliably resolved with their antecedent: lexical seeding and syntactic seeding.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
6 Joint Segmentation and Parsing.
There is no global pruning.
0
For the inverted alignment probability p(b_i | b_{i-1}; I; J), we drop the dependence on the target sentence length I. 2.2 Word Joining.
The approach has been successfully tested on the 8 000-word Verbmobil task.
0
E.g. when 'Zahnarzttermin' is aligned to dentist's, the extended lexicon model might learn that 'Zahnarzttermin' actually has to be aligned to both dentist's and appointment.
It also incorporates the Good-Turing methodin estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
However, it is possible to personify any noun, so in children's stories or fables, i¥JJ1l.
This paper talks about Pseudo-Projective Dependency Parsing.
0
As expected, the most informative encoding, Head+Path, gives the highest accuracy with over 99% of all non-projective arcs being recovered correctly in both data sets.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
[Figure 1: Caseframe Network Examples] Figure 1 shows examples of caseframes that co-occur in resolutions, both in the terrorism and natural disaster domains.
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
Frontier nodes are annotated by zero-arity functions corresponding to elementary structures.
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back.
0
For instance: if 10 systems participate, and one system does better than 3 others, worse than 2, and is not significantly different from the remaining 4, its rank is in the interval 3–7.
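The rank interval in this example follows directly from the pairwise significance counts (a small hypothetical helper, not code from the paper):

```python
def rank_interval(n_systems, beats, loses_to):
    """A system that significantly beats `beats` others and significantly
    loses to `loses_to` others can rank anywhere in this interval."""
    return (loses_to + 1, n_systems - beats)

print(rank_interval(10, 3, 2))  # (3, 7), as in the example above
```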
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
Model components cascade, so the row corresponding to +FEATS also includes the PRIOR component (see Section 3).
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
The main disadvantage of manual evaluation is that it is time-consuming and thus too expensive to do frequently.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
We use the log-linear tagger of Toutanova et al.
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
Given a sufficient number of randomly drawn unlabeled examples (i.e., edges), we will induce two completely connected components that together span the entire graph.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
This technique was introduced by Clarkson and Rosenfeld (1997) and is also implemented by IRSTLM and BerkeleyLM’s compressed option.
It is annotated with several kinds of information: morphology, syntax, rhetorical structure, connectors, coreference and information structure.
0
A different notion of information structure is used in work such as that of (?), who tried to characterize felicitous constituent ordering (theme choice, in particular) that leads to texts presenting information in a natural, “flowing” way rather than with abrupt shifts of attention.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
So, this was a surprise element due to practical reasons, not malice.
Because many systems performed similarly, the author was not able to draw strong conclusions on the question of the correlation between manual and automatic evaluation metrics.
0
The statistical systems seem to still lag behind the commercial rule-based competition when translating into morphologically rich languages, as demonstrated by the results for English-German and English-French.
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table.
0
It is not immediately obvious how to formulate an equivalent to equation (1) for an adapted TM, because there is no well-defined objective for learning TMs from parallel corpora.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
Clearly it is possible to write a rule that states that if an analysis Modal+Verb is available, then that is to be preferred over Noun+Verb: such a rule could be stated in terms of (finite-state) local grammars in the sense of Mohri (1993).
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
This class-based model gives reasonable results: for six radical classes, Table 1 gives the estimated cost for an unseen hanzi in the class occurring as the second hanzi in a double GIVEN name.
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Märkische Allgemeine Zeitung, a German regional daily.
0
The portions of information in the large window can be individually clicked visible or invisible; here we have chosen to see (from top to bottom) • the full text, • the annotation values for the activated annotation set (co-reference), • the actual annotation tiers, and • the portion of text currently ‘in focus’ (which also appears underlined in the full text).
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform.
0
If one system is perfect, another has slight flaws and the third more flaws, a judge is inclined to hand out judgements of 5, 4, and 3.
They showed that it was useful to abstract away from the details of the formalism and examine the nature of their derivation process as reflected by properties of their trees, finding that several of the formalisms considered can be seen as being closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
Suppose γ results from the adjunction of γ1, ..., γk at the k distinct tree addresses n1, ..., nk in some elementary tree γ′, respectively.
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.
0
With the additional assumption that (s, t) can be restricted to the support of co(s, t), this is equivalent to a “flat” alternative to (6) in which each non-zero co(s, t) is set to one.
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRSs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
In addition, the restricted version of CG's (discussed in Section 6) generates tree sets with independent paths and we hope that it can be included in a more general definition of LCFRS's containing formalisms whose tree sets have path sets that are themselves LCFRL's (as in the case of the restricted indexed grammars, and the hierarchy defined by Weir).
They showed that it was useful to abstract away from the details of the formalism and examine the nature of their derivation process as reflected by properties of their trees, finding that several of the formalisms considered can be seen as being closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
Roughly speaking, a language, L, has the property of semilinearity if the number of occurrences of each symbol in any string is a linear combination of the occurrences of these symbols in some fixed finite set of strings.
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
Let γ1 be a tree with root and foot labeled by X.
This corpus has several advantages: it is annotated at different levels.
0
All commentaries have been annotated with rhetorical structure, using RSTTool4 and the definitions of discourse relations provided by Rhetorical Structure Theory (Mann, Thompson 1988).
Replacing this with a ranked evaluation seems to be more suitable.
0
Another way to view the judgements is that they are less quality judgements of machine translation systems per se, but rankings of machine translation systems.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
They demonstrated this with the comparison of statistical systems against (a) manually post-edited MT output, and (b) a rule-based commercial system.
Combining multiple highly-accurate independent parsers yields promising results.
0
This is summarized in Equation 5.
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
Each derivation of a grammar can be represented by a generalized context-free derivation tree.
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.
0
In Figure 4, we displayed the number of system comparisons, for which we concluded statistical significance.
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an eÆcient search algorithm.
0
The following two error criteria are used in our experiments: mWER: multi-reference WER: We use the Levenshtein distance between the automatic translation and several reference translations as a measure of the translation errors.
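The mWER computation can be sketched as follows (a minimal sketch with hypothetical strings; here the error count is taken against the closest reference and normalized by that reference's length, which is one common convention):

```python
def edit_distance(hyp, ref):
    """Word-level Levenshtein distance with a rolling DP row."""
    d = list(range(len(ref) + 1))
    for i, h in enumerate(hyp, 1):
        prev, d[0] = d[0], i
        for j, r in enumerate(ref, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (h != r))
    return d[len(ref)]

def mwer(hyp, refs):
    """Multi-reference WER: Levenshtein distance to the closest
    reference translation, normalized by its length."""
    best = min(refs, key=lambda r: edit_distance(hyp.split(), r.split()))
    return edit_distance(hyp.split(), best.split()) / len(best.split())

print(mwer("a b", ["a c", "x y z"]))  # 0.5
```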
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Märkische Allgemeine Zeitung, a German regional daily.
0
Thus it is possible, for illustration, to look for a noun phrase (syntax tier) marked as topic (information structure tier) that is in a bridging relation (co-reference tier) to some other noun phrase.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
It also uses less memory, with 8 bytes of overhead per entry (we store 16-byte entries with m = 1.5); linked list implementations hash_set and unordered_set require at least 8 bytes per entry for pointers.
Their results show that their high performance NER use less training data than other systems.
0
The probability distribution that satisfies the above property is the one with the highest entropy.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
The compressed variant uses block compression and is rather slow as a result.
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
As we will see from Table 3, not much improvement is derived from this feature.
Here we present two algorithms.
0
The algorithm can be viewed as heuristically optimizing an objective function suggested by (Blum and Mitchell 98); empirically it is shown to be quite successful in optimizing this criterion.
They plan on extending instance-weighting to other standard SMT components and capture the degree of generality of phrase pairs.
0
Second, rather than relying on a division of the corpus into manually-assigned portions, we use features intended to capture the usefulness of each phrase pair.
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be.
0
The weight on each sentence is a value in [0, 1] computed by a perceptron with Boolean features that indicate collection and genre membership.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
We present two algorithms.
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
Personal names such as zhou1 en1-lai2 'Zhou Enlai.'
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
Either save money at any cost - or give priority to education.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
Next we show that the ATB is similar to other treebanks in gross statistical terms, but that annotation consistency remains low relative to English (§3).
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
We are especially grateful to Taylor Berg-Kirkpatrick for running additional experiments.
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
While sorted arrays could be used to implement the same data structure as PROBING, effectively making m = 1, we abandoned this implementation because it is slower and larger than a trie implementation.
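The open-addressing idea behind PROBING can be illustrated as follows (a toy sketch of linear probing in general, not KenLM's actual memory layout, hash function, or class names):

```python
class ProbingTable:
    """Toy linear-probing hash table: on collision, step to the next
    bucket until the key or an empty slot is found."""
    def __init__(self, capacity):
        self.slots = [None] * capacity  # keep load factor well below 1

    def _probe(self, key):
        i = hash(key) % len(self.slots)
        while self.slots[i] is not None and self.slots[i][0] != key:
            i = (i + 1) % len(self.slots)  # linear step to next bucket
        return i

    def put(self, key, value):
        self.slots[self._probe(key)] = (key, value)

    def get(self, key):
        entry = self.slots[self._probe(key)]
        return entry[1] if entry is not None else None
```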
The second algorithm builds on a boosting algorithm called AdaBoost.
0
We are currently exploring such algorithms.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
This result suggests that all of the contextual role KSs can provide useful information for resolving anaphora.
This topic has been getting more attention, driven by the needs of various NLP applications.
0
We focus on phrases which connect two Named Entities (NEs), and proceed in two stages.
This assumption, however, is not inherent to type-based tagging models.
0
This line of work has been motivated by empirical findings that the standard EM-learned unsupervised HMM does not exhibit sufficient word tag sparsity.
They showed that it was useful to abstract away from the details of the formalism and examine the nature of their derivation process as reflected by properties of their trees, finding that several of the formalisms considered can be seen as being closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
The path set of a tree set is the union of the path sets of trees in that tree set.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
Limitations of (Blum and Mitchell 98): While the assumptions of (Blum and Mitchell 98) are useful in developing both theoretical results and an intuition for the problem, the assumptions are quite limited.
Instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
0
— I would also like to point out to commissioner Liikanen that it is not easy to take a matter to a national court.
The approach has been successfully tested on the 8 000-word Verbmobil task.
0
When translating the sentence monotonically from left to right, the translation of the German finite verb 'kann', which is the left verbal brace in this case, is postponed until the German noun phrase 'mein Kollege' is translated, which is the subject of the sentence.
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Märkische Allgemeine Zeitung, a German regional daily.
0
We developed a first version of annotation guidelines for co-reference in PCC (Gross 2003), which served as basis for annotating the core corpus but have not been empirically evaluated for inter-annotator agreement yet.