source_text: string (lengths 27 to 368)
label: int64 (0 or 1)
target_text: string (lengths 1 to 5.38k)
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
Based on revision 4041, we modified Moses to print process statistics before terminating.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
While the size of the resulting transducers may seem daunting (the segmenter described here, as it is used in the Bell Labs Mandarin TTS system, has about 32,000 states and 209,000 arcs), recent work on minimization of weighted machines and transducers (cf.
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
To formalize the approach, we introduce four verbgroup states S: Initial (I): A contiguous, initial block of source positions is covered.
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
0
Source sentence words are aligned with hypothesized target sentence words, where the choice of a new source word, which has not been aligned with a target word yet, is restricted.
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures.
0
Projectivizing a dependency graph by lifting nonprojective arcs is a nondeterministic operation in the general case.
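To make that nondeterminism concrete, below is a hedged sketch of one possible lifting strategy: re-attach the dependent of a non-projective arc to its head's head and repeat until the tree is projective. This is an illustration of the lifting operation only, not Nivre and Nilsson's exact procedure or arc encoding; the data format (a dict from dependent index to head index, with 0 as the artificial root) is assumed here.

```python
# Hedged sketch: projectivize a dependency tree by repeatedly "lifting"
# a non-projective arc, i.e. re-attaching the dependent to its head's head.
# Assumed format: heads = {dependent index: head index}, 0 = artificial root.

def is_nonprojective(heads, dep):
    """An arc head->dep is non-projective if some word strictly between
    them is not dominated by the head."""
    head = heads[dep]
    lo, hi = sorted((head, dep))
    for k in range(lo + 1, hi):
        node = k
        while node != 0 and node != head:
            node = heads[node]
        if node != head:
            return True
    return False

def projectivize(heads):
    heads = dict(heads)
    changed = True
    while changed:
        changed = False
        # Nondeterminism: which offending arc is lifted first matters;
        # this sketch simply takes the first one found.
        for dep in sorted(heads):
            if heads[dep] != 0 and is_nonprojective(heads, dep):
                heads[dep] = heads[heads[dep]]  # lift one step
                changed = True
                break
    return heads

# Toy example with one crossing arc (indices are word positions, 0 is root).
print(projectivize({1: 2, 2: 0, 3: 2, 4: 1, 5: 4}))
```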
The instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
0
(Thus the domain of the dev and test corpora matches IN.)
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
For a trigram language model, the partial hypotheses are of the form (e′; e; C; j).
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
We assume that M is in an existential state qA, with a pair of integers representing zi stored on the (2i-1)th and (2i)th work tapes, for 1 ≤ i ≤ k. For each rule p : A → fp(B, C), fp is mapped onto the function f̄p defined by the following rule: f̄p((x1, ..., xn1), (y1, ..., yn2)) = (z1, ..., zk). M breaks z1, ..., zk into substrings x1, ..., xn1 and y1, ..., yn2 conforming to the definition of fp.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
For other languages, we use the CoNLL-X multilingual dependency parsing shared task corpora (Buchholz and Marsi, 2006) which include gold POS tags (used for evaluation).
There is no global pruning.
0
There are 13 types of extensions needed to describe the verbgroup reordering.
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
The extracted phrases are: "EG has agreed to be bought by H", "EG, now owned by H", "H to acquire EG", and "H's agreement to buy EG". Three of those phrases are actually paraphrases, but sometimes there could be some noise, such as the second phrase above.
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers.
0
The results are given in Table 4.
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
In an existential state an ATM behaves like a nondeterministic TM, accepting if one of the applicable moves leads to acceptance; in a universal state the ATM accepts if all the applicable moves lead to acceptance.
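The alternating acceptance condition can be illustrated with a tiny recursive evaluator over a made-up configuration tree. This is a toy illustration of the definition, not an ATM simulator; the node encoding is assumed here.

```python
# Toy illustration of alternation: a configuration accepts if it is an
# accepting leaf, or an existential node with SOME accepting successor,
# or a universal node with ALL successors accepting.
# Assumed node format: ("accept",) | ("reject",)
#                      | ("exists", [children]) | ("forall", [children]).

def accepts(config):
    kind = config[0]
    if kind == "accept":
        return True
    if kind == "reject":
        return False
    children = config[1]
    if kind == "exists":      # existential state: like a nondeterministic TM
        return any(accepts(c) for c in children)
    if kind == "forall":      # universal state: every move must succeed
        return all(accepts(c) for c in children)
    raise ValueError(kind)

# Example: an existential choice whose second branch needs two successes.
tree = ("exists", [("reject",),
                   ("forall", [("accept",), ("accept",)])])
print(accepts(tree))  # True
```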
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
Therefore, a populated probing hash table consists of an array of buckets that contain either one entry or are empty.
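The bucket layout just described is easy to sketch. Below is a minimal, hypothetical Python illustration of a linear-probing hash table whose buckets hold at most one entry each; it is not KenLM's actual C++ implementation (which uses 64-bit hashes and a fixed load factor), only the idea stated above.

```python
# Minimal linear-probing sketch: a flat array of buckets, each either
# empty or holding one (key, value) entry; lookups scan forward from the
# hashed position until the key or an empty bucket is found.

class ProbingTable:
    EMPTY = None

    def __init__(self, capacity):
        # Assumes the table is never filled completely (fixed load factor
        # in practice), so probing always terminates.
        self.buckets = [self.EMPTY] * capacity

    def insert(self, key, value):
        i = hash(key) % len(self.buckets)
        while self.buckets[i] is not self.EMPTY:   # probe the next bucket
            i = (i + 1) % len(self.buckets)
        self.buckets[i] = (key, value)

    def lookup(self, key):
        i = hash(key) % len(self.buckets)
        while self.buckets[i] is not self.EMPTY:
            if self.buckets[i][0] == key:
                return self.buckets[i][1]
            i = (i + 1) % len(self.buckets)
        return None                                 # empty bucket: absent

table = ProbingTable(8)
table.insert("is a", -1.2)
print(table.lookup("is a"))    # -1.2
print(table.lookup("of the"))  # None
```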
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
These clusters are computed using an SVD variant without relying on transitional structure.
0
BABAR performed successfully in both the terrorism and natural disaster domains, and its contextual-role knowledge was especially beneficial for resolving pronouns.
0
BABAR uses the log-likelihood statistic (Dunning, 1993) to evaluate the strength of a co-occurrence relationship.
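For readers unfamiliar with the statistic, here is a minimal sketch of Dunning's (1993) log-likelihood (G²) score for a 2x2 co-occurrence contingency table. BABAR's actual table construction and thresholds are not reproduced here, and the example counts are made up.

```python
# Hedged sketch of Dunning's log-likelihood statistic (G^2) for a 2x2
# contingency table of co-occurrence counts.
import math

def log_likelihood(k11, k12, k21, k22):
    """k11: both events, k12: first only, k21: second only, k22: neither."""
    total = k11 + k12 + k21 + k22
    row1, row2 = k11 + k12, k21 + k22
    col1, col2 = k11 + k21, k12 + k22
    g2 = 0.0
    for observed, expected in [
        (k11, row1 * col1 / total),
        (k12, row1 * col2 / total),
        (k21, row2 * col1 / total),
        (k22, row2 * col2 / total),
    ]:
        if observed > 0:                 # zero cells contribute nothing
            g2 += observed * math.log(observed / expected)
    return 2.0 * g2

print(round(log_likelihood(50, 150, 100, 10000), 2))  # illustrative counts
```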
The second algorithm builds on a boosting algorithm called AdaBoost.
0
There are two differences between this method and the DL-CoTrain algorithm: the handling of spelling and contextual features, and the alternation between labeling and learning with the two types of features.
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
Recently, this topic has been getting more attention, as is evident from the Paraphrase Workshops in 2003 and 2004, driven by the needs of various NLP applications.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
Lack of correct reference translations was pointed out as a shortcoming of our evaluation.
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
Given n-gram counts {c_n} for n = 1, ..., N, we use ⌈log2 c_1⌉ bits per vocabulary identifier and ⌈log2 c_n⌉ bits per index into the table of n-grams.
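A small worked example of that bit-width rule follows; the function name and the counts are illustrative, not taken from the paper.

```python
# Sketch of the bit-width choice above: with c distinct items,
# ceil(log2 c) bits suffice for an identifier or index.
import math

def bits_needed(count):
    return max(1, math.ceil(math.log2(count)))

counts = {1: 393_998, 2: 10_492_160, 3: 27_563_049}  # hypothetical {c_n}
for order, c in counts.items():
    print(f"order {order}: {bits_needed(c)} bits per identifier/index")
```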
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
0
The details are given in (Tillmann, 2000).
The AdaBoost algorithm was developed for supervised learning.
0
The Expectation Maximization (EM) algorithm (Dempster, Laird and Rubin 77) is a common approach for unsupervised training; in this section we describe its application to the named entity problem.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
This is because different judges focused on different language pairs.
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
Finally, other packages implement language model estimation while we are currently dependent on them to generate an ARPA file.
The corpus was annotated with different linguistic information.
0
The choice of the particular newspaper was motivated by the fact that the language used in a regional daily is somewhat simpler than that of papers read nationwide.
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not.
0
The iw all map variant uses a non-zero γ weight on a uniform prior in p(s|t), and outperforms a version with γ = 0 (iw all) and the “flattened” variant described in section 3.2.
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
0
The first setting uses the European Medicines Agency (EMEA) corpus (Tiedemann, 2009) as IN, and the Europarl (EP) corpus (www.statmt.org/europarl) as OUT, for English/French translation in both directions.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
In particular, the decision to represent arguments in verb-initial clauses as VP internal makes VSO and VOS configurations difficult to distinguish.
The use of global features has yielded excellent performance on MUC-6 and MUC-7 test data.
0
Global features are extracted from other occurrences of the same token in the whole document.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and using information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
For natural disasters, BABAR generated 20,479 resolutions: 11,652 from lexical seeding and 8,827 from syntactic seeding.
BABAR performed successfully in both the terrorism and natural disaster domains, and its contextual-role knowledge was especially beneficial for resolving pronouns.
0
The probabilities are incorporated into the Dempster-Shafer model using Equation 1.
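Equation 1 itself is not reproduced in this excerpt. As a hedged illustration of how evidence from different knowledge sources can be merged in a Dempster-Shafer setting, here is the generic Dempster rule of combination for two mass functions; the hypothesis names and masses are made up.

```python
# Generic Dempster rule of combination over subsets of a frame of
# discernment, with subsets encoded as frozensets.

def combine(m1, m2):
    combined, conflict = {}, 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2          # mass assigned to contradiction
    norm = 1.0 - conflict                    # renormalize the rest
    return {s: w / norm for s, w in combined.items()}

theta = frozenset({"candidate1", "candidate2", "candidate3"})
m_lexical = {frozenset({"candidate1"}): 0.6, theta: 0.4}
m_contextual = {frozenset({"candidate1", "candidate2"}): 0.7, theta: 0.3}
print(combine(m_lexical, m_contextual))
```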
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
0
In our model there are no such hyper-parameters, and the performance is the result of truly joint disambiguation.
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
When translating the sentence monotonically from left to right, the translation of the German finite verb 'kann', which is the left verbal brace in this case, is postponed until the German noun phrase 'mein Kollege' is translated, which is the subject of the sentence.
It is probably the first analysis of Arabic parsing of this kind.
0
Variants of alif are inconsistently used in Arabic texts.
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
Sorted arrays store key-value pairs in an array sorted by key, incurring no space overhead.
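A minimal sketch of that sorted-array representation follows, using Python's bisect for the binary search. Keys and values sit in parallel arrays with no per-entry pointers, which is the "no space overhead" point above; this is an illustration, not any package's actual code.

```python
# Sorted-array lookup sketch: parallel key/value arrays sorted by key,
# queried with binary search.
import bisect

class SortedArray:
    def __init__(self, pairs):
        pairs = sorted(pairs)
        self.keys = [k for k, _ in pairs]
        self.values = [v for _, v in pairs]

    def lookup(self, key):
        i = bisect.bisect_left(self.keys, key)
        if i < len(self.keys) and self.keys[i] == key:
            return self.values[i]
        return None

arr = SortedArray([(42, -0.5), (7, -1.3), (100, -2.0)])
print(arr.lookup(42))   # -0.5
print(arr.lookup(8))    # None
```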
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
If the log backoff of w_f^n is also zero (it may not be in filtered models), then w_f should be omitted from the state.
They focused on phrases which connect two Named Entities, and proceeded in two stages.
0
The frequency of the Company – Company domain ranks 11th with 35,567 examples.
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
0
Otherwise for the predecessor search hypothesis, we would have chosen a position that would not have been among the first n uncovered positions.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
This logic applies recursively: if w_{f+1}^n similarly does not extend and has zero log backoff, it too should be omitted, terminating with a possibly empty context.
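A hedged sketch of this recursive trimming logic is shown below. The helpers extends() and log_backoff() are hypothetical stand-ins for model lookups, not a real KenLM API, and the context list is assumed to be [w_f, ..., w_n].

```python
# Sketch of the state-trimming logic described above: while the span
# w_f..w_n neither extends to a longer n-gram nor carries a non-zero
# backoff, w_f cannot affect future queries, so drop it and re-check
# the shorter span, possibly ending with an empty context.

def minimize_state(context, extends, log_backoff):
    while context and not extends(context) and log_backoff(context) == 0.0:
        context = context[1:]   # omit w_f, then re-check w_{f+1}..w_n
    return context              # possibly empty

# Toy usage with hypothetical model contents:
extends = lambda ws: tuple(ws) in {("york", "city")}            # assumed
backoff = lambda ws: {("new", "york"): -0.3}.get(tuple(ws), 0.0)  # assumed
print(minimize_state(["are", "new", "york"], extends, backoff))
```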
Here we present two algorithms.
0
We can now compare this algorithm to that of (Yarowsky 95).
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
Queries detect the invalid probability, using the node only if it leads to a longer match.
Their results show that their high-performance NER uses less training data than other systems.
0
Note that we check the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc. are not part of person names, whereas corporate suffixes like Corp., Inc., etc. are part of corporate names.
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
One such approach uses comparable documents, which are sets of documents whose content is found or known to be almost the same, such as different newspaper stories about the same event [Shinyama and Sekine 03] or different translations of the same story [Barzilay 01].
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
BABAR achieved recall in the 42–50% range for both domains, with 76% precision overall for terrorism and 87% precision for natural disasters.
It is annotated with several kinds of information: morphology, syntax, rhetorical structure, connectors, coreference, and information structure.
0
For all these annotation tasks, Götze developed a series of questions (essentially a decision tree) designed to lead the annotator to the appropriate judgement.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
This is the first time that we organized a large-scale manual evaluation.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
Among these are words derived by various productive processes, including: 1.
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Ma¨rkische Allgemeine Zeitung, a German regional daily.
0
In the rhetorical tree, nuclearity information is then used to extract a “kernel tree” that supposedly represents the key information from which the summary can be generated (which in turn may involve co-reference information, as we want to avoid dangling pronouns in a summary).
They showed that it was useful to abstract away from the details of the formalism and examine the nature of their derivation process as reflected by properties of their trees, finding that several of the formalisms considered can be seen as being closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
We now turn our attention to the recognition of string languages generated by these formalisms (LCFRL's).
They showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the inter-human agreement.
0
(For some recent corpus-based work on Chinese abbreviations, see Huang, Ahrens, and Chen [1993].)
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
As the two NE categories are the same, we can’t differentiate phrases with different orders of participants – whether the buying company or the to-be-bought company comes first.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
As a result, Arabic sentences are usually long relative to English. Table 2 gives the frequency distribution for sentence lengths in the WSJ (sections 2–23) and the ATB (p1–3): length ≤ 20: 41.9% (WSJ) vs. 33.7% (ATB); ≤ 40: 92.4% vs. 73.2%; ≤ 63: 99.7% vs. 92.6%; ≤ 70: 99.9% vs. 94.9%.
They showed that it was useful to abstract away from the details of the formalism and examine the nature of their derivation process as reflected by properties of their trees, finding that several of the formalisms considered can be seen as being closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
We examine both the complexity of the paths of trees in the tree sets, and the kinds of dependencies that the formalisms can impose between paths.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
We suspect that the higher precision in the disasters domain may be due to its substantially larger training corpus.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
All the texts were annotated by two people.
0
2.4 Underspecified rhetorical structure.
This paper talks about Pseudo-Projective Dependency Parsing.
0
Experiments using data from the Prague Dependency Treebank show that the combined system can handle nonprojective constructions with a precision sufficient to yield a significant improvement in overall parsing accuracy.
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
Typically, judges initially spent about 3 minutes per sentence, but then accelerated with experience.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
Other approaches encode sparsity as a soft constraint.
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
0
Under certain conditions the constituent voting and naïve Bayes constituent combination techniques are guaranteed to produce sets of constituents with no crossing brackets.
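As an illustration of the constituent-voting idea (not the exact combination scheme or conditions evaluated in the paper), the sketch below keeps a (start, end, label) span only when more than half of the parsers propose it.

```python
# Majority-based constituent voting sketch: constituents are spans
# (start, end, label); a span is kept if a strict majority of parsers
# propose it.
from collections import Counter

def vote_constituents(parses):
    """parses: list of sets of (start, end, label) spans, one per parser."""
    counts = Counter(span for parse in parses for span in set(parse))
    majority = len(parses) / 2.0
    return {span for span, c in counts.items() if c > majority}

p1 = {(0, 5, "S"), (0, 2, "NP"), (2, 5, "VP")}
p2 = {(0, 5, "S"), (0, 2, "NP"), (3, 5, "VP")}
p3 = {(0, 5, "S"), (0, 1, "NP"), (2, 5, "VP")}
print(sorted(vote_constituents([p1, p2, p3])))
# [(0, 2, 'NP'), (0, 5, 'S'), (2, 5, 'VP')]
```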
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
(See Sproat and Shih 1995.)
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Ma¨rkische Allgemeine Zeitung, a German regional daily.
0
The knowledge base then can be tested for its relation-inference capabilities on the basis of full-blown co-reference information.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
This style of naming is never required and seems to be losing currency.
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
We will evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
These clusters are computed using an SVD variant without relying on transitional structure.
0
Note that in our model, conditioned on T , there is precisely one t which has nonzero probability for the token component, since for each word, exactly one θt has support.
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages.
0
We hope that this will allow practitioners to apply our approach directly to languages for which no resources are available.
They plan on extending instance-weighting to other standard SMT components and capturing the degree of generality of phrase pairs.
0
However, we note that the final conditional estimates p(s|t) from a given phrase table maximize the likelihood of joint empirical phrase pair counts over a word-aligned corpus.
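Spelling out the relative-frequency form behind that claim, with c(s, t) denoting joint phrase-pair counts over the word-aligned corpus (notation assumed here rather than taken from the excerpt):

```latex
p(s \mid t) = \frac{c(s, t)}{\sum_{s'} c(s', t)}
```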
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
Language models are widely applied in natural language processing, and applications such as machine translation make very frequent queries.
This paper talks about Unsupervised Models for Named Entity Classification.
0
Note that on some examples (around 2% of the test set) CoBoost abstained altogether; in these cases we labeled the test example with the baseline label, organization.
They plan on extending instance-weighting to other standard SMT components and capturing the degree of generality of phrase pairs.
0
Figure 1 shows sample sentences from these domains, which are widely divergent.
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
Morphological analyzers for Hebrew that analyze a surface form in isolation have been proposed by Segal (2000), Yona and Wintner (2005), and recently by the knowledge center for processing Hebrew (Itai et al., 2006).
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
0
Since different judges judged different systems (recall that judges were excluded from judging system output from their own institution), we normalized the scores.
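The exact normalization used is not given in this excerpt; a common choice, sketched below, is per-judge standardization so that harsher and more generous judges become comparable. The data layout and judge names are made up.

```python
# Hedged sketch: per-judge standardization (z-scores) of raw adequacy or
# fluency judgments before comparing systems across judges.
from statistics import mean, pstdev

def normalize_by_judge(scores):
    """scores: dict judge -> list of (system, raw_score)."""
    normalized = []
    for judge, ratings in scores.items():
        values = [r for _, r in ratings]
        mu, sigma = mean(values), pstdev(values) or 1.0  # guard zero spread
        normalized += [(system, (r - mu) / sigma) for system, r in ratings]
    return normalized

raw = {"judge1": [("sysA", 4), ("sysB", 2)],
       "judge2": [("sysA", 5), ("sysB", 4)]}
print(normalize_by_judge(raw))
```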
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
0
Step 1.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and using information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
The contextual role knowledge had the greatest impact on pronouns: +13% recall for terrorism and +15% recall for disasters, with a +1% precision gain in terrorism and a small precision drop of -3% in disasters.
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform.
0
Also, the argument has been made that machine translation performance should be evaluated via task-based evaluation metrics, i.e. how much it assists performing a useful task, such as supporting human translators or aiding the analysis of texts.
The approach has been successfully tested on the 8 000-word Verbmobil task.
0
In this section, we briefly review our translation approach.
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform.
0
However, a recent study (Callison-Burch et al., 2006), pointed out that this correlation may not always be strong.
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
In these examples, the names identified by the two systems (if any) are underlined; the sentence with the correct segmentation is boxed. The differences in performance between the two systems relate directly to three issues, which can be seen as differences in the tuning of the models, rather than representing differences in the capabilities of the model per se.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
TPT has theoretically better locality because it stores ngrams near their suffixes, thereby placing reads for a single query in the same or adjacent pages.
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
0
In.
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
They cluster NE instance pairs based on the words in the contexts using a bag- of-words method.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and using information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
One of the strengths of the Dempster-Shafer model is its natural ability to recognize when several credible hypotheses are still in play.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
Their results are then compared with the results of an automatic segmenter.
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results.
0
We use a patched version of BitPar allowing for direct input of probabilities instead of counts.
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
0
There may be occasionally a system clearly at the top or at the bottom, but most systems are so close that it is hard to distinguish them.
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Ma¨rkische Allgemeine Zeitung, a German regional daily.
0
One conclusion drawn from this annotation effort was that for humans and machines alike, (footnote URLs: 2 www.sfs.nphil.unituebingen.de/Elwis/stts/stts.html; 3 www.coli.unisb.de/sfb378/negra-corpus/annotate)
The approach has been successfully tested on the 8 000-word Verbmobil task.
0
We can do that. IbmS: Yes, wonderful.
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
0
diesem 3.
This paper presents a maximum entropy-based named entity recognizer (NER).
0
Hence, there is a total of 29 classes (7 name classes × 4 sub-classes + 1 not-a-name class).
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
2.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the inter-human agreement.
0
In Table 5 we present results from small test corpora for the productive affixes handled by the current version of the system; as with names, the segmentation of morphologically derived words is generally either right or wrong.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
With a human evaluation we also showed that ATB inter-annotator agreement remains low relative to the WSJ corpus.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
If we remove this sample from the evaluation, then the ATB type-level error rises to only 37.4% while the n-gram error rate increases to 6.24%.
The AdaBoost algorithm was developed for supervised learning.
0
(Brin 98) describes a system for extracting (author, book-title) pairs from the World Wide Web using an approach that bootstraps from an initial seed set of examples.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
Judges were excluded from assessing the quality of MT systems that were submitted by their institution.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
The model was built with open vocabulary, modified Kneser-Ney smoothing, and default pruning settings that remove singletons of order 3 and higher.
They have made use of local and global features to deal with the instances of the same token in a document.
0
A token that is allCaps will also be initCaps.