source_text: string (lengths 27 to 368) | label: int64 (values 0 to 1) | target_text: string (lengths 1 to 5.38k)
The corpus was annotated with different linguistic information.
0
• Some tools would allow for the desired annotation mode, but are so complicated (they can be used for many other purposes as well) that annotators take a long time getting used to them.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
(See Sproat and Shih 1995.)
BABAR performed successfully in both the terrorism and natural disaster domains, and its contextual-role knowledge produced successful results on pronouns.
0
For example, a story can mention “the FBI”, “the White House”, or “the weather” without any prior referent in the story.
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers.
0
Finally, we show that the combining techniques degrade very little when a poor parser is added to the set.
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
0
The first unsupervised algorithm we describe is based on the decision list method from (Yarowsky 95).
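The row above mentions the decision list method from (Yarowsky 95), which ranks candidate rules by smoothed precision and fires the best matching rule. As an illustrative sketch only (the data, the smoothing constant `alpha`, and the helper names below are invented, not the paper's implementation):

```python
# Illustrative decision-list learner in the spirit of (Yarowsky 95).
# Rules "feature -> label" are ranked by smoothed precision on seed data;
# classification applies the highest-ranked rule whose feature is present.
from collections import defaultdict

def learn_decision_list(examples, alpha=0.1):
    """examples: list of (feature_set, label) pairs. Returns sorted rules."""
    count = defaultdict(lambda: defaultdict(int))
    for feats, label in examples:
        for f in feats:
            count[f][label] += 1
    rules = []
    for f, by_label in count.items():
        total = sum(by_label.values())
        for label, c in by_label.items():
            # Smoothed precision of the rule "f -> label".
            rules.append(((c + alpha) / (total + 2 * alpha), f, label))
    return sorted(rules, reverse=True)

def classify(rules, feats, default="noise"):
    """Fire the first (highest-precision) rule that matches; else default."""
    for _, f, label in rules:
        if f in feats:
            return label
    return default
```

The `default` label mirrors the "noise" category used for items outside the three named-entity categories.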
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
0
The difference in precision between similarity and Bayes switching techniques is significant, but the difference in recall is not.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
In both cases, SRILM walks its trie an additional time to minimize context as mentioned in Section 4.1.
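The rows above concern querying backoff n-gram language models. A minimal sketch of a backoff lookup, with invented toy probabilities and plain dicts rather than SRILM's trie, might look like:

```python
# Toy backoff n-gram query (illustrative; SRILM stores these in a trie).
# All log10 probabilities and backoff weights below are invented values.

LOGPROB = {
    ("the",): -1.0,
    ("cat",): -2.0,
    ("the", "cat"): -0.5,
}
BACKOFF = {
    ("the",): -0.3,  # backoff weight charged when context "the" fails
}

def query(ngram):
    """Return log10 p(last word | context), shortening context on misses."""
    ngram = tuple(ngram)
    total = 0.0
    while ngram:
        if ngram in LOGPROB:
            return total + LOGPROB[ngram]
        # Unseen n-gram: add the backoff weight of its context, then
        # drop the leftmost context word and retry with a shorter n-gram.
        total += BACKOFF.get(ngram[:-1], 0.0)
        ngram = ngram[1:]
    return total - 99.0  # out-of-vocabulary floor
```

For example, `query(("dog", "cat"))` misses the bigram and backs off to the unigram probability of "cat".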
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
0
This means that the PCC cannot grow particularly quickly.
This paper talks about Unsupervised Models for Named Entity Classification.
0
The Expectation Maximization (EM) algorithm (Dempster, Laird and Rubin 77) is a common approach for unsupervised training; in this section we describe its application to the named entity problem.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language.
0
If an unlabeled vertex does not have a path to any labeled vertex, this term ensures that the converged marginal for this vertex will be uniform over all tags, allowing the middle word of such an unlabeled vertex to take on any of the possible tags.
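The uniform-distribution term described above can be sketched with a toy propagation loop. The graph, edge weights, and `tau` below are invented for illustration and do not reproduce the paper's exact objective:

```python
# Toy label propagation with a uniform regularizer: each unlabeled vertex
# averages its neighbors' tag distributions plus tau * uniform. A vertex
# with no path to any labeled vertex therefore converges to uniform.

TAGS = ["NOUN", "VERB"]

def propagate(edges, labeled, n_vertices, tau=0.1, iters=50):
    uniform = [1.0 / len(TAGS)] * len(TAGS)
    q = [list(labeled.get(v, uniform)) for v in range(n_vertices)]
    for _ in range(iters):
        new_q = []
        for v in range(n_vertices):
            if v in labeled:
                new_q.append(list(labeled[v]))  # labeled vertices are clamped
                continue
            scores = [tau * u for u in uniform]
            for a, b, w in edges:
                if v in (a, b):
                    other = b if a == v else a
                    for t in range(len(TAGS)):
                        scores[t] += w * q[other][t]
            z = sum(scores)
            new_q.append([s / z for s in scores])
        q = new_q
    return q

edges = [(0, 1, 1.0)]        # vertex 1 touches the labeled vertex 0
labeled = {0: [1.0, 0.0]}    # vertex 0 observed as NOUN
q = propagate(edges, labeled, n_vertices=3)
# vertex 2 is isolated, so its converged marginal stays uniform
```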
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table.
0
Phrase tables were extracted from the IN and OUT training corpora (not the dev set, as was used for instance-weighting models), and phrase pairs in the intersection of the IN and OUT phrase tables were used as positive examples, with two alternate definitions of negative examples. The classifier trained using the second definition had higher accuracy on a development set.
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
0
Additionally, it works about 3 times as fast as the IBM style search.
They showed that it was useful to abstract away from the details of the formalism and examine the nature of the derivation process as reflected by properties of the derivation trees, finding that several of the formalisms considered are closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
As with CFG's, TAG's, and HG's, the derivation tree set of an MCTAG will be a local set.
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
This data set of manual judgements should provide a fruitful resource for research on better automatic scoring methods.
The use of global features has shown excellent results in performance on the MUC-6 and MUC-7 test data.
0
The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
The 1-bit sign is almost always negative and the 8-bit exponent is not fully used on the range of values, so in practice this corresponds to quantization ranging from 17 to 20 total bits.
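The bit layout mentioned above (1-bit sign, 8-bit exponent) can be inspected directly. This sketch only decodes IEEE-754 float32 fields to illustrate the observation; it is not the quantizer itself, and the helper name is invented:

```python
# Decode the IEEE-754 float32 fields of a log probability: the sign bit
# is set for negative values, the exponent occupies 8 bits, and the
# remaining 23 bits hold the mantissa that quantization would truncate.
import struct

def float32_bits(x):
    """Return (sign, exponent, mantissa) fields of x stored as float32."""
    [n] = struct.unpack(">I", struct.pack(">f", x))
    return n >> 31, (n >> 23) & 0xFF, n & 0x7FFFFF

sign, exponent, mantissa = float32_bits(-2.5)  # e.g. a log10 probability
```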
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
Hash tables are a common sparse mapping technique used by SRILM’s default and BerkeleyLM’s hashed variant.
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
0
While processing the source sentence monotonically, the initial state I is entered whenever there are no uncovered positions to the left of the rightmost covered position.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
A more forceful approach for encoding sparsity is posterior regularization, which constrains the posterior to have a small number of expected tag assignments (Grac¸a et al., 2009).
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
For RandLM and IRSTLM, the effect of caching can be seen on speed and memory usage.
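A linear probing hash table, the scheme underlying the PROBING structure, resolves collisions by scanning forward to the next free bucket. A toy sketch (the class name and fixed size are invented; the real structure is a packed array tuned for n-gram keys):

```python
# Toy linear-probing hash table: on a collision, scan forward to the
# next free bucket. Lookups follow the same probe sequence, so a probe
# ending on an empty bucket means the key is absent.

class ProbingTable:
    def __init__(self, buckets=8):
        self.keys = [None] * buckets
        self.values = [None] * buckets

    def _slot(self, key):
        i = hash(key) % len(self.keys)
        # Linear probing: advance until the key or an empty bucket is found.
        while self.keys[i] is not None and self.keys[i] != key:
            i = (i + 1) % len(self.keys)
        return i

    def put(self, key, value):
        i = self._slot(key)
        self.keys[i] = key
        self.values[i] = value

    def get(self, key):
        i = self._slot(key)
        return self.values[i] if self.keys[i] == key else None
```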
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
For even larger models, storing counts (Talbot and Osborne, 2007; Pauls and Klein, 2011; Guthrie and Hepple, 2010) is a possibility.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
Simple Type-Level Unsupervised POS Tagging
Here both parametric and non-parametric models are explored.
0
In both cases the investigators were able to achieve significant improvements over the previous best tagging results.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
All language model queries issued by machine translation decoders follow a left-to-right pattern, starting with either the begin of sentence token or null context for mid-sentence fragments.
This paper conducted research in the area of automatic paraphrase discovery.
0
All the links in the “CC-domain” are shown in Step 4 in subsection 3.2.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the inter-human agreement.
0
An examination of the subjects' bracketings confirmed that these instructions were satisfactory in yielding plausible word-sized units.
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
0
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
The new algorithm, which we call CoBoost, uses labeled and unlabeled data and builds two classifiers in parallel.
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
0
We divide up each test set into blocks of 20 sentences (100 blocks for the in-domain test set, 53 blocks for the out-of-domain test set), check for each block, if one system has a higher BLEU score than the other, and then use the sign test.
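The block-wise sign test described above reduces to an exact two-sided binomial test on the win counts: under the null hypothesis, each block is equally likely to favor either system. A sketch, with `sign_test` as an invented helper name:

```python
# Exact two-sided binomial sign test over blocks: count blocks where
# system A has the higher BLEU score, ignore ties, and test the win
# count against a fair coin.
from math import comb

def sign_test(wins_a, wins_b):
    """p-value for the null that A and B win blocks equally often."""
    n = wins_a + wins_b
    k = max(wins_a, wins_b)
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)
```

With the 100 in-domain blocks, e.g. `sign_test(65, 35)` asks whether one system's per-block wins are significantly more frequent than chance.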
The second algorithm builds on a boosting algorithm called AdaBoost.
0
This "default" feature type has 100% coverage (it is seen on every example) but a low, baseline precision.
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
0
The predominant focus on building systems that translate into English has so far ignored the difficult issues of generating rich morphology, which may not be determined solely by local context.
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results.
0
In turn we use two sorts of heuristics, orthogonal to one another, to prune segmentation possibilities based on lexical and grammatical constraints.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
Floating point values may be stored in the trie exactly, using 31 bits for non-positive log probability and 32 bits for backoff.
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
0
Our method does not assume any knowledge about the target language (in particular no tagging dictionary is assumed), making it applicable to a wide array of resource-poor languages.
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
0
Parser 3, the most accurate parser, was chosen 71% of the time, and Parser 1, the least accurate parser was chosen 16% of the time.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
In our experiments we set the parameter values randomly, and then ran EM to convergence.
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.
0
Clearly, retaining the original frequencies is important for good performance, and globally smoothing the final weighted frequencies is crucial.
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
0
Finally, our U (unparsed) measure is used to report the number of sentences to which our system could not propose a joint analysis.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the inter-human agreement.
0
Furthermore, by inverting the transducer so that it maps from phonemic transcriptions to hanzi sequences, one can apply the segmenter to other problems, such as speech recognition (Pereira, Riley, and Sproat 1994).
This corpus has several advantages: it is annotated at different levels.
0
A different but supplementary perspective on discourse-based information structure is taken by one of our partner projects, which is interested in conventionalized patterns (e.g., order of information in news reports).
The approach has been successfully tested on the 8 000-word Verbmobil task.
0
The search starts in the hypothesis (I; f;g; 0).
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
For eight judges, ranging k between 1 and 8 corresponded to a precision score range of 90% to 30%, meaning that there were relatively few words (30% of those found by the automatic segmenter) on which all judges agreed, whereas most of the words found by the segmenter were such that one human judge agreed.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
Obviously, the presence of a title after a potential name N increases the probability that N is in fact a name.
In this paper, the authors are of the opinion that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
0
To represent the i-th word type emitted by the HMM, the type-level tag Ti and the token-level tags t(i) are sampled jointly: P(Ti, t(i) | T−i, W, t(−i), w, α, β) = P(Ti | T−i, β) · P(t(i) | Ti, t(−i), w, α), where the token-level term decomposes over the emissions and the surrounding transition tags (tb, ta) of each token of the type: P(t(i) | Ti, t(−i), w, α) ∝ ∏ P(w | Ti, t(−i), w(−i), α) · P(tb | Ti, t(−i), α) · P(ta | Ti, t(−i), α). All terms are Dirichlet distributions whose parameters can be computed analytically from counts in t(−i), where T−i denotes all type-level tag assignments except Ti and t(−i) denotes all token-level tags except t(i) (Johnson, 2007).
They made use of local and global features to deal with instances of the same token in a document.
0
If the token is the first word of a sentence, then this feature is set to 1.
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech.
0
Projectivizing a dependency graph by lifting nonprojective arcs is a nondeterministic operation in the general case.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
So, who won the competition?
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
m(S) represents the belief that the correct hypothesis is included in S. The model assumes that evidence also arrives as a probability density function (pdf) over sets of hypotheses. Integrating new evidence into the existing model is therefore simply a matter of defining a function to merge pdfs, one representing the current belief system and one representing the beliefs of the new evidence.
Instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
0
An alternate approximation to (8) would be to let wλ(s, t) directly approximate p̂I(s, t).
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
To improve agreement during the revision process, a dual-blind evaluation was performed in which 10% of the data was annotated by independent teams.
They plan on extending instance-weighting to other standard SMT components and capture the degree of generality of phrase pairs.
0
Discriminative Instance Weighting for Domain Adaptation in Statistical Machine Translation
Human judges also pointed out difficulties with the evaluation of long sentences.
0
The out-of-domain test set differs from the Europarl data in various ways.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
The resulting model is compact, efficiently learnable and linguistically expressive.
0
Table 5 provides insight into the behavior of different models in terms of the tagging lexicon they generate.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
(2003), which gives 96.8% accuracy on the test set.
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
We are therefore applying a different method, which has been used at the 2005 DARPA/NIST evaluation.
This assumption, however, is not inherent to type-based tagging models.
0
See Section 5.
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order on syntactic disambiguation.
0
(a) Of the high frequency phrasal categories, ADJP and SBAR are the hardest to parse.
The AdaBoost algorithm was developed for supervised learning.
0
At first glance, the problem seems quite complex: a large number of rules is needed to cover the domain, suggesting that a large number of labeled examples is required to train an accurate classifier.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
Not every annotator was fluent in both the source and the target language.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
For example, suppose one is building a ITS system for Mandarin Chinese.
This corpus has several advantages: it is annotated at different levels.
0
A number of PCC commentaries will be read by professional news speakers and their prosodic features annotated, so that the various annotation layers can be set into correspondence with intonation patterns.
This corpus has several advantages: it is annotated at different levels.
0
The Potsdam Commentary Corpus
Replacing this with a ranked evaluation seems to be more suitable.
0
Following this method, we repeatedly — say, 1000 times — sample sets of sentences from the output of each system, measure their BLEU score, and use these 1000 BLEU scores as basis for estimating a confidence interval.
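The resampling procedure described above is the standard bootstrap percentile interval. A sketch with toy per-sentence scores that are simply averaged (a real setup would recompute corpus BLEU on every resampled set, as the row describes):

```python
# Bootstrap percentile interval: resample the test set with replacement
# many times, recompute the score each time, and read off percentiles
# of the resulting distribution as a confidence interval.
import random

def bootstrap_interval(sentence_scores, samples=1000, alpha=0.05, seed=0):
    rng = random.Random(seed)  # fixed seed for reproducibility
    n = len(sentence_scores)
    stats = sorted(
        sum(rng.choice(sentence_scores) for _ in range(n)) / n
        for _ in range(samples)
    )
    lo = stats[int(samples * alpha / 2)]
    hi = stats[int(samples * (1 - alpha / 2)) - 1]
    return lo, hi
```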
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language.
0
The graph was constructed using 2 million trigrams; we chose these by truncating the parallel datasets up to the number of sentence pairs that contained 2 million trigrams.
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
The resulting algorithm has a complexity of O(n!).
Here we present two algorithms.
0
(Blum and Mitchell 98) give an example that illustrates just how powerful the second constraint can be.
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
The perplexity for the trigram language model used is 26.5.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
The CoBoost algorithm just described is for the case where there are two labels: for the named entity task there are three labels, and in general it will be useful to generalize the CoBoost algorithm to the multiclass case.
They showed that it was useful to abstract away from the details of the formalism, and examine the nature of their derivation process as reflected by properties their trees, find that several of the formalisms considered can be seen as being closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
In addition, the restricted version of CG's (discussed in Section 6) generates tree sets with independent paths and we hope that it can be included in a more general definition of LCFRS's containing formalisms whose tree sets have path sets that are themselves LCFRL's (as in the case of the restricted indexed grammars, and the hierarchy defined by Weir).
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
Our model outperforms theirs on four out of five languages on the best hyperparameter setting and three out of five on the median setting, yielding an average absolute difference across languages of 12.9% and 3.9% for best and median settings respectively compared to their best EM or LBFGS performance.
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
0
In this way, we cannot draw a distinction between the systems' performance.
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
We proposed an unsupervised method to discover paraphrases from a large untagged corpus.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
Systems that generally do better than others will receive a positive average normalized judgement per sentence.
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order on syntactic disambiguation.
0
The verbal reading arises when the maSdar has an NP argument which, in vocalized text, is marked in the accusative case.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
Due to many similarly performing systems, we are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
(Blum and Mitchell 98) go on to give PAC results for learning in the cotraining case.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
(Brandt 1996) extended these ideas toward a conception of kommunikative Gewichtung (‘communicative-weight assignment’).
These clusters are computed using an SVD variant without relying on transitional structure.
0
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
Annotation consistency is important in any supervised learning task.
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
0
The approach builds from an initial seed set for a category, and is quite similar to the decision list approach described in (Yarowsky 95).
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
basically complete, yet some improvements and extensions are still under way.
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
0
The availability of these resources guided our selection of foreign languages.
Here we show how non-projective dependency parsing can be achieved by combining a data driven projective parser with special graph transformation techniques.
0
We assume that the goal in dependency parsing is to construct a labeled dependency graph of the kind depicted in Figure 1.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
Most similar is scrolling queries, wherein left-to-right queries that add one word at a time are optimized.
They made use of local and global features to deal with instances of the same token in a document.
0
In addition, each feature function is a binary function.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
Contains digits.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
When the maSdar lacks a determiner, the constituent as a whole resembles the ubiquitous annexation construct iDafa.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
If one system is perfect, another has slight flaws and the third more flaws, a judge is inclined to hand out judgements of 5, 4, and 3.
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
0
One side of the decision making process is when we choose to believe a constituent should be in the parse, even though only one parser suggests it.
This paper talks about Pseudo-Projective Dependency Parsing.
0
Using this encoding scheme, the arc from je to Z in Figure 2 would be assigned the label AuxP↑Sb (signifying an AuxP that has been lifted from a Sb).
They showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the inter-human agreement.
0
This implies, therefore, that a major factor in the performance of a Chinese segmenter is the quality of the base dictionary, and this is probably a more important factor-from the point of view of performance alone-than the particular computational methods used.
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.
0
Of course, we.
In this paper, the authors are of the opinion that the sequence models-based approaches usually treat token-level tag assignment as the primary latent variable.
0
The state-to-tag mapping is obtained from the best hyperparameter setting for the 1:1 mapping shown in Table 3.
There is no global pruning.
0
No. | Predecessor coverage set | Successor coverage set: 1 | ({1, …, m} \ {l}, l′) →
The "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Märkische Allgemeine Zeitung, a German regional daily.
0
3.5 Improved models of discourse.
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
Models that employ this strategy are denoted hsp.
Here both parametric and non-parametric models are explored.
0
Hence, s < k. But by addition of the votes on the two parses, s > 2N − 1 − k > k, a contradiction.