| source_text (string, lengths 27–368) | label (int64, 0–1) | target_text (string, lengths 1–5.38k) |
---|---|---|
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
| 0 |
Since the work tapes store integers (which can be written in binary) that never exceed the size of the input, no configuration has space exceeding O(log n).
|
Replacing this with a ranked evaluation seems to be more suitable.
| 0 |
English was again paired with German, French, and Spanish.
|
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
| 0 |
We can make several observations on the cause of errors.
|
They have made use of local and global features to deal with the instances of the same token in a document.
| 0 |
The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.
|
Replacing this with a ranked evaluation seems to be more suitable.
| 0 |
Training and testing is based on the Europarl corpus.
|
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
| 0 |
Figure 3 shows examples of semantic expectations that were learned.
|
While many systems had similar performance, the results offered interesting insights, especially about the relative performance of statistical and rule-based systems.
| 0 |
We aligned the texts at a sentence level across all four languages, resulting in 1,064 sentences per language.
|
Human judges also pointed out difficulties with the evaluation of long sentences.
| 0 |
We asked participants to each judge 200–300 sentences in terms of fluency and adequacy, the most commonly used manual evaluation metrics.
|
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
| 0 |
Although the ultimate goal of the Verbmobil project is the translation of spoken language, the input used for the translation experiments reported on in this paper is the (more or less) correct orthographic transcription of the spoken sentences.
|
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
| 0 |
The ability to redistribute belief values across sets rather than individual hypotheses is key.
|
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
| 0 |
In the small window on the left, search queries can be entered, here one for an NP that has been annotated on the co-reference layer as bridging.
|
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
| 0 |
Comparison with state-of-the-art taggers. For comparison we consider two unsupervised taggers: the HMM with log-linear features of Berg-Kirkpatrick et al.
|
The use of global features has shown excellent results in performance on MUC-6 and MUC-7 test data.
| 0 |
If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.
|
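As an illustration of the spelling feature this row describes, here is a minimal Python sketch; the function name is our own, and the zone component of (mixedCaps, zone) is omitted for brevity.

```python
def mixed_caps(token: str) -> int:
    # Set to 1 iff the token starts with a lower-case letter and
    # contains both upper- and lower-case letters (e.g. "eBay").
    # Hypothetical helper; only the mixedCaps part of the feature
    # pair is modeled here.
    if not token:
        return 0
    return int(token[0].islower()
               and any(c.isupper() for c in token)
               and any(c.islower() for c in token))
```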
The model draws on various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
| 0 |
A totally non stochastic rule-based system such as Wang, Li, and Chang's will generally succeed in such cases, but of course runs the risk of overgeneration wherever the single-hanzi word is really intended.
|
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers.
| 0 |
In the first part of the experiment, dependency graphs from the treebanks were projectivized using the algorithm described in section 2.
|
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
| 0 |
So far the graph has been completely unlabeled.
|
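A hedged sketch of the graph-based label propagation these rows refer to: unlabeled vertices repeatedly absorb the edge-weighted average of their neighbours' tag distributions, while the seed vertices tagged by the supervised English-side model stay clamped. The dense-matrix representation and the function name are our simplifications, not the paper's exact formulation.

```python
import numpy as np

def propagate_tags(W, seeds, n_tags, iters=10):
    """W: (n, n) symmetric edge-weight matrix; seeds: dict mapping
    vertex index -> tag index fixed by the supervised tagger."""
    n = W.shape[0]
    q = np.full((n, n_tags), 1.0 / n_tags)  # unlabeled: start uniform

    def clamp():
        for v, tag in seeds.items():
            q[v, :] = 0.0
            q[v, tag] = 1.0

    clamp()
    for _ in range(iters):
        q = W @ q                                   # average neighbours
        q /= q.sum(axis=1, keepdims=True) + 1e-12   # renormalize rows
        clamp()                                     # re-fix the seeds
    return q
```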
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
| 0 |
We have shown that, at least given independent human judgments, this is not the case, and that therefore such simplistic measures should be mistrusted.
|
The approach has been successfully tested on the 8 000-word Verbmobil task.
| 0 |
In order to handle the necessary word reordering as an optimization problem within our dynamic programming approach, we describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming (Held, Karp, 1962).
|
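A minimal sketch of the Held-Karp (1962) dynamic program cited in this row. The cities here are abstract indices; in the paper's adaptation they correspond to source-sentence positions. This is textbook TSP code, not the paper's actual DP recursion for translation.

```python
from itertools import combinations

def held_karp(dist):
    """dist[i][j]: cost of travelling from city i to city j.
    Returns the cost of the cheapest tour through all cities,
    starting and ending at city 0. O(2^n * n^2) time."""
    n = len(dist)
    # best[(S, j)]: cheapest path from 0 visiting exactly S, ending at j
    best = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = frozenset(subset)
            for j in subset:
                best[(S, j)] = min(best[(S - {j}, k)] + dist[k][j]
                                   for k in subset if k != j)
    cities = frozenset(range(1, n))
    return min(best[(cities, j)] + dist[j][0] for j in range(1, n))

print(held_karp([[0, 2, 9], [1, 0, 6], [15, 7, 0]]))  # 0->2->1->0 = 17
```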
There are clustering approaches that assign a single POS tag to each word type.
| 0 |
We can only compare with Graça et al.
|
This paper presents methods to query N-gram language models, minimizing time and space costs.
| 0 |
Quantization can be improved by jointly encoding probability and backoff.
|
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the inter-human agreement.
| 0 |
In addition to the automatic methods, AG, GR, and ST, just discussed, we also added to the plot the values for the current algorithm using only dictionary entries (i.e., no productively derived words or names).
|
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
| 0 |
The model draws on various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
|
This paper talks about Pseudo-Projective Dependency Parsing.
| 0 |
Compared to related work on the recovery of long-distance dependencies in constituency-based parsing, our approach is similar to that of Dienes and Dubey (2003) in that the processing of non-local dependencies is partly integrated in the parsing process, via an extension of the set of syntactic categories, whereas most other approaches rely on postprocessing only.
|
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
| 0 |
.
|
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
| 0 |
Motivated by these questions, we significantly raise baselines for three existing parsing models through better grammar engineering.
|
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.
| 0 |
To initialize the graph we tag the English side of the parallel text using a supervised model.
|
This paper talks about KenLM: Faster and Smaller Language Model Queries.
| 0 |
SRILM inefficiently stores 64-bit pointers.
|
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
| 0 |
Table 8a shows that the best model recovers SBAR at only 71.0% F1.
|
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be.
| 0 |
This best instance-weighting model beats the equivalent model without instance weights by between 0.6 BLEU and 1.8 BLEU, and beats the log-linear baseline by a large margin.
|
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
| 0 |
Pronunciation depends upon word affiliation: 的 is pronounced de0 when it is a prenominal modification marker, but di4 in the word 目的 mu4di4 'goal'; 乾 is normally gan1 'dry,' but qian2 in a person's given name.
|
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective treebanks is often neglected because of the relative scarcity of problematic constructions.
| 0 |
However, this argument is only plausible if the formal framework allows non-projective dependency structures, i.e. structures where a head and its dependents may correspond to a discontinuous constituent.
|
The features were weighted within a logistic model that gave an overall weight for each phrase pair; this weight was applied when computing MAP-smoothed relative-frequency estimates, which were combined linearly with relative-frequency estimates from an in-domain phrase table.
| 0 |
This has solutions: [...] where pI(s|t) is derived from the IN corpus using relative-frequency estimates, and pO(s|t) is an instance-weighted model derived from the OUT corpus.
|
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
| 0 |
θ has a belief value of 1.0, indicating complete certainty that the correct hypothesis is included in the set, and a plausibility value of 1.0, indicating that there is no evidence for competing hypotheses. As evidence is collected and the likely hypotheses are whittled down, belief is redistributed to subsets of θ.
|
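A hedged sketch of the Dempster-Shafer bookkeeping these rows describe: mass sits on sets of candidate antecedents rather than on individuals, and evidence redistributes it to subsets of θ. The candidate names and the single-evidence combination rule are illustrative assumptions, not BABAR's actual knowledge sources.

```python
def belief(mass, A):
    # Belief in A: total mass committed to subsets of A.
    return sum(m for S, m in mass.items() if S <= A)

def plausibility(mass, A):
    # Plausibility of A: total mass on sets compatible with A.
    return sum(m for S, m in mass.items() if S & A)

def combine(mass, subset, strength):
    # Dempster's rule against one simple piece of evidence that puts
    # `strength` on `subset` and the remainder on theta (a no-op).
    out = {}
    for S, m in mass.items():
        out[S & subset] = out.get(S & subset, 0.0) + m * strength
        out[S] = out.get(S, 0.0) + m * (1.0 - strength)
    conflict = out.pop(frozenset(), 0.0)   # renormalize away conflict
    return {S: m / (1.0 - conflict) for S, m in out.items()}

theta = frozenset({"cand1", "cand2", "cand3"})  # hypothetical candidates
mass = {theta: 1.0}            # belief(theta) = plausibility(theta) = 1
mass = combine(mass, frozenset({"cand1", "cand2"}), 0.8)
print(belief(mass, frozenset({"cand1", "cand2"})))  # 0.8
```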
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
| 0 |
For even larger models, we recommend RandLM; the memory consumption of the cache is not expected to grow with model size, and it has been reported to scale well.
|
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform.
| 0 |
To check for this, we do pairwise bootstrap resampling: Again, we repeatedly sample sets of sentences, this time from both systems, and compare their BLEU scores on these sets.
|
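A minimal sketch of the pairwise bootstrap resampling test this row describes. The scorer is abstracted as `metric` (e.g. corpus BLEU); the function and argument names are our own.

```python
import random

def paired_bootstrap(hyps_a, hyps_b, refs, metric, samples=1000):
    """Resample sentence indices with replacement, score both systems
    on the *same* sample each time, and report how often system A
    wins; values near 1.0 (or 0.0) indicate a reliable difference."""
    n = len(refs)
    wins_a = 0
    for _ in range(samples):
        idx = [random.randrange(n) for _ in range(n)]
        a = metric([hyps_a[i] for i in idx], [refs[i] for i in idx])
        b = metric([hyps_b[i] for i in idx], [refs[i] for i in idx])
        wins_a += a > b
    return wins_a / samples
```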
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
| 0 |
For definite NPs, the results are a mixed bag: some knowledge sources increased recall a little, but at the expense of some precision.
|
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the inter-human agreement.
| 0 |
However, there is a strong relationship between ni1s and the number of hanzi in the class.
|
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
| 0 |
The cities of the traveling salesman problem correspond to source positions. [Table 1: DP algorithm for statistical machine translation.]
|
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
| 0 |
Thus we are interested not in extraction, but actual generation from representations that may be developed to different degrees of granularity.
|
Two general approaches are presented and two combination techniques are described for each approach.
| 0 |
One hybridization strategy is to let the parsers vote on constituents' membership in the hypothesized set.
|
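A hedged sketch of the constituent-voting strategy this row describes: each parse contributes a set of (start, end, label) spans, and a span enters the hypothesized set if a majority of parsers propose it. Names and the example spans are illustrative; note the voted set is not guaranteed to form a tree without a further consistency step.

```python
from collections import Counter

def constituent_vote(parses, k=None):
    """parses: list of sets of (start, end, label) spans.
    k: votes required; defaults to a strict majority."""
    if k is None:
        k = len(parses) // 2 + 1
    counts = Counter(span for parse in parses for span in set(parse))
    return {span for span, c in counts.items() if c >= k}

p1 = {(0, 5, "S"), (0, 2, "NP"), (2, 5, "VP")}
p2 = {(0, 5, "S"), (0, 2, "NP"), (3, 5, "VP")}
p3 = {(0, 5, "S"), (0, 3, "NP")}
print(constituent_vote([p1, p2, p3]))  # {(0, 5, 'S'), (0, 2, 'NP')}
```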
This paper presents methods to query N-gram language models, minimizing time and space costs.
| 0 |
Beyond optimizing the memory size of TRIE, there are alternative data structures such as those in Guthrie and Hepple (2010).
|
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
| 0 |
This work has been supported as part of the Verbmobil project (contract number 01 IV 601 A) by the German Federal Ministry of Education, Science, Research and Technology and as part of the Eutrans project (ESPRIT project number 30268) by the European Community.
|
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
| 0 |
Each model was able to produce hypotheses for all input sentences.
|
Their results show that their high-performance NER uses less training data than other systems.
| 0 |
We have shown that the maximum entropy framework is able to use global information directly.
|
Here both parametric and non-parametric models are explored.
| 0 |
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank, leaving only sections 22 and 23 completely untouched during the development of any of the parsers.
|
In this paper, the authors are of the opinion that the sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
| 0 |
A crucial difference is that the number of parameters is greatly reduced as is the number of variables that are sampled during each iteration.
|
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
| 0 |
Each extraction pattern represents a linguistic expression and a syntactic position indicating where a role filler can be found.
|
Other kinds of productive word classes, such as company names, abbreviations, and place names, can easily be handled given appropriate models.
| 0 |
Unfortunately, there is no standard corpus of Chinese texts, tagged with either single or multiple human judgments, with which one can compare performance of various methods.
|
This topic has been getting more attention, driven by the needs of various NLP applications.
| 0 |
For example, out of 905 phrases in the CC-domain, 211 phrases contain keywords found in step 2.
|
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
| 0 |
This Good-Turing estimate of p(unseen(f_n) | f_n) can then be used in the normal way to define the probability of finding a novel instance of a construction in f_n in a text: p(unseen(f_n)) = p(unseen(f_n) | f_n) p(f_n). Here p(f_n) is just the probability of any construction in f_n as estimated from the frequency of such constructions in the corpus.
|
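Restating the estimate in cleaner notation, under the standard Good-Turing identification of the unseen probability mass with the proportion of once-seen items (our gloss; N_1(f_n) and N(f_n) denote the number of constructions of type f_n seen exactly once and seen in total):

```latex
p(\mathrm{unseen}(f_n) \mid f_n) \approx \frac{N_1(f_n)}{N(f_n)}, \qquad
p(\mathrm{unseen}(f_n)) = p(\mathrm{unseen}(f_n) \mid f_n)\, p(f_n).
```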
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
| 0 |
This is not ideal for some applications, however.
|
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
| 0 |
3.1 Maximum Entropy.
|
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
| 0 |
On the other hand, in a translation system one probably wants to treat this string as a single dictionary word since it has a conventional and somewhat unpredictable translation into English.
|
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
| 0 |
We train and test on the CoNLL-X training set.
|
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
| 0 |
including Third Tone Sandhi (Shih 1986), which changes a 3 (low) tone into a 2 (rising) tone before another 3 tone: 小老鼠 xiao3 [lao3 shu3] 'little rat' becomes xiao3 [lao2 shu3], rather than xiao2 [lao2 shu3], because the rule first applies within the word lao3shu3 'rat,' blocking its phrasal application.
|
They plan to extend instance-weighting to other standard SMT components and to capture the degree of generality of phrase pairs.
| 0 |
Linear weights are difficult to incorporate into the standard MERT procedure because they are “hidden” within a top-level probability that represents the linear combination. Following previous work (Foster and Kuhn, 2007), we circumvent this problem by choosing weights to optimize corpus log-likelihood, which is roughly speaking the training criterion used by the LM and TM themselves.
|
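A hedged sketch of choosing a linear interpolation weight by corpus log-likelihood, as this row describes: with the TM/LM features abstracted to two fixed component probabilities per held-out event, EM re-estimates the single weight. Function and variable names are illustrative assumptions.

```python
def tune_weight(p_in, p_out, iters=100):
    """p_in[i], p_out[i]: component probabilities of held-out event i.
    Returns lam maximizing sum_i log(lam*p_in[i] + (1-lam)*p_out[i])."""
    lam = 0.5
    for _ in range(iters):
        # E-step: responsibility of the IN component for each event
        r = [lam * a / (lam * a + (1 - lam) * b)
             for a, b in zip(p_in, p_out)]
        # M-step: the new weight is the average responsibility
        lam = sum(r) / len(r)
    return lam

print(tune_weight([0.4, 0.5, 0.1], [0.2, 0.1, 0.3]))
```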
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique, their model outperformed previous pipelined, integrated, or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
| 0 |
The relativizer ש (“that”), for example, may attach to an arbitrarily long relative clause that goes beyond token boundaries.
|
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
| 0 |
Firstly, Hebrew unknown tokens are doubly unknown: each unknown token may correspond to several segmentation possibilities, and each segment in such sequences may be able to admit multiple PoS tags.
|
This paper talks about KenLM: Faster and Smaller Language Model Queries.
| 0 |
We maintain a separate array for each length n containing all n-gram entries sorted in suffix order.
|
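A hedged sketch of the layout this row describes: one array per n-gram length, sorted in suffix order (approximated here by keying on the reversed word-id tuple, so ordinary lexicographic order is suffix order), with binary-search lookup. KenLM's packed binary representation is far more compact; this only illustrates the indexing idea.

```python
import bisect

class SortedNgramTables:
    def __init__(self, ngrams):
        # ngrams: {tuple(word ids): (log prob, log backoff)}
        self.tables = {}
        for gram, vals in ngrams.items():
            self.tables.setdefault(len(gram), []).append((gram[::-1], vals))
        for table in self.tables.values():
            table.sort()  # reversed tuples sort into suffix order

    def lookup(self, gram):
        table = self.tables.get(len(gram), [])
        key = gram[::-1]
        i = bisect.bisect_left(table, (key,))
        if i < len(table) and table[i][0] == key:
            return table[i][1]
        return None  # n-gram not in the model

lm = SortedNgramTables({(3,): (-1.2, -0.4), (3, 7): (-0.7, 0.0)})
print(lm.lookup((3, 7)))  # (-0.7, 0.0)
```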
The second algorithm builds on a boosting algorithm called AdaBoost.
| 0 |
It is a sequence of proper nouns within an NP; its last word Cooper is the head of the NP; and the NP has an appositive modifier (a vice president at S.&P.) whose head is a singular noun (president).
|
In this paper, the authors are of the opinion that the sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
| 0 |
This design does not guarantee “structural zeros,” but biases towards sparsity.
|
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
| 0 |
(Blum and Mitchell 98) give an example that illustrates just how powerful the second constraint can be.
|
There is no global pruning.
| 0 |
When translating the sentence monotonically from left to right, the translation of the German finite verb 'kann', which is the left verbal brace in this case, is postponed until the German noun phrase 'mein Kollege' is translated, which is the subject of the sentence.
|
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
| 0 |
Crucially, the conventional orthographic form of MSA text is unvocalized, a property that results in a deficient graphical representation.
|
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
| 0 |
Finally, a Dempster-Shafer probabilistic model evaluates the evidence provided by the knowledge sources for all candidate antecedents and makes the final resolution decision.
|
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order, all of which complicate syntactic disambiguation.
| 0 |
Table 6: Incremental dev set results for the manually annotated grammar (sentences of length ≤ 70).
|
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
| 0 |
+ cost(unseen(f_n)), as desired.
|
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
| 0 |
Second, we identified the 100 most frequent nouns in the training corpus and manually labeled them with semantic tags.
|
This paper talks about KenLM: Faster and Smaller Language Model Queries.
| 0 |
Unigram lookup is dense so we use an array of probability and backoff values.
|
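A minimal sketch of the dense unigram layout this row describes: probability and backoff live in flat arrays indexed directly by word id, so a unigram lookup is one array access instead of a search. Array names and values are illustrative.

```python
import numpy as np

VOCAB = 5
probs = np.full(VOCAB, -99.0, dtype=np.float32)  # log10 probabilities
backoffs = np.zeros(VOCAB, dtype=np.float32)     # log10 backoff weights
probs[3], backoffs[3] = -1.2, -0.4               # entry for word id 3

def unigram(word_id):
    # Dense lookup: direct indexing, no binary search or hashing.
    return probs[word_id], backoffs[word_id]

print(unigram(3))  # (approx. -1.2, -0.4)
```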
In this paper, the authors are of the opinion that the sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
| 0 |
The equation for sampling a single type-level assignment Ti is given by [...]. [Figure 2: Graph of the one-to-one accuracy of our full model (+FEATS) under the best hyperparameter setting by iteration (see Section 5).]
|
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
| 0 |
|
The use of global features has shown excellent result in the performance on MUC-6 and MUC-7 test data.
| 0 |
This group consists of (3 × total number of possible zones) features.
|
The features were weighted within a logistic model that gave an overall weight for each phrase pair; this weight was applied when computing MAP-smoothed relative-frequency estimates, which were combined linearly with relative-frequency estimates from an in-domain phrase table.
| 0 |
This is consistent with the nature of these two settings: log-linear combination, which effectively takes the intersection of IN and OUT, does relatively better on NIST, where the domains are broader and closer together.
|
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers.
| 0 |
While we cannot prove there are no such useful features on which one should condition trust, we can give some insight into why the features we explored offered no gain.
|
The model draws on various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
| 0 |
We are grateful to Chao-Huang Chang for providing us with this set.
|
It is probably the first analysis of Arabic parsing of this kind.
| 0 |
(2009b) evaluated the Bikel parser using the same ATB split, but only reported dev set results with gold POS tags for sentences of length ≤ 40.
|
Human judges also pointed out difficulties with the evaluation of long sentences.
| 0 |
Participants and other volunteers contributed about 180 hours of labor in the manual evaluation.
|
They have made use of local and global features to deal with the instances of the same token in a document.
| 0 |
needs to be in initCaps to be considered for this feature.
|
The model draws on various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
| 0 |
In (b) is a plausible segmentation for this sentence; in (c) is an implausible segmentation.
|
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
| 0 |
The final model tions.
|
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
| 0 |
We hypothesize that modeling morphological information will greatly constrain the set of possible tags, thereby further refining the representation of the tag lexicon.
|
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
| 0 |
Both BBN and NYU have tagged their own data to supplement the official training data.
|
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
| 0 |
In this section, we describe the seven general knowledge sources and explain how the Dempster-Shafer model makes resolutions.
|
It is probably the first analysis of Arabic parsing of this kind.
| 0 |
How should the absence of vowels and syntactic markers influence annotation choices and grammar development?
|
While many systems had similar performance, the results offered interesting insights, especially about the relative performance of statistical and rule-based systems.
| 0 |
We confirm the finding by Callison-Burch et al. (2006) that the rule-based system of Systran is not adequately appreciated by BLEU.
|
Here we present two algorithms.
| 0 |
We again assume a training set of n examples {x1 ... xn} where the first m examples have labels {y1 ... ym}, and the last (n − m) examples are unlabeled.
|
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages.
| 0 |
The second term is a regularizer and encourages all type marginals to be uniform to the extent that is allowed by the first two terms (cf. maximum entropy principle).
|
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
| 0 |
In Eq. (1), Pr(e_1^I) is the language model, which is a trigram language model in this case.
|
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
| 0 |
南瓜們 nan2gua1+men0 'pumpkins' is by no means impossible.
|
Replacing this with a ranked evaluation seems to be more suitable.
| 0 |
While automatic measures are an invaluable tool for the day-to-day development of machine translation systems, they are only an imperfect substitute for human assessment of translation quality, or, as the acronym BLEU puts it, a bilingual evaluation understudy.
|
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
| 0 |
We perform five runs with different random initializations of the sampling state.
|
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
| 0 |
Table 3: Training and test conditions for the Verbmobil task (*number of words without punctuation marks).
|
Because many systems performed similarly, they are not able to draw strong conclusions on the question of correlation between manual and automatic evaluation metrics.
| 0 |
We assumed that such a contrastive assessment would be beneficial for an evaluation that essentially pits different systems against each other.
|
There are clustering approaches that assign a single POS tag to each word type.
| 0 |
Table 3: Multilingual Results: We report token-level one-to-one and many-to-one accuracy on a variety of languages under several experimental settings (Section 5).
|
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
| 0 |
The gender, number, and scoping KSs eliminate candidates from consideration.
|
There are clustering approaches that assign a single POS tag to each word type.
| 0 |
|
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed jointly with parsing in a pipeline.
| 0 |
|
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
| 0 |
(c) After they blindfolded the men...
|
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
| 0 |
The scoping heuristics are based on the anaphor type: for reflexive pronouns the scope is the current clause, for relative pronouns it is the prior clause following its VP, for personal pronouns it is the anaphor's sentence and two preceding sentences, and for definite NPs it is the anaphor's sentence and eight preceding sentences.
|
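A hedged sketch of the sentence-window part of these scoping heuristics; the clause-level scoping for reflexive and relative pronouns is simplified to the current sentence, and the type labels are our own.

```python
def candidate_scope(anaphor_type, sent_index):
    """Return the range of sentence indices searched for antecedents,
    following the heuristics listed in the row above (clause-level
    scoping approximated by the current sentence)."""
    if anaphor_type in ("reflexive", "relative"):
        return range(sent_index, sent_index + 1)
    if anaphor_type == "personal":
        return range(max(0, sent_index - 2), sent_index + 1)
    if anaphor_type == "definite_np":
        return range(max(0, sent_index - 8), sent_index + 1)
    raise ValueError(f"unknown anaphor type: {anaphor_type}")
```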
Replacing this with a ranked evaluation seems to be more suitable.
| 0 |
The easiest language pair according to BLEU (English-French: 28.33) received worse manual scores than the hardest (English-German: 14.01).
|