source_text (string, lengths 27 to 368) | label (int64, values 0 to 1) | target_text (string, lengths 1 to 5.38k)
---|---|---|
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
| 0 |
The Grammar Our parser looks for the most likely tree spanning a single path through the lattice of which the yield is a sequence of lexemes.
|
Here both parametric and non-parametric models are explored.
| 0 |
This technique has the advantage of requiring no training, but it has the disadvantage of treating all parsers equally even though they may have differing accuracies or may specialize in modeling different phenomena.
|
It is probably the first analysis of Arabic parsing of this kind.
| 0 |
None of the models attach the attributive adjectives correctly.
|
Most IE researchers have been creating paraphrase knowledge by hand and for specific tasks.
| 0 |
Also, "agree" in the CC-domain is not a desirable keyword.
|
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
| 0 |
In this paper, we have presented a new, efficient DP-based search procedure for statistical machine translation.
|
Two general approaches are presented and two combination techniques are described for each approach.
| 0 |
The first two rows of the table are baselines.
|
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
| 0 |
Find keywords for each NE pair. When we looked at the contexts for each domain, we noticed that there is one or a few important words which indicate the relation between the NEs (for example, the word "unit" for the phrase "a unit of").
|
This paper talks about Unsupervised Models for Named Entity Classification.
| 0 |
.
|
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
| 0 |
In a model we built with default settings, 1.2% of n + 1-grams were missing their n-gram suffix.
|
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
| 0 |
Aside from adding a simple rule to correct alif deletion caused by the preposition J, no other language-specific processing is performed.
|
The use of global features has shown excellent results on the MUC-6 and MUC-7 test data.
| 0 |
The effect of UNIQ is very small on both data sets.
|
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
| 0 |
Thus our proposed model is a proper model assigning probability mass to all (π, L) pairs, where π is a parse tree and L is the one and only lattice that a sequence of characters (and spaces) W over our alphabet gives rise to.
|
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
| 0 |
A more forceful approach for encoding sparsity is posterior regularization, which constrains the posterior to have a small number of expected tag assignments (Graça et al., 2009).
|
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications.
| 0 |
In our second model GTvpi we also distinguished finite and non-finite verbs and VPs. Lattice parsing can be performed by special initialization of the chart in a CKY parser (Chappelier et al., 1999).
|
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
| 0 |
Table 6: Example Translations for the Verbmobil task.
|
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
| 0 |
For inference, we are interested in the posterior probability over the latent variables in our model.
|
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
| 0 |
In Eq.
|
Their results suggested that it was possible to learn accurate POS taggers for languages which had no annotated data, but had translations into a resource-rich language.
| 0 |
Central to our approach (see Algorithm 1) is a bilingual similarity graph built from a sentence-aligned parallel corpus.
|
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
| 0 |
www.ling.unipotsdam.de/sfb/ Figure 2: Screenshot of the Annis Linguistic Database. 3.3 Symbolic and knowledge-based.
|
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
| 0 |
These systems are similar to those described by Pollard (1984) as Generalized Context-Free Grammars (GCFG's).
|
Explanations for this phenomenon are the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order, factors which complicate syntactic disambiguation.
| 0 |
08 84.
|
The second algorithm builds on a boosting algorithm called AdaBoost.
| 0 |
With each iteration more examples are assigned labels by both classifiers, while a high level of agreement (> 94%) is maintained between them.
|
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
| 0 |
Both parameters depend on a single hyperparameter α.
|
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
| 0 |
Since we could not bias the subjects towards a particular segmentation and did not presume linguistic sophistication on their part, the instructions were simple: subjects were to mark all places they might plausibly pause if they were reading the text aloud.
|
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
| 0 |
SRILM (Stolcke, 2002) is widely used within academia.
|
These clusters are computed using an SVD variant without relying on transitional structure.
| 0 |
4 70.4 46.
|
Here we present two algorithms.
| 0 |
The approach builds from an initial seed set for a category, and is quite similar to the decision list approach described in (Yarowsky 95).
|
The resulting model is compact, efficiently learnable and linguistically expressive.
| 0 |
(2010) consistently outperforms ours on English, we obtain substantial gains across other languages.
|
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
| 0 |
While our method also enforces a single tag per word constraint, it leverages the transition distribution encoded in an HMM, thereby benefiting from a richer representation of context.
|
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
| 0 |
TRIE uses less memory and has better locality.
|
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
| 0 |
For all grammars, we use fine-grained PoS tags indicating various morphological features annotated therein.
|
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
| 0 |
First, the model assumes independence between the first and second hanzi of a double given name.
|
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
| 0 |
Alternatively, h can be thought of as defining a decision list of rules x → y ranked by their "strength" h(x, y).
|
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
| 0 |
Table 2 Similarity matrix for segmentation judgments.
|
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
| 0 |
(Levinger et al., 1995; Goldberg et al.; Adler et al., 2008)) will make the parser more robust and suitable for use in more realistic scenarios.
|
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
| 0 |
The translation direction is from German to English.
|
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
| 0 |
The judgements tend to take the form of a ranking of the different systems.
|
In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
| 0 |
The Dempster-Shafer rule for combining pdfs is: m3(S) = Σ_{X∩Y=S} m1(X)·m2(Y) / (1 − Σ_{X∩Y=∅} m1(X)·m2(Y)). Here one pdf assigns .70 to {C}, meaning that it is 70% sure the correct hypothesis is C. The intersection of these sets is the null set because these beliefs are contradictory.
|
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
| 0 |
In total 13,976 phrases are assigned to sets of phrases, and the accuracy on our evaluation data ranges from 65 to 99%, depending on the domain and the size of the sets.
|
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
| 0 |
including Third Tone Sandhi (Shih 1986), which changes a 3 (low) tone into a 2 (rising) tone before another 3 tone: 小老鼠 xiao3 [lao3shu3] 'little rat' becomes xiao3 [lao2shu3], rather than xiao2 [lao2shu3], because the rule first applies within the word lao3shu3 'rat,' blocking its phrasal application.
|
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems.
| 1 |
While many systems had similar performance, the results offer interesting insights, especially about the relative performance of statistical and rule-based systems.
|
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
| 0 |
Caching for IRSTLM is smaller at 0.09 GB resident memory, though it supports only a single thread.
|
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement.
| 0 |
However, we have reason to doubt Chang et al.'s performance claims.
|
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
| 0 |
pronunciation depends upon word affiliation: 的 is pronounced de0 when it is a prenominal modification marker, but di4 in the word 目的 mu4di4 'goal'; 乾 is normally gan1 'dry,' but qian2 in a person's given name.
|
They showed that it was useful to abstract away from the details of the formalism and examine the nature of the derivation process as reflected by properties of the derivation trees, and found that several of the formalisms considered can be seen as being closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
| 0 |
Thus, any language that is letter equivalent to a semilinear language is also semilinear.
|
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
| 0 |
123 examples fell into the noise category.
|
Here we show how non-projective dependency parsing can be achieved by combining a data-driven projective parser with special graph transformation techniques.
| 0 |
Before we turn to the evaluation, however, we need to introduce the data-driven dependency parser used in the latter experiments.
|
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
| 0 |
Note that in our model, conditioned on T , there is precisely one t which has nonzero probability for the token component, since for each word, exactly one θt has support.
|
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
| 0 |
The additional morphological material in such cases appears after the stem and realizes the extended meaning.
|
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
| 0 |
Hence, we take the probability of the event fmnh analyzed as REL VB to be P(f | REL) × P(mnh | VB). This means that we generate f and mnh independently depending on their corresponding PoS tags, and the context (as well as the syntactic relation between the two) is modeled via the derivation resulting in a sequence REL VB spanning the form fmnh based on linear context.
|
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
| 0 |
In addition to the automatic methods, AG, GR, and ST, just discussed, we also added to the plot the values for the current algorithm using only dictionary entries (i.e., no productively derived words or names).
|
The texts were annotated with the RSTtool.
| 0 |
A number of PCC commentaries will be read by professional news speakers and prosodic features will be annotated, so that the various annotation layers can be set into correspondence with intonation patterns.
|
Here we show how non-projective dependency parsing can be achieved by combining a data-driven projective parser with special graph transformation techniques.
| 0 |
We call this pseudoprojective dependency parsing, since it is based on a notion of pseudo-projectivity (Kahane et al., 1998).
|
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
| 0 |
Therefore in cases where the segmentation is identical between the two systems we assume that tagging is also identical.
|
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
| 0 |
They return a value in the range [0,1], where 0 indicates neutrality and 1 indicates the strongest belief that the candidate and anaphor are coreferent.
|
This corpus has several advantages: it is annotated at different levels.
| 0 |
Currently, some annotations (in particular the connectives and scopes) have already moved beyond the core corpus; the others will grow step by step.
|
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
| 0 |
For each sentence, we counted how many n-grams in the system output also occurred in the reference translation.
|
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement.
| 0 |
4.1 Dictionary Representation.
|
The texts were annotated with the RSTtool.
| 0 |
The general idea for the knowledge- based part is to have the system use as much information as it can find at its disposal to produce a target representation as specific as possible and as underspecified as necessary.
|
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
| 0 |
CFG's, TAG's, MCTAG's and HG's are all members of this class since they satisfy these restrictions.
|
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
| 0 |
MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).
|
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
| 0 |
Translation errors are reported in terms of multireference word error rate (mWER) and subjective sentence error rate (SSER).
|
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
| 0 |
Table 3 Classes of words found by ST for the test corpus.
|
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
| 0 |
76 16.
|
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
| 0 |
76 16.
|
It is probably the first analysis of Arabic parsing of this kind.
| 0 |
However, the data sparsity induced by vocalization makes it difficult to train statistical models on corpora of the size of the ATB, so vocalizing and then parsing may well not help performance.
|
In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
| 0 |
Initially, we planned to compare the semantic classes of an anaphor and a candidate and infer that they might be coreferent if their semantic classes intersected.
|
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
| 0 |
The test data was again drawn from a segment of the Europarl corpus from the fourth quarter of 2000, which is excluded from the training data.
|
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement.
| 0 |
In the denominator, the N31s can be measured well by counting, and we replace the expectation by the observation.
|
Here both parametric and non-parametric models are explored.
| 0 |
Proof: Assume a pair of crossing constituents appears in the output of the constituent voting technique using k parsers.
|
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
| 0 |
Secondly, for all our models we provide better fine- and coarse-grained POS-tagging accuracy, and all pruned models outperform the Oracle results reported by them. In terms of syntactic disambiguation, even the simplest grammar pruned with HSPELL outperforms their non-Oracle results.
|
This paper talks about KenLM: Faster and Smaller Language Model Queries.
| 0 |
If they knew that the first four words in a hypergraph node would never extend to the left and form a 5-gram, then three or even fewer words could be kept in the backward state.
|
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
| 0 |
If one is interested in translation, one would probably want to consider show up as a single dictionary word since its semantic interpretation is not trivially derivable from the meanings of show and up.
|
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
| 0 |
pre-processing.
|
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
| 0 |
Making the reasonable assumption that similar information is relevant for solving these problems in Chinese, it follows that a prerequisite for intonation-boundary assignment and prominence assignment is word segmentation.
|
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
| 0 |
yu2 'fish.'
|
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
| 0 |
An inverted alignment is defined as follows: inverted alignment: i → j = bi. Target positions i are mapped to source positions bi.
|
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
| 0 |
Our approach uses word-to-word dependencies between source and target words.
|
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
| 0 |
Before describing the unsupervised case we first describe the supervised version of the algorithm: Input to the learning algorithm: n labeled examples of the form (xi, yi). yi is the label of the ith example (given that there are k possible labels, yi is a member of Y = {1 ... k}). xi is a set of mi features {xi1, xi2 ... ximi}.
|
Two general approaches are presented and two combination techniques are described for each approach.
| 0 |
Each of the constituents must have received at least ⌈(k+1)/2⌉ votes from the k parsers, so a ≥ ⌈(k+1)/2⌉ and b ≥ ⌈(k+1)/2⌉.
|
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
| 0 |
We evaluated BABAR on two domains: terrorism and natural disasters.
|
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
| 0 |
In our model, we associate these features at the type-level in the lexicon.
|
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language.
| 0 |
As indicated by bolding, for seven out of eight languages the improvements of the “With LP” setting are statistically significant with respect to the other models, including the “No LP” setting. Overall, it performs 10.4% better than the hitherto state-of-the-art feature-HMM baseline, and 4.6% better than direct projection, when we macro-average the accuracy over all languages.
|
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.
| 0 |
Before presenting our results, we describe the datasets that we used, as well as two baselines.
|
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
| 0 |
For terrorism, BABAR generated 5,078 resolutions: 2,386 from lexical seeding and 2,692 from syntactic seeding.
|
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement.
| 0 |
The first four affixes are so-called resultative affixes: they denote some property of the resultant state of a verb, as in 忘不了 wang4bu4-liao3 (forget-not-attain) 'cannot forget.'
|
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
| 0 |
This representation gives ir, an appropriate morphological decomposition, preserving information that would be lost by simply listing ir, as an unanalyzed form.
|
There is no global pruning.
| 0 |
The effect of the pruning threshold t0 is shown in Table 5.
|
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
| 0 |
-1 means that an NP should be ruled out as a possible antecedent, and 0 means that the knowledge source remains neutral (i.e., it has no reason to believe that they cannot be coreferent).
|
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
| 0 |
Our suspicion is that BLEU is very sensitive to jargon, to selecting exactly the right words, and not synonyms that human judges may appreciate as equally good.
|
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
| 0 |
The task can be considered to be one component of the MUC (MUC-6, 1995) named entity task (the other task is that of segmentation, i.e., pulling possible people, places and locations from text before sending them to the classifier).
|
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
| 0 |
But in most cases they can be used interchangeably.
|
This assumption, however, is not inherent to type-based tagging models.
| 0 |
8 66.4 52.
|
Combining multiple highly-accurate independent parsers yields promising results.
| 0 |
We call this technique constituent voting.
|
Two general approaches are presented and two combination techniques are described for each approach.
| 0 |
The hypothesized parse is then the set of constituents that are likely (P > 0.5) to be in the parse according to this model.
|
Explanations for this phenomenon are the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order, factors which complicate syntactic disambiguation.
| 0 |
Instead of offsetting new topics with punctuation, writers of MSA insert connectives such as و wa and ف fa to link new elements to both preceding clauses and the text as a whole.
|
This paper conducted research in the area of automatic paraphrase discovery.
| 0 |
and "H" represents "Hanson Plc".
|
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations for entities already mentioned previously.
| 0 |
All our results are obtained by using only the official training data provided by the MUC conferences.
|
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
| 0 |
For sorted lookup, we compare interpolation search, standard C++ binary search, and standard C++ set based on red-black trees.
|
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
| 0 |
In Eq.
|
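The rows above pair a source_text sentence with a target_text passage and a binary label. As a minimal sketch of how one might load and inspect such rows with the Hugging Face datasets library (the repository id "user/dataset-name" and the "train" split are placeholders, since neither is shown in this preview):

```python
# Minimal sketch, assuming the dataset is hosted on the Hugging Face Hub.
# "user/dataset-name" is a placeholder repository id, not the real one.
from datasets import load_dataset

ds = load_dataset("user/dataset-name", split="train")

# Each row carries the three columns shown in the preview.
for row in ds.select(range(3)):
    print(row["label"], "|", row["source_text"][:60], "->", row["target_text"][:60])

# Keep only the rows labeled 1.
positives = ds.filter(lambda row: row["label"] == 1)
print(len(positives), "positive pairs")
```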