source_text
stringlengths
27
368
label
int64
0
1
target_text
stringlengths
1
5.38k
The departure from the traditional token-based tagging approach allow them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
0 55.3 34.
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
For example, in Information Retrieval (IR), we have to match a user’s query to the expressions in the desired documents, while in Question Answering (QA), we have to find the answer to the user’s question even if the formulation of the answer in the document is different from the question.
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
2.1.1 Lexical Seeding It is generally not safe to assume that multiple occurrences of a noun phrase refer to the same entity.
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used simpler training procedure.
0
An alternate approximation to (8) would be to let w,\(s, t) directly approximate pˆI(s, t).
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.
0
The feature-based model replaces the emission distribution with a log-linear model, such that: on the word identity x, features checking whether x contains digits or hyphens, whether the first letter of x is upper case, and suffix features up to length 3.
Here we present two algorithms.
0
In this case, Maury Cooper is extracted.
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
Models To assess the marginal utility of each component of the model (see Section 3), we incremen- tally increase its sophistication.
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
0
Subsets C of increasing cardinality c are processed.
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement.
0
Full Chinese personal names are in one respect simple: they are always of the form family+given.
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
More formally, we start by representing the dictionary D as a Weighted Finite State Trans­ ducer (WFST) (Pereira, Riley, and Sproat 1994).
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform.
0
For some language pairs (such as GermanEnglish) system performance is more divergent than for others (such as English-French), at least as measured by BLEU.
In this paper, the authors are of the opinion that the sequence models-based approaches usually treat token-level tag assignment as the primary latent variable.
0
A more forceful approach for encoding sparsity is posterior regularization, which constrains the posterior to have a small number of expected tag assignments (Grac¸a et al., 2009).
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
selected; and that recall is defined to be the number of correct hits divided by the number of items that should have been selected.
The departure from the traditional token-based tagging approach allow them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
1 1 0.
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers.
0
We call this approach parse hybridization.
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages.
0
It would have therefore also been possible to use the integer programming (IP) based approach of Ravi and Knight (2009) instead of the feature-HMM for POS induction on the foreign side.
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
We loosely describe the class of all such systems as Linear Context-Free Rewriting Formalisms.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
In contrast to results reported in Johnson (2007), we found that the per P (Ti|T −i, β) n (f,v)∈Wi P (v|Ti, f, W −i, T −i, β) formance of our Gibbs sampler on the basic 1TW model stabilized very quickly after about 10 full it All of the probabilities on the right-hand-side are Dirichlet, distributions which can be computed analytically given counts.
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
For speed, we plan to implement the direct-mapped cache from BerkeleyLM.
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
0
All four of the techniques studied result in parsing systems that perform better than any previously reported.
Here we present two algorithms.
0
Our derivation is slightly different from the one presented in (Schapire and Singer 98) as we restrict at to be positive.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
Given a sorted array A, these other packages use binary search to find keys in O(log |A|) time.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
Topicalization of NP subjects in SVO configurations causes confusion with VO (pro-drop).
This assumption, however, is not inherent to type-based tagging models.
0
36.
The approach has been successfully tested on the 8 000-word Verbmobil task.
0
f1; ; Jg denotes a coverage set including all positions from the starting position 1 to position J and j 2 fJ 􀀀L; ; Jg.
There is no global pruning.
0
the number of permutations carried out for the word reordering is given.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
Particular relations are also consistent with particular hypotheses about the segmentation of a given sentence, and the scores for particular relations can be incremented or decremented depending upon whether the segmentations with which they are consistent are "popular" or not.
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
[Hasegawa et al. 04] reported only on relation discovery, but one could easily acquire para phrases from the results.
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
0
The resulting algorithm has a complexity of O(n!).
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
KS Function Ge nde r filters candidate if gender doesn’t agree.
They have made use of local and global features to deal with the instances of same token in a document.
0
Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.
In this paper, the authors are of the opinion that the sequence models-based approaches usually treat token-level tag assignment as the primary latent variable.
0
We evaluate our model on seven languages exhibiting substantial syntactic variation.
Instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
0
Mixing, smoothing, and instance-feature weights are learned at the same time using an efficient maximum-likelihood procedure that relies on only a small in-domain development corpus.
They showed that it was useful to abstract away from the details of the formalism, and examine the nature of their derivation process as reflected by properties their trees, find that several of the formalisms considered can be seen as being closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
.
Explanations for this phenomenon are relative informativeness of lexicalization, insensitivity to morphology and the effect of variable word order and these factors lead to syntactic disambiguation.
0
Richer tag sets have been suggested for modeling morphologically complex distinctions (Diab, 2007), but we find that linguistically rich tag sets do not help parsing.
There is no global pruning.
0
For the error counts, a range from 0:0 to 1:0 is used.
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers.
0
At each point during the derivation, the parser has a choice between pushing the next input token onto the stack – with or without adding an arc from the token on top of the stack to the token pushed – and popping a token from the stack – with or without adding an arc from the next input token to the token popped.
The departure from the traditional token-based tagging approach allow them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
The hyperparameters α and β represent the concentration parameters of the token- and type-level components of the model respectively.
This corpus has several advantages: it is annotated at different levels.
0
Figure 1: Translation of PCC sample commentary (STTS)2.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
We have modified Moses (Koehn et al., 2007) to keep our state with hypotheses; to conserve memory, phrases do not keep state.
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective tree banks is often neglected because of the relative scarcity of problematic constructions.
0
At each point during the derivation, the prediction is based on six word tokens, the two topmost tokens on the stack, and the next four input tokens.
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
0
In Eq.
NER is useful in many NLP applications such as information extraction, question answering, etc .Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.
The use of global features has shown excellent result in the performance on MUC-6 and MUC-7 test data.
0
For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
Learned Tag Prior (PRIOR) We next assume there exists a single prior distribution ψ over tag assignments drawn from DIRICHLET(β, K ).
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
markBaseNP indicates these non-recursive nominal phrases.
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement.
0
But we also need an estimate of the probability for a non-occurring though possible plural form like i¥JJ1l.f, nan2gua1-men0 'pumpkins.'
It is annotated with several data: morphology, syntax, rhetorical structure, connectors, correference and informative structure.
0
And indeed, converging on annotation guidelines is even more difficult than it is with co-reference.
They plan on extending instance-weighting to other standard SMT components and capture the degree of generality of phrase pairs.
0
The linear LM (lin lm), TM (lin tm) and MAP TM (map tm) used with non-adapted counterparts perform in all cases slightly worse than the log-linear combination, which adapts both LM and TM components.
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
0
As seen by the drop in average individual parser performance baseline, the introduced parser does not perform very well.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
In our case multi-threading is trivial because our data structures are read-only and uncached.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
Table 4 Differences in performance between our system and Wang, Li, and Chang (1992).
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
Morphologically derived words such as, xue2shengl+men0.
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement.
0
a classifier.
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
As noted in Section 1, our code finds the longest matching entry wnf for query p(wn|s(wn−1 f ) The probability p(wn|wn−1 f ) is stored with wnf and the backoffs are immediately accessible in the provided state s(wn−1 When our code walks the data structure to find wnf , it visits wnn, wnn−1, ... , wnf .
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
However, they list two sets, one consisting of 28 fragments and the other of 22 fragments, in which they had 0% recall and precision.
Instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
0
This is less effective in our setting, where IN and OUT are disparate.
The use of global features has shown excellent result in the performance on MUC-6 and MUC-7 test data.
0
Besides size of training data, the use of dictionaries is another factor that might affect performance.
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures.
0
As expected, the most informative encoding, Head+Path, gives the highest accuracy with over 99% of all non-projective arcs being recovered correctly in both data sets.
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
both MENE and IdentiFinder used more training data than we did (we used only the official MUC 6 and MUC7 training data).
This topic has been getting more attention, driven by the needs of various NLP applications.
0
x EG, has agreed to be bought by H x EG, now owned by H x H to acquire EG x H’s agreement to buy EG Three of those phrases are actually paraphrases, but sometime there could be some noise; such as the second phrase above.
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an eÆcient search algorithm.
0
This measure has the advantage of being completely automatic.
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
The following context-free production captures the derivation step of the grammar shown in Figure 7, in which the trees in the auxiliary tree set are adjoined into themselves at the root node (address c).
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
The frequency of the Company – Company domain ranks 11th with 35,567 examples.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
In such cases we assign all of the estimated probability mass to the form with the most likely pronunciation (determined by inspection), and assign a very small probability (a very high cost, arbitrarily chosen to be 40) to all other variants.
Explanations for this phenomenon are relative informativeness of lexicalization, insensitivity to morphology and the effect of variable word order and these factors lead to syntactic disambiguation.
0
For each 13 Of course, this weighting makes the PCFG an improper distribution.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
In contrast to these approaches, our method directly incorporates these constraints into the structure of the model.
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language.
0
The function A : F —* C maps from the language specific fine-grained tagset F to the coarser universal tagset C and is described in detail in §6.2: Note that when tx(y) = 1 the feature value is 0 and has no effect on the model, while its value is −oc when tx(y) = 0 and constrains the HMM’s state space.
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures.
0
Prague Dependency Treebank (Hajiˇc et al., 2001b), Danish Dependency Treebank (Kromann, 2003), and the METU Treebank of Turkish (Oflazer et al., 2003), which generally allow annotations with nonprojective dependency structures.
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
0
The model is often further restricted so that each source word is assigned to exactly one target word (Brown et al., 1993; Ney et al., 2000).
The second algorithm builds on a boosting algorithm called AdaBoost.
0
123 examples fell into the noise category.
It is annotated with several data: morphology, syntax, rhetorical structure, connectors, correference and informative structure.
0
The Potsdam Commentary Corpus
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
The translation search is carried out with the category markers and the city names are resubstituted into the target sentence as a postprocessing step.
Explanations for this phenomenon are relative informativeness of lexicalization, insensitivity to morphology and the effect of variable word order and these factors lead to syntactic disambiguation.
0
English parsing evaluations usually report results on sentences up to length 40.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
While Gan's system incorporates fairly sophisticated models of various linguistic information, it has the drawback that it has only been tested with a very small lexicon (a few hundred words) and on a very small test set (thirty sentences); there is therefore serious concern as to whether the methods that he discusses are scalable.
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
We are currently exploring such algorithms.
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back.
0
The judgement of 4 in the first case will go to a vastly better system output than in the second case.
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
If is seen infrequently during training (less than a small count), then will not be selected as a feature and all features in this group are set to 0.
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems.
0
We computed BLEU scores for each submission with a single reference translation.
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
The first concerns how to deal with ambiguities in segmentation.
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
0
Table 5: Effect of the beam threshold on the number of search errors (147 sentences).
It also incorporates the Good-Turing methodin estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
A totally non­ stochastic rule-based system such as Wang, Li, and Chang's will generally succeed in such cases, but of course runs the risk of overgeneration wherever the single-hanzi word is really intended.
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
See Figure 3 for a screenshot of the evaluation tool.
Instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
0
The idea of distinguishing between general and domain-specific examples is due to Daum´e and Marcu (2006), who used a maximum-entropy model with latent variables to capture the degree of specificity.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
£ : _ADV: 5.88 If:!
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers.
0
While the proportion of sentences containing non-projective dependencies is often 15–25%, the total proportion of non-projective arcs is normally only 1–2%.
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.
0
gaolbu4-gaolxing4 (hap-not-happy) 'happy?'
NER is useful in many NLP applications such as information extraction, question answering, etc .Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
Lexicon Feature: The string of the token is used as a feature.
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
0
We apply a beam search concept as in speech recognition.
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
One of the strengths of the DempsterShafer model is its natural ability to recognize when several credible hypotheses are still in play.
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
Second, the reduced number of hidden variables and parameters dramatically speeds up learning and inference.
Explanations for this phenomenon are relative informativeness of lexicalization, insensitivity to morphology and the effect of variable word order and these factors lead to syntactic disambiguation.
0
This feature has a linguistic justification.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
7 Unlike Dickinson (2005), we strip traces and only con-.
There are clustering approaches that assign a single POS tag to each word type.
0
The difference between the featureless model (+PRIOR) and our full model (+FEATS) is 13.6% and 7.7% average error reduction on best and median settings respectively.
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.
0
For each language under consideration, Petrov et al. (2011) provide a mapping A from the fine-grained language specific POS tags in the foreign treebank to the universal POS tags.
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
The first step in the learning process is to generate training examples consisting of anaphor/antecedent resolutions.
All the texts were annotated by two people.
0
2.3 Rhetorical structure.
It also incorporates the Good-Turing methodin estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
Table 2 shows these similarity measures.
The corpus was annoted with different linguitic information.
0
2.6 Co-reference.
A beam search concept is applied as in speech recognition.
0
f;g denotes the empty set, where no source sentence position is covered.