Columns: source_text (string, length 27 to 368 characters), label (int64, values 0 or 1), target_text (string, length 1 to 5.38k characters).
This corpus has several advantages: it is annotated at different levels.
0
Again, the idea is that having a picture of syntax, co-reference, and sentence-internal information structure at one’s disposal should aid in finding models of discourse structure that are more explanatory and can be empirically supported.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
Figure 1: Graphical depiction of our model and summary of latent variables and parameters. w: token word sequences (observed); t: token tag assignments (determined by T); ψ: lexicon parameters; θ: token word emission parameters; φ: token tag transition parameters.
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
{1, ..., J} denotes a coverage set including all positions from the starting position 1 to position J, and j ∈ {J − L, ..., J}.
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
A generative model was applied (similar to naive Bayes) with the three labels as hidden variables on unlabeled examples, and observed variables on (seed) labeled examples.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
However, there will remain a large number of words that are not readily adduced to any productive pattern and that would simply have to be added to the dictionary.
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
0
Supervised learning approaches have advanced the state-of-the-art on a variety of tasks in natural language processing, resulting in highly accurate systems.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
We use two common techniques, hash tables and sorted arrays, describing each before the model that uses the technique.
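As an aside, here is a minimal sketch of a linear-probing lookup, to illustrate the general technique only; this is not KenLM's actual PROBING code, and the table layout and hash function are assumptions:

    # Linear probing: hash to a start bucket, then scan forward until the
    # key or an empty slot is found. The table is a fixed-size list whose
    # free slots hold None; it must always keep at least one empty slot.
    def probing_find(table, key):
        size = len(table)
        i = hash(key) % size
        while table[i] is not None:
            k, value = table[i]
            if k == key:
                return value          # hit
            i = (i + 1) % size        # probe the next slot
        return None                   # miss

    def probing_insert(table, key, value):
        size = len(table)
        i = hash(key) % size
        while table[i] is not None and table[i][0] != key:
            i = (i + 1) % size
        table[i] = (key, value)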
The model incorporates various recent techniques for representing and manipulating linguistic knowledge using finite-state transducers.
0
The first point we need to address is what type of linguistic object a hanzi represents.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
Table 3 shows BABAR’s performance when the four contextual role knowledge sources are added.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
The final estimating equation is then given in Eq. (3). Since the total of all these class estimates was about 10% off from the Turing estimate n1/N for the probability of all unseen hanzi, we renormalized the estimates so that they would sum to n1/N.
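For reference, the Turing estimate mentioned here is the standard Good-Turing mass for unseen events, and the renormalization the text describes amounts to rescaling the per-class estimates; this is a sketch of that step, not the paper's Eq. (3):

\[ p(\text{unseen}) \approx \frac{n_1}{N}, \qquad \hat{p}(c) \leftarrow p(c)\cdot\frac{n_1/N}{\sum_{c'} p(c')} \]

where n1 is the number of hanzi observed exactly once, N is the total number of observed hanzi tokens, and the p(c) are the per-class estimates being rescaled so that they sum to n1/N.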
Here we present two algorithms.
0
The second algorithm extends ideas from boosting algorithms, designed for supervised learning tasks, to the framework suggested by (Blum and Mitchell 98).
They focused on phrases which connect two Named Entities, and proceeded in two stages.
0
As you can see in the figure, the accuracy for the domain is quite high except for the “agree” set, which contains various expressions representing different relationships for an IE application.
This paper talks about Unsupervised Models for Named Entity Classification.
0
Unfortunately, Yarowsky's method is not well understood from a theoretical viewpoint: we would like to formalize the notion of redundancy in unlabeled data, and set up the learning task as optimization of some appropriate objective function.
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results.
0
However, there is a crucial difference: the morphological probabilities in their model come from discriminative models based on linear context.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
The Stanford parser includes both the manually annotated grammar (§4) and an Arabic unknown word model with the following lexical features: 1.
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
0
Given the bilingual graph described in the previous section, we can use label propagation to project the English POS labels to the foreign language.
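A generic form of the label-propagation update being referred to, as a sketch only; the paper's actual graph objective and constraints differ in detail:

\[ q_i^{(t)}(y) \;\propto\; \sum_{j \in N(i)} w_{ij}\, q_j^{(t-1)}(y) \]

where q_i(y) is the distribution over POS labels y at graph vertex i, w_ij are edge weights, and vertices carrying projected English labels are kept clamped to those distributions across iterations.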
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
0
Current state-of-the-art broad-coverage parsers assume a direct correspondence between the lexical items ingrained in the proposed syntactic analyses (the yields of syntactic parse-trees) and the space-delimited tokens (henceforth, ‘tokens’) that constitute the unanalyzed surface forms (utterances).
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
For example, from the sentence “Mr.
These clusters are computed using an SVD variant without relying on transitional structure.
0
Parameter Component: As in the standard Bayesian HMM (Goldwater and Griffiths, 2007), all distributions are independently drawn from symmetric Dirichlet distributions. (Note that t and w denote tag and word sequences respectively, rather than individual tokens or tags.)
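For orientation, the standard Bayesian HMM generative story this excerpt builds on can be sketched as follows, with symmetric Dirichlet priors sharing the hyperparameter α as described; the notation is illustrative:

\[ \phi_k \sim \mathrm{Dir}(\alpha), \quad \theta_k \sim \mathrm{Dir}(\alpha), \quad t_i \mid t_{i-1} \sim \mathrm{Mult}(\phi_{t_{i-1}}), \quad w_i \mid t_i \sim \mathrm{Mult}(\theta_{t_i}) \]

with φ the tag-transition distributions, θ the word-emission distributions, t the tag sequence, and w the word sequence.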
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
In this section, we briefly review our translation approach.
The AdaBoost algorithm was developed for supervised learning.
0
The DL-CoTrain algorithm can be motivated as being a greedy method of satisfying the above 2 constraints.
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
Many packages perform language model queries.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
(2008) reported agreement between the teams (measured with Evalb) at 93.8% F1, the level of the CTB.
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure.
0
The 2nd block contains the IR system, which was tuned by selecting text in multiples of the size of the EMEA training corpus, according to dev set performance.
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages.
0
As discussed in more detail in §3, we use two types of vertices in our graph: on the foreign language side vertices correspond to trigram types, while the vertices on the English side are individual word types.
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
Case and Zone of and : Similarly, if (or ) is initCaps, a feature (initCaps, zone) (or (initCaps, zone) ) is set to 1, etc. Token Information: This group consists of 10 features based on the string , as listed in Table 1.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
Since guess and gold trees may now have different yields, the question of evaluation is complex.
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.
0
By taking the ratio of matching n-grams to the total number of n-grams in the system output, we obtain the precision pn for each n-gram order n. These values for n-gram precision are combined into a BLEU score. The formula for the BLEU metric also includes a brevity penalty for too-short output, which is based on the total number of words in the system output c and in the reference r. BLEU is sensitive to tokenization.
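The formula referred to is the standard BLEU definition; it is reconstructed here because the equation itself did not survive extraction:

\[ \mathrm{BLEU} = \mathrm{BP} \cdot \exp\Big(\sum_{n=1}^{N} w_n \log p_n\Big), \qquad \mathrm{BP} = \begin{cases} 1 & c > r \\ e^{\,1 - r/c} & c \le r \end{cases} \]

with uniform weights w_n = 1/N (typically N = 4), total system output length c, and reference length r.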
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
0
We pick the parse that is most similar to the other parses by choosing the one with the highest sum of pairwise similarities.
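A minimal sketch of that selection rule; the similarity function (e.g., a constituent-overlap F1 score) is an assumption, and this is not the authors' code:

    def pick_most_similar(parses, similarity):
        """Return the parse with the highest sum of pairwise similarities
        to the other parses; similarity(a, b) is any symmetric score."""
        best, best_score = None, float("-inf")
        for i, p in enumerate(parses):
            score = sum(similarity(p, q)
                        for j, q in enumerate(parses) if j != i)
            if score > best_score:
                best, best_score = p, score
        return best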
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
However, their work did not consider other types of lexical expectations (e.g., PP arguments), semantic expectations, or context comparisons like our caseframe network. (Niyu et al., 1998) used unsupervised learning to acquire gender, number, and animacy information from resolutions produced by a statistical pronoun resolver.
The use of global features has shown excellent results in performance on the MUC-6 and MUC-7 test data.
0
If is one of Monday, Tuesday, . . .
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech.
0
On the other hand, we can expect Head+Path to be the most useful representation for reconstructing the underlying non-projective dependency graph.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
This modification brings the method closer to the DL-CoTrain algorithm described earlier, and is motivated by the intuition that all three labels should be kept healthily populated in the unlabeled examples, preventing one label from dominating — this deserves more theoretical investigation.
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
A dynamic programming recursion similar to the one in Eq. 2 is evaluated.
The corpus was annotated with different linguistic information.
0
Either save money at any cost - or give priority to education.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
Statistical methods seem particularly applicable to the problem of unknown-word identification, especially for constructions like names, where the linguistic constraints are minimal, and where one therefore wants to know not only that a particular sequence of hanzi might be a name, but that it is likely to be a name with some probability.
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
This feature imposes constraints that are based on the probability of each name class during training. (Table 1: Features based on the token string.)
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
Overall, the difference between our most basic model (1TW) and our full model (+FEATS) is 21.2% and 13.1% for the best and median settings respectively.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
The Stanford parser includes both the manually annotated grammar (§4) and an Arabic unknown word model with the following lexical features: 1.
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.
0
For each trigram type x2 x3 x4 in a sequence x1 x2 x3 x4 x5, we count how many times that trigram type co-occurs with the different instantiations of each concept, and compute the point-wise mutual information (PMI) between the two. The similarity between two trigram types is given by summing over the PMI values over feature instantiations that they have in common.
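The point-wise mutual information used here is the standard definition; for a trigram type x and a feature instantiation y:

\[ \mathrm{PMI}(x, y) = \log \frac{p(x, y)}{p(x)\, p(y)} \]

with the probabilities estimated from co-occurrence counts, and the similarity of two trigram types obtained by summing the PMI values over the feature instantiations they share.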
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers.
0
In the second scheme, Head+Path, we in addition modify the label of every arc along the lifting path from the syntactic to the linear head so that if the original label is p the new label is p↓.
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language.
0
Furthermore, we expect the label distributions on the foreign side to be fairly noisy, because the graph constraints have not been taken into account yet.
They found replacing it with a ranked evaluation to be more suitable.
0
The BLEU score has been shown to correlate well with human judgement, when statistical machine translation systems are compared (Doddington, 2002; Przybocki, 2004; Li, 2005).
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
Here, all token-internal collocations of tags unseen in our training data are pruned away.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
In this section, we describe how contextual role knowledge is represented and learned.
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
0
We first make use of our morphological analyzer to find all segmentation possibilities by chopping off all prefix sequence possibilities (including the empty prefix) and construct a lattice off of them.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
For example, the company may refer to Company X in one paragraph and Company Y in another.
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers.
0
In the following, we use the notation wi →r wj to mean that (wi, r, wj) ∈ A; we also use wi → wj to denote an arc with unspecified label, and wi →∗ wj for the reflexive and transitive closure of the (unlabeled) arc relation.
The model incorporates various recent techniques for representing and manipulating linguistic knowledge using finite-state transducers.
0
We have provided methods for handling certain classes of unknown words, and models for other classes could be provided, as we have noted.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
We thus decided to pay specific attention to them and introduce an annotation layer for connectives and their scopes.
Here we show how non-projective dependency parsing can be achieved by combining a data driven projective parser with special graph transformation techniques.
0
Memory-based classifiers for the experiments were created using TiMBL (Daelemans et al., 2003).
The AdaBoost algorithm was developed for supervised learning.
0
In the co-training case, (Blum and Mitchell 98) argue that the task should be to induce functions f1 and f2 such that both conditions hold: f1 and f2 must (1) correctly classify the labeled examples, and (2) must agree with each other on the unlabeled examples.
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
A promising direction for future work is to explicitly model a distribution over tags for each word type.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
Note that Zt is a normalization constant that ensures the distribution Dt+1 sums to 1; it is a function of the weak hypothesis ht and the weight αt chosen for that hypothesis at the t-th round.
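For reference, the normalization Zt belongs to the standard AdaBoost distribution update, sketched here in its generic form; the paper's CoBoost variant adapts it to the two-view setting:

\[ D_{t+1}(i) = \frac{D_t(i)\,\exp\!\big(-\alpha_t\, y_i\, h_t(x_i)\big)}{Z_t}, \qquad Z_t = \sum_i D_t(i)\,\exp\!\big(-\alpha_t\, y_i\, h_t(x_i)\big) \]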
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
Performance typically stabilizes across languages after only a few iterations.
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems.
0
The scores and confidence intervals are detailed first in the Figures 7–10 in table form (including ranks), and then in graphical form in Figures 11–16.
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
0
Replacing this with a ranked evaluation seems to be more suitable.
0
The bias of automatic methods in favor of statistical systems seems to be less pronounced on out-of-domain test data.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
1
Using a treebank grammar, a data-driven lexicon, and a linguistically motivated unknown-tokens handling technique our model outperforms previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
The relevance of the distinction between, say, phonological words and, say, dictionary words is shown by an example like 中华人民共和国 zhong1hua2 ren2min2 gong4he2-guo2 (China people republic) 'People's Republic of China.'
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform.
0
Training and testing is based on the Europarl corpus.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
Typical data structures are generalized Bloom filters that guarantee a customizable probability of returning the correct answer.
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
0
We briefly review it here for completeness.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
Surprisingly, this effect is much less obvious for out-of-domain test data.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
So, this was a surprise element due to practical reasons, not malice.
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table.
0
The techniques we develop can be extended in a relatively straightforward manner to the more general case when OUT consists of multiple sub-domains.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
KenLM: Faster and Smaller Language Model Queries
This paper conducted research in the area of automatic paraphrase discovery.
0
This limitation is the obstacle to making the technology “open domain”.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
Having explained the various layers of annotation in PCC, we now turn to the question of what all this might be good for.
The use of global features has shown excellent results in performance on the MUC-6 and MUC-7 test data.
0
”, because Bush is the first word, the initial caps might be due to its position (as in “They put a freeze on . . .
These clusters are computed using an SVD variant without relying on transitional structure.
0
Hyperparameters Our model has two Dirichlet concentration hyperparameters: α is the shared hyperparameter for the token-level HMM emission and transition distributions.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
hanzi in the various name positions, derived from a million names.
This paper conducted research in the area of automatic paraphrase discovery.
0
The most frequent NE category pairs are “Person - Person” (209,236), followed by “Country - Country” (95,123) and “Person - Country” (75,509).
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
0
Pairwise comparison: We can use the same method to assess the statistical significance of one system outperforming another.
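A sketch of the kind of paired bootstrap resampling such a pairwise significance test uses; the per-sentence scores and the number of trials are assumptions, not the exact setup of the evaluation:

    import random

    def paired_bootstrap(scores_a, scores_b, trials=1000):
        """Fraction of resampled test sets on which system A scores higher
        than system B; scores_* hold per-sentence quality scores."""
        n, wins = len(scores_a), 0
        for _ in range(trials):
            idx = [random.randrange(n) for _ in range(n)]
            if sum(scores_a[i] for i in idx) > sum(scores_b[i] for i in idx):
                wins += 1
        return wins / trials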
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
While we have minimized forward-looking state in Section 4.1, machine translation systems could also benefit by minimizing backward-looking state.
They plan on extending instance-weighting to other standard SMT components and capturing the degree of generality of phrase pairs.
0
Apart from MERT difficulties, a conceptual problem with log-linear combination is that it multiplies feature probabilities, essentially forcing different features to agree on high-scoring candidates.
Replacing this with a ranked evaluation seems to be more suitable.
0
• We evaluated translation from English, in addition to into English.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
The percentage scores on the axis labels represent the amount of variation in the data explained by the dimension in question.
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers.
0
Proof: Assume a pair of crossing constituents appears in the output of the constituent voting technique using k parsers.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement.
0
However, the characterization given in the main body of the text is correct sufficiently often to be useful.
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform.
0
Hence, the different averages of manual scores for the different language pairs reflect the behaviour of the judges, not the quality of the systems on different language pairs.
They showed that it was useful to abstract away from the details of the formalism and examine the nature of their derivation process as reflected by properties of their trees, and find that several of the formalisms considered can be seen as being closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
1
In considering the relationship between formalisms, we show that it is useful to abstract away from the details of the formalism and examine the nature of their derivation process as reflected by properties of their trees. We find that several of the formalisms considered can be seen as being closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars. On the basis of this observation, we describe a class of formalisms which we call Linear Context-Free Rewriting Systems, and show they are recognizable in polynomial time and generate only semilinear languages.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
For example, we might have VP → VB NP PP, where the NP is the subject.
There are clustering approaches that assign a single POS tag to each word type.
0
For inference, we are interested in the posterior probability over the latent variables in our model.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
We do not experiment with models larger than physical memory in this paper because TPT is unreleased, factors such as disk speed are hard to replicate, and in such situations we recommend switching to a more compact representation, such as RandLM.
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
0
While processing the source sentence monotonically, the initial state I is entered whenever there are no uncovered positions to the left of the rightmost covered position.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
We then define the best segmentation to be the cheapest or best path in Id(I) ∘ D* (i.e., Id(I) composed with the transitive closure of D). Consider the abstract example illustrated in Figure 2.
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language.
0
Each feature concept is akin to a random variable and its occurrence in the text corresponds to a particular instantiation of that random variable.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
On the other hand, in a translation system one probably wants to treat this string as a single dictionary word since it has a conventional and somewhat unpredictable translation into English.
Here we present two algorithms.
0
AdaBoost finds a weighted combination of simple (weak) classifiers, where the weights are chosen to minimize a function that bounds the classification error on a set of training examples.
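In standard AdaBoost terms, the weighted combination and the bound being minimized are as follows; this is a generic sketch, not the co-training-specific objective of the paper:

\[ H(x) = \mathrm{sign}\Big(\sum_t \alpha_t\, h_t(x)\Big), \qquad \frac{1}{m}\sum_{i=1}^{m} \mathbf{1}\big[H(x_i) \ne y_i\big] \;\le\; \prod_t Z_t \]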
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.
0
), which precludes a single universal approach to adaptation.
They focused on phrases which connect two Named Entities, and proceeded in two stages.
0
Keyword detection error Even if a keyword consists of a single word, there are words which are not desirable as keywords for a domain.
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
Since the parameter and token components will remain fixed throughout experiments, we briefly describe each.
It is probably the first analysis of Arabic parsing of this kind.
0
markBaseNP indicates these non-recursive nominal phrases.
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
0
Many human evaluation metrics have been proposed.
This corpus has several advantages: it is annotated at different levels.
0
A different notion of information structure is used in work such as that of (?), who tried to characterize felicitous constituent ordering (theme choice, in particular) that leads to texts presenting information in a natural, “flowing” way rather than with abrupt shifts of attention.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
The terms on the right-hand-side denote the type-level and token-level probability terms respectively.
In this paper, the authors are of the opinion that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
0
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure.
0
However, it is robust, efficient, and easy to implement. To perform the maximization in (7), we used the popular L-BFGS algorithm (Liu and Nocedal, 1989), which requires gradient information.