Columns: source_text (string, 27 to 368 chars), label (int64, 0 or 1), target_text (string, 1 to 5.38k chars)
The second algorithm builds on a boosting algorithm called AdaBoost.
0
The normalization factor plays an important role in the AdaBoost algorithm.
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
0
For English POS tagging, Berg-Kirkpatrick et al. (2010) found that this direct gradient method performed better (>7% absolute accuracy) than using a feature-enhanced modification of the Expectation-Maximization (EM) algorithm (Dempster et al., 1977). Moreover, this route of optimization outperformed a vanilla HMM trained with EM by 12%.
Most IE researchers have been creating paraphrase knowledge by hand for specific tasks.
0
Although this is not a precise criterion, most cases we evaluated were relatively clear-cut.
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
Language models that contain w_1^k must also contain the prefixes w_1^i for 1 <= i <= k. Therefore, when the model is queried for p(w_n | w_1^{n-1}) but the longest matching suffix is w_f^n, it may return state s(w_1^n) = w_f^n, since no longer context will be found.
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
was done by the participants.
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers.
0
The machine learning community has been in a similar situation and has studied the combination of multiple classifiers (Wolpert, 1992; Heath et al., 1996).
Their results show that their high-performance NER uses less training data than other systems.
0
Hence, there is a total of 29 classes (7 name classes × 4 sub-classes + 1 not-a-name class).
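As a sanity check on the arithmetic above, the 29-class inventory can be enumerated. The concrete class and sub-class names below are illustrative assumptions; only the counts (7 name classes × 4 sub-classes + 1 not-a-name class) come from the text.

```python
from itertools import product

# Illustrative label inventory: the names are assumptions,
# only the counts (7 x 4 + 1 = 29) come from the text.
NAME_CLASSES = ["PER", "ORG", "LOC", "DATE", "TIME", "MONEY", "PERCENT"]
SUB_CLASSES = ["begin", "continue", "end", "unique"]

labels = ["%s-%s" % (n, s) for n, s in product(NAME_CLASSES, SUB_CLASSES)]
labels.append("not-a-name")

print(len(labels))  # 7 * 4 + 1 = 29
```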
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
0
Although matching is done at the sentence level, this information is subsequently discarded when all matches are pooled.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
Since Daneš's proposals of ‘thematic development patterns’, a few suggestions have been made as to the existence of a level of discourse structure that would predict the information structure of sentences within texts.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
[table residue: 52 77]
Here we present two algorithms.
0
Each round is composed of two stages; each stage updates one of the classifiers while keeping the other classifier fixed.
In this paper, the authors are of the opinion that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
0
These clusters are computed using an SVD variant without relying on transitional structure.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
Maamouri et al.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
Because of this, we retokenized and lowercased submitted output with our own tokenizer, which was also used to prepare the training and test data.
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table.
0
Domain adaptation is a common concern when optimizing empirical NLP applications.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
Almost all annotators reported difficulties in maintaining a consistent standard for fluency and adequacy judgements, but nevertheless most did not explicitly move towards a ranking-based evaluation.
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
We used the TF/ITF metric to identify keywords.
This topic has been getting more attention, driven by the needs of various NLP applications.
0
When a company buys another company, a paying event can occur, but these two phrases do not indicate the same event.
Most IE researchers have been creating paraphrase knowledge by hand for specific tasks.
0
In this subsection, we will report the results of the experiment, in terms of the number of words, phrases or clusters.
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
0
Since the lattice L for a given sentence W is determined by the morphological analyzer M, we have …, which is precisely the formula corresponding to the so-called lattice parsing familiar from speech recognition.
This assumption, however, is not inherent to type-based tagging models.
0
The system of Berg-Kirkpatrick et al.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
[table residue: Berkeley parser (Sep. 09) baseline vs. gold-POS scores, incl. 0.809 / 0.839 (Petrov, 2009)]
They have made use of local and global features to deal with the instances of same token in a document.
0
In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.
0
Often, two systems cannot be distinguished with a confidence of over 95%, so they are ranked the same.
A beam search concept is applied as in speech recognition.
0
There are 13 types of extensions needed to describe the verbgroup reordering.
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
0
Sie.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
This flexibility, along with the simplicity of implementation and expansion, makes this framework an attractive base for continued research.
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.
0
It is important to bear in mind, though, that this is not an inherent limitation of the model.
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
[table residue: per-language accuracies for the 1TW, +PRIOR, and +FEATS settings across English, Danish, Dutch, German, Portuguese, Spanish, and Swedish]
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective tree banks is often neglected because of the relative scarcity of problematic constructions.
0
We call this pseudoprojective dependency parsing, since it is based on a notion of pseudo-projectivity (Kahane et al., 1998).
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
We experiment with four values for each hyperparameter, resulting in 16 (α, β) combinations: α ∈ {0.001, 0.01, 0.1, 1.0}, β ∈ {0.01, 0.1, 1.0, 10}. In each run, we performed 30 iterations of Gibbs sampling for the type assignment variables W. We use the final sample for evaluation.
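The sweep described above can be sketched as follows. `gibbs_sample` is a hypothetical stand-in for the actual sampler; only the grid values, the 4 × 4 = 16 combinations, and the 30 iterations come from the text.

```python
from itertools import product

# Four values per hyperparameter give 4 x 4 = 16 (alpha, beta) runs.
alphas = [0.001, 0.01, 0.1, 1.0]
betas = [0.01, 0.1, 1.0, 10]
runs = list(product(alphas, betas))
print(len(runs))  # 16 combinations

def gibbs_sample(alpha, beta, iterations=30):
    """Placeholder for one run: 30 Gibbs iterations over the
    type-assignment variables; only the final sample is kept."""
    sample = None
    for _ in range(iterations):
        sample = (alpha, beta)  # stand-in for a real sampling step
    return sample
```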
Human judges also pointed out difficulties with the evaluation of long sentences.
0
The domain is general politics, economics and science.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
Again, this deserves further investigation.
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
As was explained in the results section, “strength” or “add” are not desirable keywords in the CC-domain.
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
0
• Bridging links: the annotator is asked to specify the type as part-whole, cause-effect (e.g., She had an accident.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
shortest match at each point.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
1
This latter evaluation compares the performance of the system with that of several human judges since, as we shall show, even people do not agree on a single correct way to segment a text.
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
In this way we restrict the parameterization of a … Table 1: Upper bound on tagging accuracy assuming each word type is assigned to the majority POS tag (original case): English 94.6, Danish 96.3, Dutch 96.6, German 95.5, Spanish 95.4, Swedish 93.3, Portuguese 95.6.
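The upper bound in Table 1 can be computed with a short sketch like the following: assign each word type its majority gold tag and score the corpus against that single-tag assignment. The toy corpus is invented for illustration and is unrelated to the datasets in the table.

```python
from collections import Counter, defaultdict

def majority_tag_upper_bound(tagged_corpus):
    """Upper bound on accuracy when every word type must take one tag:
    give each type its majority gold tag, then score all tokens.
    `tagged_corpus` is a list of (word, gold_tag) tokens."""
    by_type = defaultdict(Counter)
    for word, tag in tagged_corpus:
        by_type[word][tag] += 1
    majority = {w: c.most_common(1)[0][0] for w, c in by_type.items()}
    correct = sum(1 for w, t in tagged_corpus if majority[w] == t)
    return correct / len(tagged_corpus)

# Toy example, not the data behind Table 1: one "run/NN" token is
# unavoidably lost because "run" is majority-tagged VB.
corpus = [("run", "VB"), ("run", "VB"), ("run", "NN"), ("the", "DT")]
print(majority_tag_upper_bound(corpus))  # 0.75
```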
These clusters are computed using an SVD variant without relying on transitional structure.
0
[equation residue: summation indices i=1 and (f, v) ∈ W_i]
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
How do additional ambiguities caused by devocalization affect statistical learning?
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
0
({1, …, m} \ {l1, l2, l3}, m). In German-to-English translation, the monotonicity constraint is violated mainly with respect to the German verb group.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
An example is in (i), where the system fails to group lin2yang2gang3 as a name, because all three hanzi can in principle be separate words (lin2 'wood'; yang2 'ocean'; gang3 'harbor').
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
Limitations There are several limitations in the methods.
They have made use of local and global features to deal with the instances of same token in a document.
0
A Person-Prefix-List is compiled in an analogous way.
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
Our new approach, which is called quasi-monotone search, processes the source sentence monotonically, while explicitly taking into account the positions of the German verbgroup.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
Lexicalizing several POS tags improves performance.
A beam search concept is applied as in speech recognition.
0
The translation direction is from German to English.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
The distinctions in the ATB are linguistically justified, but complicate parsing.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, and proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
The F- measure score increased for both domains, reflecting a substantial increase in recall with a small decrease in precision.
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
0
The foreign language vertices (denoted by Vf) correspond to foreign trigram types, exactly as in Subramanya et al. (2010).
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
Using RandLM and the documented settings (8-bit values and 1/256 false-positive probability), we built a stupid backoff model on the same data as in Section 5.2.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
The state function is integrated into the query process so that, in lieu of the query p(w_n | w_1^{n-1}), the application issues the query p(w_n | s(w_1^{n-1})), which also returns s(w_1^n).
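A minimal sketch of such state-carrying queries, assuming a toy model stored as a dict of n-gram log-probabilities. The returned state is the longest suffix of the history that the model knows, so the next query can start from it instead of the full history. A real implementation (e.g. KenLM) also tracks backoff weights; this is not that implementation.

```python
# Toy n-gram table; log-probabilities are made up for illustration.
LOGPROBS = {
    ("the",): -1.0,
    ("cat",): -2.0,
    ("the", "cat"): -0.5,
}

def query(context, word):
    """Return (logprob, state): state is the longest suffix of
    context + word found in the model, usable as the next context."""
    history = tuple(context) + (word,)
    for start in range(len(history)):
        suffix = history[start:]
        if suffix in LOGPROBS:
            return LOGPROBS[suffix], suffix
    return float("-inf"), ()

logprob, state = query(("the",), "cat")
print(state)  # ('the', 'cat') -- the longest matching suffix
```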
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
When dropping the top and bottom 2.5%, the remaining BLEU scores define the range of the confidence interval.
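The interval construction described above can be sketched as a percentile bootstrap. The scores below and the use of the mean as the per-resample statistic are illustrative assumptions; the actual evaluation computes BLEU over each resampled test set.

```python
import random

def bootstrap_interval(scores, resamples=1000, seed=0):
    """Resample sentence-level scores with replacement, compute a
    statistic per resample, then drop the top and bottom 2.5% of the
    resampled statistics to get a 95% confidence interval."""
    rng = random.Random(seed)
    stats = []
    for _ in range(resamples):
        sample = [rng.choice(scores) for _ in scores]
        stats.append(sum(sample) / len(sample))  # mean as BLEU stand-in
    stats.sort()
    cut = int(0.025 * resamples)  # drop 2.5% from each tail
    trimmed = stats[cut:resamples - cut]
    return trimmed[0], trimmed[-1]

low, high = bootstrap_interval([0.21, 0.25, 0.19, 0.23, 0.27, 0.22])
print(low, high)
```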
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
7 Acknowledgements.
This paper talks about Pseudo-Projective Dependency Parsing.
0
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
Throughout this paper we shall give Chinese examples in traditional orthography, followed…
Here we present two algorithms.
0
Given a sufficient number of randomly drawn unlabeled examples (i.e., edges), we will induce two completely connected components that together span the entire graph.
We chose one of four labels for each example: location, person, organization, or noise, where the noise category was used for items outside the three categories.
0
There are two differences between this method and the DL-CoTrain algorithm: spelling and contextual features, alternating between labeling and learning with the two types of features.
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
A Hebrew surface token may have several readings, each of which corresponds to a sequence of segments and their corresponding PoS tags.
The use of global features has shown excellent results in the performance on MUC-6 and MUC-7 test data.
0
Lexicon Feature: The string of the token is used as a feature.
There is no global pruning.
0
The alignment mapping is j → i = a_j from source position j to target position i = a_j. The use of this alignment model raises major problems if a source word has to be aligned to several target words, e.g. when translating German compound nouns.
In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
As with lexical expectations, the semantic classes of co-referring expressions are … (They may not be perfectly substitutable; for example, one NP may be more specific, e.g., “he” vs. “John F. Kennedy”.)
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
4 To be sure, it is not always true that a hanzi represents a syllable or that it represents a morpheme.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the inter-human agreement.
0
In (b) is a plausible segmentation for this sentence; in (c) is an implausible segmentation.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
The judgements tend to be done more in the form of a ranking of the different systems.
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.
0
Systems that generally do better than others will receive a positive average normalized judgement per sentence.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
The ATB gives several different analyses to these words to indicate different types of coordination.
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
0
The results of this experiment can be seen in Table 5.
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
0
This problem arises because our keywords consist of only one word.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
[table residue: 63 81]
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
… input token; the segmentation is then performed deterministically given the 1-best analysis.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
This actually happens quite frequently (more below), so that the rankings are broad estimates.
Most IE researchers have been creating paraphrase knowledge by hand for specific tasks.
0
These results are promising and there are several avenues for improving on these results.
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
0
For each extension a new position is added to the coverage set.
In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
For example, a person’s full name will match with just their last name (e.g., “George Bush” and “Bush”), and a company name will match with and without a corporate suffix (e.g., “IBM Corp.” and “IBM”).
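The alias matching described above can be sketched as follows. The suffix list and exact matching rules are assumptions for illustration, not the system's actual heuristics.

```python
def is_alias(name, candidate):
    """Sketch of name-alias matching: a mention matches a full person
    name if it equals the final (surname) token, and a company name
    matches with or without a corporate suffix."""
    SUFFIXES = {"Corp.", "Inc.", "Ltd."}  # assumed, illustrative list
    tokens = [t for t in name.split() if t not in SUFFIXES]
    cand = [t for t in candidate.split() if t not in SUFFIXES]
    return cand == tokens or (len(cand) == 1 and cand[0] == tokens[-1])

print(is_alias("George Bush", "Bush"))    # True: last-name match
print(is_alias("IBM Corp.", "IBM"))       # True: suffix stripped
print(is_alias("George Bush", "George"))  # False: not the surname
```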
Their results show that their high-performance NER uses less training data than other systems.
0
We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
As noted, this sentence consists of four words, namely ri4wen2 'Japanese,' zhang1yu2 'octopus,' zen3me0 'how,' and shuo1 'say.'
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
[figure residue: segmentation-lattice fragment with pinyin syllables (yu2, zen3, me0, shuo1), POS tags (_nc, _adv, _vb), and path costs (e.g., 10.03)]
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
Furthermore, even the size of the dictionary per se is less important than the appropriateness of the lexicon to a particular test corpus: as Fung and Wu (1994) have shown, one can obtain substantially better segmentation by tailoring the lexicon to the corpus to be segmented.
Instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
0
Clearly, retaining the original frequencies is important for good performance, and globally smoothing the final weighted frequencies is crucial.
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
Bikel et al.
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
0
This is akin to PoS tag sequences induced by different parses in the setup familiar from English and explored in e.g.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
Finally, we model the probability of a new transliterated name as the product of PTN and PTN(hanzi_i) for each hanzi_i in the putative name. The foreign name model is implemented as a WFST, which is then summed with the WFST implementing the dictionary, morpho… (The current model is too simplistic in several respects.)
This paper presents a maximum entropy-based named entity recognizer (NER).
0
Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.
They showed better grammars to improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
Models that employ this strategy are denoted hsp.
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
0
We refer to a segment and its assigned PoS tag as a lexeme, and so analyses are in fact sequences of lexemes.
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be.
0
This is a standard adaptation problem for SMT.
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures.
0
In the second scheme, Head+Path, we in addition modify the label of every arc along the lifting path from the syntactic to the linear head so that if the original label is p the new label is p↓.
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.
0
Human judges also pointed out difficulties with the evaluation of long sentences.
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
0
Because many foreign word types are not aligned to an English word (see Table 3), and we do not run label propagation on the foreign side, we expect the projected information to have less coverage.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
The authors acknowledge the support of the NSF (CAREER grant IIS0448168, and grant IIS 0904684).
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
But Rehbein and van Genabith (2007) showed that Evalb should not be used as an indication of real difference— or similarity—between treebanks.
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language.
0
These universal POS categories not only facilitate the transfer of POS information from one language to another, but also relieve us from using controversial evaluation metrics, by establishing a direct correspondence between the induced hidden states in the foreign language and the observed English labels.
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.
0
In the numerator, however, the counts of ni1s are quite irregular, including several zeros (e.g., RAT, none of whose members were seen).
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
0
Among these 32 sets, we found the following pairs of sets which have two or more links.
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
0
Since different judges judged different systems (recall that judges were excluded to judge system output from their own institution), we normalized the scores.
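One plausible reading of this normalization is a per-judge z-score, sketched below. The exact normalization used is not specified here, so treat the z-score choice, the judge names, and the scores as assumptions.

```python
from statistics import mean, stdev

def normalize_by_judge(judgements):
    """Map each judge's raw scores to z-scores so that systems seen by
    harsh and lenient judges become comparable. `judgements` maps
    judge -> list of (system, raw_score); returns (system, z) pairs."""
    out = []
    for judge, pairs in judgements.items():
        scores = [s for _, s in pairs]
        mu, sigma = mean(scores), stdev(scores)
        for system, s in pairs:
            out.append((system, (s - mu) / sigma))
    return out

judgements = {
    "judge_a": [("sys1", 4), ("sys2", 2)],  # lenient judge
    "judge_b": [("sys1", 3), ("sys2", 1)],  # harsh judge
}
# After normalization both judges rank sys1 equally far above sys2.
print(normalize_by_judge(judgements))
```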
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
We concentrate on those sets.
This assumption, however, is not inherent to type-based tagging models.
0
This design does not guarantee “structural zeros,” but biases towards sparsity.
The authors show that the PATB is similar to other treebanks but that annotation consistency remains low.
0
In general, several gross corpus statistics favor the ATB, so other factors must contribute to parsing underperformance.
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
The resulting algorithm is depicted in Table 1.