source_text (string, 27–368 chars) | label (int64, 0 or 1) | target_text (string, 1–5.38k chars)
---|---|---|
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints. | 0 | 3 54.4 33. |
This paper conducted research in the area of automatic paraphrase discovery. | 0 | In order to create an IE system for a new domain, one has to spend a long time to create the knowledge. |
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system. | 0 | [Hasegawa et al. 04] reported only on relation discovery, but one could easily acquire paraphrases from the results. |
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | gaolxing4 'happy'=> F.i'JF.i'JJI!JI! |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | In the denomi 11 We have two such lists, one containing about 17,000 full names, and another containing frequencies of. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | In a few cases, the criteria for correctness are made more explicit. |
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming. | 0 | Input: Ja , wunderbar . Konnen wir machen . MonS: Yes, wonderful. |
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | The first four affixes are so-called resultative affixes: they denote some property of the resultant state of a verb, as in E7 wang4bu4-liao3 (forget-not-attain) 'cannot forget.' |
They proposed an unsupervised method to discover paraphrases from a large untagged corpus. | 0 | For example, in the CC-domain, 96 keywords are found which have TF/ITF scores above a threshold; some of them are shown in Figure 3. |
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity. | 0 | Morphological processes in Semitic languages deliver space-delimited words which introduce multiple, distinct, syntactic units into the structure of the input sentence. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | na me =>2 ha nzi fa mi ly 1 ha nzi gi ve n 6.1 ha nzi fa mi ly => ha nz ii 7.2 ha nzi fa mi ly => ha nzi i ha nz ij 8.1 ha nzi gi ve n => ha nz ii 9.2 ha nzi giv en => ha nzi i ha nz ij The difficulty is that given names can consist, in principle, of any hanzi or pair of hanzi, so the possible given names are limited only by the total number of hanzi, though some hanzi are certainly far more likely than others. |
The AdaBoost algorithm was developed for supervised learning. | 0 | (4) gives In order to minimize Zt, at each iteration the final algorithm should choose the weak hypothesis (i.e., a feature xt) which has values for W+ and W_ that minimize Equ. |
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far. | 0 | However, there is a crucial difference: the morphological probabilities in their model come from discriminative models based on linear context. |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | 60 ( P e tr o v, 2 0 0 9 ) all B e r k e l e y ( S e p . 0 9 ) B a s e l i n e 7 0 a l l G o l d P O S 70 â â â 0 . 8 0 9 0.839 335 0 . 7 9 |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | To date we have not done a separate evaluation of foreign-name recognition. |
Explanations for this phenomenon are relative informativeness of lexicalization, insensitivity to morphology and the effect of variable word order and these factors lead to syntactic disambiguation. | 0 | We also mark all nodes that dominate an SVO configuration (containsSVO). |
Here both parametric and non-parametric models are explored. | 0 | Each of the constituents must have received at least 1 votes from the k parsers, so a > I1 and 2 — 2k±-1 b > ri-5-111. |
This topic has been getting more attention, driven by the needs of various NLP applications. | 0 | The second stage links sets which involve the same pairs of individual NEs. |
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system. | 0 | For each pair we also record the context, i.e. the phrase between the two NEs (Step1). |
They have made use of local and global features to deal with the instances of same token in a document. | 0 | Each feature group can be made up of many binary features. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | 10 Here we use the Good-Turing estimate (Baayen 1989; Church and Gale 1991), whereby the aggregate probability of previously unseen instances of a construction is estimated as n1/N, where N is the total number of observed tokens and n1 is the number of types observed only once. |
The second algorithm builds on a boosting algorithm called AdaBoost. | 0 | We also removed the context-type feature type when using the CoBoost approach. |
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models. | 0 | Label propagation is used to propagate these tags inwards and results in tag distributions for the middle word of each Italian trigram. |
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. | 0 | Simply assigning to each word its most frequent associated tag in a corpus achieves 94.6% accuracy on the WSJ portion of the Penn Treebank. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 1 | We then use linguistic and annotation insights to develop a manually annotated grammar for Arabic (§4). |
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). | 0 | The features are used to represent each example for the learning algorithm. |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | None of the models attach the attributive adjectives correctly. |
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University. | 0 | When the signal is a coordinating conjunction, the second span is usually the clause following the conjunction; the first span is often the clause preceding it, but sometimes stretches further back. |
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs. | 0 | In Figure 4, we displayed the number of system comparisons, for which we concluded statistical significance. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | The use of weighted transducers in particular has the attractive property that the model, as it stands, can be straightforwardly interfaced to other modules of a larger speech or natural language system: presumably one does not want to segment Chinese text for its own sake but instead with a larger purpose in mind. |
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. | 0 | We report token- and type-level accuracy in Table 3 and 6 for all languages and system settings. |
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be. | 0 | The corpus was word-aligned using both HMM and IBM2 models, and the phrase table was the union of phrases extracted from these separate alignments, with a length limit of 7. |
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | Hence, we use the bootstrap resampling method described by Koehn (2004). |
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language. | 0 | It falls short of the “Projection” baseline for German, but is statistically indistinguishable in terms of accuracy. |
The resulting model is compact, efficiently learnable and linguistically expressive. | 0 | 4 53.7 43. |
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result. | 0 | In both cases the investigators were able to achieve significant improvements over the previous best tagging results. |
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages. | 0 | This representation of structures by substrings and the composition operation by its effect on substrings is related to the work of Rounds (1985). |
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University. | 0 | Figure 1: Translation of PCC sample commentary (STTS)2. |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | In fact, during the first rounds many of the predictions of Th., g2 are zero. |
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | This is not completely surprising, since all systems use very similar technology. |
Most IE researchers have been creating paraphrase knowledge by hand for specific tasks. | 0 | Sometimes extracted phrases by themselves are not meaningful to consider without context, but we set the following criteria. |
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM. | 0 | Storing state therefore becomes a time-space tradeoff; for example, we store state with partial hypotheses in Moses but not with each phrase. |
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution. | 0 | The correct resolution in sentence (c) depends on knowledge that kidnappers frequently blindfold their victims. |
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | We have argued that the proposed method performs well. |
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. | 0 | The original tag set for the CoNLL-X Dutch data set consists of compounded tags that are used to tag multi-word units (MWUs) resulting in a tag set of over 300 tags. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | For example, kidnapping victims should be extracted from the subject of the verb “kidnapped” when it occurs in the passive voice (the shorthand representation of this pattern would be “<subject> were kidnapped”). |
They focused on phrases which connect two Named Entities, and proceed in two stages. | 0 | We did have a problem classifying some modified noun phrases where the modified phrase does not represent a qualified or restricted form of the head, like “chairman” and “vice chairman”, as these are both represented by the keyword “chairman”. |
They plan on extending instance-weighting to other standard SMT components and capture the degree of generality of phrase pairs. | 0 | This has led previous workers to adopt ad hoc linear weighting schemes (Finch and Sumita, 2008; Foster and Kuhn, 2007; Lü et al., 2007). |
Most IE researchers have been creating paraphrase knowledge by hand for specific tasks. | 0 | If two phrases can be used to express the same relationship within an information extraction application (“scenario”), these two phrases are paraphrases. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | so that 'door' would be and in this case the hanzi 7C, does not represent a syllable. |
The approach assumes that the word reordering is restricted to a few positions in the source sentence. | 0 | An error count of 0.0 is assigned to a perfect translation, and an error count of 1.0 is assigned to a semantically and syntactically wrong translation. |
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM. | 0 | The developer explained that the loading process requires extra memory that it then frees. eBased on the ratio to SRI’s speed reported in Guthrie and Hepple (2010) under different conditions. |
A beam search concept is applied as in speech recognition. | 0 | The perplexity for the trigram language model used is 26.5. |
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages. | 0 | The graph was constructed using 2 million trigrams; we chose these by truncating the parallel datasets up to the number of sentence pairs that contained 2 million trigrams. |
Finally, several coreference systems have successfully incorporated anaphoricity determination modules. | 0 | For pronouns, however, all of the knowledge sources increased recall, often substantially, and with little if any decrease in precision. |
The use of global features has shown excellent results in the performance on MUC-6 and MUC-7 test data. | 0 | Another attempt at using global information can be found in (Borthwick, 1999). |
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs. | 0 | These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999). |
They proposed an unsupervised method to discover paraphrases from a large untagged corpus. | 0 | Here, “EG” represents “Eastern Group Plc”. |
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints. | 0 | 1 2 3. |
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity. | 0 | PoS tags impose a unique morphological segmentation on surface tokens and present a unique valid yield for syntactic trees. |
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process. | 0 | 4 70.4 46. |
The approach assumes that the word reordering is restricted to a few positions in the source sentence. | 0 | (f1; ;mg n fl1; l2g ; l) 4 (f1; ;m ô 1g n fl1; l2; l3g ; l0) ! |
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold. | 0 | There has also been work using a bootstrapping approach [Brin 98; Agichtein and Gravano 00; Ravichandran and Hovy 02]. |
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity. | 0 | Any opinions, findings, conclusions, or recommendations expressed in this paper are those of the authors, and do not necessarily reflect the views of the funding organizations. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | Values in the trie are minimally sized at the bit level, improving memory consumption over trie implementations in SRILM, IRSTLM, and BerkeleyLM. |
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models. | 0 | Assuming unseen objects within each class are equiprobable, their probabilities are given by the Good-Turing theorem as: p0(cls) ∝ E(n1(cls)) / (N * E(N0(cls))) (2), where p0(cls) is the probability of one unseen hanzi in class cls, E(n1(cls)) is the expected number of hanzi in cls seen once, N is the total number of hanzi, and E(N0(cls)) is the expected number of unseen hanzi in class cls. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | Two issues distinguish the various proposals. |
This paper conducted research in the area of automatic paraphrase discovery. | 0 | We are not claiming that this method is almighty. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | By establishing significantly higher parsing baselines, we have shown that Arabic parsing performance is not as poor as previously thought, but remains much lower than English. |
The texts were annotated with the RSTtool. | 0 | We developed a first version of annotation guidelines for co-reference in PCC (Gross 2003), which served as basis for annotating the core corpus but have not been empirically evaluated for inter-annotator agreement yet. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | 35 76. |
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages. | 0 | We develop our POS induction model based on the feature-based HMM of Berg-Kirkpatrick et al. (2010). |
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs. | 0 | If one system is better in 95% of the sample sets, we conclude that its higher BLEU score is statistically significantly better. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | A number of PCC commentaries will be read by professional news speakers and prosodic features be annotated, so that the various annotation layers can be set into correspondence with intonation patterns. |
Replacing this with a ranked evaluation seems to be more suitable. | 0 | The out-of-domain test set differs from the Europarl data in various ways. |
The PROBING data structure uses linear probing hash tables and is designed for speed. | 0 | Sorted arrays store key-value pairs in an array sorted by key, incurring no space overhead. |
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier. | 0 | (If fewer than n rules have Precision greater than pin, we 3Note that taking the top n most frequent rules already makes the method robust to low count events, hence we do not use smoothing, allowing low-count high-precision features to be chosen on later iterations. keep only those rules which exceed the precision threshold.) pm,n was fixed at 0.95 in all experiments in this paper. |
This paper presents a maximum entropy-based named entity recognizer (NER). | 0 | MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC 7 participants. |
All the texts were annotated by two people. | 0 | 2.5 Connectives with scopes. |
Two general approaches are presented and two combination techniques are described for each approach. | 0 | We see from these results that the behavior of the parametric techniques are robust in the presence of a poor parser. |
This topic has been getting more attention, driven by the needs of various NLP applications. | 0 | So, there is a limitation that IE can only be performed for a predefined task, like “corporate mergers” or “management succession”. |
They have made use of local and global features to deal with the instances of same token in a document. | 0 | Similarly, the tokens and are tested against each list, and if found, a corresponding feature will be set to 1. |
The approach has been successfully tested on the 8 000-word Verbmobil task. | 0 | Here, $ is the sentence boundary symbol, which is thought to be at position 0 in the target sentence. |
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity. | 0 | Both parameters depend on a single hyperparameter α. |
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages. | 0 | For unaligned words, we set the tag to the most frequent tag in the corresponding treebank. |
These clusters are computed using an SVD variant without relying on transitional structure. | 0 | Our empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | When a collision occurs, linear probing places the entry to be inserted in the next (higher index) empty bucket, wrapping around as necessary. |
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs. | 0 | For instance: if 10 systems participate, and one system does better than 3 others, worse than 2, and is not significantly different from the remaining 4, its rank is in the interval 3–7. |
This paper conducted research in the area of automatic paraphrase discovery. | 0 | The NE tagger is a rule-based system with 140 NE categories [Sekine et al. 2004]. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | Again, the idea is that having a picture of syntax, co-reference, and sentence-internal information structure at one’s disposal should aid in finding models of discourse structure that are more explanatory and can be empirically supported. |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | We received submissions from 14 groups from 11 institutions, as listed in Figure 2. |
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. | 0 | 2 62.2 39. |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | (NP NP PP R) and (NP NP ADJP R) are both iDafa attachment. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | Gan's solution depends upon a fairly sophisticated language model that attempts to find valid syntactic, semantic, and lexical relations between objects of various linguistic types (hanzi, words, phrases). |
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity. | 0 | In speech recognition the arcs of the lattice are typically weighted in order to indicate the probability of specific transitions. |
Explanations for this phenomenon are relative informativeness of lexicalization, insensitivity to morphology and the effect of variable word order and these factors lead to syntactic disambiguation. | 0 | 73 81. |
Here both parametric and non-parametric models are explored. | 0 | It was then tested on section 22 of the Treebank in conjunction with the other parsers. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | To date we have not done a separate evaluation of foreign-name recognition. |
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). | 0 | The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. |
These clusters are computed using an SVD variant without relying on transitional structure. | 0 | Uniform Tag Prior (1TW) Our initial lexicon component will be uniform over possible tag assignments as well as word types. |
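The header describes a three-column row schema: a string `source_text`, a binary int64 `label` (0 or 1), and a string `target_text`. As a hedged sketch of how rows with this shape can be handled in plain Python (the two example rows are copied from the table above; the variable names and the split-by-label step are illustrative assumptions, not part of the dataset):

```python
# Minimal sketch of the (source_text, label, target_text) row schema.
# The two rows below are copied from the table; names like `rows`,
# `positives`, and `negatives` are illustrative, not from the dataset.
rows = [
    {
        "source_text": "This paper conducted research in the area of automatic paraphrase discovery.",
        "label": 0,
        "target_text": "We are not claiming that this method is almighty.",
    },
    {
        "source_text": "The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.",
        "label": 1,
        "target_text": "We then use linguistic and annotation insights to develop a manually annotated grammar for Arabic (\u00a74).",
    },
]

# Partition rows by the binary label, per the int64 column summary (0 or 1).
positives = [r for r in rows if r["label"] == 1]
negatives = [r for r in rows if r["label"] == 0]

print(len(positives), len(negatives))  # prints: 1 1
```

The same partition works unchanged on the full set of rows, since every row in the table carries a label of 0 or 1.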