| source_text (string, 27–368 chars) | label (int64, 0–1) | target_text (string, 1–5.38k chars) |
---|---|---|
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance. | 0 | We will also directly compare with a baseline similar to the Matsoukas et al. approach in order to measure the benefit from weighting phrase pairs (or n-grams) rather than full sentences. |
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound. | 0 | A Single Generative Model for Joint Morphological Segmentation and Syntactic Parsing |
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far. | 0 | An interesting observation is that while vertical markovization benefits all our models, its effect is less evident in Cohen and Smith. |
Starting from a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm. | 0 | besuchen 9. |
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | The horizontal axis in this plot represents the most significant dimension, which explains 62% of the variation. |
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | The normalized judgement per judge is the raw judgement plus (3 minus the average raw judgement for this judge); a worked sketch of this normalization appears after the table. |
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language. | 0 | We paid particular attention to minimize the number of free parameters, and used the same hyperparameters for all language pairs, rather than attempting language-specific tuning. |
They showed that it was useful to abstract away from the details of the formalism and examine the nature of the derivation process as reflected by properties of the derivation trees, finding that several of the formalisms considered can be seen as closely related, since they have derivation tree sets with the same structure as those produced by context-free grammars. | 0 | LCFRS's have only been loosely defined in this paper; we have yet to provide a complete set of formal properties associated with members of this class. |
It is probably the first analysis of Arabic parsing of this kind. | 0 | 7 Unlike Dickinson (2005), we strip traces and only con-. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | They are also labelled for their topicality (yes / no), and this annotation is accompanied by a confidence value assigned by the annotator (since it is a more subjective matter). |
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined both by how similar to it they appear to be and by whether they belong to general language or not. | 0 | The linear LM (lin lm), TM (lin tm) and MAP TM (map tm) used with non-adapted counterparts perform in all cases slightly worse than the log-linear combination, which adapts both LM and TM components. |
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank. | 0 | Three state-of-the-art statistical parsers are combined to produce more accurate parses, as well as new bounds on achievable Treebank parsing accuracy. |
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University. | 0 | Upon identifying an anaphoric expression (currently restricted to: pronouns, prepositional adverbs, definite noun phrases), the annotator first marks the antecedent expression (currently restricted to: various kinds of noun phrases, prepositional phrases, verb phrases, sentences) and then establishes the link between the two. |
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming. | 0 | {1, …, J} denotes a coverage set including all positions from the starting position 1 to position J, and j ∈ {J − L, …, J}. |
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages. | 0 | In our experiments, we used the same set of features as Berg-Kirkpatrick et al. (2010): an indicator feature based … In a traditional Markov model, the emission distribution PΘ(Xi = xi | Zi = zi) is a set of multinomials. |
This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | MITLM 0.4 (Hsu and Glass, 2008) is mostly designed for accurate model estimation, but can also compute perplexity. |
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones. | 0 | The possible analyses of a surface token pose constraints on the analyses of specific segments. |
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform. | 0 | The other half was replaced by other participants, so we ended up with roughly the same number. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | In this section we present a partial evaluation of the current system, in three parts. |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | The main disadvantage of manual evaluation is that it is time-consuming and thus too expensive to do frequently. |
They have made use of local and global features to deal with the instances of the same token in a document. | 0 | On the MUC6 data, Bikel et al. |
All the texts were annotated by two people. | 0 | Rhetorical analysis: We are experimenting with a hybrid statistical and knowledge-based system for discourse parsing and summarization (Stede 2003), (Hanneforth et al. 2003), again targeting the genre of commentaries. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | In recent years, coreference resolvers have been evaluated as part of MUC6 and MUC7 (MUC7 Proceedings, 1998). |
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process. | 0 | A more forceful approach for encoding sparsity is posterior regularization, which constrains the posterior to have a small number of expected tag assignments (Graça et al., 2009). |
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models. | 0 | Arguably this consists of about three phonological words. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | 4 To be sure, it is not always true that a hanzi represents a syllable or that it represents a morpheme. |
It is probably the first analysis of Arabic parsing of this kind. | 0 | ), and those that begin with a verb (� ub..i �u _.. |
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution. | 0 | BABAR uses the log-likelihood statistic (Dunning, 1993) to evaluate the strength of a co-occurrence relationship. |
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously. | 0 | For example, in the sentence that starts with "Bush put a freeze on . . . |
There are clustering approaches that assign a single POS tag to each word type. | 0 | 3.1 Lexicon Component. |
In this paper, the authors are of the opinion that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | 5 64.7 42. |
Finally, several coreference systems have successfully incorporated anaphoricity determination modules. | 0 | Our results show that BABAR achieves good performance in both domains, and that the contextual role knowledge improves performance, especially on pronouns. |
This paper presents a maximum entropy-based named entity recognizer (NER). | 0 | (1998) did make use of information from the whole document. |
BABAR performed well in both the terrorism and natural disaster domains, and the contextual-role knowledge improved performance, especially on pronouns. | 0 | We identified three ways that contextual roles can be exploited: (1) by identifying caseframes that co-occur in resolutions, (2) by identifying nouns that co-occur with caseframes and using them to crosscheck anaphor/candidate compatibility, (3) by identifying semantic classes that co-occur with caseframes and using them to crosscheck anaphor/candidate compatibility. |
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models. | 0 | We hope that this will allow practitioners to apply our approach directly to languages for which no resources are available. |
The corpus was annotated with different linguistic information. | 0 | In (Reitter, Stede 2003) we went a different way and suggested URML5, an XML format for underspecifying rhetorical structure: a number of relations can be assigned instead of a single one, and competing analyses can be represented with shared forests. |
Starting from a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm. | 0 | SSER: subjective sentence error rate: For a more detailed analysis, the translations are judged by a human test person. |
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints. | 0 | 5 60.6 Table 3: Multilingual Results: We report token-level one-to-one and many-to-one accuracy on a variety of languages under several experimental settings (Section 5). |
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages. | 0 | The feature-HMM model works better for all languages, generalizing the results achieved for English by Berg-Kirkpatrick et al. (2010). |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | The TRIE model continues to use the least memory of ing (-P) with MAP POPULATE, the default. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | Searching a probing hash table consists of hashing the key, indexing the corresponding bucket, and scanning buckets until a matching key is found or an empty bucket is encountered, in which case the key does not exist in the table; a minimal sketch of this lookup appears after the table. |
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far. | 0 | We use a patched version of BitPar allowing for direct input of probabilities instead of counts. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | This offers the well-known advantages for interchangeability, but it raises the question of how to query the corpus across levels of annotation. |
It is probably the first analysis of Arabic parsing of this kind. | 0 | Fourth, we show how to build better models for three different parsers. |
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems. | 0 | Initially, we planned to compare the semantic classes of an anaphor and a candidate and infer that they might be coreferent if their semantic classes intersected. |
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data. | 0 | Confidence Interval: Since BLEU scores are not computed on the sentence level, traditional methods to compute statistical significance and confidence intervals do not apply. |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | As a result, Arabic sentences are usually long relative to English, especially after … (Table 2: frequency distribution for sentence lengths in the WSJ (sections 2–23) and the ATB (p1–3): ≤ 20: 41.9% vs. 33.7%; ≤ 40: 92.4% vs. 73.2%; ≤ 63: 99.7% vs. 92.6%; ≤ 70: 99.9% vs. 94.9%). |
It is probably the first analysis of Arabic parsing of this kind. | 0 | The basic word order is VSO, but SVO, VOS, and VO configurations are also possible.2 Nouns and verbs are created by selecting a consonantal root (usually triliteral or quadriliteral), which bears the semantic core, and adding affixes and diacritics. |
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table. | 0 | We obtained positive results using a very simple phrase-based system in two different adaptation settings: using English/French Europarl to improve a performance on a small, specialized medical domain; and using non-news portions of the NIST09 training material to improve performance on the news-related corpora. |
The second algorithm builds on a boosting algorithm called AdaBoost. | 0 | Supervised methods have been applied quite successfully to the full MUC named-entity task (Bikel et al. 97). |
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | Particular relations are also consistent with particular hypotheses about the segmentation of a given sentence, and the scores for particular relations can be incremented or decremented depending upon whether the segmentations with which they are consistent are "popular" or not. |
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far. | 0 | Our primary goal is to exploit the resources that are most appropriate for the task at hand, and our secondary goal is to allow for comparison of our models’ performance against previously reported results. |
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Ma¨rkische Allgemeine Zeitung, a German regional daily. | 0 | One key issue here is to seek a discourse-based model of information structure. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | Table 4 shows the results. |
They showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the inter-human agreement. | 0 | Twentieth-century linguistic work on Chinese (Chao 1968; Li and Thompson 1981; Tang 1988, 1989, inter alia) has revealed the incorrectness of this traditional view. |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | The easiest language pair according to BLEU (English-French: 28.33) received worse manual scores than the hardest (English-German: 14.01). |
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity. | 0 | Since the parameter and token components will remain fixed throughout experiments, we briefly describe each. |
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM. | 0 | RandLM is the clear winner in RAM utilization, but is also slower and lower quality. |
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process. | 0 | 47.3 8.9 28.8 20.7 32.3 35.2 29.6 27.6 14.2 42.8 45.9 44.3 60.6 61.5 49.9 33.9 (Table 6: Type-level results; each cell reports the type-level accuracy computed against the most frequent tag of each word type.) |
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | 11a/11b and 14a/14b respectively). |
In this paper, the authors are of the opinion that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | However, our full model takes advantage of word features not present in Graça et al. |
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | The method being described, henceforth ST.. |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | Confidence Interval: Since BLEU scores are not computed on the sentence level, traditional methods to compute statistical significance and confidence intervals do not apply. |
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined both by how similar to it they appear to be and by whether they belong to general language or not. | 0 | In this paper we have proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | In this paper, we offer broad insight into the underperformance of Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. |
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account. | 0 | Fall. |
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | For example, the Wang, Li, and Chang system fails on the sequence 1:f:p:]nian2 nei4 sa3 in (k) since 1F nian2 is a possible, but rare, family name, which also happens to be written the same as the very common word meaning 'year.' |
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). | 0 | The weak learner for two-class problems computes a weak hypothesis h from the input space into the reals (h : X → R), where the sign of h(x) is interpreted as the predicted label and the magnitude |h(x)| is the confidence in the prediction: large numbers for |h(x)| indicate high confidence in the prediction, and numbers close to zero indicate low confidence. |
A beam search concept is applied as in speech recognition. | 0 | Thus, the effects of spontaneous speech are present in the corpus; e.g., the syntactic structure of the sentences is rather less restricted. However, the effect of speech recognition errors is not covered. |
This paper presents a maximum entropy-based named entity recognizer (NER). | 0 | Note that we check for , the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc are not part of person names, whereas corporate suffixes like Corp., Inc., etc are part of corporate names. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | To this end, we picked 100 sentences at random containing 4,372 total hanzi from a test corpus.14 (There were 487 marks of punctuation in the test sentences, including the sentence-final periods, meaning that the average inter-punctuation distance was about 9 hanzi.) |
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming. | 0 | Search method / t0 / CPU time [sec] / #search errors / mWER [%]: QmS: 0.0 / 0.07 / 108 / 42.6; 1.0 / 0.13 / 85 / 37.8; 2.5 / 0.35 / 44 / 36.6; 5.0 / 1.92 / 4 / 34.6; 10.0 / 10.6 / 0 / 34.5. IbmS: 0.0 / 0.14 / 108 / 43.4; 1.0 / 0.3 / 84 / 39.5; 2.5 / 0.8 / 45 / 39.1; 5.0 / 4.99 / 7 / 38.3; 10.0 / 28.52 / 0 / 38.2. Table 6 shows example translations obtained by the three different approaches. |
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. | 0 | Table 5 provides insight into the behavior of different models in terms of the tagging lexicon they generate. |
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures. | 0 | Still, from a theoretical point of view, projective parsing of non-projective structures has the drawback that it rules out perfect accuracy even as an asymptotic goal. |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | Annotators suggested that long sentences are almost impossible to judge. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | This is not to say that a set of standards by which a particular segmentation would count as correct and another incorrect could not be devised; indeed, such standards have been proposed and include the published PRCNSC (1994) and ROCLING (1993), as well as the unpublished Linguistic Data Consortium standards (ca. |
This topic has been getting more attention, driven by the needs of various NLP applications. | 0 | Applications The discovered paraphrases have multiple applications. |
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. | 0 | Like CFG's, the choice is predetermined by a finite number of rules encapsulated in the grammar. |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | At the phrasal level, we remove all function tags and traces. |
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task. | 0 | vierten 12. |
The PROBING data structure uses linear probing hash tables and is designed for speed. | 1 | The structure uses linear probing hash tables and is designed for speed. |
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming. | 0 | The proof is given in (Tillmann, 2000). |
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models. | 0 | Figure 2 shows an excerpt of a sentence from the Italian test set and the tags assigned by four different models, as well as the gold tags. |
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance. | 0 | Within this framework, we use features intended to capture degree of generality, including the output from an SVM classifier that uses the intersection between IN and OUT as positive examples. |
Here we present two algorithms. | 0 | To prevent this we "smooth" the confidence by adding a small value, ε, to both W+ and W−, giving αt = ½ ln((W+ + ε) / (W− + ε)); a small numeric sketch of this smoothing appears after the table. Plugging the value of αt from Equ. |
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRSs), and showed that they were recognizable in polynomial time and generated only semilinear languages. | 0 | Thus, as in CFG's, at any point in the derivation, the set of structures that can be applied is determined only by a finite set of rules encapsulated by the grammar. |
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM. | 0 | These enable much larger models in memory, compensating for lost accuracy. |
They have made use of local and global features to deal with the instances of the same token in a document. | 0 | In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data. |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | The developers aimed to reduce memory consumption at the expense of time. |
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models. | 0 | handled given appropriate models. |
The approach assumes that the word reordering is restricted to a few positions in the source sentence. | 0 | 3.1 Word ReOrdering with Verbgroup. |
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm. | 0 | Finally, note that while most feature concepts are lexicalized, others, such as the suffix concept, are not. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | The inventions disclosed herein are the subject of a patent application owned by the University of Utah and licensed on an exclusive basis to Attensity Corporation. |
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | (1992). |
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages. | 0 | Examining the word fidanzato for the “No LP” and “With LP” models is particularly instructive. |
They plan on extending instance-weighting to other standard SMT components and capture the degree of generality of phrase pairs. | 0 | (Thus the domain of the dev and test corpora matches IN.) |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | computing the precision of the other's judgments relative to this standard. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | A related point is that mutual information is helpful in augmenting existing electronic dictionaries, (cf. |
This assumption, however, is not inherent to type-based tagging models. | 0 | A promising direction for future work is to explicitly model a distribution over tags for each word type. |
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources. | 0 | For example, if is found in the list of person first names, the feature PersonFirstName is set to 1. |
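
One row above quotes a per-judge normalization for manual evaluation scores: the normalized judgement is the raw judgement plus (3 minus that judge's average raw judgement). Below is a minimal Python sketch of that arithmetic; the function name and the dict-based data layout are illustrative assumptions, not taken from the cited evaluation.

```python
def normalize_judgements(raw_by_judge):
    """Center each judge's scores on the midpoint of a 1-5 scale.

    raw_by_judge: dict mapping judge id -> list of raw judgements.
    Returns a dict of normalized judgements, where
    normalized = raw + (3 - average raw judgement for this judge).
    """
    normalized = {}
    for judge, scores in raw_by_judge.items():
        offset = 3 - sum(scores) / len(scores)  # per-judge correction term
        normalized[judge] = [s + offset for s in scores]
    return normalized

# A harsh judge averaging 2.0 has 1.0 added to every score;
# a lenient judge averaging 4.0 has 1.0 subtracted.
print(normalize_judgements({"harsh": [1, 2, 3], "lenient": [3, 4, 5]}))
```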
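
Another row describes lookup in a linear probing hash table: hash the key, index the corresponding bucket, and scan until a matching key is found or an empty bucket shows the key is absent. The sketch below illustrates that scan in Python under simplifying assumptions (a power-of-two table that is never full, None as the empty-bucket sentinel); it is not KenLM's actual C++ PROBING implementation.

```python
class ProbingTable:
    """Toy open-addressing hash table with linear probing."""

    def __init__(self, capacity_bits=4):
        size = 1 << capacity_bits          # power of two, so & works as modulo
        self.mask = size - 1
        self.keys = [None] * size          # None marks an empty bucket
        self.values = [None] * size

    def insert(self, key, value):
        # Assumes the table never fills up; a real implementation resizes.
        i = hash(key) & self.mask
        while self.keys[i] is not None and self.keys[i] != key:
            i = (i + 1) & self.mask        # linear probing: try the next bucket
        self.keys[i], self.values[i] = key, value

    def lookup(self, key):
        i = hash(key) & self.mask
        while True:
            if self.keys[i] == key:
                return self.values[i]      # matching key found
            if self.keys[i] is None:
                return None                # empty bucket: key does not exist
            i = (i + 1) & self.mask

table = ProbingTable()
table.insert("the quick brown", 0.25)
print(table.lookup("the quick brown"), table.lookup("missing n-gram"))
```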
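
Finally, one row quotes the boosting discussion in which the confidence αt is smoothed by adding a small ε to the weight sums W+ and W−. The displayed equation was lost in extraction; the sketch below assumes the standard smoothed form αt = ½ ln((W+ + ε)/(W− + ε)) from the AdaBoost literature.

```python
import math

def smoothed_confidence(w_plus, w_minus, eps=1e-4):
    """Smoothed AdaBoost confidence: 0.5 * ln((W+ + eps) / (W- + eps)).

    Adding eps to both weight sums keeps the value finite even when one
    of the sums is zero, which is what the smoothing is meant to prevent.
    """
    return 0.5 * math.log((w_plus + eps) / (w_minus + eps))

# Without smoothing, w_minus == 0 would give infinite confidence.
print(smoothed_confidence(0.3, 0.0))
```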