source_text (string, lengths 27–368) | label (int64, 0–1) | target_text (string, lengths 1–5.38k) |
---|---|---|
This assumption, however, is not inherent to type-based tagging models. | 0 | Top 5 / bottom 5 POS tags per model: Gold: NNP NN JJ CD NNS / RBS PDT # `` ,; 1TW: CD WRB NNS VBN NN / PRP$ WDT : MD .; +PRIOR: CD JJ NNS WP$ NN / -RRB- , $ '' .; +FEATS: JJ NNS CD NNP UH / , PRP$ # . ''. Table 5: Type-level English POS Tag Ranking: We list the top 5 and bottom 5 POS tags in the lexicon and the predictions of our models under the best hyperparameter setting. |
The authors show that PATB is similar to other treebanks but that annotation consistency remains low. | 0 | We report micro-averaged (whole corpus) and macro-averaged (per sentence) scores, and add a constraint on the removal of punctuation, which has a single tag (PUNC) in the ATB. |
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources. | 0 | This process is repeated 5 times by rotating the data appropriately. |
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process. | 0 | The P(T|ψ) distribution, in English for instance, should have very low mass for the DT (determiner) tag, since determiners are a very small portion of the vocabulary. |
They plan on extending instance-weighting to other standard SMT components and capture the degree of generality of phrase pairs. | 0 | This is consistent with the nature of these two settings: log-linear combination, which effectively takes the intersection of IN and OUT, does relatively better on NIST, where the domains are broader and closer together. |
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods. | 0 | For example, from the sentence "Mr. |
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure. | 0 | This variant is tested in the experiments below. |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | The human judges were presented with the following definition of adequacy and fluency, but no additional instructions: |
Their results show that their high-performance NER uses less training data than other systems. | 0 | In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI. |
Instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline. | 0 | Jiang and Zhai (2007) suggest the following derivation, making use of the true OUT distribution $p_o(s,t)$: $\hat{\theta} = \arg\max_{\theta} \sum_{s,t} p_f(s,t) \log p_{\theta}(s \mid t) = \arg\max_{\theta} \sum_{s,t} \frac{p_f(s,t)}{p_o(s,t)} p_o(s,t) \log p_{\theta}(s \mid t) \approx \arg\max_{\theta} \sum_{s,t} \frac{p_f(s,t)}{p_o(s,t)} c_o(s,t) \log p_{\theta}(s \mid t)$ (8), where each $f_i(s,t)$ is a feature intended to characterize the usefulness of $(s,t)$, weighted by $\lambda_i$. The mixing parameters and feature weights (collectively $\theta$) are optimized simultaneously using dev-set maximum likelihood as before: $\hat{\theta} = \arg\max_{\theta} \sum_{s,t} \tilde{p}(s,t) \log p(s \mid t; \theta)$. (This derivation is restated as a display after the table.) |
The approach assumes that the word reordering is restricted to a few positions in the source sentence. | 0 | The approach has been successfully tested on the 8 000-word Verbmobil task. |
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. | 0 | Other approaches encode sparsity as a soft constraint. |
This assumption, however, is not inherent to type-based tagging models. | 0 | Our empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints. |
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data. | 0 | Different sentence structure and rich target language morphology are two reasons for this. |
These clusters are computed using an SVD variant without relying on transitional structure. | 0 | Any opinions, findings, conclusions, or recommendations expressed in this paper are those of the authors, and do not necessarily reflect the views of the funding organizations. |
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold. | 0 | We are focusing on phrases which have two Named Entities (NEs), as those types of phrases are very important for IE applications. |
This paper presents KenLM: Faster and Smaller Language Model Queries. | 0 | We incur some additional memory cost due to storing state in each hypothesis, though this is minimal compared with the size of the model itself. |
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. | 0 | Our second point of comparison is with Graça et al. |
There are clustering approaches that assign a single POS tag to each word type. | 0 | 6 Results and Analysis. |
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP). | 0 | ∅ denotes the empty set, where no source sentence position is covered. |
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods. | 0 | For each domain, phrases which contain the same keyword are gathered to build a set of phrases (Step 3). |
In this paper, the authors argue that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | While our method also enforces a single tag per word constraint, it leverages the transition distribution encoded in an HMM, thereby benefiting from a richer representation of context. |
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs. | 0 | We collected around 300–400 judgements per judgement type (adequacy or fluency), per system, per language pair. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | Thus we are interested not in extraction, but actual generation from representations that may be developed to different degrees of granularity. |
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models. | 0 | (Other classes handled by the current system are discussed in Section 5.) |
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems. | 0 | This revealed interesting clues about the properties of automatic and manual scoring. |
This paper presents Exploiting Diversity in Natural Language Processing: Combining Parsers. | 0 | Recently, combination techniques have been investigated for part of speech tagging with positive results (van Halteren et al., 1998; Brill and Wu, 1998). |
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective treebanks is often neglected because of the relative scarcity of problematic constructions. | 0 | In the second part of the experiment, we applied the inverse transformation based on breadth-first search under the three different encoding schemes. |
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods. | 0 | buy - acquire (5) buy - agree (2) buy - purchase (5) buy - acquisition (7) buy - pay (2)* buy - buyout (3) buy - bid (2) acquire - purchase (2) acquire - acquisition (2) acquire - pay (2)* purchase - acquisition (4) purchase - stake (2)* acquisition - stake (2)* unit - subsidiary (2) unit - parent (5) It is clear that these links form two clusters which are mostly correct. |
They plan on extending instance-weighting to other standard SMT components and capture the degree of generality of phrase pairs. | 0 | The techniques we develop can be extended in a relatively straightforward manner to the more general case when OUT consists of multiple sub-domains. |
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | However, a recent study (Callison-Burch et al., 2006), pointed out that this correlation may not always be strong. |
Replacing this with a ranked evaluation seems to be more suitable. | 0 | A few annotators suggested to break up long sentences into clauses and evaluate these separately. |
It is annotated with several layers of data: morphology, syntax, rhetorical structure, connectives, coreference and information structure. | 0 | Clearly this poses a number of research challenges, though, such as the applicability of tag sets across different languages. |
They focused on phrases which contain two Named Entities, and proceeded in two stages. | 0 | As the two NE categories are the same, we can't differentiate phrases with different orders of participants: whether the buying company or the to-be-bought company comes first. |
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages. | 0 | Thus, the tree sets generated by HG's are similar to those of CFG's, with each node annotated by the operation (concatenation or wrapping) used to combine the headed strings derived by its daughters. Tree Adjoining Grammars, a tree rewriting formalism, were introduced by Joshi, Levy and Takahashi (1975) and Joshi (1983/85). |
The approach has been successfully tested on the 8 000-word Verbmobil task. | 0 | Here, we process only full-form words within the translation procedure. |
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones. | 0 | In Semitic languages the situation is very different. |
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems. | 0 | For example, if X and Y are coreferent, then both X and Y are considered to co-occur with the caseframe that extracts X as well as the caseframe that extracts Y. We will refer to the set of nouns that co-occur with a caseframe as the lexical expectations of the case- frame. |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | RandLM is the clear winner in RAM utilization, but is also slower and lower quality. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | Despite these limitations, a purely finite-state approach to Chinese word segmentation enjoys a number of strong advantages. |
All the texts were annotated by two people. | 0 | The Potsdam Commentary Corpus |
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result. | 0 | Table 3 contains the results for evaluating our systems on the test set (section 22). |
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance. | 0 | This has solutions: where pI(s|t) is derived from the IN corpus using relative-frequency estimates, and po(s|t) is an instance-weighted model derived from the OUT corpus. |
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP). | 0 | $(\{1, \ldots, m\} \setminus \{l_1\},\ l) \to (\{1, \ldots, m\} \setminus \{l, l_1, l_2\},\ l')$ |
This topic has been getting more attention, driven by the needs of various NLP applications. | 0 | For example, the phrase "'s New York-based trust unit," is not a paraphrase of the other phrases in the "unit" set. |
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table. | 0 | In this paper we study the problem of using a parallel corpus from a background domain (OUT) to improve performance on a target domain (IN) for which a smaller amount of parallel training material—though adequate for reasonable performance—is also available. |
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). | 0 | So the success of the algorithm may well be due to its success in maximizing the number of unlabeled examples on which the two decision lists agree. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | A greedy algorithm (or maximum-matching algorithm), GR: proceed through the sentence, taking the longest match with a dictionary entry at each point. (A minimal sketch of this matcher appears after the table.) |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | Approaches differ in the algorithms used for scoring and selecting the best path, as well as in the amount of contextual information used in the scoring process. |
The AdaBoost algorithm was developed for supervised learning. | 0 | context=x: the context for the entity. |
The use of global features has shown excellent result in the performance on MUC-6 and MUC-7 test data. | 0 | In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data. |
The PROBING data structure uses linear probing hash tables and is designed for speed (a toy sketch of linear probing appears after the table). | 0 | We attain these results using several optimizations: hashing, custom lookup tables, bit-level packing, and state for left-to-right query patterns. |
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far. | 0 | Finally, model GTv = 2 includes parent annotation on top of the various state-splits, as is done also in (Tsarfaty and Sima’an, 2007; Cohen and Smith, 2007). |
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | Chinese speakers may object to this form, since the suffix 们 men0 (PL) is usually restricted to. |
We chose one of four labels for each example: location, person, organization, or noise, where the noise category was used for items that were outside the three categories. | 0 | Yarowsky-cautious does not separate the spelling and contextual features, but does have a limit on the number of rules added at each stage. |
It is annotated with several layers of data: morphology, syntax, rhetorical structure, connectives, coreference and information structure. | 0 | As already pointed out in Section 2.4, current theories diverge not only on the number and definition of relations but also on aspects of structure, i.e., whether a tree is sufficient as a representational device or general graphs are required (and if so, whether any restrictions can be placed on these graphs' structures; cf. |
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages. | 0 | It can be shown that this objective is convex in q. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | so that 'door' would be ..., and in this case the hanzi ... does not represent a syllable. |
All the texts were annotated by two people. | 0 | Then, the remaining texts were annotated and cross-validated, always with discussions among the annotators. |
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Ma¨rkische Allgemeine Zeitung, a German regional daily. | 0 | A different notion of information structure, is used in work such as that of (?), who tried to characterize felicitous constituent ordering (theme choice, in particular) that leads to texts presenting information in a natural, âflowingâ way rather than with abrupt shifts of attention. |
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages. | 0 | For graph propagation, the hyperparameter v was set to $2 \times 10^{-6}$ and was not tuned. |
There is no global pruning. | 0 | For the translation experiments, Eq. 2 is recursively evaluated. |
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity. | 0 | These clusters are computed using an SVD variant without relying on transitional structure. |
BABAR's performance in both domains of terrorism and natural disaster, and the contextual-role knowledge in pronouns have shown successful results. | 0 | For example, if X and Y are coreferent, then both X and Y are considered to co-occur with the caseframe that extracts X as well as the caseframe that extracts Y. We will refer to the set of nouns that co-occur with a caseframe as the lexical expectations of the case- frame. |
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | Hence, the different averages of manual scores for the different language pairs reflect the behaviour of the judges, not the quality of the systems on different language pairs. |
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account. | 0 | A detailed description of the search procedure used is given in this patent. |
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models. | 0 | For a sequence of hanzi that is a possible name, we wish to assign a probability to that sequence qua name. |
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models. | 0 | Figure 1: A Chinese sentence in (a) ('How do you say octopus in Japanese?') illustrating the lack of word boundaries; (b) the plausible segmentation ri4wen2 'Japanese' zhang1yu2 'octopus' zen3me0 'how' shuo1 'say'; (c) the implausible segmentation ri4 'Japan' wen2zhang1 'essay' yu2 'fish' zen3me0 'how' shuo1 'say'. |
The second algorithm builds on a boosting algorithm called AdaBoost. | 0 | We make the assumption that for each example, both $x_{1,i}$ and $x_{2,i}$ alone are sufficient to determine the label $y_i$. |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | In the case of, the most common usage is as an adverb with the pronunciation jiangl, so that variant is assigned the estimated cost of 5.98, and a high cost is assigned to nominal usage with the pronunciation jiang4. |
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods. | 0 | For example, the two NEs "Eastern Group Plc" and "Hanson Plc" have the following contexts. |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | We extend the Stanford parser to accept pre-generated lattices, where each word is represented as a finite state automaton. |
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names (the standard Good-Turing estimate is recalled after the table). | 0 | In the numerator, however, the counts of $n_i$'s are quite irregular, including several zeros (e.g., RAT, none of whose members were seen). |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | A different knowledge source, called CFSemCFSem, compares the semantic expectations of the caseframe that extracts the anaphor with the semantic expectations of the caseframe that extracts the candidate. |
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP). | 0 | The sentence length probability $p(J \mid I)$ is omitted without any loss in performance. |
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | In this paper we have argued that Chinese word segmentation can be modeled effectively using weighted finite-state transducers. |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | Using $c_n$ to denote the number of n-grams, total memory consumption of TRIE, in bits, is ... plus quantization tables, if used. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | Our evaluation includes both weighted and un- weighted lattices. |
Their results show that their high-performance NER uses less training data than other systems. | 0 | The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules. |
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform. | 0 | The texts are editorials instead of speech transcripts. |
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources. | 0 | In sentence (1), McCann can be a person or an organization. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | This is orthographically represented as ... |
Their results show that their high-performance NER uses less training data than other systems. | 0 | If any of the tokens from ... to ... is in Person-Prefix-List, then another feature Person-Prefix is set to 1. |
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources. | 0 | The effect of UNIQ is very small on both data sets. |
A beam search concept is applied as in speech recognition. | 0 | This approach leads to a search procedure with complexity $O(E^3 \cdot J^4)$. |
Here we present two algorithms. | 0 | For example, take ..., says Maury Cooper, a vice president at S.&P. |
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold. | 0 | We evaluated the results based on two metrics. |
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with the results of state-of-the-art standalone applications. | 0 | The relativizer ש ("that"), for example, may attach to an arbitrarily long relative clause that goes beyond token boundaries. |
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account. | 0 | Only one of the first n positions which are not already aligned in a partial hypothesis may be chosen, where n is set to 4. |
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | We elected to run Moses single-threaded to minimize the impact of RandLM's cache on memory use. |
The AdaBoost algorithm was developed for supervised learning. | 0 | The algorithm builds two classifiers in parallel from labeled and unlabeled data. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | The segmentation chosen is the best path through the WFST, shown in (d). |
Instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline. | 0 | Matsoukas et al (2009) generalize it by learning weights on sentence pairs that are used when estimating relative-frequency phrase-pair probabilities. |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | context=x: the context for the entity. |
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result. | 0 | Similarly Figures 1 and 2 show how the isolated constituent precision varies by sentence length and the size of the span of the hypothesized constituent. |
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs. | 0 | Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1. |
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier. | 0 | context=x: the context for the entity. |
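
The row quoting Jiang and Zhai (2007) above contains a derivation whose equations were flattened during extraction; the inline reconstruction in that cell is restated here as a display. The symbol mapping (the garbled "0" read as θ, "Ai" as λ_i, "!" as argmax/summation markers) is an inference from the fragments, not a verbatim copy of the original paper:

```latex
% Reconstruction of the flattened derivation (eq. 8) and the dev-set
% maximum-likelihood tuning step from the same cell. Roles inferred from
% the fragments: p_f = feature-weighted distribution, p_o = true OUT
% distribution, c_o = empirical OUT counts, theta = model parameters.
\begin{aligned}
\hat{\theta} &= \arg\max_{\theta} \sum_{s,t} p_f(s,t)\,\log p_{\theta}(s \mid t) \qquad (8)\\
             &= \arg\max_{\theta} \sum_{s,t} \frac{p_f(s,t)}{p_o(s,t)}\, p_o(s,t)\,\log p_{\theta}(s \mid t)\\
             &\approx \arg\max_{\theta} \sum_{s,t} \frac{p_f(s,t)}{p_o(s,t)}\, c_o(s,t)\,\log p_{\theta}(s \mid t),\\
\hat{\theta} &= \arg\max_{\theta} \sum_{s,t} \tilde{p}(s,t)\,\log p(s \mid t;\theta).
\end{aligned}
```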
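One row above credits the Good-Turing method for estimating the likelihoods of previously unseen constructions. For reference, the textbook Good-Turing estimator; this states the standard formula, not the paper's exact class-based smoothing recipe:

```latex
% N_r = number of item types observed exactly r times, N = total tokens.
% r* is the adjusted count for items seen r times; the second formula is
% the total probability mass reserved for unseen items.
r^{*} = (r + 1)\,\frac{N_{r+1}}{N_{r}},
\qquad
P_{\mathrm{GT}}(\text{unseen}) = \frac{N_{1}}{N}
```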
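Another row describes the greedy maximum-matching segmenter: proceed through the sentence, taking the longest dictionary match at each point. A minimal Python sketch under two illustrative assumptions not in the original: the lexicon is a plain set of strings, and an unmatched position falls back to a single character:

```python
def max_match(sentence: str, dictionary: set, max_len: int = 10) -> list:
    """Greedy longest-match (maximum-matching) segmentation: at each
    position, take the longest dictionary entry starting there, falling
    back to a single character when nothing matches."""
    tokens = []
    i = 0
    while i < len(sentence):
        match = sentence[i]  # single-character fallback (illustrative choice)
        # Try candidate spans from longest to shortest.
        for j in range(min(len(sentence), i + max_len), i + 1, -1):
            if sentence[i:j] in dictionary:
                match = sentence[i:j]
                break
        tokens.append(match)
        i += len(match)
    return tokens

# Toy usage with pinyin strings standing in for hanzi:
lexicon = {"ri4wen2", "zhang1yu2", "ri4", "wen2zhang1", "yu2"}
print(max_match("ri4wen2zhang1yu2", lexicon))
# -> ['ri4wen2', 'zhang1yu2']
```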
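Finally, the KenLM rows mention the PROBING data structure, built on linear probing hash tables. A toy Python sketch of the underlying idea (open addressing with linear probing over integer keys); the capacity, the Python-list layout, and the key encoding are illustrative assumptions, far from KenLM's packed C++ arrays:

```python
class ProbingTable:
    """Toy open-addressing hash table with linear probing. Keys are
    assumed to be nonzero integers (e.g., 64-bit hashes of n-grams);
    0 marks an empty bucket."""

    def __init__(self, capacity: int = 1 << 16):
        self.capacity = capacity
        self.keys = [0] * capacity
        self.values = [None] * capacity

    def _slot(self, key: int) -> int:
        # Walk forward from the hashed bucket until we find the key
        # or hit an empty bucket.
        i = key % self.capacity
        while self.keys[i] != 0 and self.keys[i] != key:
            i = (i + 1) % self.capacity
        return i

    def insert(self, key: int, value) -> None:
        i = self._slot(key)
        self.keys[i] = key
        self.values[i] = value

    def get(self, key: int):
        i = self._slot(key)
        return self.values[i] if self.keys[i] == key else None

# Toy usage: store a log-probability under an n-gram's hash.
table = ProbingTable()
key = hash("saw the") & 0xFFFFFFFFFFFFFFFF
table.insert(key, -2.31)
print(table.get(key))  # -> -2.31
```

Lookup touches a contiguous run of buckets, which is why this scheme favors speed: a miss or hit is usually resolved within a single cache line when the table is kept sparse.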