source_text (string, lengths 27–368) | label (int64: 0 or 1) | target_text (string, lengths 1–5.38k) |
---|---|---|
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. | 0 | Conditioned on T, features of word types W are drawn. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | The particular classifier used depends upon the noun. |
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages. | 0 | For example, the formalisms in the hierarchy described above generate semilinear languages although their path sets become increasingly more complex as one moves up the hierarchy. |
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP). | 0 | Restrictions: Quasi-monotone Search The above search space is still too large to allow the translation of a medium length input sentence. |
This topic has been getting more attention, driven by the needs of various NLP applications. | 0 | This paper does not necessarily reflect the position of the U.S. Government. |
They found replacing it with a ranked evaluation to be more suitable. | 0 | The judgement of 4 in the first case will go to a vastly better system output than in the second case. |
It is probably the first analysis of Arabic parsing of this kind. | 0 | Gabbard and Kulick (2008) show that there is significant attachment ambiguity associated with iDafa, which occurs in 84.3% of the trees in our development set. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | Quantization can be improved by jointly encoding probability and backoff. |
This paper talks about Pseudo-Projective Dependency Parsing. | 0 | By applying an inverse transformation to the output of the parser, arcs with non-standard labels can be lowered to their proper place in the dependency graph, giving rise to non-projective structures. (1: The dependency graph has been modified to make the final period a dependent of the main verb instead of being a dependent of a special root node for the sentence.) |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | [Garbled WFST dictionary fragment: entries for zhong1 hua2 min2 guo2 'Republic of China', with costs _ADV: 5.98 and _NC: 4.41.] |
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers. | 0 | The results of this experiment can be seen in Table 5. |
Here we present two algorithms. | 0 | For t = 1, ..., T: |
Vijay-Shankar et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. | 0 | We insisted that each structure dominates a bounded number of (not necessarily adjacent) substrings. |
This topic has been getting more attention, driven by the needs of various NLP applications. | 0 | All the links in the “CC-domain” are shown in Step 4 in subsection 3.2. |
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | In words, the judgements are normalized, so that the average normalized judgement per judge is 3. |
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back. | 0 | While the Bootstrap method is slightly more sensitive, it is very much in line with the sign test on text blocks. |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | determined by the category of the word that follows it. |
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold. | 0 | We checked whether the discovered links are listed in WordNet. |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | In the graphs, system scores are indicated by a point, the confidence intervals by shaded areas around the point. |
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines. | 0 | The linear LM (lin lm), TM (lin tm) and MAP TM (map tm) used with non-adapted counterparts perform in all cases slightly worse than the log-linear combination, which adapts both LM and TM components. |
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories. | 0 | Given around 90,000 unlabeled examples, the methods described in this paper classify names with over 91% accuracy. |
The approach assumes that the word reordering is restricted to a few positions in the source sentence. | 0 | The translation of one position in the source sentence may be postponed for up to L = 3 source positions, and the translation of up to two source positions may be anticipated for at most R = 10 source positions. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | The PCC is not the result of a funded project. |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | The approach uses both spelling and contextual rules. |
This corpus has several advantages: it is annotated at different levels. | 0 | The price shocked her.), or same-kind (e.g., Her health insurance paid for the hospital fees, but the automobile insurance did not cover the repair.). |
In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution. | 0 | During coreference resolution, BABAR checks (1) whether the anaphor is among the lexical expectations for the caseframe that extracts the candidate antecedent, and (2) whether the candidate is among the lexical expectations for the caseframe that extracts the anaphor. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | But we follow the more direct adaptation of Evalb suggested by Tsarfaty (2006), who viewed exact segmentation as the ultimate goal. |
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not. | 0 | In addition to using the simple features directly, we also trained an SVM classifier with these features to distinguish between IN and OUT phrase pairs. |
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models. | 0 | In such cases we assign all of the estimated probability mass to the form with the most likely pronunciation (determined by inspection), and assign a very small probability (a very high cost, arbitrarily chosen to be 40) to all other variants. |
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language. | 0 | For unaligned words, we set the tag to the most frequent tag in the corresponding treebank. |
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines. | 0 | An alternate approximation to (8) would be to let wλ(s, t) directly approximate p̂I(s, t). |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | For example, ... fraud related to work on a federally funded sewage plant in Georgia In this case, Georgia is extracted: the NP containing it is a complement to the preposition in; the PP headed by in modifies the NP a federally funded sewage plant, whose head is the singular noun plant. |
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far. | 0 | The accuracy results for segmentation, tagging and parsing using our different models and our standard data split are summarized in Table 1. |
These clusters are computed using an SVD variant without relying on transitional structure. | 0 | For other languages, we use the CoNLL-X multilingual dependency parsing shared task corpora (Buchholz and Marsi, 2006) which include gold POS tags (used for evaluation). |
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far. | 0 | The aforementioned surface form bcl, for example, may also stand for the lexical item “onion”, a Noun. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using a caseframe network for anaphora resolution, and used information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | The terrorism examples reflect fairly obvious relationships: people who are murdered are killed; agents that “report” things also “add” and “state” things; crimes that are “perpetrated” are often later “condemned”. |
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming. | 0 | A detailed description of the search procedure used is given in this patent. |
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. | 0 | A more forceful approach for encoding sparsity is posterior regularization, which constrains the posterior to have a small number of expected tag assignments (Graça et al., 2009). |
A beam search concept is applied as in speech recognition. | 0 | The Verbmobil task is an appointment scheduling task. |
This corpus has several advantages: it is annotated at different levels. | 0 | Conversely, we can use the full rhetorical tree from the annotations and tune the co-reference module. |
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data. | 0 | Participants and other volunteers contributed about 180 hours of labor in the manual evaluation. |
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective tree banks is often neglected because of the relative scarcity of problematic constructions. | 0 | First, in section 4, we evaluate the graph transformation techniques in themselves, with data from the Prague Dependency Treebank and the Danish Dependency Treebank. |
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. | 0 | 5 67.3 55. |
Two general approaches are presented and two combination techniques are described for each approach. | 0 | Another way to interpret this is that less than 5% of the correct constituents are missing from the hypotheses generated by the union of the three parsers. |
They have made use of local and global features to deal with the instances of same token in a document. | 0 | The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document. |
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming. | 0 | An inverted alignment is defined as follows: inverted alignment: i → j = b_i. Target positions i are mapped to source positions b_i. |
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP). | 0 | Thus, the effects of spontaneous speech are present in the corpus, e.g. the syntactic structure of the sentence is rather less restricted, however the effect of speech recognition errors is not covered. |
The use of global features has shown excellent result in the performance on MUC-6 and MUC-7 test data. | 0 | Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier. |
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. | 0 | This assumption, however, is not inherent to type-based tagging models. |
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP). | 0 | When translating the sentence monotonically from left to right, the translation of the German finite verb 'kann', which is the left verbal brace in this case, is postponed until the German noun phrase 'mein Kollege' is translated, which is the subject of the sentence. |
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. | 0 | Evaluation Metrics We report three metrics to evaluate tagging performance. |
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back. | 0 | The average fluency judgement per judge ranged from 2.33 to 3.67, the average adequacy judgement ranged from 2.56 to 4.13. |
BABAR's performance in both domains of terrorism and natural disaster, and the contextual-role knowledge in pronouns have shown successful results. | 0 | Table 2 shows BABAR’s performance. |
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account. | 0 | Skipped (K): The translation of up to one word may be postponed. Verb (V): The translation of up to two words may be anticipated. |
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system. | 0 | All the links in the “CC-domain” are shown in Step 4 in subsection 3.2. |
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | Figure 3 shows a small fragment of the WFST encoding the dictionary, containing both entries for the form just discussed: zhong1hua2 min2guo2 (China Republic) 'Republic of China.' |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | (a) Uses lossy compression. (b) The 8-bit quantized variant returned incorrect probabilities as explained in Section 3. |
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers. | 0 | In addition, there are several approaches to non-projective dependency parsing that are still to be evaluated in the large (Covington, 1990; Kahane et al., 1998; Duchier and Debusmann, 2001; Holan et al., 2001; Hellwig, 2003). |
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | By taking the ratio of matching n-grams to the total number of n-grams in the system output, we obtain the precision pn for each n-gram order n. These values for n-gram precision are combined into a BLEU score: The formula for the BLEU metric also includes a brevity penalty for too short output, which is based on the total number of words in the system output c and in the reference r. BLEU is sensitive to tokenization. |
The approach has been successfully tested on the 8 000-word Verbmobil task. | 0 | mein 5. |
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories. | 0 | In order to minimize Zt, at each iteration the final algorithm should choose the weak hypothesis (i.e., a feature xt) which has values for W+ and W− that minimize Equ. (4). |
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far. | 0 | In order to pass these constraints onto the parser, the lexical rules in the grammar are of the form pi → (si, pi). Parameter Estimation: The grammar probabilities are estimated from the corpus using simple relative frequency estimates. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | Currently, some annotations (in particular the connectives and scopes) have already moved beyond the core corpus; the others will grow step by step. |
Replacing this with a ranked evaluation seems to be more suitable. | 0 | Since all long sentence translation are somewhat muddled, even a contrastive evaluation between systems was difficult. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | For verbs we add two features. |
Most IE researchers have been creating paraphrase knowledge by hand for specific tasks. | 0 | We can make several observations on the cause of errors. |
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM. | 0 | These packages are further described in Section 3. |
In this paper, the authors are of the opinion that the sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | 8 66.4 52. |
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures. | 0 | By contrast, when we turn to a comparison of the three encoding schemes it is hard to find any significant differences, and the overall impression is that it makes little or no difference which encoding scheme is used, as long as there is some indication of which words are assigned their linear head instead of their syntactic head by the projective parser. |
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process. | 0 | Evaluation Metrics We report three metrics to evaluate tagging performance. |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | Systran submitted their commercial rule-based system that was not tuned to the Europarl corpus. |
The PROBING data structure uses linear probing hash tables and is designed for speed. | 0 | Time includes all queries but excludes random number generation and data structure population. |
They showed that it was useful to abstract away from the details of the formalism and examine the nature of the derivation process as reflected by properties of the derivation trees, finding that several of the formalisms considered can be seen as closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars. | 0 | From the discussion so far it is clear that a number of formalisms involve some type of context-free rewriting (they have derivation trees that are local sets). |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | This is not to say that a set of standards by which a particular segmentation would count as correct and another incorrect could not be devised; indeed, such standards have been proposed and include the published PRCNSC (1994) and ROCLING (1993), as well as the unpublished Linguistic Data Consortium standards (ca. |
Vijay-Shankar et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. | 0 | The derivation trees of a MCTAG are similar to those of a TAG. |
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | Yet, some hanzi are far more probable in women's names than they are in men's names, and there is a similar list of male-oriented hanzi: mixing hanzi from these two lists is generally less likely than would be predicted by the independence model. |
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language. | 0 | Our oracles took advantage of the labeled treebanks: While we tried to minimize the number of free parameters in our model, there are a few hyperparameters that need to be set. |
They showed better grammars to improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones. | 0 | The development of the very first Hebrew Treebank (Sima’an et al., 2001) called for the exploration of general statistical parsing methods, but the application was at first limited. |
The AdaBoost algorithm was developed for supervised learning. | 0 | The core of Yarowsky's algorithm is as follows: where h is defined by the formula in equation 2, with counts restricted to training data examples that have been labeled in step 2. |
BABAR's performance in both domains of terrorism and natural disaster, and the contextual-role knowledge in pronouns have shown successful results. | 0 | Our coreference resolver also incorporates an existential noun phrase recognizer and a DempsterShafer probabilistic model to make resolution decisions. |
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models. | 0 | Some of these approaches (e.g., Lin, Chiang, and Su [1993]) attempt to identify unknown words, but do not ac tually tag the words as belonging to one or another class of expression. |
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm. | 0 | Label propagation is used to propagate these tags inwards and results in tag distributions for the middle word of each Italian trigram. |
They focused on phrases which connect two Named Entities, and proceeded in two stages. | 0 | Once we figure out the important word (e.g. keyword), we believe we can capture the meaning of the phrase by the keyword. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | mark- ContainsVerb is especially effective for distinguishing root S nodes of equational sentences. |
Here we present two algorithms. | 0 | In the next section we present an alternative approach that builds two classifiers while attempting to satisfy the above constraints as much as possible. |
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank. | 0 | We show the results of three of the experiments we conducted to measure isolated constituent precision under various partitioning schemes. |
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). | 0 | Each learner is free to pick the labels for these instances. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | We propose a limit of 70 words for Arabic parsing evaluations. |
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity. | 0 | In such cases we use the non-pruned lattice including all (possibly ungrammatical) segmentation, and let the statistics (including OOV) decide. |
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity. | 0 | Specifically, the lexicon is generated as: P(T, W | ψ) = P(T) P(W | T). Word Type Features (FEATS): Past unsupervised POS work has derived benefits from features on word types, such as suffix and capitalization features (Hasan and Ng, 2009; Berg-Kirkpatrick et al., 2010). |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | The intuition here is that the role of a discourse marker can usually be determined by the category of the word that follows it. (9: Both the corpus split and pre-processing code are available.) |
This paper presents a maximum entropy-based named entity recognizer (NER). | 0 | (1) CEO of McCann . . . |
They found replacing it with a ranked evaluation to be more suitable. | 0 | This actually happens quite frequently (more below), so that the rankings are broad estimates. |
This assumption, however, is not inherent to type-based tagging models. | 0 | With the exception of the Dutch data set, no other processing is performed on the annotated tags. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | The idea is to have a pipeline of shallow-analysis modules (tagging, chunk- ing, discourse parsing based on connectives) and map the resulting underspecified rhetorical tree (see Section 2.4) into a knowledge base that may contain domain and world knowledge for enriching the representation, e.g., to resolve references that cannot be handled by shallow methods, or to hypothesize coherence relations. |
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result. | 0 | We see from these results that the behavior of the parametric techniques are robust in the presence of a poor parser. |
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task. | 0 | Starting from a DP-based solution to the traveling salesman problem, we present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm. |
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language. | 0 | When extracting the vector t, used to compute the constraint feature from the graph, we tried three threshold values for r (see Eq. |
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity. | 0 | (2010)’s richest model: optimized via either EM or LBFGS, as their relative performance depends on the language. |
The manual evaluation of scoring translation on a graded scale from 1â5 seems to be very hard to perform. | 0 | We received submissions from 14 groups from 11 institutions, as listed in Figure 2. |
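The table above follows the schema declared in the header: a `source_text` string, an integer `label` restricted to 0 or 1, and a `target_text` string. As a minimal, stdlib-only sketch of that schema, the snippet below defines a row type and a validation helper; the `Row` class and `validate` function are illustrative names, not part of the dataset itself, and only the two sample rows are copied from the table.

```python
from dataclasses import dataclass

# Minimal sketch of the row schema implied by the table header:
# source_text (string), label (int64: 0 or 1), target_text (string).
@dataclass
class Row:
    source_text: str
    label: int
    target_text: str

# Two sample rows copied verbatim from the table above.
rows = [
    Row("This paper talks about Exploiting Diversity in Natural Language "
        "Processing: Combining Parsers.", 0,
        "The results of this experiment can be seen in Table 5."),
    Row("A beam search concept is applied as in speech recognition.", 0,
        "The Verbmobil task is an appointment scheduling task."),
]

def validate(row: Row) -> bool:
    """Check one row against the declared column types and value ranges."""
    return (isinstance(row.source_text, str) and len(row.source_text) > 0
            and row.label in (0, 1)
            and isinstance(row.target_text, str) and len(row.target_text) > 0)

# Every row in the table should satisfy the schema.
assert all(validate(r) for r in rows)
print(f"{len(rows)} rows validated")
```

A loader for the full dataset would apply the same check per row; rows failing it (e.g. a label outside {0, 1}) would be rejected before training.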