source_text (string, lengths 27–368) | label (int64, 0–1) | target_text (string, lengths 1–5.38k) |
---|---|---|
It is annotated with several types of data: morphology, syntax, rhetorical structure, connectors, coreference and information structure. | 0 | All commentaries have been tagged with part-of-speech information using Brants' TnT1 tagger and the Stuttgart/Tübingen Tag Set automatic analysis was responsible for this decision.) |
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not. | 0 | This variant is tested in the experiments below. |
This topic has been getting more attention, driven by the needs of various NLP applications. | 0 | It is natural that the larger the data in the domain, the more keywords are found. |
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP). | 0 | Depending on the threshold t0, the search algorithm may miss the globally optimal path which typically results in additional translation errors. |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | It then computes a normalized Levenshtein edit distance between the extracted chain and the reference. |
In this paper, the authors are of the opinion that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | With the exception of the Dutch data set, no other processing is performed on the annotated tags. |
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines. | 0 | The paper is structured as follows. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | However, this result is consistent with the results of ex periments discussed in Wu and Fung (1994). |
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs. | 0 | Different sentence structure and rich target language morphology are two reasons for this. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | Thus in an English sentence such as I'm going to show up at the ACL one would reasonably conjecture that there are eight words separated by seven spaces. |
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank. | 0 | For this reason, naïve Bayes classifiers are well-matched to this problem. |
The resulting model is compact, efficiently learnable and linguistically expressive. | 0 | 2 62.6 45. |
This corpus has several advantages: it is annotated at different levels. | 0 | Thus it is possible, for illustration, to look for a noun phrase (syntax tier) marked as topic (information structure tier) that is in a bridging relation (co-reference tier) to some other noun phrase. |
Here both parametric and non-parametric models are explored. | 0 | While we cannot prove there are no such useful features on which one should condition trust, we can give some insight into why the features we explored offered no gain. |
In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution. | 0 | Ex: The regime gives itself the right... |
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines. | 0 | For instance, the sentence Similar improvements in haemoglobin levels were reported in the scientific literature for other epoetins would likely be considered domain-specific despite the presence of general phrases like were reported in. |
They showed that it was useful to abstract away from the details of the formalism and examine the nature of the derivation process as reflected by properties of their trees, and found that several of the formalisms considered can be seen as being closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars. | 0 | We contrasted formalisms such as CFG's, HG's, TAG's and MCTAG's, with formalisms such as IG's and unificational systems such as LFG's and FUG's. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | Though we are not able to calculate their memory usage on our model, results reported in their paper suggest lower memory consumption than TRIE on large-scale models, at the expense of CPU time. |
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound. | 0 | In sequential tagging models such as (Adler and Elhadad, 2006; Bar-Haim et al., 2007; Smith et al., 2005) weights are assigned according to a language model. The input for the joint task is a sequence W = w1, ... , wn of space-delimited tokens. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | For illustration, an English translation of one of the commentaries is given in Figure 1. |
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank. | 0 | While we cannot prove there are no such useful features on which one should condition trust, we can give some insight into why the features we explored offered no gain. |
The texts were annotated with the RSTtool. | 0 | 9 www.ling.unipotsdam.de/sfb/ Figure 2: Screenshot of Annis Linguistic Database 3.3 Symbolic and knowledge-based. |
This paper conducted research in the area of automatic paraphrase discovery. | 0 | For example, in Information Retrieval (IR), we have to match a user's query to the expressions in the desired documents, while in Question Answering (QA), we have to find the answer to the user's question even if the formulation of the answer in the document is different from the question. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | Nonetheless, parse quality is much lower in the joint model because a lattice is effectively a long sentence. |
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems. | 0 | This work was supported in part under the GALE program of the Defense Advanced Research Projects Agency, Contract No. |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | 83 77. |
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | com t 600 Mountain Avenue, 2c278, Murray Hill, NJ 07974, USA. |
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models. | 0 | 7.96 5.55 1 l...................................................................................................................................................................................................J.. |
They have made use of local and global features to deal with the instances of same token in a document. | 0 | We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources. |
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure. | 0 | Recent work by Finkel and Manning (2009) which re-casts Daumé's approach in a hierarchical MAP framework may be applicable to this problem. |
Vijay-Shankar et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. | 0 | As discussed in Section 3, independent paths in tree sets, rather than the path complexity, may be crucial in characterizing semilinearity and polynomial time recognition. |
BABAR showed successful results in both the terrorism and natural disaster domains, and contextual-role knowledge proved especially helpful for resolving pronouns. | 0 | A different knowledge source, called CFSem, compares the semantic expectations of the caseframe that extracts the anaphor with the semantic expectations of the caseframe that extracts the candidate. |
The PROBING data structure uses linear probing hash tables and is designed for speed. | 0 | While the paper mentioned a sorted variant, code was never released. |
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). | 0 | (Hearst 92) describes a method for extracting hyponyms from a corpus (pairs of words in "isa" relations). |
Two general approaches are presented and two combination techniques are described for each approach. | 0 | For example, one parser could be more accurate at predicting noun phrases than the other parsers. |
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be. | 0 | Although these authors report better gains than ours, they are with respect to a non-adapted baseline. |
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language. | 0 | We use a squared loss to penalize neighboring vertices that have different label distributions: ‖qi − qj‖² = Σy (qi(y) − qj(y))², and additionally regularize the label distributions towards the uniform distribution U over all possible labels Y. |
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP). | 0 | Here, $ is the sentence boundary symbol, which is thought to be at position 0 in the target sentence. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | Similarly, there is no compelling evidence that either of the syllables of 檳榔 bin1lang2 'betelnut' represents a morpheme, since neither can occur in any context without the other: more likely 檳榔 bin1lang2 is a disyllabic morpheme. |
A beam search concept is applied as in speech recognition. | 0 | 2. |
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results. | 0 | The attachment in such cases encompasses a long distance dependency that cannot be captured by Markovian processes that are typically used for morphological disambiguation. |
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models. | 0 | A minimal requirement for building a Chinese word segmenter is obviously a dictionary; furthermore, as has been argued persuasively by Fung and Wu (1994), one will perform much better at segmenting text by using a dictionary constructed with text of the same genre as the text to be segmented. |
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs. | 0 | At first glance, we quickly recognize that many systems are scored very similarly, both in terms of manual judgement and BLEU. |
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | Hence, we use the bootstrap resampling method described by Koehn (2004). |
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not. | 0 | Other work includes transferring latent topic distributions from source to target language for LM adaptation (Tam et al., 2007) and adapting features at the sentence level to different categories of sentence (Finch and Sumita, 2008). |
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data. | 0 | The BLEU score has been shown to correlate well with human judgement, when statistical machine translation systems are compared (Doddington, 2002; Przybocki, 2004; Li, 2005). |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | This work was supported in part under the GALE program of the Defense Advanced Research Projects Agency, Contract No. |
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers. | 0 | Let s = a + b. |
They focused on phrases which contain two Named Entities, and proceeded in two stages. | 0 | Step 2. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | handled given appropriate models. |
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account. | 0 | The baseline alignment model does not permit that a source word is aligned to two or more target words, e.g. for the translation direction from German to English, the German compound noun 'Zahnarzttermin' causes problems, because it must be translated by the two target words dentist's appointment. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | Parent Head Modifier Dir # gold F1 Label # gold F1 NP NP TAG R 946 0.54 ADJP 1216 59.45 S S S R 708 0.57 SBAR 2918 69.81 NP NP ADJP R 803 0.64 FRAG 254 72.87 NP NP NP R 2907 0.66 VP 5507 78.83 NP NP SBAR R 1035 0.67 S 6579 78.91 NP NP PP R 2713 0.67 PP 7516 80.93 VP TAG PP R 3230 0.80 NP 34025 84.95 NP NP TAG L 805 0.85 ADVP 1093 90.64 VP TAG SBAR R 772 0.86 WHNP 787 96.00 S VP NP L 961 0.87 (a) Major phrasal categories (b) Major POS categories (c) Ten lowest scoring (Collins, 2003)-style dependencies occurring more than 700 times Table 8: Per category performance of the Berkeley parser on sentence lengths ≤ 70 (dev set, gold segmentation). |
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | Mohri [1995]) shows promise for improving this situation. |
Explanations for this phenomenon are relative informativeness of lexicalization, insensitivity to morphology and the effect of variable word order and these factors lead to syntactic disambiguation. | 0 | Because conjunctions are elevated in the parse trees when they separate recursive constituents, we choose the right sister instead of the category of the next word. |
Vijay-Shankar et all considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. | 0 | Members of LCFRS whose operations have this property can be translated into the ILFP notation (Rounds, 1985). |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using a caseframe network for anaphora resolution, and used information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | In (b) “they” refers to the kidnapping victims, but in (c) “they” refers to the armed men. |
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages. | 0 | Our monolingual similarity function (for connecting pairs of foreign trigram types) is the same as the one used by Subramanya et al. (2010). |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | Since guess and gold trees may now have different yields, the question of evaluation is complex. |
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems. | 0 | We check, how likely only up to k = 20 better scores out of n = 100 would have been generated by two equal systems, using the binomial distribution: If p(0..k; n, p) < 0.05, or p(0..k; n, p) > 0.95 then we have a statistically significant difference between the systems. |
Replacing this with a ranked evaluation seems to be more suitable. | 0 | We divide up each test set into blocks of 20 sentences (100 blocks for the in-domain test set, 53 blocks for the out-of-domain test set), check for each block, if one system has a higher BLEU score than the other, and then use the sign test. |
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). | 0 | With each iteration more examples are assigned labels by both classifiers, while a high level of agreement (> 94%) is maintained between them. |
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers. | 0 | Two general approaches are presented and two combination techniques are described for each approach. |
This corpus has several advantages: it is annotated at different levels. | 0 | Two annotators received training with the RST definitions and started the process with a first set of 10 texts, the results of which were intensively discussed and revised. |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | There is twice as much language modelling data, since training data for the machine translation system is filtered against sentences of length larger than 40 words. |
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP). | 0 | Note that in line 4 the last visited position for the successor hypothesis must be m. Otherwise, there will be four uncovered positions for the predecessor hypothesis violating the restriction. |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | A contextual rule considers words surrounding the string in the sentence in which it appears (e.g., a rule that any proper name modified by an appositive whose head is president is a person). |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | In many cases these failures in recall would be fixed by having better estimates of the actual probabilities of single-hanzi words, since our estimates are often inflated. |
They showed that it was useful to abstract away from the details of the formalism and examine the nature of the derivation process as reflected by properties of their trees, and found that several of the formalisms considered can be seen as being closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars. | 0 | A TAG consists of a finite set of elementary trees that are either initial trees or auxiliary trees. |
They found replacing it with a ranked evaluation to be more suitable. | 0 | We confirm the finding by Callison-Burch et al. (2006) that the rule-based system of Systran is not adequately appreciated by BLEU. |
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data. | 0 | Our suspicion is that BLEU is very sensitive to jargon, to selecting exactly the right words, and not synonyms that human judges may appreciate as equally good. |
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | Other strategies could readily 6 As a reviewer has pointed out, it should be made clear that the function for computing the best path is an instance of the Viterbi algorithm. |
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform. | 0 | Hence, we use the bootstrap resampling method described by Koehn (2004). |
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers. | 0 | Adding the isolated constituents to our hypothesis parse could increase our expected recall, but in the cases we investigated it would invariably hurt our precision more than we would gain on recall. |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | Given parameter estimates, the label for a test example x is defined as We should note that the model in equation 9 is deficient, in that it assigns greater than zero probability to some feature combinations that are impossible. |
They have made use of local and global features to deal with the instances of the same token in a document. | 0 | (3) In sentence (1), McCann can be a person or an organization. |
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models. | 0 | 98 15. |
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language. | 0 | While it is possible to derive a closed form solution for this convex objective function, it would require the inversion of a matrix of order |Vf|. |
They focused on phrases which contain two Named Entities, and proceeded in two stages. | 0 | We are focusing on phrases which have two Named Entities (NEs), as those types of phrases are very important for IE applications. |
The resulting model is compact, efficiently learnable and linguistically expressive. | 0 | The equation for sampling a single type-level assignment Ti is given by, [Figure 2: Graph of the one-to-one accuracy of our full model (+FEATS) under the best hyperparameter setting by iteration (see Section 5).] |
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models. | 0 | For E(ni1s), then, we substitute a smooth S against the number of class elements. |
Their results show that their high performance NER use less training data than other systems. | 0 | In MUC6, the best result is achieved by SRA (Krupka, 1995). |
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far. | 0 | A possible probabilistic model for assigning probabilities to complex analyses of a surface form may be and indeed recent sequential disambiguation models for Hebrew (Adler and Elhadad, 2006) and Arabic (Smith et al., 2005) present similar models. |
Replacing this with a ranked evaluation seems to be more suitable. | 0 | Our initial experimentation with the evaluation tool showed that this is often too overwhelming. |
It is probably the first analysis of Arabic parsing of this kind. | 0 | 83 77. |
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages. | 0 | The number of paths that can be dependent is bounded by the grammar (in fact the maximum cardinality of a tree set determines this bound). |
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models. | 0 | This can be seen as a rough approximation of Yarowsky and Ngai (2001). |
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table. | 0 | Jiang and Zhai (2007) suggest the following derivation, making use of the true OUT distribution po(s, t): θ̂ = argmaxθ Σs,t pf(s, t) log pθ(s|t) (8) = argmaxθ Σs,t (pf(s, t)/po(s, t)) po(s, t) log pθ(s|t) ≈ argmaxθ Σs,t (pf(s, t)/po(s, t)) co(s, t) log pθ(s|t), where each fi(s, t) is a feature intended to characterize the usefulness of (s, t), weighted by λi. The mixing parameters and feature weights (collectively θ) are optimized simultaneously using dev-set maximum likelihood as before: θ̂ = argmaxθ Σs,t p̃(s, t) log p(s|t; θ). |
Here we present two algorithms. | 0 | (If fewer than n rules have Precision greater than pmin, we keep only those rules which exceed the precision threshold.)3 pmin was fixed at 0.95 in all experiments in this paper. 3 Note that taking the top n most frequent rules already makes the method robust to low count events, hence we do not use smoothing, allowing low-count high-precision features to be chosen on later iterations. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using a caseframe network for anaphora resolution, and used information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | This solution also obviates the need to perform word sense disambiguation. |
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold. | 0 | Keyword detection error Even if a keyword consists of a single word, there are words which are not desirable as keywords for a domain. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | This is an issue that we have not addressed at the current stage of our research. |
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones. | 0 | Thus our proposed model is a proper model assigning probability mass to all (π, L) pairs, where π is a parse tree and L is the one and only lattice that a sequence of characters (and spaces) W over our alphabet gives rise to. |
A beam search concept is applied as in speech recognition. | 0 | In this section, we briefly review our translation approach. |
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data. | 0 | While automatic measures are an invaluable tool for the day-to-day development of machine translation systems, they are only a imperfect substitute for human assessment of translation quality, or as the acronym BLEU puts it, a bilingual evaluation understudy. |
Vijay-Shankar et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. | 0 | We have studied the structural descriptions (tree sets) that can be assigned by various grammatical systems, and classified these formalisms on the basis of two features: path complexity; and path independence. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | 95 B a s e li n e ( S e lf t a g ) 70 a l l B i k e l ( v 1 . 2 ) B a s e l i n e ( P r e t a g ) 7 0 a l l G o l d P O S 70 0.7 70 0.801 278 0.7 52 0.794 278 0.7 71 0.804 295 0.7 52 0.796 295 0.7 75 0.808 309 77. |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | In our implementation, we make perhaps the simplest choice of weak hypothesis. |
Explanations for this phenomenon are relative informativeness of lexicalization, insensitivity to morphology and the effect of variable word order and these factors lead to syntactic disambiguation. | 0 | 5.2 Discussion. |
It is annotated with several types of data: morphology, syntax, rhetorical structure, connectors, coreference and information structure. | 0 | When the connective is an adverbial, there is much less clarity as to the range of the spans. |
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). | 0 | In the appositive case, the contextual predictor was the head of the modifying appositive (president in the Maury Cooper example); in the second case, the contextual predictor was the preposition together with the noun it modifies (plant_in in the Georgia example). |
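The rows above follow the schema declared in the header: a `source_text` string, a binary `label`, and a `target_text` string, which is the usual shape for a sentence-pair classification dataset. Below is a minimal sketch of how a split with this schema could be loaded and inspected with the Hugging Face `datasets` library; the repository path `user/source-target-pairs` is a placeholder, since the actual dataset name is not shown on this page.

```python
from datasets import load_dataset

# Placeholder repository name; substitute the real dataset path.
ds = load_dataset("user/source-target-pairs", split="train")

# Inspect the declared schema: source_text (string), label (int64), target_text (string).
print(ds.features)

# Sanity checks mirroring the column statistics in the preview header:
# character-length ranges of the two text columns and the set of label values.
src_lens = [len(ex["source_text"]) for ex in ds]
tgt_lens = [len(ex["target_text"]) for ex in ds]
print("source_text length range:", min(src_lens), "-", max(src_lens))
print("target_text length range:", min(tgt_lens), "-", max(tgt_lens))
print("label values:", sorted(set(ds["label"])))

# Example: keep only the positive pairs (label == 1).
positives = ds.filter(lambda ex: ex["label"] == 1)
print(len(positives), "positive source/target pairs")
```

Note that every row in the preview sample shown here carries label 0, so on this sample the final filter would return no rows; positive pairs would only appear in the full split.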