source_text (stringlengths 27–368) | label (int64 0–1) | target_text (stringlengths 1–5.38k) |
---|---|---|
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | immediately by a Romanization into the pinyin transliteration scheme; numerals following each pinyin syllable represent tones. |
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models. | 0 | 2. |
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold. | 0 | First, we will describe their method and compare it with our method. |
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold. | 0 | We concentrate on those sets. |
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm. | 0 | The 13 positions of the source sentence are processed in the order shown. |
The texts were annotated with the RSTtool. | 0 | The commentaries in PCC are all of roughly the same length, ranging from 8 to 10 sentences. |
BABAR achieved successful results in both the terrorism and natural disaster domains, and contextual-role knowledge proved especially helpful for resolving pronouns. | 0 | 2.2.2 The Caseframe Network The first type of contextual role knowledge that BABAR learns is the Caseframe Network (CFNet), which identifies caseframes that co-occur in anaphor/antecedent resolutions. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | An analysis of nouns that occur in both the singular and the plural in our database reveals that there is indeed a slight but significant positive correlation (R2 = 0.20, p < 0.005; see Figure 6). |
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs. | 0 | This is the first time that we organized a large-scale manual evaluation. |
They focused on phrases which have two Named Entities, and proceeded in two stages. | 0 | Applications The discovered paraphrases have multiple applications. |
There are clustering approaches that assign a single POS tag to each word type. | 0 | Across all languages, +PRIOR consistently outperforms 1TW, reducing error on average by 9.1% and 5.9% on best and median settings respectively. |
BABAR achieved successful results in both the terrorism and natural disaster domains, and contextual-role knowledge proved especially helpful for resolving pronouns. | 0 | The second case involves existential noun phrases (Allen, 1995), which are noun phrases that uniquely specify an object or concept and therefore do not need a prior referent in the discourse. |
In this paper, the authors are of the opinion that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | We thank members of the MIT NLP group for their suggestions and comments. |
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model. | 0 | Table 2 shows our complete set of results. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using a caseframe network for anaphora resolution, and used information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | In this situation, BABAR takes the conservative approach and declines to make a resolution. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | This is similar to using the Linux MAP_POPULATE flag that is our default loading mechanism. |
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP). | 0 | For à = 1, a new target language word is generated using the trigram language model p(e|e′, e′′). |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. |
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity. | 0 | We refer to (T, W) as the lexicon of a language and ψ for the parameters for their generation; ψ depends on a single hyperparameter β. |
Two general approaches are presented and two combination techniques are described for each approach. | 0 | In Table 1 we see with very few exceptions that the isolated constituent precision is less than 0.5 when we use the constituent label as a feature. |
We chose one of four labels for each example: location, person, organization, or noise, where the noise category was used for items that were outside the three categories. | 0 | So the success of the algorithm may well be due to its success in maximizing the number of unlabeled examples on which the two decision lists agree. |
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University. | 0 | There is a “core corpus” of ten commentaries, for which the range of information (except for syntax) has been completed; the remaining data has been annotated to different degrees, as explained below. |
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. | 0 | A multicomponent Tree Adjoining Grammar (MCTAG) consists of a finite set of finite elementary tree sets. |
The approach assumes that the word reordering is restricted to a few positions in the source sentence. | 0 | The search starts in hypothesis ({}, 0) and ends in the hypotheses ({1, ..., J}, j), with j ∈ {1, ..., J}. |
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm. | 0 | We trained this model by optimizing the following objective function: Note that this involves marginalizing out all possible state configurations z for a sentence x, resulting in a non-convex objective. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | Some approaches depend upon some form of constraint satisfaction based on syntactic or semantic features (e.g., Yeh and Lee [1991], which uses a unification-based approach). |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | In this section, we describe how contextual role knowledge is represented and learned. |
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure. | 0 | The precision and recall of similarity switching and constituent voting are both significantly better than the best individual parser, and constituent voting is significantly better than parser switching in precision.4 Constituent voting gives the highest accuracy for parsing the Penn Treebank reported to date. |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | For example, syntactic decoders (Koehn et al., 2007; Dyer et al., 2010; Li et al., 2009) perform dynamic programming parametrized by both backward- and forward-looking state. |
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | This is because different judges focused on different language pairs. |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | This is most severe with RandLM in the multi-threaded case, where each thread keeps a separate cache, exceeding the original model size. |
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones. | 0 | We report the F1 value of both measures. |
This paper presents a maximum entropy-based named entity recognizer (NER). | 0 | This process is repeated 5 times by rotating the data appropriately. |
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech. | 0 | Still, from a theoretical point of view, projective parsing of non-projective structures has the drawback that it rules out perfect accuracy even as an asymptotic goal. |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | The results of the manual and automatic evaluation of the participating system translations is detailed in the figures at the end of this paper. |
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming. | 0 | A summary of the corpus used in the experiments is given in Table 3. |
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table. | 0 | Its success depends on the two domains being relatively close, and on the OUT corpus not being so large as to overwhelm the contribution of IN. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | Clearly, for judges J1 and J2, taking J1 as standard and computing the precision and recall for J2 yields the same results as taking J2 as the standard, and computing for J1. 14 All evaluation materials, with the exception of those used for evaluating personal names were drawn. |
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. | 0 | The state-to-tag mapping is obtained from the best hyperparameter setting for 1-1 mapping shown in Table 3. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | Full Chinese personal names are in one respect simple: they are always of the form family+given. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | For eight judges, ranging k between 1 and 8 corresponded to a precision score range of 90% to 30%, meaning that there were relatively few words (30% of those found by the automatic segmenter) on which all judges agreed, whereas most of the words found by the segmenter were such that one human judge agreed. |
The approach assumes that the word reordering is restricted to a few positions in the source sentence. | 0 | Future extensions of the system might include: 1) An extended translation model, where we use more context to predict a source word. |
Here both parametric and non-parametric models are explored. | 0 | F-measure is the harmonic mean of precision and recall, 2PR/(P + R). |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | The segmenter handles the grouping of hanzi into words and outputs word pronunciations, with default pronunciations for hanzi it cannot group; we focus here primarily on the system's ability to segment text appropriately (rather than on its pronunciation abilities). |
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold. | 0 | We are focusing on phrases which have two Named Entities (NEs), as those types of phrases are very important for IE applications. |
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models. | 0 | We therefore used the arithmetic mean of each interjudge precision-recall pair as a single measure of interjudge similarity. |
They showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | handled given appropriate models. |
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University. | 0 | This concerns on the one hand the basic question of retrieval, i.e. searching for information across the annotation layers (see 3.1). |
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language. | 0 | In graph-based learning approaches one constructs a graph whose vertices are labeled and unlabeled examples, and whose weighted edges encode the degree to which the examples they link have the same label (Zhu et al., 2003). |
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources. | 0 | of Tokens MENERGI 318 160,000 200 180,000 IdentiFinder – 650,000 – 790,000 MENE – – 350 321,000 Table 4: Training Data. MUC7 test accuracy.2 For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%. |
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure. | 0 | To set β, we used the same criterion as for α, over a dev corpus: The MAP combination was used for TM probabilities only, in part due to a technical difficulty in formulating coherent counts when using standard LM smoothing techniques (Kneser and Ney, 1995).3 Motivated by information retrieval, a number of approaches choose “relevant” sentence pairs from OUT by matching individual source sentences from IN (Hildebrand et al., 2005; Lü et al., 2007), or individual target hypotheses (Zhao et al., 2004). |
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones. | 0 | Oracle results). |
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform. | 0 | The BLEU score has been shown to correlate well with human judgement, when statistical machine translation systems are compared (Doddington, 2002; Przybocki, 2004; Li, 2005). |
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be. | 0 | This is not unreasonable given the application to phrase pairs from OUT, but it suggests that an interesting alternative might be to use a plain log-linear weighting function exp(Σi λi fi(s, t)), with outputs in [0, ∞]. |
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform. | 0 | If one system is perfect, another has slight flaws and the third more flaws, a judge is inclined to hand out judgements of 5, 4, and 3. |
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back. | 0 | However, it is also mostly political content (even if not focused on the internal workings of the European Union) and opinion. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | Since the transducers are built from human-readable descriptions using a lexical toolkit (Sproat 1995), the system is easily maintained and extended. |
The instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline. | 0 | The techniques we develop can be extended in a relatively straightforward manner to the more general case when OUT consists of multiple sub-domains. |
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming. | 0 | Decreasing the threshold results in higher mWER due to additional search errors. |
The instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline. | 0 | A similar maximum-likelihood approach was used by Foster and Kuhn (2007), but for language models only. |
They focused on phrases which have two Named Entities, and proceeded in two stages. | 0 | When a company buys another company, a paying event can occur, but these two phrases do not indicate the same event. |
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University. | 0 | We developed a first version of annotation guidelines for co-reference in PCC (Gross 2003), which served as basis for annotating the core corpus but have not been empirically evaluated for inter-annotator agreement yet. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | Since 170 annotated texts constitute a fairly small training set, Reitter found that an overall recognition accuracy of 39% could be achieved using his method. |
This paper talks about Pseudo-Projective Dependency Parsing. | 0 | For Experiment 1 it is meaningless as a baseline, since it would result in 0% accuracy. mation on path labels but drop the information about the syntactic head of the lifted arc, using the label d↑ instead of d↑h (AuxP↑ instead of AuxP↑Sb). |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | Figure 2: An abstract example illustrating the segmentation algorithm, BestPath(Id(I) ∘ D*). |
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution. | 0 | As a (crude) approximation, we normalize the extraction patterns with respect to active and passive voice and label those extractions as agents or patients. |
The texts were annotated with the RSTtool. | 0 | The portions of information in the large window can be individually clicked visible or invisible; here we have chosen to see (from top to bottom) • the full text, • the annotation values for the activated annotation set (co-reference), • the actual annotation tiers, and • the portion of text currently “in focus” (which also appears underlined in the full text). |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | For alif with hamza, normalization can be seen as another level of devocalization. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using a caseframe network for anaphora resolution, and used information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | The difference in performance between pronouns and definite noun phrases surprised us. |
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance. | 0 | This is a standard adaptation problem for SMT. |
Their results show that their high performance NER uses less training data than other systems. | 0 | This might be because our features are more comprehensive than those used by Borthwick. |
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | This work was supported in part under the GALE program of the Defense Advanced Research Projects Agency, Contract No. |
They plan on extending instance-weighting to other standard SMT components and capture the degree of generality of phrase pairs. | 0 | The toplevel weights are trained to maximize a metric such as BLEU on a small development set of approximately 1000 sentence pairs. |
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | name => 2 hanzi family 2 hanzi given 5. |
Explanations for this phenomenon are relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors lead to syntactic disambiguation. | 0 | (2006). |
They proposed an unsupervised method to discover paraphrases from a large untagged corpus. | 0 | It is a relatively frequent word in the domain, but it can be used in different extraction scenarios. |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | Table 7: Test set results. |
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. | 0 | The equation for sampling a single type-level assignment Ti is given below. Figure 2: Graph of the one-to-one accuracy of our full model (+FEATS) under the best hyperparameter setting by iteration (see Section 5). |
A beam search concept is applied as in speech recognition. | 0 | Additionally, it works about 3 times as fast as the IBM style search. |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | Also, the argument has been made that machine translation performance should be evaluated via task-based evaluation metrics, i.e. how much it assists performing a useful task, such as supporting human translators or aiding the analysis of texts. |
They focused on phrases which have two Named Entities, and proceeded in two stages. | 0 | In this section, we will explain the algorithm step by step with examples. |
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones. | 0 | A possible probabilistic model for assigning probabilities to complex analyses of a surface form may be and indeed recent sequential disambiguation models for Hebrew (Adler and Elhadad, 2006) and Arabic (Smith et al., 2005) present similar models. |
Their results show that their high performance NER uses less training data than other systems. | 0 | In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching. |
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data. | 0 | While we had up to 11 submissions for a translation direction, we did decide against presenting all 11 system outputs to the human judge. |
BABAR achieved successful results in both the terrorism and natural disaster domains, and contextual-role knowledge proved especially helpful for resolving pronouns. | 0 | The semantic caseframe expectations are used in two ways. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | Either save money at any cost - or give priority to education. |
They showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | The only way to handle such phenomena within the framework described here is simply to expand out the reduplicated forms beforehand, and incorporate the expanded forms into the lexical transducer. |
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs. | 0 | Confidence Interval: Since BLEU scores are not computed on the sentence level, traditional methods to compute statistical significance and confidence intervals do not apply. |
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech. | 0 | Evaluation metrics used are Attachment Score (AS), i.e. the proportion of tokens that are attached to the correct head, and Exact Match (EM), i.e. the proportion of sentences for which the dependency graph exactly matches the gold standard. |
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold. | 0 | The discovered paraphrases can be a big help to reduce human labor and create a more comprehensive pattern set. |
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | Time starts when Moses is launched and therefore includes model loading time. |
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models. | 0 | 4 To be sure, it is not always true that a hanzi represents a syllable or that it represents a morpheme. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using a caseframe network for anaphora resolution, and used information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | 7 Acknowledgements. |
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform. | 0 | Almost all annotators expressed their preference to move to a ranking-based evaluation in the future. |
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | These models generally outperform our memory consumption but are much slower, even when cached. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | In that work, mutual information was used to decide whether to group adjacent hanzi into two-hanzi words. |
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. | 0 | Little attention, however, has been paid to the structural descriptions that these formalisms can assign to strings, i.e. their strong generative capacity. |
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming. | 0 | Search hypotheses are processed separately according to their coverage set C. The best scored hypothesis for each coverage set is computed: QBeam(C) = max_{e, e′, S, j} Qe′(e, S, C, j). The hypothesis (e′, e, S, C, j) is pruned if Qe′(e, S, C, j) < t0 · QBeam(C), where t0 is a threshold to control the number of surviving hypotheses. |
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform. | 0 | The out-of-domain test set differs from the Europarl data in various ways. |
They found replacing it with a ranked evaluation to be more suitable. | 0 | In-domain Systran scores on this metric are lower than all statistical systems, even the ones that have much worse human scores. |
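Each row above pairs a source_text sentence with a target_text sentence and a binary label (every row shown here happens to have label 0). Below is a minimal sketch of how a dataset with this schema could be loaded and filtered with the Hugging Face `datasets` library; the repository path `user/citation-pairs` is a hypothetical placeholder, not this dataset's actual identifier.

```python
# Minimal sketch: load and inspect a dataset with the schema above
# (source_text: string, label: int64, target_text: string) using the
# Hugging Face `datasets` library. The repository path is a
# hypothetical placeholder.
from datasets import load_dataset

ds = load_dataset("user/citation-pairs", split="train")  # hypothetical path

# Inspect one row: a source sentence, its 0/1 label, and a target sentence.
row = ds[0]
print(row["source_text"])
print(row["label"])
print(row["target_text"])

# Example: keep only rows labeled 1 (e.g., positive pairs).
positives = ds.filter(lambda r: r["label"] == 1)
print(f"{len(positives)} positive pairs out of {len(ds)} rows")
```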