| source_text (string, length 27–368) | label (int64, 0–1) | target_text (string, length 1–5.38k) |
---|---|---|
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | This is to allow for fair comparison between the statistical method and GR, which is also purely dictionary-based. |
There are clustering approaches that assign a single POS tag to each word type. | 0 | The model starts by generating a tag assignment for each word type in a vocabulary, assuming one tag per word. |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | Judges varied in the average score they handed out. |
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity. | 0 | 0 70.9 42. |
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | Table 5: Performance on morphological analysis (comparison of our system with Wang, Li, and Chang, with transliterations and translations of the examples). |
Two general approaches are presented and two combination techniques are described for each approach. | 0 | We model each parse as the decisions made to create it, and model those decisions as independent events. |
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier. | 0 | As in boosting, the algorithm works in rounds. |
In this paper, the authors are of the opinion that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | Hyperparameter settings are sorted according to the median one-to-one metric over runs. |
Their results show that their high-performance NER uses less training data than other systems. | 0 | Location list is processed into a list of unigrams and bigrams (e.g., New York). |
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse. | 0 | Reflexive pronouns with only 1 NP in scope.. |
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm. | 0 | First, we use a novel graph-based framework for projecting syntactic information across language boundaries. |
BABAR's performance in both domains of terrorism and natural disaster, and the contextual-role knowledge in pronouns have shown successful results. | 0 | 2.1.1 Lexical Seeding It is generally not safe to assume that multiple occurrences of a noun phrase refer to the same entity. |
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not. | 0 | This leads to a linear combination of domain-specific probabilities, with weights in [0, 1], normalized to sum to 1. |
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity. | 0 | Here we push the single-framework conjecture across the board and present a single model that performs morphological segmentation and syntactic disambiguation in a fully generative framework. |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | This is demonstrated by average scores over all systems, in terms of BLEU, fluency and adequacy, as displayed in Figure 5. |
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far. | 0 | To control for the effect of the HSPELL-based pruning, we also experimented with a morphological analyzer that does not perform this pruning. |
They have made use of local and global features to deal with the instances of same token in a document. | 0 | For example, in MUC6, there are four zones (TXT, HL, DATELINE, DD). |
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold. | 0 | We would like to thank Prof. Ralph Grish- man, Mr. Takaaki Hasegawa and Mr. Yusuke Shinyama for useful comments, discussion and evaluation. |
Combining multiple highly-accurate independent parsers yields promising results. | 0 | This work was funded by NSF grant IRI-9502312. |
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models. | 0 | Table 1 The cost as a novel given name (second position) for hanzi from various radical classes. |
These clusters are computed using an SVD variant without relying on transitional structure. | 0 | The table shows that the lexicon tag frequency predicated by our full model are the closest to the gold standard. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | Finally, we add “DT” to the tags for definite nouns and adjectives (Kulick et al., 2006). |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | This paper describes the several performance techniques used and presents benchmarks against alternative implementations. |
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure. | 0 | Sentence pairs are the natural instances for SMT, but sentences often contain a mix of domain-specific and general language. |
These clusters are computed using an SVD variant without relying on transitional structure. | 0 | (2010), we adopt a simpler naïve Bayes strategy, where all features are emitted independently. |
The corpus was annotated with different linguistic information. | 0 | The rhetorical structure annotations of PCC have all been converted to URML. |
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). | 0 | The final strong hypothesis, denoted 1(x), is then the sign of a weighted sum of the weak hypotheses, 1(x) = sign (Vii atht(x)), where the weights at are determined during the run of the algorithm, as we describe below. |
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | Hieu Hoang named the code “KenLM” and assisted with Moses along with Barry Haddow. |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | 5.2 Discussion. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | The SynRole KS computes the relative frequency with which the candidates’ syntactic role (subject, direct object, PP object) appeared in resolutions in the training set. |
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data. | 0 | Typically, judges initially spent about 3 minutes per sentence, but then accelerate with experience. |
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure. | 0 | A similar maximum-likelihood approach was used by Foster and Kuhn (2007), but for language models only. |
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity. | 0 | Specifically, the (+FEATS) setting utilizes the tag prior as well as features (e.g., suffixes and orthographic features), discussed in Section 3, for the P(W |T, ψ) component. |
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language. | 0 | Because we don’t have a separate development set, we used the training set to select among them and found 0.2 to work slightly better than 0.1 and 0.3. |
The AdaBoost algorithm was developed for supervised learning. | 0 | Taking only the highest frequency rules is much "safer", as they tend to be very accurate. |
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers. | 0 | Similar advances have been made in machine translation (Frederking and Nirenburg, 1994), speech recognition (Fiscus, 1997) and named entity recognition (Borthwick et al., 1998). |
The instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline. | 0 | This is a somewhat less direct objective than that used by Matsoukas et al., who make an iterative approximation to expected TER. |
Replacing this with a ranked evaluation seems to be more suitable. | 0 | This may be the sign of a maturing research environment. |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | This allow the learners to "bootstrap" each other by filling the labels of the instances on which the other side has abstained so far. |
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | For instance, for TTS it is necessary to know that a particular sequence of hanzi is of a particular category because that knowledge could affect the pronunciation; consider, for example, the issues surrounding the pronunciation of ganl I qian2 discussed in Section 1. |
These clusters are computed using an SVD variant without relying on transitional structure. | 0 | 3.1 Lexicon Component. |
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. | 0 | A promising direction for future work is to explicitly model a distribution over tags for each word type. |
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models. | 0 | Fortunately, performance was stable across various values, and we were able to use the same hyperparameters for all languages. |
These clusters are computed using an SVD variant without relying on transitional structure. | 0 | 3 58.3 40. |
Combining multiple highly-accurate independent parsers yields promising results. | 0 | • Similarly, when the naïve Bayes classifier is configured such that the constituents require estimated probabilities strictly larger than 0.5 to be accepted, there is not enough probability mass remaining on crossing brackets for them to be included in the hypothesis. |
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | Figure 3 shows a small fragment of the WFST encoding the dictionary, containing both entries forjust discussed, g:t¥ zhonglhua2 min2guo2 (China Republic) 'Republic of China,' and i¥inl. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | Foreign names are usually transliterated using hanzi whose sequential pronunciation mimics the source language pronunciation of the name. |
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task. | 0 | The approach has been successfully tested on the 8 000-word Verbmobil task. |
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs. | 0 | The main disadvantage of manual evaluation is that it is time-consuming and thus too expensive to do frequently. |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | When keys are longer than 64 bits, we conserve space by replacing the keys with their 64-bit hashes. |
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | This model is easily incorporated into the segmenter by building a WFST restricting the names to the four licit types, with costs on the arcs for any particular name summing to an estimate of the cost of that name. |
A beam search concept is applied as in speech recognition. | 0 | The translation search is carried out with the category markers and the city names are resubstituted into the target sentence as a postprocessing step. |
It is probably the first analysis of Arabic parsing of this kind. | 0 | Each model was able to produce hypotheses for all input sentences. |
This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | While we have minimized forward-looking state in Section 4.1, machine translation systems could also benefit by minimizing backward-looking state. |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | Diacritics can also be used to specify grammatical relations such as case and gender. |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | This paper is based on work supported in part by DARPA through IBM. |
This topic has been getting more attention, driven by the needs of various NLP applications. | 0 | Find keywords for each NE pair The keywords are found for each NE category pair. |
The PROBING data structure uses linear probing hash tables and is designed for speed. | 0 | Syntactic decoders, such as cdec (Dyer et al., 2010), build state from null context then store it in the hypergraph node for later extension. |
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results. | 0 | Tsarfaty (2006) argues that for Semitic languages determining the correct morphological segmentation is dependent on syntactic context and shows that increasing information sharing between the morphological and the syntactic components leads to improved performance on the joint task. |
This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | Juri Ganitkevitch answered questions about Joshua. |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | However, in practice, unknown word models also make the distribution improper. |
Here both parametric and non-parametric models are explored. | 0 | The combining algorithm is presented with the candidate parses and asked to choose which one is best. |
They showed better grammars to improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones. | 0 | We suggest that in unlexicalized PCFGs the syntactic context may be explicitly modeled in the derivation probabilities. |
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data. | 0 | While the Bootstrap method is slightly more sensitive, it is very much in line with the sign test on text blocks. |
They proposed an unsupervised method to discover paraphrases from a large untagged corpus. | 0 | Step 3. |
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories. | 0 | For example, take ..., says Maury Cooper, a vice president at S.&P. |
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | For the automatic scoring method BLEU, we can distinguish three quarters of the systems. |
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model. | 0 | As indicated by bolding, for seven out of eight languages the improvements of the “With LP” setting are statistically significant with respect to the other models, including the “No LP” setting. Overall, it performs 10.4% better than the hitherto state-of-the-art feature-HMM baseline, and 4.6% better than direct projection, when we macro-average the accuracy over all languages. |
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution. | 0 | During development, we sensed that the Recency and SynRole KSs did not deserve to be on equal footing with the other KSs because their knowledge was so general. |
They have made use of local and global features to deal with the instances of same token in a document. | 0 | of Articles No. |
Finally, several coreference systems have successfully incorporated anaphoricity determination modules. | 0 | This suggests that different types of anaphora may warrant different treatment: definite NP resolution may depend more on lexical semantics, while pronoun resolution may depend more on contextual semantics. |
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model. | 0 | So far the graph has been completely unlabeled. |
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM. | 0 | We compare three hash tables: our probing implementation, GCC’s hash set, and Boost’s unordered. |
Here we present two algorithms. | 0 | (Hearst 92) describes a method for extracting hyponyms from a corpus (pairs of words in "isa" relations). |
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not. | 0 | To set β, we used the same criterion as for α, over a dev corpus: The MAP combination was used for TM probabilities only, in part due to a technical difficulty in formulating coherent counts when using standard LM smoothing techniques (Kneser and Ney, 1995). Motivated by information retrieval, a number of approaches choose “relevant” sentence pairs from OUT by matching individual source sentences from IN (Hildebrand et al., 2005; Lü et al., 2007), or individual target hypotheses (Zhao et al., 2004). |
It is annotated with several kinds of data: morphology, syntax, rhetorical structure, connectives, coreference, and information structure. | 0 | Our annotators pointed out that very often they made almost random decisions as to what relation to choose, and where to locate the boundary of a span. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | ments contained 322 anaphoric links. |
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages. | 0 | Like the string languages of MCTAG's, the complexity of the path set increases as the cardinality of the elementary tee sets increases, though both the string languages and path sets will always be semilinear. |
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages. | 0 | Taking the intersection of languages in these resources, and selecting languages with large amounts of parallel data, yields the following set of eight Indo-European languages: Danish, Dutch, German, Greek, Italian, Portuguese, Spanish and Swedish. |
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system. | 0 | If the same pair of NE instances is used with different phrases, these phrases are likely to be paraphrases. |
Most IE researchers have been creating paraphrase knowledge by hand and for specific tasks. | 0 | One obvious application is information extraction. |
Two general approaches are presented and two combination techniques are described for each approach. | 0 | Both parametric and non-parametric models are explored. |
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech. | 0 | All improvements over the baseline are statistically significant beyond the 0.01 level (McNemar’s test). |
This corpus has several advantages: it is annotated at different levels. | 0 | Figure 2: Screenshot of the ANNIS Linguistic Database (www.ling.unipotsdam.de/sfb/). |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | phrase (markContainsVerb). |
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | If the key distribution’s range is also known (i.e. vocabulary identifiers range from 0 to the number of words), then interpolation search can use this information instead of reading A[0] and A[|A |− 1] to estimate pivots; this optimization alone led to a 24% speed improvement. |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | Microsoft’s approach uses dependency trees, others use hierarchical phrase models. |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | While many systems had similar performance, the results offer interesting insights, especially about the relative performance of statistical and rule-based systems. |
This paper discusses the Potsdam Commentary Corpus, a corpus of German newspaper commentaries assembled at Potsdam University. | 1 | A corpus of German newspaper commentaries has been assembled at Potsdam University, and annotated with different linguistic information, to different degrees. |
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). | 0 | Even if an example like this is not labeled, it can be interpreted as a "hint" that Mr and president imply the same category. |
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP). | 0 | Although the ultimate goal of the Verbmobil project is the translation of spoken language, the input used for the translation experiments reported on in this paper is the (more or less) correct orthographic transcription of the spoken sentences. |
Finally, several coreference systems have successfully incorporated anaphoricity determination modules. | 0 | In this situation, BABAR takes the conservative approach and declines to make a resolution. |
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model. | 0 | Syntactic universals are a well studied concept in linguistics (Carnie, 2002; Newmeyer, 2005), and were recently used in similar form by Naseem et al. (2010) for multilingual grammar induction. |
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. | 0 | For all languages we do not make use of a tagging dictionary. |
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data. | 0 | Sentences and systems were randomly selected and randomly shuffled for presentation. |
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM. | 0 | Lossy compressed models RandLM (Talbot and Osborne, 2007) and Sheffield (Guthrie and Hepple, 2010) offer better memory consumption at the expense of CPU and accuracy. |
BABAR's performance in both domains of terrorism and natural disaster, and the contextual-role knowledge in pronouns have shown successful results. | 0 | In this paper, Section 2 begins by explaining how contextual role knowledge is represented and learned. |
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back. | 0 | For more on the participating systems, please refer to the respective system description in the proceedings of the workshop. |
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation. | 0 | Third, we develop a human interpretable grammar that is competitive with a latent variable PCFG. |
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity. | 0 | We hypothesize that modeling morphological information will greatly constrain the set of possible tags, thereby further refining the representation of the tag lexicon. |
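The rows above follow a three-column schema: source_text (string), label (int64, 0 or 1), and target_text (string), with cells separated by pipes. As a minimal sketch of how such pipe-delimited rows can be parsed and filtered by label (the `Row` and `parse_row` names are illustrative helpers, not part of any dataset tooling; the two sample rows are copied from the table above):

```python
# Minimal sketch: parse pipe-delimited rows of the
# (source_text, label, target_text) schema and filter by label.
from typing import NamedTuple


class Row(NamedTuple):
    source_text: str
    label: int
    target_text: str


def parse_row(line: str) -> Row:
    # Rows end with a trailing pipe; drop it, then split on the
    # " | " delimiter and strip surrounding whitespace per cell.
    parts = [p.strip() for p in line.rstrip().rstrip("|").split(" | ")]
    source, label, target = parts
    return Row(source, int(label), target)


raw = [
    'Here we present two algorithms. | 0 | (Hearst 92) describes a method '
    'for extracting hyponyms from a corpus (pairs of words in "isa" '
    'relations). |',
    'This paper discusses the Potsdam Commentary Corpus, a corpus of German '
    'newspaper commentaries assembled at Potsdam University. | 1 | A corpus '
    'of German newspaper commentaries has been assembled at Potsdam '
    'University, and annotated with different linguistic information, to '
    'different degrees. |',
]

rows = [parse_row(r) for r in raw]
positives = [r for r in rows if r.label == 1]
print(len(rows), len(positives))  # prints: 2 1
```

Note that this splitting strategy assumes no cell contains the literal " | " sequence; a few garbled target cells in the table above might violate that, in which case a split limited to two delimiters (`split(" | ", 2)`) would be safer.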