source_text (string, lengths 27–368) | label (int64, 0 or 1) | target_text (string, lengths 1–5.38k) |
---|---|---|
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity. | 0 | Furthermore, the systematic way in which particles are prefixed to one another and onto an open-class category gives rise to a distinct sort of morphological ambiguity: space-delimited tokens may be ambiguous between several different segmentation possibilities. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | For example, hanzi containing the INSECT radical 虫 tend to denote insects and other crawling animals; examples include 蛙 wa1 'frog,' 蜂 feng1 'wasp,' and 蛇 she2 'snake.' |
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective tree banks is often neglected because of the relative scarcity of problematic constructions. | 0 | We then evaluate the approach in two steps. |
The corpus was annotated with different linguistic information. | 0 | For one thing, it is not clear who is to receive settlements or what should happen in case not enough teachers accept the offer of early retirement. |
Here we present two algorithms. | 0 | The problem can be represented as a graph with 2N vertices corresponding to the members of X1 and X2. |
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. | 0 | 0 55.3 34. |
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP). | 0 | Mai. |
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data. | 0 | On the other hand, when all systems produce muddled output, but one is better, and one is worse, but not completely wrong, a judge is inclined to hand out judgements of 4, 3, and 2. |
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language. | 0 | Given this similarity function, we define a nearest neighbor graph, where the edge weight for the n most similar vertices is set to the value of the similarity function and to 0 for all other vertices. |
BABAR's performance in both the terrorism and natural disaster domains showed successful results, with contextual-role knowledge proving especially useful for pronouns. | 0 | This process produces a large set of caseframes coupled with a list of the noun phrases that they extracted. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | A different notion of information structure is used in work such as that of (?), who tried to characterize felicitous constituent ordering (theme choice, in particular) that leads to texts presenting information in a natural, “flowing” way rather than with abrupt shifts of attention. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | For sorted lookup, we compare interpolation search, standard C++ binary search, and standard C++ set based on red-black trees. |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | Most of these groups follow a phrase-based statistical approach to machine translation. |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | For the automatic evaluation, we used BLEU, since it is the most established metric in the field. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | Lossy compressed models RandLM (Talbot and Osborne, 2007) and Sheffield (Guthrie and Hepple, 2010) offer better memory consumption at the expense of CPU and accuracy. |
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints. | 0 | 36. |
There are clustering approaches that assign a single POS tag to each word type. | 0 | Once HMM parameters (θ, φ) are drawn, a token-level tag and word sequence, (t, w), is generated in the standard HMM fashion: a tag sequence t is generated from φ. |
All the texts were annotated by two people. | 0 | Besides information structure, the second main goal is to enhance current models of rhetorical structure. |
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure. | 0 | (Equation 7, over the instance weights φ_{s,t}.) This is a somewhat less direct objective than the one used by Matsoukas et al., who make an iterative approximation to expected TER. |
We chose one of four labels for each example: location, person, organization, or noise, where the noise category was used for items that were outside the three categories. | 0 | The key point is that the second constraint can be remarkably powerful in reducing the complexity of the learning problem. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | In (b) is a plausible segmentation for this sentence; in (c) is an implausible segmentation. |
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs. | 0 | The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix- List. |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | Further, Maamouri and Bies (2004) argued that the English guidelines generalize well to other languages. |
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm. | 0 | We use two different similarity functions to define the edge weights among the foreign vertices and between vertices from different languages. |
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM. | 0 | To optimize left-to-right queries, we extend state to store backoff information: where m is the minimal context from Section 4.1 and b is the backoff penalty. |
There are clustering approaches that assign a single POS tag to each word type. | 0 | Ablation Analysis We evaluate the impact of incorporating various linguistic features into our model in Table 3. |
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm. | 0 | For unaligned words, we set the tag to the most frequent tag in the corresponding treebank. |
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | 6. 3) all Gold POS 70 0.791 0.825 358 0.773 0.818 358 0.802 0.836 452 80. |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | Gabbard and Kulick (2008) show that there is significant attachment ambiguity associated with iDafa, which occurs in 84.3% of the trees in our development set. |
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. | 0 | 3 54.4 33. |
This paper presents a maximum entropy-based named entity recognizer (NER). | 0 | Each feature group can be made up of many binary features. |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | The weak hypothesis chosen was then restricted to be a predictor in favor of this label. |
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm. | 0 | For the inverted alignment probability p(b_i | b_{i−1}, I, J), we drop the dependence on the target sentence length I. |
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | Besides the lack of a clear definition of what constitutes a correct segmentation for a given Chinese sentence, there is the more general issue that the test corpora used in these evaluations differ from system to system, so meaningful comparison between systems is rendered even more difficult. |
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems, and achieved segmentation and parsing results on a par with state-of-the-art standalone applications. | 0 | On the surface, our model may seem to be a special case of Cohen and Smith in which α = 0. |
There are clustering approaches that assign a single POS tag to each word type. | 1 | On one end of the spectrum are clustering approaches that assign a single POS tag to each word type (Schutze, 1995; Lamar et al., 2010). |
This assumption, however, is not inherent to type-based tagging models. | 0 | 5 68.1 34. |
We chose one of four labels for each example: location, person, organization, or noise, where the noise category was used for items that were outside the three categories. | 0 | Each x_i ∈ 2^X is the set of features constituting the ith example. |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | But Arabic contains a variety of linguistic phenomena unseen in English. |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | (Riloff and Jones 99) was brought to our attention as we were preparing the final version of this paper. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | Other good classes include JADE and GOLD; other bad classes are DEATH and RAT. |
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). | 0 | Note, however, that there might be situations in which Zco in fact increases. |
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique, their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far. | 0 | The addition of vertical markovization enables non-pruned models to outperform all previously reported results. Cohen and Smith (2007) make use of a parameter (α) which is tuned separately for each of the tasks. |
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages. | 0 | Our full model (“With LP”) outperforms the unsupervised baselines and the “No LP” setting for all languages. |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | With each iteration more examples are assigned labels by both classifiers, while a high level of agreement (> 94%) is maintained between them. |
Most IE researchers have been creating paraphrase knowledge by hand and for specific tasks. | 0 | For example, the two NEs “Eastern Group Plc” and “Hanson Plc” have the following contexts. |
The approach assumes that the word reordering is restricted to a few positions in the source sentence. | 0 | The final score is obtained from: max_{e,e'} max_{j ∈ {J−L, ..., J}} p($ | e, e') · Q_{e'}(e, I, {1, ..., J}, j), where p($ | e, e') denotes the trigram language model, which predicts the sentence boundary $ at the end of the target sentence. |
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. | 0 | 3.1 Lexicon Component. |
The approach has been successfully tested on the 8 000-word Verbmobil task. | 0 | In the last example, the less restrictive IbmS word reordering leads to a better translation, although the QmS translation is still acceptable. |
We chose one of four labels for each example: location, person, organization, or noise, where the noise category was used for items that were outside the three categories. | 0 | The algorithm builds two classifiers in parallel from labeled and unlabeled data. |
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | 68 96. |
In this paper, Das and Petrov approached the problem of inducing unsupervised part-of-speech taggers for languages that had no labeled training data but had translated text in a resource-rich language. | 0 | Because the information flow in our graph is asymmetric (from English to the foreign language), we use different types of vertices for each language. |
In this paper, the authors are of the opinion that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | This alters generation of T as follows: P(T | ψ) = ∏_{i=1}^{n} P(T_i | ψ). Note that this distribution captures the frequency of a tag across word types, as opposed to tokens. |
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources. | 0 | In MUC6, the best result is achieved by SRA (Krupka, 1995). |
They proposed an unsupervised method to discover paraphrases from a large untagged corpus. | 0 | The first stage identifies a keyword in each phrase and joins phrases with the same keyword into sets. |
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data. | 0 | In addition to the Europarl test set, we also collected 29 editorials from the Project Syndicate website2, which are published in all the four languages of the shared task. |
Finally, several coreference systems have successfully incorporated anaphoricity determination modules. | 0 | During development, we sensed that the Recency and SynRole KSs did not deserve to be on equal footing with the other KSs because their knowledge was so general. |
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers. | 0 | The first two rows of the table are baselines. |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | 971,746 sentences of New York Times text were parsed using the parser of (Collins 96). Word sequences that met the following criteria were then extracted as named entity examples: whose head is a singular noun (tagged NN). |
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | This implies, therefore, that a major factor in the performance of a Chinese segmenter is the quality of the base dictionary, and this is probably a more important factor-from the point of view of performance alone-than the particular computational methods used. |
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | Interpolation search is therefore a form of binary search with better estimates informed by the uniform key distribution. |
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages. | 0 | Pumping t2 will change only one branch and leave the other branch unaffected. |
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models. | 0 | selected; and that recall is defined to be the number of correct hits divided by the number of items that should have been selected. |
In this paper, Das and Petrov approached the problem of inducing unsupervised part-of-speech taggers for languages that had no labeled training data but had translated text in a resource-rich language. | 0 | However, supervised methods rely on labeled training data, which is time-consuming and expensive to generate. |
In this paper, the authors are of the opinion that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | Recent work has made significant progress on unsupervised POS tagging (Mérialdo, 1994; Smith and Eisner, 2005; Haghighi and Klein, 2006; Johnson, 2007; Goldwater and Griffiths, 2007; Gao and Johnson, 2008; Ravi and Knight, 2009). |
The corpus was annotated with different linguistic information. | 0 | Figure 1: Translation of PCC sample commentary (STTS). |
This paper presents a maximum entropy-based named entity recognizer (NER). | 0 | Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1. |
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task. | 0 | 2) An improved language model, which takes into account syntactic structure, e.g. to ensure that a proper English verbgroup is generated. |
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system. | 0 | It is not easy to make a clear definition of “paraphrase”. |
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | pronunciations of individual words; they also need to compute intonational phrase boundaries in long utterances and assign relative prominence to words in those utterances. |
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | We note that it is not always clear in Wang, Li, and Chang's examples which segmented words. |
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance. | 0 | This is appropriate in cases where it is sanctioned by Bayes’ law, such as multiplying LM and TM probabilities, but for adaptation a more suitable framework is often a mixture model in which each event may be generated from some domain. |
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers. | 0 | The probabilistic version of this procedure is straightforward: We once again assume independence among our various member parsers. |
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance. | 0 | The first setting uses the European Medicines Agency (EMEA) corpus (Tiedemann, 2009) as IN, and the Europarl (EP) corpus (www.statmt.org/europarl) as OUT, for English/French translation in both directions. |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | Instead, we extend the variation n-gram method of Dickinson (2005) to compare annotation error rates in the WSJ and ATB. |
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | where the husband's family name is optionally prepended to the woman's full name; thus xu3lin2-yan2hai3 would represent the name that Ms. Lin Yanhai would take if she married someone named Xu. |
Finally, several coreference systems have successfully incorporated anaphoricity determination modules. | 0 | One knowledge source, called WordSemCFSem, is analogous to CFLex: it checks whether the anaphor and candidate antecedent are substitutable for one another, but based on their semantic classes instead of the words themselves. |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | Along with IRSTLM and TPT, our binary format is memory mapped, meaning the file and in-memory representation are the same. |
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | However, their inverted variant implements a reverse trie using less CPU and the same amount of memory. |
This assumption, however, is not inherent to type-based tagging models. | 0 | 4 69.0 51. |
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be. | 0 | The final block in table 2 shows models trained on feature subsets and on the SVM feature described in 3.4. |
Two general approaches are presented and two combination techniques are described for each approach. | 0 | In Equations 1 through 3 we develop the model for constructing our parse using naïve Bayes classification. |
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | (1992). |
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems. | 0 | In the terrorism domain, 1600 texts were used for training and the 40 test documents. X ∩ Y = ∅. All sets of hypotheses (and their corresponding belief values) in the current model are crossed with the sets of hypotheses (and belief values) provided by the new evidence. |
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | A non-optimal analysis is shown with dotted lines in the bottom frame. |
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold. | 0 | Here, an NE instance pair is any pair of NEs separated by at most 4 syntactic chunks; for example, “IBM plans to acquire Lotus”. |
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account. | 0 | The sentence length probability p(J | I) is omitted without any loss in performance. |
The corpus was annotated with different linguistic information. | 0 | And indeed, converging on annotation guidelines is even more difficult than it is with co-reference. |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | The orthographic normalization strategy we use is simple. In addition to removing all diacritics, we strip instances of taTweel, collapse variants of alif to bare alif, and map Arabic punctuation characters to their Latin equivalents. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | So, 学生们 xue2sheng1+men0 (student+PL) 'students' occurs and we estimate its cost at 11.43; similarly we estimate the cost of 将们 jiang4+men0 (general+PL) 'generals' (as in 小将们 xiao3jiang4+men0 'little generals'), at 15.02. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | It can also be seen clearly in this plot that two of the Taiwan speakers cluster very closely together, and the third Taiwan speaker is also close in the most significant dimension (the x axis). |
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm. | 0 | Altun et al. (2005) proposed a technique that uses graph based similarity between labeled and unlabeled parts of structured data in a discriminative framework for semi-supervised learning. |
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | Unfortunately, there is no standard corpus of Chinese texts, tagged with either single or multiple human judgments, with which one can compare performance of various methods. |
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). | 0 | In the named-entity problem each example is a (spelling, context) pair. |
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP). | 0 | {1, ..., J} denotes a coverage set including all positions from the starting position 1 to position J, and j ∈ {J−L, ..., J}. |
The resulting model is compact, efficiently learnable and linguistically expressive. | 0 | 4 69.0 51. |
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | We can check what the consequences of less manual annotation of results would have been: with half the number of manual judgements, we can distinguish about 40% of the systems, 10% less. |
This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | Hash tables are a common sparse mapping technique used by SRILM’s default and BerkeleyLM’s hashed variant. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | In this paper we present a stochastic finite-state model for segmenting Chinese text into words, both words found in a (static) lexicon as well as words derived via the above-mentioned productive processes. |
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. | 0 | A novel element of our model is the ability to capture type-level tag frequencies. |
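
The table above follows a fixed three-column schema. Below is a minimal sketch of how a split like this might be loaded and inspected, assuming the rows are exported to a hypothetical `train.csv` with the column names from the header; the file name, and the reading of label 1 as a matching source/target pair, are assumptions inferred from the sample rows, not documented here.

```python
# Minimal sketch, not an official loader. Assumes the table above is exported
# to "train.csv" (hypothetical file name) with the three columns shown in the
# header: source_text (str), label (int, 0 or 1), target_text (str).
import pandas as pd

df = pd.read_csv("train.csv")

# Sanity-check the schema described in the header row.
assert list(df.columns) == ["source_text", "label", "target_text"]
assert set(df["label"].unique()) <= {0, 1}

# Assumption: label 1 marks a target sentence drawn from the same paper as the
# source summary, label 0 a mismatched pair (inferred from the sample rows).
matches = df[df["label"] == 1]
print(f"{len(df)} pairs total, {len(matches)} labeled 1")
print(matches[["source_text", "target_text"]].head().to_string())
```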