source_text (string, 27–368 chars) | label (int64: 0 or 1) | target_text (string, 1–5.38k chars) |
---|---|---|
Here both parametric and non-parametric models are explored. | 0 | The results in Table 2 were achieved on the development set. |
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | State is implemented in their scrolling variant, which is a trie annotated with forward and backward pointers. |
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously. | 0 | The effect of UNIQ is very small on both data sets. |
The PROBING data structure uses linear probing hash tables and is designed for speed. | 0 | Given a key k, it estimates the position. If the estimate is exact (A[pivot] = k), then the algorithm terminates successfully. |
All the texts were annotated by two people. | 0 | The paper is organized as follows: Section 2 explains the different layers of annotation that have been produced or are being produced. |
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures. | 0 | In the labeled version of these metrics (L) both heads and arc labels must be correct, while the unlabeled version (U) only considers heads. |
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs. | 0 | Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence-based classifier. |
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure. | 0 | Other work includes transferring latent topic distributions from source to target language for LM adaptation (Tam et al., 2007) and adapting features at the sentence level to different categories of sentence (Finch and Sumita, 2008). |
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse. | 0 | Ex: Mr. Cristiani is the president ... |
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously. | 0 | In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | In many cases these failures in recall would be fixed by having better estimates of the actual prob abilities of single-hanzi words, since our estimates are often inflated. |
Here we present two algorithms. | 1 | We present two algorithms. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | Similarly, hanzi sharing the GHOST radical _m tend to denote spirits and demons, such as _m gui3 'ghost' itself, II: mo2 'demon,' and yan3 'nightmare.' |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | It also does not prune, so comparing to our pruned model would be unfair. |
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories. | 0 | It's not clear how to apply these methods in the unsupervised case, as they required cross-validation techniques: for this reason we use the simpler smoothing method shown here. The input to the unsupervised algorithm is an initial, "seed" set of rules. |
Combining multiple highly-accurate independent parsers yields promising results. | 0 | The natural language processing community is in the strong position of having many available approaches to solving some of its most fundamental problems. |
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models. | 0 | raphy: A ren2 'person' is a fairly uncontroversial case of a monographemic word, and zhong1guo2 (middle country) 'China' a fairly uncontroversial case of a digraphemic word. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | Recall that precision is defined to be the number of correct hits divided by the total number of items. |
Here both parametric and non-parametric models are explored. | 0 | Another way to interpret this is that less than 5% of the correct constituents are missing from the hypotheses generated by the union of the three parsers. |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | In the cotraining case, (Blum and Mitchell 98) argue that the task should be to induce functions Ii and f2 such that So Ii and 12 must (1) correctly classify the labeled examples, and (2) must agree with each other on the unlabeled examples. |
Two general approaches are presented and two combination techniques are described for each approach. | 0 | The difference in precision between similarity and Bayes switching techniques is significant, but the difference in recall is not. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | Wu and Fung introduce an evaluation method they call nk-blind. |
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | The PROBING data structure is a rather straightforward application of these hash tables to store Ngram language models. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | Then each arc of D maps either from an element of H to an element of P, or from ε (i.e., the empty string) to an element of P. More specifically, each word is represented in the dictionary as a sequence of arcs, starting from the initial state of D and labeled with an element of H×P, which is terminated with a weighted arc labeled with an element of ε×P. The weight represents the estimated cost (negative log probability) of the word. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | Table 2 shows these similarity measures. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | When finished, the whole material is written into an XML-structured annotation file. |
Their results show that their high performance NER use less training data than other systems. | 0 | As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework. |
This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | With a good hash function, collisions of the full 64bit hash are exceedingly rare: one in 266 billion queries for our baseline model will falsely find a key not present. |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | (3) shows learning curves for CoBoost. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | The major problem for all segmentation systems remains the coverage afforded by the dictionary and the lexical rules used to augment the dictionary to deal with unseen words. |
Here both parametric and non-parametric models are explored. | 0 | Call the crossing constituents A and B. |
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs. | 0 | Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique. |
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages. | 0 | The feature-based model replaces the emission distribution with a log-linear model, such that: on the word identity x, features checking whether x contains digits or hyphens, whether the first letter of x is upper case, and suffix features up to length 3. |
A beam search concept is applied as in speech recognition. | 0 | This work has been supported as part of the Verbmobil project (contract number 01 IV 601 A) by the German Federal Ministry of Education, Science, Research and Technology and as part of the Eutrans project (ESPRIT project number 30268) by the European Community. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | !!:\ :yu2 e:_nc [::!!:zen3 l!f :moO t:_adv il!:shuot ,:_vb i i i 1 ⢠10.03 13... |
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse. | 0 | Their work used subject-verb, verb-object, and adjective-noun relations to compare the contexts surrounding an anaphor and candidate. |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | While the Bootstrap method is slightly more sensitive, it is very much in line with the sign test on text blocks. |
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data. | 0 | However, it is also mostly political content (even if not focused on the internal workings of the European Union) and opinion. |
This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | We used this data to build an unpruned ARPA file with IRSTLM’s improved-kneser-ney option and the default three pieces. |
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | For speed, we plan to implement the direct-mapped cache from BerkeleyLM. |
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account. | 0 | The traveling salesman problem is an optimization problem which is defined as follows: given are a set of cities S = {s1, ..., sn} and for each pair of cities si, sj the cost dij > 0 for traveling from city si to city sj. We are looking for the shortest tour visiting all cities exactly once while starting and ending in city s1. |
This paper presents a maximum entropy-based named entity recognizer (NER). | 0 | Lexicon Feature: The string of the token is used as a feature. |
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | Several papers report the use of part-of-speech information to rank segmentations (Lin, Chiang, and Su 1993; Peng and Chang 1993; Chang and Chen 1993); typically, the probability of a segmentation is multiplied by the probability of the tagging(s) for that segmentation to yield an estimate of the total probability for the analysis. |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | (2) was extended to have an additional, innermost loop over the (3) possible labels. |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | (7) is at 0 when: 1) ∀i: sign(g1(xi)) = sign(g2(xi)); 2) |gj(xi)| → ∞; and 3) sign(g1(xi)) = yi for i = 1, ..., m. In fact, Zco provides a bound on the sum of the classification error of the labeled examples and the number of disagreements between the two classifiers on the unlabeled examples. |
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language. | 0 | As discussed in more detail in §3, we use two types of vertices in our graph: on the foreign language side vertices correspond to trigram types, while the vertices on the English side are individual word types. |
Vijay-Shankar et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. | 0 | In the next section, we show how an ATM can accept the strings generated by a grammar in a LCFRS formalism in logspace, and hence show that each family can be recognized in polynomial time. |
Here we present two algorithms. | 0 | The learning task is to find two classifiers f1 : 2^X1 → {−1, +1} and f2 : 2^X2 → {−1, +1} such that f1(x1,i) = f2(x2,i) = yi for examples i = 1, ..., m, and f1(x1,i) = f2(x2,i) as often as possible on examples i = m + 1, ..., n. To achieve this goal we extend the auxiliary function that bounds the training error (see Equ. |
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | We are grateful to ChaoHuang Chang for providing us with this set. |
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | Lossy compressed models RandLM (Talbot and Osborne, 2007) and Sheffield (Guthrie and Hepple, 2010) offer better memory consumption at the expense of CPU and accuracy. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using a caseframe network for anaphora resolution, and used information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | (Kehler, 1997) also used a DempsterShafer model to merge evidence from different sources for template-level coreference. |
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints. | 0 | The type-level posterior term can be computed according to P(Ti | W, T−i, β). Note that each round of sampling Ti variables takes time proportional to the size of the corpus, as with the standard token-level HMM. |
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP). | 0 | The approach recursively evaluates a quantity Q(C; j), where C is the set of already visited cities and sj is the last visited city. |
They have made use of local and global features to deal with the instances of same token in a document. | 0 | needs to be in initCaps to be considered for this feature. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | Language models are widely applied in natural language processing, and applications such as machine translation make very frequent queries. |
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs. | 0 | The evaluation framework for the shared task is similar to the one used in last year’s shared task. |
This assumption, however, is not inherent to type-based tagging models. | 0 | Since the parameter and token components will remain fixed throughout experiments, we briefly describe each. |
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. | 0 | to represent the ith word type emitted by the HMM. The joint posterior factors as P(Ti, t(i) | T−i, W, t(−i), w, α, β) = P(Ti | W, T−i, β) · P(t(i) | Ti, t(−i), w, α), where T−i denotes all type-level tag assignments except Ti and t(−i) denotes all token-level tags except t(i). All terms are Dirichlet distributions whose parameters can be analytically computed from counts (Johnson, 2007). |
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models. | 0 | Recall. |
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results. | 0 | A Hebrew surface token may have several readings, each of which corresponds to a sequence of segments and their corresponding PoS tags. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | Figure 7: Classical metric multidimensional scaling of distance matrix, showing the two most significant dimensions (legend: antigreedy, greedy, current method, dict. only; points for Taiwan and Mainland; Dimension 1, 62%). |
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution. | 0 | If either case is true, then CFLex reports that the anaphor and candidate might be coreferent. |
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories. | 0 | We can now add a new weak hypothesis ht based on a feature in X1 with a confidence value αt; ht and αt are chosen to minimize the function. We now define, for 1 ≤ i ≤ n, the following virtual distribution; as before, Zt is a normalization constant. |
Finally, several coreference systems have successfully incorporated anaphoricity determination modules. | 0 | So we decided to use semantic class information only to rule out candidates. |
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. | 0 | Across all languages, high performance can be attained by selecting a single tag per word type. |
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | Confidence Interval: To estimate confidence intervals for the average mean scores for the systems, we use standard significance testing. |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | BerkeleyLM revision 152 (Pauls and Klein, 2011) implements tries based on hash tables and sorted arrays in Java with lossy quantization. |
A beam search concept is applied as in speech recognition. | 0 | The quasi-monotone search performs best in terms of both error rates mWER and SSER. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | 37 84. |
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier. | 0 | The problem of "noise" items that do not fall into any of the three categories also needs to be addressed. |
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be. | 0 | We used it to score all phrase pairs in the OUT table, in order to provide a feature for the instance-weighting model. |
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | Performance improvements transfer to the Moses (Koehn et al., 2007), cdec (Dyer et al., 2010), and Joshua (Li et al., 2009) translation systems where our code has been integrated. |
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | Most similar is scrolling queries, wherein left-to-right queries that add one word at a time are optimized. |
This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | Both implementations employ a state object, opaque to the application, that carries information from one query to the next; we discuss both further in Section 4.2. |
This assumption, however, is not inherent to type-based tagging models. | 0 | We also report word type level accuracy, the fraction of word types assigned their majority tag (where the mapping between model state and tag is determined by the greedy one-to-one mapping discussed above). For each language, we aggregate results in the following way: first, for each hyperparameter setting, evaluate three variants: The first model (1TW) only... Typically, the performance stabilizes after only 10 iterations. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | All notions of word, with the exception of the orthographic word, are as relevant in Chinese as they are in English, and just as is the case in other languages, a word in Chinese may correspond to one or more symbols in the orthography. (For a related approach to the problem of word-segmentation in Japanese, see Nagata (1994), inter alia.) |
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP). | 0 | Input: Ja , wunderbar . Konnen wir machen . MonS: Yes, wonderful. |
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back. | 0 | Participants and other volunteers contributed about 180 hours of labor in the manual evaluation. |
Explanations for this phenomenon are relative informativeness of lexicalization, insensitivity to morphology and the effect of variable word order and these factors lead to syntactic disambiguation. | 0 | This feature includes named entities, which the ATB marks with a flat NP node dominating an arbitrary number of NNP pre-terminal daughters (Figure 2). |
Finally, several coreference systems have successfully incorporated anaphoricity determination modules. | 0 | Although these existential NPs do not need a prior referent, they may occur multiple times in a document. |
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound. | 0 | We define the lattice L to be the concatenation of the lattices Li corresponding to the input words wi (s.t. |
This assumption, however, is not inherent to type-based tagging models. | 0 | A novel element of our model is the ability to capture type-level tag frequencies. |
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources. | 0 | By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data. |
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance. | 0 | Le médicament de référence de Silapo est EPREX/ERYPO, qui contient de l'époétine alfa. |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | In a model we built with default settings, 1.2% of n + 1-grams were missing their ngram suffix. |
They proposed an unsupervised method to discover paraphrases from a large untagged corpus. | 0 | They cluster NE instance pairs based on the words in the contexts using a bag-of-words method. |
Here both parametric and non-parametric models are explored. | 0 | The substructures that are unanimously hypothesized by the parsers should be preserved after combination, and the combination technique should not foolishly create substructures for which there is no supporting evidence. |
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform. | 0 | While we had up to 11 submissions for a translation direction, we did decide against presenting all 11 system outputs to the human judge. |
This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | IRSTLM (Federico et al., 2008) is an open-source toolkit for building and querying language models. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | Table 4: Gross statistics for several different treebanks. Trees: ATB 23449, CTB6 28278, Negra 20602, WSJ 43948. Word types: 40972, 45245, 51272, 46348. Tokens: 738654, 782541, 355096, 1046829. Tags: 32, 34, 499, 45. Phrasal categories: 22, 26, 325, 27. Test OOV: 16.8%, 22.2%, 30.5%, 13.2%. |
This paper offers a broad insight into of Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | For parsing, the most challenging form of ambiguity occurs at the discourse level. |
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems. | 0 | We received submissions from 14 groups from 11 institutions, as listed in Figure 2. |
Vijay-Shankar et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. | 0 | The adjunction operation with respect to tree sets (multicomponent adjunction) is defined as follows. |
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously. | 0 | Recently, statistical NERs have achieved results that are comparable to hand-coded systems. |
The approach assumes that the word reordering is restricted to a few positions in the source sentence. | 0 | Our approach uses word-to-word dependencies between source and target words. |
The AdaBoost algorithm was developed for supervised learning. | 0 | (3) shows learning curves for CoBoost. |
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models. | 0 | In this particular case, all English vertices are labeled as nouns by the supervised tagger. |
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems. | 0 | Our representation of contextual roles is based on information extraction patterns that are converted into simple caseframes. |
The texts were annotated with the RSTtool. | 0 | 3.2 Stochastic rhetorical analysis. |
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system. | 0 | For example, in the phrase "Company-A last week purchased rival Marshalls from Company-B", the purchased company is Marshalls, not Company-B. |
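The rows above are pipe-separated (source_text, label, target_text) triples. A minimal sketch of a parser for this layout, assuming the single-line row format shown in the table (the helper name `parse_row` is ours, not part of any official loader):

```python
# Minimal sketch: parse pipe-separated rows like those in the table above
# into (source_text, label, target_text) triples. Assumes one row per line
# with " | " as the cell delimiter; cells containing " | " would need a
# stricter format (e.g. CSV or JSONL) instead.
def parse_row(line: str):
    # Drop the trailing pipe, then split on the padded delimiter so bare
    # "|" characters inside a cell are less likely to clash.
    parts = [p.strip() for p in line.rstrip().rstrip("|").split(" | ")]
    if len(parts) != 3:
        raise ValueError(f"expected 3 cells, got {len(parts)}: {line!r}")
    source_text, label, target_text = parts
    return source_text, int(label), target_text

rows = [
    "Here we present two algorithms. | 1 | We present two algorithms. |",
    "The PROBING data structure uses linear probing hash tables and is designed for speed. | 0 | Given a key k, it estimates the position. |",
]
parsed = [parse_row(r) for r in rows]
```

Note the `label` cell is cast to `int` to match the column's declared `int64` type.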