[{"source_text":"They found replacing it with a ranked evaluation to be more suitable.","target_text":"Adequacy (rank) Fluency (rank) BLEU (rank) upc-jmc (1-7) (1-8) (1-6) lcc (1-6) (1-7) (1-4) utd (1-7) (1-6) (2-7) upc-mr (1-8) (1-6) (1-7) nrc (1-7) (2-6) (8) ntt (1-8) (2-8) (1-7) cmu (3-7) (4-8) (2-7) rali (5-8) (3-9) (3-7) systran (9) (8-9) (10) upv (10) (10) (9) Spanish-English (In Domain) Adequacy (rank) Fluency (rank) BLEU (rank) upc-jmc (1-7) (1-6) (1-5) ntt (1-7) (1-8) (1-5) lcc (1-8) (2-8) (1-4) utd (1-8) (2-7) (1-5) nrc (2-8) (1-9) (6) upc-mr (1-8) (1-6) (7) uedin-birch (1-8) (2-10) (8) rali (3-9) (3-9) (2-5) upc-jg (7-9) (6-9) (9) upv (10) (9-10) (10) German-English (In Domain) Adequacy (rank) Fluency (rank) BLEU (rank) uedin-phi (1-2) (1) (1) lcc (2-7) (2-7) (2) nrc (2-7) (2-6) (5-7) utd (3-7) (2-8) (3-4) ntt (2-9) (2-8) (3-4) upc-mr (3-9) (6-9) (8) rali (4-9) (3-9) (5-7) upc-jmc (2-9) (3-9) (5-7) systran (3-9) (3-9) (10) upv (10) (10) (9) Figure 7: Evaluation of translation to English on in-domain test data 112 English-French (In Domain) Adequacy (rank) Fluency (rank) BLEU (rank) nrc (1-5) (1-5) (1-6) upc-mr (1-4) (1-5) (1-6) upc-jmc (1-6) (1-6) (1-5) systran (2-7) (1-6) (7) utd (3-7) (3-7) (3-6) rali (1-7) (2-7) (1-6) ntt (4-7) (4-7) (1-5) English-Spanish (In Domain) Adequacy (rank) Fluency (rank) BLEU (rank) ms (1-5) (1-7) (7-8) upc-mr (1-4) (1-5) (1-4) utd (1-5) (1-6) (1-4) nrc (2-7) (1-6) (5-6) ntt (3-7) (1-6) (1-4) upc-jmc (2-7) (2-7) (1-4) rali (5-8) (6-8) (5-6) uedin-birch (6-9) (6-10) (7-8) upc-jg (9) (8-10) (9) upv (9-10) (8-10) (10) English-German (In Domain) Adequacy (rank) Fluency (rank) BLEU (rank) upc-mr (1-3) (1-5) (3-5) ntt (1-5) (2-6) (1-3) upc-jmc (1-5) (1-4) (1-3) nrc (2-4) (1-5) (4-5) rali (3-6) (2-6) (1-4) systran (5-6) (3-6) (7) upv (7) (7) (6) Figure 8: Evaluation of translation from English on in-domain test data 113 French-English (Out of Domain) Adequacy (rank) Fluency (rank) BLEU (rank) upc-jmc (1-5) (1-8) (1-4) cmu (1-8) 
(1-9) (4-7) systran (1-8) (1-7) (9) lcc (1-9) (1-9) (1-5) upc-mr (2-8) (1-7) (1-3) utd (1-9) (1-8) (3-7) ntt (3-9) (1-9) (3-7) nrc (3-8) (3-9) (3-7) rali (4-9) (5-9) (8) upv (10) (10) (10) Spanish-English (Out of Domain) Adequacy (rank) Fluency (rank) BLEU (rank) upc-jmc (1-2) (1-6) (1-3) uedin-birch (1-7) (1-6) (5-8) nrc (2-8) (1-8) (5-7) ntt (2-7) (2-6) (3-4) upc-mr (2-8) (1-7) (5-8) lcc (4-9) (3-7) (1-4) utd (2-9) (2-8) (1-3) upc-jg (4-9) (7-9) (9) rali (4-9) (6-9) (6-8) upv (10) (10) (10) German-English (Out of Domain) Adequacy (rank) Fluency (rank) BLEU (rank) systran (1-4) (1-4) (7-9) uedin-phi (1-6) (1-7) (1) lcc (1-6) (1-7) (2-3) utd (2-7) (2-6) (4-6) ntt (1-9) (1-7) (3-5) nrc (3-8) (2-8) (7-8) upc-mr (4-8) (6-8) (4-6) upc-jmc (4-8) (3-9) (2-5) rali (8-9) (8-9) (8-9) upv (10) (10) (10) Figure 9: Evaluation of translation to English on out-of-domain test data 114 English-French (Out of Domain) Adequacy (rank) Fluency (rank) BLEU (rank) systran (1) (1) (1) upc-jmc (2-5) (2-4) (2-6) upc-mr (2-4) (2-4) (2-6) utd (2-6) (2-6) (7) rali (4-7) (5-7) (2-6) nrc (4-7) (4-7) (2-5) ntt (4-7) (4-7) (3-6) English-Spanish (Out of Domain) Adequacy (rank) Fluency (rank) BLEU (rank) upc-mr (1-3) (1-6) (1-2) ms (1-7) (1-8) (6-7) utd (2-6) (1-7) (3-5) nrc (1-6) (2-7) (3-5) upc-jmc (2-7) (1-6) (3-5) ntt (2-7) (1-7) (1-2) rali (6-8) (4-8) (6-8) uedin-birch (6-10) (5-9) (7-8) upc-jg (8-9) (9-10) (9) upv (9) (8-9) (10) English-German (Out of Domain) Adequacy (rank) Fluency (rank) BLEU (rank) systran (1) (1-2) (1-6) upc-mr (2-3) (1-3) (1-5) upc-jmc (2-3) (3-6) (1-6) rali (4-6) (4-6) (1-6) nrc (4-6) (2-6) (2-6) ntt (4-6) (3-5) (1-6) upv (7) (7) (7) Figure 10: Evaluation of translation from English on out-of-domain test data 115 French-English In domain Out of Domain Adequacy Adequacy 0.3 0.3 \u2022 0.2 0.2 0.1 0.1 -0.0 -0.0 -0.1 -0.1 -0.2 -0.2 -0.3 -0.3 -0.4 -0.4 -0.5 -0.5 -0.6 -0.6 -0.7 -0.7 \u2022upv -0.8 -0.8 21 22 23 24 25 26 27 28 29 30 31 15 16 17 18 19 20 21 22 \u2022upv 
\u2022systran upcntt \u2022 rali upc-jmc \u2022 cc Fluency 0.2 0.1 -0.0 -0.1 -0.2 -0.3 -0.4 \u2022upv -0.5 \u2022systran \u2022upv upc -jmc \u2022 Fluency 0.2 0.1 -0.0 -0.1 -0.2 -0.3 -0.4 -0.5 -0.6 \u2022 \u2022 \u2022 td t cc upc- \u2022 rali 21 22 23 24 25 26 27 28 29 30 31 15 16 17 18 19 20 21 22 Figure 11: Correlation between manual and automatic scores for French-English 116 Spanish-English Figure 12: Correlation between manual and automatic scores for Spanish-English -0.3 -0.4 0.2 0.1 -0.0 -0.1 -0.2 -0.3 -0.4 -0.5 \u2022upv -0.4 \u2022upv -0.3 In Domain \u2022upc-jg Adequacy 0.3 0.2 0.1 -0.0 -0.1 -0.2 Out of Domain \u2022upc-jmc \u2022nrc \u2022ntt Adequacy upc-jmc \u2022 \u2022 \u2022lcc \u2022 rali \u2022 \u2022rali -0.7 -0.5 -0.6 \u2022upv 23 24 25 26 27 28 29 30 31 32 19 20 21 22 23 24 25 26 27 28 \u2022 \u2022rali Fluency 0.2 0.1 -0.0 -0.1 -0.2 ntt \u2022 upc-mr \u2022lcc \u2022utd \u2022upc-jg \u2022rali Fluency 0.2 0.1 -0.0 -0.1 -0.2 -0.3 -0.4 \u2022upc-jmc \u2022 uedin-birch -0.5 -0.5 \u2022upv 23 24 25 26 27 28 29 30 31 32 19 20 21 22 23 24 25 26 27 28 117 In Domain Out of Domain Adequacy Adequacy German-English 15 16 17 18 19 20 21 22 23 24 25 26 27 0.4 0.3 0.2 0.1 -0.0 -0.1 -0.2 -0.3 -0.4 -0.5 -0.6 lcc \u2022 upc-jmc \u2022systran \u2022upv Fluency \u2022ula \u2022upc-mr \u2022lcc 15 16 17 18 19 20 21 22 23 24 25 26 27 0.4 0.3 0.2 0.1 -0.0 -0.1 -0.2 -0.3 -0.4 \u2022systran \u2022upv \u2022uedin-phi -jmc \u2022rali \u2022systran -0.3 -0.4 -0.5 -0.6 \u2022upv 12 13 14 15 16 17 18 19 20 0.4 0.3 0.2 0.1 -0.0 -0.1 -0.2 Fluency uedin-phi \u2022 \u2022 \u2022utd \u2022upc-jmc \u2022upc-mr 0.4 \u2022rali -0.3 -0.4 -0.5 \u2022upv 12 13 14 15 16 17 18 19 20 0.3 0.2 0.1 -0.0 -0.1 -0.2 English-French In Domain Out of Domain Adequacy Adequacy .","label":0},{"source_text":"Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.","target_text":"to represent the ith word type emitted by the HMM: P 
(t(i)|Ti, t(\u2212i), w, \u03b1) \u221d P(w|Ti, t(\u2212i), w(\u2212i), \u03b1) \u220f(tb,ta) P(Ti|tb, t(\u2212i), \u03b1) P(ta|Ti, t(\u2212i), \u03b1), and P(Ti, t(i)|T(\u2212i), W, t(\u2212i), w, \u03b1, \u03b2) = P(Ti|W, T(\u2212i), \u03b2) P(t(i)|Ti, t(\u2212i), w, \u03b1). All terms are Dirichlet distributions and parameters can be analytically computed from counts in t(\u2212i), where T(\u2212i) denotes all type-level tag assignments except Ti and t(\u2212i) denotes all token-level tags except t(i) and w(\u2212i) (Johnson, 2007).","label":0},{"source_text":"The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.","target_text":"A few pointed out that adequacy should be broken up into two criteria: (a) are all source words covered?","label":0},{"source_text":"One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.","target_text":"This is not an oversight.","label":0},{"source_text":"In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.","target_text":"If one is interested in translation, one would probably want to consider show up as a single dictionary word since its semantic interpretation is not trivially derivable from the meanings of show and up.","label":0},{"source_text":"They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.","target_text":"The correct ambiguity resolution of the syntactic level therefore helps to resolve the morphological one, and vice versa.","label":0},{"source_text":"This paper
presents methods to query N-gram language models, minimizing time and space costs.","target_text":"RandLM 0.2 (Talbot and Osborne, 2007) stores large-scale models in less memory using randomized data structures.","label":0},{"source_text":"In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.","target_text":"10 Chinese speakers may object to this form, since the suffix f, menD (PL) is usually restricted to.","label":0},{"source_text":"The corpus was annotated with different linguistic information.","target_text":"2.5 Connectives with scopes.","label":0},{"source_text":"NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.","target_text":"(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.","label":0},{"source_text":"In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.","target_text":"2.1 Reliable Case Resolutions.","label":0},{"source_text":"The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.","target_text":"Judges AG GR ST M1 M2 M3 T1 T2 T3 AG 0.70 0.70 0.43 0.42 0.60 0.60 0.62 0.59 GR 0.99 0.62 0.64 0.79 0.82 0.81 0.72 ST
0.64 0.67 0.80 0.84 0.82 0.74 M1 0.77 0.69 0.71 0.69 0.70 M2 0.72 0.73 0.71 0.70 M3 0.89 0.87 0.80 T1 0.88 0.82 T2 0.78 respectively, the recall and precision.","label":0},{"source_text":"Finally, several coreference systems have successfully incorporated anaphoricity determination modules.","target_text":"Figure 3 shows examples of semantic expectations that were learned.","label":0},{"source_text":"The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform.","target_text":"To summarize, we provided: The performance of the baseline system is similar to the best submissions in last year\u2019s shared task.","label":0},{"source_text":"In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.","target_text":"This extends previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and using a simpler training procedure.","label":0},{"source_text":"The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).","target_text":"We show translation results for three approaches: the monotone search (MonS), where no word reordering is allowed (Tillmann, 1997), the quasimonotone search (QmS) as presented in this paper and the IBM style (IbmS) search as described in Section 3.2.","label":0},{"source_text":"The resulting model is compact, efficiently learnable and linguistically expressive.","target_text":"Since the parameter and token components will remain fixed throughout experiments, we briefly describe each.","label":0},{"source_text":"The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.","target_text":"Gather phrases using keywords Now, the keyword with the top TF\/ITF score is selected for each
phrase.","label":0},{"source_text":"In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.","target_text":"Figure 2 shows examples of extracted NE pair instances and their contexts.","label":0},{"source_text":"NER is useful in many NLP applications such as information extraction, question answering, etc .Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.","target_text":"The probability distribution that satisfies the above property is the one with the highest entropy.","label":0},{"source_text":"The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.","target_text":"Taking only the highest frequency rules is much "safer", as they tend to be very accurate.","label":0},{"source_text":"These clusters are computed using an SVD variant without relying on transitional structure.","target_text":"In addition, this formulation results in a dramatic reduction in the number of model parameters thereby, enabling unusually rapid training.","label":0},{"source_text":"This paper conducted research in the area of automatic paraphrase discovery.","target_text":"In this domain the major scenarios involve the things they agreed on, rather than the mere fact that they agreed.","label":0},{"source_text":"Vijay-Shankar et all considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.","target_text":"We are very grateful to Tony Kroc.h, Michael Pails, Sunil Shende, and Mark Steedman for valuable discussions. 
formalisms.","label":0},{"source_text":"Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.","target_text":"The normalized judgement per judge is the raw judgement plus (3 minus average raw judgement for this judge).","label":0},{"source_text":"In this paper, the authors are of the opinion that the sequence models-based approaches usually treat token-level tag assignment as the primary latent variable.","target_text":"For each cell, the first row corresponds to the result using the best hyperparameter choice, where best is defined by the 11 metric.","label":0},{"source_text":"They showed better grammars to improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.","target_text":"Our primary goal is to exploit the resources that are most appropriate for the task at hand, and our secondary goal is to allow for comparison of our models\u2019 performance against previously reported results.","label":0},{"source_text":"Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs.","target_text":"an event.","label":0},{"source_text":"In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.","target_text":"This is orthographically represented as 7C.","label":0},{"source_text":"We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.","target_text":"The following features were used: full-string=x The full string (e.g., for Maury Cooper, full- s tring=Maury_Cooper). 
contains(x) If the spelling contains more than one word, this feature applies for any words that the string contains (e.g., Maury Cooper contributes two such features, contains(Maury) and contains(Cooper)). allcap1 This feature appears if the spelling is a single word which is all capitals (e.g., IBM would contribute this feature). allcap2 This feature appears if the spelling is a single word which is all capitals or full periods, and contains at least one period.","label":0},{"source_text":"This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers.","target_text":"This is the first set that gives us a fair evaluation of the Bayes models, and the Bayes switching model performs significantly better than its non-parametric counterpart.","label":0},{"source_text":"On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.","target_text":"This family represents an attempt to generalize the properties shared by CFG's, HG's, TAG's, and MCTAG's.","label":0},{"source_text":"The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.","target_text":"Each word is terminated by an arc that represents the transduction between f and the part of speech of that word, weighted with an estimated cost for that word.","label":0},{"source_text":"They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.","target_text":"This analyzer setting is similar to that of (Cohen and Smith, 2007), and models using it are denoted nohsp. Parser and Grammar We used BitPar (Schmid, 2004), an efficient general-purpose parser,10 together
with various treebank grammars to parse the input sentences and propose compatible morphological segmentation and syntactic analysis.","label":0},{"source_text":"This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.","target_text":"The simplest approach involves scoring the various analyses by costs based on word frequency, and picking the lowest cost path; variants of this approach have been described in Chang, Chen, and Chen (1991) and Chang and Chen (1993).","label":0},{"source_text":"In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.","target_text":"to represent the ith word type emitted by the HMM: P(t(i)|Ti, t(\u2212i), w, \u03b1) \u221d P(w|Ti, t(\u2212i), w(\u2212i), \u03b1) \u220f(tb,ta) P(Ti|tb, t(\u2212i), \u03b1) P(ta|Ti, t(\u2212i), \u03b1), and P(Ti, t(i)|T(\u2212i), W, t(\u2212i), w, \u03b1, \u03b2) = P(Ti|W, T(\u2212i), \u03b2) P(t(i)|Ti, t(\u2212i), w, \u03b1). All terms are Dirichlet distributions and parameters can be analytically computed from counts in t(\u2212i), where T(\u2212i) denotes all type-level tag assignments except Ti and t(\u2212i) denotes all token-level tags except t(i) and w(\u2212i) (Johnson, 2007).","label":0},{"source_text":"Nevertheless, only a part of this corpus (10 texts), which the authors name \"core corpus\", is annotated with all this information.","target_text":"Hence we decided to select ten commentaries to form a \u2018core corpus\u2019, for which the entire range of annotation levels was realized, so that experiments with multi-level
querying could commence.","label":0},{"source_text":"This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.","target_text":"Diacritics can also be used to specify grammatical relations such as case and gender.","label":0},{"source_text":"This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.","target_text":"This concerns on the one hand the basic question of retrieval, i.e. searching for information across the annotation layers (see 3.1).","label":0},{"source_text":"Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.","target_text":"9 61.0 44.","label":0},{"source_text":"Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages.","target_text":"Therefore, the number of fine tags varied across languages for our experiments; however, one could as well have fixed the set of HMM states to be a constant across languages, and created one mapping to the universal POS tagset.","label":0},{"source_text":"This paper presents methods to query N-gram language models, minimizing time and space costs.","target_text":"Reading the following record\u2019s offset indicates where the block ends.","label":0},{"source_text":"Nevertheless, only a part of this corpus (10 texts), which the authors name \"core corpus\", is annotated with all this information.","target_text":"On the other hand, we are interested in the application of rhetorical analysis or \u2018discourse parsing\u2019 (3.2 and 3.3), in text generation (3.4), and in exploiting the corpus for the development of improved models of discourse structure (3.5).","label":0},{"source_text":"Vijay-Shankar et al. considered the structural
descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.","target_text":"Gazdar (1985) argues this is the appropriate analysis of unbounded dependencies in the hypothetical Scandinavian language Norwedish.","label":0},{"source_text":"The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech.","target_text":"Using this encoding scheme, the arc from je to Z in Figure 2 would be assigned the label AuxP\u2191Sb (signifying an AuxP that has been lifted from a Sb).","label":0},{"source_text":"The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).","target_text":"A search restriction especially useful for the translation direction from German to English is presented.","label":0},{"source_text":"Finally, several coreference systems have successfully incorporated anaphoricity determination modules.","target_text":"If either case is true, then CFLex reports that the anaphor and candidate might be coreferent.","label":0},{"source_text":"The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.","target_text":"Word Re-ordering and DP-based Search in Statistical Machine Translation","label":0},{"source_text":"The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform.","target_text":"was done by the participants.","label":0},{"source_text":"BABAR's performance in both domains of terrorism and natural disaster, and the contextual-role knowledge in pronouns have shown successful results.","target_text":"The goal of our 
research was to explore the use of contextual role knowledge for coreference resolution.","label":0},{"source_text":"However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.","target_text":"Performance typically stabilizes across languages after only a few iterations.","label":0},{"source_text":"In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures.","target_text":"In section 5, we then evaluate the entire parsing system by training and evaluating on data from the Prague Dependency Treebank.","label":0},{"source_text":"This paper talks about KenLM: Faster and Smaller Language Model Queries.","target_text":"The trie data structure is commonly used for language modeling.","label":0},{"source_text":"Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs.","target_text":"BABAR achieved recall in the 42\u201350% range for both domains, with 76% precision overall for terrorism and 87% precision for natural disasters.","label":0},{"source_text":"Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.","target_text":"For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. .
For a token that is in a consecutive sequence of init then a feature Corporate-Suffix is set to 1.","label":0},{"source_text":"In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.","target_text":"The result of this is shown in Figure 7.","label":0},{"source_text":"A beam search concept is applied as in speech recognition.","target_text":"The type of alignment we have considered so far requires the same length for source and target sentence, i.e. I = J. Evidently, this is an unrealistic assumption, therefore we extend the concept of inverted alignments as follows: When adding a new position to the coverage set C, we might generate either \u00c3\u0086 = 0 or \u00c3\u0086 = 1 new target words.","label":0},{"source_text":"In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.","target_text":"+ cost(unseen(fm, as desired.","label":0},{"source_text":"The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.","target_text":"Further, it needs extra pointers in the trie, increasing model size by 40%.","label":0},{"source_text":"However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.","target_text":"\u03b2 is the shared hyperparameter for the tag assignment prior and word feature multinomials.","label":0},{"source_text":"The resulting model is compact, efficiently learnable and linguistically expressive.","target_text":"This line of work has been motivated by empirical findings that the standard EM-learned unsupervised HMM does not exhibit sufficient word tag sparsity.","label":0},{"source_text":"Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.","target_text":"In fact, it is very difficult to maintain consistent standards on what (say) an adequacy
judgement of 3 means even for a specific language pair.","label":0},{"source_text":"Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.","target_text":"irL as the product of the probability estimate for i\u00c2\u00a5JJ1l., and the probability estimate just derived for unseen plurals in ir,: p(i\u00c2\u00a51J1l.ir,) p(i\u00c2\u00a51J1l.)p(unseen(f,)).","label":0},{"source_text":"they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement.","target_text":"There is a sizable literature on Chinese word segmentation: recent reviews include Wang, Su, and Mo (1990) and Wu and Tseng (1993).","label":0},{"source_text":"It is well-known that English constituency parsing models do not generalize to other languages and treebanks.","target_text":"29 \u2014 95.","label":0},{"source_text":"They focused on phrases which two Named Entities, and proceed in two stages.","target_text":"Evaluation results within sets Table 1 shows the evaluation result based on the number of phrases in a set.","label":0},{"source_text":"In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.","target_text":"We extend the Matsoukas et al. approach in several ways.","label":0},{"source_text":"The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).","target_text":"(Blum and Mitchell 98) go on to give PAC results for learning in the cotraining case.","label":0},{"source_text":"Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs.","target_text":"Table 3 shows
BABAR\u2019s performance when the four contextual role knowledge sources are added.","label":0},{"source_text":"In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.","target_text":"Note that while the standard HMM has O(K n) emission parameters, our model has O(n) effective parameters. 3 Token Component Once HMM parameters (\u03c6, \u03b8) have been drawn, the HMM generates a token-level corpus w in the standard way: P(w, t|\u03c6, \u03b8) = \u220f(w,t)\u2208(w,t) \u220fj P(tj|\u03c6tj\u22121) P(wj|tj, \u03b8tj), and the joint factorizes as P(T, W, \u03b8, \u03c8, \u03c6, t, w|\u03b1, \u03b2) = P(T, W, \u03c8|\u03b2) [Lexicon] P(\u03c6, \u03b8|T, \u03b1, \u03b2) [Parameter] P(w, t|\u03c6, \u03b8) [Token] We refer to the components on the right hand side as the lexicon, parameter, and token component respectively.","label":0},{"source_text":"They plan on extending instance-weighting to other standard SMT components and capture the degree of generality of phrase pairs.","target_text":"This combination generalizes (2) and (3): we use either at = a to obtain a fixed-weight linear combination, or at = cI(t)\/(cI(t) + 0) to obtain a MAP combination.","label":0},{"source_text":"This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.","target_text":"Without using the same test corpus, direct comparison is obviously difficult; fortunately, Chang et al.
include a list of about 60 sentence fragments that exemplify various categories of performance for their system.","label":0},{"source_text":"Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.","target_text":"The sign test checks how likely a sample of better and worse BLEU scores would have been generated by two systems of equal performance.","label":0},{"source_text":"The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.","target_text":"Manual and Automatic Evaluation of Machine Translation between European Languages","label":0},{"source_text":"The PROBING data structure uses linear probing hash tables and is designed for speed.","target_text":"Raj and Whittaker (2003) show that integers in a trie implementation can be compressed substantially.","label":0},{"source_text":"The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.","target_text":"Until now, all evaluations of Arabic parsing\u2014including the experiments in the previous section\u2014have assumed gold segmentation.","label":0},{"source_text":"In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.","target_text":"2 61.7 64.","label":0},{"source_text":"This paper talks about KenLM: Faster and Smaller Language Model Queries.","target_text":"Further, the probing hash table does only one random lookup per query, explaining why it is faster on large data.","label":0},{"source_text":"Finally, several coreference systems have successfully incorporated anaphoricity determination modules.","target_text":"If \u201cgun\u201d and \u201crevolver\u201d refer to the same object, then it should also be acceptable to say that Fred was \u201ckilled with a
gun” and that the burglar “fired a revolver”.","label":0},{"source_text":"The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.","target_text":"[Scatter-plot residue from the original PDF; recoverable captions: Figure 14: Correlation between manual and automatic scores for English-French. Figure 15: Correlation between manual and automatic scores for English-Spanish. Panels for English-German, In Domain and Out of Domain, with Adequacy and Fluency axes.]","label":0},{"source_text":"These clusters are computed using an SVD
variant without relying on transitional structure.","target_text":"Performance typically stabilizes across languages after only a few number of iterations.","label":0},{"source_text":"This corpus has several advantages: it is annotated at different levels.","target_text":"Cur In order to ground such approaches in linguistic observation and description, a multi-level anno 10 For an exposition of the idea as applied to the task of text planning, see (Chiarcos, Stede 2004).","label":0},{"source_text":"It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.","target_text":"The second term is a regularizer and encourages all type marginals to be uniform to the extent that is allowed by the first two terms (cf. maximum entropy principle).","label":0},{"source_text":"This assumption, however, is not inherent to type-based tagging models.","target_text":"In addition, this formulation results in a dramatic reduction in the number of model parameters thereby, enabling unusually rapid training.","label":0},{"source_text":"A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.","target_text":"The key point is that the second constraint can be remarkably powerful in reducing the complexity of the learning problem.","label":0},{"source_text":"It is annotated with several data: morphology, syntax, rhetorical structure, connectors, correference and informative structure.","target_text":"We thus decided to pay specific attention to them and introduce an annotation layer for connectives and their scopes.","label":0},{"source_text":"We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.","target_text":"The method uses a \"soft\" measure
of the agreement between two classifiers as an objective function; we described an algorithm which directly optimizes this function.","label":0},{"source_text":"The resulting model is compact, efficiently learnable and linguistically expressive.","target_text":"This approach makes the training objective more complex by adding linear constraints proportional to the number of word types, which is rather prohibitive.","label":0},{"source_text":"An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.","target_text":"The inverted alignment probability p(b_i | b_{i−1}; I; J) and the lexicon probability p(f_{b_i} | e_i) are obtained by relative frequency estimates from the Viterbi alignment path after the final training iteration.","label":0},{"source_text":"they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.","target_text":"Foreign names are usually transliterated using hanzi whose sequential pronunciation mimics the source language pronunciation of the name.","label":0},{"source_text":"Combining multiple highly-accurate independent parsers yields promising results.","target_text":"The development set contained 44088 constituents in 2416 sentences and the test set contained 30691 constituents in 1699 sentences.","label":0},{"source_text":"Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs.","target_text":"Each extraction pattern represents a linguistic expression and a syntactic position indicating where a role filler can be found.","label":0},{"source_text":"While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based
systems.","target_text":"A few annotators suggested to break up long sentences into clauses and evaluate these separately.","label":0},{"source_text":"This paper talks about Pseudo-Projective Dependency Parsing.","target_text":"Although the best published results for the Collins parser is 80% UAS (Collins, 1999), this parser reaches 82% when trained on the entire training data set, and an adapted version of Charniak’s parser (Charniak, 2000) performs at 84% (Jan Hajič, pers. comm.).","label":0},{"source_text":"The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.","target_text":"This is orthographically represented as 7C.","label":0},{"source_text":"the \"Potsdam Commentary Corpus\" or PCC consists of 170 commentaries from Märkische Allgemeine Zeitung, a German regional daily.","target_text":"The paper is organized as follows: Section 2 explains the different layers of annotation that have been produced or are being produced.","label":0},{"source_text":"Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.","target_text":"Context from the whole document can be important in classifying a named entity.","label":0},{"source_text":"They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.","target_text":"The Input The set of analyses for a token is thus represented as a lattice in which every arc corresponds to a specific lexeme l, as shown in Figure 1.","label":0},{"source_text":"The approach assumes that the word reordering is restricted to a few positions in the source sentence.","target_text":"Translation errors are reported in terms of multireference word error rate (mWER) and subjective sentence error rate (SSER).","label":0},{"source_text":"All the texts were annotated by two
people.","target_text":"3.5 Improved models of discourse.","label":0},{"source_text":"It is well-known that English constituency parsing models do not generalize to other languages and treebanks.","target_text":"Human evaluation is one way to distinguish between the two cases.","label":0},{"source_text":"The AdaBoost algorithm was developed for supervised learning.","target_text":"(Blum and Mitchell 98) describe learning in the following situation: X = X1 × X2 where X1 and X2 correspond to two different \"views\" of an example.","label":0},{"source_text":"The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.","target_text":"Words and punctuation that appear in brackets are considered optional.","label":0},{"source_text":"Finally, several coreference systems have successfully incorporated anaphoricity determination modules.","target_text":"6 Conclusions.","label":0},{"source_text":"The AdaBoost algorithm was developed for supervised learning.","target_text":"Each round is composed of two stages; each stage updates one of the classifiers while keeping the other classifier fixed.","label":0},{"source_text":"This assumption, however, is not inherent to type-based tagging models.","target_text":"9 66.4 47.","label":0},{"source_text":"In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.","target_text":"This is not ideal for some applications, however.","label":0},{"source_text":"In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back.","target_text":"For the manual scoring, we can distinguish only half of the systems, both in terms of fluency and adequacy.","label":0},{"source_text":"The overall performance of their joint framework demonstrated that a probability distribution
obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results.","target_text":"This analyzer setting is similar to that of (Cohen and Smith, 2007), and models using it are denoted nohsp, Parser and Grammar We used BitPar (Schmid, 2004), an efficient general purpose parser,10 together with various treebank grammars to parse the input sentences and propose compatible morphological segmentation and syntactic analysis.","label":0},{"source_text":"A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.","target_text":"The Expectation Maximization (EM) algorithm (Dempster, Laird and Rubin 77) is a common approach for unsupervised training; in this section we describe its application to the named entity problem.","label":0},{"source_text":"There is no global pruning.","target_text":"The translation scores for the hypotheses generated with different threshold values t0 are compared to the translation scores obtained with a conservatively large threshold t0 = 10.0.
For each test series, we count the number of sentences whose score is worse than the corresponding score of the test series with the conservatively large threshold t0 = 10.0, and this number is reported as the number of search errors.","label":0},{"source_text":"Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.","target_text":"On several languages, we report performance exceeding that of state-of-the art systems.","label":0},{"source_text":"Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.","target_text":"We settled on contrastive evaluations of 5 system outputs for a single test sentence.","label":0},{"source_text":"In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.","target_text":"Simple Type-Level Unsupervised POS Tagging","label":0},{"source_text":"It is annotated with several data: morphology, syntax, rhetorical structure, connectors, correference and informative structure.","target_text":"In focus is in particular the correlation with rhetorical structure, i.e., the question whether specific rhetorical relations — or groups of relations in particular configurations — are signalled by speakers with prosodic means.","label":0},{"source_text":"One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.","target_text":"This is the parse that is closest to the centroid of the observed parses under the similarity metric.","label":0},{"source_text":"The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an
in-domain phrase table.","target_text":"It is difficult when IN and OUT are dissimilar, as they are in the cases we study.","label":0},{"source_text":"They proposed an unsupervised method to discover paraphrases from a large untagged corpus.","target_text":"Only 2 link in the CC-domain (buy-purchase, acquire-acquisition) and 2 links (trader-dealer and head-chief) in the PC-domain are found in the same synset of WordNet 2.1 (http:\/\/wordnet.princeton.edu\/).","label":0},{"source_text":"They found replacing it with a ranked evaluation to be more suitable.","target_text":"Given the closeness of most systems and the wide overlapping confidence intervals it is hard to make strong statements about the correlation between human judgements and automatic scoring methods such as BLEU.","label":0},{"source_text":"They found replacing it with a ranked evaluation to be more suitable.","target_text":"Often, two systems can not be distinguished with a confidence of over 95%, so there are ranked the same.","label":0},{"source_text":"The second algorithm builds on a boosting algorithm called AdaBoost.","target_text":"This section describes AdaBoost, which is the basis for the CoBoost algorithm.","label":0},{"source_text":"In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.","target_text":"In this paper, Section 2 begins by explaining how contextual role knowledge is represented and learned.","label":0},{"source_text":"The texts were annotated with the RSTtool.","target_text":"When the connective is an adverbial, there is much less clarity as to the range of the spans.","label":0},{"source_text":"It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.","target_text":"As can be seen, GR and this \"pared-down\" statistical method perform quite similarly, though the
statistical method is still slightly better.16 AG clearly performs much less like humans than these methods, whereas the full statistical algorithm, including morphological derivatives and names, performs most closely to humans among the automatic methods.","label":0},{"source_text":"The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.","target_text":"Space- or punctuation-delimited * 700 Mountain Avenue, 2d451, Murray Hill, NJ 07974, USA.","label":0},{"source_text":"This paper conducted research in the area of automatic paraphrase discovery.","target_text":"In the CC-domain, there are 32 sets of phrases which contain more than 2 phrases.","label":0},{"source_text":"The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.","target_text":"We divide up each test set into blocks of 20 sentences (100 blocks for the in-domain test set, 53 blocks for the out-of-domain test set), check for each block, if one system has a higher BLEU score than the other, and then use the sign test.","label":0},{"source_text":"The PROBING data structure uses linear probing hash tables and is designed for speed.","target_text":"When SRILM estimates a model, it sometimes removes n-grams but not n + 1-grams that extend it to the left.","label":0},{"source_text":"They showed better grammars to improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.","target_text":"M(wi) = Li).","label":0},{"source_text":"the \"Potsdam Commentary Corpus\" or PCC consists of 170 commentaries from Ma\u00a8rkische Allgemeine Zeitung, a German regional daily.","target_text":"The choice of the genre commentary resulted from the fact that an investigation of rhetorical structure, its interaction with other aspects of discourse structure, and the prospects for its automatic 
derivation are the key motivations for building up the corpus.","label":0},{"source_text":"This assumption, however, is not inherent to type-based tagging models.","target_text":"We have presented a method for unsupervised part-of-speech tagging that considers a word type and its allowed POS tags as a primary element of the model.","label":0},{"source_text":"The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.","target_text":"Keywords with more than one word In the evaluation, we explained that “chairman” and “vice chairman” are considered paraphrases.","label":0},{"source_text":"In this paper, the authors are of the opinion that the sequence models-based approaches usually treat token-level tag assignment as the primary latent variable.","target_text":"The difference between the featureless model (+PRIOR) and our full model (+FEATS) is 13.6% and 7.7% average error reduction on best and median settings respectively.","label":0},{"source_text":"We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.","target_text":"When this feature type was included, CoBoost chose this default feature at an early iteration, thereby giving non-abstaining pseudo-labels for all examples, with eventual convergence to the two classifiers agreeing by assigning the same label to almost all examples.","label":0},{"source_text":"It is well-known that English constituency parsing models do not generalize to other languages and treebanks.","target_text":"08 84.","label":0},{"source_text":"they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement.","target_text":"Two issues distinguish the various proposals.","label":0},{"source_text":"Explanations
for this phenomenon are relative informativeness of lexicalization, insensitivity to morphology and the effect of variable word order and these factors lead to syntactic disambiguation.","target_text":"The ATB annotation guidelines specify that proper nouns should be specified with a flat NP (a).","label":0},{"source_text":"Finally, several coreference systems have successfully incorporated anaphoricity determination modules.","target_text":"Reflexive pronouns with only 1 NP in scope..","label":0},{"source_text":"This paper offers a broad insight into of Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.","target_text":"08 84.","label":0},{"source_text":"Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.","target_text":"0 57.3 51.","label":0},{"source_text":"They plan on extending instance-weighting to other standard SMT components and capture the degree of generality of phrase pairs.","target_text":"We describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not.","label":0},{"source_text":"Here we present two algorithms.","target_text":"We define the following function: If Zco is small, then it follows that the two classifiers must have a low error rate on the labeled examples, and that they also must give the same label on a large number of unlabeled instances.","label":0},{"source_text":"The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.","target_text":"Following Sproat and Shih (1990), performance for Chinese segmentation systems is generally reported in terms of the dual 
measures of precision and recall. It is fairly standard to report precision and recall scores in the mid to high 90% range.","label":0},{"source_text":"In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.","target_text":"In contrast to the Bayesian HMM, θt is not drawn from a distribution which has support for each of the n word types.","label":0},{"source_text":"The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.","target_text":"(a) [hanzi rendered as unreadable glyphs in extraction] shen3me0 shi2hou4 wo3 cai2 neng2 ke4fu2 zhe4ge4 kun4 (gloss: what time I just be able overcome this CL diffic) 'When will I be able to overcome this difficulty?'","label":0},{"source_text":"In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.","target_text":"Most languages that use Roman, Greek, Cyrillic, Armenian, or Semitic scripts, and many that use Indian-derived scripts, mark orthographic word boundaries; however, languages written in a Chinese-derived writing system, including Chinese and Japanese, as well as Indian-derived writing systems of languages like Thai, do not delimit orthographic words.1 Put another way, written Chinese simply lacks orthographic words.","label":0},{"source_text":"the \"Potsdam Commentary Corpus\" or PCC consists of 170 commentaries from Märkische Allgemeine Zeitung, a German regional daily.","target_text":"Instead, the designs of the various annotation layers and the actual annotation work are results of a series of diploma theses, of students’ work in course projects, and to some extent of paid assistentships.","label":0},{"source_text":"Explanations for this phenomenon are relative informativeness of lexicalization, insensitivity to morphology and the effect
of variable word order and these factors lead to syntactic disambiguation.","target_text":"Variants of alif are inconsistently used in Arabic texts.","label":0},{"source_text":"Here we present two algorithms.","target_text":"We now describe the CoBoost algorithm for the named entity problem.","label":0},{"source_text":"This paper talks about KenLM: Faster and Smaller Language Model Queries.","target_text":"Syntactic decoders, such as cdec (Dyer et al., 2010), build state from null context then store it in the hypergraph node for later extension.","label":0},{"source_text":"The texts were annotated with the RSTtool.","target_text":"Besides information structure, the second main goal is to enhance current models of rhetorical structure.","label":0},{"source_text":"The PROBING data structure uses linear probing hash tables and is designed for speed.","target_text":"These conaUses lossy compression. bThe 8-bit quantized variant returned incorrect probabilities as explained in Section 3.","label":0},{"source_text":"In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.","target_text":"It is not immediately obvious how to formulate an equivalent to equation (1) for an adapted TM, because there is no well-defined objective for learning TMs from parallel corpora.","label":0},{"source_text":"they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement.","target_text":"This representation gives ir, an appropriate morphological decomposition, preserving information that would be lost by simply listing ir, as an unanalyzed form.","label":0},{"source_text":"In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.","target_text":"Analysis of the data
revealed that the contextual role knowledge is especially helpful for resolving pronouns because, in general, they are semantically weaker than definite NPs.","label":0},{"source_text":"Finally, several coreference systems have successfully incorporated anaphoricity determination modules.","target_text":"In the terrorism domain, 1600 texts were used for training and the 40 test docu X ∩ Y = ∅ All sets of hypotheses (and their corresponding belief values) in the current model are crossed with the sets of hypotheses (and belief values) provided by the new evidence.","label":0},{"source_text":"Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.","target_text":"The resulting model is compact, efficiently learnable and linguistically expressive.","label":0},{"source_text":"It is well-known that English constituency parsing models do not generalize to other languages and treebanks.","target_text":"3.1 Gross Statistics.","label":0},{"source_text":"They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.","target_text":"This significantly underperforms log-linear combination.","label":0},{"source_text":"In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.","target_text":"To set β, we used the same criterion as for α, over a dev corpus: The MAP combination was used for TM probabilities only, in part due to a technical difficulty in formulating coherent counts when using standard LM smoothing techniques (Kneser and Ney, 1995).3 Motivated by information retrieval, a number of approaches choose “relevant” sentence pairs from OUT by matching individual source sentences from IN (Hildebrand et al., 2005; Lü et al., 2007), or individual target hypotheses (Zhao
et al., 2004).","label":0},{"source_text":"Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.","target_text":"See Section 5.","label":0},{"source_text":"Combining multiple highly-accurate independent parsers yields promising results.","target_text":"The PCFG was trained from the same sections of the Penn Treebank as the other three parsers.","label":0},{"source_text":"The departure from the traditional token-based tagging approach allow them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.","target_text":"The final model tions.","label":0},{"source_text":"Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.","target_text":"It is based on the traditional character set rather than the simplified character set used in Singapore and Mainland China.","label":0},{"source_text":"This paper offers a broad insight into of Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.","target_text":"Table 9: Dev set results for sentences of length ≤ 70.","label":0},{"source_text":"Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.","target_text":"Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.","label":1},{"source_text":"The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.","target_text":"Using structural information As was explained in the results section, we extracted examples like “Smith estimates Lotus”, from a sentence like
“Mr.","label":0},{"source_text":"Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs.","target_text":"However, using the top-level semantic classes of WordNet proved to be problematic because the class distinctions are too coarse.","label":0},{"source_text":"The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table.","target_text":"We used a standard one-pass phrase-based system (Koehn et al., 2003), with the following features: relative-frequency TM probabilities in both directions; a 4-gram LM with Kneser-Ney smoothing; word-displacement distortion model; and word count.","label":0},{"source_text":"The texts were annotated with the RSTtool.","target_text":"Then, the remaining texts were annotated and cross-validated, always with discussions among the annotators.","label":0},{"source_text":"Here both parametric and non-parametric models are explored.","target_text":"Our original hope in combining these parsers is that their errors are independently distributed.","label":0},{"source_text":"Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs.","target_text":"2.","label":0},{"source_text":"The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.","target_text":"The cost is computed as follows, where N is the corpus size and f is the frequency: (1) Besides actual words from the base dictionary,
the lexicon contains all hanzi in the Big 5 Chinese code\/ with their pronunciation(s), plus entries for other characters that can be found in Chinese text, such as Roman letters, numerals, and special symbols.","label":0},{"source_text":"For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.","target_text":"While we have minimized forward-looking state in Section 4.1, machine translation systems could also benefit by minimizing backward-looking state.","label":0},{"source_text":"The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.","target_text":"Compared with the widely-used SRILM, our PROBING model is 2.4 times as fast while using 57% of the memory. The structure is a trie with bit-level packing, sorted records, interpolation search, and optional quantization aimed at lower memory consumption.
TRIE simultaneously uses less memory than the smallest lossless baseline and less CPU than the baseline.","label":0},{"source_text":"The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.","target_text":"The frequency of the Company – Company domain ranks 11th with 35,567 examples.","label":0},{"source_text":"The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.","target_text":"Recently, lattices have been used successfully in the parsing of Hebrew (Tsarfaty, 2006; Cohen and Smith, 2007), a Semitic language with similar properties to Arabic.","label":0},{"source_text":"Their results show that their high performance NER use less training data than other systems.","target_text":"Such constraints are derived from training data, expressing some relationship between features and outcome.","label":0},{"source_text":"The departure from the traditional token-based tagging approach allow them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.","target_text":"We are especially grateful to Taylor Berg-Kirkpatrick for running additional experiments.","label":0},{"source_text":"Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.","target_text":"A few pointed out that adequacy should be broken up into two criteria: (a) are all source words covered?","label":0},{"source_text":"Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages.","target_text":"For seven out of eight languages a threshold of 0.2 gave the best results for our final model, which indicates that for languages without any validation set, r = 0.2 can be
used.","label":0},{"source_text":"Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.","target_text":"With the same Corporate-Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.","label":0},{"source_text":"The texts were annotated with the RSTtool.","target_text":"5 \u2018Underspecified Rhetorical Markup Language\u2019 6 This confirms the figure given by (Schauer, Hahn.","label":0},{"source_text":"This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.","target_text":"Figure 5 shows how this model is implemented as part of the dictionary WFST.","label":0},{"source_text":"The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.","target_text":"The second algorithm builds on a boosting algorithm called AdaBoost (Freund and Schapire 97; Schapire and Singer 98).","label":0},{"source_text":"In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.","target_text":"As is standard, we use a fixed constant K for the number of tagging states.","label":0},{"source_text":"Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.","target_text":"Surprisingly, this effect is much less obvious for out-of-domain test data.","label":0},{"source_text":"Here we present two algorithms.","target_text":"The Expectation Maximization (EM) algorithm (Dempster, Laird and Rubin 77) is a common approach for unsupervised training; in this section we describe its application to the named entity
problem.","label":0},{"source_text":"There is no global pruning.","target_text":"The perplexity for the trigram language model used is 26.5.","label":0},{"source_text":"It is annotated with several data: morphology, syntax, rhetorical structure, connectors, correference and informative structure.","target_text":"\u2022 Anaphoric links: the annotator is asked to specify whether the anaphor is a repetition, partial repetition, pronoun, epithet (e.g., Andy Warhol \u2013 the PopArt artist), or is-a (e.g., Andy Warhol was often hunted by photographers.","label":0},{"source_text":"They found replacing it with a ranked evaluation to be more suitable.","target_text":"The normalized judgement per judge is the raw judgement plus (3 minus average raw judgement for this judge).","label":0},{"source_text":"For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.","target_text":"Saving state allows our code to walk the data structure exactly once per query.","label":0},{"source_text":"The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.","target_text":"The following two error criteria are used in our experiments: mWER: multi-reference WER: We use the Levenshtein distance between the automatic translation and several reference translations as a measure of the translation errors.","label":0},{"source_text":"Here we present two algorithms.","target_text":"We make the assumption that for each example, both x1,i and x2,i alone are sufficient to determine the label yi.","label":0},{"source_text":"The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.","target_text":"Note that Chang, Chen, and Chen (1991), in addition to word-frequency information, include a constraint-satisfaction model, so their method is really a hybrid approach.","label":0},{"source_text":"they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement.","target_text":"We evaluate the system's performance by comparing its segmentation \"judgments\" with the judgments of a pool of human segmenters, and the system is shown to perform quite well.","label":0},{"source_text":"These clusters are computed using an SVD variant without relying on transitional structure.","target_text":"On several languages, we report performance exceeding that of state-of-the-art systems.","label":0},{"source_text":"The approach assumes that the word reordering is restricted to a few positions in the source sentence.","target_text":"Here, we process only full-form words within the translation procedure.","label":0},{"source_text":"This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.","target_text":"Proper-Name Identification.","label":0},{"source_text":"In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.","target_text":"Another question that remains unanswered is to what extent the linguistic information he considers can be handled-or at least approximated-by finite-state language models, and therefore could be directly interfaced with the segmentation model that we have presented in this
paper.","label":0},{"source_text":"The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.","target_text":"The terrorism examples reflect fairly obvious relationships: people who are murdered are killed; agents that \u00e2\u0080\u009creport\u00e2\u0080\u009d things also \u00e2\u0080\u009cadd\u00e2\u0080\u009d and \u00e2\u0080\u009cstate\u00e2\u0080\u009d things; crimes that are \u00e2\u0080\u009cperpetrated\u00e2\u0080\u009d are often later \u00e2\u0080\u009ccondemned\u00e2\u0080\u009d.","label":0},{"source_text":"In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.","target_text":"The type-level posterior term can be computed according to, P (Ti|W , T \u00e2\u0088\u0092i, \u00ce\u00b2) \u00e2\u0088\u009d Note that each round of sampling Ti variables takes time proportional to the size of the corpus, as with the standard token-level HMM.","label":0},{"source_text":"It is annotated with several data: morphology, syntax, rhetorical structure, connectors, correference and informative structure.","target_text":"7 www.cis.upenn.edu\/\u00e2\u0088\u00bcpdtb\/ 8 www.eml-research.de\/english\/Research\/NLP\/ Downloads had to buy a new car.","label":0},{"source_text":"This paper conducted research in the area of automatic paraphrase discovery.","target_text":"As can be seen in the example, the first two phrases have a different order of NE names from the last two, so we can determine that the last two phrases represent a reversed relation.","label":0},{"source_text":"They showed that it was useful to abstract away from the details of the formalism, and examine the nature of their derivation process as reflected by properties their trees, find that several of the formalisms considered can be seen as being closely related since they have derivation tree sets with the same structure as those produced by 
Context-Free Grammars.","target_text":"CFG's, TAG's, MCTAG's and HG's are all members of this class since they satisfy these restrictions.","label":0},{"source_text":"In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.","target_text":"These are shown, with their associated costs, as follows: ABj nc 4.0 AB C\/jj 6.0 CD \/vb 5.","label":0},{"source_text":"the \"Potsdam Commentary Corpus\" or PCC consists of 170 commentaries from Ma\u00a8rkische Allgemeine Zeitung, a German regional daily.","target_text":"2.1 Part-of-speech tags.","label":0},{"source_text":"the \"Potsdam Commentary Corpus\" or PCC consists of 170 commentaries from Ma\u00a8rkische Allgemeine Zeitung, a German regional daily.","target_text":"Within the RST \u00e2\u0080\u009cuser community\u00e2\u0080\u009d there has also been discussion whether two levels of discourse structure should not be systematically distinguished (intentional versus informational).","label":0},{"source_text":"They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.","target_text":"If is a number string (such as one, two, etc), then the feature NumberString is set to 1.","label":0},{"source_text":"The approach assumes that the word reordering is restricted to a few positions in the source sentence.","target_text":"Additionally, it works about 3 times as fast as the IBM style search.","label":0},{"source_text":"The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.","target_text":"We thank United Informatics for providing us with our corpus of Chinese text, and BDC for the 'Behavior ChineseEnglish Electronic Dictionary.'","label":0},{"source_text":"The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective tree banks is often neglected because 
of the relative scarcity of problematic constructions.","target_text":"The dependency graph in Figure 1 satisfies all the defining conditions above, but it fails to satisfy the condition ofprojectivity (Kahane et al., 1998): The arc connecting the head jedna (one) to the dependent Z (out-of) spans the token je (is), which is not dominated by jedna.","label":0},{"source_text":"The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.","target_text":"As we have seen, the lexicon of basic words and stems is represented as a WFST; most arcs in this WFST represent mappings between hanzi and pronunciations, and are costless.","label":0},{"source_text":"The departure from the traditional token-based tagging approach allow them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.","target_text":"In total there are O(K 2) parameters associated with the transition parameters.","label":0},{"source_text":"Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.","target_text":"If one system is better in 95% of the sample sets, we conclude that its higher BLEU score is statistically significantly better.","label":0},{"source_text":"Human judges also pointed out difficulties with the evaluation of long sentences.","target_text":"One annotator suggested that this was the case for as much as 10% of our test sentences.","label":0},{"source_text":"We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.","target_text":"Many statistical or machine-learning approaches for natural language problems require a relatively large amount of supervision, in the form of labeled training 
examples.","label":0},{"source_text":"Nevertheless, only a part of this corpus (10 texts), which the authors name \"core corpus\", is annotated with all this information.","target_text":"Consequently, we implemented our own annotation tool ConAno in Java (Stede, Heintze 2004), which provides specifically the functionality needed for our purpose.","label":0},{"source_text":"The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).","target_text":"6.","label":0},{"source_text":"The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.","target_text":"At the same time, the n-gram error rate is sensitive to samples with extreme n-gram counts.","label":0},{"source_text":"The second algorithm builds on a boosting algorithm called AdaBoost.","target_text":"Before describing the unsupervised case we first describe the supervised version of the algorithm: Input to the learning algorithm: n labeled examples of the form (xi, yi). yi is the label of the ith example (given that there are k possible labels, yi is a member of y = {1 ... k}).
xi is a set of mi features {xi1, xi2 ... ximi}.","label":0},{"source_text":"The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.","target_text":"We measured recall (Rec), precision (Pr), and the F-measure (F) with recall and precision equally weighted.","label":0},{"source_text":"This paper presents a maximum entropy-based named entity recognizer (NER).","target_text":"It uses a maximum entropy framework and classifies each word given its features.","label":0},{"source_text":"In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back.","target_text":"We divide up each test set into blocks of 20 sentences (100 blocks for the in-domain test set, 53 blocks for the out-of-domain test set), check for each block, if one system has a higher BLEU score than the other, and then use the sign test.","label":0},{"source_text":"Nevertheless, only a part of this corpus (10 texts), which the authors name \"core corpus\", is annotated with all this information.","target_text":"The general idea for the knowledge-based part is to have the system use as much information as it can find at its disposal to produce a target representation as specific as possible and as underspecified as necessary.","label":0},{"source_text":"The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.","target_text":"Word type N %: Dictionary entries 2,543 97.47; Morphologically derived words 3 0.11; Foreign transliterations 9 0.34; Personal names 54 2.07 cases.","label":0},{"source_text":"The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.","target_text":"Memory usage is the same as with binary search and lower than with set.","label":0},{"source_text":"In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back.","target_text":"We carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.","label":1},{"source_text":"The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.","target_text":"Roughly speaking, previous work can be divided into three categories, namely purely statistical approaches, purely lexical rule-based approaches, and approaches that combine lexical information with statistical information.","label":0},{"source_text":"They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.","target_text":"Our work is closest to that of Yarowsky and Ngai (2001), but differs in two important ways.","label":0},{"source_text":"Explanations for this phenomenon are relative informativeness of lexicalization, insensitivity to morphology and the effect of variable word order and these factors lead to syntactic disambiguation.","target_text":"This is the form of recursive levels in iDafa constructs.","label":0},{"source_text":"The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.","target_text":"The PCFG was trained from the same sections of the Penn Treebank as the other three parsers.","label":0},{"source_text":"Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.","target_text":"We divide up each
test set into blocks of 20 sentences (100 blocks for the in-domain test set, 53 blocks for the out-of-domain test set), check for each block, if one system has a higher BLEU score than the other, and then use the sign test.","label":0},{"source_text":"It also incorporates the Good-Turing methodin estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.","target_text":"Indeed, as we shall show in Section 5, even human judges differ when presented with the task of segmenting a text into words, so a definition of the criteria used to determine that a given segmentation is correct is crucial before one can interpret such measures.","label":0},{"source_text":"The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.","target_text":"The human evaluators were a non-native, fluent Arabic speaker (the first author) for the ATB and a native English speaker for the WSJ.7 Table 5 shows type- and token-level error rates for each corpus.","label":0},{"source_text":"NER is useful in many NLP applications such as information extraction, question answering, etc .Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.","target_text":"ICOC and CSPP contributed the greatest im provements.","label":0},{"source_text":"The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.","target_text":"The quasi-monotone search performs best in terms of both error rates mWER and SSER.","label":0},{"source_text":"We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.","target_text":"AdaBoost.MH can be applied to the problem using these pseudolabels in place of supervised examples.","label":0},{"source_text":"Finally, several coreference systems have successfully 
incorporated anaphoricity determination modules.","target_text":"First, we parsed the training corpus, collected all the noun phrases, and looked up each head noun in WordNet (Miller, 1990).","label":0},{"source_text":"Replacing this with a ranked evaluation seems to be more suitable.","target_text":"About half of the participants of last year\u2019s shared task participated again.","label":0},{"source_text":"They showed that it was useful to abstract away from the details of the formalism, and examine the nature of their derivation process as reflected by properties their trees, find that several of the formalisms considered can be seen as being closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.","target_text":"For example, in TAG's a derived auxiliary tree spans two substrings (to the left and right of the foot node), and the adjunction operation inserts another substring (spanned by the subtree under the node where adjunction takes place) between them (see Figure 3).","label":0},{"source_text":"Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.","target_text":"We observe similar trends when using another measure \u2013 type-level accuracy (defined as the fraction of words correctly assigned their majority tag), according to which (values are 1-1 / m-1 accuracies for BK10 EM, BK10 LBFGS, G10, FEATS Best, and FEATS Median, respectively): English 48.3/68.1, 56.0/75.5, \u2013/\u2013, 50.9/66.4, 47.8/66.4; Danish 42.3/66.7, 42.6/58.0, \u2013/\u2013, 52.1/61.2, 43.2/60.7; Dutch 53.7/67.0, 55.1/64.7, \u2013/\u2013, 56.4/69.0, 51.5/67.3; Portuguese 50.8/75.3, 43.2/74.8, 44.5/69.2, 64.1/74.5, 56.5/70.1; Spanish \u2013/\u2013, 40.6/73.2, \u2013/\u2013, 58.3/68.9, 50.0/57.2. Table 4: Comparison of our method (FEATS) to state-of-the-art methods.","label":0},{"source_text":"They showed better grammars to improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.","target_text":"In addition, as the CRF and PCFG look at similar sorts of information from within two inherently different models, they are far from independent and optimizing their product is meaningless.","label":0},{"source_text":"Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.","target_text":"Our initial experimentation with the evaluation tool showed that this is often too overwhelming.","label":0},{"source_text":"They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.","target_text":"We have shown that the maximum entropy framework is able to use global information directly.","label":0},{"source_text":"The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.","target_text":"KenLM: Faster and Smaller Language Model Queries","label":0},{"source_text":"They showed that it was useful to abstract away from the details of the formalism, and examine the nature of their derivation process as reflected by properties their trees, find that several of the formalisms considered can be seen as being closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.","target_text":"We place certain restrictions on the composition operations of LCFRS's, restrictions that are shared by the composition operations of the constrained
grammatical systems that we have considered.","label":0},{"source_text":"The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.","target_text":"Two common cases are the attributive adjective and the process nominal maSdar, which can have a verbal reading.4 Attributive adjectives are hard because they are orthographically identical to nominals; they are inflected for gender, number, case, and definiteness.","label":0},{"source_text":"NER is useful in many NLP applications such as information extraction, question answering, etc .Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.","target_text":"In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.","label":0},{"source_text":"They showed that it was useful to abstract away from the details of the formalism, and examine the nature of their derivation process as reflected by properties their trees, find that several of the formalisms considered can be seen as being closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.","target_text":"With the additional assumptions, inspired by Rounds (1985), we can show that members of this class can be recognized in polynomial time.","label":0},{"source_text":"From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.","target_text":"In Eq.","label":0},{"source_text":"The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.","target_text":"Therefore, for
n-gram w_1^n, all leftward extensions w_0^n are an adjacent block in the n + 1-gram array.","label":0},{"source_text":"There is no global pruning.","target_text":"However, dynamic programming can be used to find the shortest tour in exponential time, namely in O(n^2 2^n), using the algorithm by Held and Karp.","label":0},{"source_text":"In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures.","target_text":"Our experiments all concern the analytical annotation, and the first experiment is based only on the training part.","label":0},{"source_text":"The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers.","target_text":"Before we turn to the evaluation, however, we need to introduce the data-driven dependency parser used in the latter experiments.","label":0},{"source_text":"Combining multiple highly-accurate independent parsers yields promising results.","target_text":"The machine learning community has been in a similar situation and has studied the combination of multiple classifiers (Wolpert, 1992; Heath et al., 1996).","label":0},{"source_text":"BABAR's performance in both domains of terrorism and natural disaster, and the contextual-role knowledge in pronouns have shown successful results.","target_text":"Sometimes, however, these beliefs can be contradictory.","label":0},{"source_text":"From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.","target_text":"The final score is obtained from: max_{e,e'; j in {J-L, ..., J}} p($ | e, e') Q_{e'}(e, I, {1, ..., J}, j), where p($ | e, e') denotes the trigram language model, which predicts the sentence boundary $ at the end of the target sentence.","label":0},{"source_text":"The authors believe
automatic paraphrase discovery is an important component for building a fully automatic information extraction system.","target_text":"Evaluation of links A link between two sets is considered correct if the majority of phrases in both sets have the same meaning, i.e. if the link indicates paraphrase.","label":0},{"source_text":"The PROBING data structure uses linear probing hash tables and is designed for speed.","target_text":"Otherwise, the scope of the search problem shrinks recursively: if A[pivot] < k then this becomes the new lower bound: l +\u2014 pivot; if A[pivot] > k then u +\u2014 pivot.","label":0},{"source_text":"They showed better grammars to improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.","target_text":"A compatible view is presented by Charniak et al. (1996) who consider the kind of probabilities a generative parser should get from a PoS tagger, and concludes that these should be P(w|t) \u201cand nothing fancier\u201d.3 In our setting, therefore, the Lattice is not used to induce a probability distribution on a linear context, but rather, it is used as a common-denominator of state-indexation of all segmentations possibilities of a surface form.","label":0},{"source_text":"The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.","target_text":"evaluated to account for the same fraction of the data.","label":0},{"source_text":"The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.","target_text":"We concentrate on those sets.","label":0},{"source_text":"In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference 
resolution.","target_text":"Each extraction pattern represents a linguistic expression and a syntactic position indicating where a role filler can be found.","label":0},{"source_text":"This assumption, however, is not inherent to type-based tagging models.","target_text":"This model admits a simple Gibbs sampling algorithm where the number of latent variables is proportional to the number of word types, rather than the size of a corpus as for a standard HMM sampler (Johnson, 2007).","label":0},{"source_text":"One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.","target_text":"Mi(c) is a binary function returning t when parser i (from among the k parsers) suggests constituent c should be in the parse.","label":0},{"source_text":"It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.","target_text":"On the English side, however, the vertices (denoted by Ve) correspond to word types.","label":0},{"source_text":"Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.","target_text":"This paper proposes a simple and effective tagging method that directly models tag sparsity and other distributional properties of valid POS tag assignments.","label":0},{"source_text":"Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.","target_text":"In many cases these failures in recall would be fixed by having better estimates of the actual prob\u00c2\u00ad abilities of single-hanzi words, since our estimates are often inflated.","label":0},{"source_text":"They extended previous work on discriminative 
weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used simpler training procedure.","target_text":"It is difficult to directly compare the Matsoukas et al results with ours, since our out-of-domain corpus is homogeneous; given heterogeneous training data, however, it would be trivial to include Matsoukas-style identity features in our instance-weighting model.","label":0},{"source_text":"The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.","target_text":"Next, we represent the input sentence as an unweighted finite-state acceptor (FSA) I over H. Let us assume the existence of a function Id, which takes as input an FSA A, and produces as output a transducer that maps all and only the strings of symbols accepted by A to themselves (Kaplan and Kay 1994).","label":0},{"source_text":"The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.","target_text":"Precision.","label":0},{"source_text":"In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.","target_text":"The toplevel weights are trained to maximize a metric such as BLEU on a small development set of approximately 1000 sentence pairs.","label":0},{"source_text":"Foster et all describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not.","target_text":"), which precludes a single universal approach to adaptation.","label":0},{"source_text":"From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering 
between source and target language in order to achieve an efficient search algorithm.","target_text":"We apply a beam search concept as in speech recognition.","label":0},{"source_text":"They found replacing it with a ranked evaluation to be more suitable.","target_text":"This is the first time that we organized a large-scale manual evaluation.","label":0},{"source_text":"The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.","target_text":"We further thank Dr. J.-S.","label":0},{"source_text":"It is well-known that English constituency parsing models do not generalize to other languages and treebanks.","target_text":"Second, we show that although the Penn Arabic Treebank is similar to other treebanks in gross statistical terms, annotation consistency remains problematic.","label":0},{"source_text":"Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.","target_text":"We have argued that the proposed method performs well.","label":0},{"source_text":"This topic has been getting more attention, driven by the needs of various NLP applications.","target_text":"Smith estimates Lotus will make a profit this quarter\u2026\u201d, our system extracts \u201cSmith estimates Lotus\u201d as an instance.","label":0},{"source_text":"They plan on extending instance-weighting to other standard SMT components and capture the degree of generality of phrase pairs.","target_text":"It is difficult to directly compare the Matsoukas et al results with ours, since our out-of-domain corpus is homogeneous; given heterogeneous training data, however, it would be trivial to include Matsoukas-style identity features in our instance-weighting model.","label":0},{"source_text":"Throughout this paper we compare with several packages: SRILM 1.5.12 
(Stolcke, 2002) is a popular toolkit based on tries used in several decoders.","target_text":"Only IRSTLM does not support threading.","label":0},{"source_text":"The texts were annotated with the RSTtool.","target_text":"On the other hand, we are interested in the application of rhetorical analysis or \u00e2\u0080\u0098discourse parsing\u00e2\u0080\u0099 (3.2 and 3.3), in text generation (3.4), and in exploiting the corpus for the development of improved models of discourse structure (3.5).","label":0},{"source_text":"they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement.","target_text":"Chinese word segmentation can be viewed as a stochastic transduction problem.","label":0},{"source_text":"Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language.","target_text":"We trained this model by optimizing the following objective function: Note that this involves marginalizing out all possible state configurations z for a sentence x, resulting in a non-convex objective.","label":0},{"source_text":"In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.","target_text":"For each extension a new position is added to the coverage set.","label":0},{"source_text":"They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.","target_text":"Since MUC6, BBN' s Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance.","label":0},{"source_text":"They incorporated instance-weighting into a mixture-model framework, and found 
that it yielded consistent improvements over a wide range of baselines.","target_text":"The corpora for both settings are summarized in table 1.","label":0},{"source_text":"The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.","target_text":"Note that Zt is a normalization constant that ensures the distribution Dt+i sums to 1; it is a function of the weak hypothesis ht and the weight for that hypothesis at chosen at the tth round.","label":0},{"source_text":"The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.","target_text":"(2009b) evaluated the Bikel parser using the same ATB split, but only reported dev set results with gold POS tags for sentences of length \u00e2\u0089\u00a4 40.","label":0},{"source_text":"From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an e\u00c3\u0086cient search algorithm.","target_text":"For each source word f, the list of its possible translations e is sorted according to p(fje) puni(e), where puni(e) is the unigram probability of the English word e. 
It is sufficient to consider only the best 50 words.","label":0},{"source_text":"Explanations for this phenomenon are relative informativeness of lexicalization, insensitivity to morphology and the effect of variable word order and these factors lead to syntactic disambiguation.","target_text":"But Rehbein and van Genabith (2007) showed that Evalb should not be used as an indication of real difference\u2014or similarity\u2014between treebanks.","label":0},{"source_text":"Replacing this with a ranked evaluation seems to be more suitable.","target_text":"Pairwise comparison: We can use the same method to assess the statistical significance of one system outperforming another.","label":0},{"source_text":"This assumption, however, is not inherent to type-based tagging models.","target_text":"Comparison with state-of-the-art taggers For comparison we consider two unsupervised taggers: the HMM with log-linear features of Berg-Kirkpatrick et al.","label":0},{"source_text":"they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement.","target_text":"gaolgaolxing4xing4 'happily' In the particular form of A-not-A reduplication illustrated in (3a), the first syllable of the verb is copied, and the negative marker bu4 'not' is inserted between the copy and the full verb.","label":0},{"source_text":"Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs.","target_text":"\u201cThe gun\u201d will be extracted by the caseframe \u201cfired \u201d.","label":0},{"source_text":"Across eight European languages, their approach results in an average absolute improvement of 10.4% over a 
state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.","target_text":"We use a squared loss to penalize neighboring vertices that have different label distributions: kqi \u2212 qjk2 = Ey(qi(y) \u2212 qj(y))2, and additionally regularize the label distributions towards the uniform distribution U over all possible labels Y.","label":0},{"source_text":"The departure from the traditional token-based tagging approach allow them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.","target_text":"Recent work has made significant progress on unsupervised POS tagging (Me\u00c2\u00b4rialdo, 1994; Smith and Eisner, 2005; Haghighi and Klein, 2006; Johnson,2007; Goldwater and Griffiths, 2007; Gao and John son, 2008; Ravi and Knight, 2009).","label":0},{"source_text":"The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.","target_text":"Some approaches depend upon some form of con\u00c2\u00ad straint satisfaction based on syntactic or semantic features (e.g., Yeh and Lee [1991], which uses a unification-based approach).","label":0},{"source_text":"This paper discusses the Potsdam Commentary Corpus, a corpus of german assembeled by potsdam university.","target_text":"3.5 Improved models of discourse.","label":0},{"source_text":"This assumption, however, is not inherent to type-based tagging models.","target_text":"Top 5 Bot to m 5 Go ld NN P NN JJ CD NN S RB S PD T # \u00e2\u0080\u009d , 1T W CD W RB NN S VB N NN PR P$ W DT : MD . +P RI OR CD JJ NN S WP $ NN RR B- , $ \u00e2\u0080\u009d . +F EA TS JJ NN S CD NN P UH , PR P$ # . 
\u00e2\u0080\u009c Table 5: Type-level English POS Tag Ranking: We list the top 5 and bottom 5 POS tags in the lexicon and the predictions of our models under the best hyperparameter setting.","label":0},{"source_text":"The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.","target_text":"We report micro-averaged (whole corpus) and macro-averaged (per sentence) scores along add a constraint on the removal of punctuation, which has a single tag (PUNC) in the ATB.","label":0},{"source_text":"NER is useful in many NLP applications such as information extraction, question answering, etc .Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.","target_text":"This process is repeated 5 times by rotating the data appropriately.","label":0},{"source_text":"Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.","target_text":"The P (T |\u00cf\u0088) distribution, in English for instance, should have very low mass for the DT (determiner) tag, since determiners are a very small portion of the vocabulary.","label":0},{"source_text":"They plan on extending instance-weighting to other standard SMT components and capture the degree of generality of phrase pairs.","target_text":"This is consistent with the nature of these two settings: log-linear combination, which effectively takes the intersection of IN and OUT, does relatively better on NIST, where the domains are broader and closer together.","label":0},{"source_text":"The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.","target_text":"For example, from the sentence \u00e2\u0080\u009cMr.","label":0},{"source_text":"They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used simpler training 
procedure.","target_text":"This variant is tested in the experiments below.","label":0},{"source_text":"Human judges also pointed out difficulties with the evaluation of long sentences.","target_text":"The human judges were presented with the following definition of adequacy and fluency, but no additional instructions:","label":0},{"source_text":"Their results show that their high performance NER use less training data than other systems.","target_text":"In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.","label":0},{"source_text":"Instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.","target_text":"Jiang and Zhai (2007) suggest the following derivation, making use of the true OUT distribution po(s, t): where each fi(s, t) is a feature intended to charac- !0\u02c6 = argmax pf(s, t) log p\u03b8(s|t) (8) terize the usefulness of (s, t), weighted by Ai. \u03b8 s,t pf(s, t)po(s, t) log p\u03b8(s|t) The mixing parameters and feature weights (col- != argmax po (s, t) lectively 0) are optimized simultaneously using dev- \u03b8 s,t pf(s, t)co(s, t) log p\u03b8(s|t), set maximum likelihood as before: !\ufffdargmax po (s, t) ! 
\u03b8 s,t \ufffd\u02c6 = argmax \u02dcp(s, t) log p(s|t; 0).","label":0},{"source_text":"The approach assumes that the word reordering is restricted to a few positions in the source sentence.","target_text":"The approach has been successfully tested on the 8 000-word Verbmobil task.","label":0},{"source_text":"There are clustering approaches that assign a single POS tag to each word type.","target_text":"5 67.3 55.","label":0},{"source_text":"The departure from the traditional token-based tagging approach allow them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.","target_text":"Other approaches encode sparsity as a soft constraint.","label":0},{"source_text":"This assumption, however, is not inherent to type-based tagging models.","target_text":"Our empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.","label":0},{"source_text":"The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.","target_text":"Different sentence structure and rich target language morphology are two reasons for this.","label":0},{"source_text":"These clusters are computed using an SVD variant without relying on transitional structure.","target_text":"Any opinions, findings, conclusions, or recommendations expressed in this paper are those of the authors, and do not necessarily reflect the views of the funding organizations.","label":0},{"source_text":"In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.","target_text":"We are focusing on phrases which have two Named Entities (NEs), as those types of phrases are very important for IE applications.","label":0},{"source_text":"This paper talks about KenLM: Faster and Smaller Language Model Queries.","target_text":"We incur some 
additional memory cost due to storing state in each hypothesis, though this is minimal compared with the size of the model itself.","label":0},{"source_text":"In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.","target_text":"Our second point of comparison is with Grac\u00c2\u00b8a et al.","label":0},{"source_text":"There are clustering approaches that assign a single POS tag to each word type.","target_text":"6 Results and Analysis.","label":0},{"source_text":"The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).","target_text":"f;g denotes the empty set, where no source sentence position is covered.","label":0},{"source_text":"The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.","target_text":"For each domain, phrases which contain the same keyword are gathered to build a set of phrases (Step 3).","label":0},{"source_text":"In this paper, the authors are of the opinion that the sequence models-based approaches usually treat token-level tag assignment as the primary latent variable.","target_text":"While our method also enforces a singe tag per word constraint, it leverages the transition distribution encoded in an HMM, thereby benefiting from a richer representation of context.","label":0},{"source_text":"Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.","target_text":"We collected around 300\u2013400 judgements per judgement type (adequacy or fluency), per system, per language pair.","label":0},{"source_text":"Nevertheless, only a part of this corpus (10 texts), which the authors name \"core corpus\", is annotated with all this information.","target_text":"Thus we are interested not in extraction, but actual generation from representations that may be developed to different 
degrees of granularity.","label":0},{"source_text":"Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.","target_text":"(Other classes handled by the current system are discussed in Section 5.)","label":0},{"source_text":"While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems.","target_text":"This revealed interesting clues about the properties of automatic and manual scoring.","label":0},{"source_text":"This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers.","target_text":"Recently, combination techniques have been investigated for part of speech tagging with positive results (van Halteren et al., 1998; Brill and Wu, 1998).","label":0},{"source_text":"The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective tree banks is often neglected because of the relative scarcity of problematic constructions.","target_text":"In the second part of the experiment, we applied the inverse transformation based on breadth-first search under the three different encoding schemes.","label":0},{"source_text":"The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.","target_text":"buy - acquire (5) buy - agree (2) buy - purchase (5) buy - acquisition (7) buy - pay (2)* buy - buyout (3) buy - bid (2) acquire - purchase (2) acquire - acquisition (2) acquire - pay (2)* purchase - acquisition (4) purchase - stake (2)* acquisition - stake (2)* unit - subsidiary (2) unit - parent (5) It is clear that these links form two clusters which are mostly correct.","label":0},{"source_text":"They plan on extending instance-weighting to other standard SMT components and capture the degree of generality of phrase pairs.","target_text":"The techniques we develop can be 
extended in a relatively straightforward manner to the more general case when OUT consists of multiple sub-domains.","label":0},{"source_text":"Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.","target_text":"However, a recent study (Callison-Burch et al., 2006), pointed out that this correlation may not always be strong.","label":0},{"source_text":"Replacing this with a ranked evaluation seems to be more suitable.","target_text":"A few annotators suggested to break up long sentences into clauses and evaluate these separately.","label":0},{"source_text":"It is annotated with several data: morphology, syntax, rhetorical structure, connectors, correference and informative structure.","target_text":"Clearly this poses a number of research challenges, though, such as the applicability of tag sets across different languages.","label":0},{"source_text":"They focused on phrases which two Named Entities, and proceed in two stages.","target_text":"As the two NE categories are the same, we can\u00e2\u0080\u0099t differentiate phrases with different orders of par ticipants \u00e2\u0080\u0093 whether the buying company or the to-be-bought company comes first.","label":0},{"source_text":"On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.","target_text":"Thus, the tree sets generated by HG's are similar to those of CFG's, with each node annotated by the operation (concatenation or wrapping) used to combine the headed strings derived by the daughters of Tree Adjoining Grammars, a tree rewriting formalism, was introduced by Joshi, Levy and Takahashi (1975) and Joshi (1983\/85).","label":0},{"source_text":"The approach has been successfully tested on the 8 000-word Verbmobil 
task.","target_text":"Here, we process only full-form words within the translation procedure.","label":0},{"source_text":"They showed better grammars to improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.","target_text":"In Semitic languages the situation is very different.","label":0},{"source_text":"The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.","target_text":"For example, if X and Y are coreferent, then both X and Y are considered to co-occur with the caseframe that extracts X as well as the caseframe that extracts Y. We will refer to the set of nouns that co-occur with a caseframe as the lexical expectations of the case- frame.","label":0},{"source_text":"This paper presents methods to query N-gram language models, minimizing time and space costs.","target_text":"RandLM is the clear winner in RAM utilization, but is also slower and lower quality.","label":0},{"source_text":"The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.","target_text":"Despite these limitations, a purely finite-state approach to Chinese word segmentation enjoys a number of strong advantages.","label":0},{"source_text":"All the texts were annotated by two people.","target_text":"The Potsdam Commentary Corpus","label":0},{"source_text":"Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.","target_text":"Table 3 contains the results for evaluating our systems on the test set (section 22).","label":0},{"source_text":"In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain 
performance.","target_text":"This has solutions: where pI(s|t) is derived from the IN corpus using relative-frequency estimates, and po(s|t) is an instance-weighted model derived from the OUT corpus.","label":0},{"source_text":"The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).","target_text":"(f1; ;mg n fl1g ; l) 3 (f1; ;mg n fl; l1; l2g ; l0) !","label":0},{"source_text":"This topic has been getting more attention, driven by the needs of various NLP applications.","target_text":"For example, the phrase \u00e2\u0080\u009c's New York-based trust unit,\u00e2\u0080\u009d is not a paraphrase of the other phrases in the \u00e2\u0080\u009cunit\u00e2\u0080\u009d set.","label":0},{"source_text":"The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table.","target_text":"In this paper we study the problem of using a parallel corpus from a background domain (OUT) to improve performance on a target domain (IN) for which a smaller amount of parallel training material\u2014though adequate for reasonable performance\u2014is also available.","label":0},{"source_text":"The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).","target_text":"So the success of the algorithm may well be due to its success in maximizing the number of unlabeled examples on which the two decision lists agree.","label":0},{"source_text":"This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.","target_text":"A greedy algorithm (or maximum-matching algorithm), GR: proceed through the sentence, taking the longest match with a dictionary entry at 
each point.","label":0},{"source_text":"The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.","target_text":"Approaches differ in the algorithms used for scoring and selecting the best path, as well as in the amount of contextual information used in the scoring process.","label":0},{"source_text":"The AdaBoost algorithm was developed for supervised learning.","target_text":"). context=x The context for the entity.","label":0},{"source_text":"The use of global features has shown excellent result in the performance on MUC-6 and MUC-7 test data.","target_text":"In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.","label":0},{"source_text":"The PROBING data structure uses linear probing hash tables and is designed for speed.","target_text":"We attain these results using several optimizations: hashing, custom lookup tables, bit-level packing, and state for left-to-right query patterns.","label":0},{"source_text":"Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.","target_text":"Finally, model GTv = 2 includes parent annotation on top of the various state-splits, as is done also in (Tsarfaty and Sima\u2019an, 2007; Cohen and Smith, 2007).","label":0},{"source_text":"they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement.","target_text":"10 Chinese speakers may object to this form, since the suffix f, menD 
(PL) is usually restricted to.","label":0},{"source_text":"We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.","target_text":"Yarowsky-cautious does not separate the spelling and contextual features, but does have a limit on the number of rules added at each stage.","label":0},{"source_text":"It is annotated with several data: morphology, syntax, rhetorical structure, connectors, correference and informative structure.","target_text":"As already pointed out in Section 2.4, current theories diverge not only on the number and definition of relations but also on aspects of structure, i.e., whether a tree is sufficient as a representational device or general graphs are required (and if so, whether any restrictions can be placed on these graph\u2019s structures \u2014 cf.","label":0},{"source_text":"They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.","target_text":"It can be shown that this objective is convex in q.","label":0},{"source_text":"The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.","target_text":"so that 'door' would be and in this case the hanzi 7C, does not represent a syllable.","label":0},{"source_text":"All the texts were annotated by two people.","target_text":"Then, the remaining texts were annotated and cross-validated, always with discussions among the annotators.","label":0},{"source_text":"the \"Potsdam Commentary Corpus\" or PCC consists of 170 commentaries from M\u00e4rkische Allgemeine Zeitung, a German regional daily.","target_text":"A different notion of information structure, is used in work such as that of (?), who tried to characterize felicitous constituent ordering (theme choice, in particular) that leads to texts presenting information in a 
natural, \u00e2\u0080\u009cflowing\u00e2\u0080\u009d way rather than with abrupt shifts of attention.","label":0},{"source_text":"They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.","target_text":"For graph propagation, the hyperparameter v was set to 2 x 10\u22126 and was not tuned.","label":0},{"source_text":"There is no global pruning.","target_text":"For the translation experiments, Eq. 2 is recursively evaluated.","label":0},{"source_text":"However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.","target_text":"These clusters are computed using an SVD variant without relying on transitional structure.","label":0},{"source_text":"BABAR's performance in both domains of terrorism and natural disaster, and the contextual-role knowledge in pronouns have shown successful results.","target_text":"For example, if X and Y are coreferent, then both X and Y are considered to co-occur with the caseframe that extracts X as well as the caseframe that extracts Y. 
We will refer to the set of nouns that co-occur with a caseframe as the lexical expectations of the caseframe.","label":0},{"source_text":"Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.","target_text":"Hence, the different averages of manual scores for the different language pairs reflect the behaviour of the judges, not the quality of the systems on different language pairs.","label":0},{"source_text":"An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.","target_text":"A detailed description of the search procedure used is given in this patent.","label":0},{"source_text":"Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.","target_text":"For a sequence of hanzi that is a possible name, we wish to assign a probability to that sequence qua name.","label":0},{"source_text":"Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.","target_text":"com \u00a7Cambridge, UK Email: nc201@eng.cam.ac.uk \u00a9 1996 Association for Computational Linguistics (a) 'How do you say octopus in Japanese?' (b) Plausible Segmentation ri4 wen2 zhangl yu2 zen3 me0 shuol 'Japanese' 'octopus' 'how' 'say' (c) Figure 1 Implausible Segmentation ri4 wen2 zhangl yu2 zen3 me0 shuol 'Japan' 'essay' 'fish' 'how' 'say' A Chinese sentence in (a) illustrating the lack of word boundaries.","label":0},{"source_text":"The second algorithm builds on a boosting algorithm called AdaBoost.","target_text":"We make the assumption that for each example, both xi,. and x2,2 alone are sufficient to determine the label yi.","label":0},{"source_text":"The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.","target_text":"The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).","label":0},{"source_text":"The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.","target_text":"In the case of, the most common usage is as an adverb with the pronunciation jiangl, so that variant is assigned the estimated cost of 5.98, and a high cost is assigned to nominal usage with the pronunciation jiang4.","label":0},{"source_text":"The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.","target_text":"For example, the two NEs \u201cEastern Group Plc\u201d and \u201cHanson Plc\u201d have the following contexts.","label":0},{"source_text":"It is well-known that English constituency parsing models do not generalize to other languages and treebanks.","target_text":"We extend the Stanford parser to accept pre-generated lattices, where each word is represented as a finite state automaton.","label":0},{"source_text":"It also incorporates the Good-Turing methodin estimating the likelihoods of
previously unseen constructions, including morphological derivatives and personal names.","target_text":"In the numerator, however, the counts of ni1s are quite irregular, including several zeros (e.g., RAT, none of whose members were seen).","label":0},{"source_text":"Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs.","target_text":"A different knowledge source, called CFSem, compares the semantic expectations of the caseframe that extracts the anaphor with the semantic expectations of the caseframe that extracts the candidate.","label":0},{"source_text":"The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).","target_text":"The sentence length probability p(J|I) is omitted without any loss in performance.","label":0},{"source_text":"they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.","target_text":"In this paper we have argued that Chinese word segmentation can be modeled effectively using weighted finite-state transducers.","label":0},{"source_text":"This paper presents methods to query N-gram language models, minimizing time and space costs.","target_text":"Using cn to denote the number of n-grams, total memory consumption of TRIE, in bits, is plus quantization tables, if used.","label":0},{"source_text":"The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.","target_text":"Our evaluation includes both weighted and unweighted lattices.","label":0},{"source_text":"Their results show that their high performance NER uses less training data than
other systems.","target_text":"The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.","label":0},{"source_text":"The manual evaluation of scoring translation on a graded scale from 1\u20135 seems to be very hard to perform.","target_text":"The text type are editorials instead of speech transcripts.","label":0},{"source_text":"These clusters are computed using an SVD variant without relying on transitional structure.","target_text":"5 68.1 34.","label":0},{"source_text":"NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.","target_text":"(3) In sentence (1), McCann can be a person or an organization.","label":0},{"source_text":"The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.","target_text":"This is orthographically represented as 7C.","label":0},{"source_text":"Their results show that their high performance NER uses less training data than other systems.","target_text":"If any of the tokens from to is in Person-Prefix-List, then another feature Person-Prefix is set to 1.","label":0},{"source_text":"NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.","target_text":"The effect of UNIQ is very small on both data sets.","label":0},{"source_text":"A beam search concept is applied as in speech recognition.","target_text":"This approach leads to a search procedure with complexity O(E\u00b3 J\u2074).","label":0},{"source_text":"Here we present two algorithms.","target_text":"For example, take ..., says Maury Cooper, a vice president at
S.&P.","label":0},{"source_text":"In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.","target_text":"We evaluated the results based on two metrics.","label":0},{"source_text":"The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results.","target_text":"The relativizer f(\u201cthat\u201d) for example, may attach to an arbitrarily long relative clause that goes beyond token boundaries.","label":0},{"source_text":"An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.","target_text":"Only one of the first n positions which are not already aligned in a partial hypothesis may be chosen, where n is set to 4.","label":0},{"source_text":"The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.","target_text":"We elected run Moses single-threaded to minimize the impact of RandLM\u2019s cache on memory use.","label":0},{"source_text":"The AdaBoost algorithm was developed for supervised learning.","target_text":"The algorithm builds two classifiers in parallel from labeled and unlabeled data.","label":0},{"source_text":"The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.","target_text":"The segmentation chosen is the best path through the WFST, shown in (d).","label":0},{"source_text":"Instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.","target_text":"Matsoukas et al (2009) generalize it by 
learning weights on sentence pairs that are used when estimating relative-frequency phrase-pair probabilities.","label":0},{"source_text":"The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.","target_text":"). context=x The context for the entity.","label":0},{"source_text":"Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.","target_text":"Similarly Figures 1 and 2 show how the isolated constituent precision varies by sentence length and the size of the span of the hypothesized constituent.","label":0},{"source_text":"Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.","target_text":"Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (init- Caps, zone) is set to 1.","label":0},{"source_text":"A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.","target_text":"). 
context=x The context for the entity.","label":0},{"source_text":"The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.","target_text":"This algorithm can be applied to statistical machine translation.","label":0},{"source_text":"Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.","target_text":"Floating point values may be stored in the trie exactly, using 31 bits for non-positive log probability and 32 bits for backoff5.","label":0},{"source_text":"The approach has been successfully tested on the 8 000-word Verbmobil task.","target_text":"Only one of the first n positions which are not already aligned in a partial hypothesis may be chosen, where n is set to 4.","label":0},{"source_text":"The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.","target_text":"The code is opensource, has minimal dependencies, and offers both C++ and Java interfaces for integration.","label":1},{"source_text":"The AdaBoost algorithm was developed for supervised learning.","target_text":"(Blum and Mitchell 98) offer a promising formulation of redundancy, also prove some results about how the use of unlabeled examples can help classification, and suggest an objective function when training with unlabeled examples.","label":0},{"source_text":"Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not.","target_text":"Recent work by Finkel and Manning (2009) which re-casts Daum\u00e9\u2019s approach in a hierarchical MAP framework may be applicable to this problem.","label":0},{"source_text":"The bias of automatic methods in favour of statistical systems seems to be less pronounced on
out-of-domain test data.","target_text":"This is not completely surprising, since all systems use very similar technology.","label":0},{"source_text":"The second algorithm builds on a boosting algorithm called AdaBoost.","target_text":"The weak hypothesis chosen was then restricted to be a predictor in favor of this label.","label":0},{"source_text":"Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language.","target_text":"While there might be some controversy about the exact definition of such a tagset, these 12 categories cover the most frequent part-of-speech and exist in one form or another in all of the languages that we studied.","label":0},{"source_text":"They have made use of local and global features to deal with the instances of same token in a document.","target_text":"The system described in this paper is similar to the MENE system of (Borthwick, 1999).","label":0},{"source_text":"It is well-known that English constituency parsing models do not generalize to other languages and treebanks.","target_text":"Also surprising is the low test set OOV rate given the possibility of morphological variation in Arabic.","label":0},{"source_text":"The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.","target_text":"So, who won the competition?","label":0},{"source_text":"This topic has been getting more attention, driven by the needs of various NLP applications.","target_text":"Extract NE pair instances with contexts From the four years of newspaper corpus, we extracted 1.9 million pairs of NE instances.","label":0},{"source_text":"Replacing this with a ranked evaluation seems to be more suitable.","target_text":"One annotator suggested that this was the case for as much as 10% of our test sentences.","label":0},{"source_text":"The authors show that PATB is similar to other 
tree-banks but that annotation consistency remains low.","target_text":"However, when we pre-tag the input\u2014as is recommended for English\u2014we notice a 0.57% F1 improvement.","label":0},{"source_text":"they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement.","target_text":"12 One class of full personal names that this characterization does not cover are married women's names.","label":0},{"source_text":"they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.","target_text":"nan2gual 'pumpkin.'","label":0},{"source_text":"The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.","target_text":"Another approach to finding paraphrases is to find phrases which take similar subjects and objects in large corpora by using mutual information of word distribution [Lin and Pantel 01].","label":0},{"source_text":"They focused on phrases which connect two Named Entities, and proceed in two stages.","target_text":"So, we set a threshold that at least two examples are required to build a link.","label":0},{"source_text":"While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems.","target_text":"It was also proposed to allow annotators to skip sentences that they are unable to judge.","label":0},{"source_text":"These clusters are computed using an SVD variant without relying on transitional structure.","target_text":"The state-to-tag mapping is obtained from the best hyperparameter setting for 11 mapping shown in Table 3.","label":0},{"source_text":"This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.","target_text":"Our
baseline for all sentence lengths is 5.23% F1 higher than the best previous result.","label":0},{"source_text":"This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.","target_text":"Word type N %: Dictionary entries 2,543 97.47; Morphologically derived words 3 0.11; Foreign transliterations 9 0.34; Personal names 54 2.07 cases.","label":0},{"source_text":"Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.","target_text":"The Grammar Our parser looks for the most likely tree spanning a single path through the lattice of which the yield is a sequence of lexemes.","label":0},{"source_text":"The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.","target_text":"({1, \u2026, m}, l) \u2208 ({1, \u2026, m} \u2216 {l, l1}, l\u2032) \u2192","label":0},{"source_text":"On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.","target_text":"If \u03b3 is an elementary tree, the derivation tree consists of a single node labeled \u03b3.","label":0},{"source_text":"They showed better grammars to improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.","target_text":"The current work treats both segmental and super-segmental phenomena, yet we note that there may be more adequate ways to treat supersegmental phenomena assuming Word-Based
morphology as we explore in (Tsarfaty and Goldberg, 2008).","label":0},{"source_text":"Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.","target_text":"This design leads to a significant reduction in the computational complexity of training and inference.","label":0},{"source_text":"Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.","target_text":"This property reflects an important aspect of the underlying linguistic theory associated with the formalism.","label":0},{"source_text":"In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.","target_text":"Since the segmentation corresponds to the sequence of words that has the lowest summed unigram cost, the segmenter under discussion here is a zeroth-order model.","label":0},{"source_text":"Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.","target_text":"Tsarfaty and Sima\u2019an (2007) have reported state-of-the-art results on Hebrew unlexicalized parsing (74.41%) albeit assuming oracle morphological segmentation.","label":0},{"source_text":"The use of global features has shown excellent result in the performance on MUC-6 and MUC-7 test data.","target_text":"The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.","label":0},{"source_text":"The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard
to perform.","target_text":"The other half was replaced by other participants, so we ended up with roughly the same number.","label":0},{"source_text":"Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs.","target_text":"We generate these caseframes automatically by running AutoSlog over the training corpus exhaustively so that it literally generates a pattern to extract every noun phrase in the corpus.","label":0},{"source_text":"The approach has been successfully tested on the 8 000-word Verbmobil task.","target_text":"6.","label":0},{"source_text":"Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.","target_text":"Our second point of comparison is with Gra\u00e7a et al.","label":0},{"source_text":"This paper talks about KenLM: Faster and Smaller Language Model Queries.","target_text":"Then we ran binary search to determine the least amount of memory with which it would run.","label":0},{"source_text":"The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results.","target_text":"Using a wide-coverage morphological analyzer based on (Itai et al., 2006) should cater for a better coverage, and incorporating lexical probabilities learned from a big (unannotated) corpus (cf.","label":0},{"source_text":"they evaluate various specific aspects of the segmentation, as well as the overall segmentation
performance.","target_text":"gaolbu4-gaolxing4 (hap-not-happy) 'happy?'","label":0},{"source_text":"Most IE researchers have been creating paraphrase knowledge by hand and for specific tasks.","target_text":"Rather we believe several methods have to be developed using different heuristics to discover wider variety of paraphrases.","label":0},{"source_text":"In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures.","target_text":"A graph D = (W, A) is well-formed iff it is acyclic and connected.","label":0},{"source_text":"This paper talks about Pseudo-Projective Dependency Parsing.","target_text":"The dependency graph in Figure 1 satisfies all the defining conditions above, but it fails to satisfy the condition of projectivity (Kahane et al., 1998): The arc connecting the head jedna (one) to the dependent Z (out-of) spans the token je (is), which is not dominated by jedna.","label":0},{"source_text":"The approach has been successfully tested on the 8 000-word Verbmobil task.","target_text":"Kollege.","label":0},{"source_text":"In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.","target_text":"Section 2.1 describes how BABAR generates training examples to use in the learning process.","label":0},{"source_text":"Nevertheless, only a part of this corpus (10 texts), which the authors name \"core corpus\", is annotated with all this information.","target_text":"There is a \u2018core corpus\u2019 of ten commentaries, for which the range of information (except for syntax) has been completed; the remaining data has been annotated to different degrees, as explained below.","label":1},{"source_text":"BABAR's performance in both domains of terrorism and natural disaster, and the contextual-role knowledge in pronouns have shown successful results.","target_text":"3 These
are not full case frames in the traditional sense, but they approximate a simple case frame with a single slot.","label":0},{"source_text":"The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.","target_text":"In fact, we found that enabling IRSTLM\u2019s cache made it slightly slower, so results in Table 1 use IRSTLM without caching.","label":0},{"source_text":"Nevertheless, only a part of this corpus (10 texts), which the authors name \"core corpus\", is annotated with all this information.","target_text":"The significant drop in number of pupils will begin in the fall of 2003.","label":0},{"source_text":"Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.","target_text":"1 53.8 47.","label":0},{"source_text":"Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.","target_text":"By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).","label":0},{"source_text":"Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.","target_text":"The 1-bit sign is almost always negative and the 8-bit exponent is not fully used on the range of values, so in practice this corresponds to quantization ranging from 17 to 20 total bits.","label":0},{"source_text":"Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language.","target_text":"Note that because we extracted only high-confidence alignments, many foreign vertices will not be connected to any English vertices.","label":0},{"source_text":"Evaluation was done automatically using 
the BLEU score and manually on fluency and adequacy.","target_text":"The text type are editorials instead of speech transcripts.","label":0},{"source_text":"Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.","target_text":"In the current work morphological analyses and lexical probabilities are derived from a small Treebank, which is by no means the best way to go.","label":0},{"source_text":"Nevertheless, only a part of this corpus (10 texts), which the authors name \"core corpus\", is annotated with all this information.","target_text":"There are still some open issues to be resolved with the format, but it represents a first step.","label":0},{"source_text":"The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform.","target_text":"Adequacy (rank) Fluency (rank) BLEU (rank) upc-jmc (1-7) (1-8) (1-6) lcc (1-6) (1-7) (1-4) utd (1-7) (1-6) (2-7) upc-mr (1-8) (1-6) (1-7) nrc (1-7) (2-6) (8) ntt (1-8) (2-8) (1-7) cmu (3-7) (4-8) (2-7) rali (5-8) (3-9) (3-7) systran (9) (8-9) (10) upv (10) (10) (9) Spanish-English (In Domain) Adequacy (rank) Fluency (rank) BLEU (rank) upc-jmc (1-7) (1-6) (1-5) ntt (1-7) (1-8) (1-5) lcc (1-8) (2-8) (1-4) utd (1-8) (2-7) (1-5) nrc (2-8) (1-9) (6) upc-mr (1-8) (1-6) (7) uedin-birch (1-8) (2-10) (8) rali (3-9) (3-9) (2-5) upc-jg (7-9) (6-9) (9) upv (10) (9-10) (10) German-English (In Domain) Adequacy (rank) Fluency (rank) BLEU (rank) uedin-phi (1-2) (1) (1) lcc (2-7) (2-7) (2) nrc (2-7) (2-6) (5-7) utd (3-7) (2-8) (3-4) ntt (2-9) (2-8) (3-4) upc-mr (3-9) (6-9) (8) rali (4-9) (3-9) (5-7) upc-jmc (2-9) (3-9) (5-7) systran (3-9) (3-9) (10) upv (10) (10) (9) Figure 7: Evaluation of translation to English on in-domain test data 112 
English-French (In Domain) Adequacy (rank) Fluency (rank) BLEU (rank) nrc (1-5) (1-5) (1-6) upc-mr (1-4) (1-5) (1-6) upc-jmc (1-6) (1-6) (1-5) systran (2-7) (1-6) (7) utd (3-7) (3-7) (3-6) rali (1-7) (2-7) (1-6) ntt (4-7) (4-7) (1-5) English-Spanish (In Domain) Adequacy (rank) Fluency (rank) BLEU (rank) ms (1-5) (1-7) (7-8) upc-mr (1-4) (1-5) (1-4) utd (1-5) (1-6) (1-4) nrc (2-7) (1-6) (5-6) ntt (3-7) (1-6) (1-4) upc-jmc (2-7) (2-7) (1-4) rali (5-8) (6-8) (5-6) uedin-birch (6-9) (6-10) (7-8) upc-jg (9) (8-10) (9) upv (9-10) (8-10) (10) English-German (In Domain) Adequacy (rank) Fluency (rank) BLEU (rank) upc-mr (1-3) (1-5) (3-5) ntt (1-5) (2-6) (1-3) upc-jmc (1-5) (1-4) (1-3) nrc (2-4) (1-5) (4-5) rali (3-6) (2-6) (1-4) systran (5-6) (3-6) (7) upv (7) (7) (6) Figure 8: Evaluation of translation from English on in-domain test data 113 French-English (Out of Domain) Adequacy (rank) Fluency (rank) BLEU (rank) upc-jmc (1-5) (1-8) (1-4) cmu (1-8) (1-9) (4-7) systran (1-8) (1-7) (9) lcc (1-9) (1-9) (1-5) upc-mr (2-8) (1-7) (1-3) utd (1-9) (1-8) (3-7) ntt (3-9) (1-9) (3-7) nrc (3-8) (3-9) (3-7) rali (4-9) (5-9) (8) upv (10) (10) (10) Spanish-English (Out of Domain) Adequacy (rank) Fluency (rank) BLEU (rank) upc-jmc (1-2) (1-6) (1-3) uedin-birch (1-7) (1-6) (5-8) nrc (2-8) (1-8) (5-7) ntt (2-7) (2-6) (3-4) upc-mr (2-8) (1-7) (5-8) lcc (4-9) (3-7) (1-4) utd (2-9) (2-8) (1-3) upc-jg (4-9) (7-9) (9) rali (4-9) (6-9) (6-8) upv (10) (10) (10) German-English (Out of Domain) Adequacy (rank) Fluency (rank) BLEU (rank) systran (1-4) (1-4) (7-9) uedin-phi (1-6) (1-7) (1) lcc (1-6) (1-7) (2-3) utd (2-7) (2-6) (4-6) ntt (1-9) (1-7) (3-5) nrc (3-8) (2-8) (7-8) upc-mr (4-8) (6-8) (4-6) upc-jmc (4-8) (3-9) (2-5) rali (8-9) (8-9) (8-9) upv (10) (10) (10) Figure 9: Evaluation of translation to English on out-of-domain test data 114 English-French (Out of Domain) Adequacy (rank) Fluency (rank) BLEU (rank) systran (1) (1) (1) upc-jmc (2-5) (2-4) (2-6) upc-mr (2-4) (2-4) (2-6) utd (2-6) (2-6) 
(7) rali (4-7) (5-7) (2-6) nrc (4-7) (4-7) (2-5) ntt (4-7) (4-7) (3-6) English-Spanish (Out of Domain) Adequacy (rank) Fluency (rank) BLEU (rank) upc-mr (1-3) (1-6) (1-2) ms (1-7) (1-8) (6-7) utd (2-6) (1-7) (3-5) nrc (1-6) (2-7) (3-5) upc-jmc (2-7) (1-6) (3-5) ntt (2-7) (1-7) (1-2) rali (6-8) (4-8) (6-8) uedin-birch (6-10) (5-9) (7-8) upc-jg (8-9) (9-10) (9) upv (9) (8-9) (10) English-German (Out of Domain) Adequacy (rank) Fluency (rank) BLEU (rank) systran (1) (1-2) (1-6) upc-mr (2-3) (1-3) (1-5) upc-jmc (2-3) (3-6) (1-6) rali (4-6) (4-6) (1-6) nrc (4-6) (2-6) (2-6) ntt (4-6) (3-5) (1-6) upv (7) (7) (7) Figure 10: Evaluation of translation from English on out-of-domain test data 115 French-English In domain Out of Domain Adequacy Adequacy 0.3 0.3 \u2022 0.2 0.2 0.1 0.1 -0.0 -0.0 -0.1 -0.1 -0.2 -0.2 -0.3 -0.3 -0.4 -0.4 -0.5 -0.5 -0.6 -0.6 -0.7 -0.7 \u2022upv -0.8 -0.8 21 22 23 24 25 26 27 28 29 30 31 15 16 17 18 19 20 21 22 \u2022upv \u2022systran upcntt \u2022 rali upc-jmc \u2022 cc Fluency 0.2 0.1 -0.0 -0.1 -0.2 -0.3 -0.4 \u2022upv -0.5 \u2022systran \u2022upv upc -jmc \u2022 Fluency 0.2 0.1 -0.0 -0.1 -0.2 -0.3 -0.4 -0.5 -0.6 \u2022 \u2022 \u2022 td t cc upc- \u2022 rali 21 22 23 24 25 26 27 28 29 30 31 15 16 17 18 19 20 21 22 Figure 11: Correlation between manual and automatic scores for French-English 116 Spanish-English Figure 12: Correlation between manual and automatic scores for Spanish-English -0.3 -0.4 0.2 0.1 -0.0 -0.1 -0.2 -0.3 -0.4 -0.5 \u2022upv -0.4 \u2022upv -0.3 In Domain \u2022upc-jg Adequacy 0.3 0.2 0.1 -0.0 -0.1 -0.2 Out of Domain \u2022upc-jmc \u2022nrc \u2022ntt Adequacy upc-jmc \u2022 \u2022 \u2022lcc \u2022 rali \u2022 \u2022rali -0.7 -0.5 -0.6 \u2022upv 23 24 25 26 27 28 29 30 31 32 19 20 21 22 23 24 25 26 27 28 \u2022 \u2022rali Fluency 0.2 0.1 -0.0 -0.1 -0.2 ntt \u2022 upc-mr \u2022lcc \u2022utd \u2022upc-jg \u2022rali Fluency 0.2 0.1 -0.0 -0.1 -0.2 -0.3 -0.4 \u2022upc-jmc \u2022 uedin-birch -0.5 -0.5 \u2022upv 23 24 25 26 27 28 29 30 31 
32 19 20 21 22 23 24 25 26 27 28 117 In Domain Out of Domain Adequacy Adequacy German-English 15 16 17 18 19 20 21 22 23 24 25 26 27 0.4 0.3 0.2 0.1 -0.0 -0.1 -0.2 -0.3 -0.4 -0.5 -0.6 lcc \u2022 upc-jmc \u2022systran \u2022upv Fluency \u2022ula \u2022upc-mr \u2022lcc 15 16 17 18 19 20 21 22 23 24 25 26 27 0.4 0.3 0.2 0.1 -0.0 -0.1 -0.2 -0.3 -0.4 \u2022systran \u2022upv \u2022uedin-phi -jmc \u2022rali \u2022systran -0.3 -0.4 -0.5 -0.6 \u2022upv 12 13 14 15 16 17 18 19 20 0.4 0.3 0.2 0.1 -0.0 -0.1 -0.2 Fluency uedin-phi \u2022 \u2022 \u2022utd \u2022upc-jmc \u2022upc-mr 0.4 \u2022rali -0.3 -0.4 -0.5 \u2022upv 12 13 14 15 16 17 18 19 20 0.3 0.2 0.1 -0.0 -0.1 -0.2 English-French In Domain Out of Domain Adequacy Adequacy .","label":0},{"source_text":"It also incorporates the Good-Turing methodin estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.","target_text":"A non-optimal analysis is shown with dotted lines in the bottom frame.","label":0},{"source_text":"A beam search concept is applied as in speech recognition.","target_text":"Search CPU time mWER SSER Method [sec] [%] [%] MonS 0:9 42:0 30:5 QmS 10:6 34:4 23:8 IbmS 28:6 38:2 26:2 4.2 Performance Measures.","label":0},{"source_text":"The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.","target_text":"pre-processing.","label":0},{"source_text":"This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers.","target_text":"For our experiments we also report the mean of precision and recall, which we denote by (P + R)I2 and F-measure.","label":0},{"source_text":"The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.","target_text":"J:j:l :zhongl :0.0 ;m,Jlong4 :0.0 
(mHHaryg9tltHBI) \u00c2\u00a3: _ADV: 5.98 \u00c2\u00a5 :hua2:o.o E :_NC: 4.41 :mln2:o.o mm : guo2 : 0.0 (RopubllcofChlna) .....,.","label":0},{"source_text":"They plan on extending instance-weighting to other standard SMT components and capture the degree of generality of phrase pairs.","target_text":"We incorporate instance weighting into a mixture-model framework, and find that it yields consistent improvements over a wide range of baselines.","label":0},{"source_text":"The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.","target_text":"40 75.","label":0},{"source_text":"Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.","target_text":"9 50.2 +P RI OR be st me dia n 47.","label":0},{"source_text":"This paper presents a maximum entropy-based named entity recognizer (NER).","target_text":"On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on similar amount of training data.","label":0},{"source_text":"Combining multiple highly-accurate independent parsers yields promising results.","target_text":"We call this approach parser switching.","label":0},{"source_text":"However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.","target_text":"The problem of coreference resolution has received considerable attention, including theoretical discourse models (e.g., (Grosz et al., 1995; Grosz and Sidner, 1998)), syntactic algorithms (e.g., (Hobbs, 1978; Lappin and Le- ass, 1994)), and supervised machine learning systems (Aone and Bennett, 1995; McCarthy and Lehnert, 1995; Ng and Cardie, 2002; Soon et al., 2001).","label":0},{"source_text":"The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to 
perform.","target_text":"For the manual scoring, we can distinguish only half of the systems, both in terms of fluency and adequacy.","label":0},{"source_text":"Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.","target_text":"Language models are widely applied in natural language processing, and applications such as machine translation make very frequent queries.","label":0},{"source_text":"This paper presents methods to query N-gram language models, minimizing time and space costs.","target_text":"As noted in Section 4.4, disk cache state is controlled by reading the entire binary file before each test begins.","label":0},{"source_text":"Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.","target_text":"Thus in a two-hanzi word like lflli?J zhong1guo2 (middle country) 'China' there are two syllables, and at the same time two morphemes.","label":0},{"source_text":"They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.","target_text":"The dev corpus was taken from the NIST05 evaluation set, augmented with some randomly-selected material reserved from the training set.","label":0},{"source_text":"Replacing this with a ranked evaluation seems to be more suitable.","target_text":"The test set included 2000 sentences from the Europarl corpus, but also 1064 sentences out-ofdomain test data.","label":0},{"source_text":"The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.","target_text":"constitute names, since we have only their segmentation, not the actual classification of the segmented words.","label":0},{"source_text":"Throughout this paper we compare with several packages: SRILM 1.5.12 
(Stolcke, 2002) is a popular toolkit based on tries used in several decoders.","target_text":"IRSTLM\u2019s quantized variant is the inspiration for our quantized variant.","label":0},{"source_text":"These clusters are computed using an SVD variant without relying on transitional structure.","target_text":"The P (T |\u00cf\u0088) distribution, in English for instance, should have very low mass for the DT (determiner) tag, since determiners are a very small portion of the vocabulary.","label":0},{"source_text":"The resulting model is compact, efficiently learnable and linguistically expressive.","target_text":"Even without features, but still using the tag prior, our median result is 52.0%, still significantly outperforming Grac\u00c2\u00b8a et al.","label":0},{"source_text":"This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.","target_text":"This FSA I can be segmented into words by composing Id(I) with D*, to form the WFST shown in Figure 2(c), then selecting the best path through this WFST to produce the WFST in Figure 2(d).","label":0},{"source_text":"This paper conducted research in the area of automatic paraphrase discovery.","target_text":"Finally, we find links between sets of phrases, based on the NE instance pair data (for example, different phrases which link \u00e2\u0080\u009cIBM\u00e2\u0080\u009d and \u00e2\u0080\u009cLotus\u00e2\u0080\u009d) (Step 4).","label":0},{"source_text":"From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an e\u00c3\u0086cient search algorithm.","target_text":"The translation direction is from German to English.","label":0},{"source_text":"The experimental tests are carried out on the Verbmobil task which is a limited-domain 
spoken-language task.","target_text":"e0; e are the last two target words, C is a coverage set for the already covered source positions and j is the last position visited.","label":0},{"source_text":"The approach assumes that the word reordering is restricted to a few positions in the source sentence.","target_text":"This number must be less than or equal to n \u00f4\u0080\u0080\u0080 1.","label":0},{"source_text":"This paper discusses the Potsdam Commentary Corpus, a corpus of german assembeled by potsdam university.","target_text":"Two annotators received training with the RST definitions and started the process with a first set of 10 texts, the results of which were intensively discussed and revised.","label":0},{"source_text":"Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.","target_text":"Simple Type-Level Unsupervised POS Tagging","label":0},{"source_text":"While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems.","target_text":"In Figure 4, we displayed the number of system comparisons, for which we concluded statistical significance.","label":0},{"source_text":"There is no global pruning.","target_text":"The quasi-monotone search performs best in terms of both error rates mWER and SSER.","label":0},{"source_text":"There is no global pruning.","target_text":"vierten 12.","label":0},{"source_text":"This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.","target_text":"Among these are words derived by various productive processes, including: 1.","label":0},{"source_text":"Here we present two algorithms.","target_text":"Having found (spelling, 
context) pairs in the parsed data, a number of features are extracted.","label":0},{"source_text":"They showed better grammars to improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.","target_text":"3An English sentence with ambiguous PoS assignment can be trivially represented as a lattice similar to our own, where every pair of consecutive nodes correspond to a word, and every possible PoS assignment for this word is a connecting arc.","label":0},{"source_text":"This paper offers a broad insight into of Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.","target_text":"When the maSdar lacks a determiner, the constituent as a whole resem bles the ubiquitous annexation construct \u00ef\u00bf\u00bd ?f iDafa.","label":0},{"source_text":"Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.","target_text":"each word in the lexicon whether or not each string is actually an instance of the word in question.","label":0},{"source_text":"The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.","target_text":"The improvement is due to the cost of bit-level reads and avoiding reads that may fall in different virtual memory pages.","label":0},{"source_text":"The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.","target_text":"Formally, for a lexicon L and segments I \u00e2\u0088\u0088 L, O \u00e2\u0088\u0088\/ L, each word automaton accepts the language I\u00e2\u0088\u0097(O + I)I\u00e2\u0088\u0097.","label":0},{"source_text":"Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.","target_text":"But we also need an estimate of the 
probability for a non-occurring though possible plural form like i\u00c2\u00a5JJ1l.f, nan2gua1-men0 'pumpkins.'","label":0},{"source_text":"They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.","target_text":"Each feature concept is akin to a random variable and its occurrence in the text corresponds to a particular instantiation of that random variable.","label":0},{"source_text":"Vijay-Shankar et all considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.","target_text":"The subtree under,; is excised from 7, the tree 7' is inserted in its place and the excised subtree is inserted below the foot of y'.","label":0},{"source_text":"This assumption, however, is not inherent to type-based tagging models.","target_text":"On several languages, we report performance exceeding that of more complex state-of-the art systems.1","label":0},{"source_text":"Human judges also pointed out difficulties with the evaluation of long sentences.","target_text":"It was our hope that this competition, which included the manual and automatic evaluation of statistical systems and one rulebased commercial system, will give further insight into the relation between automatic and manual evaluation.","label":0},{"source_text":"Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language.","target_text":"For unaligned words, we set the tag to the most frequent tag in the corresponding treebank.","label":0},{"source_text":"This paper talks about KenLM: Faster and Smaller Language Model Queries.","target_text":"However, this optimistic search would not visit the entries necessary to store backoff information in the outgoing 
state.","label":0},{"source_text":"They have made use of local and global features to deal with the instances of same token in a document.","target_text":"For example, if is found in the list of person first names, the feature PersonFirstName is set to 1.","label":0},{"source_text":"In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures.","target_text":"However, the accuracy is considerably higher than previously reported results for robust non-projective parsing of Czech, with a best performance of 73% UAS (Holan, 2004).","label":0},{"source_text":"This paper presents a maximum entropy-based named entity recognizer (NER).","target_text":"We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.","label":0},{"source_text":"Here both parametric and non-parametric models are explored.","target_text":"None of the parsers produce parses with crossing brackets, so none of them votes for both of the assumed constituents.","label":0},{"source_text":"The approach has been successfully tested on the 8 000-word Verbmobil task.","target_text":"Final (F): The rest of the sentence is processed monotonically taking account of the already covered positions.","label":0},{"source_text":"This topic has been getting more attention, driven by the needs of various NLP applications.","target_text":"However, it is desirable if we can separate them.","label":0},{"source_text":"Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs.","target_text":"If \u00e2\u0080\u009cgun\u00e2\u0080\u009d and \u00e2\u0080\u009crevolver\u00e2\u0080\u009d refer to the same object, then it should also be acceptable to say that Fred was \u00e2\u0080\u009ckilled with 
a gun\u00e2\u0080\u009d and that the burglar \u00e2\u0080\u009cfireda revolver\u00e2\u0080\u009d.","label":0},{"source_text":"The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.","target_text":", for A. T.&T. nonalpha.. .","label":0},{"source_text":"They have made use of local and global features to deal with the instances of same token in a document.","target_text":"The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.","label":0},{"source_text":"The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.","target_text":"For example, one of the ATB samples was the determiner -\"\" ; dhalik\u00e2\u0080\u009cthat.\u00e2\u0080\u009d The sample occurred in 1507 corpus po sitions, and we found that the annotations were consistent.","label":0},{"source_text":"In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back.","target_text":"At first glance, we quickly recognize that many systems are scored very similar, both in terms of manual judgement and BLEU.","label":0},{"source_text":"This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers.","target_text":"One hybridization strategy is to let the parsers vote on constituents' membership in the hypothesized set.","label":0},{"source_text":"Here both parametric and non-parametric models are explored.","target_text":"From this we see that a finer-grained model for parser combination, at least for the features we have examined, will not give us any additional 
power.","label":0},{"source_text":"This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.","target_text":"An analysis of nouns that occur in both the singular and the plural in our database reveals that there is indeed a slight but significant positive correlation-R2 = 0.20, p < 0.005; see Figure 6.","label":0},{"source_text":"There are clustering approaches that assign a single POS tag to each word type.","target_text":"This assumption, however, is not inherent to type-based tagging models.","label":0},{"source_text":"It also incorporates the Good-Turing methodin estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.","target_text":"(See also Wu and Fung [1994].)","label":0},{"source_text":"Here both parametric and non-parametric models are explored.","target_text":"The theory has also been validated empirically.","label":0},{"source_text":"Two general approaches are presented and two combination techniques are described for each approach.","target_text":"All four of the techniques studied result in parsing systems that perform better than any previously reported.","label":0},{"source_text":"The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.","target_text":"The problem can be represented as a graph with 2N vertices corresponding to the members of X1 and X2.","label":0},{"source_text":"In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.","target_text":"9 65.5 46.","label":0},{"source_text":"Combining multiple highly-accurate independent parsers yields promising results.","target_text":"We model each parse as the decisions made to create it, and model those 
decisions as independent events.","label":0},{"source_text":"The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.","target_text":"For irt the Good-Turing estimate just discussed gives us an estimate of p(unseen(f,) I f,)-the probability of observing a previously unseen instance of a construction in ft given that we know that we have a construction in f,.","label":0},{"source_text":"The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.","target_text":"(b) F.i'JJI!","label":0},{"source_text":"The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech.","target_text":"Using the terminology of Kahane et al. (1998), we say that jedna is the syntactic head of Z, while je is its linear head in the projectivized representation.","label":0},{"source_text":"They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used simpler training procedure.","target_text":"The paper is structured as follows.","label":0},{"source_text":"The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.","target_text":"We therefore also normalized judgements on a per-sentence basis.","label":0},{"source_text":"Human judges also pointed out difficulties with the evaluation of long sentences.","target_text":"Also, the argument has been made that machine translation performance should be evaluated via task-based evaluation metrics, i.e. 
how much it assists performing a useful task, such as supporting human translators or aiding the analysis of texts.","label":0},{"source_text":"Replacing this with a ranked evaluation seems to be more suitable.","target_text":"In words, the judgements are normalized, so that the average normalized judgement per judge is 3.","label":0},{"source_text":"It is well-known that English constituency parsing models do not generalize to other languages and treebanks.","target_text":"As we have said, parse quality decreases with sentence length.","label":0},{"source_text":"This paper presents methods to query N-gram language models, minimizing time and space costs.","target_text":"As the name implies, space is O(m) and linear in the number of entries.","label":0},{"source_text":"the \"Potsdam Commentary Corpus\" or PCC consists of 170 commentaries from Ma\u00a8rkische Allgemeine Zeitung, a German regional daily.","target_text":"A corpus of German newspaper commentaries has been assembled at Potsdam University, and annotated with different linguistic information, to different degrees.","label":0},{"source_text":"This paper talks about KenLM: Faster and Smaller Language Model Queries.","target_text":"This would result in better rest cost estimation and better pruning.10 In general, tighter, but well factored, integration between the decoder and language model should produce a significant speed improvement.","label":0},{"source_text":"In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.","target_text":"Some of these approaches (e.g., Lin, Chiang, and Su [1993]) attempt to identify unknown words, but do not ac\u00c2\u00ad tually tag the words as belonging to one or another class of expression.","label":0},{"source_text":"This paper presents methods to query N-gram language models, minimizing time and space costs.","target_text":"The compressed variant uses block compression and is rather slow as a 
result.","label":0},{"source_text":"They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.","target_text":"Mixing, smoothing, and instance-feature weights are learned at the same time using an efficient maximum-likelihood procedure that relies on only a small in-domain development corpus.","label":0},{"source_text":"Finally, several coreference systems have successfully incorporated anaphoricity determination modules.","target_text":"3.1 General Knowledge Sources.","label":0},{"source_text":"The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.","target_text":"The returned state s(wn1) may then be used in a followon query p(wn+1js(wn1)) that extends the previous query by one word.","label":0},{"source_text":"The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.","target_text":"Almost all annotators reported difficulties in maintaining a consistent standard for fluency and adequacy judgements, but nevertheless most did not explicitly move towards a ranking-based evaluation.","label":0},{"source_text":"they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.","target_text":"(In this figure eps is c) be implemented, though, such as a maximal-grouping strategy (as suggested by one reviewer of this paper); or a pairwise-grouping strategy, whereby long sequences of unattached hanzi are grouped into two-hanzi words (which may have some prosodic motivation).","label":0},{"source_text":"The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.","target_text":"(2006).","label":0},{"source_text":"It is well-known that English constituency parsing models do not generalize to other languages and treebanks.","target_text":"A better approach would be to distin guish between 
these cases, possibly by drawing on the vast linguistic work on Arabic connectives (AlBatal, 1990).","label":0},{"source_text":"In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back.","target_text":"This data set of manual judgements should provide a fruitful resource for research on better automatic scoring methods.","label":0},{"source_text":"In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.","target_text":"Thus we have some confidence that our own performance is at least as good as that of Chang et al.","label":0},{"source_text":"This paper presents methods to query N-gram language models, minimizing time and space costs.","target_text":"We indicate whether a context with zero log backoff will extend using the sign bit: +0.0 for contexts that extend and \u22120.0 for contexts that do not extend.","label":0},{"source_text":"This paper offers a broad insight into of Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.","target_text":"None of the models attach the attributive adjectives correctly.","label":0},{"source_text":"The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.","target_text":"Compared with the widely- SRILM, our is 2.4 times as fast while using 57% of the mem- The structure is a trie with bit-level packing, sorted records, interpolation search, and optional quantization aimed lower memory consumption. 
simultaneously uses less memory than the smallest lossless baseline and less CPU than the baseline.","label":0},{"source_text":"The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.","target_text":"HR0011-06-C-0022.","label":0},{"source_text":"This paper discusses the Potsdam Commentary Corpus, a corpus of german assembeled by potsdam university.","target_text":"The PCC is not the result of a funded project.","label":0},{"source_text":"It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.","target_text":"While the first three models get three to four tags wrong, our best model gets only one word wrong and is the most accurate among the four models for this example.","label":0},{"source_text":"The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table.","target_text":"Sentence pairs are the natural instances for SMT, but sentences often contain a mix of domain-specific and general language.","label":0},{"source_text":"These clusters are computed using an SVD variant without relying on transitional structure.","target_text":"Specifically, for the ith word type, the set of token-level tags associated with token occurrences of this word, denoted t(i), must all take the value Ti to have nonzero mass. 
Thus in the context of Gibbs sampling, if we want to block sample Ti with t(i), we only need sample values for Ti and consider this setting of t(i).","label":0},{"source_text":"Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs.","target_text":"Section 2.1 describes how BABAR generates training examples to use in the learning process.","label":0},{"source_text":"They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.","target_text":"The attachment in such cases encompasses a long distance dependency that cannot be captured by Markovian processes that are typically used for morphological disambiguation.","label":0},{"source_text":"The resulting model is compact, efficiently learnable and linguistically expressive.","target_text":"(2009).","label":0},{"source_text":"In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back.","target_text":"The easiest language pair according to BLEU (English-French: 28.33) received worse manual scores than the hardest (English-German: 14.01).","label":0},{"source_text":"They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.","target_text":"These estimates are in turn combined linearly with relative-frequency estimates from an in-domain phrase table.","label":0},{"source_text":"In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.","target_text":"If the same pair of NE instances is used with different phrases, these phrases are likely to be 
paraphrases.","label":0},{"source_text":"This paper talks about KenLM: Faster and Smaller Language Model Queries.","target_text":"The structure uses linear probing hash tables and is designed for speed.","label":0},{"source_text":"In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.","target_text":"Two sets of examples from Gan are given in (1) and (2) (:::::: Gan's Appendix B, exx.","label":0},{"source_text":"They have made use of local and global features to deal with the instances of same token in a document.","target_text":"We have used the Java-based opennlp maximum entropy package1.","label":0},{"source_text":"This paper talks about KenLM: Faster and Smaller Language Model Queries.","target_text":"Timing is based on plentiful memory.","label":0},{"source_text":"However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.","target_text":"Given a document to process, BABAR uses four modules to perform coreference resolution.","label":0},{"source_text":"They have made use of local and global features to deal with the instances of same token in a document.","target_text":"However, their system is a hybrid of hand-coded rules and machine learning methods.","label":0},{"source_text":"The approach assumes that the word reordering is restricted to a few positions in the source sentence.","target_text":"For each source word f, the list of its possible translations e is sorted according to p(fje) puni(e), where puni(e) is the unigram probability of the English word e. 
It is su\u00c3\u0086cient to consider only the best 50 words.","label":0},{"source_text":"On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.","target_text":"An ATM may be thought of as spawning independent processes for each applicable move.","label":0},{"source_text":"NER is useful in many NLP applications such as information extraction, question answering, etc .Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.","target_text":"Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.","label":0},{"source_text":"They have made use of local and global features to deal with the instances of same token in a document.","target_text":"If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.","label":0},{"source_text":"They found replacing it with a ranked evaluation to be more suitable.","target_text":"About half of the participants of last year\u2019s shared task participated again.","label":0},{"source_text":"Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.","target_text":"However, to classify a token , while Borthwick uses tokens from to (from two tokens before to two tokens after ), we used only the tokens , , and . 
Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).","label":0},{"source_text":"The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.","target_text":"Obviously \u201cLotus\u201d is part of the following clause rather than being the object of \u201cestimates\u201d and the extracted instance makes no sense.","label":0},{"source_text":"The PROBING data structure uses linear probing hash tables and is designed for speed.","target_text":"If the log backoff of wnf is also zero (it may not be in filtered models), then wf should be omitted from the state.","label":0},{"source_text":"The texts were annotated with the RSTtool.","target_text":"They are also labelled for their topicality (yes \/ no), and this annotation is accompanied by a confidence value assigned by the annotator (since it is a more subjective matter).","label":0},{"source_text":"In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.","target_text":"Judges AG GR ST M1 M2 M3 T1 T2 T3 AG 0.70 0.70 0.43 0.42 0.60 0.60 0.62 0.59 GR 0.99 0.62 0.64 0.79 0.82 0.81 0.72 ST 0.64 0.67 0.80 0.84 0.82 0.74 M1 0.77 0.69 0.71 0.69 0.70 M2 0.72 0.73 0.71 0.70 M3 0.89 0.87 0.80 T1 0.88 0.82 T2 0.78 respectively, the recall and precision.","label":0},{"source_text":"They proposed an unsupervised method to discover paraphrases from a large untagged corpus.","target_text":"Gather phrases using keywords Next, we select a keyword for each phrase \u2013 the top-ranked word based on the TF\/IDF metric.","label":0},{"source_text":"BABAR's performance in both domains of terrorism and natural disaster, and the contextual-role knowledge in pronouns have shown successful results.","target_text":"For natural disasters, BABAR generated 20,479 resolutions: 11,652 from lexical seeding and 8,827 from syntactic seeding.","label":0},{"source_text":"Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language.","target_text":"Our final average POS tagging accuracy of 83.4% compares very favorably to the average accuracy of Berg-Kirkpatrick et al.\u2019s monolingual unsupervised state-of-the-art model (73.0%), and considerably bridges the gap to fully supervised POS tagging performance (96.6%).","label":0},{"source_text":"The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.","target_text":"Consequently, all three parsers prefer the nominal reading.","label":0},{"source_text":"Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language.","target_text":"Second, we treat the projected labels as features in an unsupervised model (\u00a75), rather than using them directly for supervised training.","label":0},{"source_text":"An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word 
dependencies into account.","target_text":"In German, the verbgroup usually consists of a left and a right verbal brace, whereas in English the words of the verbgroup usually form a sequence of consecutive words.","label":0},{"source_text":"It is probably the first analysis of Arabic parsing of this kind.","target_text":"The 95% confidence intervals for type-level errors are (5580, 9440) for the ATB and (1400, 4610) for the WSJ.","label":0},{"source_text":"All the texts were annotated by two people.","target_text":"Instead, the designs of the various annotation layers and the actual annotation work are results of a series of diploma theses, of students\u2019 work in course projects, and to some extent of paid assistentships.","label":0},{"source_text":"They found replacing it with a ranked evaluation to be more suitable.","target_text":"We assumed that such a contrastive assessment would be beneficial for an evaluation that essentially pits different systems against each other.","label":0},{"source_text":"Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.","target_text":"We are therefore applying a different method, which has been used at the 2005 DARPA\/NIST evaluation.","label":0},{"source_text":"This paper offers a broad insight into of Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.","target_text":"markContainsVerb is especially effective for distinguishing root S nodes of equational sentences.","label":0},{"source_text":"This paper talks about KenLM: Faster and Smaller Language Model Queries.","target_text":"For RandLM, we used the settings in the documentation: 8 bits per value and false positive probability 1\/256.","label":0},{"source_text":"This paper conducted research in the area of automatic paraphrase discovery.","target_text":"All the NE pair instances which co-occur separated by at most 4 chunks are collected along with 
information about their NE types and the phrase between the NEs (the \u2018context\u2019).","label":0},{"source_text":"Nevertheless, only a part of this corpus (10 texts), which the authors name \"core corpus\", is annotated with all this information.","target_text":"(Webber et al., 2003)).","label":0},{"source_text":"they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement.","target_text":"We further thank Dr. J.-S.","label":0},{"source_text":"Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.","target_text":"For statistics on this test set, refer to Figure 1.","label":0},{"source_text":"For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.","target_text":"Compared with the widely-used SRILM, ours is 2.4 times as fast while using 57% of the memory. The structure is a trie with bit-level packing, sorted records, interpolation search, and optional quantization aimed at lower memory consumption. 
It simultaneously uses less memory than the smallest lossless baseline and less CPU than the baseline.","label":0},{"source_text":"In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.","target_text":"Our empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.","label":0},{"source_text":"There are clustering approaches that assign a single POS tag to each word type.","target_text":"These methods demonstrated the benefits of incorporating linguistic features using a log-linear parameterization, but requires elaborate machinery for training.","label":0},{"source_text":"The manual evaluation of scoring translation on a graded scale from 1\u20135 seems to be very hard to perform.","target_text":"About half of the participants of last year\u2019s shared task participated again.","label":0},{"source_text":"They have made use of local and global features to deal with the instances of same token in a document.","target_text":"We group the features used into feature groups.","label":0},{"source_text":"they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.","target_text":"While the semantic aspect of radicals is by no means completely predictive, the semantic homogeneity of many classes is quite striking: for example 254 out of the 263 examples (97%) of the INSECT class listed by Wieger (1965, 77376) denote crawling or invertebrate animals; similarly 21 out of the 22 examples (95%) of the GHOST class (page 808) denote ghosts or spirits.","label":0},{"source_text":"The approach has been successfully tested on the 8 000-word Verbmobil task.","target_text":"The computing time is low, since no reordering is carried out.","label":0},{"source_text":"On the basis of that observation, they described a class of formalisms 
which they called Linear Context-Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.","target_text":"As with HG's derivation structures are annotated; in the case of TAG's, by the trees used for adjunction and addresses of nodes of the elementary tree where adjunctions occurred.","label":0},{"source_text":"Explanations for this phenomenon are relative informativeness of lexicalization, insensitivity to morphology and the effect of variable word order and these factors lead to syntactic disambiguation.","target_text":"For verbs we add two features.","label":0},{"source_text":"Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.","target_text":"In our model, however, all lattice paths are taken to be a-priori equally likely.","label":0},{"source_text":"Here both parametric and non-parametric models are explored.","target_text":"The results in Table 2 were achieved on the development set.","label":0},{"source_text":"Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.","target_text":"State is implemented in their scrolling variant, which is a trie annotated with forward and backward pointers.","label":0},{"source_text":"They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.","target_text":"The effect of UNIQ is very small on both data sets.","label":0},{"source_text":"The PROBING data structure uses linear probing hash tables and is designed for speed.","target_text":"Given a key k, it estimates the position. If the estimate is exact 
(A[pivot] = k), then the algorithm terminates successfully.","label":0},{"source_text":"All the texts were annotated by two people.","target_text":"The paper is organized as follows: Section 2 explains the different layers of annotation that have been produced or are being produced.","label":0},{"source_text":"In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures.","target_text":"In the labeled version of these metrics (L) both heads and arc labels must be correct, while the unlabeled version (U) only considers heads.","label":0},{"source_text":"Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.","target_text":"Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence-based classifier.","label":0},{"source_text":"They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used simpler training procedure.","target_text":"Other work includes transferring latent topic distributions from source to target language for LM adaptation, (Tam et al., 2007) and adapting features at the sentence level to different categories of sentence (Finch and Sumita, 2008).","label":0},{"source_text":"However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.","target_text":"Ex: Mr. 
Cristiani is the president ...","label":0},{"source_text":"They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.","target_text":"In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.","label":0},{"source_text":"The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.","target_text":"In many cases these failures in recall would be fixed by having better estimates of the actual probabilities of single-hanzi words, since our estimates are often inflated.","label":0},{"source_text":"Here we present two algorithms.","target_text":"We present two algorithms.","label":1},{"source_text":"This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.","target_text":"Similarly, hanzi sharing the GHOST radical _m tend to denote spirits and demons, such as _m gui3 'ghost' itself, II: mo2 'demon,' and yan3 'nightmare.'","label":0},{"source_text":"For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.","target_text":"It also does not prune, so comparing to our pruned model would be unfair.","label":0},{"source_text":"We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.","target_text":"It's not clear how to apply these methods in the unsupervised case, as they required cross-validation techniques: for 
this reason we use the simpler smoothing method shown here. The input to the unsupervised algorithm is an initial, \"seed\" set of rules.","label":0},{"source_text":"Combining multiple highly-accurate independent parsers yields promising results.","target_text":"The natural language processing community is in the strong position of having many available approaches to solving some of its most fundamental problems.","label":0},{"source_text":"Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.","target_text":"raphy: A ren2 'person' is a fairly uncontroversial case of a monographemic word, and rplil zhong1guo2 (middle country) 'China' a fairly uncontroversial case of a digraphemic word.","label":0},{"source_text":"In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.","target_text":"We can 5 Recall that precision is defined to be the number of correct hits divided by the total number of items.","label":0},{"source_text":"Here both parametric and non-parametric models are explored.","target_text":"Another way to interpret this is that less than 5% of the correct constituents are missing from the hypotheses generated by the union of the three parsers.","label":0},{"source_text":"The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.","target_text":"In the cotraining case, (Blum and Mitchell 98) argue that the task should be to induce functions f1 and f2 such that So f1 and f2 must (1) correctly classify the labeled examples, and (2) must agree with each other on the unlabeled examples.","label":0},{"source_text":"Two general approaches are presented and two combination techniques are described for each approach.","target_text":"The difference in precision between similarity and Bayes switching techniques is significant, 
but the difference in recall is not.","label":0},{"source_text":"In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.","target_text":"Wu and Fung introduce an evaluation method they call nk-blind.","label":0},{"source_text":"The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.","target_text":"The PROBING data structure is a rather straightforward application of these hash tables to store Ngram language models.","label":0},{"source_text":"The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.","target_text":"Then each arc of D maps either from an element of H to an element of P, or from \u03b5, i.e., the empty string, to an element of P. More specifically, each word is represented in the dictionary as a sequence of arcs, starting from the initial state of D and labeled with an element of H\u00d7P, which is terminated with a weighted arc labeled with an element of \u03b5\u00d7P. 
The weight represents the estimated cost (negative log probability) of the word.","label":0},{"source_text":"The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.","target_text":"Table 2 shows these similarity measures.","label":0},{"source_text":"Nevertheless, only a part of this corpus (10 texts), which the authors name \"core corpus\", is annotated with all this information.","target_text":"When finished, the whole material is written into an XML-structured annotation file.","label":0},{"source_text":"Their results show that their high performance NER use less training data than other systems.","target_text":"As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.","label":0},{"source_text":"This paper talks about KenLM: Faster and Smaller Language Model Queries.","target_text":"With a good hash function, collisions of the full 64bit hash are exceedingly rare: one in 266 billion queries for our baseline model will falsely find a key not present.","label":0},{"source_text":"This paper talks about Unsupervised Models for Named Entity Classification.","target_text":"(3) shows learning curves for CoBoost.","label":0},{"source_text":"The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.","target_text":"The major problem for all segmentation systems remains the coverage afforded by the dictionary and the lexical rules used to augment the dictionary to deal with unseen words.","label":0},{"source_text":"Here both parametric and non-parametric models are explored.","target_text":"Call the crossing constituents A and B.","label":0},{"source_text":"Using less training data than other systems, their NER can perform as well as other state-of-the-art 
NERs.","target_text":"Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.","label":0},{"source_text":"Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages.","target_text":"The feature-based model replaces the emission distribution with a log-linear model, such that: on the word identity x, features checking whether x contains digits or hyphens, whether the first letter of x is upper case, and suffix features up to length 3.","label":0},{"source_text":"A beam search concept is applied as in speech recognition.","target_text":"This work has been supported as part of the Verbmobil project (contract number 01 IV 601 A) by the German Federal Ministry of Education, Science, Research and Technology and as part of the Eutrans project (ESPRIT project number 30268) by the European Community.","label":0},{"source_text":"The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.","target_text":"!!:\\ :yu2 e:_nc [::!!:zen3 l!f :moO t:_adv il!:shuot ,:_vb i i i 1 \u00e2\u0080\u00a2 10.03 13...","label":0},{"source_text":"However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.","target_text":"Their work used subject-verb, verb-object, and adjective-noun relations to compare the contexts surrounding an anaphor and candidate.","label":0},{"source_text":"Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.","target_text":"While the Bootstrap method is slightly more sensitive, it is very much in line with the sign test on text blocks.","label":0},{"source_text":"The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.","target_text":"However, it is also mostly political content (even if not 
focused on the internal workings of the European Union) and opinion.","label":0},{"source_text":"This paper talks about KenLM: Faster and Smaller Language Model Queries.","target_text":"We used this data to build an unpruned ARPA file with IRSTLM\u2019s improved-kneser-ney option and the default three pieces.","label":0},{"source_text":"The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.","target_text":"For speed, we plan to implement the direct-mapped cache from BerkeleyLM.","label":0},{"source_text":"An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.","target_text":"The traveling salesman problem is an optimization problem which is defined as follows: given are a set of cities S = s1, ..., sn and for each pair of cities si, sj the cost dij > 0 for traveling from city si to city sj. We are looking for the shortest tour visiting all cities exactly once while starting and ending in city s1.","label":0},{"source_text":"This paper presents a maximum entropy-based named entity recognizer (NER).","target_text":"Lexicon Feature: The string of the token is used as a feature.","label":0},{"source_text":"they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement.","target_text":"Several papers report the use of part-of-speech information to rank segmentations (Lin, Chiang, and Su 1993; Peng and Chang 1993; Chang and Chen 1993); typically, the probability of a segmentation is multiplied by the probability of the tagging(s) for that segmentation to yield an estimate of the total probability for the analysis.","label":0},{"source_text":"The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good 
coverage.","target_text":"(2) was extended to have an additional, innermost loop over the (3) possible labels.","label":0},{"source_text":"The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.","target_text":"(7) is at 0 when: 1) \u2200i : sign(g1(xi)) = sign(g2(xi)); 2) |gj(xi)| \u2192 \u221e; and 3) sign(g1(xi)) = yi for i = 1, ..., m. In fact, ZCO provides a bound on the sum of the classification error of the labeled examples and the number of disagreements between the two classifiers on the unlabeled examples.","label":0},{"source_text":"Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language.","target_text":"As discussed in more detail in \u00a73, we use two types of vertices in our graph: on the foreign language side vertices correspond to trigram types, while the vertices on the English side are individual word types.","label":0},{"source_text":"Vijay-Shankar et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.","target_text":"In the next section, we show how an ATM can accept the strings generated by a grammar in a LCFRS formalism in logspace, and hence show that each family can be recognized in polynomial time.","label":0},{"source_text":"Here we present two algorithms.","target_text":"The learning task is to find two classifiers f1 : 2^X1 \u2192 {\u22121, +1} and f2 : 2^X2 \u2192 {\u22121, +1} such that f1(x1,i) = f2(x2,i) = yi for examples i = 1, ..., m, and f1(x1,i) = f2(x2,i) as often as possible on examples i = m + 1, ..., n. 
To achieve this goal we extend the auxiliary function that bounds the training error (see Equ.","label":0},{"source_text":"they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement.","target_text":"18 We are grateful to ChaoHuang Chang for providing us with this set.","label":0},{"source_text":"The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.","target_text":"Lossy compressed models RandLM (Talbot and Osborne, 2007) and Sheffield (Guthrie and Hepple, 2010) offer better memory consumption at the expense of CPU and accuracy.","label":0},{"source_text":"Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs.","target_text":"(Kehler, 1997) also used a Dempster-Shafer model to merge evidence from different sources for template-level coreference.","label":0},{"source_text":"Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.","target_text":"The type-level posterior term can be computed according to, P(Ti|W, T\u2212i, \u03b2) \u221d Note that each round of sampling Ti variables takes time proportional to the size of the corpus, as with the standard token-level HMM.","label":0},{"source_text":"The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).","target_text":"The approach recursively evaluates a quantity Q(C; j), where C is the set of already visited cities and sj is the last visited city.","label":0},{"source_text":"They have made use of local and global features 
to deal with the instances of same token in a document.","target_text":"needs to be in initCaps to be considered for this feature.","label":0},{"source_text":"For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.","target_text":"Language models are widely applied in natural language processing, and applications such as machine translation make very frequent queries.","label":0},{"source_text":"Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.","target_text":"The evaluation framework for the shared task is similar to the one used in last year\u2019s shared task.","label":0},{"source_text":"This assumption, however, is not inherent to type-based tagging models.","target_text":"Since the parameter and token components will remain fixed throughout experiments, we briefly describe each.","label":0},{"source_text":"The departure from the traditional token-based tagging approach allow them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.","target_text":"to represent the ith word type emitted by the HMM: P(t(i)|Ti, t(\u2212i), w, \u03b1) \u221d n P(w|Ti, t(\u2212i), w(\u2212i), \u03b1) (tb, ta) P(Ti, t(i)|T\u2212i, W, t(\u2212i), w, \u03b1, \u03b2) = P(Ti|tb, t(\u2212i), \u03b1) P(ta|Ti, t(\u2212i), \u03b1) P(Ti|W, T\u2212i, \u03b2) P(t(i)|Ti, t(\u2212i), w, \u03b1) All terms are Dirichlet distributions and parameters can be analytically computed from counts in t(\u2212i), where T\u2212i denotes all type-level tag assignments except Ti 
and t(\u2212i) denotes all token-level tags except and w(\u2212i) (Johnson, 2007).","label":0},{"source_text":"Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.","target_text":"Recall.","label":0},{"source_text":"The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results.","target_text":"A Hebrew surface token may have several readings, each of which corresponding to a sequence of segments and their corresponding PoS tags.","label":0},{"source_text":"The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.","target_text":"Figure 7 Classical metric multidimensional scaling of distance matrix, showing the two most significant dimensions (legend: antigreedy, greedy, current method, dict. only; points labelled Taiwan and Mainland; Dimension 1 (62%)).","label":0},{"source_text":"In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.","target_text":"If either case is true, then CFLex reports that the anaphor and candidate might be coreferent.","label":0},{"source_text":"We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.","target_text":"We can now add a new weak hypothesis 14 based on a feature in X1 with a confidence value al hl and atl are chosen to minimize the function We now define, for 1