Datasets:

target_text (string, lengths 1–5.38k) | source_text (string, lengths 27–368) | label (int64, 0 or 1)
---|---|---
[Figures 7–12: per-system Adequacy/Fluency/BLEU rank tables for translation to and from English on in-domain and out-of-domain test data, and plots of the correlation between manual and automatic scores for French-English, Spanish-English, and German-English] | They found replacing it with a ranked evaluation to be more suitable. | 0 |
to represent the ith word type emitted by the HMM: P(t(i) | Ti, t(−i), w, α) ∝ ∏_{(tb,ta)} P(Ti | tb, t(−i), α) P(ta | Ti, t(−i), α) ∏_w P(w | Ti, t(−i), w(−i), α), and P(Ti, t(i) | T(−i), W, t(−i), w, α, β) = P(Ti | W, T(−i), β) P(t(i) | Ti, t(−i), w, α). All terms are Dirichlet distributions whose parameters can be analytically computed from counts in t(−i) and w(−i), where T(−i) denotes all type-level tag assignments except Ti and t(−i) denotes all token-level tags except t(i) (Johnson, 2007). | Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process. | 0 |
A few pointed out that adequacy should be broken up into two criteria: (a) are all source words covered? | The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data. | 0 |
This is not an oversight. | One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure. | 0 |
If one is interested in translation, one would probably want to consider show up as a single dictionary word since its semantic interpretation is not trivially derivable from the meanings of show and up. | In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 |
The correct ambiguity resolution of the syntactic level therefore helps to resolve the morphological one, and vice versa. | They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity. | 0 |
RandLM 0.2 (Talbot and Osborne, 2007) stores large-scale models in less memory using randomized data structures. | This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 |
10 Chinese speakers may object to this form, since the suffix 们 men (PL) is usually restricted to. | In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 |
2.5 Connectives with scopes. | The corpus was annotated with different linguistic information. | 0 |
(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced. | NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources. | 0 |
2.1 Reliable Case Resolutions. | In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution. | 0 |
Pairwise agreement among judges AG, GR, ST, M1–M3, and T1–T3: AG 0.70 0.70 0.43 0.42 0.60 0.60 0.62 0.59; GR 0.99 0.62 0.64 0.79 0.82 0.81 0.72; ST 0.64 0.67 0.80 0.84 0.82 0.74; M1 0.77 0.69 0.71 0.69 0.70; M2 0.72 0.73 0.71 0.70; M3 0.89 0.87 0.80; T1 0.88 0.82; T2 0.78 (respectively, the recall and precision). | The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 |
Figure 3 shows examples of semantic expectations that were learned. | Finally, several coreference systems have successfully incorporated anaphoricity determination modules. | 0 |
To summarize, we provided: The performance of the baseline system is similar to the best submissions in last year’s shared task. | The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform. | 0 |
This extends previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and using a simpler training procedure. | In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance. | 0 |
We show translation results for three approaches: the monotone search (MonS), where no word reordering is allowed (Tillmann, 1997), the quasimonotone search (QmS) as presented in this paper and the IBM style (IbmS) search as described in Section 3.2. | The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP). | 0 |
Since the parameter and token components will remain fixed throughout experiments, we briefly describe each. | The resulting model is compact, efficiently learnable and linguistically expressive. | 0 |
Gather phrases using keywords Now, the keyword with the top TF/ITF score is selected for each phrase. | The authors cluster NE instance pairs based on the words in the context using bag-of-words methods. | 0 |
Figure 2 shows examples of extracted NE pair instances and their contexts. | In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold. | 0 |
The probability distribution that satisfies the above property is the one with the highest entropy. | NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources. | 0 |
Taking only the highest frequency rules is much "safer", as they tend to be very accurate. | The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 |
In addition, this formulation results in a dramatic reduction in the number of model parameters thereby, enabling unusually rapid training. | These clusters are computed using an SVD variant without relying on transitional structure. | 0 |
In this domain the major scenarios involve the things they agreed on, rather than the mere fact that they agreed. | This paper conducted research in the area of automatic paraphrase discovery. | 0 |
We are very grateful to Tony Kroch, Michael Palis, Sunil Shende, and Mark Steedman for valuable discussions. | Vijay-Shankar et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. | 0 |
The normalized judgement per judge is the raw judgement plus (3 minus average raw judgement for this judge). | Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 |
For each cell, the first row corresponds to the result using the best hyperparameter choice, where best is defined by the 1-1 metric. | In this paper, the authors are of the opinion that the sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 |
Our primary goal is to exploit the resources that are most appropriate for the task at hand, and our secondary goal is to allow for comparison of our models’ performance against previously reported results. | They showed better grammars to improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones. | 0 |
an event. | Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 |
This is orthographically represented as 7C. | In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 |
The following features were used: full-string=x The full string (e.g., for Maury Cooper, full-string=Maury_Cooper). contains(x) If the spelling contains more than one word, this feature applies for each word that the string contains (e.g., Maury Cooper contributes two such features, contains(Maury) and contains(Cooper)). allcap1 This feature appears if the spelling is a single word which is all capitals (e.g., IBM would contribute this feature). allcap2 This feature appears if the spelling is a single word which is all capitals or full periods, and contains at least one period. | We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories. | 0 |
This is the first set that gives us a fair evaluation of the Bayes models, and the Bayes switching model performs significantly better than its non-parametric counterpart. | This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers. | 0 |
This family represents an attempt to generalize the properties shared by CFG's, HG's, TAG's, and MCTAG's. | On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages. | 0 |
Each word is terminated by an arc that represents the transduction between f and the part of speech of that word, weighted with an estimated cost for that word. | The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 |
This analyzer setting is similar to that of (Cohen and Smith, 2007), and models using it are denoted nohsp. Parser and Grammar: We used BitPar (Schmid, 2004), an efficient general-purpose parser, together with various treebank grammars to parse the input sentences and propose compatible morphological segmentation and syntactic analysis. | They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound. | 0 |
The simplest approach involves scoring the various analyses by costs based on word frequency, and picking the lowest cost path; variants of this approach have been described in Chang, Chen, and Chen (1991) and Chang and Chen (1993). | This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 |
to represent the ith word type emitted by the HMM: P(t(i) | Ti, t(−i), w, α) ∝ ∏_{(tb,ta)} P(Ti | tb, t(−i), α) P(ta | Ti, t(−i), α) ∏_w P(w | Ti, t(−i), w(−i), α), and P(Ti, t(i) | T(−i), W, t(−i), w, α, β) = P(Ti | W, T(−i), β) P(t(i) | Ti, t(−i), w, α). All terms are Dirichlet distributions whose parameters can be analytically computed from counts in t(−i) and w(−i), where T(−i) denotes all type-level tag assignments except Ti and t(−i) denotes all token-level tags except t(i) (Johnson, 2007). | In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. | 0 |
Hence we decided to select ten commentaries to form a "core corpus", for which the entire range of annotation levels was realized, so that experiments with multi-level querying could commence. | Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 |
Diacritics can also be used to specify grammatical relations such as case and gender. | This paper offers a broad insight into of Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 |
This concerns on the one hand the basic question of retrieval, i.e. searching for information across the annotation layers (see 3.1). | This paper discusses the Potsdam Commentary Corpus, a corpus of german assembeled by potsdam university. | 0 |
9 61.0 44. | Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints. | 0 |
Therefore, the number of fine tags varied across languages for our experiments; however, one could as well have fixed the set of HMM states to be a constant across languages, and created one mapping to the universal POS tagset. | Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages. | 0 |
Reading the following record’s offset indicates where the block ends. | This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 |
On the other hand, we are interested in the application of rhetorical analysis or "discourse parsing" (3.2 and 3.3), in text generation (3.4), and in exploiting the corpus for the development of improved models of discourse structure (3.5). | Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 |
Gazdar (1985) argues this is the appropriate analysis of unbounded dependencies in the hypothetical Scandinavian language Norwedish. | Vijay-Shankar et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. | 0 |
Using this encoding scheme, the arc from je to Z in Figure 2 would be assigned the label AuxP↑Sb (signifying an AuxP that has been lifted from a Sb). | The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech. | 0 |
A search restriction especially useful for the translation direction from German to English is presented. | The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP). | 0 |
If either case is true, then CFLex reports that the anaphor and candidate might be coreferent. | Finally, several coreference systems have successfully incorporated anaphoricity determination modules. | 0 |
Word Re-ordering and DP-based Search in Statistical Machine Translation | The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task. | 0 |
was done by the participants. | The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform. | 0 |
The goal of our research was to explore the use of contextual role knowledge for coreference resolution. | BABAR's performance in both domains of terrorism and natural disaster, and the contextual-role knowledge in pronouns have shown successful results. | 0 |
Performance typically stabilizes across languages after only a few number of iterations. | However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity. | 0 |
In section 5, we then evaluate the entire parsing system by training and evaluating on data from the Prague Dependency Treebank. | In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures. | 0 |
The trie data structure is commonly used for language modeling. | This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 |
BABAR achieved recall in the 4250% range for both domains, with 76% precision overall for terrorism and 87% precision for natural disasters. | Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 |
For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . For a token in a consecutive sequence of initCaps tokens that ends with a corporate suffix, the feature Corporate-Suffix is set to 1. | Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs. | 0 |
The result of this is shown in Figure 7. | In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 |
The type of alignment we have considered so far requires the same length for source and target sentence, i.e. I = J. Evidently, this is an unrealistic assumption, therefore we extend the concept of inverted alignments as follows: When adding a new position to the coverage set C, we might generate either zero or one new target words. | A beam search concept is applied as in speech recognition. | 0 |
+ cost(unseen(fm, as desired. | In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 |
Further, it needs extra pointers in the trie, increasing model size by 40%. | The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 |
β is the shared hyperparameter for the tag assignment prior and word feature multinomials. | However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity. | 0 |
This line of work has been motivated by empirical findings that the standard EM-learned unsupervised HMM does not exhibit sufficient word tag sparsity. | The resulting model is compact, efficiently learnable and linguistically expressive. | 0 |
In fact, it is very difficult to maintain consistent standards, on what (say) an adequacy judgement of 3 means even for a specific language pair. | Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 |
irL as the product of the probability estimate for i¥JJ1l., and the probability estimate just derived for unseen plurals in ir,: p(i¥1J1l.ir,) = p(i¥1J1l.) p(unseen(f,)). | Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models. | 0 |
There is a sizable literature on Chinese word segmentation: recent reviews include Wang, Su, and Mo (1990) and Wu and Tseng (1993). | They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the inter-human agreement. | 0 |
29 – 95. | It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 |
Evaluation results within sets Table 1 shows the evaluation result based on the number of phrases in a set. | They focused on phrases which two Named Entities, and proceed in two stages. | 0 |
We extend the Matsoukas et al approach in several ways. | In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance. | 0 |
(Blum and Mitchell 98) go on to give PAC results for learning in the cotraining case. | The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). | 0 |
Table 3 shows BABAR's performance when the four contextual role knowledge sources are added. | Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 |
Note that while the standard HMM has O(Kn) emission parameters, our model has O(n) effective parameters. Token Component: once the HMM parameters (φ, θ) have been drawn, the HMM generates a token-level corpus w in the standard way: P(w, t | φ, θ) = ∏_{(wj,tj)∈(w,t)} P(tj | φ_{tj−1}) P(wj | tj, θ_{tj}). The joint distribution factors as P(T, W, θ, ψ, φ, t, w | α, β) = P(T, W, ψ | β) [Lexicon] P(φ, θ | T, α, β) [Parameter] P(w, t | φ, θ) [Token]. We refer to the components on the right-hand side as the lexicon, parameter, and token components, respectively. | In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. | 0 |
This combination generalizes (2) and (3): we use either at = a to obtain a fixed-weight linear combination, or at = cI(t)/(cI(t) + 0) to obtain a MAP combination. | They plan on extending instance-weighting to other standard SMT components and capture the degree of generality of phrase pairs. | 0 |
Without using the same test corpus, direct comparison is obviously difficult; fortunately, Chang et al. include a list of about 60 sentence fragments that exemplify various categories of performance for their system. | This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 |
The sign test checks, how likely a sample of better and worse BLEU scores would have been generated by two systems of equal performance. | Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs. | 0 |
Manual and Automatic Evaluation of Machine Translation between European Languages | The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data. | 0 |
Raj and Whittaker (2003) show that integers in a trie implementation can be compressed substantially. | The PROBING data structure uses linear probing hash tables and is designed for speed. | 0 |
Until now, all evaluations of Arabic parsing, including the experiments in the previous section, have assumed gold segmentation. | The authors show that PATB is similar to other treebanks but that annotation consistency remains low. | 0 |
2 61.7 64. | In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. | 0 |
Further, the probing hash table does only one random lookup per query, explaining why it is faster on large data. | This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 |
If "gun" and "revolver" refer to the same object, then it should also be acceptable to say that Fred was "killed with a gun" and that the burglar "fired a revolver". | Finally, several coreference systems have successfully incorporated anaphoricity determination modules. | 0 |
[Figures 14–15: plots of the correlation between manual and automatic scores for English-French and English-Spanish, plus English-German in-domain and out-of-domain panels] | The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data. | 0 |
Performance typically stabilizes across languages after only a few number of iterations. | These clusters are computed using an SVD variant without relying on transitional structure. | 0 |
In order to ground such approaches in linguistic observation and description, a multi-level annotation (for an exposition of the idea as applied to the task of text planning, see Chiarcos and Stede, 2004). | This corpus has several advantages: it is annotated at different levels. | 0 |
The second term is a regularizer and encourages all type marginals to be uniform to the extent that is allowed by the first two terms (cf. maximum entropy principle). | It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models. | 0 |
In addition, this formulation results in a dramatic reduction in the number of model parameters thereby, enabling unusually rapid training. | This assumption, however, is not inherent to type-based tagging models. | 0 |
The key point is that the second constraint can be remarkably powerful in reducing the complexity of the learning problem. | A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier. | 0 |
We thus decided to pay specific attention to them and introduce an annotation layer for connectives and their scopes. | It is annotated with several data: morphology, syntax, rhetorical structure, connectors, correference and informative structure. | 0 |
The method uses a "soft" measure of the agreement between two classifiers as an objective function; we described an algorithm which directly optimizes this function. | We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories. | 0 |
This approach makes the training objective more complex by adding linear constraints proportional to the number of word types, which is rather prohibitive. | The resulting model is compact, efficiently learnable and linguistically expressive. | 0 |
The inverted alignment probability p(bi | bi−1; I, J) and the lexicon probability p(f_bi | e_i) are obtained by relative frequency estimates from the Viterbi alignment path after the final training iteration. | An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account. | 0 |
Foreign names are usually transliterated using hanzi whose sequential pronunciation mimics the source language pronunciation of the name. | they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 |
The development set contained 44088 constituents in 2416 sentences and the test set contained 30691 constituents in 1699 sentences. | Combining multiple highly-accurate independent parsers yields promising results. | 0 |
Each extraction pattern represents a linguistic expression and a syntactic position indicating where a role filler can be found. | Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 |
A few annotators suggested to break up long sentences into clauses and evaluate these separately. | While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems. | 0 |
Although the best published result for the Collins parser is 80% UAS (Collins, 1999), this parser reaches 82% when trained on the entire training data set, and an adapted version of Charniak's parser (Charniak, 2000) performs at 84% (Jan Hajič, pers. comm.). | This paper talks about Pseudo-Projective Dependency Parsing. | 0 |
This is orthographically represented as 7C. | The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 |
The paper is organized as follows: Section 2 explains the different layers of annotation that have been produced or are being produced. | The "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Märkische Allgemeine Zeitung, a German regional daily. | 0 |
Context from the whole document can be important in classifying a named entity. | Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs. | 0 |
The Input The set of analyses for a token is thus represented as a lattice in which every arc corresponds to a specific lexeme l, as shown in Figure 1. | They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity. | 0 |
Translation errors are reported in terms of multireference word error rate (mWER) and subjective sentence error rate (SSER). | The approach assumes that the word reordering is restricted to a few positions in the source sentence. | 0 |
3.5 Improved models of discourse. | All the texts were annotated by two people. | 0 |
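Each row in the preview above is a (target_text, source_text, label) triple. As a minimal, self-contained sketch, the schema can be mirrored with plain Python dictionaries and filtered by the binary label; the two sample records below are copied from the preview, while the `filter_by_label` helper is purely illustrative and not part of any dataset API:

```python
# Each record mirrors the three dataset columns shown in the preview:
# target_text (string), source_text (string), label (int, 0 or 1).
rows = [
    {
        "target_text": "The trie data structure is commonly used for language modeling.",
        "source_text": "This paper talks about KenLM: Faster and Smaller Language Model Queries.",
        "label": 0,
    },
    {
        "target_text": "A search restriction especially useful for the translation "
                       "direction from German to English is presented.",
        "source_text": "The authors in this paper describe a search procedure for statistical "
                       "machine translation (MT) based on dynamic programming (DP).",
        "label": 0,
    },
]

def filter_by_label(records, label):
    """Return only the records carrying the given binary label."""
    return [r for r in records if r["label"] == label]

negatives = filter_by_label(rows, 0)
positives = filter_by_label(rows, 1)
print(len(negatives), len(positives))  # → 2 0
```

Every row visible in this preview carries label 0; since the column is declared as int64 with values 0–1, a full copy of the dataset would also contain label-1 rows.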
Dataset Card for backsum
Licensing
This dataset was derived from the Scisumm Corpus.
If you use this data, please cite the original CL-SciSumm overview paper:
@inproceedings{chandrasekaran2019clscisumm,
  title     = {Overview and Results: CL-SciSumm Shared Task 2019},
  author    = {Chandrasekaran, Muthu Kumar and Yasunaga, Michihiro and Radev, Dragomir and Freitag, Dayne and Kan, Min-Yen},
  year      = {2019},
  booktitle = {Proceedings of the Joint Workshop on Bibliometric-enhanced Information Retrieval and NLP for Digital Libraries (BIRNDL 2019)}
}