source_text (string, 27–368 chars) | label (int64, 0 or 1) | target_text (string, 1–5.38k chars) |
---|---|---|
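The schema above (two string columns plus a binary label) needs nothing beyond the standard `csv` module to read. A minimal sketch, assuming a hypothetical tab-separated export of this table; the two rows are copied from the preview below:

```python
import csv
import io

# Hedged sketch: assume the dataset is exported as TSV with the three
# columns shown above (source_text, label, target_text). The in-memory
# buffer stands in for a hypothetical file such as "pairs.tsv".
tsv = io.StringIO(
    "source_text\tlabel\ttarget_text\n"
    "A beam search concept is applied as in speech recognition.\t0\t"
    "Subsets of partial hypotheses with coverage sets C of increasing "
    "cardinality c are processed.\n"
    "The fact that projective dependency parsers can never exactly "
    "reproduce the analyses found in non-projective tree banks is often "
    "neglected.\t1\t"
    "The fact that projective dependency parsers can never exactly "
    "reproduce the analyses found in non-projective treebanks is often "
    "neglected.\n"
)

rows = list(csv.DictReader(tsv, delimiter="\t"))
# Keep only the positive (label == 1) pairs.
positives = [r for r in rows if int(r["label"]) == 1]
print(len(rows), len(positives))  # 2 1
```

The `label` column arrives as a string from `csv`, so it is cast to `int` before filtering; a real export would also need quoting rules checked, since target_text cells can run to several kilobytes.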
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
| 0 |
(b) does the translation have the same meaning, including connotations?
|
It is annotated with several data: morphology, syntax, rhetorical structure, connectors, coreference and information structure.
| 0 |
One key issue here is to seek a discourse-based model of information structure.
|
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
| 0 |
In the denominator, the N31s can be measured well by counting, and we replace the expectation by the observation.
|
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
| 0 |
This is the form of recursive levels in iDafa constructs.
|
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure.
| 0 |
The iw all map variant uses a non-zero γ weight on a uniform prior in p(s|t), and outperforms a version with γ = 0 (iw all) and the “flattened” variant described in Section 3.2.
|
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
| 0 |
4.1 Corpora.
|
It is probably the first analysis of Arabic parsing of this kind.
| 0 |
Conversely, the lattice parser requires no linguistic resources and produces segmentations of comparable quality.
|
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
| 0 |
1 2 3.
|
The corpus was annotated with different linguistic information.
| 0 |
(Hartmann 1984), for example, used the term Reliefgebung to characterize the distribution of main and minor information in texts (similar to the notion of nuclearity in RST).
|
In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
| 0 |
(S2) The burglar fired the gun three times and fled.
|
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
| 0 |
Generalizing state minimization, the model could also provide explicit bounds on probability for both backward and forward extension.
|
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
| 0 |
The availability of comparable corpora is limited, which is a significant limitation on the approach.
|
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
| 0 |
The taggers were trained on datasets labeled with the universal tags.
|
The corpus was annotated with different linguistic information.
| 0 |
We thus decided to pay specific attention to them and introduce an annotation layer for connectives and their scopes.
|
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
| 0 |
In Section 3, we introduce our novel concept to word reordering and a DP-based search, which is especially suitable for the translation direction from German to English.
|
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
| 0 |
The same form fmnh can be segmented as f-mnh, f (“that”) functioning as a relativizer with the form mnh.
|
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
| 0 |
How should the absence of vowels and syntactic markers influence annotation choices and grammar development?
|
They have made use of local and global features to deal with the instances of same token in a document.
| 0 |
The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.
|
BABAR's performance in both domains of terrorism and natural disaster, and the contextual-role knowledge in pronouns have shown successful results.
| 0 |
(c) After they blindfolded the men...
|
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
| 0 |
Limitations There are several limitations in the methods.
|
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.
| 0 |
6) are noisy, the results confirm that label propagation within the foreign language part of the graph adds significant quality for every language.
|
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using a caseframe network for anaphora resolution, and used information extraction patterns to identify contextual clues for determining compatibility between NPs.
| 0 |
The contextual role knowledge that BABAR uses for coreference resolution is derived from this caseframe data.
|
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure.
| 0 |
An easy way to achieve this is to put the domain-specific LMs and TMs into the top-level log-linear model and learn optimal weights with MERT (Och, 2003).
|
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back.
| 0 |
Systems that generally do better than others will receive a positive average normalized judgement per sentence.
|
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.
| 0 |
As discussed in more detail in §3, we use two types of vertices in our graph: on the foreign language side vertices correspond to trigram types, while the vertices on the English side are individual word types.
|
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRSs), and showed they were recognizable in polynomial time and generated only semilinear languages.
| 0 |
As illustrated by MCTAG's, it is possible for a formalism to give tree sets with bounded dependent paths while still sharing the constrained rewriting properties of CFG's, HG's, and TAG's.
|
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
| 0 |
In order to create an IE system for a new domain, one has to spend a long time to create the knowledge.
|
There are clustering approaches that assign a single POS tag to each word type.
| 0 |
2 62.2 39.
|
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
| 0 |
Initially, the Dempster-Shafer model assumes that all hypotheses are equally likely, so it creates a set called θ that includes all hypotheses.
|
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
| 0 |
TPT Germann et al. (2009) describe tries with better locality properties, but did not release code.
|
Here we present two algorithms.
| 0 |
We first define “pseudo-labels” ỹ_i as follows: ỹ_i = y_i for 1 ≤ i ≤ m, and ỹ_i = sign(g²(x_{2,i})) for m < i ≤ n. Thus the first m labels are simply copied from the labeled examples, while the remaining (n − m) examples are taken as the current output of the second classifier.
|
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
| 0 |
Skipped (K): The translation of up to one word may be postponed. Verb (V): The translation of up to two words may be anticipated.
|
They have made use of local and global features to deal with the instances of same token in a document.
| 0 |
Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.
|
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
| 0 |
Evaluation within a set The evaluation of paraphrases within a set of phrases which share a keyword is illustrated in Figure 4.
|
They showed better grammars to improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
| 0 |
From now on all lattice arcs are tagged segments and the assignment of probability P(p → ⟨s, p⟩) to lattice arcs proceeds as usual. A rather pathological case is when our lexical heuristics prune away all segmentation possibilities and we remain with an empty lattice.
|
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
| 0 |
Some relations are signalled by subordinating conjunctions, which clearly demarcate the range of the text spans related (matrix clause, embedded clause).
|
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
| 0 |
If two systems’ scores are close, this may simply be a random effect in the test data.
|
In this paper, the authors are of the opinion that the sequence models-based approaches usually treat token-level tag assignment as the primary latent variable.
| 0 |
In our model, we associate these features at the type-level in the lexicon.
|
The PROBING data structure uses linear probing hash tables and is designed for speed.
| 0 |
RandLM 0.2 (Talbot and Osborne, 2007) stores large-scale models in less memory using randomized data structures.
|
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
| 0 |
The goal of machine translation is the translation of a text given in some source language into a target language.
|
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
| 0 |
It falls short of the “Projection” baseline for German, but is statistically indistinguishable in terms of accuracy.
|
Replacing this with a ranked evaluation seems to be more suitable.
| 0 |
However, it is also mostly political content (even if not focused on the internal workings of the European Union) and opinion.
|
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
| 0 |
We use the log-linear tagger of Toutanova et al.
|
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
| 0 |
Unigram records store probability, backoff, and an index in the bigram table.
|
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language.
| 0 |
7).
|
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
| 0 |
To investigate the influence of these factors, we analyze Modern Standard Arabic (henceforth MSA, or simply “Arabic”) because of the unusual opportunity it presents for comparison to English parsing results.
|
The resulting model is compact, efficiently learnable and linguistically expressive.
| 0 |
For all languages we do not make use of a tagging dictionary.
|
A beam search concept is applied as in speech recognition.
| 0 |
Subsets of partial hypotheses with coverage sets C of increasing cardinality c are processed.
|
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results.
| 0 |
The first 3770 trees of the resulting set were then used for training, and the last 418 were used for testing.
|
A beam search concept is applied as in speech recognition.
| 0 |
The baseline alignment model does not permit a source word to be aligned to two or more target words; e.g. for the translation direction from German to English, the German compound noun 'Zahnarzttermin' causes problems, because it must be translated by the two target words 'dentist's appointment'.
|
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective tree banks is often neglected because of the relative scarcity of problematic constructions.
| 1 |
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective treebanks is often neglected because of the relative scarcity of problematic constructions.
|
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
| 0 |
While the Bootstrap method is slightly more sensitive, it is very much in line with the sign test on text blocks.
|
The PROBING data structure uses linear probing hash tables and is designed for speed.
| 0 |
The TRIE model continues to use the least memory when loading (-P) with MAP_POPULATE, the default.
|
The second algorithm builds on a boosting algorithm called AdaBoost.
| 0 |
The first m pairs have labels yi, whereas for i = m + 1, , n the pairs are unlabeled.
|
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers.
| 0 |
We have developed a general approach for combining parsers when preserving the entire structure of a parse tree is important.
|
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
| 0 |
Finally, we wish to reiterate an important point.
|
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
| 0 |
This left 962 examples, of which 85 were noise.
|
This paper presents a maximum entropy-based named entity recognizer (NER).
| 0 |
For all lists except locations, the lists are processed into a list of tokens (unigrams).
|
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
| 0 |
The ATB has a much higher fraction of nuclei per tree, and a higher type-level error rate.
|
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
| 0 |
The second model (+PRIOR) utilizes the independent prior over type-level tag assignments P(T | ψ).
|
The second algorithm builds on a boosting algorithm called AdaBoost.
| 0 |
The final strong hypothesis, denoted 1(x), is then the sign of a weighted sum of the weak hypotheses, 1(x) = sign (Vii atht(x)), where the weights at are determined during the run of the algorithm, as we describe below.
|
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
| 0 |
Figure 2: An ATB sample from the human evaluation.
|
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
| 0 |
The learned information was recycled back into the resolver to improve its performance.
|
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
| 0 |
This modification brings the method closer to the DL-CoTrain algorithm described earlier, and is motivated by the intuition that all three labels should be kept healthily populated in the unlabeled examples, preventing one label from dominating — this deserves more theoretical investigation.
|
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
| 0 |
The model we use provides a simple framework in which to incorporate a wide variety of lexical information in a uniform way.
|
There are clustering approaches that assign a single POS tag to each word type.
| 0 |
Second, the reduced number of hidden variables and parameters dramatically speeds up learning and inference.
|
The manual evaluation of scoring translation on a graded scale from 1â5 seems to be very hard to perform.
| 0 |
While it is essential to be fluent in the target language, it is not strictly necessary to know the source language, if a reference translation was given.
|
The "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Märkische Allgemeine Zeitung, a German regional daily.
| 0 |
This offers the well-known advantages for interchangeability, but it raises the question of how to query the corpus across levels of annotation.
|
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
| 0 |
Nodes in the trie are based on arrays sorted by vocabulary identifier.
|
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
| 0 |
Only 2 links in the CC-domain (buy-purchase, acquire-acquisition) and 2 links (trader-dealer and head-chief) in the PC-domain are found in the same synset of WordNet 2.1 (http://wordnet.princeton.edu/).
|
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
| 0 |
The POS distributions over the foreign trigram types are used as features to learn a better unsupervised POS tagger (§5).
|
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
| 0 |
In addition, this formulation results in a dramatic reduction in the number of model parameters thereby, enabling unusually rapid training.
|
Human judges also pointed out difficulties with the evaluation of long sentences.
| 0 |
While automatic measures are an invaluable tool for the day-to-day development of machine translation systems, they are only an imperfect substitute for human assessment of translation quality, or as the acronym BLEU puts it, a bilingual evaluation understudy.
|
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
| 0 |
Here we do not submit to this view.
|
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
| 0 |
For t = 1, …, T and for j = 1, 2: where d_{j,i} = exp(−ỹ_i g_j(x_{j,i})). In practice, this greedy approach almost always results in an overall decrease in the value of Z_CO.
|
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
| 0 |
Coverage indicates the fraction of hypotheses in which the character yield exactly matched the reference.
|
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the inter-human agreement.
| 0 |
Both of these analyses are shown in Figure 4; fortunately, the correct analysis is also the one with the lowest cost, so it is this analysis that is chosen.
|
This paper presents methods to query N-gram language models, minimizing time and space costs.
| 0 |
We substantially outperform all of them on query speed and offer lower memory consumption than lossless alternatives.
|
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
| 0 |
In this work, we take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
|
This assumption, however, is not inherent to type-based tagging models.
| 0 |
Overall, the difference between our most basic model (1TW) and our full model (+FEATS) is 21.2% and 13.1% for the best and median settings respectively.
|
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
| 0 |
For the PROBING implementation, hash table sizes are in the millions, so the most relevant values are on the right side of the graph, where linear probing wins.
|
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
| 0 |
In.
|
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results.
| 0 |
Cohen and Smith approach this by introducing the α hyperparameter, which performs best when optimized independently for each sentence (cf.
|
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language.
| 0 |
Supervised part-of-speech (POS) taggers, for example, approach the level of inter-annotator agreement (Shen et al., 2007, 97.3% accuracy for English).
|
It also incorporates the Good-Turing methodin estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
| 0 |
We also thank ChaoHuang Chang, reviewers for the 1994 ACL conference, and four anonymous reviewers for Computational Linguistics for useful comments.
|
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
| 0 |
For instance, for out-ofdomain English-French, Systran has the best BLEU and manual scores.
|
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
| 0 |
evaluated to account for the same fraction of the data.
|
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
| 0 |
Once HMM parameters (θ, φ) are drawn, a token-level tag and word sequence, (t, w), is generated in the standard HMM fashion: a tag sequence t is generated from φ.
|
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
| 0 |
2.2.3 Lexical Caseframe Expectations The second type of contextual role knowledge learned by BABAR is Lexical Caseframe Expectations, which are used by the CFLex knowledge source.
|
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
| 0 |
As the reviewer also points out, this is a problem that is shared by, e.g., probabilistic context-free parsers, which tend to pick trees with fewer nodes.
|
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
| 0 |
Even without features, but still using the tag prior, our median result is 52.0%, still significantly outperforming Graça et al.
|
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
| 0 |
Previous Work.
|
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using a caseframe network for anaphora resolution, and used information extraction patterns to identify contextual clues for determining compatibility between NPs.
| 0 |
(c) After they blindfolded the men...
|
A beam search concept is applied as in speech recognition.
| 0 |
SSER: subjective sentence error rate: For a more detailed analysis, the translations are judged by a human test person.
|
The PROBING data structure uses linear probing hash tables and is designed for speed.
| 0 |
The developers aimed to reduce memory consumption at the expense of time.
|
They showed better grammars to improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
| 0 |
Lexicon and OOV Handling Our data-driven morphological-analyzer proposes analyses for unknown tokens as described in Section 5.
|
The corpus was annotated with different linguistic information.
| 0 |
2.3 Rhetorical structure.
|
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be.
| 0 |
Log-linear combination (loglin) improves on this in all cases, and also beats the pure IN system.
|
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.
| 0 |
We computed BLEU scores for each submission with a single reference translation.
|
This paper conducted research in the area of automatic paraphrase discovery.
| 0 |
Our clue is the NE instance pairs.
|