Dataset schema: source_text — string (lengths 27 to 368) · label — int64 (values 0 and 1) · target_text — string (lengths 1 to 5.38k)
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined both by how similar to it they appear to be and by whether they belong to general language.
0
(Thus the domain of the dev and test corpora matches IN.)
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
This overview is illustrated in Figure 1.
Replacing this with a ranked evaluation seems to be more suitable.
0
The statistical systems seem to still lag behind the commercial rule-based competition when translating into morphologically rich languages, as demonstrated by the results for English-German and English-French.
Because many systems performed similarly, they are not able to draw strong conclusions about the correlation between manual and automatic evaluation metrics.
0
By taking the ratio of matching n-grams to the total number of n-grams in the system output, we obtain the precision p_n for each n-gram order n. These values for n-gram precision are combined into a BLEU score: $\text{BLEU} = \text{BP} \cdot \exp\left(\sum_{n=1}^{N} w_n \log p_n\right)$. The formula also includes a brevity penalty $\text{BP} = \min(1, e^{1 - r/c})$ for too-short output, based on the total number of words in the system output c and in the reference r. BLEU is sensitive to tokenization.
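As an illustration of how these quantities combine, here is a minimal single-reference Python sketch with uniform weights w_n = 1/N and no smoothing; it is not the official BLEU implementation, and the function names are illustrative:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All n-grams of order n in a token list, with counts."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Single-reference, unsmoothed sentence BLEU with uniform weights."""
    log_prec_sum = 0.0
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        matches = sum(min(c, ref[g]) for g, c in cand.items())  # clipped matches
        total = sum(cand.values())
        if matches == 0 or total == 0:
            return 0.0  # without smoothing, any zero precision zeroes the score
        log_prec_sum += math.log(matches / total) / max_n
    c, r = len(candidate), len(reference)
    bp = 1.0 if c > r else math.exp(1 - r / c)  # brevity penalty
    return bp * math.exp(log_prec_sum)
```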
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
0
Supervised part-of-speech (POS) taggers, for example, approach the level of inter-annotator agreement (Shen et al., 2007, 97.3% accuracy for English).
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
0
The test data was again drawn from a segment of the Europarl corpus from the fourth quarter of 2000, which is excluded from the training data.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
Whereas Arabic linguistic theory assigns (1) and (2) to the class of pseudo verbs (inna and her sisters) since they can be inflected, the ATB conventions treat (2) as a complementizer, which means that it must be the head of SBAR.
Most IE researchers have been creating paraphrase knowledge by hand for specific tasks.
0
One is that smaller sets sometimes have meaningless keywords, like “strength” or “add” in the CC-domain, or “compare” in the PC-domain.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
(Parse-tree fragment from the figure: VBD “she added”, with VP, PUNC, SBAR, IN, NP, and NN nodes.)
It is annotated with several kinds of data: morphology, syntax, rhetorical structure, connectives, coreference, and information structure.
0
Consequently, we implemented our own annotation tool, ConAno, in Java (Stede and Heintze, 2004), which provides exactly the functionality needed for our purpose.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
A related point is that mutual information is helpful in augmenting existing electronic dictionaries (cf.
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
Even without features, but still using the tag prior, our median result is 52.0%, still significantly outperforming Graça et al.
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
0
4.3 Translation Experiments.
There is no global pruning.
0
For δ = 1, a new target language word is generated using the trigram language model p(e | e′, e′′).
Two general approaches are presented and two combination techniques are described for each approach.
0
We pick the parse that is most similar to the other parses by choosing the one with the highest sum of pairwise similarities.
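A minimal sketch of this selection rule, assuming some symmetric similarity function between parses (e.g. F1 over labeled constituents; the choice of function is an assumption, not specified here):

```python
def pick_most_similar(parses, similarity):
    """Return the parse whose summed pairwise similarity to all the
    other parses is highest; `similarity` is any symmetric score."""
    return max(parses,
               key=lambda p: sum(similarity(p, q) for q in parses if q is not p))
```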
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
As can be seen, GR and this "pared-down" statistical method perform quite similarly, though the statistical method is still slightly better.16 AG clearly performs much less like humans than these methods, whereas the full statistical algorithm, including morphological derivatives and names, performs most closely to humans among the automatic methods.
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
0
This is the first set that gives us a fair evaluation of the Bayes models, and the Bayes switching model performs significantly better than its non-parametric counterpart.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
Figure 1 reveals that an event that “damaged” objects may also cause injuries; a disaster that “occurred” may be investigated to find its “cause”; a disaster may “wreak” havoc as it “crosses” geographic regions; and vehicles that have a “driver” may also “carry” items.
Because many systems performed similarly, the author was not able to draw strong conclusions about the correlation between manual and automatic evaluation metrics.
0
Still, for a good number of sentences, we do have this direct comparison, which allows us to apply the sign test, as described in Section 2.2.
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
Our system does not currently make use of titles, but it would be straightforward to do so within the finite-state framework that we propose.
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
Beyond optimizing the memory size of TRIE, there are alternative data structures such as those in Guthrie and Hepple (2010).
This paper talks about Unsupervised Models for Named Entity Classification.
0
At first glance, the problem seems quite complex: a large number of rules is needed to cover the domain, suggesting that a large number of labeled examples is required to train an accurate classifier.
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform.
0
Almost all annotators expressed their preference to move to a ranking-based evaluation in the future.
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
Otherwise, the scope of the search problem shrinks recursively: if A[pivot] < k then this becomes the new lower bound: l ← pivot; if A[pivot] > k then u ← pivot.
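A sketch of this narrowing rule as ordinary bisection over a sorted array; the text keeps the pivot itself as the new bound, while the ±1 form below is the equivalent textbook variant that guarantees termination:

```python
def bisect_find(A, k):
    """Locate key k in sorted array A by repeatedly halving [l, u]."""
    l, u = 0, len(A) - 1
    while l <= u:
        pivot = (l + u) // 2
        if A[pivot] == k:
            return pivot        # found the key at the pivot
        elif A[pivot] < k:
            l = pivot + 1       # pivot becomes the new lower bound
        else:
            u = pivot - 1       # pivot becomes the new upper bound
    return None                 # key absent
```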
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
Although these are technically nominal, they have become known as “equational” sentences.
Because many systems performed similarly, they are not able to draw strong conclusions about the correlation between manual and automatic evaluation metrics.
0
Lack of correct reference translations was pointed out as a shortcoming of our evaluation.
There are clustering approaches that assign a single POS tag to each word type.
0
The use of ILP in learning the desired grammar significantly increases the computational complexity of this method.
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform.
0
We dropped, however, one of the languages, Finnish, partly to keep the number of tracks manageable, partly because we assumed that it would be hard to find enough Finnish speakers for the manual evaluation.
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, and Spanish texts to English and back.
0
We are therefore applying a different method, which has been used at the 2005 DARPA/NIST evaluation.
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
We have checked if there are similar verbs in other major domains, but this was the only one.
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
Both parameters depend on a single hyperparameter α.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
The ATB is disadvantaged by having fewer trees with longer average yields.5 (LDC A-E catalog numbers: LDC2008E61 (ATBp1v4), LDC2008E62 (ATBp2v3), and LDC2008E22 (ATBp3v3.1).)
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages.
0
We paid particular attention to minimize the number of free parameters, and used the same hyperparameters for all language pairs, rather than attempting language-specific tuning.
The features were weighted within a logistic model to give an overall weight that was applied to the phrase pair's frequency prior to making MAP-smoothed relative-frequency estimates; these estimates were in turn combined linearly with relative-frequency estimates from an in-domain phrase table.
0
This suggests a direct parallel to (1): where p̃(s, t) is a joint empirical distribution extracted from the IN dev set using the standard procedure.2 An alternative form of linear combination is a maximum a posteriori (MAP) combination (Bacchiani et al., 2004).
Replacing this with a ranked evaluation seems to be more suitable.
0
The normalized judgement per judge is the raw judgement plus (3 minus average raw judgement for this judge).
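The stated normalization is just a per-judge shift toward the scale midpoint; a minimal Python sketch, assuming raw 1–5 judgements keyed by judge (the data layout is illustrative):

```python
def normalize_judgements(raw_by_judge, midpoint=3):
    """Shift each judge's scores so the judge's mean sits at the scale
    midpoint: normalized = raw + (midpoint - judge's mean raw score)."""
    out = {}
    for judge, scores in raw_by_judge.items():
        offset = midpoint - sum(scores) / len(scores)
        out[judge] = [s + offset for s in scores]
    return out

# Example: a harsh judge averaging 2.5 gets every score shifted up by 0.5.
print(normalize_judgements({"judge_a": [2, 3, 2, 3]}))
```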
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation.
0
Two common cases are the attributive adjective and the process nominal maSdar, which can have a verbal reading.4 Attributive adjectives are hard because they are orthographically identical to nominals; they are inflected for gender, number, case, and definiteness.
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
The Stanford parser includes both the manually annotated grammar (§4) and an Arabic unknown word model with the following lexical features: 1.
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
Evaluation within a set The evaluation of paraphrases within a set of phrases which share a keyword is illustrated in Figure 4.
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
0
— I would also like to point out to commissioner Liikanen that it is not easy to take a matter to a national court.
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
0
Find keywords for each NE pair. The keywords are found for each NE category pair.
The approach has been successfully tested on the 8 000-word Verbmobil task.
0
3.2 Reordering with IBM Style.
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
In a more recent study than Chang et al., Wang, Li, and Chang (1992) propose a surname-driven, non-stochastic, rule-based system for identifying personal names.17 Wang, Li, and Chang also compare their performance with Chang et al.'s system.
In this paper, Das and Petrov approached the problem of inducing unsupervised part-of-speech taggers for languages that had no labeled training data but had translated text in a resource-rich language.
0
Central to our approach (see Algorithm 1) is a bilingual similarity graph built from a sentence-aligned parallel corpus.
There is no global pruning.
0
Method | Search CPU time [sec] | mWER [%] | SSER [%]
MonS   | 0.9                   | 42.0     | 30.5
QmS    | 10.6                  | 34.4     | 23.8
IbmS   | 28.6                  | 38.2     | 26.2
4.2 Performance Measures.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
We showed in §2 that lexical ambiguity explains the underperformance of these categories.
Replacing this with a ranked evaluation seems to be more suitable.
0
Say we find one system doing better on 20 of the blocks and worse on 80 of the blocks: is it significantly worse?
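Under the sign test, each of the 100 blocks is treated as a fair coin flip under the null hypothesis; a small sketch (not the paper's code) that computes the exact two-sided p-value:

```python
from math import comb

def sign_test_p(better, worse):
    """Exact two-sided sign test p-value; ties are assumed discarded."""
    n = better + worse
    k = min(better, worse)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# 20 better vs. 80 worse: p is roughly 1e-9, so yes, significantly worse.
print(sign_test_p(20, 80))
```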
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
Overview of the method. 2.2 Step-by-Step Algorithm.
Combining multiple highly-accurate independent parsers yields promising results.
0
These two principles guide experimentation in this framework, and together with the evaluation measures help us decide which specific type of substructure to combine.
Replacing this with a ranked evaluation seems to be more suitable.
0
Because of this, we retokenized and lowercased submitted output with our own tokenizer, which was also used to prepare the training and test data.
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
As in the case of the derivation trees of CFG's, nodes are labeled by a member of some finite set of symbols (perhaps only implicit in the grammar as in TAG's) used to denote derived structures.
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation.
0
We map the ATB morphological analyses to the shortened “Bies” tags for all experiments.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
The average agreement among the human judges is .76, and the average agreement between ST and the humans is .75, or about 99% of the interhuman agreement.15 One can better visualize the precision-recall similarity matrix by producing from that matrix a distance matrix, computing a classical metric multidimensional scaling (Torgerson 1958; Becker, Chambers, Wilks 1988) on that distance matrix, and plotting the first two most significant dimensions.
We chose one of four labels for each example: location, person, organization, or noise, where the noise category was used for items that were outside the three categories.
0
Each round is composed of two stages; each stage updates one of the classifiers while keeping the other classifier fixed.
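A skeleton of that two-stage round structure; the classifier objects and their fit/most_confident methods are assumptions for illustration, not from any specific implementation:

```python
def cotrain(clf1, clf2, labeled, unlabeled, rounds=10, per_stage=5):
    """Each round has two stages; each stage retrains one classifier and
    lets it label confident examples for the other, which stays fixed."""
    pool1, pool2 = list(labeled), list(labeled)
    for _ in range(rounds):
        # `most_confident` is an assumed helper returning newly labeled examples.
        clf1.fit(pool1)                                     # stage 1: clf2 fixed
        pool2 += clf1.most_confident(unlabeled, per_stage)  # clf1 labels for clf2
        clf2.fit(pool2)                                     # stage 2: clf1 fixed
        pool1 += clf2.most_confident(unlabeled, per_stage)  # clf2 labels for clf1
    return clf1, clf2
```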
This paper talks about Unsupervised Models for Named Entity Classification.
0
They also describe an application of co-training to classifying web pages (the two feature sets are the words on the page, and other pages pointing to the page).
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC-7 participants.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
Its only purpose is... (Footnote 3: This follows since each θ_t has S_t − 1 parameters and...)
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
0
For our demonstration system, we typically use the pruning threshold t_0 = 5.0 to speed up the search by a factor of 5 while allowing for a small degradation in translation accuracy.
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
0
The Levenshtein distance between the automatic translation and each of the reference translations is computed, and the minimum Levenshtein distance is taken.
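A minimal sketch of this multi-reference measure: word-level Levenshtein distance via the standard dynamic program, minimized over the references (illustrative code, not the evaluation tool itself):

```python
def levenshtein(a, b):
    """Word-level edit distance via the standard DP recurrence."""
    prev = list(range(len(b) + 1))
    for i, wa in enumerate(a, 1):
        cur = [i]
        for j, wb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (wa != wb)))    # substitution
        prev = cur
    return prev[-1]

def min_reference_distance(hypothesis, references):
    """Minimum edit distance of the hypothesis to any reference."""
    return min(levenshtein(hypothesis, ref) for ref in references)
```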
This topic has been getting more attention, driven by the needs of various NLP applications.
0
In this section, we will explain the algorithm step by step with examples.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
We are currently working on a complete open source implementation of a training and decoding system, which should become available over the summer. The in-domain test set is also taken from the Europarl corpus.
While many systems had similar performance, the results offered interesting insights, especially about the relative performance of statistical and rule-based systems.
0
Training and testing is based on the Europarl corpus.
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
0
We are given a source string $f_1^J = f_1 \ldots f_j \ldots f_J$ of length $J$, which is to be translated into a target string $e_1^I = e_1 \ldots e_i \ldots e_I$ of length $I$. Among all possible target strings, we will choose the string with the highest probability:
$$\hat{e}_1^I = \arg\max_{e_1^I} \Pr(e_1^I \mid f_1^J) = \arg\max_{e_1^I} \left\{ \Pr(e_1^I)\,\Pr(f_1^J \mid e_1^I) \right\} \quad (1)$$
The argmax operation denotes the search problem, i.e. the generation of the output sentence in the target language.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
Two issues distinguish the various proposals.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
We use a fixed number of iterations of sampling (see Figure 2 for a depiction).
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, and Spanish texts to English and back.
0
While many systems had similar performance, the results offer interesting insights, especially about the relative performance of statistical and rule-based systems.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
Therefore, we want state to encode the minimum amount of information necessary to properly compute language model scores, so that the decoder will be faster and make fewer search errors.
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be.
0
We extend the Matsoukas et al. approach in several ways.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
0
We thank Felix Hageloh (Hageloh, 2006) for providing us with this version, proposed in (Tsarfaty, 2006).
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
Topicalization of NP subjects in SVO configurations causes confusion with VO (pro-drop).
It also incorporates the Good-Turing methodin estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
For example, hanzi containing the INSECT radical 虫 tend to denote insects and other crawling animals; examples include 蛙 wa1 'frog,' 蜂 feng1 'wasp,' and 蛇 she2 'snake.'
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
The first author is supported by a National Defense Science and Engineering Graduate (NDSEG) fellowship.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
For the PROBING implementation, hash table sizes are in the millions, so the most relevant values are on the right side of the graph, where linear probing wins.
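The idea behind linear probing, as a toy Python sketch rather than KenLM's actual C++ implementation: hash to a bucket, then scan forward until the key or an empty slot appears:

```python
def probe_lookup(table, key):
    """Linear-probing lookup in a list of (key, value) pairs or None."""
    size = len(table)
    i = hash(key) % size
    for _ in range(size):          # guard: at most one full pass
        if table[i] is None:
            return None            # empty slot: key is absent
        stored_key, value = table[i]
        if stored_key == key:
            return value
        i = (i + 1) % size         # collision: step to the next bucket
    return None
```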
This corpus has several advantages: it is annotated at different levels.
0
The paper is organized as follows: Section 2 explains the different layers of annotation that have been produced or are being produced.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
For RandLM and IRSTLM, the effect of caching can be seen on speed and memory usage.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the interhuman agreement.
0
Here we show how non-projective dependency parsing can be achieved by combining a data driven projective parser with special graph transformation techniques.
0
We show how a datadriven deterministic dependency parser, in itself restricted to projective structures, can be combined with graph transformation techniques to produce non-projective structures.
It is probably the first analysis of Arabic parsing of this kind.
0
When the maSdar lacks a determiner, the constituent as a whole resembles the ubiquitous annexation construct iDafa.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
So, who won the competition?
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
Our annotators pointed out that very often they made almost random decisions as to what relation to choose, and where to locate the boundary of a span.
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be.
0
We obtained positive results using a very simple phrase-based system in two different adaptation settings: using English/French Europarl to improve performance on a small, specialized medical domain; and using non-news portions of the NIST09 training material to improve performance on the news-related corpora.
They focused on phrases which connect two Named Entities, and proceeded in two stages.
0
- EG, has agreed to be bought by H
- EG, now owned by H
- H to acquire EG
- H's agreement to buy EG
Three of those phrases are actually paraphrases, but sometimes there could be some noise, such as the second phrase above.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the interhuman agreement.
0
We therefore used the arithmetic mean of each interjudge precision-recall pair as a single measure of interjudge similarity.
In this paper, the authors are of the opinion that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
0
(2010)’s richest model: optimized via either EM or LBFGS, as their relative performance depends on the language.
Here we present two algorithms.
0
The normalization factor plays an important role in the AdaBoost algorithm.
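For context, the normalization factor Z_t is what turns the updated weights back into a distribution, and AdaBoost's training error is bounded by the product of the Z_t values; a minimal sketch of one round with ±1 labels (illustrative, not the paper's algorithm verbatim):

```python
import math

def adaboost_round(weights, predictions, labels):
    """One round of AdaBoost weight updates; predictions/labels are +/-1."""
    eps = sum(w for w, p, y in zip(weights, predictions, labels) if p != y)
    eps = min(max(eps, 1e-12), 1 - 1e-12)   # guard against degenerate errors
    alpha = 0.5 * math.log((1 - eps) / eps)
    unnorm = [w * math.exp(-alpha * p * y)
              for w, p, y in zip(weights, predictions, labels)]
    z = sum(unnorm)                          # the normalization factor Z_t
    return [w / z for w in unnorm], alpha, z
```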
The features were weighted within a logistic model to give an overall weight that was applied to the phrase pair's frequency prior to making MAP-smoothed relative-frequency estimates; these estimates were in turn combined linearly with relative-frequency estimates from an in-domain phrase table.
0
The natural baseline (baseline) outperforms the pure IN system only for EMEA/EP fren.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
(Figure: bracketings of “summit Sharm Al-Sheikh” in (a) the reference and (b) the Stanford parser output, with DTNNP Al-Sheikh.) If an n-gram occurs in a corpus position without a bracketing label, then we also add ⟨n, NIL⟩ to M. We call the set of unique n-grams with multiple labels in M the variation nuclei of C. Bracketing variation can result from either annotation errors or linguistic ambiguity.
The corpus was annotated with different linguistic information.
0
Hence we decided to select ten commentaries to form a ‘core corpus’, for which the entire range of annotation levels was realized, so that experiments with multi-level querying could commence.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
This WFST is then summed with the WFST implementing the dictionary and morphological rules, and the transitive closure of the resulting transducer is computed; see Pereira, Riley, and Sproat (1994) for an explanation of the notion of summing WFSTs.
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique, their model outperformed previous pipelined, integrated, or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
0
When the same token is to be interpreted as a single lexeme fmnh, it may function as a single adjective “fat”.
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
0
We extend Subramanya et al.’s intuitions to our bilingual setup.
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers.
0
Another way to interpret this is that less than 5% of the correct constituents are missing from the hypotheses generated by the union of the three parsers.
They have made use of local and global features to deal with the instances of the same token in a document.
0
It uses a maximum entropy framework and classifies each word given its features.
Because many systems performed similarly, they are not able to draw strong conclusions about the correlation between manual and automatic evaluation metrics.
0
Automatic scores are computed on a larger test set than manual scores (3064 sentences vs. 300–400 sentences). For the collected manual judgements, we do not necessarily have the same sentence judged for both systems (judges evaluate 5 systems out of the 8–10 participating systems).
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.
0
Graph construction does not require any labeled data, but makes use of two similarity functions.
While many systems had similar performance, the results offered interesting insights, especially about the relative performance of statistical and rule-based systems.
0
Often, two systems cannot be distinguished with a confidence of over 95%, so they are ranked the same.
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
The details are given in (Och and Ney, 2000).
The resulting model is compact, efficiently learnable and linguistically expressive.
0
A more forceful approach for encoding sparsity is posterior regularization, which constrains the posterior to have a small number of expected tag assignments (Grac¸a et al., 2009).
The model applies various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
(See also Wu and Fung [1994].)
The model applies various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
hanzi in the various name positions, derived from a million names.