source_text: string (lengths 27 to 368)
label: int64 (0 or 1)
target_text: string (lengths 1 to 5.38k)
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
0
2.
A beam search concept is applied as in speech recognition.
0
The details are given in (Tillmann, 2000).
In this paper, the authors are of the opinion that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
0
Table 5: Type-level English POS Tag Ranking. We list the top 5 and bottom 5 POS tags in the lexicon and the predictions of our models under the best hyperparameter setting. Gold: NNP NN JJ CD NNS (top), RBS PDT # ” , (bottom); 1TW: CD WRB NNS VBN NN (top), PRP$ WDT : MD . (bottom); +PRIOR: CD JJ NNS WP$ NN (top), RRB- , $ ” . (bottom); +FEATS: JJ NNS CD NNP UH (top), , PRP$ # . “ (bottom).
Replacing this with a ranked evaluation seems to be more suitable.
0
We dropped, however, one of the languages, Finnish, partly to keep the number of tracks manageable, partly because we assumed that it would be hard to find enough Finnish speakers for the manual evaluation.
In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
Each word is simply tagged with the semantic classes corresponding to all of its senses.
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be.
0
This highly effective approach is not directly applicable to the multinomial models used for core SMT components, which have no natural method for combining split features, so we rely on an instance-weighting approach (Jiang and Zhai, 2007) to downweight domain-specific examples in OUT.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
For a sequence of hanzi that is a possible name, we wish to assign a probability to that sequence qua name.
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
0
had to buy a new car.
They focused on phrases connecting two Named Entities, and proceeded in two stages.
0
For each set, the phrases with bracketed frequencies are considered not to be paraphrases in the set.
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations for entities already mentioned.
0
If the token is a month name, e.g., December, then the feature MonthName is set to 1.
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure.
0
The features are weighted within a logistic model to give an overall weight that is applied to the phrase pair’s frequency prior to making MAP-smoothed relative-frequency estimates (different weights are learned for each conditioning direction).
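As a minimal sketch of that computation, assuming hypothetical feature and weight names and a single prior strength standing in for the per-direction MAP smoothing details:

import math

def phrase_weight(features, lambdas):
    # logistic model: map a phrase pair's features to a weight in (0, 1)
    score = sum(lambdas.get(name, 0.0) * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-score))

def map_relative_freq(weighted_count, weighted_total, prior_prob, prior_strength):
    # MAP-smoothed relative frequency: weighted counts backed off to a prior
    return (weighted_count + prior_strength * prior_prob) / (weighted_total + prior_strength)

# hypothetical features for one out-of-domain phrase pair
w = phrase_weight({"log_freq": 2.3, "in_domain_sim": 0.7},
                  {"log_freq": 0.1, "in_domain_sim": 1.5})
print(map_relative_freq(w * 12, w * 30, prior_prob=0.4, prior_strength=5.0))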
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
They showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the inter-human agreement.
0
Similarly, hanzi sharing the GHOST radical 鬼 tend to denote spirits and demons, such as 鬼 gui3 'ghost' itself, 魔 mo2 'demon,' and 魘 yan3 'nightmare.'
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
Typical data structures are generalized Bloom filters that guarantee a customizable probability of returning the correct answer.
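For illustration, a plain membership-only Bloom filter with a tunable false-positive rate p; the structures used by randomized language models also store quantized values, which this sketch omits.

import hashlib, math

class BloomFilter:
    def __init__(self, n_items, p):
        # standard sizing: m bits and k hash functions for target rate p
        self.m = max(1, int(-n_items * math.log(p) / (math.log(2) ** 2)))
        self.k = max(1, int(round(self.m / n_items * math.log(2))))
        self.bits = bytearray((self.m + 7) // 8)

    def _positions(self, key):
        # double hashing: derive k probe positions from one strong hash
        digest = hashlib.sha256(key.encode("utf-8")).digest()
        h1 = int.from_bytes(digest[:8], "little")
        h2 = int.from_bytes(digest[8:16], "little") | 1
        return ((h1 + i * h2) % self.m for i in range(self.k))

    def add(self, key):
        for pos in self._positions(key):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, key):
        return all(self.bits[pos // 8] >> (pos % 8) & 1 for pos in self._positions(key))

bf = BloomFilter(n_items=1000, p=1 / 256)
bf.add("the quick brown")
print("the quick brown" in bf, "never inserted" in bf)  # True, almost surely False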
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
Links can be of two different kinds: anaphoric or bridging (definite noun phrases picking up an antecedent via world-knowledge).
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
3.2 Reordering with IBM Style.
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
0
One hybridization strategy is to let the parsers vote on constituents' membership in the hypothesized set.
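One way to instantiate constituent voting, assuming each parse is given as a set of labeled spans; with a strict majority threshold, Henderson and Brill note that the selected constituents cannot cross.

from collections import Counter

def constituent_voting(parses):
    # keep every (label, start, end) span proposed by more than half the parsers
    votes = Counter(span for parse in parses for span in parse)
    return {span for span, n in votes.items() if n > len(parses) / 2.0}

p1 = {("NP", 0, 2), ("VP", 2, 5), ("S", 0, 5)}
p2 = {("NP", 0, 2), ("VP", 3, 5), ("S", 0, 5)}
p3 = {("NP", 0, 2), ("VP", 2, 5), ("S", 0, 5)}
print(constituent_voting([p1, p2, p3]))  # NP(0,2), VP(2,5), S(0,5)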
This topic has been getting more attention, driven by the needs of various NLP applications.
0
Keywords with more than one word: In the evaluation, we explained that “chairman” and “vice chairman” are considered paraphrases.
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
Such a classification can be seen as a not-always-correct summary of global features.
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications.
0
The same form fmnh can be segmented as f-mnh, f (“that”) functioning as a relativizer with the form mnh.
This paper talks about Unsupervised Models for Named Entity Classification.
0
The AdaBoost algorithm was developed for supervised learning.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers.
0
We assume that the goal in dependency parsing is to construct a labeled dependency graph of the kind depicted in Figure 1.
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique, their model outperformed previous pipelined, integrated, or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
0
The first 3770 trees of the resulting set were then used for training, and the last 418 were used for testing.
They have made use of local and global features to deal with instances of the same token in a document.
0
For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
0
In.
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
When a token fmnh is to be interpreted as the lexeme sequence f/REL mnh/VB, the analysis introduces two distinct entities, the relativizer f (“that”) and the verb mnh (“counted”), and not as the complex entity “that counted”.
The use of global features has shown excellent results on the MUC-6 and MUC-7 test data.
0
This is an iterative method that improves the estimation of the parameters at each iteration.
These clusters are computed using an SVD variant without relying on transitional structure.
0
Table: per-language scores for the 1TW, +PRIOR, and +FEATS models on English, Danish, Dutch, German, Portuguese, Spanish, and Swedish.
This paper talks about Pseudo-Projective Dependency Parsing.
0
However, since we want to preserve as much of the original structure as possible, we are interested in finding a transformation that involves a minimal number of lifts.
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
The German finite verbs 'bin' (second example) and 'konnten' (third example) are too far away from the personal pronouns 'ich' and 'Sie' (6 and 5 source sentence positions, respectively).
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
For the PROBING implementation, hash table sizes are in the millions, so the most relevant values are on the right side of the graph, where linear probing wins.
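A toy linear-probing table showing the access pattern; the PROBING data structure itself is a flat array of 64-bit hashes sized by a fixed multiplier, which this Python sketch does not reproduce.

class ProbingTable:
    def __init__(self, capacity):
        self.slots = [None] * capacity   # (key, value) entries; None = empty

    def _find(self, key):
        # walk forward from the hashed bucket until the key or an empty slot
        # is found; assumes load factor < 1 so the scan always terminates
        i = hash(key) % len(self.slots)
        while self.slots[i] is not None and self.slots[i][0] != key:
            i = (i + 1) % len(self.slots)
        return i

    def put(self, key, value):
        self.slots[self._find(key)] = (key, value)

    def get(self, key, default=None):
        entry = self.slots[self._find(key)]
        return entry[1] if entry is not None else default

t = ProbingTable(capacity=8)
t.put(("trigram", "of", "the"), -0.31)
print(t.get(("trigram", "of", "the")))  # -0.31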
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
diesem 3.
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
We consider the unsupervised POS induction problem without the use of a tagging dictionary.
There is no global pruning.
0
During the search process, a partial hypothesis is extended by choosing a source sentence position, which has not been aligned with a target sentence position yet.
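A sketch of that expansion step, assuming a hypothesis is a (coverage bitmask, last covered position, score) triple; all cost terms are left out.

def extend(hyp, n_source):
    # expand a partial hypothesis over every source position not yet covered;
    # the real decoder would add distortion, lexicon, and LM costs per branch
    coverage, last, score = hyp
    for j in range(n_source):
        if not coverage & (1 << j):       # position j is still uncovered
            yield (coverage | (1 << j), j, score)

# one expansion step from the empty hypothesis over a 3-word source sentence
print(list(extend((0, -1, 0.0), 3)))      # three successors, covering 0, 1, or 2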
This paper presents research in the area of automatic paraphrase discovery.
0
Because of this threshold, very few NE instance pairs could be used and hence the variety of phrases was also limited.
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.
0
The paper is structured as follows.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
See Section 5.
Their results show that their high-performance NER uses less training data than other systems.
0
Bikel et al.
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
We suspect that the higher precision in the disasters domain may be due to its substantially larger training corpus.
Because many systems performed similarly, the author was not able to draw strong conclusions about the correlation between manual and automatic evaluation metrics.
0
The other half was replaced by other participants, so we ended up with roughly the same number.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, and proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
A contextual role represents the role that a noun phrase plays in an event or relationship.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
interested in correlations between prosody and discourse structure.
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
Using RandLM and the documented settings (8-bit values and 1/256 false-positive probability), we built a stupid backoff model on the same data as in Section 5.2.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
Particular instances of relations are associated with goodness scores.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
However, TRIE partitions storage by n-gram length, so walking the trie reads N disjoint pages.
They showed that it is useful to abstract away from the details of the formalism and examine the nature of the derivation process as reflected by properties of the derivation trees, finding that several of the formalisms considered can be seen as closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
These derivation trees show how the composition operations were used to derive the final structures from elementary structures.
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
0
In this case, we have no finite-state restrictions for the search space.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
Of course, we.
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
It was developed in response to the non-terminal/terminal bias of Evalb, but Clegg and Shepherd (2005) showed that it is also a valuable diagnostic tool for trees with complex deep structures such as those found in the ATB.
The texts were annotated with the RSTtool.
0
We use MMAX for this annotation as well.
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
The ability to redistribute belief values across sets rather than individual hypotheses is key.
The instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
0
This is a straightforward technique that is arguably better suited to the adaptation task than the standard method of treating representative IN sentences as queries, then pooling the match results.
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
As is standard, we report the greedy one-to-one (Haghighi and Klein, 2006) and the many-to-one token-level accuracy obtained from mapping model states to gold POS tags.
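A small sketch of many-to-one mapping accuracy with illustrative inputs; greedy one-to-one additionally forbids two model states from mapping to the same gold tag.

from collections import Counter, defaultdict

def many_to_one(pred, gold):
    # map each induced state to its most frequent gold tag, then score tokens
    by_state = defaultdict(Counter)
    for p, g in zip(pred, gold):
        by_state[p][g] += 1
    mapping = {s: c.most_common(1)[0][0] for s, c in by_state.items()}
    return sum(mapping[p] == g for p, g in zip(pred, gold)) / len(gold)

print(many_to_one([0, 0, 1, 2, 2], ["DT", "DT", "NN", "VB", "NN"]))  # 0.8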
They showed that it is useful to abstract away from the details of the formalism and examine the nature of the derivation process as reflected by properties of the derivation trees, finding that several of the formalisms considered can be seen as closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
Hence, their relationship to formalisms such as HG's and TAG's is of interest.
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
0
Confidence Interval: To estimate confidence intervals for the average mean scores for the systems, we use standard significance testing.
The approach has been successfully tested on the 8 000-word Verbmobil task.
0
kann 7.nicht 8.
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
To improve agreement during the revision process, a dual-blind evaluation was performed in which 10% of the data was annotated by independent teams.
The instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
0
For simplicity, we assume that OUT is homogeneous.
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
0
The restriction can be expressed in terms of the number of uncovered source sentence positions to the left of the rightmost position m in the coverage set.
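A sketch of that test, assuming a set-based coverage representation and a hypothetical window of 4 positions:

def extension_allowed(coverage, j, window=4):
    # IBM-style reordering restriction: after covering source position j, the
    # number of uncovered positions to the left of the rightmost covered
    # position must not exceed the window
    new_cov = set(coverage) | {j}
    rightmost = max(new_cov)
    uncovered_left = sum(1 for i in range(rightmost) if i not in new_cov)
    return uncovered_left <= window

print(extension_allowed({0, 1}, 2))   # True: a monotone extension
print(extension_allowed({0}, 9))      # False: skips 8 positions, window is 4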
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.
0
The BLEU metric, as all currently proposed automatic metrics, is occasionally suspected to be biased towards statistical systems, especially the phrase-based systems currently in use.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
Two measures that can be used to compare judgments are: 1.
The use of global features has shown excellent results on the MUC-6 and MUC-7 test data.
0
For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. has an additional feature of I end set to 1.
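A sketch of that feature assignment, assuming the token sequence and its longest substring occurring elsewhere in the document are both given as lists:

def substring_features(tokens, longest_match):
    # mark tokens of the longest earlier-occurring substring with
    # I-begin / I-continue / I-end flags, per the description above
    feats = [{} for _ in tokens]
    n = len(longest_match)
    for start in range(len(tokens) - n + 1):
        if tokens[start:start + n] == longest_match:
            feats[start]["I-begin"] = 1
            for k in range(start + 1, start + n - 1):
                feats[k]["I-continue"] = 1
            feats[start + n - 1]["I-end"] = 1
    return feats

toks = ["Even", "News", "Broadcasting", "Corp.", "said"]
print(substring_features(toks, ["News", "Broadcasting", "Corp."]))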
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
We also add an annotation for one-level iDafa (oneLevelIdafa) constructs since they make up more than 75% of the iDafa NPs in the ATB (Gabbard and Kulick, 2008).
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
To this end, we picked 100 sentences at random containing 4,372 total hanzi from a test corpus. (There were 487 marks of punctuation in the test sentences, including the sentence-final periods, meaning that the average inter-punctuation distance was about 9 hanzi.)
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
The orthographic normalization strategy we use is simple. In addition to removing all diacritics, we strip instances of taTweel, collapse variants of alif to bare alif, and map Arabic punctuation characters to their Latin equivalents.
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
The following two error criteria are used in our experiments: mWER: multi-reference WER: We use the Levenshtein distance between the automatic translation and several reference translations as a measure of the translation errors.
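A compact sketch of mWER, assuming the distance to the closest reference is taken and normalized by that reference's length; the paper's exact normalization may differ.

def levenshtein(a, b):
    # word-level edit distance via the standard DP recurrence
    prev = list(range(len(b) + 1))
    for i, wa in enumerate(a, 1):
        cur = [i]
        for j, wb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (wa != wb)))   # substitution
        prev = cur
    return prev[-1]

def mwer(hyp, references):
    # multi-reference WER: closest reference, length-normalized
    return min(levenshtein(hyp, r) / len(r) for r in references)

print(mwer("the man went home".split(),
           ["the man went home early".split(), "a man goes home".split()]))  # 0.2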
The authors show that the PATB is similar to other treebanks but that annotation consistency remains low.
0
They found replacing it with a ranked evaluation to be more suitable.
0
The human judges were presented with the following definition of adequacy and fluency, but no additional instructions:
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
The relevance of the distinction between, say, phonological words and, say, dictionary words is shown by an example like 中华人民共和国 zhong1hua2 ren2min2 gong4he2-guo2 (China people republic) 'People's Republic of China.'
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
0
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
The family name set is restricted: there are a few hundred single-hanzi family names, and about ten double-hanzi ones.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
The weak hypothesis can abstain from predicting the label of an instance x by setting h(x) = 0.
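A sketch of confidence-rated boosting with abstaining weak hypotheses, assuming labels in {-1, +1} and h(x) in {-1, 0, +1}; since exp(0) = 1, abstentions leave example weights unchanged.

import math

def adaboost_abstain(X, y, weak_hyps, rounds=5, eps=1e-8):
    D = [1.0 / len(X)] * len(X)                       # example weights
    ensemble = []
    for _ in range(rounds):
        # choose the weak hypothesis with the largest weighted advantage
        h = max(weak_hyps,
                key=lambda h: abs(sum(d * yi * h(x) for d, x, yi in zip(D, X, y))))
        w_pos = sum(d for d, x, yi in zip(D, X, y) if h(x) == yi)
        w_neg = sum(d for d, x, yi in zip(D, X, y) if h(x) == -yi)
        alpha = 0.5 * math.log((w_pos + eps) / (w_neg + eps))
        ensemble.append((alpha, h))
        # reweight; abstained examples (h(x) == 0) keep their current weight
        D = [d * math.exp(-alpha * yi * h(x)) for d, x, yi in zip(D, X, y)]
        Z = sum(D)
        D = [d / Z for d in D]
    return lambda x: 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1

h1 = lambda x: 1 if x > 0 else -1          # always predicts
h2 = lambda x: 1 if x > 2 else 0           # abstains on x <= 2
clf = adaboost_abstain([-3, -1, 1, 3], [-1, -1, 1, 1], [h1, h2])
print([clf(x) for x in [-2, 4]])           # expect [-1, 1]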
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
3.1 General Knowledge Sources.
BABAR achieved successful results in both the terrorism and natural disasters domains, and its contextual-role knowledge proved especially helpful for resolving pronouns.
0
For the disasters domain, 8245 texts were used for training and the 40 test documents contained 447 anaphoric links.
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
0
SSER: subjective sentence error rate: For a more detailed analysis, the translations are judged by a human test person.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
Figure 6: Plot of log frequency of base noun against log frequency of plural nouns; log(F)_base: R^2 = 0.20 (p < 0.005).
Combining multiple highly-accurate independent parsers yields promising results.
0
We pick the parse that is most similar to the other parses by choosing the one with the highest sum of pairwise similarities.
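A sketch of that selection rule, assuming parses are sets of labeled spans and similarity is constituent-overlap F1 (both choices are illustrative):

def pick_centroid_parse(parses, similarity):
    # return the parse with the highest total similarity to all the others
    return max(parses, key=lambda p: sum(similarity(p, q) for q in parses if q is not p))

def f1_similarity(p, q):
    # constituent-overlap F1 between two parses given as sets of labeled spans
    overlap = len(p & q)
    return 2.0 * overlap / (len(p) + len(q)) if overlap else 0.0

a = {("S", 0, 5), ("NP", 0, 2)}
b = {("S", 0, 5), ("NP", 0, 2), ("VP", 2, 5)}
c = {("S", 0, 5), ("VP", 2, 5)}
print(pick_centroid_parse([a, b, c], f1_similarity) == b)  # True: b overlaps both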
Here we present two algorithms.
0
(5) and ht into Equ.
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined both by how similar to it they appear to be and by whether they belong to general language or not.
0
Its success depends on the two domains being relatively close, and on the OUT corpus not being so large as to overwhelm the contribution of IN.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
Recall that precision is defined to be the number of correct hits divided by the total number of items.
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech.
0
We have presented a new method for non-projective dependency parsing, based on a combination of data-driven projective dependency parsing and graph transformation techniques.
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
0
am 11.
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
Bikel et al.
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.
1
In this paper we have proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.
Their results show that their high-performance NER uses less training data than other systems.
0
Sentences (2) and (3) help to disambiguate one way or the other.
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back.
0
Because of this, we retokenized and lowercased submitted output with our own tokenizer, which was also used to prepare the training and test data.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
The
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
0
The approach has been successfully tested on the 8 000-word Verbmobil task.
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
0
We use a patched version of BitPar allowing for direct input of probabilities instead of counts.
They showed that it is useful to abstract away from the details of the formalism and examine the nature of the derivation process as reflected by properties of the derivation trees, finding that several of the formalisms considered can be seen as closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
Instead of the names of elementary trees of a TAG, the nodes are labeled by a sequence of names of trees in an elementary tree set.
It is annotated with several layers of information: morphology, syntax, rhetorical structure, connectives, coreference, and information structure.
0
The state in future has not enough work for its many teachers.
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
Step 4.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, and proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
Figure 2: Lexical Caseframe Expectations; Figure 3: Semantic Caseframe Expectations. Terrorism caseframes: <agent> assassinated (group, human); investigation into <NP> (event); exploded outside <NP> (building). Natural Disasters caseframes: <agent> investigating cause (group, human); survivor of <NP> (event, natphenom); hit with <NP> (attribute, natphenom). To illustrate how lexical expectations are used, suppose we want to determine whether noun phrase X is the antecedent for noun phrase Y. If they are coreferent, then X and Y should be substitutable for one another in the story. Consider these sentences: (S1) Fred was killed by a masked man with a revolver.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
This allows the learners to "bootstrap" each other by filling in the labels of the instances on which the other side has abstained so far.
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
0
The general idea for the knowledge- based part is to have the system use as much information as it can find at its disposal to produce a target representation as specific as possible and as underspecified as necessary.
The instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
0
The weight on each sentence is a value in [0, 1] computed by a perceptron with Boolean features that indicate collection and genre membership.
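A sketch of such a scorer, assuming a sigmoid squashes the perceptron's linear score into [0, 1]; the feature names and weights here are hypothetical.

import math

def sentence_weight(bool_feats, weights, bias=0.0):
    # map Boolean collection/genre indicators to a weight in [0, 1]
    score = bias + sum(weights.get(f, 0.0) for f, on in bool_feats.items() if on)
    return 1.0 / (1.0 + math.exp(-score))

w = {"collection=news": 0.8, "genre=dialogue": -1.2}   # hypothetical learned weights
print(sentence_weight({"collection=news": True, "genre=dialogue": True}, w))  # ~0.40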
It is probably the first analysis of Arabic parsing of this kind.
0
In MSA, SVO usually appears in non-matrix clauses.
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back.
0
English was again paired with German, French, and Spanish.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
(Berland and Charniak 99) describe a method for extracting parts of objects from wholes (e.g., "speedometer" from "car") from a large corpus using hand-crafted patterns.