source_text (string, 27–368 chars) · label (int64: 0 or 1) · target_text (string, 1–5.38k chars)
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
A Brief Introduction to the Chinese Writing System Most readers will undoubtedly be at least somewhat familiar with the nature of the Chinese writing system, but there are enough common misunderstandings that it is as well to spend a few paragraphs on properties of the Chinese script that will be relevant to topics discussed in this paper.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
11a/11b and 14a/14b respectively).
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
We have provided methods for handling certain classes of unknown words, and models for other classes could be provided, as we have noted.
The "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Märkische Allgemeine Zeitung, a German regional daily.
0
2.3 Rhetorical structure.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model; they proposed using a caseframe network for anaphora resolution, and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
A different knowledge source, called CFSem, compares the semantic expectations of the caseframe that extracts the anaphor with the semantic expectations of the caseframe that extracts the candidate.
This corpus has several advantages: it is annotated at different levels.
0
For displaying and querying the annotated text, we make use of the Annis Linguistic Database developed in our group for a large research effort ('Sonderforschungsbereich') revolving around information structure.
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
0
The alignment mapping is j → i = a_j, from source position j to target position i = a_j. The use of this alignment model raises major problems if a source word has to be aligned to several target words, e.g. when translating German compound nouns.
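A minimal sketch of why a function-style alignment a_j cannot cover the compound-noun case: each source position stores exactly one target index, so a compound aligned to two target words necessarily leaves one of them unlinked. The sentence pair below is invented for illustration, not taken from the paper's data.

```python
# Alignments as a function a: source position j -> target position a_j.
# Hypothetical sentence pair (not from the paper's data).
source = ["das", "Fernsehgeraet"]        # German compound noun
target = ["the", "television", "set"]    # one source word covers two target words

a = {0: 0, 1: 1}   # function form: "Fernsehgeraet" can link to "television" only

covered = set(a.values())
unaligned_targets = [t for i, t in enumerate(target) if i not in covered]
print(unaligned_targets)   # ['set'] has no source position left to claim it
```

Inverting the direction (target position to source position) would instead let several target words share one source word, which is why the direction of the alignment function matters for compounds.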
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
Better Arabic Parsing: Baselines, Evaluations, and Analysis
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
On average, 6 reference translations per automatic translation are available.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
IRST is not thread-safe.
Most IE researchers have been creating paraphrase knowledge by hand for specific tasks.
0
Limitations There are several limitations in the methods.
Their results show that their high-performance NER uses less training data than other systems.
0
It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be.
0
We have already mentioned the closely related work by Matsoukas et al (2009) on discriminative corpus weighting, and Jiang and Zhai (2007) on (nondiscriminative) instance weighting.
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform.
0
About half of the participants of last year’s shared task participated again.
This assumption, however, is not inherent to type-based tagging models.
0
Our analysis identifies three key factors driving our performance gain: 1) selecting a model structure which directly encodes tag sparsity, 2) a type-level prior on tag assignments, and 3) a straightforward naïve Bayes approach to incorporate features.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
(b) ta1 de cai2neng2 hen3 gao1 (he DE talent very high) 'He has great talent.' While the current algorithm correctly handles the (b) sentences, it fails to handle the (a) sentences, since it does not have enough information to know not to group the sequences ma3lu4 and cai2neng2 respectively.
Their results show that their high-performance NER uses less training data than other systems.
0
Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
A direct-mapped cache makes BerkeleyLM faster on repeated queries, but their fastest (scrolling) cached version is still slower than uncached PROBING, even on cache-friendly queries.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
The domain is general politics, economics and science.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
In this paper, the authors note that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
0
Model Overview The model starts by generating a tag assignment T for each word type in a vocabulary, assuming one tag per word.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
Finally, the statistical method fails to correctly group hanzi in cases where the individual hanzi comprising the name are listed in the dictionary as being relatively high-frequency single-hanzi words.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
Furthermore, by inverting the transducer so that it maps from phonemic transcriptions to hanzi sequences, one can apply the segmenter to other problems, such as speech recognition (Pereira, Riley, and Sproat 1994).
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
0
However, a recent study (Callison-Burch et al., 2006), pointed out that this correlation may not always be strong.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
We have presented a method for unsupervised part-of-speech tagging that considers a word type and its allowed POS tags as a primary element of the model.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
Thus, if one wants to segment words-for any purpose-from Chinese sentences, one faces a more difficult task than one does in English since one cannot use spacing as a guide.
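To make the difficulty concrete, here is a greedy longest-match baseline for dictionary-based segmentation of unspaced text. This is a classic heuristic, NOT the stochastic finite-state model the paper presents, and the toy lexicon is invented.

```python
def max_match(text, lexicon, max_len=4):
    """Greedy left-to-right longest-match segmentation: a classic baseline,
    NOT the stochastic finite-state model presented in the paper."""
    words, i = [], 0
    while i < len(text):
        for l in range(min(max_len, len(text) - i), 0, -1):
            if text[i:i + l] in lexicon or l == 1:   # fall back to a single hanzi
                words.append(text[i:i + l])
                i += l
                break
    return words

lexicon = {"中国", "人民", "银行"}                    # toy dictionary
print(max_match("中国人民银行", lexicon))             # ['中国', '人民', '银行']
```

The baseline's weakness is exactly the one the paper targets: with no probabilities, it cannot choose between competing dictionary matches or handle unknown words beyond a single-character fallback.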
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results.
0
The aforementioned surface form bcl, for example, may also stand for the lexical item “onion”, a Noun.
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
1
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank, leaving only sections 22 and 23 completely untouched during the development of any of the parsers.
Their results show that their high-performance NER uses less training data than other systems.
1
These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
In that work, mutual information was used to decide whether to group adjacent hanzi into two-hanzi words.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
Models To assess the marginal utility of each component of the model (see Section 3), we incrementally increase its sophistication.
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.
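The feature assignment just described can be sketched as follows; the function name and the dictionary representation are illustrative, not the authors' implementation.

```python
def acronym_features(expansion, acronym):
    """Sketch of the document-level acronym features described above;
    the function and dict representation are illustrative, not the
    authors' implementation."""
    feats = {}
    last = len(expansion) - 1
    for j, word in enumerate(expansion):
        if j == 0:
            feats[word] = "A_begin"
        elif j == last:
            feats[word] = "A_end"
        else:
            feats[word] = "A_continue"
    feats[acronym] = "A_unique"
    return feats

print(acronym_features(["Federal", "Communications", "Commission"], "FCC"))
# {'Federal': 'A_begin', 'Communications': 'A_continue',
#  'Commission': 'A_end', 'FCC': 'A_unique'}
```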
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
0
Both authors are members of the Center for Language and Speech Processing at Johns Hopkins University.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
Hence, the different averages of manual scores for the different language pairs reflect the behaviour of the judges, not the quality of the systems on different language pairs.
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
Because the Bikel parser has been parameterized for Arabic by the LDC, we do not change the default model settings.
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
0
For English POS tagging, Berg-Kirkpatrick et al. (2010) found that this direct gradient method performed better (>7% absolute accuracy) than using a feature-enhanced modification of the Expectation-Maximization (EM) algorithm (Dempster et al., 1977). Moreover, this route of optimization outperformed a vanilla HMM trained with EM by 12%.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
Compared to decoding, this task is cache-unfriendly in that repeated queries happen only as they naturally occur in text.
This paper talks about Pseudo-Projective Dependency Parsing.
0
We have presented a new method for non-projective dependency parsing, based on a combination of data-driven projective dependency parsing and graph transformation techniques.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
W : word types (W1, ..., Wn) (observed); T : tag assignments (T1, ..., Tn).
There are clustering approaches that assign a single POS tag to each word type.
0
A promising direction for future work is to explicitly model a distribution over tags for each word type.
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
0
The Verbmobil task is an appointment scheduling task.
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
We report micro-averaged (whole corpus) and macro-averaged (per sentence) scores, and add a constraint on the removal of punctuation, which has a single tag (PUNC) in the ATB.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
However, the data sparsity induced by vocalization makes it difficult to train statistical models on corpora of the size of the ATB, so vocalizing and then parsing may well not help performance.
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
We report the F1 value of both measures.
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
The problem is to store these two values for a large and sparse set of n-grams in a way that makes queries efficient.
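A toy Python rendering of the idea behind a linear-probing table for n-gram values. The real PROBING model hashes 64-bit keys into one contiguous array in C++; only the probing logic is mirrored here, and the class name and example n-grams are invented.

```python
class ProbingTable:
    """Toy linear-probing table mapping an n-gram to its (log probability,
    backoff) pair.  The real PROBING model hashes 64-bit keys into one
    contiguous array in C++; only the probing logic is mirrored here."""
    def __init__(self, capacity):
        self.keys = [None] * capacity
        self.vals = [None] * capacity
        self.capacity = capacity

    def _slot(self, ngram):
        i = hash(ngram) % self.capacity
        # Linear probing: walk forward until we hit the key or an empty bucket.
        while self.keys[i] is not None and self.keys[i] != ngram:
            i = (i + 1) % self.capacity
        return i

    def insert(self, ngram, logprob, backoff):
        i = self._slot(ngram)
        self.keys[i] = ngram
        self.vals[i] = (logprob, backoff)

    def lookup(self, ngram):
        i = self._slot(ngram)
        return self.vals[i] if self.keys[i] == ngram else None

t = ProbingTable(8)
t.insert(("is", "one"), -1.2, -0.4)
print(t.lookup(("is", "one")))    # (-1.2, -0.4)
print(t.lookup(("no", "entry")))  # None
```

Open addressing keeps all entries in flat arrays, which is what makes the layout cache-friendly and memory-mappable compared with pointer-chasing trie nodes.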
BABAR showed successful results in both the terrorism and natural disaster domains, with contextual-role knowledge proving especially helpful for pronouns.
0
4 Evaluation Results.
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
Depending on the threshold t0, the search algorithm may miss the globally optimal path which typically results in additional translation errors.
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
In total there are O(K²) parameters associated with the transition parameters.
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
The ATB is disadvantaged by having fewer trees with longer average yields. (LDC A-E catalog numbers: LDC2008E61 (ATBp1v4), LDC2008E62 (ATBp2v3), and LDC2008E22 (ATBp3v3.1).)
It is probably the first analysis of Arabic parsing of this kind.
0
We are also grateful to Markus Dickinson, Ali Farghaly, Nizar Habash, Seth Kulick, David McCloskey, Claude Reichard, Ryan Roth, and Reut Tsarfaty for constructive discussions.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
SRILM's compact variant has an incredibly expensive destructor, dwarfing the time it takes to perform translation, so we also modified Moses to avoid the destructor by calling exit instead of returning normally.
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
0
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
The use of global features has shown excellent results on MUC-6 and MUC-7 test data.
0
If they are found in a list, then a feature for that list will be set to 1.
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
Proper names that match are resolved with each other.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
Based on revision 4041, we modified Moses to print process statistics before terminating.
Two general approaches are presented and two combination techniques are described for each approach.
0
Their theoretical finding is simply stated: classification error rate decreases toward the noise rate exponentially in the number of independent, accurate classifiers.
A beam search concept is applied as in speech recognition.
0
Q_e'(e, C, j) = p(f_j | e) · max over δ, e'', j' ∈ C \ {j} of { p(j | j', J) · p(δ) · p_δ(e | e', e'') · Q_e''(e', C \ {j}, j') }. The DP equation is evaluated recursively for each hypothesis (e', e, C, j).
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
Gazdar (1985) argues that sharing of stacks can be used to give analyses for coordination.
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
Thus, for example, one successor process will have M in the existential state q_a, with the indices encoding x_1, ..., x_n in the first 2n tapes.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
Unfortunately, Yarowsky's method is not well understood from a theoretical viewpoint: we would like to formalize the notion of redundancy in unlabeled data, and set up the learning task as optimization of some appropriate objective function.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
We have provided methods for handling certain classes of unknown words, and models for other classes could be provided, as we have noted.
This corpus has several advantages: it is annotated at different levels.
0
Thus we opted not to take the step of creating more precise written annotation guidelines (as (Carlson, Marcu 2001) did for English), which would then allow for measuring inter-annotator agreement.
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
However, MADA is language-specific and relies on manually constructed dictionaries.
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
0
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank, leaving only sections 22 and 23 completely untouched during the development of any of the parsers.
There are clustering approaches that assign a single POS tag to each word type.
0
The original tag set for the CoNLL-X Dutch data set consists of compounded tags that are used to tag multi-word units (MWUs) resulting in a tag set of over 300 tags.
Here we show how non-projective dependency parsing can be achieved by combining a data driven projective parser with special graph transformation techniques.
0
Still, from a theoretical point of view, projective parsing of non-projective structures has the drawback that it rules out perfect accuracy even as an asymptotic goal.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
1
Finally, we provide a realistic eval uation in which segmentation is performed both in a pipeline and jointly with parsing (§6).
The AdaBoost algorithm was developed for supervised learning.
0
AdaBoost is given access to a weak learning algorithm, which accepts as input the training examples, along with a distribution over the instances.
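A minimal AdaBoost sketch in this spirit: the "weak learning algorithm" here simply picks, from a fixed pool of threshold stumps, the hypothesis with the lowest error under the current distribution over instances, and the confidence uses an ε-smoothed formula α_t = ½ ln((W+ + ε)/(W− + ε)). The pool, data, and number of rounds are invented for illustration.

```python
import math

def adaboost(examples, labels, weak_learners, rounds, eps=1e-4):
    """Minimal AdaBoost sketch.  The 'weak learning algorithm' here just
    picks the pool member with the lowest error under the current
    distribution D; the confidence uses an eps-smoothed W+/W- formula."""
    n = len(examples)
    D = [1.0 / n] * n                                  # distribution over instances
    ensemble = []
    for _ in range(rounds):
        h = min(weak_learners,
                key=lambda hyp: sum(d for d, x, y in zip(D, examples, labels)
                                    if hyp(x) != y))
        w_plus = sum(d for d, x, y in zip(D, examples, labels) if h(x) == y)
        w_minus = 1.0 - w_plus
        alpha = 0.5 * math.log((w_plus + eps) / (w_minus + eps))
        ensemble.append((alpha, h))
        D = [d * math.exp(-alpha * y * h(x)) for d, x, y in zip(D, examples, labels)]
        z = sum(D)
        D = [d / z for d in D]                         # renormalize the distribution
    return lambda x: 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1

# Toy 1-D data with threshold stumps as the weak-hypothesis pool (all invented).
xs = [-2.0, -1.0, 1.0, 2.0]
ys = [-1, -1, 1, 1]
stumps = [lambda x, t=t: 1 if x > t else -1 for t in (-1.5, 0.0, 1.5)]
f = adaboost(xs, ys, stumps, rounds=3)
print(f(1.5), f(-0.5))   # 1 -1
```

Reweighting by exp(−α·y·h(x)) is what forces later rounds to concentrate on the examples the current ensemble gets wrong.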
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data.
There are clustering approaches that assign a single POS tag to each word type.
0
The second model (+PRIOR) utilizes the independent prior over type-level tag assignments P (T |ψ).
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
Not surprisingly some semantic classes are better for names than others: in our corpora, many names are picked from the GRASS class but very few from the SICKNESS class.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
To prevent this we "smooth" the confidence by adding a small value, ε, to both W+ and W−, giving α_t = (1/2) ln((W+ + ε) / (W− + ε)). This smoothed value of α_t is then plugged into the update rule.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
However, this result is consistent with the results of experiments discussed in Wu and Fung (1994).
This paper conducted research in the area of automatic paraphrase discovery.
0
The procedure using the tagged sentences to discover paraphrases takes about one hour on a 2GHz Pentium 4 PC with 1GB of memory.
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
0
We call this approach parser switching.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement.
0
Since the transducers are built from human-readable descriptions using a lexical toolkit (Sproat 1995), the system is easily maintained and extended.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement.
0
Mutual information was shown to be useful in the segmentation task given that one does not have a dictionary.
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
0
(If fewer than n rules have precision greater than p_min, we keep only those rules which exceed the precision threshold.) p_min was fixed at 0.95 in all experiments in this paper. Note that taking the top n most frequent rules already makes the method robust to low-count events; hence we do not use smoothing, allowing low-count high-precision features to be chosen on later iterations.
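The selection step can be sketched as follows; the rule names, counts, and precisions are invented for illustration.

```python
def select_rules(rule_stats, n, p_min=0.95):
    """Take the n most frequent candidate rules, then keep only those whose
    precision exceeds p_min.  rule_stats maps rule -> (count, precision)."""
    top_n = sorted(rule_stats, key=lambda r: rule_stats[r][0], reverse=True)[:n]
    return [r for r in top_n if rule_stats[r][1] > p_min]

# Invented candidate rules with (frequency, precision) statistics.
stats = {"contains(Mr.)": (120, 0.99),
         "contains(Inc.)": (80, 0.97),
         "full-string=U.S.": (300, 0.90)}
print(select_rules(stats, n=2))   # ['contains(Mr.)']: U.S. is frequent but imprecise
```

Frequency acts as the smoothing here: a rule must be both common enough to make the top n and precise enough to clear p_min.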
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
We thus decided to pay specific attention to them and introduce an annotation layer for connectives and their scopes.
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order, and these factors complicate syntactic disambiguation.
0
In this paper, we offer broad insight into the underperformance of Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
0
The phrases have to be expressions of length less than 5 chunks that appear between two NEs.
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech.
0
For robustness reasons, the parser may output a set of dependency trees instead of a single tree. Apart from the leftmost dependent of the next input token, dependency type features are limited to tokens on the stack.
Here both parametric and non-parametric models are explored.
0
We are interested in combining the substructures of the input parses to produce a better parse.
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers.
0
A receives a votes, and B receives b votes.
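A hypothetical sketch of constituent voting in this spirit: keep each labeled span that receives more than a threshold of parser votes. Representing a parse as a set of (label, start, end) spans is an illustrative simplification of the combination methods discussed, and the parses below are invented.

```python
from collections import Counter

def constituent_vote(parses, threshold):
    """Keep every labeled span proposed by more than `threshold` parsers.
    Representing a parse as a set of (label, start, end) spans is an
    illustrative simplification of the combination methods discussed."""
    votes = Counter(c for parse in parses for c in parse)
    return {c for c, v in votes.items() if v > threshold}

# Three invented parses of the same sentence.
p1 = {("NP", 0, 2), ("VP", 2, 5), ("PP", 3, 5)}
p2 = {("NP", 0, 2), ("VP", 2, 5)}
p3 = {("NP", 0, 2), ("PP", 2, 5)}
print(sorted(constituent_vote([p1, p2, p3], threshold=1.5)))
# [('NP', 0, 2), ('VP', 2, 5)]
```

With a majority threshold (more than half the parsers), the surviving spans are guaranteed to be pairwise consistent, but nothing forces them to form a full binary-branching tree, which is how the completely flat structure mentioned above can arise.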
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers.
0
All four of the techniques studied result in parsing systems that perform better than any previously reported.
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform.
0
While we had up to 11 submissions for a translation direction, we did decide against presenting all 11 system outputs to the human judge.
BABAR showed successful results in both the terrorism and natural disaster domains, with contextual-role knowledge proving especially helpful for pronouns.
0
For terrorism, BABAR generated 5,078 resolutions: 2,386 from lexical seeding and 2,692 from syntactic seeding.
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform.
0
We divide up each test set into blocks of 20 sentences (100 blocks for the in-domain test set, 53 blocks for the out-of-domain test set), check for each block, if one system has a higher BLEU score than the other, and then use the sign test.
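The block-level sign test can be sketched with an exact binomial computation (the counts below are invented; `math.comb` requires Python 3.8+).

```python
from math import comb

def sign_test(wins_a, wins_b):
    """Two-sided exact sign test: given the number of blocks in which
    system A has the higher BLEU score and vice versa (ties dropped),
    return the p-value under the null that each outcome is a fair coin."""
    n = wins_a + wins_b
    k = min(wins_a, wins_b)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2.0 * tail)

# Invented counts: A beats B in 70 of 100 blocks.
print(sign_test(70, 30) < 0.01)   # True
```

Blocking into groups of 20 sentences reduces the variance of per-block BLEU, so each block contributes one reasonably stable win/loss observation to the test.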
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
0
We settled on contrastive evaluations of 5 system outputs for a single test sentence.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
We again assume a training set of n examples {x1 ... xn} where the first m examples have labels {y1 ... ym}, and the last (n − m) examples are unlabeled.
BABAR showed successful results in both the terrorism and natural disaster domains, with contextual-role knowledge proving especially helpful for pronouns.
0
Given an anaphor and candidate, BABAR checks (1) whether the semantic classes of the anaphor intersect with the semantic expectations of the caseframe that extracts the candidate, and (2) whether the semantic classes of the candidate intersect with the semantic ex pectations of the caseframe that extracts the anaphor.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
Finally, we add “DT” to the tags for definite nouns and adjectives (Kulick et al., 2006).
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
(Figure: a segmentation lattice over the hanzi yu2 (noun), zen3mo0 (adverb), and shuo1 (verb), with associated path costs.)
BABAR showed successful results in both the terrorism and natural disaster domains, with contextual-role knowledge proving especially helpful for pronouns.
0
“The gun” will be extracted by the caseframe “fired <patient>”.
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
0
Table 2 compares the performance of our system on the setup of Cohen and Smith (2007) to the best results reported by them for the same tasks.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
For instance, the common "suffixes," -nia (e.g.,.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
The PROBING model was designed to improve upon SRILM by using linear probing hash tables (though not arranged in a trie), allocating memory all at once (eliminating the need for full pointers), and being easy to compile.
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
0
Denote by g_j(x) = Σ_t α_t h_t(x), j ∈ {1, 2}, the unthresholded strong hypothesis (i.e., f_j(x) = sign(g_j(x))).