source_text: string (lengths 27 to 368)
label: int64 (values 0 and 1)
target_text: string (lengths 1 to 5.38k)
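Before the rows themselves, here is a minimal sketch of how a file with this three-column schema might be read, assuming the rows are stored as JSON Lines; the file name pairs.jsonl is a hypothetical placeholder, and only the column names and types come from the listing above.

```python
# A minimal sketch (not part of the original data): iterate over rows that
# follow the column listing above. The file name "pairs.jsonl" is a
# hypothetical placeholder; only the field names source_text, label, and
# target_text come from the schema shown here.
import json

def read_rows(path="pairs.jsonl"):
    """Yield (source_text, label, target_text) triples from a JSON Lines file."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if not line:
                continue
            row = json.loads(line)
            # The label column is int64 with values 0 and 1; skip malformed rows.
            if row.get("label") in (0, 1):
                yield row["source_text"], row["label"], row["target_text"]

if __name__ == "__main__":
    for source_text, label, target_text in read_rows():
        print(label, source_text[:60], "->", target_text[:60])
```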
Human judges also pointed out difficulties with the evaluation of long sentences.
0
The performance of the baseline system is similar to the best submissions in last year’s shared task.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
0
If two phrases can be used to express the same relationship within an information extraction application (“scenario”), these two phrases are paraphrases.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
We start with noun features since written Arabic contains a very high proportion of NPs.
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
The resulting algorithm has a complexity of O(n!).
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform.
0
Due to many similarly performing systems, we are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure.
0
Phrase tables were extracted from the IN and OUT training corpora (not the dev as was used for instance weighting models), and phrase pairs in the intersection of the IN and OUT phrase tables were used as positive examples, with two alternate definitions of negative examples: The classifier trained using the 2nd definition had higher accuracy on a development set.
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
We also report word type level accuracy, the fraction of word types assigned their majority tag (where the mapping between model state and tag is determined by the greedy one-to-one mapping discussed above). For each language, we aggregate results in the following way: first, for each hyperparameter setting, evaluate three variants, the first of which is the 1TW model. (Typically, the performance stabilizes after only 10 iterations.)
A beam search concept is applied as in speech recognition.
0
Method | t0   | CPU time [sec] | #search errors | mWER [%]
QmS    | 0.0  | 0.07           | 108            | 42.6
QmS    | 1.0  | 0.13           | 85             | 37.8
QmS    | 2.5  | 0.35           | 44             | 36.6
QmS    | 5.0  | 1.92           | 4              | 34.6
QmS    | 10.0 | 10.6           | 0              | 34.5
IbmS   | 0.0  | 0.14           | 108            | 43.4
IbmS   | 1.0  | 0.3            | 84             | 39.5
IbmS   | 2.5  | 0.8            | 45             | 39.1
IbmS   | 5.0  | 4.99           | 7              | 38.3
IbmS   | 10.0 | 28.52          | 0              | 38.2
Table 6 shows example translations obtained by the three different approaches.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
Table fragment: model and hyperparameter settings with 1-1 and m-1 tagging accuracies (best and median) of the 1TW model for English, Danish, Dutch, German, Portuguese, Spanish, and Swedish.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
Any opinions, findings, conclusions, or recommendations expressed in this paper are those of the authors, and do not necessarily reflect the views of the funding organizations.
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
The data is sorted based on the frequency of the context (“a unit of” appeared 314 times in the corpus) and the NE pair instances appearing with that context are shown with their frequency (e.g. “NBC” and “General Electric Co.” appeared 10 times with the context “a unit of”).
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
In the “Person – Person” domain, 618 keywords are found, and in the “Country – Country” domain, 303 keywords are found.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
This is a rather important source of errors in name identification, and it is not really possible to objectively evaluate a name recognition system without considering the main lexicon with which it is used.
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
Pairwise comparison is done using the sign test.
This paper presents a maximum entropy-based named entity recognizer (NER).
0
If the token is a number string (such as one, two, etc.), then the feature NumberString is set to 1.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
We perform five runs with different random initializations of the sampling state.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
The test set included 2000 sentences from the Europarl corpus, but also 1064 sentences of out-of-domain test data.
Two general approaches are presented and two combination techniques are described for each approach.
0
This is not an oversight.
The corpus was annotated with different kinds of linguistic information.
0
As already pointed out in Section 2.4, current theories diverge not only on the number and definition of relations but also on aspects of structure, i.e., whether a tree is sufficient as a representational device or general graphs are required (and if so, whether any restrictions can be placed on these graphs’ structures — cf.
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
However, our full model takes advantage of word features not present in Graça et al.
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.
0
As can be seen, GR and this "pared-down" statistical method perform quite similarly, though the statistical method is still slightly better. AG clearly performs much less like humans than these methods, whereas the full statistical algorithm, including morphological derivatives and names, performs most closely to humans among the automatic methods.
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
They have made use of local and global features to deal with the instances of the same token in a document.
1
Local features are features that are based on neighboring tokens, as well as the token itself.
The instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
0
With the additional assumption that (s, t) can be restricted to the support of co(s, t), this is equivalent to a “flat” alternative to (6) in which each non-zero co(s, t) is set to one.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
Finally, we add “DT” to the tags for definite nouns and adjectives (Kulick et al., 2006).
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
0
The computation of P(π(c) = 1 | M1(c), ..., Mk(c)) has been sketched before in Equations 1 through 4.
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
See Section 5.
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
Gather phrases using keywords: the keyword with the top TF/ITF score is selected for each phrase.
Proposed explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order on syntactic disambiguation.
0
Consequently, all three parsers prefer the nominal reading.
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
Hence we decided to restrict ourselves to only information from the same document.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
Future work should also extend the approach to build a complete named entity extractor - a method that pulls proper names from text and then classifies them.
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
0
GL is then used to parse the string t_{n_1} ... t_{n_{k-1}}, where t_{n_i} is a terminal corresponding to the lattice span between nodes n_i and n_{i+1}.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing the use of a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
Sometimes, however, these beliefs can be contradictory.
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
Semantic (a) filters candidate if its semantic tags don’t intersect with those of the anaphor.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
For each pair of judges, consider one judge as the standard.
The texts were annotated with the RSTtool.
0
In the rhetorical tree, nuclearity information is then used to extract a “kernel tree” that supposedly represents the key information from which the summary can be generated (which in turn may involve co-reference information, as we want to avoid dangling pronouns in a summary).
This assumption, however, is not inherent to type-based tagging models.
0
Second, the reduced number of hidden variables and parameters dramatically speeds up learning and inference.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
Hence, we use the bootstrap resampling method described by Koehn (2004).
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
0
We assumed that such a contrastive assessment would be beneficial for an evaluation that essentially pits different systems against each other.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
We then use linguistic and annotation insights to develop a manually annotated grammar for Arabic (§4).
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
name => 1 hanzi family + 1 hanzi given.
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
Formalisms such as the restricted indexed grammars (Gazdar, 1985) and members of the hierarchy of grammatical systems given by Weir (1987) have independent paths, but more complex path sets.
BABAR has shown successful results in both the terrorism and natural disaster domains, and its contextual-role knowledge proved particularly useful for resolving pronouns.
0
Initially, we planned to compare the semantic classes of an anaphor and a candidate and infer that they might be coreferent if their semantic classes intersected.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
The major problem for our segmenter, as for all segmenters, remains the problem of unknown words (see Fung and Wu [1994]).
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
For instance, by altering the emission distribution parameters, Johnson (2007) encourages the model to put most of the probability mass on few tags.
This paper presents KenLM: Faster and Smaller Language Model Queries.
0
Using cn to denote the number of n-grams, total memory consumption of TRIE, in bits, is plus quantization tables, if used.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
Conditioned on T , features of word types W are drawn.
The approach has been successfully tested on the 8 000-word Verbmobil task.
0
Restrictions We compare our new approach with the word reordering used in the IBM translation approach (Berger et al., 1996).
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech.
0
The first thing to note is that projectivizing helps in itself, even if no encoding is used, as seen from the fact that the projective baseline outperforms the non-projective training condition by more than half a percentage point on attachment score, although the gain is much smaller with respect to exact match.
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.
0
An example is in (i), where the system fails to group lin2yang2gang3 as a name, because all three hanzi can in principle be separate words (lin2 'wood'; yang2 'ocean'; gang3 'harbor').
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
We focus on phrases which connect two Named Entities (NEs), and proceed in two stages.
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
0
2.6 Co-reference.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
But Arabic contains a variety of linguistic phenomena unseen in English.
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective tree banks is often neglected because of the relative scarcity of problematic constructions.
0
The Prague Dependency Treebank (PDT) consists of more than 1M words of newspaper text, annotated on three levels, the morphological, analytical and tectogrammatical levels (Hajič, 1998).
The use of global features has shown excellent results on MUC-6 and MUC-7 test data.
0
Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions. (Footnote 2: MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu. Footnote 3: Training data for IdentiFinder is actually given in words, i.e., 650K and 790K words, rather than tokens. Table 6: Comparison of results for MUC7.)
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
The accuracy is calculated as the ratio of the number of paraphrases to the total number of phrases in the set.
There is no global pruning.
0
∅ denotes the empty set, where no source sentence position is covered.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing the use of a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
(b) supports candidate if selected semantic tags match those of the anaphor. Lexical computes degree of lexical overlap between the candidate and the anaphor. Recency computes the relative distance between the candidate and the anaphor. SynRole computes relative frequency with which the candidate’s syntactic role occurs in resolutions. Figure 4: General Knowledge Sources. The Lexical KS returns 1 if the candidate and anaphor are identical, 0.5 if their head nouns match, and 0 otherwise.
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.
0
Then each arc of D maps either from an element of H to an element of P, or from ε (i.e., the empty string) to an element of P. More specifically, each word is represented in the dictionary as a sequence of arcs, starting from the initial state of D and labeled with an element of H×P, which is terminated with a weighted arc labeled with an element of ε×P. The weight represents the estimated cost (negative log probability) of the word.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
(2010) and the posterior regularization HMM of Graça et al.
There are clustering approaches that assign a single POS tag to each word type.
0
One striking example is the error reduction for Spanish, which reduces error by 36.5% and 24.7% for the best and median settings respectively.
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
0
The number of permutations carried out for the word reordering is given.
In this paper, the authors argue that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
0
Its only purpose is ... (Footnote 3: this follows since each θ_t has S_t − 1 parameters and ...)
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement.
0
Various segmentation approaches were then compared with human performance.
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
0
Pr(e_1^I) is the language model of the target language, whereas Pr(f_1^J | e_1^I) is the translation model.
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
0
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
0
Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be.
0
Domain adaptation is a common concern when optimizing empirical NLP applications.
Two general approaches are presented and two combination techniques are described for each approach.
0
It was then tested on section 22 of the Treebank in conjunction with the other parsers.
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
Evaluating parsing results in our joint framework, as argued by Tsarfaty (2006), is not trivial under the joint disambiguation task, as the hypothesized yield need not coincide with the correct one.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
4.4 Chinese Personal Names.
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
Table 3: Dev set frequencies for the two most significant discourse markers in Arabic are skewed toward analysis as a conjunction.
The approach has been successfully tested on the 8 000-word Verbmobil task.
0
These alignment models are similar to the concept of hidden Markov models (HMM) in speech recognition.
Most IE researchers have been creating paraphrase knowledge by hand for specific tasks.
0
In total 13,976 phrases are assigned to sets of phrases, and the accuracy on our evaluation data ranges from 65 to 99%, depending on the domain and the size of the sets.
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language.
0
The focus of this work is on building POS taggers for foreign languages, assuming that we have an English POS tagger and some parallel text between the two languages.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing the use of a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
For example, the company may refer to Company X in one paragraph and Company Y in another.
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
0
We develop our POS induction model based on the feature-based HMM of Berg-Kirkpatrick et al. (2010).
The AdaBoost algorithm was developed for supervised learning.
0
The question of what soft function to pick, and how to design algorithms which optimize it, is an open question, but appears to be a promising way of looking at the problem.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
Table 8a shows that the best model recovers SBAR at only 71.0% F1.
This assumption, however, is not inherent to type-based tagging models.
0
Table 5 provides insight into the behavior of different models in terms of the tagging lexicon they generate.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
Collisions between two keys in the table can be identified at model building time.
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform.
0
Surprisingly, this effect is much less obvious for out-of-domain test data.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
The other half was replaced by other participants, so we ended up with roughly the same number.
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be.
0
It is also worth pointing out a connection with Daumé’s (2007) work that splits each feature into domain-specific and general copies.
This paper conducted research in the area of automatic paraphrase discovery.
0
"EG, has agreed to be bought by H"; "EG, now owned by H"; "H to acquire EG"; "H’s agreement to buy EG". Three of those phrases are actually paraphrases, but sometimes there could be some noise, such as the second phrase above.
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
0
Label propagation is used to propagate these tags inwards and results in tag distributions for the middle word of each Italian trigram.
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
The 13 positions of the source sentence are processed in the order shown.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
As we shall argue, the semantic class affiliation of a hanzi constitutes useful information in predicting its properties.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing the use of a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
Each set is assigned two values: belief and plausibility.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
The number of ATB n-grams also falls below the WSJ sample size as the largest WSJ sample appeared in only 162 corpus positions.
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
Starting from a DP-based solution to the traveling salesman problem, we present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
3.2 The Dempster-Shafer Decision Model.
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
Here, the term frequency (TF) is the frequency of a word in the bag and the inverse term frequency (ITF) is the inverse of the log of the frequency in the entire corpus.
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
0
As we shall see, most of the linked sets are paraphrases.
A beam search concept is applied as in speech recognition.
0
Subsets C of increasing cardinality c are processed.
This assumption, however, is not inherent to type-based tagging models.
0
As is standard, we use a fixed constant K for the number of tagging states.
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language.
0
10) and trained both EM and L-BFGS for 1000 iterations.
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
In IE, creating the patterns which express the requested scenario, e.g. “management succession” or “corporate merger and acquisition” is regarded as the hardest task.
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
The effect of a second reference resolution classifier is not entirely the same as that of global features.