source_text
stringlengths
27
368
label
int64
0
1
target_text
stringlengths
1
5.38k
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
The procedure using the tagged sentences to discover paraphrases takes about one hour on a 2GHz Pentium 4 PC with 1GB of memory.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
Table 9: Dev set results for sentences of length ≤ 70.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
The types of patterns produced by AutoSlog are outlined in (Riloff, 1996).
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Ma¨rkische Allgemeine Zeitung, a German regional daily.
0
A number of PCC commentaries will be read by professional news speakers and prosodic features be annotated, so that the various annotation layers can be set into correspondence with intonation patterns.
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
Although these existential NPs do not need a prior referent, they may occur multiple times in a document.
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
Sie.
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.
0
It is worth noting that the middle words of the Italian trigrams are nouns too, which exhibits the fact that the similarity metric connects types having the same syntactic category.
Instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
0
Finally, we incorporate the instance-weighting model into a general linear combination, and learn weights and mixing parameters simultaneously. where cλ(s, t) is a modified count for pair (s, t) in OUT, u(s|t) is a prior distribution, and y is a prior weight.
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
0
The probabilistic version of this procedure is straightforward: We once again assume independence among our various member parsers.
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
We used the MUC4 terrorism corpus (MUC4 Proceedings, 1992) and news articles from the Reuter’s text collection8 that had a subject code corresponding to natural disasters.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
For IE, the system must be able to distinguish between semantically similar noun phrases that play different roles in an event.
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
0
We can do that . Input: Das ist zu knapp , weil ich ab dem dritten in Kaiserslautern bin . Genaugenommen nur am dritten . Wie ware es denn am ahm Samstag , dem zehnten Februar ? MonS: That is too tight , because I from the third in Kaiserslautern . In fact only on the third . How about ahm Saturday , the tenth of February ? QmS: That is too tight , because I am from the third in Kaiserslautern . In fact only on the third . Ahm how about Saturday , February the tenth ? IbmS: That is too tight , from the third because I will be in Kaiserslautern . In fact only on the third . Ahm how about Saturday , February the tenth ? Input: Wenn Sie dann noch den siebzehnten konnten , ware das toll , ja . MonS: If you then also the seventeenth could , would be the great , yes . QmS: If you could then also the seventeenth , that would be great , yes . IbmS: Then if you could even take seventeenth , that would be great , yes . Input: Ja , das kommt mir sehr gelegen . Machen wir es dann am besten so . MonS: Yes , that suits me perfectly . Do we should best like that . QmS: Yes , that suits me fine . We do it like that then best . IbmS: Yes , that suits me fine . We should best do it like that .
Human judges also pointed out difficulties with the evaluation of long sentences.
0
The way we cant distinction between system performance.
This paper discusses the Potsdam Commentary Corpus, a corpus of german assembeled by potsdam university.
0
2.5 Connectives with scopes.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
Overall, language modeling significantly impacts decoder performance.
The AdaBoost algorithm was developed for supervised learning.
0
We again assume a training set of n examples {x1 . xri} where the first m examples have labels {y1 ... yin}, and the last (n — m) examples are unlabeled.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
12 For English, our Evalb implementation is identical to the most recent reference (EVALB20080701).
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
0
All the links in the “CC-domain are shown in Step 4 in subsection 3.2.
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform.
0
The test data was again drawn from a segment of the Europarl corpus from the fourth quarter of 2000, which is excluded from the training data.
Combining multiple highly-accurate independent parsers yields promising results.
0
While we cannot prove there are no such useful features on which one should condition trust, we can give some insight into why the features we explored offered no gain.
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
0
We are given a source string fJ 1 = f1:::fj :::fJ of length J, which is to be translated into a target string eI 1 = e1:::ei:::eI of length I. Among all possible target strings, we will choose the string with the highest probability: ^eI 1 = arg max eI 1 fPr(eI 1jfJ 1 )g = arg max eI 1 fPr(eI 1) Pr(fJ 1 jeI 1)g : (1) The argmax operation denotes the search problem, i.e. the generation of the output sentence in the target language.
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
Given an anaphor, BABAR identifies the caseframe that would extract it from its sentence.
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.
0
Altun et al. (2005) proposed a technique that uses graph based similarity between labeled and unlabeled parts of structured data in a discriminative framework for semi-supervised learning.
There are clustering approaches that assign a single POS tag to each word type.
0
This distributional sparsity of syntactic tags is not unique to English 1 The source code for the work presented in this paper is available at http://groups.csail.mit.edu/rbg/code/typetagging/.
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language.
0
We use label propagation in two stages to generate soft labels on all the vertices in the graph.
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
0
Denote the unthresholded classifiers after t — 1 rounds by git—1 and assume that it is the turn for the first classifier to be updated while the second one is kept fixed.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
Benchmarks use the package’s binary format; our code is also the fastest at building a binary file.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
Dynamic programming efficiently scores many hypotheses by exploiting the fact that an N-gram language model conditions on at most N − 1 preceding words.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
We create equivalence classes for verb, noun, and adjective POS categories.
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
0
In fact, it is very difficult to maintain consistent standards, on what (say) an adequacy judgement of 3 means even for a specific language pair.
In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
Figure 1 reveals that an event that “damaged” objects may also cause injuries; a disaster that “occurred” may be investigated to find its “cause”; a disaster may “wreak” havoc as it “crosses” geographic regions; and vehicles that have a “driver” may also “carry” items.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
This is not to say that a set of standards by which a particular segmentation would count as correct and another incorrect could not be devised; indeed, such standards have been proposed and include the published PRCNSC (1994) and ROCLING (1993), as well as the unpublished Linguistic Data Consortium standards (ca.
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform.
0
• We evaluated translation from English, in addition to into English.
It also incorporates the Good-Turing methodin estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
Then each arc of D maps either from an element of H to an element of p, or from E-i.e., the empty string-to an element of P. More specifically, each word is represented in the dictionary as a sequence of arcs, starting from the initial state of D and labeled with an element 5 of Hxp, which is terminated with a weighted arc labeled with an element of Ex P. The weight represents the estimated cost (negative log probability) of the word.
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
0
We use two different similarity functions to define the edge weights among the foreign vertices and between vertices from different languages.
The corpus was annoted with different linguitic information.
0
Like in the co-reference annotation, G¨otze’s proposal has been applied by two annotators to the core corpus but it has not been systematically evaluated yet.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
73 81.
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
We use a simple TF/IDF method to measure the topicality of words.
Here both parametric and non-parametric models are explored.
0
We used these three parsers to explore parser combination techniques.
This paper talks about Pseudo-Projective Dependency Parsing.
0
Projectivizing a dependency graph by lifting nonprojective arcs is a nondeterministic operation in the general case.
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement.
0
This is because our corpus is not annotated, and hence does not distinguish between the various words represented by homographs, such as, which could be /adv jiangl 'be about to' orInc jiang4 '(military) general'-as in 1j\xiao3jiang4 'little general.'
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
Our work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
Our System Wang, Li, and Chang a. 1\!f!IP Eflltii /1\!f!J:P $1til I b. agm: I a m: c. 5 Bf is Bf 1 d. "*:t: w _t ff 1 "* :t: w_tff 1 g., , Transliteration/Translation chen2zhongl-shenl qu3 'music by Chen Zhongshen ' huang2rong2 youlyoul de dao4 'Huang Rong said soberly' zhangl qun2 Zhang Qun xian4zhang3 you2qingl shang4ren2 hou4 'after the county president You Qing had assumed the position' lin2 quan2 'Lin Quan' wang2jian4 'Wang Jian' oulyang2-ke4 'Ouyang Ke' yinl qi2 bu4 ke2neng2 rong2xu3 tai2du2 er2 'because it cannot permit Taiwan Independence so' silfa3-yuan4zhang3 lin2yang2-gang3 'president of the Judicial Yuan, Lin Yanggang' lin2zhangl-hu2 jiangl zuo4 xian4chang3 jie3shuol 'Lin Zhanghu will give an ex­ planation live' jin4/iang3 nian2 nei4 sa3 xia4 de jinlqian2 hui4 ting2zhi3 'in two years the distributed money will stop' gaoltangl da4chi2 ye1zi0 fen3 'chicken stock, a tablespoon of coconut flakes' you2qingl ru4zhu3 xian4fu3 lwu4 'after You Qing headed the county government' Table 5 Performance on morphological analysis.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
First we mark any node that dominates (at any level) a verb sider POS tags when pre-terminals are the only intervening nodes between the nucleus and its bracketing (e.g., unaries, base NPs).
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
A straightforward way to find the shortest tour is by trying all possible permutations of the n cities.
This paper talks about Pseudo-Projective Dependency Parsing.
0
The rest of the paper is structured as follows.
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table.
0
This has the potential drawback of increasing the number of features, which can make MERT less stable (Foster and Kuhn, 2009).
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.
0
Our monolingual similarity function (for connecting pairs of foreign trigram types) is the same as the one used by Subramanya et al. (2010).
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
The linear and nonerasing assumptions about the operations discussed in Section 4.1 require that each z, and yk is used exactly once to define the strings zi, ,z1,3.
All the texts were annotated by two people.
0
And then there are decisions that systems typically hard-wire, because the linguistic motivation for making them is not well understood yet.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
In words, the judgements are normalized, so that the average normalized judgement per judge is 3.
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
Statistical methods seem particularly applicable to the problem of unknown-word identification, especially for constructions like names, where the linguistic constraints are minimal, and where one therefore wants to know not only that a particular se­ quence of hanzi might be a name, but that it is likely to be a name with some probabil­ ity.
Most IE researchers have been creating paraphrase knowledge by hand and specific tasks.
0
The sentences in the corpus were tagged by a transformation-based chunker and an NE tagger.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
1 74.5 56.
They plan on extending instance-weighting to other standard SMT components and capture the degree of generality of phrase pairs.
0
Intuitively, it places more weight on OUT when less evidence from IN is available.
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
Once we figure out the important word (e.g. keyword), we believe we can capture the meaning of the phrase by the keyword.
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
0
On each step CoBoost searches for a feature and a weight so as to minimize either 40 or 40.
Most IE researchers have been creating paraphrase knowledge by hand and specific tasks.
0
For each set, the phrases with bracketed frequencies are considered not paraphrases in the set.
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
The search starts in hypothesis (f;g; 0) and ends in the hypotheses (f1; ; Jg; j), with j 2 f1; ; Jg.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
While Berg-Kirkpatrick et al.
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used simpler training procedure.
0
We extend the Matsoukas et al approach in several ways.
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective tree banks is often neglected because of the relative scarcity of problematic constructions.
0
Using the terminology of Kahane et al. (1998), we say that jedna is the syntactic head of Z, while je is its linear head in the projectivized representation.
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.
0
These tag distributions are used to initialize the label distributions over the English vertices in the graph.
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
0
Subsets of partial hypotheses with coverage sets C of increasing cardinality c are processed.
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be.
0
Finally, we incorporate the instance-weighting model into a general linear combination, and learn weights and mixing parameters simultaneously. where cλ(s, t) is a modified count for pair (s, t) in OUT, u(s|t) is a prior distribution, and y is a prior weight.
Vijay-Shankar et all considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
With the additional assumptions, inspired by Rounds (1985), we can show that members of this class can be recognized in polynomial time.
This paper offers a broad insight into of Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
This is especially true in the case of quotations—which are common in the ATB—where (1) will follow a verb like (2) (Figure 1).
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language.
0
Our monolingual similarity function (for connecting pairs of foreign trigram types) is the same as the one used by Subramanya et al. (2010).
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
Lack of correct reference translations was pointed out as a short-coming of our evaluation.
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
This aspect of the formalism is both linguistically and computationally important.
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
For the automatic evaluation, we used BLEU, since it is the most established metric in the field.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
The dictionary sizes reported in the literature range from 17,000 to 125,000 entries, and it seems reasonable to assume that the coverage of the base dictionary constitutes a major factor in the performance of the various approaches, possibly more important than the particular set of methods used in the segmentation.
A beam search concept is applied as in speech recognition.
0
The resulting algorithm is depicted in Table 1.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
Methods that allow multiple segmentations must provide criteria for choosing the best segmentation.
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
Many researchers have developed coreference resolvers, so we will only discuss the methods that are most closely related to BABAR.
They showed better grammars to improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
1
Better grammars are shown here to improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
The AdaBoost algorithm was developed for supervised learning.
0
Inspection of the data shows that at n = 2500, the two classifiers both give labels on 44,281 (49.2%) of the unlabeled examples, and give the same label on 99.25% of these cases.
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech.
0
The Prague Dependency Treebank (PDT) consists of more than 1M words of newspaper text, annotated on three levels, the morphological, analytical and tectogrammatical levels (Hajiˇc, 1998).
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
0
The set is then compared with the set generated from the Penn Treebank parse to determine the precision and recall.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
We have modified Moses (Koehn et al., 2007) to keep our state with hypotheses; to conserve memory, phrases do not keep state.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
In that work, mutual information was used to decide whether to group adjacent hanzi into two-hanzi words.
Their results show that their high performance NER use less training data than other systems.
0
Month Names, Days of the Week, and Numbers: If is initCaps and is one of January, February, . . .
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
(Riloff and Shepherd 97) describe a bootstrapping approach for acquiring nouns in particular categories (such as "vehicle" or "weapon" categories).
Their results show that their high performance NER use less training data than other systems.
0
Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (init- Caps, zone) is set to 1.
The use of global features has shown excellent result in the performance on MUC-6 and MUC-7 test data.
0
This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
The Potsdam Commentary Corpus
They have made use of local and global features to deal with the instances of same token in a document.
0
All our results are obtained by using only the official training data provided by the MUC conferences.
They showed that it was useful to abstract away from the details of the formalism, and examine the nature of their derivation process as reflected by properties their trees, find that several of the formalisms considered can be seen as being closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
Independence of paths at this level reflects context freeness of rewriting and suggests why they can be recognized efficiently.
These clusters are computed using an SVD variant without relying on transitional structure.
0
Other approaches encode sparsity as a soft constraint.
All the texts were annotated by two people.
0
Commentaries argue in favor of a specific point of view toward some political issue, often dicussing yet dismissing other points of view; therefore, they typically offer a more interesting rhetorical structure than, say, narrative text or other portions of newspapers.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
A moment's reflection will reveal that things are not quite that simple.
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
As suggested at the end of Section 3, the restrictions that have been specified in the definition of LCFRS's suggest that they can be efficiently recognized.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
This WFST represents the segmentation of the text into the words AB and CD, word boundaries being marked by arcs mapping between f and part-of-speech labels.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
The segmenter will give both analyses :1 cai2 neng2 'just be able,' and ?]cai2neng2 'talent,' but the latter analysis is preferred since splitting these two morphemes is generally more costly than grouping them.
They have made use of local and global features to deal with the instances of same token in a document.
0
Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.
This paper presents a maximum entropy-based named entity recognizer (NER).
0
However, 1 http://maxent.sourceforge.net 3.2 Testing.
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
0
This is summarized in Equation 5.
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers.
0
Hence, s < k. But by addition of the votes on the two parses, s > 2N-11> k, a contradiction.
This corpus has several advantages: it is annotated at different levels.
0
And indeed, converging on annotation guidelines is even more difficult than it is with co-reference.