Dataset schema: source_text (string, lengths 27 to 368 characters), label (int64, values 0 and 1), target_text (string, lengths 1 to 5.38k characters).
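Given this schema, a pairs file can be loaded and spot-checked in a few lines of Python. This is a minimal sketch: the file name "pairs.jsonl" and the JSON-lines layout are assumptions for illustration, not part of the dataset itself.

```python
# Minimal sketch of loading and spot-checking a sentence-pair dataset with
# the schema above (source_text: str, label: 0/1, target_text: str).
# The file name "pairs.jsonl" and JSON-lines layout are assumptions.
import json

rows = []
with open("pairs.jsonl", encoding="utf-8") as f:
    for line in f:
        rows.append(json.loads(line))

# Spot-check the schema constraints listed in the header.
for r in rows:
    assert isinstance(r["source_text"], str)
    assert r["label"] in (0, 1)
    assert isinstance(r["target_text"], str)

positives = sum(r["label"] for r in rows)
print(f"{len(rows)} pairs, {positives} labeled 1")
```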
Most IE researchers have been creating paraphrase knowledge by hand and for specific tasks.
0
More examples are shown in Figure 5.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
A direct-mapped cache makes BerkeleyLM faster on repeated queries, but their fastest (scrolling) cached version is still slower than uncached PROBING, even on cache-friendly queries.
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
Table 6: Type-level Results: Each cell reports the type-level accuracy computed against the most frequent tag of each word type.
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.
0
However, supervised methods rely on labeled training data, which is time-consuming and expensive to generate.
Their results show that their high-performance NER uses less training data than other systems.
0
In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.
They showed that it was useful to abstract away from the details of the formalism and examine the nature of the derivation process as reflected by properties of the derivation trees, finding that several of the formalisms considered can be seen as closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
We can define derivation trees inductively on the length of the derivation of a tree.
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
0
The Levenshtein distance between the automatic translation and each of the reference translations is computed, and the minimum Levenshtein distance is taken.
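The metric described here, the minimum edit distance to any reference, is straightforward to implement. The sketch below assumes whitespace tokenization; the example sentences are made up for illustration.

```python
# Minimal sketch of the multi-reference error metric described above:
# compute the Levenshtein distance between the automatic translation and
# each reference translation, then take the minimum. Whitespace
# tokenization is an assumption for illustration.
def levenshtein(a, b):
    # Classic dynamic-programming edit distance over token sequences.
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (x != y)))  # substitution
        prev = cur
    return prev[-1]

def min_edit_distance(hypothesis, references):
    hyp = hypothesis.split()
    return min(levenshtein(hyp, ref.split()) for ref in references)

print(min_edit_distance("he visits on the fourth of May",
                        ["he visits on May fourth",
                         "he is visiting on the fourth of May"]))
```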
They plan on extending instance-weighting to other standard SMT components and capturing the degree of generality of phrase pairs.
0
Finally, we intend to explore more sophisticated instance-weighting features for capturing the degree of generality of phrase pairs.
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
Number: filters candidate if number doesn’t agree.
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers.
0
It is worth noting that, although nonprojective constructions are less frequent in DDT than in PDT, they seem to be more deeply nested, since only about 80% can be projectivized with a single lift, while almost 95% of the non-projective arcs in PDT only require a single lift.
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
0
And time is short.
Combining multiple highly-accurate independent parsers yields promising results.
0
The first shows how constituent features and context do not help in deciding which parser to trust.
In this paper, the authors observe that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
0
Uniform Tag Prior (1TW): Our initial lexicon component will be uniform over possible tag assignments as well as word types.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
collected too.
Instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
0
The matching sentence pairs are then added to the IN corpus, and the system is re-trained.
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
0
The learning task is to find two classifiers f1: 2^X1 → {−1, +1} and f2: 2^X2 → {−1, +1} such that f1(x1,i) = f2(x2,i) = yi for examples i = 1, …, m, and f1(x1,i) = f2(x2,i) as often as possible on examples i = m + 1, …, n. To achieve this goal we extend the auxiliary function that bounds the training error (see Equ.
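The two-part objective above (match the labels on the labeled prefix, agree with each other on the unlabeled suffix) can be made concrete with a small sketch. The toy classifiers and the spelling/context views below are assumptions for illustration, not the paper's actual features.

```python
# Minimal sketch of the co-training objective above: f1 and f2 see
# different views (x1, x2) of each example; on the m labeled examples they
# should match the label y, and on the remaining unlabeled examples they
# should agree with each other as often as possible.
def cotraining_objective(f1, f2, view1, view2, labels, m):
    # Errors on the labeled prefix i = 1..m.
    labeled_errors = sum(
        f1(view1[i]) != labels[i] or f2(view2[i]) != labels[i]
        for i in range(m))
    # Disagreements on the unlabeled suffix i = m+1..n.
    disagreements = sum(
        f1(view1[i]) != f2(view2[i]) for i in range(m, len(view1)))
    return labeled_errors, disagreements

f1 = lambda spelling: 1 if "Inc" in spelling else -1   # spelling view (toy)
f2 = lambda context: 1 if context == "said" else -1    # context view (toy)
view1 = ["Acme Inc", "Maury Cooper", "Globex Inc", "Paris"]
view2 = ["said", "a vice president", "said", "flew to"]
labels = [1, -1, None, None]  # only the first m = 2 are labeled
print(cotraining_objective(f1, f2, view1, view2, labels, m=2))  # (0, 0)
```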
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
5 Related Work.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement.
0
Roughly speaking, previous work can be divided into three categories, namely purely statistical approaches, purely lexical rule-based approaches, and approaches that combine lexical information with statistical information.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
Mutual information was shown to be useful in the segmentation task given that one does not have a dictionary.
The approach has been successfully tested on the 8 000-word Verbmobil task.
0
We apply a beam search concept as in speech recognition.
This topic has been getting more attention, driven by the needs of various NLP applications.
0
Link phrases based on instance pairs: Using NE instance pairs as a clue, we find links between sets of phrases.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
However, the learning curves in Figure 3 show that the Berkeley parser does not exceed our manual grammar by as wide a margin as has been shown for other languages (Petrov, 2009).
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
The type-level tag assignments T generate features associated with word types W. The tag assignments constrain the HMM emission parameters θ.
There is no global pruning.
0
({1, …, m} \ {l1}, l) → ({1, …, m} \ {l, l1, l2}, l′)
Combining multiple highly-accurate independent parsers yields promising results.
0
All four of the techniques studied result in parsing systems that perform better than any previously reported.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
2 Chinese 汉字 han4zi4 'Chinese character'; this is the same word as Japanese kanji.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
To optimize left-to-right queries, we extend state to store backoff information: where m is the minimal context from Section 4.1 and b is the backoff penalty.
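The backoff bookkeeping referred to here can be illustrated with a toy query loop: when the full n-gram is unseen, the context's backoff penalty b is charged and the query retries with a shorter context. The dictionaries below are made-up stand-ins, not KenLM's actual data structures.

```python
# Toy sketch of backoff scoring: if the full n-gram is unseen, add the
# context's backoff penalty b and retry with a shorter context. The
# probability and backoff tables hold made-up log10 values.
probs = {("the", "cat"): -0.5, ("cat",): -1.2, ("sat",): -1.5}
backoffs = {("the", "cat"): -0.3, ("the",): -0.4}

def score(context, word):
    penalty = 0.0
    while True:
        ngram = context + (word,)
        if ngram in probs:
            return penalty + probs[ngram]      # found: return log prob
        if not context:
            return penalty - 99.0              # unknown-word floor (arbitrary)
        penalty += backoffs.get(context, 0.0)  # charge b(context)
        context = context[1:]                  # shorten the context

print(score(("the",), "cat"))  # hit: -0.5
print(score(("the",), "sat"))  # backoff: b("the") + p("sat") = -1.9
```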
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications.
0
In our third model GTppp we also add the distinction between general PPs and possessive PPs following Goldberg and Elhadad (2007).
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
We can model this probability straightforwardly enough with a probabilistic version of the grammar just given, which would assign probabilities to the individual rules.
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures.
0
There exist a few robust broad-coverage parsers that produce non-projective dependency structures, notably Tapanainen and Järvinen (1997) and Wang and Harper (2004) for English, Foth et al. (2004) for German, and Holan (2004) for Czech.
It is probably the first analysis of Arabic parsing of this kind.
0
It was developed in response to the non-terminal/terminal bias of Evalb, but Clegg and Shepherd (2005) showed that it is also a valuable diagnostic tool for trees with complex deep structures such as those found in the ATB.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
(See Sproat and Shih 1995.)
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
This is not to say that a set of standards by which a particular segmentation would count as correct and another incorrect could not be devised; indeed, such standards have been proposed and include the published PRCNSC (1994) and ROCLING (1993), as well as the unpublished Linguistic Data Consortium standards (ca.
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.
0
We aligned the texts at a sentence level across all four languages, resulting in 1064 sentences per language.
Most IE researchers have been creating paraphrase knowledge by hand and for specific tasks.
0
We evaluated the results based on two metrics.
There is no global pruning.
0
The translation search is carried out with the category markers and the city names are resubstituted into the target sentence as a postprocessing step.
The use of global features has shown excellent results on MUC-6 and MUC-7 test data.
0
It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): p(o|h) = (1/Z(h)) Π_j α_j^{f_j(h,o)}, where o refers to the outcome, h to the history (or context), and Z(h) is a normalization function.
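Under that reading of the formula (reconstructed from the "where" clause; the symbols were dropped in extraction), the computation is a normalized product of feature weights. The two toy features and their weights below are assumptions for illustration.

```python
# Minimal sketch of the exponential form above: p(o|h) is a normalized
# product of feature weights alpha_j raised to binary feature values
# f_j(h, o). The two toy features and their weights are assumptions.
import math

alphas = [2.0, 0.5]  # one weight per feature

def features(h, o):
    # f_1 fires for a capitalized token tagged PERSON;
    # f_2 fires when the previous word is "the" and the tag is PERSON.
    return [int(h["cap"] and o == "PERSON"),
            int(h["prev"] == "the" and o == "PERSON")]

def p(o, h, outcomes):
    def unnorm(candidate):
        return math.prod(a ** f for a, f in zip(alphas, features(h, candidate)))
    return unnorm(o) / sum(unnorm(c) for c in outcomes)

h = {"cap": True, "prev": "said"}
print(p("PERSON", h, ["PERSON", "OTHER"]))  # 2.0 / (2.0 + 1.0) ≈ 0.667
```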
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
The role that each noun phrase plays in the kidnapping event is key to distinguishing these cases.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
Maamouri et al.
There is no global pruning.
0
A summary of the corpus used in the experiments is given in Table 3.
Here we present two algorithms.
0
In this paper k = 3 (the three labels are person, organization, location), and we set a = 0.1.
The features were weighted within a logistic model to give an overall weight that was applied to the phrase pair's frequency prior to making MAP-smoothed relative-frequency estimates, which were in turn combined linearly with relative-frequency estimates from an in-domain phrase table.
0
Compared to the EMEA/EP setting, the two domains in the NIST setting are less homogeneous and more similar to each other; there is also considerably more IN text available.
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
This research was supported in part by the Defense Advanced Research Projects Agency as part of the Translingual Information Detection, Extraction and Summarization (TIDES) program, under Grant N66001001-18917 from the Space and Naval Warfare Systems Center, San Diego, and by the National Science Foundation under Grant IIS00325657.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
Word type | N | %
Dictionary entries | 2,543 | 97.47
Morphologically derived words | 3 | 0.11
Foreign transliterations | 9 | 0.34
Personal names | 54 | 2.07
Combining multiple highly-accurate independent parsers yields promising results.
0
For each experiment we gave a nonparametric and a parametric technique for combining parsers.
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.
0
Another way to view the judgements is that they are not so much quality judgements of machine translation systems per se as rankings of machine translation systems.
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
0
Figure 1: Reordering for the German verbgroup; the example aligns the English sentence 'In this case my colleague can not visit you on the fourth of May' with its German translation 'In diesem Fall kann mein Kollege Sie am vierten Mai nicht besuchen'.
This paper offers broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
Historically, Arabic grammar has identified two sentence types: those that begin with a nominal
This paper presents research in the area of automatic paraphrase discovery.
0
We would like to thank Prof. Ralph Grishman, Mr. Takaaki Hasegawa and Mr. Yusuke Shinyama for useful comments, discussion and evaluation.
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
0
basically complete, yet some improvements and extensions are still under way.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
G1 and G2 are hanzi; we can estimate the probability of the sequence being a name as the product of:
• the probability that a word chosen randomly from a text will be a name, p(rule 1), and
• the probability that the name is of the form 1hanzi-family 2hanzi-given, p(rule 2), and
• the probability that the family name is the particular hanzi F1, p(rule 6), and
• the probability that the given name consists of the particular hanzi G1 and G2, p(rule 9).
This model is essentially the one proposed in Chang et al.
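The product in this list is simple to compute once the four rule probabilities are available. The numeric values in the sketch below are made up for illustration; only the product structure comes from the passage.

```python
# Minimal sketch of the name model above: the probability that a sequence
# F1 G1 G2 is a name is the product of the four rule probabilities listed.
# All numeric values are made-up assumptions for illustration.
p_rule1 = 0.02   # p(a randomly chosen word is a name)
p_rule2 = 0.7    # p(name has the form 1hanzi-family 2hanzi-given)
p_family = {"zhang1": 0.08, "wang2": 0.09}   # p(rule 6) per family hanzi
p_given = {("wei3", "ming2"): 0.0004}        # p(rule 9) per given-name pair

def p_name(f1, g1, g2):
    return (p_rule1 * p_rule2
            * p_family.get(f1, 1e-6)         # unseen-hanzi floor (toy)
            * p_given.get((g1, g2), 1e-8))

print(p_name("zhang1", "wei3", "ming2"))  # 0.02 * 0.7 * 0.08 * 0.0004
```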
Here we show how non-projective dependency parsing can be achieved by combining a data driven projective parser with special graph transformation techniques.
0
Prague Dependency Treebank (Hajič et al., 2001b), Danish Dependency Treebank (Kromann, 2003), and the METU Treebank of Turkish (Oflazer et al., 2003), which generally allow annotations with non-projective dependency structures.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
97 78.
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
First, it directly encodes linguistic intuitions about POS tag assignments: the model structure reflects the one-tag-per-word property, and a type- level tag prior captures the skew on tag assignments (e.g., there are fewer unique determiners than unique nouns).
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
Simple Type-Level Unsupervised POS Tagging
Their results show that their high-performance NER uses less training data than other systems.
0
MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
MADA uses an ensemble of SVMs to first re-rank the output of a deterministic morphological analyzer.
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.
0
The foreign language vertices (denoted by Vf) correspond to foreign trigram types, exactly as in Subramanya et al. (2010).
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation.
0
For example, we might have VP → VB NP PP, where the NP is the subject.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
For a language like English, this problem is generally regarded as trivial since words are delimited in English text by whitespace or marks of punctuation.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
One of the strengths of the Dempster-Shafer model is its natural ability to recognize when several credible hypotheses are still in play.
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications.
0
One way to approach this discrepancy is to assume a preceding phase of morphological segmentation for extracting the different lexical items that exist at the token level (as is done, to the best of our knowledge, in all parsing related work on Arabic and its dialects (Chiang et al., 2006)).
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
For example, if {N P1, N P2, N P3} are all coreferent, then each NP must be linked to one of the other two NPs.
This assumption, however, is not inherent to type-based tagging models.
0
Once the lexicon has been drawn, the model proceeds similarly to the standard token-level HMM: emission parameters θ are generated conditioned on tag assignments T. We also draw transition parameters φ.
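The generative story in this passage (draw a one-tag-per-word lexicon T, then transitions φ and emissions θ constrained by T) can be sketched directly. The uniform toy distributions below are assumptions; the paper's actual priors are not reproduced here.

```python
# Toy sketch of the generative story above: draw a one-tag-per-word
# lexicon T, then transition parameters phi and emission parameters theta,
# with emissions constrained so tag t emits only words whose lexicon entry
# is t. Uniform distributions are a toy choice, not the paper's priors.
import random

tags = ["DET", "NOUN", "VERB"]
word_types = ["the", "dog", "runs", "cat"]

# Lexicon T: each word type gets exactly one tag.
T = {w: random.choice(tags) for w in word_types}

# Transition parameters phi (uniform toy choice).
phi = {t: {t2: 1.0 / len(tags) for t2 in tags} for t in tags}

# Emission parameters theta: tag t may only emit words with T[w] == t.
theta = {}
for t in tags:
    allowed = [w for w in word_types if T[w] == t]
    theta[t] = {w: 1.0 / len(allowed) for w in allowed} if allowed else {}

print(T)
print(theta)
```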
Instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
0
Maximizing (7) is thus much faster than a typical MERT run; here co(s, t) are the counts from OUT, as in (6).
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.
0
Finally, we wish to reiterate an important point.
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.
0
(b) ta1 de cai2neng2 hen3 gao1 [he DE talent very high] 'He has great talent'. While the current algorithm correctly handles the (b) sentences, it fails to handle the (a) sentences, since it does not have enough information to know not to group the sequences ma3lu4 and cai2neng2, respectively.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
On several languages, we report performance exceeding that of more complex state-of-the-art systems.
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
0
We have presented two general approaches to studying parser combination: parser switching and parse hybridization.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
For example, syntactic decoders (Koehn et al., 2007; Dyer et al., 2010; Li et al., 2009) perform dynamic programming parametrized by both backward- and forward-looking state.
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
Table 2 shows BABAR’s performance.
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
3 68.4 49.
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.
0
The 1st block contains the simple baselines from section 2.1.
Because many systems performed similarly, they are not able to draw strong conclusions on the question of correlation between manual and automatic evaluation metrics.
0
All systems (except for Systran, which was not tuned to Europarl) did considerably worse on out-of-domain training data.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
The human judges were presented with the following definition of adequacy and fluency, but no additional instructions:
They have made use of local and global features to deal with instances of the same token in a document.
0
However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
Timing is based on plentiful memory.
Combining multiple highly-accurate independent parsers yields promising results.
0
F-measure is the harmonic mean of precision and recall, 2PR/(P + R).
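The formula quoted here can be stated as a one-liner; the guard against a zero denominator is the only addition.

```python
# The harmonic-mean formula quoted above, directly: F = 2PR / (P + R).
def f_measure(precision, recall):
    if precision + recall == 0:
        return 0.0  # guard for the degenerate case
    return 2 * precision * recall / (precision + recall)

print(f_measure(0.90, 0.80))  # ≈ 0.847
```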
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
Each spawned process must check if xi, …, xn and y1, …, yn can be derived from B and C, respectively.
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
In our third model GTppp we also add the distinction between general PPs and possessive PPs following Goldberg and Elhadad (2007).
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
A graphical depiction of our model as well as a summary of random variables and parameters can be found in Figure 1.
They found replacing it with a ranked evaluation to be more suitable.
0
Almost all annotators reported difficulties in maintaining a consistent standard for fluency and adequacy judgements, but nevertheless most did not explicitly move towards a ranking-based evaluation.
They showed that it was useful to abstract away from the details of the formalism and examine the nature of the derivation process as reflected by properties of the derivation trees, finding that several of the formalisms considered can be seen as closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
It is interesting to note, however, that the ability to produce a bounded number of dependent paths (where two dependent paths can share an unbounded amount of information) does not require machinery as powerful as that used in LFG, FUG and IG's.
It is annotated with several layers of data: morphology, syntax, rhetorical structure, connectives, coreference, and information structure.
0
The paper is organized as follows: Section 2 explains the different layers of annotation that have been produced or are being produced.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
1
Our empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
0
Thus an explicit assumption about the redundancy of the features — that either the spelling or context alone should be sufficient to build a classifier — has been built into the algorithm.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
2 for the accuracy of the different methods.
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
As in the case of the derivation trees of CFG's, nodes are labeled by a member of some finite set of symbols (perhaps only implicit in the grammar as in TAG's) used to denote derived structures.
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an eÆcient search algorithm.
0
This algorithm can be applied to statistical machine translation.
They plan on extending instance-weighting to other standard SMT components and capture the degree of generality of phrase pairs.
0
To set β, we used the same criterion as for α, over a dev corpus. The MAP combination was used for TM probabilities only, in part due to a technical difficulty in formulating coherent counts when using standard LM smoothing techniques (Kneser and Ney, 1995). Motivated by information retrieval, a number of approaches choose “relevant” sentence pairs from OUT by matching individual source sentences from IN (Hildebrand et al., 2005; Lü et al., 2007), or individual target hypotheses (Zhao et al., 2004).
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
In this paper, we offer broad insight into the underperformance of Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
A beam search concept is applied as in speech recognition.
0
2.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
We can check what the consequences of less manual annotation would have been: with half the number of manual judgements, we can distinguish about 40% of the systems, 10% fewer.
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined both by how similar to it they appear to be and by whether they belong to general language or not.
0
An easy way to achieve this is to put the domain-specific LMs and TMs into the top-level log-linear model and learn optimal weights with MERT (Och, 2003).
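The "easy way" described here amounts to a log-linear score with one feature per domain-specific model. The sketch below hard-codes a weight vector for illustration; in the approach described, those weights would be learned with MERT. All feature values are made up.

```python
# Minimal sketch of the log-linear combination above: each domain-specific
# LM and TM contributes one log feature, and a tuned weight vector scores
# a hypothesis. Weights are hard-coded here; the passage learns them with
# MERT. All numeric values are made-up assumptions.
import math

def loglinear_score(features, weights):
    # score(e | f) = sum_k w_k * log h_k(e, f)
    return sum(weights[k] * math.log(features[k]) for k in features)

features = {
    "tm_in":  0.02,   # in-domain translation model probability
    "tm_out": 0.05,   # out-of-domain translation model probability
    "lm_in":  0.001,  # in-domain language model probability
    "lm_out": 0.004,  # out-of-domain language model probability
}
weights = {"tm_in": 0.4, "tm_out": 0.1, "lm_in": 0.35, "lm_out": 0.15}
print(loglinear_score(features, weights))
```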
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications.
0
This heuristic is used to prune all segmentation possibilities involving “lexically improper” segments.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
For instance: if 10 systems participate, and one system does better than 3 others, worse than 2, and is not significantly different from the remaining 4, its rank is in the interval 3–7.
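The interval arithmetic in that example generalizes directly: with n systems, being significantly worse than `better` others bounds the best possible rank at better + 1, and being significantly better than `worse` others bounds the worst possible rank at n − worse. A minimal check:

```python
# The rank-interval arithmetic from the example above: with n systems,
# `better` systems significantly better than ours bound the best rank at
# better + 1, and `worse` systems significantly worse bound the worst
# rank at n - worse.
def rank_interval(n, better, worse):
    return (better + 1, n - worse)

print(rank_interval(n=10, better=2, worse=3))  # (3, 7), as in the example
```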
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
Local features are features that are based on neighboring tokens, as well as the token itself.
This paper presents unsupervised models for named entity classification.
0
Denote the unthresholded classifiers after t − 1 rounds by gi^(t−1), and assume that it is the turn for the first classifier to be updated while the second one is kept fixed.
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
A contextual role represents the role that a noun phrase plays in an event or relationship.
These clusters are computed using an SVD variant without relying on transitional structure.
0
Their best model yields 44.5% one-to-one accuracy, compared to our best median 56.5% result.