source_text: string (lengths 27 to 368)
label: int64 (values 0 or 1)
target_text: string (lengths 1 to 5.38k)
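The rows below follow this three-column schema. As a minimal sketch of working with records of this shape (the record values here are invented placeholders, not rows from this dataset), pairs can be split by their binary label like so:

```python
# Placeholder records following the (source_text, label, target_text) schema.
# These values are illustrative only, not actual dataset rows.
records = [
    {"source_text": "summary sentence A", "label": 1, "target_text": "supporting sentence"},
    {"source_text": "summary sentence B", "label": 0, "target_text": "unrelated sentence"},
]

def split_by_label(rows):
    """Group rows into positives (label == 1) and negatives (label == 0)."""
    pos = [r for r in rows if r["label"] == 1]
    neg = [r for r in rows if r["label"] == 0]
    return pos, neg

pos, neg = split_by_label(records)
print(len(pos), len(neg))  # prints: 1 1
```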
They have made use of local and global features to deal with instances of the same token in a document.
0
If is seen infrequently during training (less than a small count), then will not be selected as a feature and all features in this group are set to 0.
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
[Figure 1: Caseframe Network Examples; panels: Terrorism, Natural Disasters] Figure 1 shows examples of caseframes that co-occur in resolutions, both in the terrorism and natural disaster domains.
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
This result suggests the benefit of using the automatic discovery method.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
In-domain Systran scores on this metric are lower than all statistical systems, even the ones that have much worse human scores.
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
1
They cluster NE instance pairs based on the words in the contexts using a bag-of-words method.
In this paper, Das and Petrov approach the induction of unsupervised part-of-speech taggers for languages that have no labeled training data but do have translated text in a resource-rich language.
0
Because we are interested in applying our techniques to languages for which no labeled resources are available, we paid particular attention to minimizing the number of free parameters and used the same hyperparameters for all language pairs.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
), and those that begin with a verb (� ub..i �u _..
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
We show that noun-noun vs. discourse-level coordination ambiguity in Arabic is a significant source of parsing errors (Table 8c).
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
In the second and third translation examples, the IbmS word reordering performs worse than the QmS word reordering, since it cannot properly take into account the word reordering due to the German verb group.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
As can be seen, GR and this "pared-down" statistical method perform quite similarly, though the statistical method is still slightly better. AG clearly performs much less like humans than these methods, whereas the full statistical algorithm, including morphological derivatives and names, performs most closely to humans among the automatic methods.
Two general approaches are presented and two combination techniques are described for each approach.
0
The constituent voting and naïve Bayes techniques are equivalent because the parameters learned in the training set did not sufficiently discriminate between the three parsers.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
Now, for this application one might be tempted to simply bypass the segmentation problem and pronounce the text character-by-character.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
(Blum and Mitchell 98) offer a promising formulation of redundancy; they also prove some results about how the use of unlabeled examples can help classification, and suggest an objective function for training with unlabeled examples.
The use of global features has shown excellent performance on MUC-6 and MUC-7 test data.
0
During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
0
13.
The texts were annotated with the RSTtool.
0
(Webber et al., 2003).
Here both parametric and non-parametric models are explored.
0
We call this technique constituent voting.
The AdaBoost algorithm was developed for supervised learning.
0
For example, consider "... fraud related to work on a federally funded sewage plant in Georgia". In this case, Georgia is extracted: the NP containing it is a complement to the preposition in; the PP headed by in modifies the NP a federally funded sewage plant, whose head is the singular noun plant.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
In the case of adverbial reduplication illustrated in (3b) an adjective of the form AB is reduplicated as AABB.
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
0
For the purposes of EM, the "observed" data is {(x1, y1), ..., (xm, ym), xm+1, ..., xn}, and the hidden data is {ym+1, ..., yn}.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
We consider two variants of Berg-Kirkpatrick et al.
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
0
Another technique for parse hybridization is to use a naïve Bayes classifier to determine which constituents to include in the parse.
This paper talks about Unsupervised Models for Named Entity Classification.
0
We excluded these from the evaluation as they can be easily identified with a list of days/months.
All the texts were annotated by two people.
0
We will briefly discuss this point in Section 3.1.
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
Storing state therefore becomes a time-space tradeoff; for example, we store state with partial hypotheses in Moses but not with each phrase.
They found replacing it with a ranked evaluation to be more suitable.
0
Our suspicion is that BLEU is very sensitive to jargon, to selecting exactly the right words, and not synonyms that human judges may appreciate as equally good.
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform.
0
The normalized judgement per sentence is the raw judgement plus (0 minus average raw judgement for this judge on this sentence).
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
0
And then there are decisions that systems typically hard-wire, because the linguistic motivation for making them is not well understood yet.
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts, using a Treebank grammar and a data-driven lexicon, outperformed the upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications.
0
Lexicon and OOV Handling Our data-driven morphological-analyzer proposes analyses for unknown tokens as described in Section 5.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
Unlike Dickinson (2005), we strip traces and only con-.
BABAR's performance in both the terrorism and natural disaster domains, together with its contextual-role knowledge for pronouns, has shown successful results.
0
KS: Gender. Function: filters candidate if gender doesn't agree.
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
For example, suppose one is building a TTS system for Mandarin Chinese.
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.
0
We train linear mixture models for conditional phrase pair probabilities over IN and OUT so as to maximize the likelihood of an empirical joint phrase-pair distribution extracted from a development set.
This corpus has several advantages: it is annotated at different levels.
0
And then there are decisions that systems typically hard-wire, because the linguistic motivation for making them is not well understood yet.
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
0
For the experiments, we use a simple preprocessing step.
In this paper, the authors argue that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
0
On several languages, we report performance exceeding that of state-of-the art systems.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
This model is easily incorporated into the segmenter by building a WFST restricting the names to the four licit types, with costs on the arcs for any particular name summing to an estimate of the cost of that name.
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
0
We suggest that in unlexicalized PCFGs the syntactic context may be explicitly modeled in the derivation probabilities.
The authors show that PATB is similar to other treebanks but that annotation consistency remains low.
0
We also mark all nodes that dominate an SVO configuration (containsSVO).
This paper presents a maximum entropy-based named entity recognizer (NER).
0
For example, if is found in the list of person first names, the feature PersonFirstName is set to 1.
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
Following the convention presented in earlier sections, we assume that each example is an instance pair of the form (x1,i, x2,i) where xj,i ∈ 2^Xj, j ∈ {1, 2}.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
It is a sequence of proper nouns within an NP; its last word Cooper is the head of the NP; and the NP has an appositive modifier (a vice president at S.&P.) whose head is a singular noun (president).
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective treebanks is often neglected because of the relative scarcity of problematic constructions.
0
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
For queries, we uniformly sampled 10 million hits and 10 million misses.
The instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
0
There has also been some work on adapting the word alignment model prior to phrase extraction (Civera and Juan, 2007; Wu et al., 2005), and on dynamically choosing a dev set (Xu et al., 2007).
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
0
Additionally, for a given coverage set, at most 250 different hypotheses are kept during the search process, and the number of different words to be hypothesized by a source word is limited.
Most IE researchers have been creating paraphrase knowledge by hand for specific tasks.
0
Instead, we focused on phrases and set the frequency threshold to 2, and so were able to utilize a lot of phrases while minimizing noise.
Here we show how non-projective dependency parsing can be achieved by combining a data driven projective parser with special graph transformation techniques.
0
More details on the parsing algorithm can be found in Nivre (2003).
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRSs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
We attempt to formalize this notion in terms of the tree pumping lemma, which can be used to show that a tree set does not have dependent paths.
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
0
Ties are rare in Bayes switching because the models are fine-grained — many estimated probabilities are involved in each decision.
The instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
0
Apart from MERT difficulties, a conceptual problem with log-linear combination is that it multiplies feature probabilities, essentially forcing different features to agree on high-scoring candidates.
The model draws on various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
In a more recent study than Chang et al., Wang, Li, and Chang (1992) propose a surname-driven, non-stochastic, rule-based system for identifying personal names. Wang, Li, and Chang also compare their performance with Chang et al.'s system.
The use of global features has shown excellent performance on MUC-6 and MUC-7 test data.
0
For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
These methods demonstrated the benefits of incorporating linguistic features using a log-linear parameterization, but require elaborate machinery for training.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
computing the recall of the other's judgments relative to this standard.
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
0
We concentrate on those sets.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
An analysis of nouns that occur in both the singular and the plural in our database reveals that there is indeed a slight but significant positive correlation-R2 = 0.20, p < 0.005; see Figure 6.
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
0
The search starts in hypothesis ({}, 0) and ends in the hypotheses ({1, ..., J}, j), with j ∈ {1, ..., J}.
All the texts were annotated by two people.
0
Again, the idea is that having a picture of syntax, co-reference, and sentence-internal information structure at one’s disposal should aid in finding models of discourse structure that are more explanatory and can be empirically supported.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
This way we cannot draw a distinction between system performance.
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
One obvious application is information extraction.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
One can expect famous names like Zhou Enlai's to be in many dictionaries, but names such as shi2ji1-lin2, the name of the second author of this paper, will not be found in any dictionary.
Their results show that their high-performance NER uses less training data than other systems.
0
of Articles No.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
The content does not necessarily reflect the views of the U.S. Government, and no official endorsement should be inferred.
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
For each cell, the first row corresponds to the result using the best hyperparameter choice, where best is defined by the 1-1 metric.
They focused on phrases which connect two Named Entities, and proceed in two stages.
0
Before explaining our method in detail, we present a brief overview in this subsection.
They plan on extending instance-weighting to other standard SMT components and capturing the degree of generality of phrase pairs.
0
This (Equation 7) is a somewhat less direct objective than that used by Matsoukas et al., who make an iterative approximation to expected TER.
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems.
0
For some language pairs (such as GermanEnglish) system performance is more divergent than for others (such as English-French), at least as measured by BLEU.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the inter-human agreement.
0
gao1xing4 'happy' => gao1gao1xing4xing4
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation.
0
72 78.
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
0
Quite often, though, these directives fulfill the goal of increasing annotator agreement without in fact settling the theoretical question; i.e., the directives are clear but not always very well motivated.
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform.
0
A few annotators suggested breaking up long sentences into clauses and evaluating these separately.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
Second, the reduced number of hidden variables and parameters dramatically speeds up learning and inference.
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
1 61.7 37.
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
0
HR0011-06-C-0022.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
Precision.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
A high-level relation is agent, which relates an animate nominal to a predicate.
There are clustering approaches that assign a single POS tag to each word type.
0
Then, token- level HMM emission parameters are drawn conditioned on these assignments such that each word is only allowed probability mass on a single assigned tag.
The corpus was annotated with different linguistic information.
0
Still, for both human and automatic rhetorical analysis, connectives are the most important source of surface information.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
Fortunately, we were able to obtain a copy of the full set of sentences from Chang et al. on which Wang, Li, and Chang tested their system, along with the output of their system. In what follows we will discuss all cases from this set where our performance on names differs from that of Wang, Li, and Chang.
Most IE researchers have been creating paraphrase knowledge by hand for specific tasks.
0
Table 2 (Domain, Link accuracy, WN coverage): CC: 73.3%, 2/11; PC: 88.9%, 2/8.
They plan on extending instance-weighting to other standard SMT components and capturing the degree of generality of phrase pairs.
0
Sentence pairs are the natural instances for SMT, but sentences often contain a mix of domain-specific and general language.
They focused on phrases which connect two Named Entities, and proceed in two stages.
0
Corpus → Step 1 (NE pair instances) → Step 2.
Here we show how non-projective dependency parsing can be achieved by combining a data driven projective parser with special graph transformation techniques.
0
While the proportion of sentences containing non-projective dependencies is often 15–25%, the total proportion of non-projective arcs is normally only 1–2%.
The texts were annotated with the RSTtool.
0
The choice of the particular newspaper was motivated by the fact that the language used in a regional daily is somewhat simpler than that of papers read nationwide.
There are clustering approaches that assign a single POS tag to each word type.
0
(2009).
They found replacing it with a ranked evaluation to be more suitable.
0
Out-of-domain test data is from the Project Syndicate web site, a compendium of political commentary.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
A cell in the bottom row of the parse chart is required for each potential whitespace boundary.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
A Brief Introduction to the Chinese Writing System Most readers will undoubtedly be at least somewhat familiar with the nature of the Chinese writing system, but there are enough common misunderstandings that it is as well to spend a few paragraphs on properties of the Chinese script that will be relevant to topics discussed in this paper.
They plan on extending instance-weighting to other standard SMT components and capturing the degree of generality of phrase pairs.
0
Although matching is done at the sentence level, this information is subsequently discarded when all matches are pooled.
In this paper, the authors argue that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
0
Our analysis identifies three key factors driving our performance gain: 1) selecting a model structure which directly encodes tag sparsity, 2) a type-level prior on tag assignments, and 3) a straightforward naïve Bayes approach to incorporate features.
This paper conducted research in the area of automatic paraphrase discovery.
0
shows some keywords with their scores.
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
For each co-occurrence relation (noun/caseframe for CFLex, and caseframe/caseframe for CFNet), BABAR computes its log-likelihood value and looks it up in the χ2 table to obtain a confidence level.
Here we present two algorithms.
0
The new algorithm, which we call CoBoost, uses labeled and unlabeled data and builds two classifiers in parallel.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
They are set to fixed constants.
Their results show that their high-performance NER uses less training data than other systems.
0
However, to classify a token, while Borthwick uses the tokens from two before to two after, we used only the previous token, the token itself, and the next token. Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
Sometimes, multiple words are needed, like “vice chairman”, “prime minister” or “pay for” (“pay” and “pay for” are different senses in the CC-domain).
This paper conducted research in the area of automatic paraphrase discovery.
0
The data is sorted based on the frequency of the context (“a unit of” appeared 314 times in the corpus) and the NE pair instances appearing with that context are shown with their frequency (e.g. “NBC” and “General Electric Co.” appeared 10 times with the context “a unit of”).
The resulting model is compact, efficiently learnable and linguistically expressive.
0
This design does not guarantee “structural zeros,” but biases towards sparsity.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
To quantize, we use the binning method (Federico and Bertoldi, 2006) that sorts values, divides into equally sized bins, and averages within each bin.